The concept of recursive LLM prompts, which dates back to at least April 2023, has evolved from an academic exploration into a foundational technique behind several commercial AI products. As the community discussion makes clear, theoretical ideas are being turned into practical applications at a remarkable pace.
From Theory to Practice
What began as an experimental approach to implement recursion using English as a programming language and LLMs as runtime environments has now become integral to modern AI systems. The technique involves creating prompts that generate slightly updated versions of themselves, effectively maintaining state between iterations while working toward a solution. As one commenter noted:
> I see this as the same as a reasoning loop. This is the approach I use to quickly code up pseudo reasoning loops on local projects. Someone had asked in another thread how can I get the LLM to generate a whole book, well, just like this.
This practical application highlights how recursive prompting has moved beyond theoretical interest to become a genuine development technique.
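To make the mechanism concrete, here is a minimal sketch in Python of such a reasoning loop. The names (`call_llm`, `run_recursive`) are illustrative, not from the original post, and `call_llm` is a stub standing in for a real model API: it simply decrements a counter embedded in the prompt, mimicking the canonical "count down to zero" recursive prompt in which the model is asked to emit an updated copy of its own prompt.

```python
import re

def call_llm(prompt: str) -> str:
    """Stub for a real LLM API call. It returns the prompt with its
    embedded counter decremented, or DONE once the counter is zero --
    mimicking a model that replies with an updated copy of the prompt."""
    n = int(re.search(r"counter=(\d+)", prompt).group(1))
    if n == 0:
        return "DONE"
    return prompt.replace(f"counter={n}", f"counter={n - 1}")

def run_recursive(prompt: str, max_iters: int = 50) -> str:
    """Feed each output back in as the next prompt until the model
    signals completion. All state lives inside the prompt text itself,
    which is what lets plain English act as the 'program'."""
    for _ in range(max_iters):
        out = call_llm(prompt)
        if out == "DONE":
            return out
        prompt = out  # the updated prompt carries the new state
    raise RuntimeError("no termination within iteration budget")

result = run_recursive(
    "Reply DONE once the counter hits zero; otherwise reply with this "
    "exact prompt, counter decremented by one. counter=3"
)
print(result)
```

Swapping the stub for a real API call yields the pattern the commenter describes: each iteration's output (a chapter written, a step of reasoning completed, a counter advanced) becomes the next iteration's input.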
*Figure: An interface exemplifying how recursive prompting can solve mathematical problems through structured reasoning.*
Commercial Adoption and Token Economics
The community discussion reveals an interesting economic dimension to recursive prompting. Several commenters pointed out that AI companies have strong financial incentives to promote agent-based approaches and tools that leverage recursive prompting, as they significantly increase token usage. What could be accomplished in one prompt and a few hundred tokens often becomes dozens of prompts and thousands of tokens when implemented as recursive systems.
This observation comes at a particularly relevant time, with commenters noting that OpenAI did not launch o1, its reasoning-focused model, until September 2024, despite these ideas having been explored for years. The gap between concept development and commercial implementation demonstrates how rapidly the field is evolving.
Limitations and Alternatives
Despite the enthusiasm, the community remains pragmatic about the limitations of using LLMs for certain tasks. Mathematical problems and citation work, for example, are often highlighted as areas where purpose-built software might be more efficient than LLM-based approaches. This practical perspective suggests that while recursive prompting opens new possibilities, it isn't always the optimal solution.
The discussions also touch on more experimental concepts, such as creating LLM quines (self-replicating programs) and proving that iterated LLMs are Turing complete, indicating that the theoretical exploration of these techniques continues alongside their commercial applications.
As recursive prompting techniques mature from academic curiosities into commercial products, we're witnessing the practical implementation of ideas that seemed purely theoretical just two years ago. The speed of this evolution underscores how quickly AI capabilities are advancing and being monetized, even as researchers continue to explore their theoretical limits and practical applications.
Reference: Recursive LLM prompts
*Figure: Terminal output showing numerical results from LLM experiments, highlighting the practical limits of such models in mathematical tasks.*