Recursive LLM prompts (github.com)
Submitted by vlan121 3 months ago
  • mertleee 3 months ago

    "Foundational AI companies love this one trick"

    It's part of why they love agents and tools like Cursor -> it turns a problem that could've been one prompt and a few hundred tokens into dozens of prompts and thousands of tokens ;)

    • danielbln 3 months ago

      It'd be nice if I could solve any problem by speccing it out in its entirety and then just implementing it. In reality, I have to iterate and course-correct, as do agentic flows. You're right that the AI labs love it, though; iterating like that is expensive.

    • ivape 3 months ago

      The bigger picture goal here is to explore using prompts to generate new prompts.

      I see this as the same as a reasoning loop. This is the approach I use to quickly code up pseudo reasoning loops on local projects. Someone asked in another thread, "how can I get the LLM to generate a whole book?" Well, just like this: if it can keep prompting itself with "what would chapter N be?" until it hits "THE END", you get your book.
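
      A minimal sketch of that loop, purely illustrative: the model name, prompt wording, and the "THE END" stop marker are assumptions, not taken from the linked repo.

          # Hypothetical "keep prompting until THE END" loop.
          # Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
          from openai import OpenAI

          client = OpenAI()

          def write_book(premise, max_chapters=50):
              chapters = []
              for n in range(1, max_chapters + 1):
                  # Each pass folds the previous output back into the next prompt.
                  prompt = (
                      f"Premise: {premise}\n"
                      "Chapters so far:\n" + "\n---\n".join(chapters) +
                      f"\n\nWrite chapter {n}. If the story is finished, "
                      "reply with exactly: THE END"
                  )
                  reply = client.chat.completions.create(
                      model="gpt-4o-mini",  # placeholder model name
                      messages=[{"role": "user", "content": prompt}],
                  ).choices[0].message.content.strip()
                  if reply == "THE END":
                      break
                  chapters.append(reply)
              return chapters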

      • 2099miles 3 months ago

        ^

      • danielbln 3 months ago

        The last commit is from April 2023, should this post maybe have a (2023) tag? Two years is eons in this space.

        • gwintrob 3 months ago

          Crazy that OpenAI only launched o1 in September 2024. Some of these ideas have been swirling for a while but it feels like we're in a special moment where they're getting turned into products.

          • mentalgear 3 months ago

            Well, I remember Chain of Thought being proposed as early as the GPT-3 release (2 years before ChatGPT).

          • vlan121 3 months ago

            I had a different title. It was somehow changed to the name of the repository.

            • jdnier 3 months ago

              The author is a co-founder of Databricks and the creator of the K Prize, so an early adopter.

            • kordlessagain 3 months ago

              I love this! My take on it for MCP: https://github.com/kordless/EvolveMCP

              • K0balt 3 months ago

                This is kind of like a self-generating agentic context... cool. I think regular agents, especially adversarial agents, are easier to keep focused on most types of problems, though.

                Still clever.

                • James_K 3 months ago

                  I feel that getting LLMs to do things like mathematical problems or citations is often much harder than simply writing software to achieve the same task.

                  • mentalgear 3 months ago

                    Trying to save state in a non-deterministic system is not the best idea. That kind of thing needs to be externalised.
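
                    A toy sketch of what "externalised" could look like here; the names are made up for illustration. The counter and accumulated results live in the harness, and the model is only ever asked for the next step.

                        # Hypothetical: loop state lives in ordinary Python
                        # variables, not inside the model's context.
                        # llm_call is any function str -> str.
                        def next_step(llm_call, state):
                            prompt = f"State: {state}. Return only the next result."
                            state["results"].append(llm_call(prompt))
                            state["n"] += 1
                            return state

                        state = {"n": 0, "results": []}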

                    • seeknotfind 3 months ago

                      Excellent fun. Now just to create a prompt to show iterated LLMs are Turing complete.

                      • ivape 3 months ago

                        Let's see Paul Allen's prompt.

                      • NooneAtAll3 3 months ago

                        LLM quine when?

                      • mentalgear 3 months ago

                        Should definitely get a date tag.

                        • vlan121 3 months ago

                          I was leaving this one out; it seems like a gag when you read it :D