• Txmm 2 hours ago

    Reassuring to see this approach coming out consistently. I’ve been doing the same for high-volume data pipelines: extracting the deterministic actions from markdown instructions and leaving the LLM to do the analysis and act as the fluid coupling between the deterministic parts.

    Over time you can refine this to be more and more codified: handle edge cases with agents/LLMs, then turn those into first-class deterministic branches too.

    This pattern seems to be emerging everywhere; using chain of thought and intent capture to improve it seems to be the next big thing.
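
    The deterministic-core / LLM-coupling split being described might look roughly like this minimal Python sketch (`llm_classify` is a hypothetical stand-in for the actual model call; everything else is deterministic):

```python
# Deterministic steps are plain functions; the LLM is only the
# "fluid coupling" that routes records between them.

def parse_record(raw: str) -> dict:
    # Deterministic: fixed parsing rules, no model involved.
    key, _, value = raw.partition("=")
    return {"field": key.strip(), "value": value.strip()}

def llm_classify(record: dict) -> str:
    # Hypothetical LLM call: decides which deterministic branch
    # handles this record. Stubbed with a trivial rule here; in a
    # real pipeline this is the only non-deterministic step, and
    # its output is constrained to a known set of labels.
    return "numeric" if record["value"].isdigit() else "text"

HANDLERS = {
    "numeric": lambda r: {**r, "value": int(r["value"])},
    "text":    lambda r: {**r, "value": r["value"].lower()},
}

def pipeline(raw: str) -> dict:
    record = parse_record(raw)
    label = llm_classify(record)      # fluid coupling
    return HANDLERS[label](record)    # first-class deterministic branch

print(pipeline("revenue = 1200"))
```

    The codification step described above is then just promoting whatever the LLM handles ad hoc into new entries in `HANDLERS`.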

    • rossjudson 6 hours ago

      From a systems engineering standpoint, the purpose of LLMs is to construct, verify, and "push down" abstractions and deterministic layers. Deterministic layers are able to cope reliably with the law of medium numbers.

      • eddiehammond 6 hours ago

        Anthropic published a profile on what we're building at Kepler. Sharing because the architectural argument (LLM for intent, deterministic code for retrieval and computation, every number traceable to source) is the part I'd actually want HN to push on. Happy to answer questions in the thread.

        • jochem9 4 hours ago

          I'm on a very similar train. You cannot dump all the data into an LLM (for many reasons), and we also already have clearly defined rules that an LLM doesn't have to figure out.

          So keep organizing data (LLM-powered, of course) so that you can query it as usual (multi-modal, so not just graphs, but also time series, relational, etc.). Feed that to deterministic computations. Let an LLM reason about the outcomes.

          Give the LLM the freedom to orchestrate the retrieval and computations. Make sure the way it orchestrates it is auditable.
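
          The orchestrate-and-audit idea could be sketched like this (the tool registry and the hardcoded plan are illustrative assumptions; in a real system the plan would come from the LLM):

```python
# Sketch: the LLM proposes a plan (a list of named tool calls over a
# fixed registry); execution is deterministic and every step is logged,
# so the orchestration is auditable after the fact.

TOOLS = {
    "fetch_series": lambda name: {"revenue": [100, 120, 150]}[name],
    "growth": lambda series: (series[-1] - series[0]) / series[0],
}

def run_plan(plan, audit_log):
    result = None
    for tool, arg in plan:
        result = TOOLS[tool](arg if arg is not None else result)
        audit_log.append((tool, arg, result))  # full trace of what ran
    return result

# In a real system this plan would come from the LLM; here it is fixed.
plan = [("fetch_series", "revenue"), ("growth", None)]
log = []
print(run_plan(plan, log))   # 0.5
print(log)
```

          The audit log, not the model's prose, is what you replay and verify.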

          The key thing I want to achieve goes beyond this system: I want to uncover hidden things in the system (missing from the ontology, computations, etc.) and propose adding them. This will effectively give you a generic approach for creating ever-evolving systems that align with reality while being fully auditable.

          • eddiehammond 3 hours ago

            The last part we're very excited by too: using orchestration logs and failure traces to surface gaps in the ontology and propose extensions. Early days, but that's where the architecture compounds: the system gets more complete every time it's used.

          • bjelkeman-again 6 hours ago

            Very interesting. What size team does it take to build this, incl. analysts, project managers, product managers, etc.? How long did you spend in analysis before building, and then how long to the first customer using it?

            • saadatq 5 hours ago

              Could I get a link to the Kepler finance site? Googling "Kepler financial" yields 5-6 other finserv companies.

              • eddiehammond 3 hours ago

                Yep! kepler.ai. We're working on improving SEO here; it's a popular name.

            • hweaHG 5 hours ago

              The people who built this were at Palantir before. How is the verifiable targeting of girls' schools in Iran by the Claude-powered Maven system going?

              We are living in an age of hot air.

              • eddiehammond 3 hours ago

                Mandatory pitch: if working on this kind of problem is interesting to you, we're hiring! jobs.ashbyhq.com/kepler-ai

                • hbcondo714 3 hours ago

                  > Indexed 26M+ SEC filings

                  But the https://kepler.ai website says 10M+

                  • eddiehammond 3 hours ago

                    Good catch! The site was stale; updated it to reflect the 26M+.

                    • hbcondo714 2 hours ago

                      Not to be picky, but the careers page still says "Live in production. 10M+ SEC filings":

                      https://jobs.ashbyhq.com/kepler-ai

                      I just wanted to learn more about the company, but I reside in California and the open roles are in New York.

                      • pugio 2 hours ago

                        This interaction was a delightful example of life in 2026: the disparity between what AI can do and what (and how) we actually use it for. (Which I like to term for myself "Phenomenal cosmic powers!... Itty bitty living space.")

                    • HoyaSaxa 4 hours ago

                      The title is misleading. They achieved a 94% accuracy rate, which in financial services is a far cry from acceptable without a human-in-the-loop verifier.

                      • hansmayer 6 hours ago

                        > The duo’s answer was to build deterministic infrastructure that serves as a trust and verification layer for AI.

                        On the one hand, very encouraging to see plain old deterministic infra w/o using slop machines.

                        On the other hand, this is a recognition that LLMs are just additional friction in the system that we would be better off without in the first place!

                        • bjelkeman-again 6 hours ago

                          Just friction? What do you mean? What would you do instead?

                          • hansmayer 5 hours ago

                            Well... You have a 'tool' that you cannot trust, present everywhere due to the unholy alliance between the LLM companies and the exhilarated office worker cretins who "use" them to do "workflows". Now they fuck up stuff. Sounds like friction to me, or do you value LLMs as a net positive? Why should I do something to fix their problems instead?

                          • SpicyLemonZest 5 hours ago

                            You're misunderstanding something about the problem space they're describing. The deterministic infra is for an underlying "execution layer"; the LLMs are providing utility by figuring out how to express English language queries in terms of the primitives of that verifiable layer. That way, you can describe your results deterministically even though the process of arriving at them was not necessarily deterministic.
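
                            A minimal sketch of that split (the `llm_translate` stub and the query shape are hypothetical assumptions): the English question is mapped once to a structured query over known primitives, and everything after that is deterministic and replayable:

```python
# Sketch: the LLM's only job is to turn an English question into a
# structured query over known primitives. The query, not the English,
# is what gets executed, so the result is deterministic and checkable.

DATA = {("revenue", "2023"): 120, ("revenue", "2024"): 150}

def llm_translate(question: str) -> dict:
    # Hypothetical LLM step, stubbed with a fixed answer here.
    # Its output is constrained to the primitives the layer knows.
    return {"metric": "revenue", "period": "2024"}

def execute(query: dict) -> int:
    # Deterministic layer: same query in, same number out, every time.
    return DATA[(query["metric"], query["period"])]

query = llm_translate("What was revenue in 2024?")
print(query, "->", execute(query))
```

                            The translation step may be non-deterministic, but the emitted query is an auditable artifact you can inspect, replay, and verify independently of the model.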

                            • hansmayer 5 hours ago

                              Oh. I may have misread indeed. So it's like, still LLM bullshit, but with really strongly worded .md instruction files begging them to please be correct?

                              • SpicyLemonZest 5 hours ago

                                No. The point of the verification layer is that you don't have to beg the LLM to please be correct.