• tines 10 hours ago

    So you have to be able to identify a priori what is and isn't a hallucination, right?

    • ares623 10 hours ago

      The oracle problem is solved. Just use an actual oracle.

      • happyPersonR 9 hours ago

        I guess the real question is how often you see the same class of hallucination? For something where you're using an LLM agent/workflow and running it repeatedly, I could totally see this being worthwhile.

        • makeavish 10 hours ago

          Yeah, reading the headline got me excited too. I thought they were going to propose some novel solution or use the recent research by OpenAI on reward function optimization.

          • esafak 9 hours ago

            It's rather cheeky to call it "real-time AI hallucination detection" when all they're doing is checking for invalid moves and playing twice. You don't even need real-time processing for this, do you?
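
            Concretely, that amounts to legality checking plus self-consistency, neither of which needs anything "real-time". A minimal sketch of the idea, assuming python-chess for move legality and a hypothetical query_llm call (not their actual code):

              import chess

              def is_hallucinated_move(board: chess.Board, move_uci: str) -> bool:
                  # A proposed move that isn't legal in the current position
                  # counts as a hallucination.
                  try:
                      move = chess.Move.from_uci(move_uci)
                  except ValueError:
                      return True  # not even syntactically a move
                  return move not in board.legal_moves

              def double_checked_move(query_llm, board: chess.Board) -> str | None:
                  # "Playing twice": sample the model twice and only accept
                  # the move if both samples agree and the move is legal.
                  first = query_llm(board.fen())
                  second = query_llm(board.fen())
                  if first != second or is_hallucinated_move(board, first):
                      return None  # flag as a likely hallucination
                  return first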

        • uncomputation 9 hours ago

          There’s more generalizable recent work on this, for those expecting more: https://github.com/leochlon/hallbayes

          • Zeik 6 hours ago

            I didn’t quite understand the point of the claims at the end of the page. Surely autonomous cars or health/banking services don’t use language models for anything important. Everyone knows those hallucinate. ML is a much better alternative.

            • yunwal 5 hours ago

              is this satire?