• nicbou 3 hours ago

    I have a little pipeline that monitors specific parts of specific webpages for a value. If it changes, it makes a pull request to update a constant in a file. Basically, "if the minimum wage changes, update MINIMUM_WAGE in constants.json".

    I use a tiny model (currently OpenAI, soon self-hosted Qwen) to extract values from raw text.

    This helps me maintain a growing collection of guides about German bureaucracy. I monitor about a hundred values. I aim to watch as many facts as possible that way.

    • Anon84 42 minutes ago

      That’s interesting, thank you.

    • sminchev 15 hours ago

      I have integrated it into two applications: (1) I use it to analyze behavior, sleep, and stillness patterns, and to detect frequently visited locations, based on recorded and provided data. It is a monitoring app for elderly people; (2) for suggestions, for parsing files, and as a database. The first two uses are clear. For the third, instead of building and maintaining a database, I call the AI to give me the information.

      Used LLMs: Gemini 2.5 Flash, ChatGPT 5

      • Anon84 14 hours ago

        Thank you, that’s helpful.

      • Leomuck 21 hours ago

        I'm a full-stack software dev, proficient in AI but also sceptical. I've found that staying away from the hype is key. Stop thinking "what could this do?" and instead look for cases where LLMs actually provide a benefit. I've seen so many projects throw LLMs at things that could have been solved deterministically.

        My personal opinion is: LLMs give you the power of language. Until now we could define rules based on structured data, but we couldn't process unstructured data that well. Now we can use LLMs to take any kind of input and either respond to it or transform it into structured data. That is a huge leap forward. But there are also a million cases where it's not necessary.
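        The "unstructured in, structured out" pattern usually boils down to: prompt for JSON, then parse and validate. A minimal sketch, where `call_llm` is a stand-in for whatever API you actually use (OpenAI, Anthropic, a local model) and the field names are invented:

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real API call; returns a canned reply
    # so the sketch runs on its own.
    return '{"name": "Max Mustermann", "iban": "DE89370400440532013000"}'

def extract_fields(text: str, fields: list[str]) -> dict:
    prompt = (
        "Extract the following fields from the text below and answer "
        f"with a single JSON object with keys {fields}.\n\n{text}"
    )
    reply = call_llm(prompt)
    data = json.loads(reply)  # raises if the model didn't return JSON
    missing = [f for f in fields if f not in data]
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data
```

        The validation step matters: treating the model's reply as untrusted input and checking it against a schema is what makes the structured-data use case reliable.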

        On the side, I'm working for an NGO focused on sustainable finance. They have a manually gathered database and lots of resources, but most users don't care enough to actually click through everything. So offering a chatbot to make that data available seemed reasonable. It works quite well, and still most requests are so trivial you could have just blocked them.

        In my paid job, I'm working for a German radio/TV broadcaster, and they're trying to use AI to solve simple internal user issues. It seems to work quite well. We've built a RAG system based on Qdrant and LlamaIndex, and it surfaces all available information in a format users couldn't find before, because the systems were chaotic and complicated. So in my book, that's a good use case: users in a very complicated environment with lots of information.
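        The retrieve-then-generate pattern behind such a RAG setup can be sketched with a toy keyword scorer standing in for Qdrant's vector search (the documents and function names here are invented for illustration, not the broadcaster's actual data):

```python
import re
from collections import Counter

# Toy corpus of internal help snippets (invented examples).
DOCS = [
    "To reset your VPN password, open the self-service portal and choose VPN.",
    "Printer queues are managed in the PrintCenter tool under Devices.",
    "New hires request studio access through the Facilities ticket form.",
]

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, k: int = 1) -> list[str]:
    # Crude keyword overlap; a real system ranks by vector similarity.
    q = tokens(query)
    return sorted(DOCS, key=lambda d: sum((q & tokens(d)).values()),
                  reverse=True)[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In the real system this prompt goes to an LLM; here we just
    # return the assembled prompt to show the shape of the pipeline.
    return f"Context:\n{context}\n\nQuestion: {query}"
```

        The point of the pattern is the same either way: the model only ever answers from the retrieved context, which is why it can surface information users couldn't find by browsing.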

        I've worked with the OpenAI API, the Anthropic API, Azure Foundry, local models, the IONOS Model Hub, etc. One thing that keeps coming up is privacy and (in Europe) GDPR compliance: use the capabilities of LLMs without sacrificing data that should not go into the next training round.

        Anyway, I think LLMs offer a lot of possibilities, but many people tackle them from the wrong side - "what could we do with this?" instead of "what problems do we need to solve?".

        • Anon84 18 hours ago

          Thank you for the thoughtful answer

        • sdevonoes 21 hours ago

          We are not. We are still making money and providing payslips for real humans. We are doing fine.