• ZeroGravitas a day ago

    I was trying this the other day with opencode and ollama and it all seemed kind of broken/useless.

    I don't understand the two-LLM approach plus Telegram here?

    It seems like the bot is both creating the Telegram interface and using it as a coding assistant?

    • advanced-stack a day ago

      It's mostly a test of the coding capabilities of the 9B model. The 0.8B is used to make the Telegram bot smarter than an if/then/else.
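
      A minimal sketch of that 0.8B/9B split, assuming both models sit behind an OpenAI-compatible local endpoint (LM Studio, llama-server, and Ollama all expose one); the model names, port, and routing prompt below are placeholders rather than the setup from the post:

          # Hypothetical routing sketch: the small model classifies each Telegram
          # message, the large model only handles actual coding requests.
          from openai import OpenAI

          client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
          ROUTER_MODEL = "small-0.8b"   # placeholder model names
          CODER_MODEL = "coder-9b"

          def route(message: str) -> str:
              # Let the 0.8B model decide, instead of an if/then/else chain.
              resp = client.chat.completions.create(
                  model=ROUTER_MODEL,
                  messages=[
                      {"role": "system", "content": "Reply with one word: CODE if the "
                          "user wants code written or fixed, CHAT otherwise."},
                      {"role": "user", "content": message},
                  ],
              )
              return resp.choices[0].message.content.strip().upper()

          def handle(message: str) -> str:
              model = CODER_MODEL if route(message) == "CODE" else ROUTER_MODEL
              resp = client.chat.completions.create(
                  model=model, messages=[{"role": "user", "content": message}]
              )
              return resp.choices[0].message.content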

      I find LM Studio more usable for local setups (desktop/laptop), and I would use the llama.cpp stack directly for a (local) server deployment.
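
      For the server-deployment side, a minimal sketch of driving the llama.cpp stack directly through its llama-cpp-python binding; the model path and parameters are placeholders:

          # Hypothetical: load a GGUF model with llama-cpp-python,
          # i.e. the llama.cpp stack without LM Studio or Ollama in front of it.
          from llama_cpp import Llama

          llm = Llama(
              model_path="./models/coder-9b-q4_k_m.gguf",  # placeholder path
              n_ctx=4096,        # context window
              n_gpu_layers=-1,   # offload every layer to the GPU if one is available
          )

          result = llm.create_chat_completion(
              messages=[{"role": "user", "content": "Reverse a string in Python."}]
          )
          print(result["choices"][0]["message"]["content"])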

      • ZeroGravitas 2 hours ago

        I revisited this, and running the models directly via ollama run was actually surprisingly fast.

        A bug or misconfiguration with the connection to opencode seemed to be the culprit.
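
        As a quick way to separate the model from the opencode connection, a direct call through the Ollama Python client gives a baseline; the model name below is a placeholder:

            # Hypothetical sanity check: talk to the model through Ollama directly,
            # bypassing opencode, to see whether the model itself is the slow part.
            import ollama

            response = ollama.chat(
                model="some-9b-model",  # placeholder, not the model from the post
                messages=[{"role": "user", "content": "Write a FizzBuzz in Python."}],
            )
            print(response["message"]["content"])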
