• matthewbauer 2 hours ago

    There's also https://github.com/manzaltu/claude-code-ide.el if you're just using claude code.

    I like that agent-shell just uses comint instead of a full vterm, but I find myself missing the deeper integration with Claude that claude-code-ide has. For example, with claude-code-ide you can define custom MCP tools that run Emacs commands.

    • ryanobjc 2 hours ago

      I've used chatgpt-shell, but I have since moved my LLM usage to gptel inside org-mode buffers. Every day I use org-roam-dailies-goto-today to make a new file and turn on gptel (the use of org-roam-dailies is 100% optional). Then I do my interactions with gptel in there, using top-level headings and setting topics to limit context.
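      For anyone curious, the daily-chat workflow above can be wired up with a tiny bit of Emacs Lisp. This is a minimal sketch, not my exact config; it assumes org-roam and gptel are installed, and the command name and keybinding are just illustrative:

      ```elisp
      ;; Sketch: jump to today's org-roam daily note and enable gptel there.
      ;; Assumes org-roam (with dailies configured) and gptel are installed.
      (defun my/daily-llm-chat ()
        "Open today's daily note and turn on gptel for LLM chat."
        (interactive)
        (org-roam-dailies-goto-today)   ; creates the file if it doesn't exist
        (gptel-mode 1))                 ; minor mode: send with C-c RET by default

      ;; Illustrative binding; pick whatever key you like.
      (global-set-key (kbd "C-c d") #'my/daily-llm-chat)
      ```

      With gptel's org support you can also set a topic property on a heading so only that subtree is sent as context.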

      I have 10 months of chats, and now I can analyze them. I even had claude code write me a program to do that: https://github.com/ryanobjc/dailies-analyzer - the use of gptel-mode lets me know which parts of the file are LLM output and which I typed in, via a header in the file.

      Keeping your own data as plain text has huge benefits. Having all my chats persist is good. It's all private. I could even store these chats in a file.gpg and Emacs will auto-encrypt/decrypt it. Gptel and the LLM only get the text straight out of Emacs, and know nothing about the encryption.
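      The transparent encryption comes from EasyPG, which ships with Emacs and kicks in for any file ending in .gpg. It's usually on by default; a sketch of making it explicit (the symmetric-encryption setting is optional):

      ```elisp
      ;; EasyPG Assistant: transparent encrypt/decrypt for *.gpg files.
      ;; Built into Emacs; normally enabled out of the box.
      (require 'epa-file)
      (epa-file-enable)

      ;; Optional: skip key selection and use a symmetric passphrase instead.
      (setq epa-file-select-keys nil)
      ```

      Visiting chats.org.gpg then prompts for the passphrase on open and re-encrypts on save; gptel just sees the decrypted buffer text.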

      I found this better than the 'shell'-type packages, since they don't always keep context, and they're ultimately less flexible than a file as an interaction buffer. I described how I have this set up here: https://gist.github.com/ryanobjc/39a082563a39ba0ef9ceda40409...

      All of this setup is 100% portable across every LLM backend gptel supports, which is basically all of them, including local models. With local models I could have a fully private and offline AI experience, whose quality depends on how large a model I can run.