• gslepak 5 hours ago

    This doesn't seem to use local LLMs... so it's not really local. :-\

    Is there a deep searcher that can also use local LLMs like those hosted by Ollama and LM Studio?

    • drdaeman 5 hours ago

      Looking at the code (https://github.com/zilliztech/deep-searcher/blob/master/deep...), I think it may well work at least with Ollama without any additional tweaks if you run it with `OPENAI_BASE_URL=http://localhost:11434/v1` or define `provide_settings.llm.base_url` in `config.yaml` (https://github.com/zilliztech/deep-searcher/blob/6c77b1e5597...) and set the model appropriately.
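
      If you want to sanity-check that before wiring it into the project, here's a minimal sketch (not deep-searcher code, and `llama3.2` is just a placeholder for whatever model you have pulled) that talks to Ollama's OpenAI-compatible endpoint directly:

        import os
        from openai import OpenAI

        # Ollama serves an OpenAI-compatible API under /v1; the API key is ignored,
        # but the client library requires a non-empty string.
        os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:11434/v1")
        client = OpenAI(base_url=os.environ["OPENAI_BASE_URL"], api_key="ollama")

        resp = client.chat.completions.create(
            model="llama3.2",  # any model pulled via `ollama pull`
            messages=[{"role": "user", "content": "Reply with the single word: OK"}],
        )
        print(resp.choices[0].message.content)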

      From a quick glance, this project doesn't seem to use tool/function calling, streaming, format enforcement, or any other "fancy" API features, so chances are it may just work, although I have some reservations about the quality, especially with smaller models.

      • phantompeace 4 hours ago

        I’ve been having issues parsing the LLM responses using Ollama with llama3.2, deepseek-r1:7b, and mistral-small. I think the lack of structured output/schema enforcement is hurting it here.

        • drdaeman 4 hours ago

          Yep, I haven't tried this particular project, but that's my overall experience with similar projects as well. Smaller models that can be run locally in compute-poor environments really need structured outputs, and just prompting them with "you can ONLY return a python list of str, WITHOUT any other additional content" (a piece of prompt from this project) is nowhere near sufficient for any resemblance of reliability.

          If you're feeling adventurous, you can probably refactor the prompt functions in https://github.com/zilliztech/deep-searcher/blob/master/deep... to return additional metadata (required output structure) together with the prompt itself, update all `llm.chat()` calls throughout the codebase to account for this (probably changing the `chat` method API by adding an extra `format` argument and not just `messages`) and implement a custom Ollama-specific handler class that would pass this to the LLM runner. Or maybe task some of those new agentic coding tools to do this, since it looks like a mostly mechanical refactoring that doesn't require a lot of thinking past figuring out the new API contract.
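
          To make that concrete, here is a rough, hypothetical sketch of what such an Ollama-specific handler could look like. The class and schema below are made up for illustration (not deep-searcher's actual API), and it assumes an Ollama version new enough to accept a JSON schema in the `format` field:

            import json
            import requests

            class OllamaChat:
                """Illustrative wrapper: a chat() that accepts an optional output schema."""

                def __init__(self, model="llama3.2", base_url="http://localhost:11434"):
                    self.model = model
                    self.base_url = base_url

                def chat(self, messages, format=None):
                    # `format` may be None, "json", or a JSON schema dict (Ollama >= 0.5).
                    payload = {"model": self.model, "messages": messages, "stream": False}
                    if format is not None:
                        payload["format"] = format
                    r = requests.post(f"{self.base_url}/api/chat", json=payload, timeout=300)
                    r.raise_for_status()
                    return r.json()["message"]["content"]

            # Instead of "you can ONLY return a python list of str", pair the prompt with
            # a schema and parse the result deterministically.
            schema = {
                "type": "object",
                "properties": {"queries": {"type": "array", "items": {"type": "string"}}},
                "required": ["queries"],
            }
            llm = OllamaChat()
            raw = llm.chat(
                [{"role": "user", "content": "Split 'compare Milvus and FAISS' into sub-queries."}],
                format=schema,
            )
            print(json.loads(raw)["queries"])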

• parhamn 6 hours ago

    I think the magic of Grok's implementation of this is that they already have most of the websites cached (guessing via their Twitter crawler), so it all feels very snappy. Bing/Brave Search don't seem to offer that in their search APIs. Does such a thing exist as a service?

    • binarymax 5 hours ago

      Web search APIs can't present the full document due to copyright. They can only present the snippet contextual to the query.

      I wrote my own implementation using various web search APIs and a puppeteer service to download individual documents as needed. It wasn't that hard, but I do get blocked by some sites (Reddit, for example).
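
      If you want to roll something similar, the pattern is roughly the sketch below. It uses Playwright's Python bindings as a stand-in for Puppeteer, purely for illustration, with deliberately crude error handling:

        from playwright.sync_api import sync_playwright

        def fetch_full_text(urls):
            """Fetch the full text of each search-result URL with a headless browser."""
            pages = {}
            with sync_playwright() as p:
                browser = p.chromium.launch(headless=True)
                page = browser.new_page()
                for url in urls:
                    try:
                        page.goto(url, timeout=15_000, wait_until="domcontentloaded")
                        pages[url] = page.inner_text("body")  # crude text extraction
                    except Exception as exc:
                        pages[url] = f"<failed or blocked: {exc}>"  # e.g. reddit
                browser.close()
            return pages

        # `urls` would come from a web search API's result list, not be hardcoded.
        print(fetch_full_text(["https://example.com"]))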

      • parhamn 3 hours ago

        Is this true? Wouldn't all the "site to markdown" type services be infringing then?

    • tekacs 6 hours ago

      I’ve been wondering about this and searching for solutions too.

      For now we’ve just managed to optimize how quickly we download pages, but haven’t found an API that actually caches them. Perhaps companies are concerned that they’ll be sued for it in the age of LLMs?

      The Brave API provides ‘additional snippets’, meaning that you at least get multiple slices of the page, but it’s not quite a substitute.

      • fragmede 6 hours ago

        The Common Crawl dataset is rather massive, though I can't speak to how well it would perform here.

        http://commoncrawl.org

• Daniel_Van_Zant 7 hours ago

    Have been searching for a deep research tool that I can hook up to both my personal notes (in Obsidian) and the web, and it looks like this has those capabilities. Now the only piece left is to figure out a way to export the deep research outputs back into my Obsidian somehow.

    • jianc1010 7 hours ago

      Sometimes I want to do a little coding to automate things with my personal productivity tools, so I feel that the programmatic interface an open-source implementation like this provides is very convenient.

• vineyardmike 7 hours ago

    I’m curious how this compares to the open-source version made by HuggingFace [1]. As far as I can tell, the HF version uses reasoning LLMs to search/traverse and parse the web and gather results, then evaluates the results before eventually synthesizing an answer.

    This version appears to show off a vector store for documents generated from a web crawl (the writer works for a vector-store-as-a-service company).

    [1] https://github.com/huggingface/smolagents/tree/main/examples...

    • stefanwebb 3 hours ago

      There are quite a few differences between HuggingFace's Open Deep-Research and Zilliz's DeepSearcher.

      I think the biggest one is the goal: HF's is to replicate the performance of Deep Research on the GAIA benchmark, whereas ours is to teach agentic concepts and show how to build research agents with open-source tools.

      Also, we go into the design in a lot more detail than HF's blog post. On the design side, HF uses code writing and execution as a tool, whereas we use prompt writing and calling as a tool. We do an explicit breakdown of the query into sub-queries, sub-sub-queries, and so on, whereas HF uses a chain of reasoning to decide what to do next.

      I think ours is a better approach for producing a detailed report on an open-ended question, whereas HF's is better for answering a specific, challenging question in short form.
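
      In rough pseudocode-ish Python, the decomposition loop looks something like this (heavily simplified for illustration, not the actual DeepSearcher code; the helper names are made up):

        def research(question, llm, retriever, max_depth=2):
            """Illustrative only: decompose, retrieve, look for gaps, recurse, synthesize."""
            findings = []
            sub_queries = llm.decompose(question)              # hypothetical helper
            for _ in range(max_depth):
                next_round = []
                for q in sub_queries:
                    hits = retriever.search(q)                 # e.g. a vector search over the corpus
                    findings.append((q, hits))
                    # The LLM decides whether this sub-query leaves gaps worth splitting further.
                    next_round.extend(llm.find_gaps(q, hits))  # hypothetical helper
                if not next_round:
                    break
                sub_queries = next_round
            return llm.synthesize(question, findings)          # write the final report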

• fuddle 8 hours ago

    Considering all the major AI companies have basically created the same deep research product, it would make sense for them to focus on a shared open-source platform instead.

• bilater 6 hours ago

    Nice - I like people's different twists on Deep Research. Here is mine... with Flow I'm trying a new workflow.

    https://github.com/btahir/open-deep-research

• zitterbewegung 6 hours ago

    I actually tried using this, ran into some issues, and had to replace the OpenAI text embeddings with the MilvusEmbedding.

    https://gist.github.com/zitterbewegung/086dd344d16d4fd4b8931...

    The QuickStart had a good response. [1] https://gist.github.com/zitterbewegung/086dd344d16d4fd4b8931...

• cma 7 hours ago

    Cloudflare is going to ruin self-hosted things like this and force centralization to a few players. I guess we'll need decentralized efforts to scrape the web and be able to run it on that.

• redskyluan 7 hours ago

    Amazing!

    Search is not a problem. What to search is!

    Using a reasoning model, it is much easier to split up the task and focus on what to search.

    • gnatnavi 2 hours ago

      +1. Asking the right questions is always the most difficult thing to do.