There's multiple fundamental problems people need to be aware of.
- LLM's are typically pre-trained on 4k text tokens and then extrapolated out to longer context windows (it's easy to go from 4000 text tokens to 4001). This is not possible with images due to how they're tokenized. As a result, you're out of distribution - hallucinations become a huge problem once you're dealing with more than a couple of images.
- Pdf's at 1536 × 2048 use 3 to 5X more tokens than the raw text (ie higher inference costs and slower responses). Going lower results in blurry images.
- Images are inherently a much heavier representation in raw size too, you're adding latency to every request to just download all the needed images.
Their very small benchmark is obviously going to outperform basic text chunking on finance docs heavy with charts and tables. I would be far more interested in seeing an OCR step added with Gemini (which can annotate images) and then comparing results.
An end to end image approach makes sense in certain cases (like patents, architecture diagrams, etc) but it's a last resort.
I think it would be good to combine traditional OCR with an LLM to fix up mistakes and add diagram representations - LLMs have the problem of just inventing plausible-sounding text when they can't read it, which is worse than just garbling the result. For instance, GPT-4.1 worked perfectly with a screenshot of your comment at 1296 × 179, but if I zoom out to 50% and give it a 650 × 84 screenshot instead, the result is:
"There's multiple fundamental problems people need to be aware of. - LLM's are typically pre-trained on text tokens and then extrapolated out to longer context windows (it's easy to go from 4000 text tokens to 4001). This is not possible with images due to how they're tokenized. As a result, you're out of distribution - hallucinations become a huge problem once you're dealing with more than a couple of images. - A PNG at 512x 2048 is 3.5k more tokens than the raw text (so higher inference costs and slower responses). Going lower results in blurry images. - Images are inherently a much heavier representation in raw size too, you're adding latency to every request to just download all the needed images.
Their very small benchmark is obviously going to outperform basic text chunking on finance docs heavy with charts and tables. I would be far more interested in seeing an OCR step added with Gemini (which can annotate images) and then comparing results.
An end to end image approach makes sense in certain cases (like patents, architecture diagrams, etc) but it's a last resort."
It mostly gets it right but notice it changes "Pdf's at 1536 × 2048 use 3 to 5X more tokens" to "A PNG at 512x 2048 is 3.5k more tokens".
True, but modern models such as Gemma 3 use pan-and-scan and other tricks such as training at multiple resolutions, which do alleviate these issues.
An interesting property of the Gemma 3 family is that increasing the input image size actually does not increase processing memory requirements, because a second-stage encoder compresses it into a fixed number of tokens. Very neat in practice.
You can add OCR with Gemini, and presumably that would lead to better results than the OCR model we compared against. However, it's important to note that then you're guaranteeing that the entire corpus of documents you're processing will go through a large VLM. That can be prohibitively expensive and slow.
Definitely trade-offs to be made here; we found this to be the most effective in most cases.
VLMs capable of parsing images with high fidelity are 10-50X cheaper than the frontier models. Any savings from not parsing are quickly going to be wiped out if someone has any actual traffic. Not to mention the massive hits to long-context accuracy and latency.
That's what their document parse product is for. I think people feed things to an LLM sometimes and sure it might work but it could also be the wrong tool for the job. Not everything needs to run through the LLM.
LLMs are exactly the tool to use when other parsing methods fail due to poor formatting. AI is for the fuzzy cases.
This makes sense, but is there something to be gained by shaking up the RAG pipeline? Perhaps you could take each RAG result and do a model processing step asking it to extract the information from the image that directly pertains to the user query, once per result, and then aggregate those (text) results as the input to your final generation. That would sidestep the token limit for multiple images and allow parallelizing the image-understanding step.
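A minimal sketch of that flow, assuming the OpenAI Python SDK and a vision-capable model (the model name and prompts are placeholders, not anything from the article):

    import base64
    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI  # assumed SDK; any VLM API with image inputs works

    client = OpenAI()

    def extract_from_page(image_path: str, user_query: str) -> str:
        """Ask a vision model to pull out only what's relevant to the query from one retrieved page."""
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f"Extract only the information relevant to: {user_query}"},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    def answer(user_query: str, retrieved_pages: list[str]) -> str:
        # One extraction call per retrieved page, run in parallel.
        with ThreadPoolExecutor() as pool:
            notes = list(pool.map(lambda p: extract_from_page(p, user_query), retrieved_pages))
        # The final generation sees only text, so the multi-image limit never applies.
        final = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                       f"Question: {user_query}\n\nNotes from retrieved pages:\n" + "\n---\n".join(notes)}],
        )
        return final.choices[0].message.content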
Context window extrapolation should work with hierarchical/multi-scale tokenization of images, such as Haar wavelets
Some colleagues and I implemented exactly this six months ago for a French government agency.
It's open source and available here: https://github.com/jolibrain/colette
It's not our primary business, so it's just sitting there and we don't advertise it much, but it works, and with some tweaks it can be made really efficient.
The true genius, though, is that the whole thing can be made fully differentiable, unlocking the ability to fine-tune the visual RAG on targeted datasets.
The layout model can also be customized for fine grained document understanding.
You don't have a license in your repository top-level. That means that nobody who takes licensing at all seriously can use your stuff, even just for reference.
Good catch, will add it tomorrow. License is Apache2.
They do have: https://github.com/jolibrain/colette/blob/main/pyproject.tom...
I agree it's better to have the full licence at top level, but is there a legal reason why this would be inadequate?
Standard practice now is to just have an LLM read the whole repo and write a new original version in a different language. It’s code laundering.
Great, thanks for sharing your code. Could you please add a license so I and others can understand if we're able to use it?
Yeah the fine tuning is definitely the best part.
Often, the blocker becomes high quality eval sets (which I guess always is the blocker).
Hey we've done a lot of research on this side [1] (OCR vs direct image + general LLM benchmarking).
The biggest problem with direct image extraction is multipage documents. We found that single-page extraction (OCR => LLM vs image => LLM) slightly favored the direct image extraction. But anything beyond 5 images had a sharp fall-off in accuracy compared to OCR-first.
Which makes sense, long context recall over text is already a hard problem, but that's what LLMs are optimized for. Long context recall over images is still pretty bad.
That's an interesting point. We've found that for most use cases, over 5 pages of context is overkill. Having a small LLM conversion layer on top of images also ends up working pretty well (i.e. instead of direct OCR, passing batches of 5 images - if you really need that many - to smaller vision models and having them extract the most important points from the document).
We're currently researching surgery on the cache or attention maps for LLMs to have larger batches of images work better. Seems like Sliding window or Infinite Retrieval might be promising directions to go into.
Also - and this is speculation - I think that the jump in multimodal capabilities that we're seeing from models is only going to increase, meaning long-context for images is probably not going to be a huge blocker as models improve.
This just depends a lot on how well you can pare down the context before passing it to an LLM.
Ex: reading contracts or legal documents. Usually a 50-page document that you can't very effectively cherry-pick from, since different clauses or sections will be referenced multiple times across the full document.
In these scenarios, it's almost always better to pass the full document into the LLM rather than running RAG. And if you're passing the full document it's better as text rather than images.
I spent a good amount of time last year working on a system to analyse patent documents.
Patents are difficult as they can include anything from abstract diagrams, chemical formulas, to mathematical equations, so it tends to be really tricky to prepare the data in a way that later can be used by an LLM.
The simplest approach I found was to “take a picture” of each page of the document, and ask an LLM to generate a JSON explaining the content (plus some other metadata such as page number, number of visual elements, and so on).
If any complicated image is present, simply ask the model to describe it. Once that is done, you have a JSON file that can be embedded into your vector store of choice.
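A rough sketch of that page-to-JSON step, assuming the OpenAI SDK; the JSON keys and model name are only illustrative:

    import base64, json
    from openai import OpenAI  # assumed SDK; any vision-capable model works

    client = OpenAI()

    def page_to_record(png_path: str, page_number: int) -> dict:
        """Turn one page (already exported as PNG) into a JSON record for the vector store."""
        with open(png_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            response_format={"type": "json_object"},
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": (
                        "Return JSON with keys: text (full transcription), "
                        "num_visual_elements (count), descriptions (one per diagram, "
                        "formula, or chart).")},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        record = json.loads(resp.choices[0].message.content)
        record["page_number"] = page_number
        return record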
I can’t say much about the price-to-performance ratio, but this approach seems to be easier and more efficient than what the author is proposing.
You can ask the model to describe the image, but that is inherently lossy. What if it is a chart, the model captures most of the x, y pairs, but the user asks about an x or y value the description missed? Presenting the image at inference is effective since you're guaranteeing that the LLM is able to answer exactly the user's question. The only blocker then is how good retrieval is, and that's a smaller problem to solve. This approach allows us to solve only for passing in relevant context; the rest is taken care of by the LLM. Otherwise the problem space expands to correct OCR, parsing, and getting all possible descriptions of images from the model.
This is a great example of how to use LLMs, thanks.
But it also illustrates to me that the opportunities with LLMs right now are primarily about reclassifying or reprocessing existing sources of value, like patent documents. In the '90s and '00s, many successful software businesses were building databases to replace traditional filing.
Creating fundamentally new collections of value which require upfront investment seems to still be challenging for our economy.
How often has the model hallucinated the image contents, though?
I speak from experience that this is a bad idea.
There are cases where documents contain text with letters that look the same in many fonts. For example, 0 and O look identical in many fonts. So if you have a doc/xls/PDF/html, then you lose information by converting it into an image.
For cases like serial numbers, not even humans can distinguish 0 vs O (or l vs I) by looking at them.
PDFs don’t always contain actual text. Sometimes they just contain instructions to draw the letters.
For that reason, IMO rendering a PDF page as an image is a very reasonable way to extract information out of it.
For the other formats you mentioned, I agree that it is probably better to parse the document instead.
> PDFs don’t always contain actual text. Sometimes they just contain instructions to draw the letters.
Yeah, but when they do, it makes a difference.
Also, speaking from experience, most invoices do contain actual text.
The more I learn about PDF, the more I am: what?
It makes sense. If you "print" to pdf it makes far more sense to keep the vector representation around. Rasterizing it would simultaneously bloat the file size and lower the quality level when transformed.
Completely agree with this. This is what we've observed in production too. Embedding images makes the RAG a lot more robust to the "inner workings" of a document.
This is within the context of using it as an alternative to OCR, which would suffer the same issues, only with more duct-tape-and-string infrastructure and cost.
You can win any race if you can cherry-pick your competitors.
Strangely, the linked marketing text repeatedly comments on OCR errors (I counted at least four separate instances), which is extremely weird because such a visual RAG suffers from precisely the same problem. It is such a weird thing to repeatedly harp on.
If the OCR has a problem understanding varying fonts and text, there is zero reason using embeddings instead is immune to this.
I’m confused. Wouldn’t the LLM be able to read the text more correctly than traditional OCR, by virtue of inferring what the text should say from its training rather than relying only on what it looks like? I would think it would make fewer typographic interpretation errors than a more traditional mechanical algorithm.
Modern OCR uses machine learning, including ViTs and precisely the same models and technologies used in the linked solution. If their comparison were with OCR from 2002, sure, but they're comparing against modern OCR solutions that generate text representations of documents using the very latest machine learning innovations and massive models (along with transformer-based contextual inference), while their own solution uses precisely the same stack. It's a weird thing for them to continually harp on.
Their solution is precisely as subject to ambiguities of the text as the comparative OCR solutions are.
For HTML, in a lot of cases, using the tags to chunk things better works. However, I've found that when I'm trying to design a page, showing models the actual image of the page leads to way better debugging than just sending the code back.
1 vs I or 0 vs O are valid issues, but in practice - and there's probably selection bias here - we've seen documents with a ton of diagrams and charts (that are much simpler to deal with as images).
I was trying to copy a schedule into Gemini to ask it some questions about it. I struggled with copying and pasting it for several minutes, just wouldn't come out right even though it was already in HTML. Finally gave up, screenshotted it, and then put black boxes over the parts I wanted Gemini to ignore (irrelevant info) and pasted that image in. It worked very well.
Could someone please help me understand how a multi-modal RAG does not already solve this issue?[1]
What am I missing?
Flash 2.5, Sonnet 3.7, etc. always provided me with very satisfactory image analysis. And, I might be making this up, but to me it feels like some models provide better responses when I give them the text as an image, instead of feeding "just" the text.
Multimodal RAG is exactly what we argue for. In their original state, though, multivectors (that form the basis for multi-modal RAG) are very unwieldy - computing the similarity scores is very expensive and so scaling them up in this state is hard.
You need to apply things like quantization, single-vector conversions (using fixed dimensional encodings), and better indexing to ensure that multimodal RAG works at scale.
That is exactly what we're doing at Morphik :)
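For intuition, here is a small numpy sketch of the late-interaction ("Chamfer"/MaxSim) scoring that makes raw multivectors heavy; the shapes assume a ColPali-like setup of roughly a thousand 128-dimensional patch vectors per page:

    import numpy as np

    def maxsim_score(query_vecs: np.ndarray, page_vecs: np.ndarray) -> float:
        """query_vecs: (n_query_tokens, d); page_vecs: (n_patches, d); both L2-normalized.
        Scoring one page is a full matrix product instead of a single dot product."""
        sims = query_vecs @ page_vecs.T          # (n_query_tokens, n_patches)
        return float(sims.max(axis=1).sum())     # best patch per query token, summed

    def binarize(vecs: np.ndarray) -> np.ndarray:
        """1-bit quantization: keep only the sign of each dimension (~32x smaller)."""
        return (vecs > 0).astype(np.uint8)

    # Toy numbers: 10k pages x ~1030 patches x 128 dims is ~5 GB of float32 multivectors,
    # which is why quantization and fixed-dimensional encodings (MUVERA) matter at scale.
    rng = np.random.default_rng(0)
    q = rng.standard_normal((20, 128));   q /= np.linalg.norm(q, axis=1, keepdims=True)
    p = rng.standard_normal((1030, 128)); p /= np.linalg.norm(p, axis=1, keepdims=True)
    print(maxsim_score(q, p))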
And the Gemini(s) aren't already doing this at GoogleCorp?
I get that ColPali is straightforward and powerful, but document processing still has many advantages:
- lexical retrieval (based on BM25, TF-IDF), which is better at capturing specific terms
- full-text search
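A minimal sketch of the lexical leg, assuming the rank_bm25 package and naive whitespace tokenization (the toy corpus is made up):

    from rank_bm25 import BM25Okapi  # assumed package; any BM25 implementation works

    corpus = [
        "Q3 revenue grew 12% year over year, driven by subscription sales",
        "Depreciation schedule for leased equipment, fiscal 2024",
        "Term sheet: Series B preferred, 1x liquidation preference",
    ]
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

    query = "liquidation preference"
    scores = bm25.get_scores(query.lower().split())
    # Exact terms like "liquidation preference" score highly here even when a dense
    # (or ColPali-style) retriever might rank a merely similar-looking page first.
    print(max(zip(scores, corpus)))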
This is something I've done as well - I wanted to scan all invoices that came into my mail, so I just exported ALL ATTACHMENTS from my mailbox and used a script to upload them one by one, forcing a tool call to extract "is invoice: yes / no" and a bunch of fields (invoice lines, company name, date, invoice number, etc.).
It had a surprisingly high hit rate. It took over 3 hours of LLM calls, but who cares - it was completely hands-off. I then compared the invoices to my bank statements (aka I asked an LLM to do it) and it just missed a few invoices that weren't included as attachments (like those "click to download" mails). It did a pretty poor job matching invoices to bank statements (like "oh this invoice is a few dollars off but I'm sure it's this statement"), so I'm afraid I still need an accountant for a while.
"What did it cost"? I don't know. I used a cheap-ish model, Claude 3.7 I think.
In your use case, for that simple data matching that it errors on I think it would be better to have the LLM write the code that can be used to process the input files (the raw text that it produced from images and the bank statements), rather than have the LLM try to match up the data in the files itself.
"The results transformed our system, and our query latency went from 3-4s to 30ms."
Ignoring the trade-offs introduced, the MUVERA paper presented a 90% drop in latency, with evidence in the form of a research paper. Yet you are reporting a "99%" drop in latency. Big claims require big evidence.
Is the text flattened? If not, you don't need to run PDFs through OCR - the text can be extracted, even with JavaScript in the web browser. You only need OCR for handwritten text or flattened text. Google's document parser can help as well. You could also run significantly cheaper tools on the PDF first. Just sending everything to the LLM is more costly. And what about massive PDFs? They sometimes won't fit in the context window, or will cost a lot.
LLMs are great, but use the right tool for the job.
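A quick sketch of that "check for a text layer first" step, assuming pypdf (the characters-per-page threshold is an arbitrary heuristic):

    from pypdf import PdfReader  # assumed library; pdfplumber/pdfminer work similarly

    def extract_text_if_present(path: str) -> str | None:
        """Return the PDF's embedded text layer, or None if the pages look image-only
        (scanned or flattened) and would need OCR or a vision model instead."""
        reader = PdfReader(path)
        pages = [page.extract_text() or "" for page in reader.pages]
        text = "\n\n".join(pages)
        # Arbitrary heuristic: fewer than ~50 extractable characters per page on
        # average suggests the text is flattened into images.
        return text if len(text.strip()) > 50 * len(pages) else None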
Our argument in general is that even in the non-flattened cases, we see complex diagrams pop up in documents that won't work with a text-based approach.
In the context of RAG, the objective is to send information to the model, so LLMs are the right tool for the job.
Something just feels a bit off about this piece. It seems to labour the point about how “beautiful” or “perfect” their solution is a few times too many, to the point where it starts to feel more like marketing than any sort of useful technical observation.
I disagree. It feels like something you would say when you finally come across the "obviously right" solution, that's easier to implement and simpler to describe. As Kolmogorov said, the simplest solution is exponentially more correct than the others.
It is marketing of course. Regardless of what it says it's a company blog. That sets constraints on the sort of stuff they say vs. a regular blog. Not picking on this company as it is the same for all such blogs.
Problem is, transcription errors will mess things up for sure. With the text, you just do not have to worry about transcription errors. Sure, it's a bit tricky handling tables, and chunking is a problem as well, but unless my document is more images than text, I would prefer handling it the "old-fashioned" way.
> You might still need to convert a document to text or a structured format, that’s essential for syncing information into structured databases or data lakes. In those cases, OCR works (with its quirks), but in my experience passing the original document to an LLM is better
Has anyone done any work to evaluate how good LLM parsing is compared to traditional OCR? I've only got anecdotal evidence saying LLMs are better. However, whenever I've tested it out, there was always an unacceptable level of hallucinations.
Looks like they cracked it? But I found both OCR and reading the whole page (various OpenAI models) to be unusable for scanning, say, a magazine, and working out which heading goes with what text.
Would love to try our hand at it! We have a couple magazine use cases, but the harder it is, the more fun it is :)
The emphasis on PDFs for RAG seems like something out of the 1990s. Are there any good frameworks for using RAG if your company doesn't go around creating documents left and right?
After all, the documents/emails/presentations will cover the most common use cases. But we have databases that cover all the questions the RAG might be asked - far more answers than the ones that live in documents.
That's because PDFs are the hard part. If you're starting with small pieces of text, RAG becomes much much easier.
Using modern tools I would naturally be inclined to:
1. Have the LLM see the image and produce a text version using a kind of semantic markup (even hallucinated markup)
2. Use that text for most of the RAG
3. If the focus (of analysis or conversation) converges on one image, include that image in the context in addition to the text (see the sketch below)
If I use a simple prompt with GPT 4o on the Palantir slide from the article I get this: https://gist.github.com/ianb/7a380a66c033c638c2cd1163ea7b2e9... – seems pretty good!
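A compact sketch of that shape, assuming sentence-transformers for the text embeddings (the model name and the "converged" cutoff are arbitrary):

    from dataclasses import dataclass
    import numpy as np
    from sentence_transformers import SentenceTransformer  # assumed embedding library

    @dataclass
    class PageEntry:
        markup: str       # step 1: semantic-markup transcription produced by the VLM
        image_path: str   # kept around so step 3 can re-attach the original image

    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model

    def build_index(entries: list[PageEntry]) -> np.ndarray:
        # Step 2: all retrieval happens over the text markup, not the images.
        return model.encode([e.markup for e in entries], normalize_embeddings=True)

    def retrieve(query: str, entries: list[PageEntry], index: np.ndarray, k: int = 3):
        q = model.encode([query], normalize_embeddings=True)[0]
        top = np.argsort(index @ q)[::-1][:k]
        results = [entries[i] for i in top]
        # Step 3: only when retrieval clearly converges on one page do we re-attach
        # the original image next to its markup in the generation context.
        image = results[0].image_path if float(index[top[0]] @ q) > 0.6 else None
        return results, image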
Interesting article, but this is also an ad for a SaaS.
It makes sense that a lossy transformation (OCR which removes structure) would be worse than perceptually lossless (because even if the PDF file has additional information, you only see the rendered visual). But it's cool and a little surprising that the multi-modal models are getting this good at interpreting images!
Can you report the relative storage requirements for multivector ColPali vs multivector ColPali with binary vectors vs MUVERA vs a single vector per page? Can your system scale to millions of vectors?
Yes! We have a use case in production with over a million pages. MUVERA is good for this, since it is basically akin to regular vector search + re-ranking.
In our current setup, we have the multivectors stored as .npy in S3 Express storage. We use Turbopuffer for the vector search + filtering part. Pre-warming the namespace, and pre-fetching the most common vectors from S3 means that the search latency is almost indistinguishable from regular vector search.
ColPali with binary vectors worked fine, but to be honest there have been so many specific improvements to single vectors that switching to MUVERA gave us a huge boost.
Regular multivector ColPali also suffers from a similar issue. Chamfer distance is just hard to compute at scale. PLAID is a good solution if your corpus is constant. If it isn't, using regular multivector ColPali as a re-ranking step is a good bet.
Can multimodal LLMs read the PDF file format to extract the text components as well as the graphical ones? Because that would seem to me to be the best way to go.
LLMs are not yet there for complex and diverse document parsing use cases, especially at an enterprise scale (processing millions of pages).
Some of the reasons are:
- Complex layouts, nested tables, and tables spanning multiple pages
- Checkboxes and radio buttons
- Off-oriented scans
- Controlling LLM costs
- Checking hallucinations
- Human-in-the-loop integration
- Privacy
More on the issues: https://unstract.com/blog/why-llms-struggle-with-unstructure...
Wow, this is tempting me to use Morphik to add memory to in terminal AI agents for personal use even. Looks powerful and easy.
Would love feedback :)
Related question: what is today‘s best solution for invoices?
This would depend on the exact use case. Feeding in the invoice directly to the model is - in my opinion - the best way to approach this. If you need to search over them, then directly embedding them as images is definitely a strong approach. Here's something we wrote explaining the process: https://www.morphik.ai/docs/concepts/colpali
> The ColPali model doesn't just "look" at documents. It understands them in a fundamentally different way than traditional approaches.
I’m so sick of this.
In what sense?
“It’s not just X, it’s Y” is the calling card of ChatGPT right now.
I did a bit of work in that space. It's not that simple. Models that work with images are not perfect either and often have problems finding the right information. So you trade parsing issues for corner cases that are much more difficult to debug. At the end of the day, whatever works better should be assessed on your test/validation set.