We put together an open-source collection of Pydantic schemas for a variety of document categories (W-2 filings, invoices, etc.), including instructions for getting structured JSON responses from any visual input with the model of your choosing. You can run everything locally.
I've used "structured output" (with supplied schema) on Google and openai, and function calling / tool use on those, anthropic and others- and afaict they are functionally the same (if you force a specific function / schema). Has someone had a different experience?
They’re slightly nuanced: every model provider has slightly different Pydantic / JSON schema compatibility (e.g. for handling Literals, Unions, nested subtypes, etc.).
So you end up hitting roadblocks for seemingly simple Pydantic schemas.
I meant between "structured output" and "function calling". Afaict one is outputting according to a schema and the other is outputting according to a schema... which will be used as the parameters to a function.
But they seem to be considered disparate concepts. So I'm trying to understand if there's some additional nuance I'm missing.
Ah ok, I misunderstood. As far as I've seen, structured outputs is essentially "json-mode" with some constraints (i.e. guided decoding over a known schema) - so the model effectively emits valid JSON that conforms to the schema. In function calling, the model is asked to emit "code" that conforms to some function parameter spec. You could use json-mode for function-calling, but probably not the other way around.
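Roughly, with the OpenAI Python SDK, the two patterns look something like the sketch below (helper and parameter names are from memory of recent SDK versions and may differ; the "record_invoice" tool is just a made-up example):

from pydantic import BaseModel
from openai import OpenAI

class Invoice(BaseModel):
    vendor: str
    total: float

client = OpenAI()

# Structured output: the schema directly constrains decoding.
parsed = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Extract: ACME Corp, total $42.00"}],
    response_format=Invoice,
).choices[0].message.parsed

# Function calling: the same schema is exposed as a tool's parameters,
# and the model emits arguments for that tool.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Extract: ACME Corp, total $42.00"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "record_invoice",
            "parameters": Invoice.model_json_schema(),
        },
    }],
    tool_choice={"type": "function", "function": {"name": "record_invoice"}},
)
args = resp.choices[0].message.tool_calls[0].function.arguments  # JSON string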
I've generally found json-mode to be more useful than function-calling, even though the latter is what everyone fixates on because of its obvious use in agents.
I don't understand the difference based on your explanation (or the significance of "code") and have used function calling for outputting json according to a schema.
With function calls the model may or may not output something that matches the schema; with structured output, the schema is enforced at the logit level.
At least in the case of OpenAI, you can set "strict" to true and function calling / tool use is then enforced to follow the schema too.
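For example, the tool spec looks roughly like this ("record_invoice" is a made-up name; to my understanding strict mode also wants closed objects, i.e. additionalProperties set to false):

from pydantic import BaseModel

class Invoice(BaseModel):
    vendor: str
    total: float

schema = Invoice.model_json_schema()
schema["additionalProperties"] = False  # strict mode requires closed objects

tools = [{
    "type": "function",
    "function": {
        "name": "record_invoice",  # hypothetical tool name
        "strict": True,            # arguments must conform to the schema
        "parameters": schema,
    },
}]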
The model might not use the tools on every completion, depending on your setup.
Super cool! We at BAML have been thinking about doing something like this for our ecosystem as well - we’d love to add BAML models to this repo!
If you haven’t heard of us, we provide a language and runtime that let you define your schemas in a simpler syntax and use them with _any_ model, not just those that implement tool calling or JSON mode, by relying on schema-aligned parsing. Check it out! https://github.com/BoundaryML/baml
Would love to chat! reach out scott@vlm.run
Have you folks tried finetuning models for data extraction from visual data?
That's one of our main focuses, yes: https://docs.vlm.run/api-reference/v1/fine-tuning/post-finet...
Interesting. We're using a SaaS solution for document extraction right now. I don't know if it's in our interest to build out more, but I do like the idea of keeping extraction local.
Cool, what types of documents do you currently handle? We could share some of our learnings/schemas here too.
Mostly tax forms, state-specific formations documents (Articles of X), and state-specific payroll registration documents.
Different commenter here; I'm extracting data from commercial invoices, POs, and bills of lading.
Ah cool, care to share a few examples? We can probably add those schemas in the next few days if there's enough folks who could benefit from this. A basic invoice schema is already there: https://github.com/vlm-run/vlmrun-hub/blob/main/vlmrun/hub/s...
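For reference, a stripped-down invoice schema along those lines looks something like this (illustrative field names, not the exact schema in the hub):

from datetime import date
from typing import Optional
from pydantic import BaseModel, Field

class LineItem(BaseModel):
    description: str
    quantity: float
    unit_price: float
    total: float

class Invoice(BaseModel):
    invoice_number: str
    issue_date: Optional[date] = None
    vendor_name: str
    customer_name: Optional[str] = None
    currency: Optional[str] = Field(None, description="ISO 4217 code, e.g. USD")
    line_items: list[LineItem] = []
    subtotal: Optional[float] = None
    tax: Optional[float] = None
    total: Optional[float] = None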
You can see some of the qualitative results on GPT-4o, Gemini, Llama 3.2 11B, and Phi-4 here: https://github.com/vlm-run/vlmrun-hub?tab=readme-ov-file#-qu...
Our customers insist we run everything on their docs locally.
Absolutely, we’ve been hearing the same from our customers - which is why we thought it makes sense to open-source a bunch of schemas so that they’re reusable and compatible across various inference providers (esp. Ollama/local ones).
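For example, the same kind of Pydantic schema can be reused with a local model through Ollama's structured outputs (rough sketch; the model tag and client details are assumptions and may differ on your setup):

import ollama
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    vendor_name: str
    total: float

# Ask a locally running vision model to fill the schema;
# `format` accepts a JSON schema for structured outputs.
resp = ollama.chat(
    model="llama3.2-vision",
    messages=[{
        "role": "user",
        "content": "Extract the invoice fields as JSON.",
        "images": ["invoice.png"],
    }],
    format=Invoice.model_json_schema(),
)
invoice = Invoice.model_validate_json(resp.message.content)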
What are the most promising ways to extract information from pictures like this if the domain has strict time constraints? What's the second-best way that is still fast?
You can always distill VLMs into much smaller / faster models that are specific to your domain or use case.
What’s the use-case and what kind of latency do you require?
When making a new repo, reset your initial branch back to master with the following command:
git config --global init.defaultBranch master
There's an equivalent setting in GitHub.
This seems to work for videos as well. Pretty cool demo and very nice interface for the pydantic types.
Yes, good catch. We'll be adding several more schemas for videos in the next few weeks.
A few video schemas are already added to the main catalog: https://github.com/vlm-run/vlmrun-hub/blob/main/vlmrun/hub/c...
This is something I was searching for. Thanks for creating it!
I'd really like to play with Qwen2.5-VL at some point, perhaps for reading datasheets for microchips. Nicely for some applications, it's also very good at reporting the position of what it finds, which many ML tools are pretty mediocre at. https://qwenlm.github.io/blog/qwen2.5-vl/
Not really this application, but QvQ for visual reasoning is also impressive. https://qwenlm.github.io/blog/qvq-72b-preview/
Meta has used Qwen as the basis for their Apollo research. https://arxiv.org/abs/2412.10360
Is Qwen2.5-VL on Ollama? Could give it a try with a few of the schemas we have.
We’ve locally tested with Llama 3.2 11B Vision on Ollama: https://github.com/vlm-run/vlmrun-hub/blob/main/tests/benchm...
FWIW, I think Ollama's structured outputs API is quite buggy compared to the HF transformers variant.