Parsing PDFs has slowly driven me insane over the last year. Here are 8 weird edge cases to show you why PDF parsing isn't an easy problem. 🧵
PDFs embed a font map (the ToUnicode CMap) that says which Unicode character each rendered glyph corresponds to - that's what makes copy/paste work. Unfortunately, these maps can lie, so the character you copy is not what you see. If you're unlucky, it's total gibberish.
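You can't tell a lying map apart from the outside, but you can sniff for it. A minimal sketch (not marker's actual heuristic) that flags Private Use Area codepoints - a classic symptom - using pdfplumber:

```python
# Sketch: if extraction yields Private Use Area characters, the
# ToUnicode CMap is probably unreliable. Requires pdfplumber.
import pdfplumber

def suspicious_char_ratio(path: str) -> float:
    """Fraction of extracted characters that look like CMap garbage."""
    total = bad = 0
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            for ch in page.chars:  # one dict per rendered character
                for c in ch["text"]:
                    total += 1
                    if 0xE000 <= ord(c) <= 0xF8FF:  # Private Use Area
                        bad += 1
    return bad / total if total else 0.0

# e.g. if suspicious_char_ratio("doc.pdf") > 0.05, fall back to OCR
```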
PDFs can have invisible text that only shows up when you try to extract it. "Measurement in your home" is only here once...or is it?
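One cheap way to catch the hidden-duplicate case is to look for glyphs stacked at (nearly) the same position. A hypothetical helper, again with pdfplumber:

```python
# Sketch: two identical glyphs at almost the same position usually
# means a hidden duplicate layer. Illustrative only, not marker's code.
from collections import Counter
import pdfplumber

def count_stacked_chars(path: str, tol: float = 1.0) -> int:
    stacked = 0
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            seen = Counter(
                (ch["text"], round(ch["x0"] / tol), round(ch["top"] / tol))
                for ch in page.chars
            )
            stacked += sum(n - 1 for n in seen.values() if n > 1)
    return stacked
```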
Math is a whole can of worms. Remember the font map problem? Math fonts almost always map to random characters - here we get some strange Tamil/Amharic combo.
Math bounding boxes are always fun - see how each formula is broken up into lots of tiny sections? Putting them together is a great time!
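Stitching them back together usually means greedy box merging. A sketch, with an illustrative gap threshold:

```python
# Sketch: greedily merge boxes whose gap is below a threshold.
# Boxes are (x0, y0, x1, y1); threshold and approach are illustrative.
def merge_boxes(boxes, max_gap=5.0):
    def close(a, b):
        # gap is negative when the boxes overlap on that axis
        h_gap = max(a[0], b[0]) - min(a[2], b[2])
        v_gap = max(a[1], b[1]) - min(a[3], b[3])
        return h_gap <= max_gap and v_gap <= max_gap

    merged = [list(b) for b in boxes]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if close(merged[i], merged[j]):
                    a, b = merged[i], merged[j]
                    merged[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                 max(a[2], b[2]), max(a[3], b[3])]
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return [tuple(b) for b in merged]
```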
Once upon a time, someone decided that their favorite letters should be connected together into one character - like ffi or fl. Unfortunately, PDFs are inconsistent with this, and sometimes will totally skip ligatures - very ecient of them.
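When the ligature codepoint does survive into the extracted text, Unicode NFKC normalization repairs it:

```python
# Ligature codepoints like ﬁ (U+FB01) and ﬃ (U+FB03) decompose
# cleanly under NFKC, so a post-extraction pass fixes them.
import unicodedata

print(unicodedata.normalize("NFKC", "e\ufb03cient"))  # -> "efficient"
```

The skipped case ("ecient" above) is harder - there's no codepoint left to normalize, so you need a dictionary or model pass.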
Not all text in a PDF is correct. Some PDFs are digital, and the text was added on creation. But others have had invisible OCR text added, sometimes based on pretty bad text detection. That's when you get this mess:
Overlapping text elements can get crazy - see how the watermark overlaps all the other text? Forget about finding good reading order here.
I've been showing you somewhat nice line bounding boxes. But PDFs just have character positions inside - you have to postprocess to join them into lines. In tables, this can get tricky, since it's hard to know when a new cell starts:
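The basic join looks something like this sketch - sort by position, break on vertical jumps - but real documents need much smarter gap logic:

```python
# Sketch: group characters into lines by vertical offset.
# Chars are (x, y, text) tuples; table cells need extra x-gap logic.
def chars_to_lines(chars, y_tol=2.0):
    chars = sorted(chars, key=lambda c: (c[1], c[0]))  # top-to-bottom, left-to-right
    lines, current, last_y = [], [], None
    for x, y, text in chars:
        if last_y is not None and abs(y - last_y) > y_tol:
            lines.append("".join(c[2] for c in sorted(current)))
            current = []
        current.append((x, y, text))
        last_y = y
    if current:
        lines.append("".join(c[2] for c in sorted(current)))
    return lines
```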
You might be wondering why you should even bother with the text inside PDFs. The answer is that a lot of PDFs have good text, and it's faster and more accurate to just pull it out.
Throughput measured using 1x H100 with Nvidia MPS enabled, 10 workers, and chunking. Finalizing a vLLM config for improved performance. (The arch is mostly Llama/Qwen, with some non-standard pieces.)
We've improved marker (PDF -> markdown) a lot in 3 months - accuracy and speed now beat LlamaParse, Mathpix, and Docling.
We shipped:
- LLM mode that augments marker with models like Gemini Flash
- improved math, including inline math
- links and references
- better tables and forms
Find marker at
Benchmarking markdown conversion isn't easy - different services produce different formats. We use both a heuristic text-matching method and LLM-as-a-judge.
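The heuristic half looks roughly like this (rapidfuzz is my illustration here; the real scripts are in the marker repo):

```python
# Toy version: chunk the reference markdown and fuzzy-match each
# chunk against the conversion output, then average the scores.
from rapidfuzz import fuzz

def heuristic_score(reference: str, output: str, chunk_size: int = 500) -> float:
    chunks = [reference[i:i + chunk_size]
              for i in range(0, len(reference), chunk_size)]
    scores = [fuzz.partial_ratio(chunk, output) for chunk in chunks]
    return sum(scores) / len(scores) if scores else 0.0
```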
Marker v1 is out! This is a complete rewrite - 2x faster, much more accurate, easy to extend, with markdown + JSON chunk output.
Just run `pip install -U marker-pdf`.
Find it at .
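Basic Python usage, per the v1 README (check the repo if the interface has moved since):

```python
from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

# load models once, then convert
converter = PdfConverter(artifact_dict=create_model_dict())
rendered = converter("document.pdf")
text, _, images = text_from_rendered(rendered)
```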
Marker v1 does layout and order in one step, which turns three model calls into one. The layout model handles more block types, like code and lists, that were tricky before. github.com/VikParuchuri/m…
The code is modular, with a consistent internal schema. It's easy to extend with your logic. Data comes in via providers, processors operate on individual blocks, and output is generated through renderers. You can override any part of the system.
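To give the flavor (class and attribute names below are made up, not marker's real API), a custom processor is just a callable over the document:

```python
# Illustrative only: the shape of the pipeline with hypothetical names.
# Marker's real base classes live in the repo; this shows the pattern of
# dropping a custom step between provider input and renderer output.
class UppercaseHeadings:
    """Hypothetical processor: operates on one block type, mutates in place."""
    block_types = ("SectionHeader",)

    def __call__(self, document):
        for block in document.blocks:
            if block.type in self.block_types:
                block.text = block.text.upper()
        return document
```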
Table extraction is a task frontier LLMs still struggle with; this is Gemini Flash extracting the first table. Columns are added, mixed up, and values hallucinated.
Announcing Surya OCR 2! It uses a new architecture and improves on v1 in every way:
- OCR with automatic language detection for 93 languages (no more specifying languages!)
- More accurate on old/noisy documents
- 20% faster
- Basic English handwriting support
Find Surya here - .
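To try it: `pip install surya-ocr`, then use the CLI entry points the repo ships, like `surya_ocr` - point it at an image or folder (see the README for flags).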
Surya OCR 2 is more accurate across all document types. It also compares favorably to Tesseract and Google Cloud OCR. The benchmarking script is in the repo.
My earlier benchmark compared mainly clean documents, so I made a new noisy document benchmark to compare v2 and v1. This was created from tapuscorpus by @Alix_Tz. Again, language is not hinted.
I just released new surya layout and text detection models:
- 30% faster on GPU, 4x faster on CPU, 12x faster on MPS
- Accuracy very slightly better
- When I merge this into marker, it will be 15% faster on GPU, 3x on CPU, 7x on MPS
I used a modified version of efficientvit from MIT (github.com/mit-han-lab/ef…), which was then adapted by @wightmanr. I made some small modifications, including adding a segmentation head. Thanks so much for the architecture/code!
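The head itself is conceptually simple. This isn't surya's actual code, just a sketch of the idea in PyTorch:

```python
# Sketch: take feature maps from an efficientvit-style backbone and
# predict per-pixel logits with a small conv head. Names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegHead(nn.Module):
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )

    def forward(self, features: torch.Tensor, out_size) -> torch.Tensor:
        logits = self.conv(features)
        # upsample back to input resolution for per-pixel predictions
        return F.interpolate(logits, size=out_size, mode="bilinear",
                             align_corners=False)
```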
I didn't change the training data much, but the new models do allow for higher resolution (since there's no global softmax attention), so benchmark scores are slightly better.