
OCR for Sensitive Data on Your Own GPU
In this second part, we focus on the practical implementation of this high-performance pipeline. We show, step by step, how to set up a dedicated, fast processing server on your own NVIDIA GPU using Podman (on Rocky Linux) and the vLLM inference engine. We then build an asynchronous Python client that fully exploits the GPU's throughput and can process even large stacks of documents.
Published December 17, 2025
Read more →
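To give a flavor of the client side described above, here is a minimal sketch of what such an asynchronous Python client could look like. It assumes a vLLM server exposing its OpenAI-compatible API on localhost:8000; the model name, prompt, and file paths are illustrative placeholders, not the article's actual code.

```python
import asyncio
import base64
from pathlib import Path

import httpx  # async-capable HTTP client

# Assumption: a local vLLM server with its OpenAI-compatible API on port 8000,
# serving a vision-language model. Adapt URL and model name to your setup.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen2-VL-7B-Instruct"  # placeholder model choice

async def ocr_page(client: httpx.AsyncClient, image_path: Path) -> str:
    """Send one page image to the server and return the extracted text."""
    image_b64 = base64.b64encode(image_path.read_bytes()).decode()
    payload = {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract all text from this document page."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }
    resp = await client.post(VLLM_URL, json=payload, timeout=120.0)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

async def main(pages: list[Path]) -> None:
    # One shared connection pool; gather() keeps many requests in flight
    # so the server can batch them on the GPU.
    async with httpx.AsyncClient() as client:
        texts = await asyncio.gather(*(ocr_page(client, p) for p in pages))
        for path, text in zip(pages, texts):
            print(f"--- {path.name} ---\n{text}\n")

if __name__ == "__main__":
    asyncio.run(main(sorted(Path("pages").glob("*.png"))))
```

Firing all page requests concurrently is the key point: vLLM's continuous batching can only fill the GPU when many requests are in flight, which is where the speedup over a sequential client comes from.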
How LLMs Are Revolutionizing OCR-Based Document Analysis
In this first part, we look at the conceptual advantages of Large Language Models (LLMs) for document analysis. The technical implementation and practical code examples for the two contrasting pipelines follow in detail in the accompanying article.
Published December 02, 2025
Read more →