Forget tokenization. Forget layout loss.
With pure visual embeddings, LAYRA understands documents like a human: page by page, structure and all.
LAYRA is a next-generation Retrieval-Augmented Generation (RAG) system powered by pure visual embeddings. It treats documents not as sequences of tokens but as visually structured artifacts, preserving layout, semantics, and graphical elements such as tables, figures, and charts.
Built for both research exploration and enterprise deployment, LAYRA features:
- Modern frontend stack: TypeScript-based Next.js 15 and TailwindCSS 4.0, delivering a snappy, responsive, and developer-friendly interface.
- Async-first backend architecture: built on FastAPI and seamlessly integrated with fully asynchronous components including Redis, MySQL, MongoDB, and MinIO, optimized for high-performance data flow and scalability.
- Visual-first multimodal foundation: uses the Qwen2.5-VL series as the current default large language model, with plans to support GPT-4o, Claude, Gemini, and other multimodal models in future releases.
- Image-level embedding: document parsing and visual embedding are powered by the ColPali project, using colqwen2.5 to convert pages into rich semantic vectors stored in Milvus (see the embedding sketch below).
LAYRA aims to be an enterprise-ready, plug-and-play visual RAG platform, bridging unstructured document understanding with multimodal AI.
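To make the image-level embedding concrete, here is a minimal sketch of turning a PDF page into ColQwen2.5 multi-vector embeddings and scoring it against a query. It assumes a recent colpali-engine release that exposes the `ColQwen2_5` / `ColQwen2_5_Processor` classes, pdf2image with poppler installed, and local colqwen2.5-v0.2 weights; LAYRA's own embedding service may be organized differently.

```python
# Sketch only: page-level visual embedding with ColQwen2.5 (not LAYRA's exact code).
import torch
from pdf2image import convert_from_path            # requires poppler-utils
from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor

model = ColQwen2_5.from_pretrained(
    "vidore/colqwen2.5-v0.2",                      # or a local path, as in the deployment steps below
    torch_dtype=torch.bfloat16,
    device_map="auto",
).eval()
processor = ColQwen2_5_Processor.from_pretrained("vidore/colqwen2.5-v0.2")

pages = convert_from_path("sample.pdf", dpi=150)   # one PIL image per page
batch = processor.process_images(pages[:1]).to(model.device)
with torch.no_grad():
    page_embeddings = model(**batch)               # shape (1, n_patches, dim): multi-vector per page

queries = processor.process_queries(["What does the table on this page show?"]).to(model.device)
with torch.no_grad():
    query_embeddings = model(**queries)

# Late-interaction (MaxSim) relevance score between the query and the page image
print(processor.score_multi_vector(query_embeddings, page_embeddings))
```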
Currently in active development:
The first test version is now available for trial, with support for PDF documents only.
Future releases will add support for more document types (e.g., Word, PPT, Excel, images, Markdown).
For details, see the Roadmap.
- Latest Updates
- Why LAYRA?
- First Trial Version Available
- System Architecture
- Query Flow
- Upload & Indexing Flow
- Key Features
- Tech Stack
- Deployment
- Use Cases
- Roadmap
- Contributing
- Contact
- Star History
- License
- (2025.4.6) First Trial Version Now Available:
  The first testable version of LAYRA has been released! Users can now upload PDF documents, ask questions, and receive layout-aware answers. We're excited to see how this feature can help with real-world document understanding.
- Current Features:
  - PDF batch upload and parsing functionality
  - Visual-first retrieval-augmented generation (RAG) for querying document content
  - Backend fully optimized for scalable data flow with FastAPI, Milvus, Redis, MongoDB, and MinIO
- Upcoming Features:
  - Expanded document format support (Word, PPT, Excel, and images)
  - Support for additional large models, including GPT-4o and Claude
  - Integration of an intelligent agent for multi-hop reasoning and advanced document analysis
Most RAG systems rely on OCR or text-based parsing to process documents. But these approaches:
- Lose layout fidelity (columns, tables, hierarchy collapse)
- Struggle with non-text visuals (charts, diagrams, figures)
- Break semantic continuity due to poor OCR segmentation
LAYRA changes this.
It sees each page of the document as a whole, just like a human reader.
By using pure visual embeddings, LAYRA preserves:
- Layout structure (headers, lists, sections)
- Tabular integrity (rows, columns, merged cells)
- Embedded visuals (plots, graphs, stamps, handwriting)
- Multi-modal consistency between layout and content
The first testable version of LAYRA is now available!
Upload your own PDF documents, ask questions, and receive layout-aware answers.
You can now explore this first version, which supports PDF uploads and answers questions about your documents with visual layout context.
LAYRA's pipeline is designed for async-first, visual-native, and scalable document retrieval and generation.
A query goes through embedding → vector retrieval → answer generation:
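As an illustration of this flow, the sketch below embeds a question, retrieves candidate pages from Milvus, and hands them off for answer generation. It is deliberately simplified: it reuses the model and processor from the embedding sketch above, mean-pools the multi-vector query embedding into a single vector (which is not necessarily how LAYRA scores pages), and the collection and field names are placeholders.

```python
# Simplified query flow: embedding -> vector retrieval -> answer generation (sketch, not LAYRA's code).
import torch
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

question = "What does the revenue table show?"
q = processor.process_queries([question]).to(model.device)     # model/processor from the sketch above
with torch.no_grad():
    q_vec = model(**q).mean(dim=1)[0].float().cpu().tolist()   # mean-pool to one vector (simplification)

hits = client.search(
    collection_name="layra_pages",            # placeholder collection name
    data=[q_vec],
    limit=3,
    output_fields=["doc_name", "page_no"],    # placeholder metadata fields
)
for hit in hits[0]:
    print(hit["distance"], hit["entity"]["doc_name"], hit["entity"]["page_no"])

# The top-ranked page images (fetched from object storage) and the question are then
# passed to the Qwen2.5-VL model, which generates the layout-aware answer.
```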
PDFs are parsed into images and embedded visually via ColQwen2.5, with metadata and files stored in appropriate databases:
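A matching indexing sketch is shown below: render PDF pages with pdf2image, embed each page, and insert mean-pooled vectors plus minimal metadata into a Milvus collection. The collection and field names and the single-vector pooling are simplifying assumptions; storing originals in MinIO, metadata in MongoDB, and driving the work through Kafka are omitted here.

```python
# Simplified upload & indexing flow: PDF -> page images -> embeddings -> Milvus (sketch only).
import torch
from pdf2image import convert_from_path
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")
DIM = 128  # projection dimension used by the ColPali / ColQwen family

if not client.has_collection("layra_pages"):                    # placeholder collection name
    client.create_collection(collection_name="layra_pages", dimension=DIM)

rows = []
for page_no, image in enumerate(convert_from_path("sample.pdf", dpi=150), start=1):
    batch = processor.process_images([image]).to(model.device)  # model/processor from the sketch above
    with torch.no_grad():
        emb = model(**batch)                                    # (1, n_patches, DIM)
    vec = emb.mean(dim=1)[0].float().cpu().tolist()             # mean-pool: one vector per page (simplification)
    rows.append({"id": page_no, "vector": vec, "doc_name": "sample.pdf", "page_no": page_no})

client.insert(collection_name="layra_pages", data=rows)
# In the full pipeline, page images also go to MinIO, document metadata to MongoDB,
# and Kafka schedules this parsing work asynchronously in the background.
```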
| Feature | Description |
|---|---|
| Visual-First RAG | Embeds raw document images without relying on OCR |
| Layout-Preserving QA | Understands tables, headers, and multi-column layouts |
| Visual Content Support | Parses and reasons over plots, diagrams, and charts |
| Async Document Parsing | Background document processing via Kafka |
| Fast Vector Search | Powered by Milvus for scalable dense retrieval |
| Flexible LLM Backend | Supports the Qwen2.5-VL series, extensible to GPT-4o, Claude 3, etc. |
| Modern Web UI | Built with Next.js + TypeScript + TailwindCSS + Zustand |
Frontend:
- Next.js
- TypeScript
- TailwindCSS
- Zustand

Backend & Infrastructure:
- FastAPI
- Kafka
- Redis
- MySQL
- MongoDB
- MinIO
- Milvus
Models & RAG:
- Embedding: colqwen2.5-v0.2
- LLM Serving: VLM (Qwen2.5-VL series)
```bash
# Clone the repo
git clone https://github.com/liweiphys/layra.git
cd layra
# Set up database and FastAPI environment configuration
vim .env
vim web/.env.local
vim gunicorn_config.py
# Or use default settings
# Launch Milvus, Redis, MongoDB, Kafka, and MinIO via Docker Compose.
cd docker
sudo docker-compose -f milvus-standalone-docker-compose.yml -f docker-compose.yml up -d
# Back to project root
cd ../
# Install Python 3.10.6 and create virtual environment (optional)
# python -m venv venv && source venv/bin/activate
# Or install with conda
conda create --name layra python=3.10
conda activate layra
# Install system dependencies
# For Ubuntu/Debian:
sudo apt-get update && sudo apt-get install -y poppler-utils
# For Fedora/CentOS:
# sudo dnf install -y poppler-utils
# Install dependencies
pip install -r requirements.txt
# Download ColQwen2.5 model weights
# If Git LFS is not installed, run:
git lfs install
# Download base model weights
git clone https://huggingface.co/vidore/colqwen2.5-base
# For users in China:
# git clone https://hf-mirror.com/vidore/colqwen2.5-base
# Download LoRA fine-tuned weights
git clone https://huggingface.co/vidore/colqwen2.5-v0.2
# For users in China:
# git clone https://hf-mirror.com/vidore/colqwen2.5-v0.2
# Edit colqwen2.5-v0.2/adapter_config.json and point the `base_model_name_or_path`
# field to the local path of colqwen2.5-base, e.g.:
#   "base_model_name_or_path": "/absolute/path/to/colqwen2.5-base"
# Set the following in your .env file
COLBERT_MODEL_PATH="/absolute/path/to/colqwen2.5-v0.2"
# Initialize MySQL database
alembic init migrations
cp env.py migrations
alembic revision --autogenerate -m "Init Mysql"
alembic upgrade head
# Start backend with Gunicorn
gunicorn -c gunicorn_config.py app.main:app
# http://localhost:8000
# Start ColBERT embedding model server
python model_server.py
# Frontend development
cd web
npm install
npm run dev
# http://localhost:3000
# Or build frontend (recommended)
# cd web
# npm install
# npm run build
# npm start # http://localhost:3000
```
Note: Milvus, Redis, MongoDB, Kafka, and MinIO are expected to run locally or via Docker.
Now that everything is up and running, enjoy exploring and building with LAYRA!
In the future, we will support multiple deployment methods including Docker, Kubernetes (K8s), and other containerized environments. More details will be provided when these deployment options are available.
- Intelligent document QA: contracts, invoices, scanned reports
- Policy/legal documents: structure-rich PDF understanding
- Industrial manuals: OCR-unfriendly layouts, tables, flowcharts
- Visual analytics: trend analysis from plots and charts
- Knowledge Base PDF batch upload and parsing functionality
- RAG-based dialogue system for querying and answering
- Support for OpenAI-compatible API interfaces (ollama, sglang, vllm); see the sketch after this list
- Code architecture and modular optimization for scalability
- Support for additional large models
- Expanded document format support (e.g., Word, PPT, Excel)
- Integration of an intelligent agent for multi-hop reasoning
- Integration with knowledge graph
- Deployment with Docker Compose
- Public Knowledge Base API access
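For the OpenAI-compatible API item above, the idea is that any server speaking the OpenAI chat-completions protocol (ollama, sglang, vllm) could be plugged in as the LLM backend. The call below is a generic illustration of that protocol, not LAYRA configuration; the base URL, API key, and model name are placeholders.

```python
# Generic OpenAI-compatible chat call (illustrative only).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",    # placeholder: your vllm / sglang / ollama endpoint
    api_key="not-needed-for-local-servers",
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",    # placeholder model name
    messages=[{"role": "user", "content": "Summarize the key figures in this report."}],
)
print(response.choices[0].message.content)
```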
Contributions are welcome! Feel free to open an issue or pull request if you'd like to contribute.
We are in the process of creating a CONTRIBUTING.md file, which will provide guidelines for code contributions, issue reporting, and best practices. Stay tuned!
liweiphys
Email: liweixmu@foxmail.com
GitHub: github.com/liweiphys/layra
Bilibili: Biggestbiaoge
WeChat Official Account: LAYRA Project
Available for hire and open to new opportunities!
This project is licensed under the Apache License 2.0. See the LICENSE file for more details.
LAYRA sees what OCR cannot. It reads documents like we do: visually, structurally, holistically.