How InsureAI Works

A production-grade RAG + LLM architecture that gives you genuinely personalised insurance recommendations — not just keyword matches.

System Architecture

1. Data Ingestion

CSV datasets (AutoInsurance, claims data, premium tables) are parsed and converted into rich policy documents with pricing factors.

Python · Pandas · NumPy
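The ingestion step can be sketched as follows: each CSV row is flattened into a prose "policy document" that carries its pricing factors. The column names here (Policy Type, Monthly Premium, Vehicle Class) are illustrative assumptions, not the project's actual schema.

```python
# Sketch: turn one row of a CSV dataset into a text policy document
# suitable for embedding. Column names are illustrative assumptions.
import io
import pandas as pd

SAMPLE_CSV = """Customer,Policy Type,Monthly Premium,Vehicle Class
C-001,Corporate Auto,69.50,Two-Door Car
C-002,Personal Auto,94.25,SUV
"""

def row_to_document(row: pd.Series) -> str:
    """Flatten one customer record into prose that embeds well."""
    return (
        f"{row['Policy Type']} policy. "
        f"Monthly premium: ${row['Monthly Premium']:.2f}. "
        f"Vehicle class: {row['Vehicle Class']}."
    )

df = pd.read_csv(io.StringIO(SAMPLE_CSV))
documents = [row_to_document(row) for _, row in df.iterrows()]
print(documents[0])
# Corporate Auto policy. Monthly premium: $69.50. Vehicle class: Two-Door Car.
```

Prose documents like these embed better than raw key-value pairs because sentence-transformer models are trained on natural language.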

2. Embedding & Indexing

Policy documents are embedded using sentence-transformers (all-MiniLM-L6-v2) and stored in a Qdrant vector database for semantic search.

Sentence-Transformers · Qdrant
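A minimal sketch of the embedding and indexing step, assuming the public `sentence-transformers` and `qdrant-client` APIs. The collection name, batch size, and local storage path are illustrative assumptions; the heavy imports are kept inside the function so the batching helper stays importable on its own.

```python
# Sketch of embedding + indexing into a local Qdrant collection.
# Collection name, path, and batch size are illustrative assumptions.

def batched(items, size):
    """Yield successive fixed-size batches (upserting in batches keeps memory flat)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def index_documents(documents, collection="policies", batch_size=64):
    # Imported lazily; requires sentence-transformers and qdrant-client.
    from sentence_transformers import SentenceTransformer
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
    client = QdrantClient(path="./qdrant_data")      # local file-based mode

    client.create_collection(
        collection_name=collection,
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )
    next_id = 0
    for batch in batched(documents, batch_size):
        vectors = model.encode(batch)  # one 384-dim vector per document
        points = [
            PointStruct(id=next_id + i, vector=vec.tolist(), payload={"text": doc})
            for i, (doc, vec) in enumerate(zip(batch, vectors))
        ]
        client.upsert(collection_name=collection, points=points)
        next_id += len(batch)
```

Storing the source text in the payload lets retrieval return human-readable policies alongside the similarity scores.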

3. RAG Retrieval

The user's input is converted into a query vector, and the top 20 semantically similar policies are retrieved, then filtered for eligibility and budget.

Qdrant cosine search · Eligibility filters
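Qdrant performs the cosine search natively; the pure-Python sketch below only mirrors the retrieve-then-filter logic of this step. The `premium` and `min_age` payload fields (and the 2-dimensional vectors) are illustrative assumptions.

```python
# Sketch of retrieve-then-filter: rank by cosine similarity, take the
# top-k, then apply eligibility and budget filters on the payload.
# Field names and tiny 2-D vectors are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, policies, top_k=20, budget=None, driver_age=None):
    """Rank policies by similarity, then filter the top-k for fit."""
    ranked = sorted(policies, key=lambda p: cosine(query_vec, p["vector"]), reverse=True)
    hits = ranked[:top_k]
    if budget is not None:
        hits = [p for p in hits if p["premium"] <= budget]
    if driver_age is not None:
        hits = [p for p in hits if p["min_age"] <= driver_age]
    return hits

policies = [
    {"name": "Basic Liability", "vector": [1.0, 0.0], "premium": 80, "min_age": 18},
    {"name": "Full Comprehensive", "vector": [0.7, 0.7], "premium": 160, "min_age": 25},
    {"name": "Young Driver Plus", "vector": [0.0, 1.0], "premium": 120, "min_age": 17},
]
hits = retrieve([0.9, 0.1], policies, top_k=2, budget=150)
print([p["name"] for p in hits])  # ['Basic Liability']
```

Filtering after the similarity ranking keeps the search fast while guaranteeing that every returned policy is one the user can actually buy.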

4. LLM Analysis

Retrieved policies and the user profile are sent to Claude (Sonnet), which ranks the policies, calculates personalised premiums, and writes pros/cons plus an AI summary.

Claude Sonnet 4.6 · Anthropic API
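The LLM stage can be sketched in two parts: assembling the retrieved policies and user profile into a single prompt, then calling the Anthropic Messages API. The prompt wording and the `claude-sonnet-4-6` model string are illustrative assumptions, not the project's exact values; check Anthropic's current model list before using an id.

```python
# Sketch of the LLM analysis stage. Prompt text and model id are
# illustrative assumptions.
import json

def build_prompt(user_profile: dict, policies: list[dict]) -> str:
    """Combine the user profile and retrieved policies into one prompt."""
    return (
        "You are an insurance advisor. Given the user profile and candidate "
        "policies below, rank the policies, estimate a personalised premium "
        "for each, and list pros/cons plus a short summary.\n\n"
        f"User profile:\n{json.dumps(user_profile, indent=2)}\n\n"
        f"Candidate policies:\n{json.dumps(policies, indent=2)}"
    )

def analyse(user_profile, policies, model="claude-sonnet-4-6"):
    # Imported lazily; requires the `anthropic` package and an API key
    # in the ANTHROPIC_API_KEY environment variable.
    import anthropic
    client = anthropic.Anthropic()
    response = client.messages.create(
        model=model,
        max_tokens=2048,
        messages=[{"role": "user", "content": build_prompt(user_profile, policies)}],
    )
    return response.content[0].text

prompt = build_prompt({"age": 30, "budget": 150}, [{"name": "Basic Liability", "premium": 80}])
```

Keeping the retrieved policies in the prompt is what makes this RAG: the model grounds its ranking in real catalog entries instead of inventing coverage terms.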

Tech Stack

Frontend

Next.js 15 · React 19 · Tailwind CSS

Backend

FastAPI · Python 3.11+

Vector DB

Qdrant (local file-based or cloud)

Embeddings

all-MiniLM-L6-v2 (sentence-transformers)

LLM

Claude Sonnet 4.6 (Anthropic API)

Data

Kaggle car insurance datasets (3 sources)

Data Sources

AutoInsurance.csv: 9,135 customer profiles with policy types, premiums, and vehicle classes
Car_Insurance_Claim.csv: Claim history with vehicle type, driver demographics, and accident data
cgr-premiums-table.csv: 92,000+ real premium records with territory-, gender-, and age-based pricing
Synthetic Policy Catalog: 96 policy documents generated across 8 providers and 12 coverage templates