Enterprise Data Platform

FP Lens Connect

An intelligent natural language interface that lets business users query databases conversationally, without writing SQL.

Rust · Axum · Leptos WASM · pgvector · RAG · Ory

The problem

Organisations sit on vast amounts of structured data locked behind SQL expertise. Analysts wait for engineering teams to build reports, data scientists spend more time writing queries than analysing results, and business stakeholders are entirely dependent on intermediaries to answer time-sensitive questions.


The solution

FP Lens Connect is a multi-tenant, enterprise-grade platform that translates natural language into safe, validated SQL queries using large language models augmented with Retrieval-Augmented Generation (RAG). It combines conversational AI with deep schema understanding, business glossary awareness, and real-time collaboration — turning any authorised user into a self-service data analyst.


Architecture highlights

Full-Stack Rust

The entire system — backend API server, WebAssembly frontend, identity provider, and authorisation client — is written in Rust across a 6-crate workspace. This delivers memory safety, high concurrency, and predictable latency without a garbage collector.

Conversational Query Sessions

Users interact through a chat interface where context accumulates across turns. The system maintains checkpointed sessions with full conversation history, enabling follow-up questions like "now break that down by region" or "exclude last quarter".
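As a rough illustration, a checkpointed session can be modelled as an append-only turn history from which the LLM context is rebuilt on every turn. The `Session` and `Turn` types below are hypothetical sketches, not the platform's actual data model.

```rust
// Hypothetical sketch: a conversation session that accumulates turns,
// so follow-up questions are answered against the full history.
struct Turn {
    role: &'static str, // "user" or "assistant"
    content: String,
}

struct Session {
    history: Vec<Turn>,
}

impl Session {
    fn new() -> Self {
        Session { history: Vec::new() }
    }

    fn push(&mut self, role: &'static str, content: &str) {
        self.history.push(Turn { role, content: content.to_string() });
    }

    // Rebuild the prompt context from every prior turn, so a follow-up
    // like "now break that down by region" resolves against the earlier query.
    fn context(&self) -> String {
        self.history
            .iter()
            .map(|t| format!("{}: {}\n", t.role, t.content))
            .collect()
    }
}

fn main() {
    let mut s = Session::new();
    s.push("user", "total sales by product");
    s.push("user", "now break that down by region");
    println!("{}", s.context());
}
```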

Multi-Provider LLM Integration

The platform abstracts across four LLM providers — OpenAI, Anthropic, Google Gemini, and Ollama (on-premises) — through a trait-based factory pattern. Organisations configure their preferred provider per tenant, with API keys managed securely through HashiCorp Vault. Prompt caching and cost tracking are built in.
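In spirit, a trait-based factory looks like the sketch below: one trait per capability, one implementation per provider, and a factory that selects from per-tenant configuration. The trait, type, and function names here (`CompletionProvider`, `provider_for`) are illustrative assumptions, not the platform's actual API, and the bodies are stubs rather than real API calls.

```rust
// Hypothetical sketch of a trait-based provider factory.
trait CompletionProvider {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> String;
}

struct OpenAi;
struct Ollama;

impl CompletionProvider for OpenAi {
    fn name(&self) -> &'static str { "openai" }
    fn complete(&self, prompt: &str) -> String {
        // A real implementation would call the OpenAI API here.
        format!("[openai] {prompt}")
    }
}

impl CompletionProvider for Ollama {
    fn name(&self) -> &'static str { "ollama" }
    fn complete(&self, prompt: &str) -> String {
        // A real implementation would call a local Ollama instance here.
        format!("[ollama] {prompt}")
    }
}

// Factory: pick a provider from per-tenant configuration.
fn provider_for(configured: &str) -> Box<dyn CompletionProvider> {
    match configured {
        "ollama" => Box::new(Ollama),
        _ => Box::new(OpenAi),
    }
}

fn main() {
    let provider = provider_for("ollama");
    println!("{}", provider.complete("generate SQL for: sales by month"));
}
```

Because callers only see `Box<dyn CompletionProvider>`, swapping providers is a configuration change, not a code change.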

RAG-Powered Schema Intelligence

A 5-phase pipeline extracts schema metadata from customer PostgreSQL databases, generates business-friendly documentation, computes vector embeddings, and stores them in pgvector for hybrid search (full-text + cosine similarity with HNSW indexing). Four knowledge types feed the RAG context: schema documentation, curated query examples, extracted query patterns, and a business glossary — giving the LLM rich, domain-specific context for accurate SQL generation.
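Conceptually, hybrid ranking blends a lexical (full-text) score with embedding cosine similarity. The weighted-sum formulation below is an illustrative assumption; in the real pipeline the ranking runs inside PostgreSQL/pgvector rather than application code.

```rust
// Illustrative hybrid ranking: combine a full-text rank with
// embedding cosine similarity via a weighted sum.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

// alpha = 1.0 ranks purely by vector similarity, 0.0 purely by text rank.
fn hybrid_score(text_rank: f32, vector_sim: f32, alpha: f32) -> f32 {
    alpha * vector_sim + (1.0 - alpha) * text_rank
}

fn main() {
    let sim = cosine_similarity(&[1.0, 0.0], &[1.0, 0.0]);
    println!("similarity = {sim}, hybrid = {}", hybrid_score(0.4, sim, 0.7));
}
```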

Multi-Provider Embedding Engine

Supports five embedding providers (OpenAI, Azure OpenAI, Cohere, HuggingFace, Ollama) with blue-green deployment for zero-downtime model upgrades. A traffic routing layer enables gradual rollouts and A/B testing between embedding models, with automatic rollback on degradation.
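The gradual-rollout idea can be sketched as weighted routing between a blue and a green embedding deployment. The names, weights, and rollback behaviour below are assumptions for illustration, not the platform's actual routing layer.

```rust
// Hypothetical weighted router between two embedding deployments.
#[derive(Debug, PartialEq)]
enum Deployment {
    Blue,  // current production model
    Green, // candidate model being rolled out
}

// `sample` is a uniform draw in [0, 1); `green_weight` is the rollout fraction.
fn route(green_weight: f64, sample: f64) -> Deployment {
    if sample < green_weight {
        Deployment::Green
    } else {
        Deployment::Blue
    }
}

// On quality degradation, rollback is just driving the green weight to zero,
// which sends all traffic back to blue without a redeploy.
fn rollback() -> f64 {
    0.0
}

fn main() {
    // 10% canary: most traffic still hits the blue deployment.
    println!("{:?}", route(0.10, 0.05)); // Green
    println!("{:?}", route(0.10, 0.50)); // Blue
    println!("after rollback, green weight = {}", rollback());
}
```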

Enterprise Security Stack

Authentication through a full Ory ecosystem integration — Kratos for identity management, Hydra as the OAuth2/OIDC provider, and Keto for fine-grained, namespace-based authorisation. Seven permission namespaces (User, Organisation, Session, Query, DbConnection, LlmConfig, Knowledge) enforce row-level access control. SQL execution is sandboxed: only SELECT/WITH/EXPLAIN are permitted, with configurable query length limits, execution timeouts, and result row caps.
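A minimal sketch of the statement-allowlist idea: real validation would parse the SQL into an AST rather than inspect prefixes, and the length cap below is an invented placeholder for the configurable per-tenant limit.

```rust
// Illustrative sandbox check: permit only read-only statement kinds
// and enforce a query length cap. A production validator would parse
// the statement instead of prefix-matching.
const MAX_QUERY_LEN: usize = 8_192; // assumed value; configurable in practice

fn is_allowed(sql: &str) -> bool {
    if sql.len() > MAX_QUERY_LEN {
        return false;
    }
    let upper = sql.trim_start().to_ascii_uppercase();
    ["SELECT", "WITH", "EXPLAIN"]
        .iter()
        .any(|kw| upper.starts_with(kw))
}

fn main() {
    println!("{}", is_allowed("SELECT * FROM orders")); // true
    println!("{}", is_allowed("DROP TABLE orders"));    // false
}
```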

Real-Time Streaming

A WebSocket pub/sub layer delivers conversation progress in real time — status updates, SQL proposals, and execution results stream to the browser as they're generated, with ephemeral ticket-based WebSocket authentication.
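One way to picture ephemeral ticket-based WebSocket authentication: a short-lived, single-use token is issued over the authenticated HTTP channel and redeemed exactly once during the socket handshake. The `TicketStore` below is a simplified, single-threaded sketch; a real service would guard the store behind a mutex or an external cache.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Simplified sketch of an ephemeral-ticket store (not thread-safe).
struct TicketStore {
    live: HashMap<String, Instant>, // ticket -> expiry
    ttl: Duration,
}

impl TicketStore {
    fn new(ttl: Duration) -> Self {
        TicketStore { live: HashMap::new(), ttl }
    }

    // Issued over the authenticated HTTP API before opening the socket.
    fn issue(&mut self, ticket: String) {
        self.live.insert(ticket, Instant::now() + self.ttl);
    }

    // Redeemed during the WebSocket handshake; removing the entry
    // makes every ticket single-use.
    fn redeem(&mut self, ticket: &str) -> bool {
        match self.live.remove(ticket) {
            Some(expiry) => Instant::now() <= expiry,
            None => false,
        }
    }
}

fn main() {
    let mut store = TicketStore::new(Duration::from_secs(30));
    store.issue("abc123".to_string());
    println!("first redeem: {}", store.redeem("abc123"));  // true
    println!("second redeem: {}", store.redeem("abc123")); // false
}
```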


Key capabilities

  • Natural language to SQL with multi-turn conversational context
  • Pluggable LLM backends — swap providers without code changes
  • Automated schema discovery with background introspection jobs
  • Business glossary and query example management for domain-specific accuracy
  • Blue-green embedding deployments with traffic routing and rollback
  • SQL safety validation — parameterised execution, forbidden keyword blocking, row/time limits
  • Comprehensive audit trail — every query, every action, every access logged
  • Rate limiting — per-user and per-organisation request throttling
  • Job tracking system — hierarchical progress reporting for long-running operations
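The per-user throttling listed above is commonly implemented as a token bucket; the capacity and refill rate in this sketch are illustrative assumptions, not the platform's actual limits.

```rust
use std::time::Instant;

// Illustrative token bucket for per-user request throttling.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        TokenBucket { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    // Refill proportionally to elapsed time, then spend one token per request.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Allow bursts of 2 requests, refilling one token per second.
    let mut bucket = TokenBucket::new(2.0, 1.0);
    println!("{} {} {}", bucket.try_acquire(), bucket.try_acquire(), bucket.try_acquire());
}
```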

What this demonstrates

Systems-Level Rust Engineering: Async services, connection pooling, zero-copy serialisation, trait-based polymorphism across a multi-crate workspace
LLM Application Architecture: Prompt engineering, RAG pipeline design, provider abstraction, cost-aware token management
Enterprise Security Design: OIDC integration, fine-grained authorisation, secret management, SQL injection prevention, audit compliance
Full-Stack Delivery: From PostgreSQL schema design through REST APIs to a reactive WebAssembly frontend, all in a single language
Infrastructure Orchestration: 8 containerised services with proper networking, initialisation ordering, and credential bootstrapping

Technology stack

Language: Rust (edition 2021), 6-crate workspace
API Server: Axum 0.8 + Tokio async runtime
Frontend: Leptos 0.8 (WebAssembly, client-side rendered)
Database: PostgreSQL 17 with pgvector 0.8
Vector Search: HNSW indexing, cosine similarity, hybrid ranking
Secrets: HashiCorp Vault (AppRole auth, dynamic credentials)
Identity: Ory Kratos + Hydra (OIDC/OAuth2)
Authorisation: Ory Keto (namespace-based, Zanzibar-inspired)
Infrastructure: Docker Compose, 8 orchestrated services
Schema: 44 database migrations, 40+ tables
API Surface: 80+ REST endpoints + WebSocket

Want to discuss?

We build these products with the same rigour we bring to our consulting engagements.

Get in touch