Web3One
Live · AI / ML Engineering · Product Build · Backend Infrastructure

/ Helix AI Copilot

RAG-powered support copilot for SaaS teams.

Helix is an AI customer-support copilot we built that drafts replies grounded in a company's docs, past tickets, and product changelog — cutting first-reply time and offloading repetitive queries.


/ Project overview


Helix is the studio's flagship AI product, currently in production beta with three SaaS partners. It connects to Intercom, Zendesk, and Slack, ingests product documentation and historical tickets into a vector store, and uses a multi-step RAG pipeline (query rewriting → retrieval → reranking → grounded generation) to draft customer-support replies. Every answer cites its sources and includes a confidence score; agents review, edit, and send. We built the ingestion pipelines, the retrieval layer, the agent dashboard, the eval harness for monitoring drift, and the per-tenant cost controls.
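The pipeline shape described above (query rewriting → retrieval → reranking → grounded generation) can be sketched in a few lines. This is an illustrative toy, not Helix's actual code: every helper here is a stand-in, where a real deployment would call a small model for rewriting, pgvector for retrieval, a cross-encoder for reranking, and Claude for grounded generation.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str      # e.g. "docs.md" or a past ticket ID
    text: str
    score: float = 0.0

def rewrite_query(question: str) -> str:
    # Stand-in: a small model would expand and normalize the query.
    return question.lower().strip("?")

def retrieve(query: str, corpus: list[Chunk], k: int = 3) -> list[Chunk]:
    # Stand-in for a pgvector similarity search: naive token overlap.
    terms = set(query.split())
    for c in corpus:
        c.score = len(terms & set(c.text.lower().split()))
    return sorted(corpus, key=lambda c: c.score, reverse=True)[:k]

def rerank(query: str, chunks: list[Chunk]) -> list[Chunk]:
    # Stand-in for a cross-encoder reranker; here it just re-sorts.
    return sorted(chunks, key=lambda c: c.score, reverse=True)

def generate_grounded(question: str, chunks: list[Chunk]) -> dict:
    # Stand-in for grounded generation: answer only from retrieved
    # context, surfacing sources and a crude confidence signal.
    context = [c for c in chunks if c.score > 0]
    if not context:
        return {"draft": None, "sources": [], "confidence": 0.0}
    return {
        "draft": context[0].text,
        "sources": [c.source for c in context],
        "confidence": min(1.0, context[0].score / 5),
    }

def answer_ticket(question: str, corpus: list[Chunk]) -> dict:
    query = rewrite_query(question)
    candidates = retrieve(query, corpus)
    ranked = rerank(query, candidates)
    return generate_grounded(question, ranked)
```

The point of the shape is that generation never sees the raw corpus, only the reranked context, which is what makes citations and the confidence score possible on every draft.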

/ Case study

The story.
Start to ship.

01/ Challenge

The problem.

Mid-stage SaaS support teams are drowning in repetitive tier-1 tickets — password resets, billing FAQs, integration steps — while real product issues sit in the queue. Off-the-shelf chatbots either hallucinate confidently or refuse to engage. Teams want AI that drafts grounded answers, cites sources, and lets a human approve before sending.

02/ Approach

How we built it.

We designed Helix as an agent-assist tool, not a customer-facing bot. The pipeline is multi-step: a small model rewrites the user query for retrieval, pgvector returns top candidates from the company's docs and past resolved tickets, a reranker scores them, and Claude generates an answer grounded only in retrieved context — with an explicit refusal path when retrieval comes back weak. Every draft surfaces its sources and a confidence score so agents trust what they're sending. We built per-tenant ingestion, an eval harness that flags answer-quality drift, and tight token-cost guardrails.
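The "explicit refusal path" above can be sketched as a simple gate: if the reranker's best score falls below a threshold, no draft is produced and the ticket is routed to a human instead of risking a confident hallucination. The threshold value and names here are assumptions for illustration, not Helix's real parameters.

```python
REFUSAL_THRESHOLD = 0.35  # assumed value, for illustration only

def draft_or_refuse(best_rerank_score: float, draft: str) -> dict:
    # Weak retrieval means nothing trustworthy to ground on: refuse,
    # and hand the ticket straight to a human agent.
    if best_rerank_score < REFUSAL_THRESHOLD:
        return {
            "status": "needs_human",
            "draft": None,
            "reason": "retrieval too weak to ground an answer",
        }
    return {"status": "draft_ready", "draft": draft}
```

Since agents approve every reply anyway, the refusal path costs nothing in coverage; it only keeps ungrounded text out of the review queue.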

03/ Outcome

What shipped.

Helix is in production beta with three SaaS partners. Internal tests on partners' historical tickets show meaningful reductions in agent draft time and a measurable quality lift over single-shot LLM baselines. Helix also drives our own studio support workflow, which is how we keep dogfooding it. Open beta planned for late 2026.

/ The work

What we built.

Key features

  • RAG pipeline with query rewriting & reranking
  • Source citations + confidence scoring on every reply
  • Intercom, Zendesk, and Slack integrations
  • Per-tenant data isolation and cost controls
  • Eval harness for drift monitoring
  • Human-in-the-loop review dashboard
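One way the per-tenant isolation above can look at the retrieval layer with pgvector: every similarity query is scoped by tenant, so one customer's tickets can never surface in another customer's drafts. The table and column names below are illustrative assumptions, not Helix's actual schema; `<=>` is pgvector's cosine-distance operator.

```python
# Parameterized pgvector query, scoped by tenant_id on every call.
QUERY = """
    SELECT source, content, embedding <=> %(query_vec)s AS distance
    FROM chunks
    WHERE tenant_id = %(tenant_id)s
    ORDER BY distance
    LIMIT %(k)s
"""

def build_params(tenant_id: str, query_vec: list[float], k: int = 8) -> dict:
    # Binding tenant_id as a query parameter (rather than interpolating
    # it) keeps the isolation boundary in one place and injection-safe.
    return {"tenant_id": tenant_id, "query_vec": query_vec, "k": k}
```

Keeping the tenant filter inside the one retrieval query, instead of trusting each caller to filter, is what makes the isolation a property of the layer rather than a convention.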

Tech stack

Anthropic Claude · OpenAI · pgvector · Next.js · Supabase · TypeScript · Python

Let’s build

You have an idea.
We’ll have it shipping by next month.

Book a free 30-minute call. No pitch, no pressure. You’ll walk away with a scope, a timeline, and a clear number — or an honest referral.

Reply time: under 12 hours · New Delhi, India · Serving globally

WhatsApp · Telegram