Chattr

Open-Source, Self-Hosted Support Chatbot

An open-source, self-hosted support chatbot for docs, support, and customer questions on any website. Add a single script tag, connect your content, and launch a branded assistant with RAG, guardrails, and multi-tenant control.

Chattr logo mark and wordmark

Project Overview

Role: Full-Stack Engineer

Duration: Ongoing

Team: Solo Project

Year: 2026

GitHub: View Code

Blog Post: Read Article

Technologies Used

TypeScript, Hono, Vercel AI SDK, SQLite, sqlite-vec, better-sqlite3, Zod, Cheerio, Docker, Turborepo

Project Details

I built Chattr to explore what it actually takes to ship a practical support chatbot for real websites, not just a polished demo. The project combines a Hono API, an embeddable vanilla TypeScript widget, retrieval powered by SQLite and sqlite-vec, configurable guardrails, and a multi-tenant architecture so one deployment can power multiple branded assistants. Teams can scrape their site or ingest documents, stream grounded answers with source links, localize the experience in English or Dutch, and self-host the stack with their preferred model provider, including Ollama for fully local setups.
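
As a concrete sketch of the ingestion side, a chunker along these lines splits scraped or uploaded text into overlapping windows before embedding. The `chunkText` name and parameters are illustrative assumptions, not Chattr's actual API:

```typescript
// Hypothetical sketch: chunkText and its defaults are illustrative, not Chattr's real API.
// Splits raw page text into overlapping chunks before each chunk is embedded.
function chunkText(text: string, maxLen = 400, overlap = 50): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + maxLen, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // keep an overlap so sentences aren't lost at chunk boundaries
  }
  return chunks;
}
```

The overlap is a common trade-off: it slightly increases storage but keeps context that straddles a chunk boundary retrievable from either side.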

Results & Impact

  1. Released Chattr

    As an open-source, MIT-licensed, self-hosted support chatbot for docs, support, and customer questions

  2. Shipped a one-script embeddable chat widget

    With tenant-aware branding, starter questions, and streamed responses

  3. Built retrieval on SQLite and sqlite-vec

    Avoiding a separate vector database while keeping self-hosting lightweight

  4. Designed a multi-tenant architecture

    With isolated databases, prompts, guardrails, escalation flows, and allowed origins per tenant

  5. Built sitemap-driven scraping

    And local document ingestion to turn existing site content into a searchable knowledge base quickly

  6. Implemented guardrails

    For prompt-injection detection, topic controls, rate limiting, output filtering, and system-prompt leak prevention

  7. Improved support reliability

    With confidence scoring, source citations, follow-up suggestions, and safe handoff flows for low-confidence answers

  8. Supported multiple model providers

    Including OpenAI, Anthropic, Azure OpenAI, and Ollama, with fully local setups

  9. Streamlined adoption

    With onboarding and Docker-based self-hosting flows that take teams from install to working assistant quickly
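
The retrieval result above can be pictured with a small in-memory sketch of the nearest-neighbor lookup that sqlite-vec performs inside SQLite. The `Doc` shape and `topK` helper here are assumptions for illustration, not Chattr's real schema:

```typescript
// Illustrative in-memory sketch of vector retrieval; in Chattr the equivalent
// ranking runs inside SQLite via sqlite-vec rather than an in-process sort.
type Doc = { id: number; url: string; embedding: number[] };

// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks by similarity to the query embedding and keep the top k.
function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

Keeping this lookup inside SQLite is what lets the stack skip a dedicated vector database: the application never has to load every embedding into memory.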

Challenge to Solution

  1. Challenge

    Build an embeddable support chatbot that is genuinely production-friendly: easy to launch with a single script tag, grounded in real site content, safe enough for customer-facing use, flexible across model providers, and able to serve multiple brands from one deployment without content or configuration leaking between tenants.

  2. Solution

    Chattr uses a config-driven multi-tenant architecture where each tenant gets its own SQLite vector database, system prompt, branding, escalation settings, allowed origins, and guardrails.

    Content can be ingested from local files or sitemap-driven scraping, chunked and embedded into sqlite-vec, then retrieved with reranking, confidence scoring, and source deduplication at chat time. The server streams answers through the Vercel AI SDK, while input and output guardrails handle prompt-injection checks, topic restrictions, rate limits, content filtering, and system-prompt leak detection. A lightweight widget bootstraps from the server, inherits tenant branding, supports starter questions and feedback, and can be embedded on any site with one script tag.
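
The input-side guardrails described above can be sketched as follows. The patterns and limits are illustrative placeholders, not Chattr's shipped rules:

```typescript
// Hedged sketch of an input guardrail pipeline; patterns and limits are
// placeholders for illustration, not Chattr's actual configuration.
const INJECTION_PATTERNS = [
  /ignore (all )?previous instructions/i,
  /reveal (your )?system prompt/i,
  /you are now/i,
];

// Cheap pattern-based first pass for prompt-injection attempts.
function looksLikeInjection(message: string): boolean {
  return INJECTION_PATTERNS.some((p) => p.test(message));
}

// Minimal fixed-window rate limiter, keyed per tenant + client.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count++;
    return entry.count <= this.limit;
  }
}
```

A pattern pass like this is only a first filter; output-side checks (content filtering, system-prompt leak detection) still run on the model's response before it is streamed to the widget.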

Product in Use

Three Chattr widget screens showing starter questions, a grounded support answer, and suggested next steps with source links
Three-screen widget showcase covering onboarding, grounded answers, and follow-up support actions

Key Features

  1. Embed on any site

    With a single script tag

  2. Scrape websites or ingest local files

    Into a searchable knowledge base

  3. Stream grounded answers

    With source links and retrieval confidence

  4. Support tenant-specific branding

    Prompts, escalation flows, and guardrails

  5. Run multiple branded assistants

    From one deployment with isolated SQLite databases

  6. Include prompt-injection detection

    Rate limiting, topic restrictions, and output filtering

  7. Support English and Dutch

    With localized UI and response guidance in both languages

  8. Collect answer feedback

    With thumbs up/down and reason tracking

  9. Work with multiple providers

    Including OpenAI, Anthropic, Azure OpenAI, and Ollama

  10. Self-host

    With Docker and use onboarding flows for quick setup
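
The grounded-answer features above (source links, retrieval confidence, safe handoff) can be sketched as a small post-retrieval step: dedupe sources per URL and escalate when even the best match is weak. The `Hit` shape and threshold are illustrative assumptions:

```typescript
// Illustrative post-retrieval step; the Hit shape and the 0.55 threshold
// are assumptions for this sketch, not Chattr's actual values.
type Hit = { url: string; score: number }; // score in [0, 1], higher = closer match

// Keep only the best-scoring hit per source URL, highest first.
function dedupeSources(hits: Hit[]): Hit[] {
  const best = new Map<string, Hit>();
  for (const hit of hits) {
    const seen = best.get(hit.url);
    if (!seen || hit.score > seen.score) best.set(hit.url, hit);
  }
  return [...best.values()].sort((a, b) => b.score - a.score);
}

// Hand off to a human when even the strongest match is below the threshold.
function shouldEscalate(hits: Hit[], minConfidence = 0.55): boolean {
  return hits.length === 0 || Math.max(...hits.map((h) => h.score)) < minConfidence;
}
```

Deduplicating before citing keeps the widget's source list short even when several chunks of the same page match, and the escalation check is what turns a weak retrieval into a handoff instead of a guess.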