
Ops Agent
WhatsApp-First Personal Operations Agent
A personal experiment in building an AI agent from scratch, inspired by all the OpenClaw hype. Delivers a daily morning brief to WhatsApp and stays live as a conversational assistant for on-demand queries, tasks, and free-form AI chat.
Project Details
With AI agents everywhere in the news and tools like OpenClaw getting a lot of attention, I wanted to actually understand how this stuff works by building one myself. The result is a self-hosted Python agent that sends a daily 07:00 brief to WhatsApp with weather, calendar, inbox, GitHub, crypto, and news, and stays live between briefs as a conversational assistant. You can ask for your agenda, add tasks, query your inbox, or just ask a free-form question in Dutch. Everything runs on my own VPS, data stays in a local SQLite file, and nothing passes through a third-party platform.
Results & Impact
A conversational command router that handles 10+ structured Dutch commands and falls through to free-form GPT-4o-mini for unmatched queries
A connector abstraction (BaseConnector) with health reporting, used by the Gmail, Calendar, GitHub, WhatsApp, shell, and weather connectors
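A minimal sketch of what such a base class could look like. The class name BaseConnector comes from the project; the method names, the health-report shape, and the WeatherConnector stub are assumptions for illustration, not the project's actual code:

```python
from abc import ABC, abstractmethod

class BaseConnector(ABC):
    """Shared interface all connectors implement (sketch)."""
    name: str = "base"

    @abstractmethod
    def fetch(self) -> dict:
        """Return this connector's data for the brief."""

    def health(self) -> dict:
        """Report whether the connector is currently usable."""
        try:
            self.fetch()
            return {"connector": self.name, "ok": True}
        except Exception as exc:
            return {"connector": self.name, "ok": False, "error": str(exc)}

class WeatherConnector(BaseConnector):
    name = "weather"

    def fetch(self) -> dict:
        # A real implementation would call a weather API; stubbed here.
        return {"temp_c": 12, "summary": "cloudy"}
```

Centralizing health reporting in the base class means the scheduler can poll every connector the same way before composing a brief.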
A four-mode policy engine (observe, draft, act_with_approval, trusted_auto) gating all writes and shell commands
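The four mode names come from the project; how each mode maps to a decision is my reading of the descriptions here, sketched as a single gating function:

```python
from enum import Enum

class Mode(Enum):
    OBSERVE = "observe"
    DRAFT = "draft"
    ACT_WITH_APPROVAL = "act_with_approval"
    TRUSTED_AUTO = "trusted_auto"

def decide(mode: Mode, action: str) -> str:
    """Gate a write or shell action by operating mode.

    Returns one of 'deny', 'draft', 'queue', 'allow' (assumed vocabulary).
    """
    if mode is Mode.OBSERVE:
        return "deny"   # read-only: no writes or shell commands at all
    if mode is Mode.DRAFT:
        return "draft"  # compose output but never send or execute
    if mode is Mode.ACT_WITH_APPROVAL:
        return "queue"  # hold the action as a pending approval
    return "allow"      # trusted_auto: execute directly
```

Routing every write through one function like this keeps the trust boundary in a single, auditable place.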
An approvals API with GET /approvals, POST /approvals/{id}/approve, and POST /approvals/{id}/deny endpoints
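The store those endpoints wrap might look something like this in-memory sketch (the real project presumably persists to SQLite; class and field names here are assumptions):

```python
import itertools

class ApprovalQueue:
    """In-memory sketch of the human-in-the-loop approval store."""

    def __init__(self) -> None:
        self._ids = itertools.count(1)
        self.pending: dict[int, dict] = {}

    def submit(self, action: str) -> int:
        """Queue an action and return its approval id."""
        approval_id = next(self._ids)
        self.pending[approval_id] = {"action": action, "status": "pending"}
        return approval_id

    def approve(self, approval_id: int) -> dict:
        item = self.pending[approval_id]
        item["status"] = "approved"
        return item

    def deny(self, approval_id: int) -> dict:
        item = self.pending[approval_id]
        item["status"] = "denied"
        return item
```

GET /approvals would then list `pending`, and the two POST endpoints would call `approve` and `deny` by id.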
Composed morning briefs with GPT-4o-mini, using a Dutch-language prompt capped at 300 words and ending with three priority actions
A plain-text fallback renderer so the scheduler always produces output, even without an OpenAI key
OAuth 2.0 with a browser-based consent flow for Gmail and Google Calendar, storing tokens in SQLite
Deployed to an Ubuntu VPS with two systemd services (API + scheduler) and an nginx reverse proxy with HTTPS via certbot
Registered a Meta WhatsApp webhook for inbound message handling, dispatching replies in background tasks so the endpoint returns 200 immediately
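The project does this with FastAPI background tasks; the same acknowledge-fast, work-later pattern can be sketched with a plain thread (the payload shape and reply handling here are illustrative assumptions):

```python
import threading
import time

def handle_webhook(payload: dict, replies: list) -> int:
    """Acknowledge the webhook immediately; do the slow work in a thread."""

    def worker() -> None:
        time.sleep(0.1)  # stand-in for connector/LLM latency
        replies.append(f"reply to: {payload['text']}")

    threading.Thread(target=worker, daemon=True).start()
    return 200  # Meta retries deliveries that don't get a prompt 200
```

Returning 200 before the reply is composed is what keeps Meta from re-delivering the same message while a slow LLM call is in flight.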
Configured a shell policy with an explicit allowlist of safe commands and a blocklist of destructive ones
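A minimal sketch of that check, with example command lists (the actual allowlist and blocklist contents are assumptions; only the allowlist/blocklist design comes from the project):

```python
import shlex

ALLOWLIST = {"ls", "df", "uptime", "git"}      # assumed safe examples
BLOCKLIST = {"rm", "dd", "mkfs", "shutdown"}   # assumed destructive examples

def shell_allowed(command: str) -> bool:
    """Permit a command only if its binary is explicitly allowlisted."""
    tokens = shlex.split(command)
    if not tokens:
        return False
    binary = tokens[0]
    return binary in ALLOWLIST and binary not in BLOCKLIST
```

Keeping both lists explicit (deny-by-default, with a blocklist as a second guard) means a typo in the allowlist fails closed rather than open.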
Challenge to Solution
What had to be solved
The main thing I wanted to understand was how to build an agent that feels responsive rather than sluggish.
Tools like OpenClaw add managed infrastructure between you and the answer, which works fine for a once-daily brief but feels slow for anything conversational. Building it myself meant I could keep the stack lean and the latency low.
How it came together
The conversation.py module is a priority-ordered router: structured Dutch commands (brief, weer, agenda, inbox, followups, taken, taak, klaar) resolve directly to connector calls with no LLM cost or latency; anything unmatched falls to a GPT-4o-mini call with a system prompt constrained to plain text, Dutch, and under 200 words.
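The routing logic above can be sketched as follows; the command names come from the project, but the handlers are stubs and the dispatch details are my assumptions:

```python
def route(message: str, llm=None) -> str:
    """Priority-ordered router: structured commands first, LLM fallback last."""
    text = message.strip().lower()
    # Structured Dutch commands resolve directly to connector calls
    # (three of the project's commands stubbed here for illustration).
    handlers = {
        "weer": lambda: "12°C, bewolkt",       # weather connector (stubbed)
        "agenda": lambda: "2 meetings today",  # calendar connector (stubbed)
        "taken": lambda: "1 open task",        # task store (stubbed)
    }
    for command, handler in handlers.items():
        if text == command or text.startswith(command + " "):
            return handler()  # no LLM cost or latency for matched commands
    # Anything unmatched falls through to the LLM (passed in as a callable).
    return llm(message) if llm else "(no LLM configured)"
```

Checking structured commands before ever touching the LLM is what keeps the common queries fast and free.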
The morning brief follows the same pattern but uses all six connectors in parallel before composing. A plain-text fallback renderer means the scheduler always produces output even if OpenAI is unreachable. The FastAPI approvals API exposes a human-in-the-loop queue for writes and shell commands. Two systemd services (API + scheduler) on an Ubuntu VPS with nginx and certbot handle deployment and uptime.
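A sketch of the fan-out and the plain-text fallback, assuming an async connector interface (the function names and error handling here are illustrative, not the project's code):

```python
import asyncio

async def fetch_all(connectors: dict) -> dict:
    """Query all connectors concurrently; a failed one becomes an error note."""

    async def safe(name, make_coro):
        try:
            return name, await make_coro()
        except Exception as exc:
            return name, f"unavailable ({exc})"

    results = await asyncio.gather(*(safe(n, c) for n, c in connectors.items()))
    return dict(results)

def render_plain(sections: dict) -> str:
    """Plain-text fallback brief for when the LLM composer is unreachable."""
    return "\n".join(f"{name}: {value}" for name, value in sections.items())
```

Because each connector call is wrapped individually, one failing source degrades its own section instead of sinking the whole brief.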
Key Features
Delivers a daily 07:00 morning brief to WhatsApp with weather, meetings, inbox, GitHub items, crypto, and news
GPT-4o-mini composes a concise Dutch brief under 300 words, ending with three priority actions
Conversational assistant available any time via WhatsApp: ask for weather, agenda, inbox, or follow-ups on demand
Task manager built into WhatsApp: add tasks with 'taak:', list with 'taken', mark done with 'klaar'
Free-form AI chat for any question not matched by a command, grounded in the same connector data
Policy engine enforces four operating modes, from read-only to trusted auto-send
Any write or shell action outside the trust boundary is queued as a pending approval
FastAPI approvals API lets you approve or deny queued actions from anywhere
Typer CLI for running briefs, initializing the database, and managing the scheduler
OAuth 2.0 flow for Gmail and Google Calendar with token persistence in SQLite
All data stays local in a single SQLite file on your own VPS, no managed platform required
Plain-text fallback renderer so the scheduler always produces output, even without an OpenAI key