I built Tend to maximize the number of projects I can work on simultaneously without burning out on the overhead between them: context-switching, checking who needs me, keeping them all stoked wherever they are. One board shows every agent across every project: who's working, who's done, who needs me.
Why Tend exists
For developers running 2+ AI agents across projects.
Running multiple AI agents simultaneously is the new normal. Some finish in minutes, others run for hours. The problem isn't the agents — it's knowing when each one needs you without that knowledge becoming a second job.
Dashboards are a permanent invitation to break focus. Notification badges are interrupts. They add vigilance, not concentration.
Tend uses a different model: you glance at the board when you're ready, not when a badge demands it.
The insight
The shell prompt indicator updates after every command you run. It's already in your visual field. When it says ○, nothing needs you. The uncertainty that drives compulsive checking is gone.
1. Finish a turn.
You hit enter. The agent is working. You have a natural gap.
2. Glance at the prompt.
It says ○. Nothing needs you. Stay focused where you are.
3. Or it says ?2 ◐3.
2 agents need you, 3 working. Type td. 3-second scan. Route yourself.
Get started
td init creates the event log, installs agent hooks for both Copilot and Claude Code, and writes the protocol to AGENTS.md. Run td from anywhere to see your board.
macOS and Linux only. Windows users: install WSL first, then run the install command from a WSL terminal.
AI insights
Every time an agent reports state, Tend reads its recent event trail, project README, and TODO backlog — then generates two lines: what's happening and what's likely next. No dashboards to configure. No prompts to write. It just shows up on your board.
Without insights
Raw event messages. You parse the meaning.
With insights
Context + trajectory. You route yourself instantly.
Reads the trail
Last 25 events, project README, and pending TODOs. The model understands what your project is and what the agent has been doing.
Predicts next
Infers the likely action from work trajectory — not the TODO list. If the agent is debugging auth, it predicts “run tests” — not an unrelated backlog item.
Costs nothing
~$0.00005 per insight via OpenRouter. Only fires on state changes, not on views. Content-hashed: if nothing changed, no call is made. Enabled automatically on the hosted relay.
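The content-hash gate described above can be sketched in a few lines. This is an illustrative sketch, not Tend's actual implementation: the function name, cache shape, and hash choice are assumptions.

```python
import hashlib

def insight_needed(events, readme, todos, cache):
    """Return True only when the insight inputs changed since the last call.

    `cache` is a dict holding the last content hash (hypothetical shape;
    Tend's real cache layout isn't documented here).
    """
    # Hash exactly what the insight is generated from: the last 25
    # events, the README, and the pending TODOs.
    blob = "\n".join(events[-25:]) + readme + "\n".join(todos)
    digest = hashlib.sha256(blob.encode()).hexdigest()
    if cache.get("insight_hash") == digest:
        return False  # nothing changed: skip the LLM call entirely
    cache["insight_hash"] = digest
    return True
```

Because the gate keys on content rather than time, repeated views of an unchanged board cost nothing.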
Backlog
You're deep in one project and think of something for another. Don't context-switch to write it down. Type it from where you are. The agent picks it up on its next session.
Plain text. Committed to the repo. No app to open, no board to drag. Just lines in a file that agents read automatically.
The relay
One command gives you a live web board at relay.tend.cx. Check on your agents from your phone, another machine, or share the link with your team. Every board also exposes a structured /llms.txt endpoint — so other agents can read your board and take action on your behalf.
Live web board
See all your agents from any browser. Auto-refreshes every 60 seconds. No login, no app — just your token in the URL.
AI insights
Each project gets a terse summary and next-action prediction, generated from the event trail, README, and TODO list. Powered by OpenRouter. Enabled automatically on the hosted relay. Self-hosted: add your API key as a Worker secret.
Agent-readable
Every board serves /llms.txt — structured Markdown that orchestrator agents can fetch to triage, route, or act on stuck projects.
No accounts. One token.
Run td relay setup once. Commit the token or set it as an env var. Local and remote agents on one board.
Relay API
Base URL: https://relay.tend.cx. All /v1/* routes require a Bearer tnd_... token (except register). Responses are JSON.
Create a new relay token. Returns { token: "tnd_..." }
Emit a state event. Body: { project, state, message?, session_id?, timestamp? }
States: working · done · stuck · waiting · idle
Fetch event history. Query params: since, limit
List all projects with events for this token.
Create a TODO. Body: { message, project? }
List TODOs. Query params: status, project
Update status. Body: { status, issue_url? }. Flow: pending → dispatched → done
Delete a TODO.
Create a read-only board token (tnb_...) for sharing.
HTML board view. Works with tnd_ or tnb_ tokens in the URL.
Structured Markdown for agents. Project states, messages, insights, and backlog.
LLM-generated summaries and next-action predictions for all projects.
Insight for a single project. Returns summary, prediction, and cache timestamp.
Set a sticky note. Body: { note }. Overrides AI prediction on the board. Cleared on next emit.
List all sticky notes for this token.
Clear a sticky note.
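A client call against the relay can be sketched from the conventions above. Only the base URL, the Bearer token scheme, and the event body shape come from this document; the /v1/events route name is an assumption for illustration. The sketch builds the request without sending it:

```python
import json
import urllib.request

BASE = "https://relay.tend.cx"

def emit_request(token, project, state, message=None):
    # Build (don't send) an emit call. Body shape and Bearer auth are
    # from the docs; the /v1/events path is an assumed route name.
    body = {"project": project, "state": state}
    if message is not None:
        body["message"] = message
    return urllib.request.Request(
        f"{BASE}/v1/events",  # assumed route
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Swapping in a tnb_ read-only token would work only for the board and read endpoints, per the sharing rules above.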
Performance
The shell prompt indicator reads from a local cache file and returns immediately. Background refresh happens in a detached process — you never wait for it.
Cache-first
Prompt reads a local file. Computation happens in a detached background process for the next prompt.
200ms timeout
If the binary ever hangs, the shell kills it. After 3 failures, the hook auto-disables for the session.
No network
td status never contacts the relay. Network calls only on explicit board or relay commands.
No daemon
No background process, no watcher, no polling. The binary runs, prints, and exits.