Key flows, patterns, and design choices — without the wall of boxes.
The model is the least reliable component in the system. The job of the architecture is to make its output grounded, durable, and safe.
Answers cite data, documents, or explicit tool results. If we can’t point to evidence, we treat it as a hypothesis.
Runs survive refreshes, retries, and worker restarts. The UI is a subscriber, not the source of truth.
Write operations require explicit confirmation and produce an audit trail. “No invisible side effects.”
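A minimal sketch of the confirmation gate, assuming a hypothetical ExecuteWrite helper and audit schema (the names are illustrative, not the project's actual API). The point is that the audit record is written whether or not the write proceeds:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// AuditEntry records every attempted write. Illustrative schema only.
type AuditEntry struct {
	Tool      string
	Args      string
	Confirmed bool
	At        time.Time
}

var auditLog []AuditEntry

var ErrNeedsConfirmation = errors.New("write requires explicit confirmation")

// ExecuteWrite refuses to act without confirmed=true and always appends
// an audit record, so there are no invisible side effects.
func ExecuteWrite(tool, args string, confirmed bool) error {
	auditLog = append(auditLog, AuditEntry{tool, args, confirmed, time.Now()})
	if !confirmed {
		return ErrNeedsConfirmation
	}
	// ... perform the side effect here ...
	return nil
}

func main() {
	err := ExecuteWrite("update_record", `{"id":42}`, false)
	fmt.Println(err, len(auditLog))
}
```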
We’ll keep one diagram and reuse it. Each flow just highlights different edges.
Tool results have to fit inside a context window. We treat “too much data” as an interface bug. The fix is structural: bounded results + refinement hints + UI-only artifacts.
Small, deterministic summaries and capped lists. If truncated, the response carries a refinement hint.
Artifacts (“sidecars”) for charts and structured displays. Artifacts are not sent to the model.
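The bounded-result shape above can be sketched as a small wrapper. This is an assumption about the envelope, not the project's actual type; the key properties are the hard cap and the refinement hint that rides along when truncation happens:

```go
package main

import "fmt"

// BoundedResult caps what the model sees. Truncated plus Hint tell it
// how to refine the query instead of paging through everything.
// Field names are illustrative.
type BoundedResult struct {
	Items     []string
	Total     int
	Truncated bool
	Hint      string
}

// Bound caps a result list at max items and attaches a refinement hint
// only when it had to truncate.
func Bound(items []string, max int, hint string) BoundedResult {
	r := BoundedResult{Items: items, Total: len(items)}
	if len(items) > max {
		r.Items = items[:max]
		r.Truncated = true
		r.Hint = hint
	}
	return r
}

func main() {
	rows := []string{"a", "b", "c", "d", "e"}
	r := Bound(rows, 3, "add a date filter to narrow the result set")
	fmt.Println(len(r.Items), r.Total, r.Truncated, r.Hint)
}
```

Note that Total still reports the real count, so the model knows how much it did not see.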
Tools are deterministic interfaces (fetch/act) with typed schemas. Agents are planners with policies, memory, and tool access.
chat-agent orchestrates. Specialists are invoked as tools to keep responsibilities crisp and runs traceable.
Goa types + generated schemas enforce the boundary. We prefer loud failures over silent drift.
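One way to picture "loud failures over silent drift", sketched here with a hand-written payload type and stdlib JSON decoding rather than the actual Goa-generated code: unknown fields are rejected instead of silently dropped, so a schema mismatch surfaces immediately.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// FetchArgs is a stand-in for a generated payload type; in the real
// project these come from the Goa design DSL.
type FetchArgs struct {
	Query string `json:"query"`
	Limit int    `json:"limit"`
}

// DecodeArgs fails loudly on unknown fields instead of ignoring them,
// so drift between caller and schema is an error, not a silent default.
func DecodeArgs(raw []byte) (FetchArgs, error) {
	var args FetchArgs
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	if err := dec.Decode(&args); err != nil {
		return FetchArgs{}, err
	}
	return args, nil
}

func main() {
	_, err := DecodeArgs([]byte(`{"query":"q","limt":5}`)) // typo: "limt"
	fmt.Println(err != nil)                                // drift fails loudly
}
```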
Run a chat turn, then reconnect the SSE stream from a prior from_event_id. This proves durability and observability in one move.
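The reconnect works because the run is an append-only event log with monotonically increasing IDs; resuming is just replaying everything after the last ID the client saw. A minimal sketch, with illustrative event names:

```go
package main

import "fmt"

// Event is one entry in an append-only run log; ID increases monotonically.
type Event struct {
	ID   int
	Data string
}

// Replay returns every event after fromID, which is what an SSE
// reconnect carrying from_event_id (or Last-Event-ID) would stream.
func Replay(log []Event, fromID int) []Event {
	var out []Event
	for _, e := range log {
		if e.ID > fromID {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	log := []Event{{1, "turn.start"}, {2, "tool.call"}, {3, "turn.end"}}
	for _, e := range Replay(log, 1) { // client last saw event 1
		fmt.Println(e.ID, e.Data)
	}
}
```

Because the UI is only a subscriber, a refresh loses nothing: the server replays from the durable log.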
Trigger Ada from a chat turn and show child-run events in Pulse. Great moment to talk about boundaries and policies.
Upload an image + a PDF. Then show vision grounding vs citation-backed retrieval.
Start a seeded task run and watch step progress events stream. It feels like “chat”, but behaves like a workflow.
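The step-progress stream can be sketched as a runner that emits one event per state transition; the emitter callback stands in for the real SSE publisher, and the event shape is an assumption:

```go
package main

import "fmt"

// StepEvent is an illustrative progress event for a task run.
type StepEvent struct {
	Step   int
	Total  int
	Status string
}

// RunTask executes steps in order and emits an event on each transition,
// so the UI can render workflow progress while the chat stays live.
func RunTask(steps []string, emit func(StepEvent)) {
	total := len(steps)
	for i, s := range steps {
		emit(StepEvent{Step: i + 1, Total: total, Status: "running: " + s})
		// ... do the step's actual work here ...
		emit(StepEvent{Step: i + 1, Total: total, Status: "done: " + s})
	}
}

func main() {
	RunTask([]string{"fetch", "summarize"}, func(e StepEvent) {
		fmt.Printf("[%d/%d] %s\n", e.Step, e.Total, e.Status)
	})
}
```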