The previous two posts covered how events flow from the SDK to the UI. This post focuses on visualizing one specific type of event: tool calls. Tool invocations are the most frequent operations in an Agent application. A typical task might call tools twenty or thirty times—reading files, writing files, executing commands, searching code. If every tool call renders as the same gray block, it's hard to tell at a glance what the agent actually did.
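One way to make tool calls distinguishable at a glance is to give each tool kind its own icon and tint. This is a minimal sketch of that idea, not the post's actual implementation; the names `ToolCallStyle` and `ToolCallRow` (and the specific tool names) are illustrative assumptions:

```swift
import SwiftUI

// Hypothetical sketch: map each tool name to a distinct SF Symbol
// and tint, instead of rendering every call as the same gray block.
struct ToolCallStyle {
    let icon: String   // SF Symbol name
    let tint: Color

    static func style(for toolName: String) -> ToolCallStyle {
        switch toolName {
        case "Read":  return .init(icon: "doc.text", tint: .blue)
        case "Write": return .init(icon: "pencil", tint: .orange)
        case "Bash":  return .init(icon: "terminal", tint: .green)
        case "Grep":  return .init(icon: "magnifyingglass", tint: .purple)
        default:      return .init(icon: "wrench", tint: .gray)
        }
    }
}

// One row per tool call: icon plus a short summary line.
struct ToolCallRow: View {
    let toolName: String
    let summary: String

    var body: some View {
        let style = ToolCallStyle.style(for: toolName)
        Label(summary, systemImage: style.icon)
            .foregroundStyle(style.tint)
    }
}
```

Centralizing the mapping in one `style(for:)` switch keeps the row view dumb: adding a new tool kind touches a single function rather than every call site.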
Post 1 covered how AgentBridge converts the SDK's AsyncStream<SDKMessage> into [AgentEvent]. This post looks at what [AgentEvent] becomes — how TimelineView renders 18 event types, handles scroll behavior, and stays smooth when the event count gets large. TimelineView is the main body of the workspace, filling all the space between the sidebar and the input box. Its view hierarchy is shallow.
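The shape of that rendering can be sketched as one enum case per event type and one switch in the row builder. This is a trimmed, hypothetical sketch, not the app's actual code: the real AgentEvent has 18 cases, and only three illustrative ones appear here.

```swift
import SwiftUI

// Hypothetical, trimmed-down version of the event model:
// one enum case per event type the timeline can show.
enum AgentEvent: Identifiable {
    case assistantText(id: UUID, text: String)
    case toolCall(id: UUID, name: String)
    case error(id: UUID, message: String)

    var id: UUID {
        switch self {
        case .assistantText(let id, _),
             .toolCall(let id, _),
             .error(let id, _):
            return id
        }
    }
}

struct TimelineView: View {
    let events: [AgentEvent]

    var body: some View {
        ScrollView {
            // LazyVStack only builds rows near the viewport,
            // which keeps rendering cheap as the event count grows.
            LazyVStack(alignment: .leading, spacing: 8) {
                ForEach(events) { event in
                    switch event {
                    case .assistantText(_, let text):
                        Text(text)
                    case .toolCall(_, let name):
                        Label(name, systemImage: "wrench")
                    case .error(_, let message):
                        Text(message).foregroundStyle(.red)
                    }
                }
            }
        }
    }
}
```

The flat switch is what keeps the hierarchy shallow: every event type resolves to a row in the same lazy stack, with no intermediate container views.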
AI coding tools are starting to look similar on the surface: they all offer chat, agents, code edits, terminal awareness, and some form of autocomplete. But the real differences are in the workflow. The question is less “which one has AI?” and more “where does the AI live in your development process?” For me, VS Code is still the baseline. It is flexible, extensible, familiar, and easy to compose with other tools.