The previous two posts covered how events flow from the SDK to the UI. This post focuses on visualizing one specific type of event: tool calls. Tool invocations are the most frequent operations in an Agent application: a typical task might call tools twenty or thirty times to read files, write files, execute commands, and search code. If every tool call renders as the same gray block, it's hard to tell at a glance what the agent is actually doing.
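As a rough sketch of the direction (the types and view names here are illustrative, not the app's actual API), a tool-call row might switch on the tool name to pick a distinct icon and tint, with a generic fallback for tools it doesn't recognize:

```swift
import SwiftUI

// Hypothetical event shape for illustration; the real event type differs.
struct ToolCallEvent: Identifiable {
    let id = UUID()
    let toolName: String   // e.g. "Read", "Write", "Bash", "Grep"
    let summary: String    // e.g. the file path or command being run
}

struct ToolCallRow: View {
    let event: ToolCallEvent

    // Map each tool to a distinct icon and tint so calls are scannable.
    private var style: (icon: String, tint: Color) {
        switch event.toolName {
        case "Read":  return ("doc.text", .blue)
        case "Write": return ("square.and.pencil", .orange)
        case "Bash":  return ("terminal", .green)
        case "Grep":  return ("magnifyingglass", .purple)
        default:      return ("wrench", .gray)   // fallback for unknown tools
        }
    }

    var body: some View {
        HStack(spacing: 8) {
            Image(systemName: style.icon)
                .foregroundStyle(style.tint)
            Text(event.toolName)
                .font(.caption.bold())
            Text(event.summary)
                .font(.caption.monospaced())
                .lineLimit(1)
                .truncationMode(.middle)
            Spacer()
        }
        .padding(.vertical, 4)
    }
}
```

The point is not the specific icons but the principle: the tool name carries most of the meaning, so it should drive the visual treatment rather than disappear into a uniform block.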
Post 1 covered how AgentBridge converts the SDK's AsyncStream<SDKMessage> into [AgentEvent]. This post looks at what [AgentEvent] becomes: how TimelineView renders the 18 event types, handles scroll behavior, and stays smooth when the event count gets large. TimelineView is the main body of the workspace, filling all the space between the sidebar and the input box. Its view hierarchy is shallow; a sketch of the structure follows below.
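Here is a minimal sketch of what that shallow hierarchy could look like, assuming a simplified AgentEvent enum (the real type has 18 cases; only a few illustrative ones appear here). A ScrollView wrapping a LazyVStack keeps per-row cost low as the event count grows, and ScrollViewReader lets the view pin to the newest event:

```swift
import SwiftUI

// Simplified stand-in for the real AgentEvent (which has 18 cases).
enum AgentEvent: Identifiable {
    case assistantText(id: UUID, text: String)
    case toolCall(id: UUID, name: String, summary: String)
    case error(id: UUID, message: String)

    var id: UUID {
        switch self {
        case .assistantText(let id, _), .toolCall(let id, _, _), .error(let id, _):
            return id
        }
    }
}

struct TimelineView: View {
    let events: [AgentEvent]

    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                // LazyVStack only builds rows that are on screen,
                // which keeps long sessions cheap to render.
                LazyVStack(alignment: .leading, spacing: 8) {
                    ForEach(events) { event in
                        row(for: event)
                            .id(event.id)
                    }
                }
                .padding()
            }
            // Follow the newest event as it arrives.
            .onChange(of: events.count) { _ in
                if let last = events.last {
                    proxy.scrollTo(last.id, anchor: .bottom)
                }
            }
        }
    }

    @ViewBuilder
    private func row(for event: AgentEvent) -> some View {
        switch event {
        case .assistantText(_, let text):
            Text(text)
        case .toolCall(_, let name, let summary):
            Label("\(name): \(summary)", systemImage: "wrench")
                .font(.caption.monospaced())
        case .error(_, let message):
            Text(message).foregroundStyle(.red)
        }
    }
}
```

The flat structure matters for performance: each event maps to exactly one row view, so SwiftUI can diff and lay out rows independently instead of re-evaluating a deep tree every time a new event arrives.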
Maybe recognition is less about detached labels than it is about where something is, what surrounds it, and how that context gets reinforced. That is what pushed me to build this repo. I wanted a smaller problem that still felt close to the thing I cared about. The question that kept pulling me back was whether location and context are a big part of knowing something at all, and whether they help more than labels alone ever could.