The Problem with AI Terminals Today

Every AI terminal tool works the same way: you describe what you want, the AI suggests a command, you copy it, alt-tab, paste it, run it, check the output, alt-tab back, describe the next thing... rinse and repeat. Every context switch carries a cognitive cost, and when you are debugging a production issue at 2 AM, those seconds add up. WinkTerm takes a different approach.
The previous two posts covered how events flow from the SDK to the UI. This post focuses on visualizing one specific type of event: tool calls. Tool invocations are the most frequent operations in an Agent application. A typical task might call tools twenty or thirty times: reading files, writing files, executing commands, searching code. If every tool call renders as the same gray block, it's hard to tell them apart.
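The remedy the post is building toward can be sketched as a lookup from tool name to display metadata, so each kind of call gets its own icon and tint. A minimal sketch, assuming hypothetical names (`ToolStyle` and the specific tool names are illustrations, not the post's actual API):

```swift
// Hypothetical sketch: ToolStyle and the tool names are assumptions,
// not the post's actual types. The idea: give each tool kind its own
// icon and tint so twenty calls don't all render as the same gray block.
struct ToolStyle: Equatable {
    let icon: String   // SF Symbol name
    let tint: String   // color token
}

func style(forTool name: String) -> ToolStyle {
    switch name {
    case "Read":  return ToolStyle(icon: "doc.text", tint: "blue")
    case "Write": return ToolStyle(icon: "square.and.pencil", tint: "orange")
    case "Bash":  return ToolStyle(icon: "terminal", tint: "green")
    case "Grep":  return ToolStyle(icon: "magnifyingglass", tint: "purple")
    default:      return ToolStyle(icon: "wrench", tint: "gray") // unknown tools stay gray
    }
}
```

A row view can then pick its icon and color from `style(forTool:)` instead of branching inline, which keeps the per-tool styling in one place.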
Post 1 covered how AgentBridge converts the SDK's AsyncStream<SDKMessage> into [AgentEvent]. This post looks at what [AgentEvent] becomes: how TimelineView renders 18 event types, handles scroll behavior, and stays smooth when the event count gets large. TimelineView is the main body of the workspace, filling all the space between the sidebar and the input box. Its view hierarchy is shallow.
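A rough sketch of that shape, assuming a LazyVStack inside a ScrollViewReader (the AgentEvent fields, the plain Text row, and the auto-scroll behavior here are simplified assumptions, not the app's actual code):

```swift
import SwiftUI

// Simplified sketch; the real AgentEvent has 18 cases. Assumptions:
// the id/title fields, the Text row, and scroll-to-bottom on append.
struct AgentEvent: Identifiable {
    let id: UUID
    let title: String
}

struct TimelineView: View {
    let events: [AgentEvent]

    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                // LazyVStack only materializes rows near the viewport,
                // which is what keeps long timelines smooth.
                LazyVStack(alignment: .leading, spacing: 8) {
                    ForEach(events) { event in
                        Text(event.title).id(event.id)
                    }
                }
            }
            .onChange(of: events.count) { _ in
                // Pin to the newest event as the stream appends.
                if let last = events.last {
                    proxy.scrollTo(last.id, anchor: .bottom)
                }
            }
        }
    }
}
```

The design choice worth noting is the laziness: with a plain VStack, every appended event would force the whole column to lay out again, while LazyVStack keeps layout cost proportional to what is on screen.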
Introduction

"The best developers have always built their own tools." — The cmux Zen

This is the 54th article in the "One Open Source Project a Day" series. Today, we are exploring cmux. Where projects like pi-mono or Warp are redefining how we interact with the terminal, cmux is building a new "physical space" for the AI Agent era. It is not just another terminal emulator; it is a highly programmable terminal.