Topic Proposal: From Tool Calls to Interfaces - OpenUI for MCP and Agent Workflows
Hi Thesys team,
The README invites new topic pitches, so I would like to propose a developer-focused article about using OpenUI as the interface layer for agent workflows that are already built around tools, MCP servers, and structured outputs.
Why This Topic
Most agent demos still stop at plain text plus a list of tool calls. That is fine for debugging, but it is a weak user experience once the agent starts returning plans, search results, diffs, task queues, metrics, approvals, or account setup checklists. Those outputs are not naturally conversational. They want UI.
OpenUI is a good fit for this gap because agent systems already think in structured intermediate representations. The article would show how to turn those intermediate states into inspectable, interactive interfaces instead of forcing everything back through markdown.
Proposed Angle
This would be a practical integration article, not a product pitch. The core claim:
Agents do not just need better reasoning. They need better surfaces for review, approval, and action.
The article would use a small agent workflow as the running example:
- A tool returns a list of candidate tasks.
- The agent ranks them with evidence and risk.
- The UI renders sortable cards, approval controls, and status transitions.
- The user approves one action.
- The agent receives that approval as structured input and continues.
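To make the running example concrete, the article would pin down the data each step exchanges. A minimal sketch of those shapes (all type and field names here are hypothetical placeholders, not a real MCP or OpenUI schema):

```typescript
// Hypothetical record shapes for the running example; names are illustrative.
type CandidateTask = {
  id: string;
  title: string;
  link?: string;
};

type RankedTask = CandidateTask & {
  rank: number;                       // 1 = highest priority
  evidence: string[];                 // snippets the agent used to justify the rank
  risk: "low" | "medium" | "high";
};

// The structured input the agent receives back after the user acts.
type ApprovalDecision = {
  taskId: string;
  action: "approve" | "reject" | "request_changes" | "defer";
  note?: string;
};

// Trivial stand-in for the agent's ranking step: lower risk ranks first.
const riskOrder: Record<RankedTask["risk"], number> = { low: 0, medium: 1, high: 2 };

function rankTasks(tasks: Omit<RankedTask, "rank">[]): RankedTask[] {
  return [...tasks]
    .sort((a, b) => riskOrder[a.risk] - riskOrder[b.risk])
    .map((t, i) => ({ ...t, rank: i + 1 }));
}
```

The real article would replace the trivial ranking with actual agent output; the point is that every step exchanges typed records rather than prose.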
Proposed Structure
- The interface problem in agent workflows: why plain text breaks down when agents produce structured, multi-step state.
- What tool calls already give you: inputs and outputs that are already close to UI state (records, enums, booleans, confidence scores, links, and action labels).
- Mapping agent state to OpenUI: a simple translation pattern from tool result schema to OpenUI components.
- Human approval as a first-class interaction: rendering approve, reject, request changes, and defer actions without losing the agent's structured context.
- Streaming state changes: showing intermediate states such as researching, verifying, blocked, ready, submitted, and complete.
- Testing the interface contract: structural validation for generated UI, schema checks for action payloads, and a small golden set for regression tests.
- Where OpenUI fits: as the rendering layer for agent state, not a replacement for the agent framework itself.
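For the streaming-state section specifically, one way to keep the example honest is a small status type with an explicit transition table, so the UI can surface illegal transitions instead of silently rendering them. A sketch (the lifecycle and allowed transitions are illustrative, not prescribed by OpenUI):

```typescript
// Illustrative task lifecycle for the streaming-state section.
type TaskStatus =
  | "researching"
  | "verifying"
  | "blocked"
  | "ready"
  | "submitted"
  | "complete";

// Allowed transitions; anything outside this table is a bug worth surfacing in the UI.
const transitions: Record<TaskStatus, TaskStatus[]> = {
  researching: ["verifying", "blocked"],
  verifying: ["ready", "blocked"],
  blocked: ["researching", "verifying"],
  ready: ["submitted"],
  submitted: ["complete", "blocked"],
  complete: [],
};

function canTransition(from: TaskStatus, to: TaskStatus): boolean {
  return transitions[from].includes(to);
}
```

A transition table like this also doubles as test input for the "testing the interface contract" section.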
Code Examples
I would include compact TypeScript examples for:
- A mock MCP-style tool result schema.
- A translator function that maps ranked task records into OpenUI-ready structure.
- An approval action payload that returns structured state to the agent.
- A simple validator that rejects unsafe or malformed UI actions before execution.
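As a rough sketch of the shape the translator and validator would take (the `task-card` component vocabulary below is a placeholder; the article would map it onto actual OpenUI components):

```typescript
// Placeholder UI vocabulary; the article would map this onto real OpenUI components.
type UICard = {
  kind: "task-card";
  taskId: string;
  title: string;
  riskBadge: "low" | "medium" | "high";
  actions: ("approve" | "reject" | "request_changes" | "defer")[];
};

// Input shape assumed from the ranking step of the running example.
type RankedTask = {
  id: string;
  title: string;
  rank: number;
  risk: "low" | "medium" | "high";
};

// Translator: ranked task records -> renderable card descriptions.
function toCards(tasks: RankedTask[]): UICard[] {
  return tasks.map((t) => ({
    kind: "task-card",
    taskId: t.id,
    title: t.title,
    riskBadge: t.risk,
    actions: ["approve", "reject", "request_changes", "defer"],
  }));
}

// Validator: reject malformed or unexpected action payloads before execution.
const allowedActions = new Set(["approve", "reject", "request_changes", "defer"]);

function validateAction(payload: unknown): payload is { taskId: string; action: string } {
  if (typeof payload !== "object" || payload === null) return false;
  const p = payload as Record<string, unknown>;
  return (
    typeof p.taskId === "string" &&
    typeof p.action === "string" &&
    allowedActions.has(p.action)
  );
}
```

The validator is deliberately an allowlist: the agent only ever receives action strings the UI was built to emit, which is the safety property the article would test against.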
What I Would Avoid
- Generic "AI agents are transforming workflows" filler.
- Treating OpenUI as magic.
- Hand-wavy screenshots with no working shape behind them.
- Private or platform-specific examples that cannot be reused by other developers.
Deliverable
A markdown article, roughly 1,800 to 2,400 words, with code-first examples and an honest tradeoff section. If useful, I can also include a tiny companion repo that runs the mock tool-to-UI flow.
Happy to narrow the scope if you prefer this as an MCP-specific article, a broader agent-UX article, or a tutorial built around one concrete stack.