From f5e78bc7972e7b864425363d060e2629d5d0bf47 Mon Sep 17 00:00:00 2001 From: Imaan Date: Wed, 13 May 2026 02:38:20 -0400 Subject: [PATCH 1/2] Add plain text vs OpenUI article --- ... Plain Text (And How OpenUI Fixes Them).md | 260 ++++++++++++++++++ 1 file changed, 260 insertions(+) create mode 100644 Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md diff --git a/Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md b/Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md new file mode 100644 index 0000000..09e4363 --- /dev/null +++ b/Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md @@ -0,0 +1,260 @@ +# 5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them) + +Most chat interfaces make the model answer in a format that is easy for the model, not easy for the user. + +Markdown is fine when the user wants an explanation, a short list, or a chunk of code. It falls apart when the answer is supposed to be compared, filtered, edited, confirmed, refreshed, or acted on. At that point the user has to mentally rebuild the interface that should have been there in the first place. + +OpenUI changes the contract. Instead of asking the model to describe an interface in prose, your app gives the model a component library. The model responds in OpenUI Lang, and the renderer maps that compact output to your real React components. The result is still generated by the model, but it behaves like application UI: tables, forms, charts, cards, tabs, actions, state, and tool-connected data. + +The snippets below assume your app has registered the referenced components in its OpenUI library. That is the important design point: OpenUI does not ask the model to invent frontend code. It asks the model to compose components your product already trusts. 
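That registration contract can be pictured with a small sketch. This is illustrative TypeScript only, not the OpenUI API: the `ComponentRegistry`, `register`, and `resolve` names and the plain-object render result are all invented for the example. What it demonstrates is the design point above: the renderer only instantiates components the app explicitly registered, and rejects any name the model emits outside that library.

```typescript
// Hypothetical sketch of the "component library" contract. None of these
// names come from OpenUI; the shape is the point: the app registers the
// components it trusts, and the renderer refuses anything else.
type Props = Record<string, unknown>;
type RenderedNode = { type: string; props: Props };
type ComponentFactory = (props: Props) => RenderedNode;

class ComponentRegistry {
  private components = new Map<string, ComponentFactory>();

  register(name: string, factory: ComponentFactory): void {
    this.components.set(name, factory);
  }

  // Called once per node while walking the model's parsed output.
  resolve(name: string, props: Props): RenderedNode {
    const factory = this.components.get(name);
    if (!factory) {
      // Unregistered names are rejected, not improvised.
      throw new Error(`Unknown component: ${name}`);
    }
    return factory(props);
  }
}

const registry = new ComponentRegistry();
registry.register("Callout", (props) => ({ type: "Callout", props }));

// A node the model emitted resolves to a trusted component; a name outside
// the library would throw instead of rendering.
const node = registry.resolve("Callout", { intent: "info", title: "Recommended short list" });
```

In a real app the factory would return a React element rather than a plain object, but the gatekeeping logic is the same either way.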
+ +Here are five common AI responses that get awkward fast as plain text, and what they look like when the model can generate UI instead. + +--- + +## 1. Product Comparisons + +Product comparison is one of the most common chatbot use cases, and one of the worst fits for a paragraph. + +Imagine a user asks: + +> Compare these three project management tools for a 12-person engineering team. + +A text-only answer usually becomes something like this: + +```text +Tool A is good for teams that want issue tracking and sprint planning. +It costs $X per user and includes integrations with GitHub and Slack. + +Tool B is simpler and better for non-technical teams. It has fewer +developer features but is easier to configure. + +Tool C is more expensive but includes reporting, roadmaps, automation, +and portfolio-level views. +``` + +The answer may be accurate, but the user still has to do the work: + +- Find the price in each paragraph. +- Remember which tool had the better GitHub integration. +- Compare features that are not written in the same order. +- Decide which tradeoffs matter for their team. + +The shape of the information is a table, but the interface forced it into prose. + +With OpenUI, the model can return a comparison table, score cards, and a recommendation panel: + +```openui +cols = [ + Col("Tool", products.name), + Col("Best for", products.bestFor), + Col("Dev workflow", products.devWorkflow), + Col("Admin effort", products.adminEffort), + Col("Monthly cost", products.monthlyCost) +] +table = Table(cols) +rec = Callout("info", "Recommended short list", "Start with Tool A if GitHub workflow depth matters. Trial Tool B only if setup speed is the priority.") +root = Stack([CardHeader("Project management comparison"), table, rec]) +``` + +The point is not that tables are prettier. The point is that comparison is a visual task. The user needs alignment, scanning, sorting, and a clear recommendation. 
A generated UI can preserve those relationships instead of flattening them. + +This is also where a component library matters. Your app can decide what a `Table`, `Callout`, or `ScoreCard` looks like. The model chooses and configures components from the library; it does not invent arbitrary frontend code. + +--- + +## 2. Scheduling And Availability + +Scheduling is another place where text creates unnecessary friction. + +A plain response might say: + +```text +You are free on Tuesday from 10:00-11:30, Wednesday from 2:00-4:00, +and Friday after 1:00. Jordan is free Wednesday from 3:00-5:00 and +Friday from 9:00-2:00. The best overlap is Wednesday at 3:00 or +Friday at 1:00. +``` + +That answer contains the right facts, but it is not the right interface. The user cannot click a time, adjust duration, or see why a slot was recommended. If they want a 45-minute meeting instead of a 30-minute meeting, the conversation has to restart. + +The generated UI version can expose the available slots directly: + +```openui +$duration = "30" +slots = Query("find_meeting_slots", {duration: $duration}, {rows: []}) +durationFilter = Select("duration", $duration, [ + SelectItem("30", "30 min"), + SelectItem("45", "45 min"), + SelectItem("60", "60 min") +]) +slotList = SlotPicker(slots.rows) +confirm = Button("Book selected slot", Action([@Run(bookMeeting)])) +root = Stack([CardHeader("Available meeting times"), durationFilter, slotList, confirm]) +``` + +Now the user can change the duration, inspect the new slots, and confirm an action. OpenUI's reactive state model is useful here: when `$duration` changes, any query that depends on it can re-run and the interface updates. + +That turns the model's answer from "here is a paragraph about availability" into "here is the scheduling surface you needed." + +--- + +## 3. Analytics Summaries + +Analytics is where plain text often looks confident while hiding the work. 
+ +A model might answer: + +```text +Revenue increased this week, mainly because activation improved in the +self-serve segment. Enterprise pipeline was flat. Churn risk is up in +accounts created before the pricing change. The biggest issue is a +drop in trial-to-paid conversion for teams with more than 20 seats. +``` + +This is a useful executive summary, but it is not enough for the person who has to make a decision. They need to see the trend, inspect the segment, and change the time window. The answer also mixes observations with implied priorities. + +An OpenUI response can keep the summary, but attach it to the data: + +```openui +$range = "7d" +metrics = Query("growth_metrics", {range: $range}, {kpis: [], series: [], segments: []}) +rangePicker = SegmentedControl("range", $range, [ + Option("7d", "7D"), + Option("30d", "30D"), + Option("90d", "90D") +]) +kpis = KPIGrid(metrics.kpis) +trend = LineChart(metrics.series.date, [Series("Revenue", metrics.series.revenue), Series("Trials", metrics.series.trials)]) +segments = Table([ + Col("Segment", metrics.segments.name), + Col("Conversion", metrics.segments.conversion), + Col("Risk", metrics.segments.risk) +]) +root = Stack([CardHeader("Growth summary"), rangePicker, kpis, trend, segments]) +``` + +The text summary still belongs in the product. It should explain what changed and why it matters. But the underlying answer wants controls, charts, and drill-downs. + +OpenUI's architecture is a good fit for this because the model can generate the initial wiring, then the runtime can execute queries directly. The user changing a filter does not have to mean another LLM roundtrip. The UI can call tools, fetch fresh data, and update components as application state changes. + +That distinction matters in production. If every click goes back through the model, the interface inherits model latency and token cost. 
If the model generates the interface once and the runtime executes the interaction, the UI feels much closer to normal software. + +--- + +## 4. Forms And Intake Workflows + +Forms are painful in chat because a form is not really a list of questions. It is state, validation, defaults, dependencies, and submission. + +A chatbot might write: + +```text +Please provide the following: +1. Company name +2. Website +3. Team size +4. Monthly support ticket volume +5. Current helpdesk provider +6. Main pain point +``` + +The user can reply with all six fields, but then the app has to parse the response. If one field is missing, the model asks another question. If the website is malformed, the validation happens late. If team size changes the relevant follow-up questions, the conversation branches into more text. + +A generated form is simpler for both the user and the system: + +```openui +$company = "" +$website = "" +$teamSize = "10-50" +$ticketVolume = "" +$provider = "" +$painPoint = "" +submit = Mutation("create_intake", { + company: $company, + website: $website, + teamSize: $teamSize, + ticketVolume: $ticketVolume, + provider: $provider, + painPoint: $painPoint +}) +form = Form("support-intake", Button("Submit intake", Action([@Run(submit), @Reset($company, $website, $ticketVolume, $provider, $painPoint)])), [ + FormControl("Company", Input("company", $company)), + FormControl("Website", Input("website", $website)), + FormControl("Team size", Select("teamSize", $teamSize, [SelectItem("1-10", "1-10"), SelectItem("10-50", "10-50"), SelectItem("50+", "50+")])), + FormControl("Monthly tickets", Input("ticketVolume", $ticketVolume)), + FormControl("Current provider", Input("provider", $provider)), + FormControl("Main pain point", Textarea("painPoint", $painPoint)) +]) +root = Stack([CardHeader("Support automation intake"), form]) +``` + +This is not a cosmetic upgrade. It changes the reliability of the workflow. + +The fields are explicit. The defaults are visible. 
Required fields can be validated before submission. The mutation can run only when the user clicks submit. The app can persist form state, hydrate it later, and pass the result to a backend tool. + +OpenUI supports actions, form state, validation, and mutations, so the generated response can move from "please answer these questions" to "fill out this workflow." + +--- + +## 5. Troubleshooting And Runbooks + +Troubleshooting answers often become giant walls of conditional text. + +For example: + +```text +If your deployment is failing, first check whether the build step passed. +If the build failed, inspect the package manager logs. If the build passed +but the health check failed, confirm that the PORT environment variable is +set correctly. If the app starts but returns 500, check database connectivity. +``` + +This is useful, but it forces the user to hold a decision tree in their head. It also treats every step as equal, even though some checks are already known from logs or tool calls. + +A better troubleshooting surface is progressive and stateful: + +```openui +status = Query("deployment_status", {}, {build: "unknown", health: "unknown", database: "unknown", logs: []}, 15) +checks = Checklist([ + CheckItem("Build completed", status.build == "passed"), + CheckItem("Health check passed", status.health == "passed"), + CheckItem("Database reachable", status.database == "reachable") +]) +logs = LogViewer(status.logs) +retry = Button("Retry deployment", Action([@Run(retryDeploy), @Run(status)])) +root = Stack([CardHeader("Deployment troubleshooting"), checks, logs, retry]) +``` + +Now the answer is connected to the current system state. It can refresh automatically, show the failing step, and expose the next action without another prompt. + +This pattern applies to support bots, internal developer portals, security triage, CI debugging, and ops runbooks. Text is good for explaining the diagnosis. UI is better for showing status, preserving context, and letting the user act safely. 
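The stateful-checklist idea does not depend on any particular runtime. A minimal sketch in TypeScript makes it concrete; the types and function names here (`deriveChecklist`, `firstFailing`) are invented for illustration, not OpenUI APIs. The idea is that checklist state is derived from the latest query result, so each refresh re-evaluates every check and the first failure identifies the step to surface.

```typescript
// Illustrative only: checklist items are a pure function of the most recent
// status payload, so a refreshed query automatically updates every check.
type DeploymentStatus = { build: string; health: string; database: string };
type CheckItem = { label: string; passed: boolean };

function deriveChecklist(status: DeploymentStatus): CheckItem[] {
  return [
    { label: "Build completed", passed: status.build === "passed" },
    { label: "Health check passed", passed: status.health === "passed" },
    { label: "Database reachable", passed: status.database === "reachable" },
  ];
}

// The first failing check is the step worth surfacing to the user.
function firstFailing(items: CheckItem[]): CheckItem | undefined {
  return items.find((item) => !item.passed);
}

const items = deriveChecklist({ build: "passed", health: "failed", database: "unknown" });
console.log(firstFailing(items)?.label); // "Health check passed"
```

Because the derivation is pure, the same function serves the initial render, every scheduled refresh, and the re-check after a retry action.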
+ +--- + +## The Practical Rule + +Text is still the right output when the user asks for an explanation, a summary, a draft, or a recommendation that does not need interaction. + +UI is usually better when the answer includes any of these: + +- Multiple objects with comparable fields. +- User-editable inputs. +- Actions like approve, book, retry, save, or submit. +- Data that should be filtered, sorted, refreshed, or drilled into. +- Status that changes over time. +- A workflow where validation matters. + +The failure mode is easy to spot: if the model's answer makes the user copy values into another tool, scan a paragraph for structured data, or ask a follow-up just to change a parameter, the response probably wanted to be UI. + +OpenUI gives developers a way to make that boundary explicit. You define the components your app trusts. The model selects and composes those components. The renderer parses the generated OpenUI Lang and renders your UI progressively. For tool-connected apps, queries and mutations let the interface keep working after generation, without forcing every click through the model. + +That is the real upgrade. The model is not just writing nicer Markdown. It is producing a usable surface for the task. 
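The rule above can be treated as a literal checklist in code. This heuristic is invented for illustration and is not part of OpenUI; every name in it is an assumption. It encodes the signals from the list (comparable objects, editable inputs, actions, live data, validated workflows) so an app can decide per answer whether to request UI generation.

```typescript
// Invented heuristic: map the text-vs-UI signals onto a single decision.
// Real products would tune these signals per feature.
type AnswerShape = {
  comparableObjects: number;   // objects with comparable fields
  hasEditableInputs: boolean;  // user-editable inputs
  hasActions: boolean;         // approve, book, retry, save, submit
  hasLiveData: boolean;        // data to filter, sort, refresh, drill into
  needsValidation: boolean;    // workflow where validation matters
};

function prefersUi(shape: AnswerShape): boolean {
  return (
    shape.comparableObjects > 1 ||
    shape.hasEditableInputs ||
    shape.hasActions ||
    shape.hasLiveData ||
    shape.needsValidation
  );
}

// A plain explanation stays text; a three-tool comparison becomes UI.
const explanation: AnswerShape = {
  comparableObjects: 0,
  hasEditableInputs: false,
  hasActions: false,
  hasLiveData: false,
  needsValidation: false,
};
const comparison: AnswerShape = { ...explanation, comparableObjects: 3 };
console.log(prefersUi(explanation), prefersUi(comparison)); // false true
```

Any single signal is enough to tip the decision toward UI, which matches the failure mode described above: one copy-paste, one parameter change, or one paragraph scan is already too much.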
+ +## Further Reading + +- [OpenUI introduction](https://www.openui.com/docs/openui-lang) +- [OpenUI architecture](https://www.openui.com/docs/openui-lang/how-it-works) +- [Reactive state](https://www.openui.com/docs/openui-lang/reactive-state) +- [Queries and mutations](https://www.openui.com/docs/openui-lang/queries-mutations) +- [Interactivity](https://www.openui.com/docs/openui-lang/interactivity) +- [OpenUI benchmarks](https://www.openui.com/docs/openui-lang/benchmarks) From 5b1e8fccbbbb5b63cd225993ce4ec152ad62fbe6 Mon Sep 17 00:00:00 2001 From: Imaan Date: Wed, 13 May 2026 03:16:14 -0400 Subject: [PATCH 2/2] Add text versus UI decision matrix --- ...rrible as Plain Text (And How OpenUI Fixes Them).md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md b/Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md index 09e4363..54954a2 100644 --- a/Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md +++ b/Articles/5 Things That Look Terrible as Plain Text (And How OpenUI Fixes Them).md @@ -10,6 +10,14 @@ The snippets below assume your app has registered the referenced components in i Here are five common AI responses that get awkward fast as plain text, and what they look like when the model can generate UI instead. +| If the answer needs... | Plain text usually gives users... | Generated UI can give users... | +| --- | --- | --- | +| Comparison | Paragraph scanning | Aligned fields and sortable structure | +| Choice | A recommendation they must copy elsewhere | Selectable options and next actions | +| Editing | Another prompt | Inputs, validation, and local state | +| Monitoring | A stale summary | Refreshing status and tool-backed data | +| Workflow | Instructions | A guided surface with submit/retry/approve actions | + --- ## 1. 
Product Comparisons @@ -250,6 +258,8 @@ OpenUI gives developers a way to make that boundary explicit. You define the com That is the real upgrade. The model is not just writing nicer Markdown. It is producing a usable surface for the task. +For developers, the practical implementation question is not "Can the model draw any interface?" It is narrower and more useful: "What component vocabulary should this feature expose to the model, and what actions should remain under application control?" That boundary is what keeps generative UI from turning into unreviewable frontend generation. + ## Further Reading - [OpenUI introduction](https://www.openui.com/docs/openui-lang)