# 5 Things You Cannot Build with Plain Text AI Responses

Ask an AI assistant:

```txt
Which customers are at risk this month?
```

A text-only product can answer with a paragraph:

```txt
Acme, Northwind, and Globex look risky because usage dropped,
support tickets increased, and renewal dates are coming up.
```

That is useful for about ten seconds.

Then the user wants to sort the accounts, inspect the evidence, filter by segment, open the latest ticket, assign an owner, change the threshold, or mark one account as handled. The response is no longer an answer. It wants to become a workspace.

This is the core limitation of plain text AI responses. Text is good at explanation. Software is good at action. Most real AI products need both.

Generative UI exists because many AI responses are not naturally paragraphs. They are tables, forms, timelines, cards, maps, charts, approval flows, and decision surfaces. When the model only has text as an output primitive, the user has to manually translate the answer back into product actions.

Here are five useful things you cannot really build with plain text AI alone.

## 1. A Comparison Surface

Text can list options. It cannot make comparison ergonomic.

Take a travel assistant. The user asks:

```txt
Find me a quiet hotel in Lisbon near public transit under $180 per night.
```

A text response can return three options with bullet points. But comparison is a visual and interactive task. The user needs to scan tradeoffs:

- price
- neighborhood
- walk time to transit
- cancellation policy
- rating
- noise notes
- room type
- availability

If this comes back as prose, the user has to keep all of it in working memory. If it comes back as a generated interface, the product can show a sortable table, hotel cards, a map preview, filters, and a booking action.

The text version says:

```txt
Hotel A is cheaper, Hotel B is closer to transit, Hotel C has better reviews.
```

The UI version lets the user:

- sort by total price
- filter to free cancellation
- compare walking distance
- select two hotels side by side
- save one option
- change the dates without starting over

That last point matters. A static answer dies as soon as the user changes one constraint. A living interface can preserve state and rerun the right part of the task.

This is why "AI search results" often feel unfinished. The model found the information, but the product did not give the user a surface to decide.
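What a model could emit instead of prose is a structured payload that a renderer turns into a sortable, filterable surface. Here is a minimal sketch, assuming a hypothetical `ComparisonTable` component; the field names are illustrative, not any specific product's API:

```typescript
// A hypothetical structured payload a model could emit instead of prose.
// "ComparisonTable" and the row fields are illustrative names.
type HotelRow = {
  name: string;
  pricePerNight: number;    // USD
  walkToTransitMin: number; // minutes to nearest transit stop
  freeCancellation: boolean;
  rating: number;           // 0-5
};

type ComparisonSurface = {
  component: "ComparisonTable";
  rows: HotelRow[];
};

const surface: ComparisonSurface = {
  component: "ComparisonTable",
  rows: [
    { name: "Hotel A", pricePerNight: 140, walkToTransitMin: 12, freeCancellation: true,  rating: 4.2 },
    { name: "Hotel B", pricePerNight: 165, walkToTransitMin: 4,  freeCancellation: false, rating: 4.4 },
    { name: "Hotel C", pricePerNight: 175, walkToTransitMin: 9,  freeCancellation: true,  rating: 4.7 },
  ],
};

// The renderer, not the model, owns interactions like sorting and filtering.
// Two example interactions over the same payload:
const byPrice = [...surface.rows].sort((a, b) => a.pricePerNight - b.pricePerNight);
const refundable = surface.rows.filter((r) => r.freeCancellation);

console.log(byPrice[0].name);   // cheapest option
console.log(refundable.length); // options with free cancellation
```

The point of the sketch is the division of labor: the model supplies data shaped for comparison, and every sort, filter, and side-by-side view is an interaction on the surface rather than a new round-trip to the model.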

## 2. A Stateful Decision Flow

Many AI interactions are not one-turn answers. They are small workflows.

Imagine a procurement assistant:

```txt
Help me choose a laptop for the design team.
```

Plain text can ask follow-up questions:

```txt
What is your budget? How many people need laptops? Do they need GPU-heavy workloads?
```

That works, but it turns the workflow into a chat transcript. The user answers, scrolls, corrects details, and hopes the model remembers the latest state.

A better interface would show:

- team size
- budget range
- required specs
- approved vendors
- warranty requirements
- recommended devices
- total cost
- approval status

Each answer updates the same surface. The user can see the current state instead of reconstructing it from messages.

Text-only AI has a hidden state problem: the model may know what has happened, but the user cannot easily inspect the state of the task. A generated UI can make that state visible and editable.

For example, a configuration flow should not be buried in prose:

```txt
You selected 12 laptops, 32GB RAM, AppleCare, and expedited shipping.
```

It should look like an order builder with fields, totals, warnings, and undoable choices.

The interface does not need to be fancy. It needs to be inspectable. A user should be able to answer:

```txt
What have I selected, what changed, and what happens if I click submit?
```

Plain text can explain that. A stateful UI can make it obvious.
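One way to make that state obvious is to treat the whole task as a single visible state object that each answer patches, rather than a transcript the user has to reconstruct. A minimal sketch, with hypothetical field names:

```typescript
// Illustrative sketch: each user answer patches one inspectable state
// object instead of being buried in a chat transcript. Field names are
// hypothetical, not from any specific product.
type ProcurementState = {
  teamSize: number | null;
  budgetPerDevice: number | null;
  ramGb: number | null;
  expeditedShipping: boolean;
};

const initial: ProcurementState = {
  teamSize: null,
  budgetPerDevice: null,
  ramGb: null,
  expeditedShipping: false,
};

// Each turn applies a partial update; earlier answers are preserved and
// remain editable on the same surface.
function update(state: ProcurementState, patch: Partial<ProcurementState>): ProcurementState {
  return { ...state, ...patch };
}

let state = update(initial, { teamSize: 12 });
state = update(state, { ramGb: 32, expeditedShipping: true });

// "What have I selected?" is answered by reading the state;
// "what happens on submit?" by deriving totals from it.
const total = (state.teamSize ?? 0) * (state.budgetPerDevice ?? 0);
console.log(state.teamSize, state.ramGb, total);
```

Nothing here is clever; that is the point. Once the task state is a plain value, the renderer can show fields, totals, and warnings, and the model only needs to propose patches.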

## 3. A Safe Action Panel

AI products become much more useful when they can act. They also become much riskier.

Suppose an infrastructure assistant says:

```txt
I found the likely cause. The latest deploy increased error rates. I recommend rolling back service-api to version 2.14.8.
```

If the product only has text output, the next step is awkward. The model can paste a command or describe a rollback, but it should not silently execute one.


Actions need UI.

A safe action panel can show:

- affected service
- current version
- rollback target
- error rate before and after deploy
- expected impact
- required permission
- confirmation button
- audit note

The model can recommend. The application should control execution.

This is one of the strongest arguments for generative UI: it separates planning from acting. The model can compose an action surface, but the product decides which actions exist, who can use them, and what confirmation is required.

That boundary is hard to express in plain text. A paragraph cannot enforce permission checks. A markdown button does not carry the same product contract as a real component backed by application logic.

For example, the useful output is not:

```txt
Run: deploy rollback service-api --to 2.14.8
```

The useful output is a controlled rollback component:

```txt
Rollback service-api
Current: 2.15.0
Target: 2.14.8
Requires: Production deploy permission
Status: Ready for approval
```

The user still makes the decision. The UI makes the decision safer.
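The separation of planning from acting can be sketched as code. In this hypothetical example, the model emits a rollback proposal as data, and an application-owned check decides whether the execute button is even enabled; the permission string and type names are assumptions for illustration:

```typescript
// Sketch: the model proposes an action; the application decides whether
// it may run. Names and the permission string are illustrative.
type RollbackProposal = {
  action: "rollback";
  service: string;
  currentVersion: string;
  targetVersion: string;
};

type User = { name: string; permissions: string[] };

// The application, not the model, owns this check. The model's output
// never reaches an executor directly.
function canExecute(user: User, proposal: RollbackProposal): boolean {
  return proposal.action === "rollback" && user.permissions.includes("production:deploy");
}

const proposal: RollbackProposal = {
  action: "rollback",
  service: "service-api",
  currentVersion: "2.15.0",
  targetVersion: "2.14.8",
};

const viewer: User   = { name: "sam",  permissions: ["read"] };
const operator: User = { name: "dana", permissions: ["read", "production:deploy"] };

console.log(canExecute(viewer, proposal));   // false: panel renders, button disabled
console.log(canExecute(operator, proposal)); // true: confirmation still required
```

The rollback panel can render for both users; only the operator gets an enabled button, and even then the component still demands an explicit confirmation before anything runs.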

## 4. A Data Exploration Tool

Dashboards are where text-only AI breaks especially fast.

Ask:

```txt
What changed in revenue this quarter?
```

The model can write:

```txt
Enterprise revenue increased 12%, mainly from expansion in healthcare and finance,
while SMB revenue declined 4%.
```

That summary helps, but it is not enough for analysis. The next questions come immediately:

- Which accounts drove the increase?
- Is the change volume, price, or mix?
- What happened by region?
- Are there outliers?
- How does this compare to last quarter?
- Can I export the account list?

A living dashboard can show the summary next to the evidence:

- line chart for revenue over time
- stacked bars by segment
- account table with deltas
- filters for region and plan
- annotations for major events
- CSV export

The model should not have to choose between "write a long explanation" and "dump a giant table." The right response is often a short explanation plus an interface that lets the user explore.

This is the difference between a static dashboard and a living interface.

A static dashboard is designed for a known set of questions. A living interface can assemble the right view for the question the user just asked. The product still owns the components. The AI chooses the arrangement.

That is not cosmetic. It changes the user's relationship to the data. Instead of reading an analyst-style answer, the user gets a temporary analysis surface tailored to the task.
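"The product still owns the components, the AI chooses the arrangement" can be made concrete with an allow-list. This is a minimal sketch with hypothetical component names; a real system would validate props as well as names:

```typescript
// Sketch: the product owns a fixed set of dashboard components; a
// model-proposed layout is filtered against that set before rendering.
// All component names here are illustrative.
const allowed = new Set(["LineChart", "StackedBars", "AccountTable", "FilterBar"]);

type Panel = { component: string; title: string };

// Anything outside the contract is silently dropped, never rendered.
function sanitize(layout: Panel[]): Panel[] {
  return layout.filter((p) => allowed.has(p.component));
}

const proposed: Panel[] = [
  { component: "LineChart",    title: "Revenue over time" },
  { component: "AccountTable", title: "Accounts by delta" },
  { component: "RawHtml",      title: "anything" }, // not in the contract; dropped
];

const rendered = sanitize(proposed);
console.log(rendered.length); // 2
```

The model gets real freedom over arrangement, which views appear and in what order, without ever getting the ability to inject a component the product does not trust.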

## 5. A Recovery Experience

Plain text is weak at failure states.

Consider a tax filing assistant:

```txt
Upload my documents and tell me what is missing.
```

The model can respond:

```txt
You are missing a 1099-INT and your W-2 image is blurry.
```

Useful, but incomplete.

The actual product needs a recovery flow:

- show each uploaded document
- mark accepted files
- explain rejected files
- request the missing form
- offer a re-upload control
- validate file type and size
- preserve progress
- show what can continue without the missing document

If the response is only text, the user has to figure out where to go next. If the response is UI, the next step is already there.

Recovery experiences are everywhere:

- a payment failed
- an import partially succeeded
- a data source disconnected
- a generated report needs one missing permission
- a deployment was blocked by a failing check

Text can describe the failure. UI helps the user recover from it.

This is also where generic chat interfaces create unnecessary anxiety. A user should not have to parse a paragraph to know whether their work is saved, what failed, and which action fixes it.
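A recovery surface can be modeled as data: each item carries a status and, when it blocks progress, a concrete next action. A minimal sketch, with illustrative names:

```typescript
// Sketch of a recovery flow as data. Every rejected or missing item
// names its fix, so the renderer can attach the right control to it.
// Status kinds and field names are illustrative.
type DocStatus =
  | { kind: "accepted" }
  | { kind: "rejected"; reason: string; nextAction: "reupload" }
  | { kind: "missing"; form: string; nextAction: "request" };

type UploadedDoc = { name: string; status: DocStatus };

const docs: UploadedDoc[] = [
  { name: "W-2.jpg",  status: { kind: "rejected", reason: "image is blurry", nextAction: "reupload" } },
  { name: "1099-INT", status: { kind: "missing",  form: "1099-INT", nextAction: "request" } },
  { name: "1098.pdf", status: { kind: "accepted" } },
];

// The renderer can show exactly which items block progress and which do
// not, instead of asking the user to parse a paragraph.
const blocking = docs.filter((d) => d.status.kind !== "accepted");
console.log(blocking.map((d) => d.name));
```

Because every non-accepted status carries a `nextAction`, the UI always knows which control to render next to each item, and accepted work is visibly preserved.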

## What Changes Architecturally

The wrong conclusion is:

```txt
Let the model generate arbitrary frontend code.
```

That gives you flexibility, but it also gives you unpredictable layouts, security concerns, broken states, and components that do not match the product.

The better pattern is:

```txt
Let the model compose from UI components the application already trusts.
```

The application defines the available components and their props. The model chooses which components fit the user's intent. The renderer turns that structured output into real product UI.

In practice, that means teams need a component contract:

- component names
- prop schemas
- allowed actions
- data boundaries
- loading states
- empty states
- permission rules

This is where tools like OpenUI fit naturally. OpenUI's approach is built around model-generated UI that streams into a renderer, while the application keeps control over the actual components. The model is not asked to invent the product. It is asked to compose an allowed interface for the current task.

That distinction is the whole game.

## The Product Shift

Plain text AI is good when the user needs an explanation, draft, summary, or answer.

It is not enough when the user needs to compare, configure, approve, investigate, recover, or act.

Those verbs are where software begins.

The next generation of AI products will not be judged only by model quality. They will be judged by the quality of the surfaces wrapped around the model: the tables, forms, cards, charts, timelines, confirmations, and controls that turn an answer into a usable workflow.

Chat is still useful. Text is still useful. But when the response contains structure, state, or action, the final form should not be a paragraph pretending to be a product.

It should be an interface.

## References

- OpenUI website: https://www.openui.com/
- OpenUI GitHub README: https://github.com/thesysdev/openui
- OpenUI Creator Program brief: https://github.com/thesysdev/openui-creator-program/issues/5