
Giving LLMs the ability to generate UI is a cool concept, but our models are not there yet. MCP Apps can be extremely powerful, for example, you can play Doom inside ChatGPT: https://x.com/rauchg/status/1978235161398673553?s=20

I don't think we can generate anywhere close to this kind of UI just yet.

We built https://usefractal.dev/ to make it easier for people to build ChatGPT Apps (they are technically MCP Apps), so I have seen the use cases firsthand. For most of them, an LLM cannot generate the UI on the fly.


Looks like OpenAI, Anthropic, and the MCP-UI team actually worked together on a common standard for MCP Apps: https://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-ap...

Honestly, I think the biggest friction for MCP adoption has been how user-unfriendly it is. It's great for devs, but not for average users. Users don't always want to chat; sometimes they just want to click a button or adjust a slider. This feels like the answer to that problem.

Full disclosure, I'm partial here because of our work at https://usefractal.dev. We were early adopters when MCP first came out, but we always felt like something was missing. We kept wishing for a UI layer on top, and everyone said it was gonna take forever for the industry to adopt one, maybe months, maybe years.

I can't believe adoption came so quickly. I think this is gonna be huge. What do you guys think?


You are touching on an important point. Basically, OpenAI and others provide a lot of poorly integrated tools and components. You can build nice things with those, but you have to deal with a lot of issues, and it's a non-trivial amount of work that apparently even they aren't doing. Even something as simple as triggering an OAuth sign-in to get access to models is not part of the SDKs. Most developer tools require configuring API keys in some file instead. No normal user is ever going to do that.

Things like ChatGPT are remarkably limited from a UX/UI point of view. The product can do amazing things, but the UI is nothing special. The Mac version currently has a bug where Option+Shift+1 opens a chat window but doesn't give it focus. When I do that from VS Code, it adds the editor window, but it's completely blind to any browser tab where I do the same. I'm sure there are good reasons for all that, but it strikes me as a work in progress that a good product owner would spot.

With apps, some of the more powerful capabilities (LLMs driving UIs directly, doing things in agentic loops, tool and API usage) are going to require much deeper integrations than currently exist. We get hints of what is possible, and nice technology demos. But it's still hard to build more complicated workflows around this, unless you build your own applications.

We've been staring at this from the point of view of automating some highly tedious stuff that we currently do manually in our company. For example, working with ChatGPT seems to involve a lot of copy-paste and manually doing things that it can't really do by itself. Even something as simple as working on a document: it will do alright work on the text but then make a complete mess of the formatting. I spent an hour a few days ago iterating on a document where I was basically just fixing bullets and headers. Most alternatives I've tried aren't any better at this.

None of this seems particularly hard; it's just a lot of integration work that hasn't happened yet. We have a bunch of Lego bricks, not a lot of fully mature solutions. MCP isn't a full solution; it's a pretty Lego brick. Mostly, even OpenAI and Anthropic aren't getting around to doing much more than simplistic technology demos with their own stuff. IMHO their product teams are a lot less remarkable than their AI teams.


Who wants a button that has indeterministic actions?


Unless the MCP server itself has an LLM call inside of it (rare), the MCP server is pretty deterministic. It’s the AI that invokes it that’s actually indeterministic, but the user is already using that.


This is a very strict definition of MCP. An agent (with an LLM call inside) can be an MCP server. Even a UI component can be an MCP server.


> pretty deterministic

This is an oxymoron.


I meant “pretty” as in: using a search engine is pretty deterministic, and any plain REST API is deterministic.

MCP servers’ tools are literally just function calls. It’s the LLM MCP client that’s not deterministic, not the MCP server.
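To make that distinction concrete, here's a minimal sketch in plain Python (no real MCP SDK; the tool name and data are invented for illustration). The tool handler is an ordinary function, so the same arguments always produce the same result; nondeterminism only enters when an LLM client decides whether, and with what arguments, to call it.

```python
def lookup_weather(city: str) -> dict:
    """A deterministic MCP-style tool: just a function over its inputs."""
    # A static table stands in for whatever backing API the server wraps.
    table = {"Berlin": 14, "Tokyo": 21}
    return {"city": city, "temp_c": table.get(city)}

# Same input, same output, every time -- unlike the LLM choosing to call it.
assert lookup_weather("Berlin") == lookup_weather("Berlin")
```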


No it's not.

In the real world, where it is intractable (at least given the current state of programming language tooling, and the existence of physics) to prove all eventualities and the absence of side effects in executed code, determinism is indeed a spectrum.

If we want to be specific here, I would say "pretty deterministic" equals "as deterministic as your typical non-LLM REST API call", which still spans a big range of determinism.


Calling it... Vibe clicking


The popularity of slot machines suggests there is a market


AI fills in a form, and you want to adjust the form before clicking submit. How often do you have to adjust the AI's answer vs. accepting it as is?


> button that has indeterministic actions

google.com (1998-present)

    [I'm feeling lucky]


Thank you so much! I'm glad you like it


It's funny how OpenAI announced the Apps SDK without the SDK. Anyway, we were so excited to get our hands dirty that we built our own SDK: https://github.com/fractal-mcp/sdk


This account was created 4hr ago just to write negative comments on this thread.

