choilive's comments | Hacker News

What PM thought this was a good idea? This has to be the result of some braindead "we need more AI in the product" mandate.

That is an easily falsifiable statement. If I ask ChatGPT or Claude what MCP is, "Model Context Protocol" comes up, and furthermore it can clearly explain what MCP does. That seems unlikely to be a coincidental hallucination.


Training data =/= web search

Both ChatGPT and Claude will perform web searches when you ask them a question; the fact that you got this confused is ironically topical.

But you're still misunderstanding the principal point, because at some point these models will undoubtedly have access to that data and be trained on it.

But they didn't need to be, because LLM function & tool calling is already trained into these models, and MCP does not augment this functionality in any way.


Claude gives me a lengthy explanation of MCP with web search disabled


Great! It's still irrelevant.


> But they didn't need to be, because LLM function & tool calling is already trained into these models, and MCP does not augment this functionality in any way.

I think you're making a weird semantic argument. How is MCP use not a tool call?


You're misinterpreting OP.

OP is saying that the models have not been trained on particular MCP use, which is why MCP servers serve up tool descriptions, which are fed to the LLM just like any other text -- that is, these descriptions consume tokens and take up precious context.
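To make that concrete, here is a rough, hypothetical sketch of the kind of tool description an MCP server hands back (the repo_list_files tool and its fields are made up; the shape loosely follows MCP's name/description/inputSchema tool listing). Every character of it gets pasted into the prompt as plain text, whether or not the tool is ever used:

    # Hypothetical MCP-style tool description (illustrative only; the tool
    # name and fields are made up). The whole blob is injected into the
    # prompt as text, so it costs context tokens on every request,
    # whether or not the tool is ever called.
    import json

    list_files_tool = {
        "name": "repo_list_files",
        "description": (
            "List files in the repository. Supports optional glob filtering "
            "and returns paths relative to the repo root."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "glob": {"type": "string", "description": "Optional glob, e.g. '**/*.re'"}
            },
        },
    }

    # Crude estimate using the ~4 characters per token rule of thumb.
    blob = json.dumps(list_files_tool)
    print(f"~{len(blob) // 4} tokens for one fairly small tool description")

Now multiply that by every tool on every connected server, on every request.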

Here's a representative example, taken from a real-world need I had a week ago. I want to port a code base from one language to another (ReasonML to TypeScript, for various reasons). I figure the best way to go about this would be to topologically sort the files by their dependencies, so I can start with porting files with absolutely zero imports, then port files where the only dependencies are on files I've already ported, and so on. Let's suppose I want to use Claude Code to help with this, just to make the choice of agent concrete.

How should I go about this?

The overhead of the MCP approach would be analogous to trying to cram all of the relevant files into the context, and asking Claude to sort them. Even if the context window is sufficient, that doesn't matter because I don't want Claude to "try its best" to give me the topological sort straight from its nondeterministic LLM "head".

So what did I do?

I gave it enough information about how to consult build metadata files to derive the dependency graph, and then had it write a Python script. The LLM is already trained on a massive corpus of Python code, so there's no need to spoon-feed it "here's such and such standard library function", or "here's the basic Python syntax", etc -- it already "knows" that. No MCP tool descriptions required.

And then Claude Code spits out a script that, yes, I could have written myself, but it does it in maybe 1 minute total of my time. I can skim the script and make sure that it does exactly what it should be doing. Given that this is code, and not nondeterministic wishy-washy LLM "reasoning", I know that the result is both deterministic and correct. The total token usage is tiny.
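For flavor, here is a minimal sketch of the kind of script this produces. This version is hypothetical: it approximates the dependency graph with a naive regex over "open Foo" statements instead of the real build metadata I actually pointed Claude at, but the overall shape is the same.

    # Hypothetical sketch: topologically sort ReasonML sources by local dependency.
    # Real dependency info should come from build metadata; the regex below is a
    # crude approximation for illustration only.
    import re
    from pathlib import Path
    from graphlib import TopologicalSorter  # stdlib, Python 3.9+

    SRC = Path("src")
    OPEN_RE = re.compile(r"^\s*open\s+([A-Z][A-Za-z0-9_]*)", re.MULTILINE)

    # Module name -> file path (ReasonML module names mirror file names,
    # with the first letter uppercased).
    files = {p.stem[:1].upper() + p.stem[1:]: p for p in SRC.rglob("*.re")}

    # Graph: each module maps to the set of local modules it depends on.
    graph = {}
    for module, path in files.items():
        opens = set(OPEN_RE.findall(path.read_text()))
        graph[module] = {m for m in opens if m in files}  # drop external deps

    # static_order() yields dependencies before dependents, so files with zero
    # local imports come out first -- exactly the porting order described above.
    for module in TopologicalSorter(graph).static_order():
        print(files[module])

Skimming something like that for correctness takes seconds, and if the code base changes I just rerun it.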

If you look at what Anthropic and CloudFlare have to say on the matter (see https://www.anthropic.com/engineering/code-execution-with-mc... and https://blog.cloudflare.com/code-mode/), it's basically what I've described, minus the steps of explicitly telling the LLM to write a script and reviewing that script.

If you have the LLM write code to interface with the world, it can leverage its training in that code, and the code itself will do what code does (precisely what it was configured to do), and the only tokens consumed will be the final result.
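In Python terms (the articles themselves use TypeScript, and generated_script below is a made-up stand-in for whatever the model writes), the pattern is roughly:

    # Sketch of the "have the model write code" pattern. The script text would
    # come from the LLM; we run it out-of-band and feed back only its stdout,
    # so intermediate data never consumes context tokens.
    import subprocess
    import sys
    import tempfile

    generated_script = '''
    # ...model-written code: read build metadata, compute the answer, etc...
    print("porting order written to port_order.txt (42 files)")
    '''

    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_script)
        script_path = f.name

    result = subprocess.run(
        [sys.executable, script_path],
        capture_output=True, text=True, timeout=60,
    )

    # This short string is the only thing that needs to re-enter the context.
    print(result.stdout)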

MCP is incredibly wasteful and provides more opportunities for LLMs to make mistakes and/or get confused.


I think the apt comparison is the rental car business. They are reasonably good at quality standards because the competition is stiff, and if the vehicles aren't reliable and clean, you will just use the company next door. This incentivizes prudent fleet management, and thanks to economies of scale, having in-house mechanics to constantly maintain the fleet quickly becomes cost-efficient.


I always found it ironic that Intel benignly neglected the mobile CPU/SoC market and also lost their process lead despite this supposed culture of never underestimating the competition. The paranoid Intel of the '80s/'90s is clearly not the one that existed going into the 2000s and 2010s.


Intel missed mobile, graphics, and AI, while failing to deliver 10nm, and it was all self-inflicted. They didn't understand what was coming. Transmeta was an easily identified threat to Intel's core CPU products, so Intel was more likely to pull out all the stops: above-board competition on product as well as IP infringement and tortious interference. Intel had good risk management in having a team working on evolutions of P6; if that hadn't already been a going concern (see also Timna), coming up with a competitive product in the early 2000s would have been much harder.


I can buy 3 or 4 JetKVMs for the price of 1 PiKVM; pretty hard to justify going for PiKVM unless there is a PiKVM feature you need.


It's remarkably straightforward. Not fool-proof, but easy. Bacteriostatic water, single-use needles/syringes, and self-healing injection port vials make it simple to maintain sterility throughout the process.

Multiple doses can be mixed and stored in the fridge for 4-6 weeks.


While I agree, and this all seems reasonable, I think you give the average person far too much credit.


It's extremely common. Everyone I know that's on a GLP-1 does it this way; that way you can buy it in bulk for a discount. I buy mine roughly 35 weeks' worth of doses at a time.


How do they have confidence that the vials they're getting are sterile / pure / free of endotoxins / etc?


This is commonly done for injectable fertility treatments, though in my experience they are hydrated just before use.


> I have such a hard time imagining a sub 250g drone under 50ft being justifiably something that the state can deny people in such a huge radius.

Ukraine would like a word


I agree that there's still a lot of places where drones probably can't be allowed!

But a 15-mile radius is huge. DC is only 10 miles to a side. It feels absurd to have so, so, so much residential area, far from anything city-like, where drones are verboten.


Safety


Also better performance, since solid-state batteries are lighter. More flexible car layouts and longer range, since they're more compact. Faster charging due to reduced resistance. More stable in extreme cold or hot temperatures. It truly will revolutionize EVs if they can mass-produce these.


Can anyone recommend a good source that quantifies these qualities against LFP/NMC benchmarks?


With LFP supposedly being safer than NMC, would it have a weight benefit as well?


What if you don't have any talents? (Or at least haven't discovered them yet.) I seem to be quite mediocre at everything.


This is normal; most people are like this. The idea that there's something out there that you're just amazing at without even trying very hard is a trap, and believing it will destroy your life. You just have to pick something you want to be good at and do it until you are.


There is probably something you are naturally better at, but it might not be something that is very visible, like juggling or playing guitar. It might be something like being patient with people.

Also, do you have things that you are really interested in, or enjoy doing? I think sometimes the basis for what people call "talent" is really just that some people happen to really like a thing, and then spend a lot of time doing it. (There is obviously also the flip side of this, which is that people often prefer doing things they are naturally good at.)

As a final thing, if you really don't have any particular talents, who cares? You are no less valuable a human because of this.


Jeremy Utley has some good advice for such a situation. https://youtu.be/wv779vmyPVY


Sometimes it's the environment.

The people around you, the job, the incentives, your health (ADHD), the bad habits.

There's always a chance.


It's unlikely you'll be a natural at anything. This is nonsense pushed by the self-improvement demagogues. Even in much simpler societies humans develop skills through practice. The way most of us get good at anything is through repetition. Just show up and do it.


I knew this day would come... but not this soon.

