
Given how much money is on the line, it would be gross negligence if anything that came publicly out of the CEO's mouth, or was otherwise published by the company, were not marketing.

The question is whether they need to massage the results for them to be marketable.

Sometimes you gotta let people know how awesome you are. The real question is whether you're misrepresenting yourself (all marketing, no substance).

> Marketing is not intentional.

That's an odd definition of "intentional". Evolution has filtered for people with certain views and the marketing has just emerged from their actions. ... So?

A deadly virus (a naturally occurring one, let's say) wasn't created intentionally either; evolution selected for it. It's still bad and kills people. The lack of intention doesn't make it any nicer.


I think that's a reasonable analysis, but it's very different from the one usually implied by "marketing". Most people I see talking about Dario and his "marketing" go on to express confusion or frustration about why he would decide to message this way, ignoring what I (and perhaps you?) consider to be the obvious answer: he believes it's true.

Every reply here forgets or overlooks the main reason why this is not going to happen: the astronomical AI data center investments currently underway. Those places are not just for training. They are for inference too, and they are how all those investments are expected to eventually pay off. The whole AI sector of our industry depends on running models in these places.

These astronomical AI data centers will be used for high-value inference with smarter models that really are too large to run locally. The investments will be fine once they pivot to that use. Currently available open models are not in that range.

I don't buy that that will be a useful distinction.

First of all, no AI model will say "I'm too smart for this question; I suggest you use a cheaper one so my owner doesn't make unnecessary money off you" or "I'm too dumb, so instead of hallucinating I'll suggest you go to the cloud and ask my smarter sibling."

Second, there is no incentive in the market for tooling to evolve that way. There will be the illusion that some models (or rather some harnesses) do that, similar to today, but nobody will willingly leave money on the table. These data centers are not being built to solve world hunger. They are built to ultimately hook you on ever more realistic fake BS YouTube videos so you feel good while getting even more ads injected into your life.


A counter-argument would be that all programming languages of the last decades have been plain-text based. No other, more structured format has ever gained traction, even though modern editors could arguably support one easily. Turns out, it doesn't actually work that way.

HTML is plain text based at the same level as any programming language I can think of.

But we’re not even dealing with a programming language in any classical sense here. Interacting with an LLM coding system is multi-mode communication with on-demand, purpose-generated ephemeral UI. That doesn’t fit any of the established categories, so I think carrying over constraints from them doesn’t make sense either.

> with on-demand, purpose-generated ephemeral UI

Nope, it's a fixed, coded and shipped UI: the agent TUI.


Even Claude Code can whip up interactive, tabbed, multiple-choice questions, for example. If you use the superpowers plugin, it'll sometimes spawn a small web server demoing UI concepts or previewing more complex choices using LLM-generated HTML. Claude Code on the web will do even more involved React apps on the fly, next to the chat. There's no technical reason this couldn't get more complex, or vertically integrated with code editors.
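To make that concrete, here's a minimal sketch of what such an ephemeral UI amounts to (hypothetical code using only Python's standard library, not the plugin's actual implementation): serve one LLM-generated HTML page on a throwaway port, record the user's click, and tear the server down again.

    # Hypothetical sketch of an ephemeral preview server, not real plugin code.
    import http.server
    import threading
    import webbrowser

    # Stand-in for HTML an agent might generate to preview a design choice.
    GENERATED_HTML = b"""<!doctype html>
    <html><body>
      <h1>Pick a layout</h1>
      <button onclick="fetch('/choice?id=sidebar')">Sidebar</button>
      <button onclick="fetch('/choice?id=topnav')">Top nav</button>
    </body></html>"""

    class OneShotHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/choice"):
                print("user picked:", self.path.split("id=")[-1])
                self.send_response(204)
                self.end_headers()
                # shutdown() must run on another thread, or we'd deadlock.
                threading.Thread(target=self.server.shutdown, daemon=True).start()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(GENERATED_HTML)

    with http.server.HTTPServer(("127.0.0.1", 0), OneShotHandler) as srv:
        webbrowser.open(f"http://127.0.0.1:{srv.server_address[1]}/")
        srv.serve_forever()  # returns once a choice has been made

The UI exists only for the duration of one question; nothing ships, nothing persists.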

I'd definitely call that on-demand, purpose-generated ephemeral UI.


Most people edit documents in Microsoft Word, though, so it didn’t seem too far-fetched that LLM content would be edited similarly, especially as more and more non-programmers use it.

MS Word uses XML under the hood these days, right? (Or some SGML descendant, at least.)

It was less a comment about the format and more a comment about the application used to do the editing.

There's a visual editor for Windows Forms apps that is well thought of.

AI pricing is not mainly about cost; it's about market realities, i.e., charging exactly the sweet spot that maximizes profit.
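To illustrate the point (a toy model with entirely made-up numbers, not anyone's actual pricing): the profit-maximizing price is set by how demand falls off as the price rises, and it hardly moves even if the serving cost drops to near zero.

    # Toy "sweet spot" pricing: made-up linear demand, hypothetical unit cost.
    unit_cost = 2.0  # per-seat serving cost (hypothetical)

    def demand(price):
        # Made-up demand curve: buyers drop off as price rises.
        return max(0.0, 1000 - 40 * price)

    best = max(range(1, 51), key=lambda p: (p - unit_cost) * demand(p))
    print(best)  # 13 -- and with unit_cost = 0 it would be 12

Same sweet spot, give or take a dollar, whether inference is expensive or nearly free.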

While this sounds generous (and in some ways it is), it does not address the general point that GP is making: the systematic disadvantage that large parts of humanity have w.r.t. access to the tools. You could say they can't drive a Lamborghini either, but that also doesn't solve the problem.

You're absolutely right (pun intended).

An aside: It was a very nice gesture and completely unexpected by me, so even if it doesn't work out, it made my day. I personally believe that kind gestures have a lot of power.

Back on topic: there is a real danger of the gap between rich and poor universities widening significantly in all fields if the rich can afford Pro-level models, or even hardware that can run their own comparable models, while these remain fiscally inaccessible to the rest.

One can sweep this under the rug by blaming educational funding, but that just shuts down all discussion. Even if a country's GDP goes up by a lot -- as in Poland -- it takes time before any budget benefit trickles down to the education budget, and with some governments it never will.

I believe Microsoft et al. have the most power here to boost affordable access to AI for researchers on a large scale; the fact that they cut some of the more expensive models (Opus, 5.5) from their academic benefits package is a grim omen. I do realize they would like universities to pay them too, and ultimately the universities should do that -- but then we are back at the institutional level of the problem.


It isn't a nice gesture, it is guerrilla marketing! (pun also intended, but I mean it)

It's a problem of the individual institutions and countries. The budget required for AI tools currently is negligible compared to other university expenses. We don't need to call everything a systemic disadvantage when the disadvantaged (at the institution level) have agency here.

Can you tell me what budget is necessary to supply AI tools capable of substantial research assistance to all academic staff at a university?

You seem to have a good estimate in your head; I definitely do not.

From personal experience, ChatGPT 5.5 (the Plus tier) is excellent for programming tasks and also for various teaching-related tasks, but I have not observed the research benefits that Tim Gowers has when I ask it questions in my area of expertise. So the costs are definitely higher than a few dozen dollars a month per PhD student or professor.

You might be right that universities should immediately spring into action and demand funding for research-level AI resources and hardware. One thing you might be mistaken about is flexibility: public universities are unfortunately very inflexible institutions. One reason for this is that they have a large internal leadership structure AND they are funded by the state, so even if the entire university agrees on something, the funding is at the whim of the ministry of education and thus of the current political leadership.


> Can you tell me what budget is necessary to supply AI tools capable of substantial research assistance to all academic staff at a university?

I think the GP meant that *if the tools provide substantial benefit* to staff, their costs can be compared to salaries and other large expenses of the university. A $100/month subscription costs less than your office space.


Which is good, since public money is tax money, so it had better be spent wisely and not just thrown at the latest hype without thinking it through properly. It's a feature that public spending moves slowly; we should all be thankful for it.

> The budget required for AI tools currently is negligible compared to other university expenses.

Is it? Do you have any idea what the salary of a mid-tier university researcher in an Eastern European country is? Or in Africa or South-East Asia? With SOTA LLM pricing you easily get into the same order of magnitude, so labour costs for researchers at such universities would essentially double. Not "negligible" at all.


I feel like this is one of the most advantaged times in history in terms of regular citizens having access to cutting edge tools.

Looking online, it seems like the low-end estimate might be $30k a year for such math researchers? And ChatGPT Pro or whatever you want will run $100 a month, and should be coverable by grants. I'm quite sure MATLAB alone cost more in the past.
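Back-of-the-envelope with those figures (both of which are rough estimates, as noted above):

    salary_per_year = 30_000          # low-end estimate from above
    subscription_per_year = 100 * 12  # $100/month plan
    print(subscription_per_year / salary_per_year)  # 0.04, i.e. ~4% of salary

So even at the low-end salary estimate, the subscription is a small fraction of labour cost.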


> While this sounds generous (and in some ways it is), it does not address the general point that GP is making: the systematic disadvantage that large parts of humanity have w.r.t. access to the tools. You could say they can't drive a Lamborghini either, but that also doesn't solve the problem.

This was also the case historically, when being at certain universities, with better professors, a better selection of works available at the library, etc., would necessarily provide a systematic advantage.

This is the reality of progress. It is always unevenly distributed.

I do think the open source side of model development is a substantial counter to the pessimism here.


I mean, I don't think OpenAI should be wading into the policies and practices of foreign institutions and governments. Look at all the blowback we see from the collision of Anthropic or OpenAI and the US government.

At present, the tools are available to whoever wants to buy them. It's not OpenAI's fault that the parent commenter's government and/or institutions' policies haven't been updated to allow for their purchase and use.

I'd argue that the OpenAI dudes'/dudettes' level of generosity is appropriate given the circumstances.


Thanks for sharing!

My contribution to this discussion is the place in BTTF that makes fun of this concept, the home of Marty McFly: Hill Valley.

Not a tautological name but an oxymoronic one!


You're not thinking fourth-dimensionally!

We need to stand up against this by refusing to adapt. Let them scream. They are wrong. I refuse to tune my texts into a less polished form just to avoid their being labeled LLM output.

Why would they have to? Just to avoid being accused of using a slop machine? If that is the only criticism you have against LLM-produced text, then there is no problem.

And I'm saying this as somebody who is strongly against LLM-generated content of this form.


I have no problem with AI-generated text.

But I do have somewhat of a problem with unedited text. Personally, I even take the time to edit my HN comments.

And, for the same reason I'd have a problem watching the same episode of the same show every day, I have a problem with reading text that feels like a super derivative clone of tons of other writing. Which is usually what you get when you don't edit your AI-generated text.


All good, and I agree.

But the question was about somebody who does write the text themselves, who edits it themselves, no AI has ever touched it, but the result still has elements of what AI text typically has, because it's their style. Why should such people have to adapt? Just so they don't end up in a witch hunt? How about texts older than 2, 5, or 10 years? Should they be changed too? And what if "LLM style" changes over time?


Really? Do we now suspect everybody who uses the most basic of stylistic elements of producing slop?

Pendulums always swing back and forth between extremes, but oh boy did this one swing fast into witch-hunt territory.

