
Can you blame the engineers? If you realize LLM tech is neat but ultimately overhyped and probably decades away from truly realizing the promises of general purpose AI, why not just switch goals to making as much money as you can?


Yes you absolutely can blame them for it. This type of shift (and a million other possible permutations) is why we invented the concept of "charters" around the same time we invented writing.

The entire point of a pre-commitment device is that you (or other stakeholders) anticipate your thinking will get distorted over time. If you could be trusted to make such a decision in the future, you wouldn't have written a charter to bind yourself.


It’s like joining a nonprofit trying to protect the rainforest and then finding gold. They say screw the forest, let’s mine gold now. Do the original employees then stay? Same here. Greed infected them all.

Welcome to San Francisco.


Good comparison.


Yes. They joined OpenAI with the understanding that it was meant to be a non-profit with a mission to benefit humanity.


This entire saga is really an example of the absurdity of non-profits and philanthropy in general.

The only difference between nonprofit and for-profit entities is that nonprofits divert their profits to a nebulous “cause”, with the investors receiving nothing, while for-profits can distribute profits to their funders.

Other than that, they are free to operate identically.

Generally, entities subject to competitive pressures and with incentives for performance are much better at “benefitting humanity.” Therefore, non-profit status really only makes sense when (1) a profitable enterprise oriented around the intended result isn’t viable (e.g., conservation), or (2) there’s a stakeholder that we’ve decided ought to be sheltered from the dynamics of private enterprise, e.g., university students or neutral public broadcasters.

But even in these cases, the non-profit entities basically behave like profit-oriented companies, because their goal is still profitability, just without a return to investors.

OpenAI as a nonprofit would behave the exact same way. There’s no law that the models would have to be open. They’d still be making closed models, charging users, and paying massive salaries. Literally the only difference is that they wouldn’t be able to return money to their investors, would therefore have a much harder time attracting investors, and would therefore be less equipped to accomplish their goal of developing powerful AI.

The irony is that nonprofits are usually only good for things that make for shitty businesses, and things that make shitty businesses usually aren’t that beneficial to humanity. As soon as something becomes really good at what it does, for-profit status makes sense.

What this means, imo, is that most philanthropy dollars are wasted and we would be much better off if they were invested instead. Perhaps that is the point of much philanthropic giving: it ends up being a game of how much money you can burn on nothing, a crass status symbol.


Matt Levine likes to say that the big Wall Street banks are socialist paradises that funnel almost all of the returns to the workers.

It happens everywhere


Did you not read what I said? They joined a non-profit and eventually realized the mission is futile.


I read what you said, and I apologize for not being a bit more clear.

I completely understand your perspective, and I hope I'm always strong enough to listen to my conscience and obey my morals.

One of the first interviews I was ever offered in a technical role was with Bechtel, in 2004. I was desperate to break into a career, so I accepted the interview. I was in the car driving to the location when I realized I couldn't do it. I couldn't ignore my morals and work for such a clear and direct war profiteer that, as a private company, had no oversight.

If I join a non-profit that has a humanitarian mission, I do so because I'm into the mission and feel fulfilled by that more than my comp. I can't imagine trading that in just because @sama got thirsty.

The mission isn't futile; the mission at this organization has been compromised and corrupted. Resign and continue your mission elsewhere.


So dissolve it, return the money, go start a commercial enterprise, then raise some money.


I’m sure that’s what each and every member of Hackernews would have done in the same position.


Hmm no, I don't think that's the case, but what exactly is the legal or ethical relevance of it?

You don't generally get to excuse bad behavior because you can make up a hypothetical different person doing the same bad thing in that situation.


Not all of us are as morally bankrupt as that. I personally think I could make tons of money with a dumb AI product in my specific area of expertise, but I don’t see how any tech from today would improve outcomes versus the non-AI state of the art; it would just add cost and complexity. I would personally be annoyed if a company I worked at changed its goals to making money rather than something more noble. It’s happened a few times to me, unfortunately.


Not all, but also not enough.


It is career limiting to have morals.


It's much harder living without strong moral virtue.


No matter what path you choose, you still die.


Sure, after fees and expenses...


This is a great observation which I have not heard before. I think it greatly changes the way I think about OpenAI’s success/infamy.



