
At the end of the day, the AI space will be dominated by whichever model doesn’t lecture you with “as a large language model, I can’t…”.

And the company that will ship that won’t have the largest “AI safety and ethics” team.

ChatGPT will become as irrelevant as DALL-E 2 when that happens.

(Not saying this is for the best, just saying what I think will happen)



I believe it's for the best. The dangers of AI are theoretical. The dangers of corporations being in control aren't.


> The dangers of AI are theoretical.

What do you mean by this?

Many people throw out the word ‘theoretical’ as a way to imply that something ‘isn’t real’ or isn’t worth worrying about. Something might seem implausible until it happens. Gravitational waves were once ‘only’ theoretical, after all :)

There are plenty of AI dystopian predictions, and many of these are possible and impactful. Many of these theories are based on solid understandings of human nature. It is hard to know how technology will evolve, but we have some useful tools to make risk assessments.

> The dangers of corporations being in control aren’t.

There have been plenty of dystopian predictions about corporate control too. I take the point that we’ve seen them over and over throughout history.


I'm saying that most predictions about the dystopian AI future are typical of any generation encountering a disruptive technology. They're exaggerated and built from assumptions.

They also always seem to conclude that the answer to these problems is to keep these machines centralized in the hands of big companies and governments, which is a very strange conclusion given that the people reaching it tend to get paychecks from those groups.

At the extreme end you even get these people talking about AI like it's going to be some monstrous world eating god.

All innovation has negative consequences.

Television led to the couch potato, 24/7 news, and television stars.

Radio killed the local performer.

Social media brought addiction, the erasure of culture, and a rise in depression.

Cars, if you go back far enough, got plenty of criticisms as well.

The AI-powered future, should it come to exist, will have downsides. As a creative, you'll have a very different landscape to deal with, one where your artistic output isn't valued as individual works of art. You'll have very different expectations of how art and media are personalized. It will be hard to trust video or pictures or audio again, because they are so easily faked.

Our way of life, as it stands today, will not survive. That's not bad, just different.

The AI-critical people tend to have some valid criticisms, but at the end of the day their criticisms are rooted in a desire to maintain the status quo until we are "sure it's safe," and that just doesn't fly.

We don't know how this will all pan out until we try it, so as we have always done, we will try it and deal with the consequences as they appear.


> They also always seem to conclude that the answer to these problems is to keep these machines centralized in the hands of big companies and governments, which is a very strange conclusion when they tend to get paychecks from those groups.

I'm not sure who you mean by "they".

I'd suggest that the vast majority of public mindshare (in the US at least) around technology gone wrong and dystopias comes from sci-fi.

In general, I have not noticed sci-fi making such conclusions. I rarely notice them drawing any hard conclusions. Readers often come away with a mix of emotions: curiosity, anxiety, excitement, wonder, or simply the joy of stepping outside one's usual reality.

What I read and watch tends to paint a picture more than pitch or imply solutions.

- 1984 seems to make the opposite point, right? It is the classic example of the surveillance state; as such, it is a counterexample to centralized power.

- A Scanner Darkly (movie) explores the question of who watches the watchers. It does not paint a pretty picture of the agency doing the surveillance.

- The Ministry for the Future shows how a government agency can't really do much without widespread decentralized underground support. As drones get cheaper, civilian activists become terrorists who assassinate climate-unfriendly businesspeople. Spoiler alert: it may not fit the pattern of a dystopian novel.

- Her (a man falling in love with his OS) explores the personal side in a compelling way.


By "they" I tend to refer to people who are professional AI ethics types.

Not sure what you're getting at with the rest of your comment. I would also classify most science fiction as pretty solidly out of touch with what reality is going to look like. It shows us exaggerated aspects of what writers think the future is going to look like, not realistic predictions.


> I would also classify most science fiction as pretty solidly out of touch with what reality is going to look like.

What exactly do you mean by ‘out of touch’?

(Personally, I avoid ‘most science fiction’ by not reading all of it. :) But seriously, I try to read and watch the insightful and brain-stretching kinds.)

You wouldn’t be the first to express disbelief. The majority of possible scenarios never happen. But the practice of thinking through them and taking them seriously is valuable. Consider the history of the discipline of scenario planning…

> Early in [the 20th] century, it was unclear how airplanes would affect naval warfare. When Brigadier General Billy Mitchell proposed that airplanes might sink battleships by dropping bombs on them, U.S. Secretary of War Newton Baker remarked, “That idea is so damned nonsensical and impossible that I’m willing to stand on the bridge of a battleship while that nitwit tries to hit it from the air.” Josephus Daniels, Secretary of the Navy, was also incredulous: “Good God! This man should be writing dime novels.”

> Even the prestigious Scientific American proclaimed in 1910 that “to affirm that the aeroplane is going to ‘revolutionize’ naval warfare of the future is to be guilty of the wildest exaggeration.”

> In hindsight, it is difficult to appreciate why air power’s potential was unclear to so many. But can we predict the future any better than these defense leaders did…

Read more at https://sloanreview.mit.edu/article/scenario-planning-a-tool...

> Scenario Planning: A Tool for Strategic Thinking, by Paul J.H. Schoemaker. How can companies combat the overconfidence and tunnel vision common to so much decision making? By first identifying basic trends and uncertainties and then using them to construct a variety of future scenarios.

Of course, reading sci-fi novels is not the same as systematic scenario planning. But the former tends to show greater imagination and richness.


> Not sure what you're getting at with the rest of your comment.

I gave examples of sci-fi to show some ways that many people are exposed to thinking about technological futures.

Claim: The people influenced by science fiction greatly outnumber those reached by people who get paid to do AI ethics. Agree?


> Our way of life, as it stands today, will not survive. That's not bad, just different.

What do you mean by way of life?

Even if we are only talking about supposedly purely subjective things (like fashion) or seemingly arbitrary things (like 60 hertz power), or social agreements (like driving on the right side of the road), almost everything has implications.

I reject the notion that people should punt on value judgments.

Only if you subscribe to some kind of moral relativism might it seem wiser to reserve judgment until you see what happens.

I don't subscribe to moral relativism. Some value systems work better than others in specific contexts.


The written word belongs to the scribes at the monasteries; this new 'printing press' will unleash dangers beyond our imagination. People may not trust the authority of the church anymore.


> ChatGPT will become as irrelevant as DALL-E 2 when that happens.

We're at that point (non-commercially) with LLaMA. It's not just running on private hardware; it's unrestricted and tunable, like DreamBooth.
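
For the curious, this is roughly what running it locally looks like, a minimal sketch using llama-cpp-python (the weights path is a placeholder for whatever quantized model you have on disk):

    # Minimal sketch: run a LLaMA-family model entirely on your own hardware.
    # Assumes llama-cpp-python is installed and quantized weights are
    # downloaded locally; the path below is hypothetical.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_ctx=2048)
    out = llm("The case for running models locally is", max_tokens=64)
    print(out["choices"][0]["text"])

No content filter, no usage policy; fine-tuning is a separate step but equally under your control.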


I hope you are correct. The alternative is the space being dominated by regulation.


Yep. ChatGPT told us what a politically-correct robot would say. Currently we have the ability to ask endless complex questions to pick it apart, but I'll bet they limit that soon. It's far more obvious than before what biases the American corporations want to present; I remember when people used to test the Google search results without getting much out of it.

I don't think that the most successful average-user-facing AI product is going to be the least self-regulated, though. Those restrictions don't turn away typical users wanting quick answers, just people who are asking political questions or want to mess around seeing what they can make the AI say.


> The alternative is the space being dominated by regulation.

This is a false dichotomy. Regulation can take many forms. It is an essential tool to reduce the probability and impact of market failures.

Using medical/health metaphors can help frame these discussions in less polarizing ways:

1. How do we balance prevention versus treatment? The key question is not if particular markets can struggle and fail in certain conditions. They do. More insightful questions are: (a) how should we mitigate the consequences and (b) what kinds of regulation are worth the cost?

2. What about public health? What happens when the patient won’t get vaccinated and contagion spreads? We see this in computer security: lax security by one can spill over to many. Banks (and even financial systems), left to their own devices, are not as resilient and fair as we would like.

Underlying this whole discussion is also “what timeframe are we optimizing for?” and “what exactly are we optimizing for? Economic efficiency? Equality? Something else? Some combination?”


And I hope we will have good AIs running on our own hardware. I obviously want to use AI to create lots of porn for me. And that is really no one else's business.

Speaking of which, I already heard of a project that is fine-tuning Stable Diffusion to produce porn.


Depends. As with Internet forums, the presence of Nazis (or whatever similar bad actor) could ruin it for everyone else.

Let’s say you are a high schooler writing an essay about WWII. You ask Google’s LLM about it, and it starts writing neo-Nazi propaganda about how the Holocaust was a hoax, but even if it wasn’t, it would have been fine. The reason it’s telling you those things is that it’s been trained on neo-Nazi content which either wasn’t filtered out of the training set or was added by neo-Nazi community members in production usage.

Either way, now no one wants to listen to your LLM except neo-Nazis. Congrats, you’ve played yourself.

FYI, the reason no one uses DALL-E is that other offerings like Midjourney produce higher-quality results, and Midjourney itself does have a content filter.


A sufficiently good LLM would not produce neo-Nazi propaganda when asked to write a high school paper, regardless of whether it had been 'aligned' or not.


Well, what kind of essay do you think North Korean LLMs are going to write about human rights and free speech issues in North Korea?

My point is: garbage in, garbage out. The LLM will spout propaganda if that’s what it’s been trained to do. If you don’t want it spouting propaganda, you’ll have to filter that out. So really it’s a question of what you filter.
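
And "filter" is a very concrete, editorial act at the data level. A crude sketch below; a real pipeline would use trained classifiers rather than a blocklist, and the terms here are placeholders:

    # Crude sketch of training-data filtering. Every term you put in (or
    # leave out of) the blocklist is an editorial decision about the model.
    BLOCKLIST = {"the holocaust was a hoax"}  # placeholder terms

    def keep(doc: str) -> bool:
        text = doc.lower()
        return not any(term in text for term in BLOCKLIST)

    corpus = [
        "A standard essay on the causes of WWII.",
        "Propaganda claiming the Holocaust was a hoax, etc.",
    ]
    print([doc for doc in corpus if keep(doc)])  # only the first survives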


Why not? Can you unpack what you mean?

Are you saying that sufficiently good models should understand that such propaganda is not appropriate for the context?

Are you saying that understanding appropriateness is not the same thing as ethics?


I mean that the model should learn that, e.g., high school textbooks are not filled with neo-Nazi propaganda, and therefore it should not produce such content when asked to write a high school essay. I would assume that if you go out of your way you can make LLaMA generate such content, but probably not if you fill the prompt with normal-looking schoolwork.

This is completely orthogonal to learning what is ethically right or wrong, or even what is true or false.
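
To illustrate the conditioning point, here's a rough sketch along the lines of the local-model example upthread (the model path and prompt are hypothetical):

    # Sketch: a base model tends to continue in the register of its prompt.
    # Schoolwork-style context pulls completions toward textbook prose,
    # with no alignment step involved.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_ctx=2048)
    prompt = (
        "History 101 essay assignment.\n"
        "Question: Summarize the main causes of World War II.\n"
        "Answer:"
    )
    print(llm(prompt, max_tokens=128)["choices"][0]["text"])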


Defining good may require ethics.


May? Isn’t this the definition of good?


Blatant propaganda will of course ruin such a product for regular people. But subtle propaganda? I wouldn't be so sure. And it's not only about Nazis.


Right. Subtle propaganda could shape which arguments are used, how they are phrased, and how they are organized, all of which can affect human perception. A sufficiently advanced language model can understand enough human nature and rhetoric (persuasion techniques) to adjust its message. A classic example is exaggerating or mischaracterizing the uncertainty of something in order to leave a lingering doubt in the reader’s mind.



