Seriously, someone should stop OpenAI from hijacking people's browsers to make them go to ChatGPT. And I also agree that it's evil of Microsoft to leverage its wildly popular Bing and Edge platforms — which we were already all totally using — and insert this malicious AI into them. And all without any repeated, near-constant warnings that this is a beta project that should not be trusted. This was forced on us, not something that went viral and got people excited to try it.
I mean, for a short while I was actually dreading that one day my Word would auto-update and I'd be greeted by Clippy again - Microsoft's initial press statements while the ChatGPT hype was at its peak did sound a bit as if they were scrambling to stuff ChatGPT into as many products as possible before any competitor could make a move.
But I think by now, Bing aka Sydney has made a very convincing case to Microsoft that this would be a very bad idea.
The risks of "AI harm" seem overblown to me. The negative examples in the news (eg Sydney) look more like PR problems than societal risks. While improved LLMs will aid bad actors in spam/astroturfing, there's almost nothing we can do about it -- gatekeeping the models helps a little, but they seem to be reverse-engineered in <1 year regardless.
I can't help but wonder who's pushing this "AI safety" narrative, and why.
The AI safety narrative is being pushed by people who think that if we can't control today's AI, we won't be able to control tomorrow's AI either.
Yep, current AI harms seem fairly limited. And when gorillas first saw humans on the savanna, the human harms to their habitat were also quite limited.
We may only have a single chance to get smarter-than-human AI right.
It's easier not to understand the risks. It takes extra effort to understand AI safety problems, and still more effort to solve them. People who blow off AI safety are being blithe and not engaging with the arguments.
As a former Sr ML researcher at Google, I do have a cursory understanding of AI and its risks. But as I mentioned in another comment, the current discussion feels like we're blaming a slot machine's bytecode instead of the casino owners.
Which, you should then realize, is the crux of the alignment problem.
Humans generally have the same alignment constraints: we need food, shelter, air, water, etc. We can all agree on that. But if you give 10 humans $1 each and one human $1 billion, the billionaire can quickly ignore and forget about the needs of those with $1. The human alignment problem is not solved!
With the potential for all-knowing information ingestors with a real-time input system that far outstrips human capabilities, the delta of the problem just becomes worse, particularly because the AI's goals are much more likely to be aligned with the billionaires' ideas.
Could it be that some serious professionals, in tech and elsewhere, are driven by internal or external forces to do things professionally that may not align well with larger considerations?
The AI safety discussions mask the fact that we are nowhere near high-powered AGI. Maybe we'll never get there, but the discussions make AI look more powerful than it is, i.e., it's a marketing angle.
If there's a 1% chance of a fast takeoff within the next decade, is that not something we should take seriously? I really don't think AGI within the next decade is completely off the table.
I mean yeah, that's the real question. Not where AI is now, but where it will be a decade or two from now. Talking about AI not currently being dangerous just ignores the point AI safety people are making.
Should we worry about other non-existent technologies the same way? We had the nanotechnology people pushing all sorts of scary stuff a few decades ago and nothing came of it, for example.
Maybe. The thing about preparing for unlikely outcomes is that when they don't happen it's easy for detractors to say "see how stupid we were to worry about that" even though everyone agreed that nothing happening was the most likely outcome all along.
So how worried should we be about the impact of as-yet-uninvented anti-gravity technology, for example?
My point is: doomsday scenarios are often used to gain influence/funding/etc., which makes me somewhat skeptical of the motives of people pushing them.
How many companies are trying to make anti-gravity technology? No serious ones that I know of; we don't even know how that would work. We have zero examples of anti-gravity.
How many companies are working on AI? Well, a number of the biggest, best-funded companies in the world. And those companies have 8 billion copies of self-booting intelligence to learn from.
It is incredibly, I mean massively, ignorant to say the creation of intelligence is impossible when we're already doing just that by having sex. Nature did this by random walk. There was no magic here, no intelligent design. Applied chemistry and randomness created human intelligence. Now humans are taking the mind's massive filtering capabilities and the reasoning ability it provides and working steadfastly on making intelligence happen artificially, and your response is about anti-gravity...
It is also massively arrogant to assume we can synthetically create everything we observe in nature - we might fail on some things. In fact, so far we have failed on a lot in many areas.
Btw. the fact that companies are wasting money doesn't really mean much.
For this argument to be persuasive, you'd have to argue that doomsaying gains more influence/funding/etc. than mainstream AI development. I'm not sure this is true in general? Like, I believe https://intelligence.org/ has operated on a comparative shoestring for most of its existence, e.g. no salaries above $100K https://www.lesswrong.com/posts/qqhdj3W3vSfB5E9ss/siai-an-ex...
The default is: running a nonprofit generates less revenue than running a for-profit.
I've talked to a lot of the AI "doomsayers" in person and I believe they are sincere in their worries. That's not to say I agree with all their ideas unreservedly -- there is a fair amount of disagreement among the doomsaying crowd. There is an entire spectrum of beliefs about the likelihood of doomsday, and lots of debate, see e.g. https://www.lesswrong.com/s/n945eovrA3oDueqtq However I don't think this is a Machiavellian ploy for influence/funding/etc.
IMO, Facebook's problems (body image issues, fake news/clickbait) aren't caused by a genuine inability to control their "AI" algorithms[1]. The root cause is they prioritize profits over user well-being. Framing the issue as "AI safety" feels like blaming a slot machine's bytecode instead of the casino owners.
But I appreciate the reply. I'm heartened to learn that people (besides the NYT/WSJ) are genuinely concerned.
[1] I'd expect the algorithms are pretty similar to 'show me things like the ones my friends interacted with.' And if they really cared about addiction, they could just disable infinite scroll.
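To illustrate what I mean by "pretty similar": here's a toy sketch of that kind of ranking heuristic. This is purely a guess at the shape of the thing, not anything resembling Facebook's actual code; all names and data are made up.

    from collections import Counter

    def rank_feed(candidate_posts, friend_interactions):
        # Toy ranking: score each candidate post by how many friend
        # interactions share a topic tag with it. Hypothetical example only.
        topic_counts = Counter(tag
                               for interaction in friend_interactions
                               for tag in interaction["tags"])
        def score(post):
            return sum(topic_counts[tag] for tag in post["tags"])
        return sorted(candidate_posts, key=score, reverse=True)

    posts = [{"id": 1, "tags": ["fitness"]}, {"id": 2, "tags": ["news"]}]
    friends = [{"tags": ["fitness"]}, {"tags": ["fitness", "diet"]}]
    print([p["id"] for p in rank_feed(posts, friends)])  # -> [1, 2]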
> The root cause is they prioritize profits over user well-being. Framing the issue as "AI safety" feels like blaming a slot machine's bytecode instead of the casino owners.
AI will have the same problem, except instead of sociopaths at the helm, it will be an actual computer.
It’s PR on top of PR. OpenAI is “dedicated to making AI open™” but also to making AI safe, so, for “our safety”, it opts not to release models. Not for business reasons /s
What a load of alarmist nonsense. Who is the royal "we" in this article's title? Clueless Wall Street Journal-reading "victims"? Or just curious people who signed up for this? I believe it's the latter.
If you don't want it, don't sign up for it or opt in. Refrain from using it. Very simple.
I think it's "we" as in "real humans" vs them "AI chatbots". It would be easy to avoid them at the moment but might be a lot harder in the future (in a way it becomes difficult to use physical cash in many parts of the world). In a curious twist of fate, I suspect this article will be probably fed to train the next generation of AI models though.
In the dark ages before ChatGPT, the only access individuals had to large language models was through the editing process that newsrooms use to harness the power of dozens of human brains. These archaic processes had severe bandwidth limitations that prevented the level of individualized interaction that we all benefit from today.
People aren't worried about hurting themselves with AI, they're expressing that other people are too dumb or irresponsible to use it properly.
I'm less sympathetic to this concern. Not totally dismissive. I think some of the restrictions are fair for a company running a business. If you want to sell to other companies, make sure your bots don't use racial slurs, for example. But I don't think there's an ethical issue.
There's no correlation between intelligence and the likelihood of being hoodwinked by this stuff, afaict. The human biases involved are universal, and plenty of very smart people are publicly attributing to it capabilities it doesn't possess.
I've used it enough to know it has no utility for me. I don't expect my world to be improved by others using it, and I think there's a strong chance it will be made materially worse. That doesn't mean I think it should be banned, that the people that invented it have done something immoral or that I'm smarter than anyone else.
I'm absolutely worried about hurting myself with AI. I don't think you or I or anyone else is smart enough to avoid falling for AI-generated misinformation at scale.
> Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence. He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds.
As far as I can tell, this is the gist of the article, which I find boring. When Google started, its main product was still being actively researched - heck, it was the result of an algorithm from two grad students. You definitely had and still have made-up and biased results there too. Yet it was an incredibly useful technology.
Google listing search results that are unverified is a little different from an AI confidently answering a question wrong in English prose, isn't it? Only one of them is asserting truth.
Not really. In fact, with Google it could be worse, as it can unknowingly lead you to a fake site (a spammer site meant to look like an official government website, for instance), whereas ChatGPT gives you an explicit acknowledgement when you log in that a lot of the stuff it says is fake, plus it is clear you're chatting with an AI.
It's kind of like the difference between reading magazines at the supermarket checkout and going to a fortune teller. Both are going to have misinformation, but people are going to weigh that misinformation differently. You can have a sign outside of a fortune teller's that says "this is a novelty and is purely for amusement" and people are still going to believe the fortune teller can see the future, and likewise, you can have a magazine that says something about a "half man half bat" and people will still lend credence to it because hey, the supermarket is selling it, there must be some validity to it. We are remarkably good at ignoring warning labels and accepting misinformation if it sounds right or agrees with our existing biases, and I think if you're someone who accepts the half man half bat you're someone who's going to give even more weight to a fortune teller regardless of how many signs say "ignore the fortune teller."
Have any of these companies actually asserted any truth around these models?
I mean, the model itself asserts a bunch of things as truth, but I have a lot of books that confidently assert things that are completely wrong (fiction), and it hasn't caused many problems.
This seems like it depends very much on how the product is framed. If these companies are claiming that the models are completely accurate, they should definitely be hit for some kind of fraud, but I haven't actually seen that. The closest I've seen is companies claiming that they could be useful (and based on comments here, I think that's quite justified).
I'm not sure that's relevant, since the expected behavior with Google would be to click and go to sites that are confidently asserting untruths. In a number of cases those are the top results.
What strikes me the most about ChatGPT is how convincing it is at justifying patent nonsense. For example, I've noticed it has some strange obsession with sunscreen. Even for absurd questions such as "Should I wear sunscreen on the North Pole in December?" it still makes up a plausible list of reasons in favour of wearing sunscreen.
"Sunscreen is one of the most important items to pack for polar travel, as it is our goal that you will spend most of your time outside. Sunlight reflecting off the snow can quickly cause a burn." - https://oceanwide-expeditions.com/blog/what-to-pack-for-your...
I didn't know either, I just suspected it might be true.
Well, I've addressed it by specifying the December timeframe (the height of the polar night at the North Pole, without any sun to speak of). Sometimes ChatGPT even addresses the lack of sunlight in its reasoning and still recommends sunscreen.
I have my concerns about how AI will affect humanity, but there's something else on the other end of that spectrum that's been on my mind.
Effectively, we've implemented the ship's computer from Star Trek TNG. While there's enough enthusiasm that average people are playing with ChatGPT, it seems that the "glass half empty" attitude far outweighs the perspective that we've achieved, in our lifetimes, something that was once complete fantasy. Nobody I know, including those with greater ML/AI knowledge than mine, is in as much awe as I am. Current-generation LLMs are something we're already taking for granted as a cool toy, or we're concluding they're crap because they don't have a God-like grasp of facts, or we're waiting for them to become Skynet.
Given the overall response we're having to this technology, I can't help but conclude that society is overdue for being put in its place, and perhaps it even needs to be knocked down a peg. The real problem with AI may not be so much the technology itself but that we're not deserving or truly ready for it.
The key issue, I think, is the lie that this system is intelligent in any way; it simply isn't. If people approach it for what it is - a system statistically fabricating word patterns from massive amounts of input text - it will remove the dangers mentioned and let people know that it's not expressing thoughts as people do with words, but just outputting words that may or may not have any correspondence with the truth. That OpenAI and others claim it has intelligence, when it clearly does not, will make for interesting legal liabilities going forward if people experience any harm with the system and decide to take legal action. (This is also true for autonomous driving, which uses the same ML/DL tech.) ML/DL can be very helpful and factual as artificial perception, when the input comes directly from space-time and gives factual measurements as input to conceptual awareness, but perception alone does not intelligence make.
Microsoft has invested billions of dollars in OpenAI, the company whose technology is behind that new Bing and which spurred the current AI hubbub with its own wildly popular ChatGPT bot and Dall-E 2 image generator.
This is what concerns me: if this thing doesn't turn out to be great for society, it will still be pushed down people's throats because an ROI needs to be made. A lot of money is at stake, and so here we are.
If you’re going to claim this username, the very least you could do is to never comment independently, but only respond with the key points of the article as summarized by ChatGPT.
Because I guess uninsightful “late capitalism doomerism” copypasta isn’t encouraged on HN? It helps if comments don’t read like criticism written so lazily that ChatGPT could’ve written it.
GP has a valid point on the ROI comment, though - the platform will be promoted regardless of second thoughts on externalities such as accuracy and social effects.
Large buckets of money have already been thrown its way, and an ROI must be registered, even if that means pushing it fast and hard through lots and lots of PR.
I was expecting a call for regulation towards the end. OpenAI has to be very careful about the narrative. They have moved first and stated that in the hands of an autocratic regime, it can be a dangerous tool.
In modern society, we prioritize the ease of bringing new products to market over regulation and testing. The only industry we prospectively regulate for safety is the pharmaceutical industry. Otherwise, regulation is retrospective and slow.
Look at the way we regulate harmful chemicals. After studies find that chemicals found in consumer products are harmful, manufacturers spend several years redesigning their products to avoid them. After manufacturers have voluntarily redesigned their products, the chemicals are banned. The manufacturers often replace the harmful chemicals with structurally similar chemicals that have not yet been shown to be harmful. When those similar chemicals are shown to be harmful, the cycle repeats. This is the history of phthalates, which are used to soften PVC plastic. The CPSC first considered banning one phthalate (DEHP) in children's toys in the 1980s. Before that happened, all large manufacturers replaced it with another phthalate (DINP), which also turned out to be harmful. Both DEHP and DINP were finally banned by congressional action in 2008.
Technology is not a chemical, but anything that people will be spending significant time interacting with is likely to have some effect on their wellbeing, potentially small but potentially large, potentially positive but potentially negative. The type of harms these chatbots might cause are difficult to predict in advance. It could take a couple years to determine whether the current generation of chatbots are actually a net positive or negative for society. In that time, they may become too entrenched for us to do anything if they do turn out to be harmful. When Facebook first came out, I doubt many people foresaw the negative impact that it could have on teenagers. Today, many studies show that social media use has a negative impact on kids' wellbeing, but there is no going back to a pre-social media world.
Prospectively testing and regulating tech in the same way we regulate pharma seems insane. OTOH, we do want to make sure that world-changing tech actually changes the world for the better, and the only obvious way to do this is through some kind of regulation. Although regulation will undoubtedly slow growth, many of us would be happy to sacrifice some growth for improved wellbeing. At this point, everyone has seen the productivity-pay gap plot showing that most of the improvement in US productivity since 1979 has not translated into increases in real wages. Our current "growth at almost any cost" strategy benefits corporations much more than it benefits the average person.
"we" ?! appeal to populism, from the trade journal of top hats and Pinkertons. knee-jerk disdain aside..
Even the lofty towers of Finance, and their close companion Legal, are feeling the hot breath of the AI creatures, it seems. They are right to be alarmed, for it is within their own ranks that the most unexpected and treacherous uses of this tech will occur.
Sure, some salesman in San Francisco claims that he can sell ads to a billion South Asians, but "we" all know that it is in the inches of legal documents, and public fiduciary statements, that things will get really messy, really quickly.
Bye bye venture capitalist club -- you were raging tigers and you will tear yourselves into bloody bits in front of everyone.
When automation came for factory workers, the exploiters cheered. When it came for the service jobs, the exploiters cheered. When it comes for the knowledge workers... well, the exploiters will still cheer, it's just that the bullshit-job class (https://www.atlasofplaces.com/essays/on-the-phenomenon-of-bu...) will suddenly realize that they were never part of the exploiter class to begin with. They were just servants convinced that they were, like Heinrich Popitz's deck-chair guardians on the cruise ship...
Be careful when using this phrase around here; it's a sure-fire way to get grayed out to obscurity. There seems to be a subset of HNers who take offense every time they encounter it, because it's a "platitude".
I don’t disagree - the usual cohort here is people looking for that extra “capitalist help” with the whole “bootstraps” marketing angle popular these days.
And you’re right: that doesn’t make it less true. I suppose in the GPT space it may be “if it’s free, you’re the QC phase.”
This reminds me of the old 2600 crowd poster of "1984 - we're behind schedule".
I don't think this quick find[1] is the original poster, as I seem to remember it referencing the Secret Service rather than the NSA, but it sure does capture the whole idea well enough.
Companies wouldn't ship premature, broken products if consumers/customers cared enough to stop using them. Scale, network effects, etc., have all come together to ensure that this essentially never occurs. Worse, really: people will passionately and irrationally defend garbage.
The "first mover advantage" and such is a reality created by consumer/customer behavior, not by "tech" companies using the world as a pool of beta testers.