> you can't say for sure, from where we're standing now.
We can already see how LLMs can be substantially worse - they can cosplay human thoughts and sentiment in a way that wasn’t previously possible.
So much of this debate seems focused on this conception of Skynet-style murderous AI - or at least manipulative and scheming HAL 9000 types. But an order of magnitude greater risk has already arrived just from scaling existing harms.
Phishing schemes and pig butchering scams have destroyed countless lives. They’re now easier and more scalable than ever, as are fake news and disinformation campaigns.
Several companies are productising AI girlfriends with predatory pricing models, capitalising on the human desire for connection and intimacy the way slot machines and sports betting monetise our desire for a better life. That’s new.
It may not be an order of magnitude worse for everyone yet - but for certain vulnerable groups, that future has arrived already.
In defence of the comparison - there is no consensus at all that the Holodomor was deliberate from the beginning; that’s an active debate with prominent experts on both sides.
In both famines, there was a refusal to intervene once the starvation had begun, and in both cases that was unequivocally a deliberate choice of the British/Soviet leadership.
Further - there are many cases throughout history of companies steering state violence, from colonial India to Blair Mountain to Aaron Swartz.
The broad point here is that the Soviet Union is constantly used in our Western discourse for our own brand of whataboutism.
Our systems fail people constantly and brutally. Our supermarket shelves are stocked, but most of the Anglosphere is in the grip of an unprecedented housing crisis.
There are absolutely lessons we can learn from the Soviets in housing policy, but we won’t if every mention of them gets reduced back to their worst failures. They didn’t get their shelves stocked by talking about MKUltra or smallpox blankets all day.
You can argue that the grass is greener overall, but there’s still dead patches all over our lawn. That’s the broader point.
>there is no consensus at all that the Holodomor was deliberate from the beginning; that’s an active debate with prominent experts on both sides.
Either you're being deliberately dishonest or you haven't read enough of the details. Yes, there is debate over how much of that gargantuan human tragedy was started by tyrannical incompetence and how much was driven by the deliberate vengefulness of the Stalin government, further advanced by local initiative. But virtually all experts agree that, at a minimum, deliberate indifference allowed it to grow monstrously and prolonged it.
The leaders in Moscow (especially Stalin) and local commissars could soon clearly see that the collectivization policy was practically extinguishing all human life in the Ukrainian countryside, yet they continued to pursue it and even blocked all avenues of escape, while at the same time exporting grain they'd confiscated from people who were by then dying in their millions.
This is extremely important to remember, especially when Tesla describe their neural net approach as being easily fine-tuned to different jurisdictions.
I’ve seen the “human-like” behaviour of FSD 12.x praised a lot by channels like this, particularly where the car is breaking the rules in a way they consider “normal”. And it’s a fair argument that predictable behaviour improves safety.
However, behaviour that is common in the US - like making a turn into a side street while a pedestrian is beginning to cross - would be considered exceptionally aggressive and reckless here in Australia. It’s a cultural difference I’ve had to adapt to when moving back and forth.
At the end of the day though, when I walk across a street, I don’t want to have to worry whether Tesla has fine-tuned their model correctly to match our local expectations of yielding. I’d rather they just followed the law as closely as possible - because that’s the most predictable behaviour of all.
As a college student, I use emoji constantly to communicate all sorts of abstract sentiments, but in my experience they can also be irritatingly ambiguous and highly dependent on cultural norms and interpretation.
Take the thumbs-up emoji - within my social circles, the exact same emoji can be read both as enthusiastic agreement ("Sure!") and as a sarcastic affirmation ("Good for you.").
It's often difficult to infer the intended meaning, even with context, and in some circumstances I've found emojis have actually added significantly to the ambiguity and cognitive burden in parsing a text. That's not a problem I have often faced with simple smileys.