
I think humankind has managed massive shifts in what and who you could trust several times before.

We went from living in villages where everyone knew each other to living in big cities where almost everyone is a stranger.

We went from photos being relatively reliable evidence to digital photography where anyone can fake almost anything and even the line between faking and improving is blurred.

We went from mass distribution of media being a massive capital expenditure that only big publishers could afford to something that is free and anonymous for everyone.

We went from a tiny number of people in close proximity being able to initiate a conversation with us to being reachable for everyone who could dial a phone number or send an email message.

Each of these transitions caused big problems. None of these problems have ever been completely solved. But each time we found mitigations that limit the impact of any misuse.

I see the current AI wave as yet another step away from trusting superficial appearances to a world that requires more formal authentication protocols.

Passports were introduced long ago but never properly transitioned into the digital world. Using some unsigned PDF allegedly representing a utility bill as proof of address seems questionable as well. And the way in which social security numbers are used for authentication in the US is nothing short of bizarre.

So I think there is some very low-hanging fruit in terms of authentication and digital signatures. We have all the tools to deal with the trust issues caused by generative AI. We just have to use them.
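
To make the "we already have the tools" point concrete, here is a minimal sketch of document signing and verification with Ed25519, using Python's third-party "cryptography" package. The issuer, document contents, and key handling are made up for illustration; a real proof-of-address scheme would also need key distribution and revocation.

    # Minimal sketch: an issuer signs a document once, anyone can verify it later.
    # Requires the "cryptography" package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Hypothetical issuer (e.g. a utility company or registry) signs the document.
    issuer_key = Ed25519PrivateKey.generate()
    document = b"Proof of address: Jane Doe, 1 Example Street, 2024-05-01"
    signature = issuer_key.sign(document)

    # Anyone holding the issuer's public key can verify the document itself,
    # instead of trusting its superficial appearance (PDF layout, logos, etc.).
    public_key = issuer_key.public_key()
    try:
        public_key.verify(signature, document)
        print("Signature valid: unmodified and from this issuer.")
    except InvalidSignature:
        print("Signature invalid: altered or not signed by this issuer.")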



People can die during these transitions. Consider the advent of yellow journalism and its connection to the Spanish-American War of 1898: https://en.m.wikipedia.org/wiki/American_propaganda_of_the_S...


No doubt. People die because of virtually everything ever invented, and also because some things were never invented.

The best we can ever hope to do is find mitigations as and when problems arise.


Which is why we started saying "whoa, slow down" when it came to certain artifacts, such as nuclear weapons, so as to avoid the 'worse than we can imagine' scenario.

Of course this is much more difficult when it comes to software, and very few serious people think an ever-present government monitoring your software would be a better option than reckless AI development.


Apart from the transition to large cities, virtually everything you've mentioned happened in the last half century. Even the telephone was expensive and not widely in use until less than 100 years ago.

That's massive fast change, and we haven't culturally caught up to any of it yet.


Here's another one: we went from in-person storytelling to wide distribution of printed materials, sometimes by pseudonymous authors.

This happened from the 15th century onward. By the 19th century more than half the UK population could read and write.


Just because we haven't yet destroyed the human race through the use of nuclear weapons doesn't mean that it can't or won't happen now that we have the capacity to do so. And I would add that we developed that capacity within 50 years of creating the first atomic bomb. We're now living on a knife's edge, at the mercy of safeguards we don't give much thought to on a daily basis, because we hope they won't fail.

That's how I look at where we're going with AI. Plunge along into the new arms race first and build the capacity, then later figure out the treaties and safeguards which we hope will keep our society safe (and by that I don't mean a Skynet-like AI-powered destruction, but the upheaval of our society potentially as impactful as the industrial revolution.)

Humanity will get through it, I'm sure. But I'm not confident it will be without a lot of pain and suffering for a large percentage of people. We also managed to survive 2 world wars in the last century--but it cost the lives of 100 million people.


I tend to think the answer is to go back to villages, albeit digital ones. Authentication only enforces that an account is accessed by the correct "user", but particularly in social media many users are bad actors of various stripes. The strongest account authentication in the world doesn't help with that.

So the question, I think, is how do we reclaim trust in a world where every kind of content can be convincingly faked? And I think the answer is by rebuilding trust between users, such that we actually have reason to trust that the users we're interacting with aren't lying to us (and that also goes for building trust in the platforms we use). In my mind, that means a shift to small federated and P2P communication, since both enable users and operators to build the network around existing real-world relationships. A federated network can still grow large, but it can do so through those relationships rather than giving institutional bad actors as easy an entrance as anyone else.
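
As a toy illustration of "growing through relationships," here is one way admission could be gated: a new account is accepted only if someone already trusted has vouched for it within a small number of introductions. The graph, names, and hop limit below are invented for the sketch, not any real federation protocol.

    from collections import deque

    # Toy relationship-based admission: accept a candidate only if a trusted
    # member vouches for them within MAX_HOPS introductions.
    MAX_HOPS = 2

    # who -> set of accounts they have personally vouched for (illustrative data)
    vouches = {
        "alice": {"bob", "carol"},
        "bob": {"dave"},
        "carol": set(),
        "dave": {"mallory"},
    }

    def is_admitted(candidate: str, trusted_seeds: set[str]) -> bool:
        """Breadth-first search from trusted members along vouch edges."""
        frontier = deque((seed, 0) for seed in trusted_seeds)
        seen = set(trusted_seeds)
        while frontier:
            member, hops = frontier.popleft()
            if member == candidate:
                return True
            if hops == MAX_HOPS:
                continue
            for vouched in vouches.get(member, set()):
                if vouched not in seen:
                    seen.add(vouched)
                    frontier.append((vouched, hops + 1))
        return False

    print(is_admitted("dave", {"alice"}))     # True: alice -> bob -> dave (2 hops)
    print(is_admitted("mallory", {"alice"}))  # False: 3 hops away, beyond the limit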


But this causes other problems such as the emergence of insular cultural or social cliques imposing implicit preconditions for participation.

Isn't it rather brilliant that you can just ask questions of competent people in some subreddit without first becoming part of that particular social circle?

It could also reintroduce geographical exclusion based on the rather arbitrary birth lottery.


More tech won’t solve it. Areas, either physical or logical, with no or low tech might help.


> Each of these transitions caused big problems. None of these problems have ever been completely solved. But each time we found mitigations that limit the impact of any misuse.

This is a problem with all technology. The mitigations are like technical debt, but with a difference: you can fix technical debt. Short of societal collapse, mitigations persist, the impacts ratchet upward, and they disproportionately affect people at the margin.

There's an old observation (not quite a joke) that if civilization fell, a large percentage of the population would die of the effects of tooth decay.



