1) The hunter-gatherer example is not as far off as you might think, because from the point of view of their economy, ours might as well be unlimited magic. All the work a hunter-gatherer does in a year might amount to only a few thousand dollars of value if translated into a modern economy, far less than a minimum-wage earner produces. And yet they persist, subsisting in a niche the modern economy has not yet touched.

2) GPUs cost money. They are made of matter. Their chips are made in fab facilities that are fab-ulously complex, brittle, and expensive. Humans are made in very different ways (I've heard kicking off the process is particularly fun, though it can be a bit of a slog after that) out of very different materials, mostly food. So even if GPUs can do what humans can do, they are constrained by very different resources, and it is likely both will have a niche for a long time. I recently calculated the "wage" an LLM earns -- IIRC it's a few bucks an hour. Yes, it may go down. Still, that puts humans in a very survivable ballpark.
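For what it's worth, a back-of-the-envelope version of that "wage" calculation looks like this. Every number below is an illustrative assumption (API prices and generation speeds vary widely by model and provider), not a measured figure:

```python
# Rough "hourly wage" of an LLM: the dollar value of the tokens it can
# bill in an hour of continuous generation. All inputs are assumptions.

price_per_million_output_tokens = 10.0  # assumed API price, in dollars
tokens_per_second = 50                  # assumed sustained generation speed

tokens_per_hour = tokens_per_second * 3600  # 180,000 tokens per hour
hourly_wage = tokens_per_hour / 1_000_000 * price_per_million_output_tokens

print(f"${hourly_wage:.2f} per hour")  # → $1.80 per hour
```

Under these assumed inputs the result lands in the "few bucks an hour" range; doubling the price or the speed still keeps it within an order of magnitude of low human wages.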

2b) Think like a military planner. If the elite screw up society badly enough to create a large class of discontents, they will find it very hard to defend against rebels, because the supply chain for producing new chips to replace destroyed ones is massively complex, long, and full of single points of failure -- as is the chain for deploying GPUs in datacenters, and the datacenters themselves. You can imagine a tyrannical scenario involving automated weapons, drones, and so on, but for the foreseeable future the supply chain for tyranny is just too long and involves too many humans. Maybe a tyrant could get there in theory, but progress is slow enough that they would run a serious risk of having their tyrannical apparatus rebelled against and destroyed before it was completed. It is hard to tyrannize the world with a device that is so spread out and has so many single points of failure; a hypothetical resistance would not need to strike many targets to set construction back by years.

3) There is no AI that can replace a human being at this time. There are merely AI systems that make enthusiastic people wonder what would happen if they kept getting better. There is no strong reason to believe progress will stop, nor that it will continue. We really do not know, so it is reasonable to prepare for either scenario, or anything in between, arriving anywhere from a few years to a few centuries from now.

All in all, these factors create more than enough uncertainty to make AI genuinely risky, but it is far from guaranteed that AI will make life so bad it is not worth going on with. It does not make sense to end the race of life here in 2024 for this reason.

Also, living so hopelessly is just no fun, and even if things don't work out in the long run, it seems a shame to squander the precious remaining years of life. Catastrophes are always possible, and everyone dies sooner or later. AI might destroy the world someday, but a bus could destroy your world much sooner.


