Are the employees in the demo senior executives at OpenAI? I can understand Altman being happy with this progress, but what about the mid- and low-level employees? Didn't they watch Oppenheimer? Are they happy to be destroying humanity/work/etc. for future and not-so-future generations?
Anyone who thinks this will be like previous work revolutions is kidding themselves. This replaces humans, and will replace them even more with each new advance. What's their plan? Live off their savings? What about family/friends? I honestly can't watch this and understand how they can be so happy about it...
"Hey, we created something very powerful that will do your work for free! And it does it better than you and faster than you! Who are you? It doesn't matter, it applies to all of you!"
And considering I was thinking of having a kid next year, well, this is a no.
Have a kid anyway, if you otherwise really felt driven to it. Reading the tea leaves in the news is a dumb reason to change decisions like that. There's always some disaster looming; there always has been. If you raise them well, they'll adapt to whatever weird future they inherit and be among the ones who help others get through it.
Thanks for taking the time to answer instead of (just) downvoting. I understand your logic, but I don't see a future where people can adapt to this and get through it. I honestly see a very dark future, and we'll be there much sooner than we thought... when OpenAI released their first model, people were talking about years before seeing real changes, and look what happened. The advance is exponential...
> I don't see a future where people can adapt to this and get through it.
You lack imagination then. If you read more history and anthropology (which, clearly, you haven't done enough of), your imagination will expand and you will easily be able to imagine such a future. Why? Because you will become aware of so many other situations that looked bleaker, where plenty of groups of people got by anyway and managed to live satisfying lives as best they could.
To this day there are still some hunter-gatherer tribes left in the Amazon, for example, despite all the encroaching modernity. Whatever happens, I can imagine being resourceful enough to find some mediocre niche in which to survive and thrive, away from the glare of the panopticon.
Or as another example: no matter how much humans dominate with their industrial civilization, cockroaches, pigeons, and rats still manage to survive in the city, despite not only receiving no support from civilization but being actively unwanted.
Or if you want to compare to disasters, how about the Black Plague? Living through that would likely have been worse than almost anything we complain or worry about today.
Your kids will have at least as good a chance as any of those. The key is raising them with appropriate expectations -- with the expectation that they may have to figure out how to survive in a very different world, not some air-conditioned convenience paradise. Don't raise kids who are afraid to sleep outdoors or afraid to eat beans or cabbage. Those folks will do poorly if anything goes wrong. If they have a good, resilient character, I really think they'll likely be fine. We are the descendants of survivors.
I am not aware of any example in the past where human beings could be magicked out of nothing (and disposed of) in unlimited numbers at the snap of a finger, at practically zero cost. I don't think history gives us any comparison for what is going to happen.
1) The hunter-gatherer example is not as far off as you think, actually, because from the point of view of their economy, our economy might as well be unlimited magic. All the work a hunter-gatherer does in a year probably amounts to only a few thousand dollars' worth of value if translated into a modern economy, far less than a minimum wage earner. And yet they persist, subsisting off of a niche the modern economy has not yet touched.
2) GPUs cost money. They are made of matter. Their chips are made in fab facilities that are fab-ulously complex, brittle, and expensive. Humans are made in very different ways (I've heard kicking off the process is particularly fun, but it can be a bit of a slog after that) out of very different materials, mostly food. So even if GPUs can do what humans can do, they are limited by very, very different resources, so it is likely they'll both have a niche for a long time. I calculated the "wage" an LLM earns recently -- it's a few bucks an hour IIRC (a rough sketch of that arithmetic is below, after point 3). Yeah, it may go down. Still, we're very much in a survivable ballpark for humans at that point.
2b) Think like a military planner. If they really screw up society badly enough to create a large class of discontents, it will be very, very hard for the elite to defend against rebels, because the supply chain for producing new chips to replace any that are destroyed is massively complex, long, and large, and full of single points of failure -- as is the supply chain for deploying GPUs in datacenters, and the datacenters themselves. You can imagine a tyrannical situation involving automated weapons, drones, etc., but for the foreseeable future the supply chain for tyranny is just too long and involves too many humans. Maybe a tyrant could get there in theory, but progress is slow enough that it's hard to think they wouldn't be at serious risk of having their tyrannical apparatus rebelled against and destroyed before it could be completed. It's hard to tyrannize the world with a tyrannical device that is so spread out and has so many single points of failure. It would not take a hypothetical resistance many targets to strike before setting the construction back years.
3) There is no AI that can replace a human being at this time. There are merely AI algorithms that make enthusiastic people wonder what would happen if they kept getting better. There is neither any reason to believe the improvement will stop, nor to believe it will continue. We really do not know, so it's reasonable to prepare for either scenario, or anything in between, at any time from a few years to a few centuries from now.
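Here is the kind of back-of-the-envelope arithmetic I meant in point 2; the token price and generation speed are purely illustrative assumptions I'm making up for the sketch, not measured figures:

    # Rough "hourly wage" of an LLM: dollars' worth of output it
    # produces per hour of continuous generation.
    # Both inputs below are illustrative assumptions, not measurements.
    price_per_million_tokens = 10.00   # assumed USD price per 1M output tokens
    tokens_per_second = 50             # assumed sustained generation speed

    tokens_per_hour = tokens_per_second * 3600             # 180,000 tokens/hour
    hourly_wage = tokens_per_hour / 1_000_000 * price_per_million_tokens
    print(f"${hourly_wage:.2f} per hour")                  # $1.80 with these numbers

Even if the assumed price or speed is off by a factor of a few in either direction, the result stays in "a few bucks an hour" territory, which is the point: it's a ballpark humans can still compete in, not free.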
All in all, there is far more than enough uncertainty in all these factors to make it certainly risky, but far, far from guaranteed, that AI will make life so bad it's not worth going on with it. It does not make sense to cut your family line short here in 2024 for this reason.
Also, living so hopelessly is just not fun, and even if it doesn't work out in the long run, it seems a shame to squander the precious remaining years of life. There are always possible catastrophes. Everyone will die sooner or later. AI could destroy the world, but a bus hitting you could destroy your world much sooner.
> a future where people can adapt to this and get through it
there are people alive today who quite literally are descendants of humans born in WW2 concentration camps. some percentage of those people are probably quite happy and glad they have been given a chance at life. of course, if their ancestors had chosen not to procreate they wouldn't be disappointed, they'd just simply never have come into existence.
but it's absolutely the case that there's almost always a _chance_ at survival and future prosperity, even if things feel unimaginably bleak.
> when OpenAI released their first model people were talking about years before seeing real changes and look what happened.
For what it's worth, most of the people in my social circle do not use ChatGPT, and it has had zero impact on their lives. Exponential growth from zero is zero.
The future is very hard to predict and OpenAI is notoriously non-transparent.
If they were stumped as to how to improve the models further, would they tell you? Or would Altman say "Our next model will BLOW YOUR MIND!", fake-it-till-you-make-it style, to pump up the company valuation?