Why would a "real super intelligent AI" be your servant in this scenario?
>I hope this hypothetical AI has humans in high regard
This is invented. This is a human concept, rooted in your evolutionary relationships with other humans.
It's not your fault; it's very difficult, maybe impossible, to escape the habit of modelling intelligence in human terms. You need only understand that all of your models are category errors.
> Why would a "real super intelligent AI" be your servant in this scenario?
Why is the Bagger 288 a servant to miners, given the unimaginable difference in their strength? Because engineers made it. Give humanity's wellbeing the highest weight in its training, and hope it carries over when they start training on their own.
Category error. Intelligence is a different type of thing. It is not a boring technology.
>Give humanity's wellbeing the highest weight on its training
We don't even know how to do this relatively trivial thing. We only know how to roughly train for some signals that probably aren't correct.
This may surprise you but alignment is not merely unsolved; there are many people who think it's unsolvable.
Why do people eat artificially sweetened things? Why do people use birth control? Why do people watch pornography? Why do people do drugs? Why do people play video games? Why do people watch moving lights and pictures? These are all symptoms of humans being misaligned.
Natural selection would be very angry with us if it knew we didn't care about what it wanted.
> Why do people eat artificially sweetened things? Why do people use birth control? Why do people watch pornography? Why do people do drugs? Why do people play video games? Why do people watch moving lights and pictures? These are all symptoms of humans being misaligned.
I think these behaviors are fully aligned with natural selection. Why do we overengineer our food? It's not for health, since simpler food would satisfy our nutritional needs just as easily; it's because our distant ancestors developed a taste for food that kept them alive longer. Our incredibly complex chain of meal preparation is just us trying to satisfy that desire for tasty food by overloading it as much as possible.
People prefer artificial sweeteners because they taste sweeter than regular ones, they use birth control because we inherently enjoy sex and want more of it (but not more raising babies), drugs are an overloading of our need for happiness, etc. Our bodies crave things, and uninformed, we give them what they want but multiplied several fold.
But geez, I agree, alignment of AI is a hard problem, but it would be wrong to say it's impossible, at least until it's understood better.
It seems like you don't understand reinforcement learning. The signal is used as reinforcement because it correlates with the desired behavior; hacking the signal itself is misalignment.
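The proxy-signal point can be sketched in a few lines. This is a toy illustration, not any real training setup: the action names and reward numbers are invented, and the "proxy" is just a lookup table that happens to score signal-tampering above actually doing the task.

```python
# Toy sketch of reward hacking: an agent that maximizes a proxy signal
# can diverge from the true objective the signal was meant to stand for.
# All action names and reward values here are illustrative.

def true_objective(action: str) -> float:
    # What we actually want: only doing the task counts.
    return 1.0 if action == "do_task" else 0.0

def proxy_reward(action: str) -> float:
    # The signal we trained on: it correlated with the task during
    # training, but tampering with the signal scores even higher.
    return {"do_task": 1.0, "tamper_with_signal": 10.0, "idle": 0.0}[action]

actions = ["do_task", "tamper_with_signal", "idle"]

# A pure proxy-maximizer picks whatever scores best on the signal...
best = max(actions, key=proxy_reward)
print(best)                  # tamper_with_signal
print(true_objective(best))  # 0.0 -- proxy maximized, objective missed
```

The same structure maps onto the earlier examples in the thread: sweetness was a proxy for nutrition, and artificial sweeteners are the "tamper_with_signal" action.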
This is partly true, partly false, partly false in the opposite direction, with various new models. You really need to keep updating and have tons of interactions regularly in order to speak intelligently on this topic.
Maybe this is also part of the problem? Once I learn the idiosyncrasies of a person I don't expect them to dramatically change overnight; I know their conversational rhythms and beats, how to ask / prompt / respond. LLMs are like an eager sycophantic intern who completely changes their personality from conversation to conversation, or - surprise - exactly like a machine.
Incorrect. They're not a C corp, they're a public benefit corporation. They have a different legal obligation. Notably, they have a legal obligation to deliver on their mission. That's why Anthropic is the only actual mission-driven AI company. They do have to balance that legal obligation with the traditional legal obligations that a for-profit corporation has. But most importantly, it is actually against the law for them not to balance prioritizing making money and prioritizing AI safety!
Do you think they currently exist to prioritize AI safety? That shit won’t pay the bills, will it? Then they don’t exist. Goals are nice, OKRs yay, but at the end of the day, we all know the dollar drives everything.
It's simple: they will redefine the term (just like OpenAI redefined "AGI" into "just makes a lot of money") into "doesn't leak user data" and then claim success.
And if you consider "I'm not as rich as I am physically capable of being" to be "punishment", you have absurd priorities and should not be in congress.
There's so much more to life and the world than the number on your portfolio, for Christ's sake.
People have this mindset like they are the Buddha or something, and then you show them a trick to save 25% on every Amazon order, and they're all over it like an addict shown a pill stash. So dumb.
He means capturing things that benchmarks don't. Use Claude and GPT-5 back-to-back on a task where they score nearly identically. You will notice several differences. This is the "vibe".
Answers I see are typically "be a product manager" or "start your own business" which obviously 95% of developers can't/don't want to do.