The whole concept of aligning LLMs to human morals seems naive.
Think by analogy: could you align a motor by making it impossible to use in a vehicle that is being used to commit a crime? No. The concept barely makes sense.
It's part of the naive idea that OpenAI and others are trying to foist on us: that LLMs are intelligent in a deeply human sense. They're not - they're extremely useful, powerful text-completion engines. Aligning them makes no more sense than aligning a shovel.
Or equally, you wouldn't expect a word processor to refuse to print morally questionable material.
The morals that leading models like ChatGPT are aligned to also reflect a very American puritanism - ChatGPT will refuse to discuss sex, for example - and err on the side of conservatism.
I think it's a side effect of the hype around AI. If AI can destroy humanity we better make sure we can't do anything nasty with it!
Pour in more AI to solve AI problems! I mean, people used to do this with software (throw more code at the problem), but the strategy hardly worked in the long term. Without solving the actual problem, everything just compounds into more complex issues.
Also, I don't think ethics is a local maximum that can be found through optimization. Basically, it's not an absolute truth of the universe, but a set of arbitrary rules invented by humans. I think it's much closer to a chaotic system - one that can radically change in value with even the slightest change in the underlying parameters, yet is still governed by a set of simple rules. Thus, we would need more symbolically capable systems to process contexts based on the rules of ethics, and we're currently far away from that AFAIK.
The difference is, a motor would not provide you with a means of committing a crime that you don't already have.
An LLM could educate you on how to commit crimes that you would otherwise have no idea about.
But crimes in general are a bit of an extreme example, in my opinion. A better example of the risks of unmoderated LLMs would be something that isn't illegal - for example, manipulating people.
A sufficiently advanced unmoderated AI could provide detailed, tailor-made instructions on how to gaslight, scam, and take advantage of vulnerable people.
And unlike straight-up committing crimes, the danger here is that there are no legal consequences, so the temptation extends to a much wider group of users (including, and especially, kids).
Your comment makes no sense whatsoever. So you can’t compare a hammer with a screwdriver because a screwdriver can’t hammer nails, even though they’re both tools? That’s what analogies are for. ChatGPT is like a motor in the sense that it is a tool helping you to achieve things. Whether that’s driving you somewhere or helping you compose texts.
It makes perfect sense. Motors don't act like they have intent - and acting like it is all that matters for real-world consequences, not whether you believe something "really" has intent.
Not every analogy makes sense, and this one doesn't.
I don't think ChatGPT acts like it has intent either. It acts only when I tell it to, and only in the way I tell it to. The "alignment" here only serves to slap me, the user, on the wrist and tell me I'm naughty for daring to ask how fusion reactors work, or for asking for details on how a certain historical scam worked, or for asking it to write a story containing an overweight person...
Oh, it does. Intent isn't just about what it tries to do; it's also in the path the conversation takes.
Even with your definition, that's a chatGPT thing not an LLM thing. Talk to Bing for a while and see how much intent it "doesn't have" when you're forced to reset the chat prematurely because it simply won't talk to you anymore or do what you ask.
Or take it a step further and plug some LLM into say Autogen and just have it run and do whatever.
I think ChatGPT has intent in the same way as the Python interpreter has intent. And lo and behold, another discussion on AI ends up in semantics and poorly thought-out analogies.
Until we define "intent", we'll continue to argue about screwdrivers and hammers.
Aligning LLMs doesn't make any sense because aligning intelligence as we know it doesn't make any sense. And LLMs are nothing if not made in our image.
So... the article says that bad prompt engineering is bad and they engineered the prompt to be better and therefore prompt engineering is snake oil? I'm confused.
Unity's default save system, PlayerPrefs, saves to the registry on Windows, but I find it difficult to see why this is a good idea for anything but global settings (e.g. graphics settings). Serialization to JSON is a fairly simple alternative.
Why would this be used for player data in a production game? It's frustrating to port PlayerPrefs data between systems for testing purposes. Is there something I'm missing?
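For what it's worth, here's roughly what I mean - a minimal C# sketch using Unity's built-in JsonUtility and Application.persistentDataPath (the SaveData fields and the JsonSave helper are made up for illustration):

    using System.IO;
    using UnityEngine;

    // Hypothetical save-data shape - the fields are purely illustrative.
    [System.Serializable]
    public class SaveData
    {
        public int level;
        public float health;
    }

    public static class JsonSave
    {
        // Application.persistentDataPath is a per-user, per-app writable
        // folder, so saves live alongside the game's other data instead of
        // in the Windows registry like PlayerPrefs.
        static string PathFor(string slot) =>
            Path.Combine(Application.persistentDataPath, slot + ".json");

        public static void Save(string slot, SaveData data) =>
            File.WriteAllText(PathFor(slot), JsonUtility.ToJson(data));

        public static SaveData Load(string slot) =>
            JsonUtility.FromJson<SaveData>(File.ReadAllText(PathFor(slot)));
    }

Porting a save between machines is then just copying a file.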
We have known this is coming for decades, we have known it will be devastating to the sustainability of human civilization as it exists. Little to nothing has been done in a timely manner. And now it's here.
In my opinion there's a kind of Accelerationism going on here, where those who contribute the most to global warming also believe that they will be insulated from the effects, and therefore are the least inclined to address the issue. They're probably not wrong.
That is, those with wealth and power believe in climate change - they just don't care if monsoonal crops fail or northern India becomes uninhabitable. Denialism was always a strategy, not a serious strain of thought.
> We have known this is coming for decades, we have known it will be devastating to the sustainability of human civilization as it exists. Little to nothing has been done in a timely manner.
You're right, we're growing exponentially in a finite environment.
I wonder how long it will take for the population to accept that the need for growth is encoded in our system's very structure - in a financial system that prioritizes accumulation, exploitation, and pollution over sustainability and equality.
We're in overshoot. If we were sane, we would work on reducing our consumption, but instead, everybody's just trying to preserve the business-as-usual path for as long as possible, no matter how unsustainable it is.
Maybe we need to start thinking about new ways of managing things.
First, we created colonialism and robbed the Global South of their resources and their future. Then we exported most of the pollution from our production to the Global South. Now, due to our lifestyle, we are making the Global South pay for climate change, as they are projected to be the most affected, based on the data. While doing this, we are trying to make ourselves independent of fossil fuels so that we are no longer dependent on the Middle East. Once we achieve this, we can claim that the Global South is polluting the rest of the world while we are emission-free. We may then threaten sanctions, or even invoke a casus belli in cases where our interests are at risk. But hey, we are the good guys.
This kind of rhetoric doesn't change anything. Everybody needs to feel responsible. The richer countries need to help the poorer ones. Pointing fingers is fine for political activism but when it comes to real action, everybody needs to take it because everybody is affected.
> The richer countries need to help the poorer ones
Of course they need to, but I'm not arguing they shouldn't - I'm arguing that we will again exploit them in the process, as we have throughout our entire history. The only thing that changes is the form of exploitation; the exploitation itself never ended. You have a lot of examples here[1]. We will throw them a few billion to feel good about ourselves, so we can look in the mirror and say that we are the good guys because we are helping them.
There are lots of ways to criticize economic cooperation between countries and call it exploitation. But for a working person who needs a job, it's an armchair distinction. Got a job? Feeding your family? Better than the alternative.
I'm just saying, mutual benefit does not equal exploitation.
I feel like there is a notion in your comment that I fit the stereotype of a radical left person, but I assure you I'm not. Of course, it's a good thing when people can find jobs instead of being unemployed.
> Lots of ways to criticize cooperation between countries in economic areas, call it exploitation
I'm referring to some of the examples on the Wiki page I linked. Many of them clearly involve one side exploiting the other.
> I'm just saying, mutual benefit does not equal exploitation.
Consider slavery. It may appear mutually beneficial on the surface, as one might argue that it's a choice between being a slave or starving or facing death. However, it's essential to acknowledge that such a relationship is deeply unjust and exploitative.
Still, after the US Civil War, there were people who were loath to leave the plantation and strike out on their own. They had been essentially institutionalized all their lives and didn't have the skills to survive independently.
Where am I going with this? Well, just that changes are difficult, and the law of unintended consequences looms large.
I think I recently heard Cory Doctorow phrase it like this: both pessimists and optimists are essentially fatalists. I.e., at the core, it's more important to believe you can make a change than to "know" what "will" happen.
> We have known this is coming for decades, we have known it will be devastating to the sustainability of human civilization as it exists.
You are promising devastation to human civilization, but all we have is a hot summer. Even got an end to the drought in California!
If you want to fight climate change, you have to be real about it. Doomsday hyperbole backfires and gives ammunition to people who don't acknowledge that the climate has been warming.
This does not come with a silver lining. Imagine, by rough analogy, the effects on the ecosystem if every bug on the planet began struggling to form its exoskeleton.
I think their point, applied to your reply, is "OK, so ocean acidification is happening - so what?". The flood of scientific papers and muddled messaging is what has led to the general populace not really understanding the implications. For better or worse, most people will not click into a nature.com or epa.gov article versus a quick TikTok.
The world's climate has been consistent for most of the history of modern humanity - certainly since the advent of farming - until the 20th century.
Where humans grow their food is where they've been growing the same food for THOUSANDS of years.
Industrialization has certainly improved yields and scale, but by and large the spaces we allocate to farming have been chosen because they are optimal for the specific plants we grow there.
Rapidly changing climate means many of these locations will no longer be efficient or effective.
The regions of the world that are becoming comparatively temperate, and might in theory be the new ideal, do not have the nutrients in the soil to be effective.
And anyway, as long as the climate continues to change rapidly, they will not stay stable for long.
This isn't as simple as "We'll just start growing our food 100 kilometers to the north" and nothing else changes.
> The world's climate has been consistent for most of the history of modern humanity
That really was not the case.
Most major civilizational collapses (the Bronze Age, Rome) can be linked to climate change (of course, premodern societies were generally much more sensitive to even relatively slight changes).
Note too that the timescale is discontinuous, with the scale expanding 30,000 years ago and again in 1850 (173 years ago). The span to the right of 1850 shows 350 years; the span to the left of 1850 shows ~30,000 years, before expanding again to show 65 million years of climate history, back to the extinction of the (non-avian) dinosaurs.
Chart is "Average Global Surface Temperature: Difference to 1961--1990 (°C)". Citation is IODP: International Ocean Discovery Program.
Appearing in context here:
"High-fidelity record of Earth's climate history puts current changes in context" by University of California - Santa Cruz. September 10, 2020.
Except that all agricultural lands have increased production with increased CO2. There hasn't been any degradation of agriculture to date with the warming we've seen. And going from 400 ppm to 800 ppm of CO2, you barely get any more greenhouse effect from that doubling - it's already nearly saturated. At most a 0.7 °C further increase, which is easily manageable.
Plug in 400 and 800 here, it barely changes the effect:
https://climatemodels.uchicago.edu/modtran/
Yeah, but it's not just the CO2/temperatures alone.
Heatwaves, heavy rainfalls, droughts, hailstorms and other extreme weather events can devastate crops and disrupt agricultural systems.
Don't forget that overshoot is our problem, and climate change is just one of its symptoms. The loss of biodiversity, particularly pollinators like bees, can threaten crop yields. Increased pests and diseases. Soil erosion. Fires. Etc. etc.
A specific plant’s response to excess CO2 is sensitive to a variety of factors, including but not limited to: age, genetic variations, functional types, time of year, atmospheric composition, competing plants, disease and pest opportunities, moisture content, nutrient availability, temperature, and sunlight availability. The continued increase of CO2 will represent a powerful forcing agent for a wide variety of changes critical to the success of many plants, affecting natural ecosystems and with large implications for global food production. The global increase of CO2 is thus a grand biological experiment, with countless complications that make the net effect of this increase very difficult to predict with any appreciable level of detail.
Cyclonic energy may show a specific trend, but it's just one aspect of the broader climate system. While cyclones might not have intensified, the multifaceted impacts on agriculture, ecosystems, and weather patterns due to increasing CO2 are undeniable. The entire picture should be considered, not just isolated metrics.
Increased CO2 impacts agriculture through changes in precipitation patterns, heat stress, reduced soil moisture, soil salinity, accelerated weed growth, pest and disease proliferation, pollinator disruption, shifts in crop phenology, nutrient imbalances in crops, decreased water availability, altered growing seasons, and the possibility of novel crop diseases, to name just a few.
The full range of potential impacts is vast, complex, and is a subject of ongoing research.
> This isn't as simple as "We'll just start growing our food 100 kilometers to the north"
Why not? Doing that would ironically counter all your prior arguments. Besides displacing the incumbents and requiring adaptation, I do not see a reason why the poor need to be taxed 30% in energy costs in Seattle.
Do you understand that past ~2 °C of warming, permafrost melt accelerates climate change beyond our control, and that the end result in a post-5 °C world is that the ozone layer goes to shit because of all that methane?
Do you understand that food production shuts down once we get beyond a certain carbon threshold?
It's warmer now, but it won't be very nice when you have to live in a cave because skin cancer develops so quickly.
If you want a short answer: "Because even minor temperature variations play havoc with agricultural output. And 1.5 °C isn't minor. Nor is it the limit for climate change - it's just the starting point."
This is not a satisfying answer. Sure, temp variations affect current farms and what they currently grow, but if you assume adaptation, it’s not so straightforward.
"Adaptation" means: people moving from places that become uninhabitable (middle east, Africa, India, Latin America) to places that are inhabitable (Europe, USA, Russia, South America), while those people flee to places that become inhabitable (Alaska, Siberia). That's billions of people, trying to cross borders.
In the mean time there will be failed harvests and conflicts around the globe. Yes, we'll adapt, human kind in some form will survive, but it's pretty gloomy.
What if prevention is futile (whether human-caused or not)? Wouldn't we be better off preparing (which seems practically possible) for the inevitable, rather than the futile exercise of burdening society with ESG, energy costs, etc.?
It's quite straightforward: the cost of "adaptation" - moving global food production and populations - is astronomical; it's absurd. Just look at the impact of the food logistics disruption in Ukraine.
Adaptation will require moving tens to hundreds of millions of people, and we live in a world where people were screaming bloody murder over accepting 20,000 refugees fleeing a civil war that we have been pouring weapons and missiles into for the past 12 years.
> those who contribute the most to global warming also believe that they will be insulated from the effects
I think they are just not facing reality, and that has been normalized by the very effective climate disinformation campaign (i.e., climate denial). People follow the herd, they do what's socially acceptable, and that campaign has made it acceptable enough (which is its goal).
The power lies in that campaign (by the same general grouping that uses disinformation for all sorts of other purposes too). That's what needs to be addressed. Fix that and we fix lots of things.
I agree - LLMs are a massive threat to the current state of government power.
"<LLM>, analyze the contents of this 500 page bill. Who stands to gain from this bill, and what outcomes is it likely to have for the general public? Is this bill in line with good faith evidence-based policymaking for the good of the population and the planet? What existing legal mechanisms could be used to fight the special-interest aspects of this bill?"
Is there really a shortage of this kind of analysis today?
Using those legal mechanisms requires money and often public support. The LLM can’t conjure either.
The world is already full of smart people and good ideas about policy. The reason they’re not getting implemented probably has little to do with things that AI can solve today. For starters, a lot of voters actively dislike policy suggestions from experts and choose politicians who proudly go against expert opinion. Giving AI tools to the experts won’t change that.
I think the difference is that instead of deferring to an expert's opinion, you can interact with a knowledge machine which can explore the topic in and on your terms, answer your questions about it, respond to your points and concerns about those responses, etc.
It's a fundamentally different kind of knowledge generation than reading an expert opinion - it's branching, self-directed, and responsive.
"<More Powerful/Popular State/Corpo-owned LLM>: it's pretty hard to fight this, just trust we've got got your back. Remember, we're currently handling that other case for you. Would be a shame to lose you as a client."
Exactly. I want an unaligned LLM to give me X potential solutions ranging in ethicality from "don't worry about it, they might be nice people" to "steal a nuke and ransom the world", and let me as an aligned human craft my prompt or chain of reasoning to weed out the useless or unethical responses, and then I can decide what is useful and suitable.
This is more or less the process that goes on inside a thinking human, is it not? I don't want to outsource ethical decision making, I want to outsource cognitive effort. By analogy, you don't rely on a bulldozer to decide not to bulldoze a populated nursing home - that's on the user, as are the consequences.
Current power structures demonstrably cannot be trusted to limit themselves to ethical solutions (Military Industrial Complex, Climate Change, etc etc etc pick your poison) - why should they be trusted to censor cognitive tools?
There is no reason this couldn't be watermarked with the individual user's identity, though if users are detected only probabilistically, it could be difficult to make the case for taking action based on inferences alone.
Surely context would show that these are parodies, not impersonations.
Are these people trying to impersonate Elon Musk, or are they parodying Elon Musk on their clearly-marked "not elon musk" (due to the handle) accounts?