Those two branches despise each other with intense passion.
I never actually made this connection before, but having spent far too much time listening to both camps, this observation does ring true. Another observation I had is that there is an orthogonal axis to the safety/ethics dichotomy that follows a more scholastic approach, namely neuro/symbolism. These two branches correspond, roughly, to the debate between machine learning and mechanized reasoning (a.k.a. good old-fashioned AI).
Vastly oversimplifying, the neural branch believes reasoning comes after learning and that by choosing the right inductive biases (e.g., maximum entropy, minimum description length), then training everything up using gradient descent, reasoning will emerge. The symbolists place a heavier emphasis on model interpretability and believe learning emerges from a logical substrate, albeit at a much higher level of abstraction.
Like safety/ethics "research", neural/symbolic folks are constantly bickering about first principles, but unlike their colleagues in the S/E camp (which is at best philosophy and at worst fanfiction), N/S debates are resolved by actually building things like language models, chess bots, programming languages, type theories and model checkers. Both are valid debates to have, but N/S is more technically grounded while S/E is a bit like LARPing.
> The symbolists place a heavier emphasis on model interpretability and believe learning emerges from a logical substrate, albeit at a much higher level of abstraction.
These people still exist?
Personally, I don’t think gradient descent is the way to AGI either (I think it’s efficient algorithmic search over the space of all computable programs), but I haven’t heard anything from the symbolic crowd since the early 2010s.
Except for classifiers and LLMs, nearly every successful system seems to be a hybrid. So, yes, it exists. It's just out there solving problems instead of generating hype, since hype tends to follow the things without a use case.
Of course. I just finished reviewing a few papers from ICLP (the International Conference on Logic Programming) 2023. This year there was a substantial machine learning element, most of it in the form of Inductive Logic Programming (i.e. logic programming applied to learning; ordinary logic programming is for reasoning), but also a few neuro-symbolic papers.
This September I was at the second IJCLR (International Joint Conference on Learning and Reasoning), where I helped run the Neuro-Symbolic part of the conference. As in the first year, we had people from IBM, MIT, Stanford, etc. etc. (to clarify, my work is not in Neuro-Symbolic AI, but I was asked to help).
Then in January there was the IBM Neuro-Symbolic workshop, again getting together people from academia and industry.
Yeah, there's interest in combining symbolic and statistical machine learning.
Just two data points about why (well because, why not, but):
a) Machine Learning really started out as symbolic machine learning, back in the '90s when people realised Expert Systems needed too many rules to write by hand. A textbook to read about that era of Machine Learning is Tom Mitchell's "Machine Learning" (1997). The work the public at large knows as "machine learning" today was at the time mostly being published under the "Pattern Recognition" rubric.
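To make "symbolic machine learning" concrete: Mitchell's textbook opens with the Find-S algorithm, which learns a conjunctive rule (a symbolic hypothesis, not a weight vector) from labeled examples. A minimal sketch, with a made-up toy dataset in the spirit of the book's "EnjoySport" example:

```python
def find_s(examples):
    """Find-S: return the maximally specific conjunctive hypothesis
    consistent with the positive examples.
    Each example is (attribute_tuple, label); '?' matches any value."""
    h = None
    for attrs, label in examples:
        if not label:
            continue  # Find-S ignores negative examples
        if h is None:
            h = list(attrs)  # first positive example: most specific hypothesis
        else:
            # Generalize: keep attributes that agree, wildcard the rest.
            h = [hi if hi == ai else '?' for hi, ai in zip(h, attrs)]
    return h

# Toy data: (sky, temperature, humidity) -> enjoy?
data = [
    (("sunny", "warm", "normal"), True),
    (("sunny", "warm", "high"), True),
    (("rainy", "cold", "high"), False),
    (("sunny", "warm", "high"), True),
]
print(find_s(data))  # -> ['sunny', 'warm', '?']
```

The learned rule reads as "sunny AND warm (humidity irrelevant)": interpretable by construction, which is exactly the property the symbolic camp keeps emphasizing.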
b) To the early pioneers in AI having two camps, of "statistical" and "symbolic" AI, or "connectionist" and "logic-based" AI, just wouldn't make any sense at all.
Consider Claude Shannon. Shannon was at the Dartmouth workshop where "Artificial Intelligence" was coined, in 1956. Shannon invented both logic gates (in his Master's thesis... what the fuck did I do in my Master's thesis?) and statistical language processing (in "A Mathematical Theory of Communication", where he also invented Information Theory, btw).
Or, take the first artificial neuron: the Pitts and McCulloch neuron, first described in 1943, by er, by Pitts and McCulloch, as luck would have it. The Pitts & McCulloch neuron was a self-programming logic gate, a propositional logic circuit.
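The "self-programming logic gate" point is easy to see in code. A minimal sketch of a threshold unit in the McCulloch-Pitts spirit (simplified: the 1943 formalism treats inhibitory inputs specially, while here a negative weight stands in for inhibition), where the same unit computes AND, OR, or NOT depending only on its weights and threshold:

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# The same unit becomes a different logic gate as the parameters change.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)   # both inputs must fire
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)   # any input suffices
NOT = lambda a:    mp_neuron([a],    [-1],  0)    # inhibitory weight flips it

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
assert [NOT(a) for a in (0, 1)] == [1, 0]
```

Learning, in this view, is just the process that picks the weights and threshold: the neural and the symbolic story start from literally the same object.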
Or, of course, take Turing. Turing described Turing Machines in the language of the first order predicate calculus, a.k.a. First Order Logic (mainly because he was following from Gödel's work) and also described "the child machine", a computer that would learn, like a child.
To be honest, I don't really understand when or why the split happened, between "learning" and "reasoning". Anyone who knows how to fill in the blanks, you're welcome. But it's clear to me that having one without the other is just dumb. If not downright impossible.
I feel the difference is that, to a first approximation, AI alignment folks tend to be grey tribe and AI ethics people tend to be blue or rainbow tribe.
The schism isn't a technical disagreement about AI, it's a disagreement between subcultures about what values are important.
> The schism isn't a technical disagreement about AI, it's a disagreement between subcultures about what values are important.
I wish it were so, but I think the tribal lines create a lot of motivated cognition about the technical details. I don't think there are many blue tribe members who say "yes we agree that rogue AI has a 10+% chance of destroying humanity by 2040, but I still think that it's more important to focus on disparate impact research than AI alignment." They instead just either assign trivial probabilities to existential risks occurring, or just choose not to think about them at all.
Sorry, yes that wasn't clear. Scott Alexander coined most of them in "I can tolerate anything except the outgroup" [1]
Red & blue are exactly what a U.S. reader would think they are. "Grey" is an attempt by Scott to define something else, loosely hacker-ish / rationalist (you can tell who he thinks the cool people are). "Rainbow tribe" I just made up, but I think you can guess what I mean now based on context.
Alignment tends to be "rationalist panic" and ethics is a more traditional "moral panic". I believe both perspectives are valuable.
Rainbow was pointless for him to introduce since he then said "blue or rainbow" and rainbow would definitely fold into blue.
Read the essay if you want the full story. Short version: Blue is a cluster of Democrat/left/liberal/urban. Red is a cluster of Republican/right/conservative/rural. Gray is a smaller niche of like Libertarian/Silicon Valley/nerds.
Ethics vs alignment is actually a perfect example of the Blue/Gray split. How dare you act like your science fiction fantasies are real when this tech is harming marginalized communities? vs. How dare you waste time on mandating representation in generated art when this thing is going to kill us all?
Oddly enough, at the wedding of a friend you could tell his friends were from the grey tribe because they all wore monochrome, and his wife's friends were from the rainbow tribe because they all wore rainbows.
The issue is that the safety folks keep making the trolley logic run over the ethics folks, and the ethics folks keep pointing out that a truly ethical AI safety person would prioritize sacrificing themselves.
The N/S dichotomy is fake; a lot of organizations these days explore hybrid approaches involving both NNs (or other probabilistic models) plus symbolic rules of some kind.