I keep getting the feeling that the tech media is its own worst enemy, or that it shows how little outlets like The Verge understand about technology.
Though I agree that language is not intelligence, suggesting that the AI boom is only about LLMs, or that LLMs do not create value is incredibly misleading.
I disagree with the base concept that to justify the current investment, we must arrive at AGI.
Estimates put total AI investment at $1.5T.
Is it a lot? Sure.
But what is going to come out of it?
Faster and likely improved results from medical imaging.
Lower cost and widespread ability to create images and videos, denting the multi-trillion dollar marketing industry.
Self-driving and advanced driver assistance: lives saved, costs reduced.
Improvements in education and low-cost tutoring available to more people.
Let's say these investments just break even with spend. Isn't that better for society?
I know people are going to say, "but what about the radiologists?"
We have a shortage of GPs and doctors to care for an aging population. These highly trained doctors can do more in the medical community.
What about the actors, directors, sound engineers, etc. in the media industry? This industry will likely shrink, but we can't ignore their expertise entirely, and it won't go away. A friend is a voice actor. He isn't being replaced, but he is getting less work: not because of the final production, but because in the pre-production phases they don't need his expertise when they can use AI as "good enough".
The lens I look at this through is my grandfather, who developed film for a living. That job disappeared. Nobody cried out for the film developers when we made the shift to digital. We create more imagery now than ever before, more people are employed in the imaging business than ever before.
As a (former) software engineer myself, do I believe AI will replace engineers?
I think this is true only as much as packages replaced engineers. When I started programming, there weren't many open-source packages, and there weren't tools like NPM or Cargo for managing them. I saw the transition from "write most of the code yourself" to "lego-brick" style programming. It didn't reduce the number of programmers; it increased what we could do and allowed us to do more interesting work than boilerplate.
The compression of events and their semantic residue into tokens, symbols, metaphors, sentences, images, segments, parameterized in any shape or form, is irrelevant next to the semantic load the event carries across a variety of analog states.
No creativity emerges from this, just mimicry of the tokens and symbols in their reduced, modeled parameters.
That's the problem with LLMs, frontier models, and RL. It's all subject to the bottlenecks of math and symbols.
The analog load of semantics (tasks, actions, variance, scales), and the immensity of the nested forms these take between brain, body, screen, and canvas, aren't replicable as creation.
It's not information or stimuli we engage with; those are convenient false reductions we're realizing are inconvenient. There's much more that the brain and eye detect in reflection. The words and symbols are an aftereffect the computer detects on its way to disregarding the events.
We see this aftereffect as creativity, thinking, etc. only through a clever illusion the words and tokens hand us. Nothing more.
I disagree with your view on creativity, and indeed find LLMs to be remarkably creative. Or perhaps, to sidestep the anthropomorphization issue, I find that LLMs can be used as tools to produce creative works, greatly amplifying one's creativity to the point that a "near 0 creativity" subject (whatever that is) can create works that others will view as profoundly creative.
In truth, I don't think there's likely to be a single correct definition of "creativity". We're all just moving each other's goalposts and looking at different subjective aspects of our experience.
What I do know is that I have talked to dozens of people, and have friends who have talked to hundreds. When shown what LLMs and generative models can do, a significant portion of them label the work creative. Rather than deem these people ignorant, I would rather consider that if enough people see creativity somewhere, perhaps there is something to it: LLMs writing poems and stories, connecting concepts you wouldn't otherwise see connected, birthing ideas in images and video that had never existed before.
Of course, I'm aware that by this logic many things fall apart (for example, one might be tempted to believe in god because of it). Nonetheless, on this issue, I am deeply in the creativity camp. And while I am there, the things I get LLMs to create, as do people I know, will continue to induce deep emotion and connection among us.
If that's not creativity to some, well, that's life, full of creative ways to define creativity or eschew it from the world.
LLMs can't innovate form, and form/format innovation is the core of creativity. Neither can RL or frontier models. That's what creativity is: the ability to sense wordless states beyond existing status-quo cause-and-effect exchanges. Tokens have no game here. It's that simple; it's how cutting edges are broken through. The tech is always trapped behind its aesthetic inputs, which is just another fatal blow to its ability to create.
I'm not concerned with "hundreds" of people's opinions. And stories and poems are dead or dying forms.
You may be in the creativity camp; stay there. It's as generic as creativity gets. From my POV, as someone building a hybrid between games and movies, this tech is dead, dull, artifactual, like a fossilized version of art.
Yes, of course we have definitions of creativity; see Anna Abraham: "creativity is doing something unique and new, using existing networks for things they weren't designed for".
And we have the wordless processes that materialize as affinities and events in the imagination.
I'm very struck by coders who are neither engineers, scientists, nor creatives in the arts, yet have the false intuition and self-delusion that this code is creative. Nonsense.
I'm afraid LLMs have zero comparable ability to either, as those processes exclude words, symbols, tokens, etc.
Perhaps in their basic form, but you could do a variation on them, like:
>AlphaEvolve is an evolutionary coding agent for designing advanced algorithms based on large language models such as Gemini. https://en.wikipedia.org/wiki/AlphaEvolve
which came up with a novel improvement in matrix multiplication.
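At its core, that kind of system is an evolutionary loop: propose a mutated candidate, score it, and keep it only if it improves. Here is a minimal toy sketch of that loop in Python, where a random numeric perturbation stands in for the LLM's proposal step (in AlphaEvolve the mutation operator is an LLM editing code, which this deliberately does not attempt to reproduce); the target vector and all function names are illustrative, not from any real system.

```python
import random

def mutate(candidate):
    # Stand-in for the LLM proposal step: perturb one "gene".
    # (In AlphaEvolve, the mutation is an LLM rewriting program code.)
    i = random.randrange(len(candidate))
    child = list(candidate)
    child[i] += random.uniform(-1.0, 1.0)
    return child

def fitness(candidate, target):
    # Lower is better: squared distance to a target vector.
    return sum((c - t) ** 2 for c, t in zip(candidate, target))

def evolve(target, generations=2000, seed=0):
    # A (1+1) hill climber: selection keeps only improvements.
    random.seed(seed)
    best = [0.0] * len(target)
    best_score = fitness(best, target)
    for _ in range(generations):
        child = mutate(best)
        score = fitness(child, target)
        if score < best_score:
            best, best_score = child, score
    return best, best_score

best, score = evolve([3.0, -1.5, 2.0])
print(best, score)
```

The selection step is the part doing the work: nothing here "knows" the answer, yet the kept-if-better rule steadily drives the candidate toward the target.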
It's not evolutionary; that's functionality. There are no algorithms in nature. Evolution is a tinkering process that finds by selection, and functionality is what we post-hoc ascribe to the success.
Anything reduced to symbols is simply functionality within models; to claim it's creative is illusory. Creativity, imagination, etc. exist in a realm beyond, and preceding, symbols and metaphors. Creativity is "thinking around symbolic bottlenecks".
The matrix multiplication breakthrough is still under the glass ceiling symbols provide. A new way to do math isn't the same as visual scale-invariant paradoxes. It's not creativity; the word is a metaphor, and the closer word is optimization.
As a (rather obsessive, perhaps compulsive) poet, I will indeed remain in my camp :)
(I do get what you're saying. Yet, I am not convinced that processes such as tokenization, and the inherent discretization it entails, are incompatible with creativity. We barely understand ourselves, and even then we know that we do discretize several things in our own processes, so it's really hard for me to just believe that tokenization inherently means no creativity).
Keep in mind there are events; they are real. Words are just symbols we use to make thoughts or events cohere as memories, but they're artifacts with no direct connection to the thoughts. Nor do any tokens, symbols, codes, etc. have one. These are bottlenecks: they have no analog, and they lack specificity. Creativity is the ecological exchange between body and environment to make records that engage paradox wordlessly.
That's what the article's big reveal is about. The events (which can be externalized creatively) are not really creative as words. That's a big problem for the species in general, a glass ceiling no one is recognizing or taking notice of. And that gives you an idea of how poetry and code are trapped behind it.
> Let's say these investments just break even with spend. Isn't that better for society?
That's a big leap you're making. First of all, you're assuming that all of what you describe will happen (show me a true self-driving car outside of test cases, for example).
And on top of that, you're saying it breaks even with spend, which is something we simply don't know. My guess is that every vendor is subsidizing to gain market share.
I don't want to debate utility. But only when it's accurately priced will we see where it's useful.
It's largely around LLMs, plus some generative image models. That is _where the money is_. Sure, some people are doing some CV stuff that might have some medical applications, but no-one's spending tens of billions of borrowed money on datacenters for _that_.
> or that LLMs do not create value is incredibly misleading.
They're not suggesting that.
Fundamentally, the current economic bubble around 'AI' is based on 'AGI' being achievable, soon. Otherwise, the valuations and amounts of money being spent simply do not make sense. LLMs being somewhat useful will not cut it; the spending implies expectation of dramatically increased capabilities, soon.
> the spending implies expectation of dramatically increased capabilities, soon.
Even more than that, the spending is a death-race in expectation of:
1) a team reaching runaway superintelligence,
2) that it will be winner-take-all for the first team to get there, and
3) fear that somebody else will get there first.
So, everyone is spending as much as they can beg, borrow, or steal because in their view, there is only first place or being part of the underclass ruled by the other team that gets there first.
This is not about just making useful products/services that can increase productivity for their customers.
The shortage of doctors and medical professionals is artificial, though, and unevenly distributed. Technology like AI isn't going to solve structural issues like health insurance and private-equity hospital takeovers distorting the healthcare market.
> The lens I look at this through is my grandfather, who developed film for a living. That job disappeared. Nobody cried out for the film developers when we made the shift to digital.
This is how progress has created so much human suffering: by impacting people in a way just sparse enough that you can believably imply "no one cried out for x".
Yes, they did. It impacted thousands, if not multiples of that, many in ways they never recovered from. Some of them probably committed suicide, or ended up homeless, or in some other way had their lives destroyed. And that impact reverberated onto their friends, families, and communities. And maybe the New York Times even wrote an article about it, but we collectively as a society ignored that suffering.
(I make this remark merely as a gag. I think you pinpoint an issue which has been unresolved for ages and is knee-deep into ethics. One could argue that many of our disagreements about AI and progress (in a broader sense) stem from different positions on ethics, including utilitarianism).
I’m a utilitarian, but a transparent one. I think most people are uncomfortable saying “I acknowledge the pain and suffering it took to make my iPhone, and the tradeoff is worth it.”