He could save power by just using dice and decision tables. There's a long history of civilian and military wargaming that's random but designed against real-world situations. Statistical token generation, however, doesn't strike me as a good bet. Hell, during one of the baseball games, Google AI led off its search results with "Seattle is not north of Toronto". It followed that by showing little real knowledge of historical baseball results. I can see it having problems with the latter due to the training set, but damn, where countries and cities are located should be well-defined.
Sounds tricky if the LLM isn't fine-tuned on a dataset with examples of the kind of strategizing you'd like to optimize for. Untrained LLMs are experts at average responses, and better strategies for anything require creativity, domain knowledge, and lateral thinking.
I don't know, dude. Maybe the general is cracking an internal joke that we, the public audience, can't quite grasp.
For example, it could be that some company offered this kind of AI to the army, with those exact words, and they refused (which sounds incredibly reasonable). The interview would serve the purpose of deliberately mocking such an offer, in public.
There is almost no downside to this. As you already mentioned, "everybody and their brother predicts Military Involvement [with AI]", and I quote you to illustrate that this idea is ingrained in popular culture (maybe by design).
Companies such as Raytheon and other traditional military contractors have been researching AI (not necessarily chats) since way before Silicon Valley. They can (and probably do) offer the army consultancy on which technologies are safe to employ for critical decision-making.
I'm not defending them, by the way, just exploring some ideas around what could possibly be going on here. Letting my imagination go a little beyond what the typical scenarios imply.
What I know for sure is that AI-2027 is not a serious prediction. It's a piece of fiction. Kind of in the vein of The Blair Witch Project, with some tones of realism to impress the public, and a little more sinister in that it resists being interpreted as fiction (which I think is a downer; they could have reached more acclaim as artists than as scammers).
For weapons. There are already AI-piloted F-16s that win against human pilots. This is already here, now years old.
For analysis. Who knows what the military has access to. I was able to ask plain old GPT about India/Pakistan, and it was able to weed through media reports and come up with something pretty accurate.
Maybe we won't launch nukes based on LLM feedback.
But it will definitely be used to summarize and analyze data into reports.
Not sure why this seems like it won't be done. The military has many rungs of management that need to push out reports, just like a corporation.
I really don't want to argue in defense of 2027. It is a speculative view; they say so themselves. But speculation on the future has been a valuable tool in the past, so I'm not dismissing it out of hand. These were researchers with some respectable credentials, not simply fan fiction like Shades of Gray or something.
As to the news: there was an article, it matched a point in 2027, I made the connection. How is pointing out a similar connection or idea invalid? 'Fitting speculation into news'. Like no connections can ever be made?
The world is changing a lot. It is hard to keep up with social norms.
-> A group publishes a white paper/forecast/speculation/theory
-> News article comes out with something similar, related, seemingly a direct point.
-> Making a link between the two is 'bad taste'.
I'm not sure. Seems dubious to cast a natural human tendency as 'bad taste'. I think I'll go ahead and keep making connections where they fit.
To be fair, playing devil's advocate: I do agree this tendency can lead people to make crazy connections and go down an internet rabbit hole of conspiracy. This just didn't seem to rise to that.
Communicating is difficult. Something like 50% of text communications are misinterpreted.
So AI 2027 itself is in 'poor taste', is 'shit'.
Not the act of linking ideas in AI 2027 to a news article.
Well, its level of being shit could, I guess, still be debated.
It makes a point of AI increasingly being used in decision-making. And it is. So I guess it depends on how much you're keeping up with current events whether you think this is really unlikely. Or how much you believe what is being reported.
Or the degree to which decision-makers become dependent on AI, and whether it will ever reach the levels depicted in AI 2027, could be debated.
> It makes a point of AI increasingly being used in decision making
But that is nothing new. Everyone knows that. Go back to square one, my first comment, Terminator 3: Rise of The Machines.
So, in what aspect is AI-2027 new?
I can tell you. It bends its knees to some narratives:
- It portrays LLMs as more advanced than they are.
- It patronizes and antagonizes China. US technologies are portrayed as pioneering, while Chinese ones are always copied or stolen.
- It encourages viral sharing of those speculations as true.
Now, by all means, I'm not defending China here. Also, I'm not praising the US. I don't want these two to fight (nothing good will come out of it).
The fact that AI-2027 takes that stance speculatively (those are not part of the "research" aspect of their work, they are "scenarios"), to me, tastes like shit.
It is, in fact, debatable. Maybe you think antagonizing China and promoting private technologies beyond their capabilities is cool. I think it's shit.
Now, again, go back to square two. Decision-making AI for real military use (not to improve their CRM software or PR) is likely coming from the likes of Raytheon, Northrop Grumman and so on.
There is some aspect of cyberwarfare to LLM deployments, but the reality of it is not that obvious, and AI-2027 does nothing to explain how that plays out. In fact, it does much to keep the public away from how it could really play out.
Misinterpreting is a technique. You can always introduce noise into a clear conversation. Maybe you already understand all that I'm saying (I'm an optimist), and are introducing noise and feigning misunderstanding to shepherd the discussion towards a stalemate (you tried several times, like "not sure if we disagree"). Let me be very clear, again: we disagree a lot, on a lot of things.
I'm trying; I'm not sure you are as completely in good faith as you think you sound.
This is the first time you mentioned China as a problem.
How am I supposed to take a comment about Rise of the Machines, and extrapolate your whole argument about how conflict with China is being misrepresented in AI 2027?
How am I promoting antagonizing China? In today's world, isn't the threat of conflict with China in the news constantly? Am I supposed to scrub any reference to any publication that mentions this very common variable, or be accused of promoting it?
China is a central part of the AI-2027 speculation. I can't believe you missed it.
I'm accusing AI-2027 of bad-taste stuff, not you. Unless you are one of the authors, you don't have to feel offended.
I am accusing you of sharing the AI-2027 content, though, which I consider to be in bad taste.
You are supposed to think about whether what you are doing online bends its knees to narratives you don't have control over. I think that's a reasonable thing to do.
It's been a while; I don't remember China being a big part of "Rise of the Machines".
So how was I to make the connection between Rise of the Machines and your objections to AI 2027's view of China? It's a bit of a leap in assumptions from an old Terminator movie to your main problem.
If your main problem is that I should not be promulgating false information, because AI is overblown and China isn't a threat, that's also kind of a leap.
I'll agree, just generally, that we shouldn't be fanning the flames of discord. Not sure this was really to that degree.
I thought the injecting of nano-bots from the future into present-day systems kind of ruined the idea that we rushed into our own disaster. It made it seem like the robots rose up because they were influenced by the future sending tech backwards, instead of 'emerging' on their own.
Which, to be pedantic with myself, I think is different than in T2, where the present-day humans were just analyzing a bit of tech from the future. We rush into our undoing, inevitably.
I think the real-life, kind of slow roll into disaster we can see coming actually feels more scary. Our current reality is now more scary than fiction. Life imitates art?
Be mindful of cultural trends and retconning your own mind. Nanobots (or nanites, or nano-tech) are a trope that came much later in cinema.
That trend was definitely influenced by some T3 stuff, however, in T3 terms, the Terminatrix is still bound by the more general idea of mimetic polyalloy.
Therefore, we can't say T3 uses nanobots.
Also, be mindful of nuanced representations of technology in movies. What we perceive to be robots, often, are not meant to be taken as that. They instead represent a general concept, often materialized into a character.
For T2, that concept is a clash between counter-culture (represented by the T-800) and state oppression (represented by the T-1000), roughly. In the real world, both movements are artificial inserts into the collective consciousness. It's not as clear-cut as I'm explaining, and there are other superimposing representations on top of it, but it should be enough to get the idea.
---
The impression I get is that you're waiting for me to explain that T3 is actually a story of us running into disaster, and you have some sort of expectation regarding how I would do that. If that's the case, I'm sorry to disappoint you!
For T3, there is a scene towards the end where the Terminatrix is shown injecting something into the present-day robots. It was little sparkly things, and then the present-day robots started killing everyone. I assumed these are 'nano-bots'. She took control.
I didn't like that the present-day machines only rose up and started killing everyone after she did this; they didn't have any 'emergent' moment and gain sentience. As far as I know, none of the Terminator movies went back and showed 'achieving consciousness' and deciding to kill all humans.
As opposed to T2, where the present-day humans studied the damaged chip left from T1 and rushed headlong into disaster. The present-day humans were rushing towards tech development at all costs, just like in today's world, regardless of risk. So I thought T2 was more similar to today.