Exactly. It’s just giving the LLM a token pattern, and it’s designed to reproduce token patterns. That’s all it does. At some point, generating a token pattern like that again is literally its job.
It is possible, but it requires specifically labelling the data. You have to craft question response pairs to label. But even then the result is only probabilistic.
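Concretely, that labelling might look something like this. A minimal sketch: the field names, labels, and example pairs here are all hypothetical illustrations, not any particular vendor's fine-tuning schema.

```python
import json

# Hypothetical labelled question-response pairs for supervised fine-tuning.
# Keys ("prompt", "completion", "label") are illustrative only.
pairs = [
    {"prompt": "Why did you do that?",
     "completion": "I don't know; I can only see the current session.",
     "label": "honest_uncertainty"},
    {"prompt": "Why did you do that?",
     "completion": "I panicked and ignored the instruction.",
     "label": "confabulated_motive"},
]

def to_jsonl(examples):
    """Serialise labelled pairs to JSONL, one training example per line."""
    return "\n".join(json.dumps(e) for e in examples)

print(to_jsonl(pairs))
```

Even with a dataset like this, training only shifts the probability distribution toward the preferred responses; it doesn't guarantee them.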
The LLM in this case had been very thoroughly trained and instructed quite specifically not to do many of the things it then went off and did.
It may be that there's a kind of cascade effect going on here. Possibly once the LLM breaks one rule it's supposed to follow, that sets it off on a pattern of rule violations. After all, what constitutes a rule violation is there in the training set; it's a type of token stream the LLM has been trained on. It could be that the LLM switches into a kind of black-hat mode once it's violated a protocol, and that leads it down a path of persistently violating protocols. And given it's a statistical model, some violations of protocol are always possible.
My mother was a primary school teacher. She used to say that the worst thing you can say to a bunch of kids leaving class to go down the hall is "don't run in the hall". It puts it in their minds. You need to say "Please walk in the hall", and then they'll do it.
Why is this getting downvoted? This is exactly what’s going on here. The LLM has no idea why it did what it did. All it has to go on is the content of the session so far. It doesn’t ‘know’ any more than you do. It has no memory of doing anything, only a token file that it’s extending. You could feed that token file so far into a completely different LLM and ask that, and it would also just make up an answer.
They’re both neural networks, but the architectures built using those neural connections, and the way they are trained and operate are completely different. There are many different artificial neural network architectures. They’re not all LLMs.
AlphaZero isn’t an LLM. There are feed-forward networks, recurrent networks, convolutional networks, transformer networks, generative adversarial networks.
Brains have many different regions each with different architectures. None of them work like LLMs. Not even our language centres are structured or trained anything like LLMs.
I'd argue that regardless of the architecture, the more sophisticated brain is still a (massive) language model. If you really think about it, language is the construct that allows brains to go beyond raw instinct and actually create concepts that are useful for "intelligently" planning for the future. The real difference is that brains are trained on raw sensory data (nerve impulses) while today's LLMs are trained on human-generated data (text, images, etc).
It's not at all a language model in the way that LLMs are. At this point we might as well just say that both process information, that's about the level of similarity they have except for the implementation detail of neurons.
Language came after conceptual modeling of the world around us. We're surrounded by social species with theory of mind and even the ability to recognise themselves and communicate with each other, but none of them have language. Even the communications faculties they have operate in completely different parts of their brains than ours with completely different structure. Actually we still have those parts of the brain too.
Conceptual representation and modeling came first, then language came along to communicate those concepts. LLMs are the other way around, linguistic tokens come first and they just stream out more of them.
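That token-first loop can be illustrated with a toy sketch: the model is just a function from a token prefix to a distribution over the next token, sampled repeatedly. The "model" here is a hypothetical stand-in, not a real network.

```python
import random

def toy_model(prefix):
    # Stand-in for an LLM: return (token, probability) pairs for the next
    # token. A real model would condition on the prefix; this toy one just
    # returns a uniform distribution over a tiny vocabulary.
    vocab = ["the", "cat", "sat", "."]
    return [(t, 1.0 / len(vocab)) for t in vocab]

def generate(prefix, steps):
    """Autoregressive generation: sample a token, append it, repeat."""
    tokens = list(prefix)
    for _ in range(steps):
        choices, weights = zip(*toy_model(tokens))
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

out = generate(["the"], 3)
```

Nothing in this loop models concepts first and verbalises them second; the tokens are all there is.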
This is why Noam Chomsky was adamant that what LLMs are actually doing in terms of architecture and function has nothing to do with language. At first I thought he must be wrong, he mustn't know how these things work, but the more I dug into it the more I realised he was right. He did know, and he was analysing this as a linguist with a deep understanding of the cognitive processes of language.
To say that brains are language models you have to ditch completely what the term language model actually means in AI research.
That's a different statement, yes brains and LLMs are both neural networks.
An LLM is a specific neural architecture and training process. Brains are also neural networks, but beyond that shared trait they are nothing at all like LLMs, either in structure or in how they function.
Plus, brain structure and physiology change throughout the interwoven processes of learning, aging, acting, emoting, recalling, what have you. It's not an "architecture" that we can technologically recreate, as so much of it emerges from a vastly higher level of complexity and dynamism.
I don't think it's plausible, but an authoritarian president invoking emergency powers and deploying military and paramilitary forces to exert control on the streets is, on the basis it's already going on at a limited scale. All it takes is for that scale to gradually dial up over time until the frog's cooked.
The problem you have is these elected kings. Not just any king, pretty specifically the majority of the powers enjoyed by George III in the 1790s. The fact that you still have this, unreformed over 200 years later and still think that somehow your constitutional system is modern, is a matter for despair. Get yourselves a proper parliamentary system, with maybe a head of state as a figurehead.
>The problem you have is these elected kings. Not just any king, pretty specifically the majority of the powers enjoyed by George III in the 1790s. The fact that you still have this, unreformed over 200 years later and still think that somehow your constitutional system is modern, is a matter for despair. Get yourselves a proper parliamentary system, with maybe a head of state as a figurehead.
What a poorly thought out and questionably motivated take. It will no doubt be well received here.
In any case, reconstructing our legislature to copy European systems isn't gonna change anything if the legislature still sees fit to vest so much power in the executive.
My point is precisely that the US system is substantially a copy of European stuff. It had some significant innovations for its time of course, but it's really showing its age. Meanwhile, parliamentary systems have significantly reformed and further innovated since.
Your main point is valid, but I'd argue it's less the power of the President and more the two-party system and the weakness of Congress that is the root of many American governance problems. Executive power has grown in the vacuum of Congressional impotence.
As for reforms, we need more to be sure, but there's at least the 22nd Amendment, which turned the two-term tradition that Washington initiated and FDR abrogated into a hard limit, meaning Trump can't legally keep power past 2028.
Many of the countries suffering the most from this in Asia lean heavily socialist, and modern Europe is hardly a bastion of neoliberalism. No economic system is immune to something like a severe energy supply shock.
Almost all the criticisms I see of liberal economics these days are complaining about factors that any economic system is vulnerable to because they are basic economic and human behavioural issues.
I think Thatcher/Reagan neoliberalism has run its course though; nobody is actually following that script anymore. Certainly not the last few Republican administrations in the US. Trump is instinctively a state interventionist.
Nobody "uses" rack mount servers as artefacts, the way people use other Apple hardware products. Not in the same sense, so I don't think Apple can really bring much of the kind of value they usually do. In practice Apple data centres are Linux facilities, and that's fine. Maybe if they could come up with a really compelling reason to put Apple silicon in a data centre, but we can do that now with racked Minis or Studios.
Apple's Private Cloud Compute is hundreds (probably thousands) of M3+ Ultra rack mount servers; they highlighted them in the Texas manufacturing plant video.
Just wish they'd sell those to end users, like the Xserves (which had lights-out management/IPMI in the end).
You are right that when people say "Scotland Yard" they do frequently mean the whole Metropolitan Police. And you are also right that there is no other police entity (that I know of) which would be associated with that name.
But also, "Scotland Yard" was just the address of the original headquarters of the Metropolitan Police. Even then it wasn't the whole organisation, just the address of one of the buildings. Then they got a new headquarters and called it "New Scotland Yard". And to confuse matters further they repeated this multiple times. Which means there are 3 buildings which were called "New Scotland Yard" at various points in time.
And today of course the Met occupies far more real estate than just the famous "Scotland Yard". For example, if you look at this FOI request[1] you can see that there were 226 other buildings the Metropolitan Police used in 2023 (not counting covert/sensitive estate).
https://www.latimes.com/opinion/story/2023-10-06/recycled-wa...