How the mouse's brain is scanned: very invasively.[2] That's both impressive and scary. They're scanning the surface of part of the brain at a good scan rate and in high detail. They're seeing the activation of individual neurons. This is much finer detail than non-invasive functional MRI scans.
Does the data justify the conclusions? The maze being used is a simple T-shaped maze. The "state machine" supposedly learned is extremely simple. They conclude quite a bit about the learning mechanism from that. But now that they have this experimental setup working, there should be more results coming along.
1. We can observe how the state machine gets generated: first just a jumble of locations in a hub-and-spokes topology (no correlations), then some pairwise correlations start appearing, forming a kind of beads-on-a-string topology, and then finally the mental model snaps marvelously into two completely separate paths that meet at the ends. It's amazing to see these mental models form in vivo out of initially unstructured perceptions.
2. In addition to standard HMM modeling, the authors find that a "biologically plausible recurrent neural network (RNN) trained using Hebbian learning" can mimic some of this (but not exactly). More interestingly, they find that LSTMs and transformers cannot. That makes sense structurally, but it's a good reminder for those who believe the anthropomorphic hype that transformers have memory or the like (they don't :) ).
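For intuition, here is what "Hebbian learning" means at its most minimal: a weight update computed only from locally available pre- and post-synaptic activity, with no gradients and no backpropagation through time. A toy numpy sketch, not the paper's actual network; the sizes, decay term, and random input are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    N = 64                # number of recurrent units (illustrative)
    LR = 1e-2             # Hebbian learning rate (assumed)
    DECAY = 1e-3          # weight decay, keeps W bounded (assumed)
    W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # recurrent weights

    def step(r, x):
        # one recurrent update: new firing rates from old rates plus input
        return np.tanh(W @ r + x)

    r = np.zeros(N)
    for t in range(1000):
        x = rng.normal(0.0, 0.5, N)   # stand-in for sensory input along the maze
        r_new = step(r, x)
        # Hebbian rule: strengthen connections between co-active units.
        # Every quantity used here is locally available at the synapse --
        # nothing like backprop-through-time, which is what LSTMs and
        # transformers require for training.
        W += LR * np.outer(r_new, r) - DECAY * W
        r = r_new

The locality is the point: the update rule touches only pre- and post-synaptic activity, which is what makes it "biologically plausible" and what gradient-trained architectures lack.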
Couldn't memoryless neural networks still learn the next-state function of a finite-state machine, depending on the training algorithm? Especially since the eventual usage of such networks is to be called over and over again to generate the next token; conceptually, this seems to me analogous to finitely unrolling a while loop or a processor pipeline.
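That intuition is easy to make concrete. A minimal sketch (the states and actions here are a toy invention, not from the paper): a pure function of (state, action) has no memory of its own, yet reproduces an FSM's run as long as the caller threads the state back in.

    # Toy T-maze as a finite-state machine. The "network" is replaced by
    # a lookup table, since only the calling pattern matters here.
    NEXT_STATE = {
        ("start", "forward"):     "junction",
        ("junction", "left"):     "left_arm",
        ("junction", "right"):    "right_arm",
        ("left_arm", "forward"):  "reward",
        ("right_arm", "forward"): "no_reward",
    }

    def next_state(state: str, action: str) -> str:
        # Pure function of (state, action): no hidden memory of its own.
        return NEXT_STATE.get((state, action), state)

    # "Unrolling the while loop": repeated calls to a memoryless function,
    # with the state threaded through by the caller, walk the FSM.
    state = "start"
    for action in ["forward", "left", "forward"]:
        state = next_state(state, action)
    print(state)   # -> "reward"

This is also the sense in which an autoregressive model is stateful in practice despite being stateless between calls: the "memory" lives in the context the caller feeds back in, not inside the network.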
So technically we have the technology to simulate a human brain. Just not anywhere near real time. And not at any semblance of reasonable cost. And not guaranteed to simulate the important parts.
In principle that seems plausible. Assuming there is nothing immaterial about consciousness in the brain (not unreasonable), one may conclude that we are biological information-processing mechanisms, which should then be computable by computers. But that goes against most people's intuition of what it is to be inside a brain, doesn't it?
Beyond the philosophical problem, quite a bit of science fiction has already thought through some scenarios once we become able to create digital humans. If you've got just a few minutes, read Lena [1]; it's a fun, uncanny read.
If you mean it in the trivial sense that "computers must in principle be able to simulate a brain", then yes, and that has been obvious for many decades already.
If you mean that we know what algorithms to use to simulate a brain, then no, and this paper is one advance toward the goal of knowing those algorithms. But it does not go all the way there.
The civilian level is the state of the art. The chip industry is at the cutting edge, there is nothing beyond it that is available at scale, and in this instance: scale matters!
There are some small exceptions of course: RSFQ digital logic is insanely fast (hundreds of gigahertz), but nobody has scaled it to large integrated circuits.
Supercomputers are built with somewhat esoteric parts, but not secret unobtainium. At least in principle the same RDMA switches and network components are commercially available. Similarly, the specialised CPUs like the NVIDIA Grace Hopper are available, although I doubt any wholesalers have it in stock!
To believe otherwise is to believe that governments (plural!) have secretly hidden tens of billions in cutting-edge chip fabs, tens of billions in chip design shops, and more.
In reality the government buys their digital electronics from the same commercial suppliers you and I do.
Only a handful of specialised circuits are made in secret, such as radar amplifiers.
Are processors and processor speed the only limiting factor for these applications? (They are probably fast enough anyway and could be a non-factor; communication between neurons is not that fast compared to clock speeds, if I remember correctly.)
Especially in an era in which the recorded data can be fed to an algorithm that approximates dynamic brain maps with more or less accuracy?
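For a sense of scale, here is a back-of-envelope estimate using commonly quoted order-of-magnitude figures; every number below is an assumption, not a measurement:

    # Rough estimate of compute for real-time brain simulation.
    # All figures are order-of-magnitude textbook values (assumed).
    synapses        = 1e14    # ~100 trillion synapses in a human brain
    mean_rate_hz    = 1.0     # average neural firing rate, order of 1 Hz
    flops_per_event = 100     # assumed cost to model one synaptic event

    events_per_sec = synapses * mean_rate_hz           # ~1e14 events/s
    flops_needed   = events_per_sec * flops_per_event  # ~1e16 FLOP/s

    gpu_flops = 1e15          # one modern accelerator, order of magnitude
    print(f"accelerators for real time: ~{flops_needed / gpu_flops:.0f}")

The result swings by orders of magnitude with flops_per_event (a point-neuron model and a detailed biophysical one differ enormously), which supports the parenthetical above: raw clock speed, gigahertz against millisecond neural timescales, is not the bottleneck; the required model fidelity is.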
One concern is the lack of ethics, or more accurately, the different ethical considerations in the spy agencies.
They have every motivation to capture personal phone calls and text chats in bulk and run them all through an LLM-like training regime so that they can ask it questions like: “Does so-and-so plan a terrorist attack?”
Somewhere in an NSA data centre there is a model being trained on your emails, right now.
This is a misconception that shows up now and then.
In the 1940s through the 1970s, the military really did have a broad tech edge over the civilian market. The USAF was once the largest buyer of transistors. In the 1980s, the civilian electronics market became much larger and passed the military market. This upset some military people. Articles about "premature VHSIC integration" appeared, complaining that civilian electronics was ahead of military devices.
There were a lot of minicomputers in DoD systems for years after everybody else was using smaller and cheaper microprocessors. Some stuff was even older. The USAF's satellite control facility and NORAD at Cheyenne Mountain had the same consoles NASA used for 1960s Apollo well into the late 1980s.
We had a very early Sun workstation, one with an auxiliary color monitor. Someone put a world map on the screen and overlaid it with the current positions of USAF satellites and ground stations, as a demo. A visiting USAF general saw this and demanded that the entire system be immediately shipped to the USAF's satellite control facility. They were still using big manual plotting boards, updated by people reading printouts, to track what was coming into range of each ground station. So the USAF got our Sun system, and it was immediately replaced with a new one.
There was some cool stuff. I got to see a system in the 1980s where you could look at stored photos and do pan, zoom, and rotate. The UI was a zoom lever, a pan joystick, and a rotate knob. Took a half rack of custom electronics. They were building something like Google Earth for satellite photos. Now, of course, everybody has that capability.
There were niches where DoD tried to stay ahead. NSA put much effort and money into cryogenic computing. They had gigahertz electronics in the 1960s. There were several generations of NSA cryogenic technology, but each time, the commercial market pulled ahead with a cheaper and faster technology.[1]
If you read DARPA solicitations, you can see where DoD is trying to get ahead. Non-GPS precision navigation is a big thing, for example.
This was the typical pattern by 1990. DoD would be ahead in some very narrow niche for which there was little commercial market, but overall, behind commercial technology. I've been out of that world for many years, but from what I hear, it's still pretty much like that in the land of classified tanks.
[1] https://www.biorxiv.org/content/10.1101/2023.08.03.551900v2....
[2] https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-01...