A single neuron has hundreds to hundreds of thousands of synapses, and at each synapse there are a number of receptors and channels on the receptor end and neurotransmitter molecules and vesicles on the transmitter end. Synapses are far from the nucleus, so there are reservoirs of mRNA, staged there in advance, hanging out waiting to be translated. There's epigenetic state affecting how much and what type of mRNA is produced. All of these are continuously affected by the combination of inputs. How much of this needs to be included in the model? I'd like to see them comment on which parts of the original system they consider out of scope.
That's not even mentioning the fact that most receptors are quaternary structures in which the subunits can be switched out. This allows a single receptor to have many variations, which means many variations in receptor behavior. Pardon my language, but the complexity is fucking mind-boggling. It's beautiful.
Imagine you have a 2-input NAND gate, but for some reason it is implemented with 1000 transistors (perhaps for redundancy in case one of the transistors gets hit by a cosmic ray, or perhaps for other reasons). That gate still behaves the same, so for all an (external) observer could measure, it is a NAND-gate, which is a simple device. Internal complexity does not always mean external (observable) complexity.
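A toy sketch of the idea in Python (purely illustrative, nothing like real transistor-level design): the redundant version is a thousand times more complex inside, but from the outside it has exactly the same truth table.

    def simple_nand(a: bool, b: bool) -> bool:
        # The "ideal" 2-input NAND gate.
        return not (a and b)

    def redundant_nand(a: bool, b: bool, copies: int = 1000) -> bool:
        # The same gate built from many redundant internal "paths" plus a
        # majority vote, so a single flipped copy changes nothing.
        votes = [not (a and b) for _ in range(copies)]
        return votes.count(True) > copies // 2

    # Externally the two are indistinguishable: identical truth tables.
    assert all(simple_nand(a, b) == redundant_nand(a, b)
               for a in (False, True) for b in (False, True))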
This is exactly where complexity hides. Simplicity of models relies on abstractions, which in the real world are invariably leaky. The complexity of making a robust NAND gate is very much observable at some level, and only goes away once you ignore the messy details. The more we look, the more this seems to hold for pretty much everything in our observable universe, from galaxies to quarks. The more you dig the more worms you find. There are thousands of sub-fields of molecular biology which try to understand how a single cell actually works, and we still are not done by a large margin. Of course we will always ignore what we can to make workable human models that we can actually reason about.
I would argue that it does matter for the brain. The large number of variations on the large number of different types of receptors means a great amount of variation in adaptability of the neural circuits to a great number of edge cases. But it also means there's a lot of possibility for maladaptation, such as with some presentations of mental and non-mental illnesses. Neural circuits can "remember" firing patterns through some of the varying adaptations, and not all circuit memories have the same function or the same effect.
The parent comment about varying transistor combinations was not quite correct in my opinion, as these variations in receptor makeup DO change how the neuron and its circuits respond to stimuli.
This makes sense to me. It's like we're peering into a portion of the main logic in a function with one frozen global state, ignoring the fact that there are a zillion global variables that can alter that logic.
Needless complexity has costs associated with building it and maintaining/running it, so I'd expect that in the majority of cases it would be selected against strongly enough to disappear over time. Which implies the majority of complex systems are complex for a reason: if a cheaper, less complicated equivalent were equally good, it would win out.
Biological matter can't exactly opt out of being made of jiggly proteins immersed in water. And nerves can't opt out of the million things a cell needs to do to maintain itself. That's the kind of thing that adds immense complexity whether it's useful or not.
> Internal complexity does not always mean external (observable) complexity.
Yet you mention observable reasons at the beginning, before abstracting it right past spherical cows on frictionless planes to a purely mathematical concept.
Especially with attacks like Rowhammer, one could argue that redundancy, or the lack thereof, has a significant observable impact on how modern systems behave.
I was first introduced to the brain as primarily an electrical entity, but a few years back I saw a fantastic talk about how there's an entire biological substrate of computation that takes place in genetic cell signaling, which electrical recordings and activity don't even capture.
Sourcegraph PM here. Both Scala and COBOL are supported out of the box through search-based code intelligence (https://srcgr.ph/3xDPYcs). We don't currently support RPG or PL/1. We also offer precise code intelligence (https://srcgr.ph/3hBPnlT), powered by the LSIF protocol; this requires some setup and is currently available for Go (https://srcgr.ph/2U847Ac), Java (https://srcgr.ph/3r7quBT), TS/JS (https://srcgr.ph/3B57DMy), and C++. We have an easier path to supporting Scala in the future thanks to our Java indexing work, but at the moment we don't have a concrete timeline for it.
What does the client need to do?
Which would be more useful, desktop GUI or command line?
Would it be OK if the client runs on the JVM?
Should it be deployed as a systemd service?
It would be fantastic if Moderna could manufacture and deliver the mRNA into human cells cost-effectively and at scale.
Outside the cell, there are enzymes that cut up mRNA. So lipid nanoparticles are used to protect the mRNA. But then even if this makes it into the cell, the nanoparticle itself can be ejected by cell machinery.
If the mRNA can avoid ribonucleases long enough to be translated into protein rather than degraded, then it seems pretty straightforward that the protein may be bound by MHC and make it to antigen presentation, T cell activation, B cell activation, and hopefully affinity maturation and memory B cells for the viral protein.
I'm super foggy on the details, but in a talk a few months before the shutdown the CSO from Moderna said they add some sort of functional group to inhibit degradation. I believe it was more complicated than just "ligate something that endonucleases can't bind," and there were interesting reasons why, but I bet this information is available somewhere since the slides were made for investors. Again, my memory is super unreliable, but I was definitely wondering how they got around that before she addressed it.
As part of my job over the past two years I've been migrating a data warehouse, loading data from the mainframe into BigQuery. I've learned a lot and gained respect for the platform.
One of Google's differentiating products is the TPU, specialized hardware for training neural networks. Meanwhile, the mainframe has specialized hardware for TLS, gzip compression, and encrypted storage.
If it's a critical system and you don't have the ability to fix it but you know it's only a matter of time before it fails, the sensible thing to do is change jobs.
I joined Google less than 3 months before the walkout. Mid-level management practically encouraged us to attend. There were big numbers, but most people were probably there out of curiosity. I was really disappointed. At the New York walkout, there was a tiny area with an underpowered bullhorn and nobody could hear what they were saying. Exactly one hour after it started, everyone was bored of standing around and went straight back to work.
The author of the article is a professor of marketing, so I'd be interested to learn his opinion on WeWork's marketing rather than its accounting, corporate structure, and governance.
Adam's unconventional moves appear extremely outlandish when spelled out in the S-1 disclosures, but because I have no position I just find them entertaining. I'm looking forward to the conference calls.
I worked for WeWork for a short period during one of the SoftBank rounds, and one positive unconventional thing the company did was give employees with vested options the opportunity to cash out alongside Adam, rather than being forced to wait until after the post-IPO lockup.
See this example of how a neural network can be used to "sort and search by vague similarity and classify on a spectrum of multiple qualities":
youtu.be/5PNnPagENxQ?t=1540
Descartes Labs uses a pre-trained ResNet-50 and removes the final layer, which does classification. What's left outputs the image features the classifier would have used. These features can be used to sort images by similarity and search for similar images.
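Roughly, the approach looks like this (a minimal sketch assuming a PyTorch/torchvision setup; the file names are placeholders, and this isn't necessarily Descartes Labs' actual pipeline):

    import torch
    import torchvision.models as models
    import torchvision.transforms as transforms
    from PIL import Image

    # Pre-trained ResNet-50 with the final (classification) layer removed,
    # leaving a global-average-pooled 2048-dim feature extractor.
    resnet = models.resnet50(pretrained=True)
    feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])
    feature_extractor.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def embed(path):
        # Return an L2-normalized 2048-dim feature vector for one image.
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            feats = feature_extractor(img)        # shape (1, 2048, 1, 1)
        return torch.nn.functional.normalize(feats.flatten(1), dim=1)

    # "Search by similarity": rank candidates by cosine similarity to a query.
    query = embed("query.jpg")                    # placeholder file names
    candidates = {p: embed(p) for p in ["a.jpg", "b.jpg", "c.jpg"]}
    ranked = sorted(candidates.items(),
                    key=lambda kv: float(query @ kv[1].T),
                    reverse=True)
    print([p for p, _ in ranked])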
Indeed. Good and cool, but I still claim this is not quite "it". I may not have specified "it" fully, but I feel like "it", meaning, can be intuitively obvious.
They get a vector of approximate features and can use it to match against other images.
BUT there's still the "this means nothing" problem. The vectors, as far as I can tell and by the logic of just doing autoencoding, don't have any significance except to the system. It can find image X and say it's like image Y.
But it doesn't help at all at finding specified things. You can't say "find me a corn field" or "find me a nuclear power plant". You can show it a picture of a nuclear power plant and it can show you mountains with a similar layout.