But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” [...] [Zurek’s theory] predicts that all the imprints must be identical.
Does this not imply an asymmetry: one half of the state gets imprinted while the other half is neglected? It also raises the question of basis: what counts as a superposition and what does not depends on the choice of basis. Is there a special basis, just as pointer states are somehow special?
Indeed, as you say, decoherence explains why certain bases are special: when a system is in a pointer basis state, it does not continue entangling with the environment (or, at least, does so minimally). When a spinning particle enters a Stern-Gerlach apparatus oriented in the z-direction, spin-z is the pointer basis of the system during its time in the apparatus. A spin-up or spin-down particle does not entangle with the environment, but a spin +x state would quickly entangle with the environment, placing the environment in a superposition and "branching" the total state vector of all the stuff in the universe.
Quantum Darwinism is just a refinement of this picture in which the "environment" interacting with the system is itself modeled as a collection of fragments (e.g., all the different photons that bounce off an object). It turns out that the information about which pointer basis state the system is in (spin up or spin down) is redundantly encoded in each of these fragments. Hence, intercepting one photon that interacted with the system reveals "spin-up" (because the particle is in the upper path), and this agrees with what any other photon that bounced off the object would reveal.
BUT, of course, due to the linearity of unitary time evolution, there is another "branch" in which spin-down was the outcome of the measurement and everyone agrees on spin-down. This is exactly the Everett picture.
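The copying-into-fragments picture above can be sketched in a toy model (my own illustration, not from the thread): a system qubit in the +x superposition interacts with two "environment fragment" qubits via CNOT-like couplings, which redundantly record the pointer-basis (z) information. The result is a GHZ state with one "up" branch and one "down" branch, and the system's reduced density matrix is diagonal in the pointer basis, i.e., decohered.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)  # spin +x: superposition of pointer states

# Initial state: system in |+>, two environment fragments in |0>
state = np.kron(np.kron(plus, ket0), ket0)

def cnot(control, target, n=3):
    """Permutation matrix for a CNOT on n qubits (qubit 0 = leftmost)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

state = cnot(0, 1) @ state  # fragment 1 records the pointer state
state = cnot(0, 2) @ state  # fragment 2 records it redundantly
# state is now (|000> + |111>)/sqrt(2): an "up" branch and a "down" branch,
# with every fragment agreeing within each branch (quantum Darwinism).

# Reduced density matrix of the system: trace out both environment fragments.
rho = np.outer(state, state).reshape(2, 4, 2, 4)
rho_sys = np.einsum('ajbj->ab', rho)
print(np.round(rho_sys, 3))  # diagonal in z: coherence between branches is gone
```

Intercepting either fragment alone already tells you which branch you are in, which is the redundancy that makes the pointer basis "objective" in this picture.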
These debates over the interpretation of quantum mechanics (i.e., what ultimately happens when a "measurement" takes place) are important but don't bear on the effectiveness of quantum computing. Regardless of your favorite interpretation, (almost) everyone agrees that quantum computers should work and be able to do things classical computers cannot.
Yes, the MWI is falsifiable. It asserts that objective collapse does not occur; therefore any observation of objective collapse (such as that predicted by GRW or Penrose-Diosi) would falsify it.
That's not true falsifiability; it's asserting a negative.
I think people resort to MWI because they think it explains everything neatly; it does not!
For example, from my perspective, it does not explain what world I end up in, and if you are saying it's random, you need to come up with a fundamental theory of randomness, unless the response is: it just exists, deal with it.
A negative that can be falsified by a single black swan.
You are mixing this up with another kind of argument. Claiming there are elephants in the room that we cannot see or touch is an example of an unfalsifiable claim.
In fact, claiming there is a quantum collapse, which always looks just like the field equations without a quantum collapse, would make collapse an unfalsifiable theory.
(I don’t believe most proponents of collapse are making that claim.)
It doesn’t. Decoherence is the technical step in the Everett picture that defines what a “classical branch” even is and explains how the state vector branches. Every claim that “decoherence” somehow offers an interpretation distinct from Everett’s is pure confusion.
Can you believe that This is Spinal Tap, The Sure Thing, Stand by Me, Princess Bride, When Harry Met Sally, Misery, and A Few Good Men were all directed by the same man? What an eclectic set of masterpieces.
The Maple syntax may superficially seem easier but actually leads to more problems in practice. The point of the [ ] is that the argument of a function is logically distinct from algebraically grouping terms in an expression. Also, Mathematica is a camel-case language since the underscore is reserved for pattern matching, hence the capitalization of function names. Personally, I’ve found every little Mathematica design feature to be incredibly well thought out, logical, and consistently implemented over the whole of the language.
The introduction to Vol 1 of Weinberg’s Quantum Theory of Fields does this really well, albeit briefly. It feels like getting an “insider’s view” of the historical developments.
Everyone seems to be unsurprised by this move, but I’m genuinely shocked. What a shoot-your-own-foot business decision. Google, evil though it be, doesn’t post the text of your Gmail messages in its search results, because who would consider using Gmail after that? This is the LLM equivalent. Am I missing something?
I don't think HTTPS is responsible for that. Google owns the data; it doesn't matter how it is transported. It does, however, matter how it is stored (which I hope is encrypted in a way such that only you can retrieve it).
Google mines the bejeezus out of your email, and uses it to any number of ends, including manipulating you into buying things, and also passing your correspondence on to the US government. While this is not the same as outright making your emails universally searchable - training Claude on your emails is also not the same as posting their contents.
And - this behavior of Google's has not been penalized, I'm afraid.
"Enterprise and educational customers will continue operating under their existing privacy protections, as the policy changes specifically exclude Claude for Work and Claude for Education services. These commercial accounts remain governed by separate contractual agreements that maintain stricter data handling standards.
Organizations using Claude through business partnerships or educational licenses can continue their operations without concern for the new training policies affecting their sensitive communications or proprietary information."
Thus, I think your claim
> What a shoot-your-own-foot business decision.
likely does not hold: the non-commercial accounts likely led to Anthropic losing money, so they are not liked by Anthropic anyway (but are an "inconvenient necessity" to get people to notice and try out your product offering). With this new decision, Anthropic makes this "free-riding" less attractive.
I bet that Anthropic will soon release a press statement (one that has been sitting in a drawer for quite a long time): "We are listening to your concerns, and will thus extend our 'privacy-conscious offering' to new groups of customers. Only $30 per month."
> With this new decision, Anthropic makes this "free-riding" less attractive
Certainly not for users like you and me; it takes two seconds and three clicks to review the new terms and decline chat training. This is more like Anthropic getting easy training data from people who are unaware or don't care.
Seems like the same thing. They're giving themselves plausible deniability, while knowing they'll still scoop up a worthwhile amount of data/profit from some percentage of users.
> Many users chose Anthropic exactly because they were not like the others.
Companies are less like people and more like bacteria. They are programmatic, like algorithms.
What they will do has already been decided for them, programmed into them, by the rules of capitalism. It is inevitable. There are no good guys, and there are no bad guys, there's just... microbes.
Those who do not engage in capitalism, perhaps they do not seek money at all, have no such hard limitations. But they are rare, because money is blood.
Ok, to be clear, let’s say I’m dumb and accidentally go with the default (I get the color of the opt-out button wrong or something). As if there’s a “publish my private emails to the internet” default-on button in email. Then I use it to edit a rec letter for student X, with my signature Y. (Yes, I know this is dumb, and I try changing names when editing, but I'm sure some actual names may slip through.) A few months later, the next model is released, trained on the data. Student X asks Claude what Y would write in a rec letter about X. Such a button is like a “wings stay on / wings fall off” button on a plane.
The LLM equivalent is what Google does do, which is train its spam filters on the contents of your emails coupled to the signal of what human beings flag as spam.
(It was one of the first significant value-adds of GMail: at its scale, Google could create a global-concept understanding of the content and pattern of spam across hundreds of millions of users. That was the kind of Big Data that made it possible to build filters where one could confidently say "This is tuned on all spam in the wild, because we've seen all spam in the wild").
My path crossed Nguyen many years ago and I can vouch that he is a very smart, nice, ethical, and solid dude who knows his stuff. I’m also a physicist and know enough about the relevant math and physics to evaluate Nguyen v. Weinstein, though I haven’t processed either of their papers deeply. But, fwiw, Tim’s critique is detailed and readable. In particular, what he says about a faulty complexification step makes perfect sense and would spell death for an approach to unification that hinges on detailed accidents of representation theory (as Weinstein’s seems to). To really judge this, I’d have to delve into Weinstein’s baroque-yet-vague theory, which I’m unwilling to do as I’m pretty sure it would be a waste of time.