Negative temperature (wikipedia.org)
121 points by occamschainsaw on March 15, 2022 | hide | past | favorite | 71 comments


Statistical physicist here. Negative temperatures occur when a system has a finite number of high-energy states. Ordinarily, when temperature increases, both the energy and the "randomness" or entropy of a configuration increase as well. But if there are only a few high-energy states available, then past a certain point the randomness will not increase with added energy; it will decrease. That's negative temperature! Because we define temperature as the rate of change of the energy with respect to the entropy, in systems like the one I described, this rate of change becomes negative.


I find your explanation much better than the one at https://simple.wikipedia.org/wiki/Negative_temperature; please consider expanding that page a bit!

Back to the substance of the topic: I feel let down here; negative temperature sounds like something amazing, but it turns out to be more of a quirk of the definition. I wonder if physicists would have chosen this definition had they been aware of this consequence at the time.

Also I wonder if there's an intensive thermodynamic property that actually says how much thermal energy is in the system, since temperature apparently won't do it?


> Also I wonder if there's an intensive thermodynamic property that actually says how much thermal energy is in the system, since temperature apparently won't do it?

Thermodynamic beta[1] does exactly that. If we consider temperature as "tendency to give energy away", then the scale starts at zero, heads out through positive infinity, comes in through negative infinity, and then stops at negative zero. I.e. if T_a > T_b > 0, then system a gives energy to system b. Then, if T_c < T_d < 0, then system d gives energy to system c.

Thermodynamic beta (really just 1/T, the "coldness" of a system) fixes this: if B_a < B_b anywhere on the number line, then B_b is colder than B_a.

[1]: https://en.wikipedia.org/wiki/Thermodynamic_beta
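To make the ordering concrete, here is a small sketch (the function names are mine, with k_B folded into the units) of the rule "energy flows from smaller beta to larger beta":

```python
def beta(T):
    # thermodynamic beta ("coldness") = 1/T; larger beta means colder
    return 1.0 / T

def gives_energy(T_a, T_b):
    # the hotter system (smaller beta) gives energy to the colder one
    return "a" if beta(T_a) < beta(T_b) else "b"

# ordinary positive temperatures: larger T is hotter
assert gives_energy(300.0, 100.0) == "a"
# any negative temperature is hotter than any positive one
assert gives_energy(-1000.0, 1e6) == "a"
```

Note that comparing betas handles the whole scale uniformly, with no special-casing of the sign of T.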


Do you think the big bang could have been an "entropy population inversion" event? All high entropy states were occupied, so whatever event started the big bang caused the universe to dip into negative temperature and entropy and allowed our universe to form?

Or.. am I just a crank?


So the basic idea is that if you have a two-particle system (a and b), and each particle can be in a low-energy (a, b) or high-energy (A, B) state, then you can have the following combinations: ab, Ab, aB, and AB. Since particles are indistinguishable, Ab and aB are the high-entropy states but ab and AB are low-entropy states?


Basically. Instead of a and b, let's just look at L(ow) and H(igh) energy states. If I have the ground state,

    L L L L
there's only one way to arrange the system: everything in the low state, and the entropy is log(1) = 0. If I add one quantum of energy, I can have

    L L L H, L L H L, L H L L, H L L L
and the change in energy is dE = 1, while the change in entropy is dS = log(4) - log(1). "Temperature" is really just a scaling factor between dE and dS, so in this case T > 0. Adding another quantum,

    L L H H, L H L H, H L L H, L H H L, H L H L, H H L L
Again, dE = 1 (by construction), and dS = log(6) - log(4), which is smaller than the previous step's log(4) but still positive, so T is again greater than 0. Adding one more quantum, however,

    L H H H, H L H H, H H L H, H H H L
and we have dE = 1 and now dS = log(4) - log(6)! We've added energy (conventionally made it "hotter"), but the system has become more ordered. Adding one last quantum,

    H H H H
dE = 1 and dS = log(1) - log(4). This is as hot as the system can get: it cannot accept any more energy and can only give it away, which is why "negative temperatures are hotter than all positive temperatures". If this system is brought into contact with any conventional positive-temperature system, statistical fluctuations mean at least one of the energy quanta we added will flow toward it, cooling this system and heating the other.
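A quick way to check the bookkeeping above is to count microstates with binomial coefficients (a sketch; N = 4 sites as in the example):

```python
from math import comb, log

N = 4  # four two-level sites, as in the L/H example above
omega = [comb(N, k) for k in range(N + 1)]  # microstate counts: [1, 4, 6, 4, 1]
S = [log(w) for w in omega]                 # entropy at each total energy
dS = [S[k + 1] - S[k] for k in range(N)]    # each step adds dE = 1 quantum

# below half filling entropy rises with energy (T > 0); above, it falls (T < 0)
assert dS[0] > 0 and dS[1] > 0
assert dS[2] < 0 and dS[3] < 0
```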


Thank you, this made a lot more sense!


I wish I had more than one upvote.


Rate of change of entropy, I presume, is what you meant.


What’s an example of such a system?


Lasers do this. The essential idea is that an incoming photon (of some specified energy E) knocks a bit of energy (specifically also E) out of an excited particle which leaves as another photon. To make this happen, you need more particles in an excited state than in the ground state[1] which is exactly the same condition necessary for negative temperature.

[1]: https://en.wikipedia.org/wiki/Population_inversion


Take a collection of atoms that somehow have only two energy levels, 0 and 1. Connect them to a heat sink at temperature T. At very low T each atom almost always has energy 0. We know the energy of each atom very well; they're (almost) all 0. People say: there's very little entropy. At very high T each atom has expected energy of nearly 0.5, and the probabilities of energy 0 and energy 1 are nearly equal. So that's maximum entropy. We're as ignorant as it's possible to be.

At negative temperature the expected energy of each atom is >0.5. But as the expected energy approaches 1.0, we know the energy of each atom very well. They're (almost) all 1. That's weird. It's weird enough that you can't assign a positive temperature to these atoms.

Physically, you can get to negative temperature by sneaking atoms into the 1 state. Pumping a laser is an example. But you can't get to negative temperature by just heating with finite-temperature heaters.

Entropy can be measured in bits. If we have 10 two-level atoms at extremely high temperature, that's 10 bits of entropy. The state might be 10 1100 1110 or 01 1101 0111 or any of 2^10 possibilities. On the other hand if we have 10 two-level atoms at extremely low positive temperature, the state is usually 00 0000 0000 and the entropy is close to 0 bits.

"Entropy" in bits is just the size of the random number you need in order to represent the system.


I'm so bad at (physical) science I had to see if "simple" Wikipedia had an entry for this because I couldn't follow it at all and.. it does! :-) https://simple.wikipedia.org/wiki/Negative_temperature


Keep in mind that the "simple" in "simple English wikipedia" (SEW) does not refer to the simplicity of the explanation/exposition, only to the simplicity of the language; it is intended for readers less proficient in English.

> Use Basic English words and shorter sentences. This allows people to understand complex terms or phrases.
>
> Write good pages. The best encyclopedia pages have useful, well-written information.
>
> Use the pages to learn and teach. These pages can help people learn English. You can also use them to make a new Wikipedia to help other people.
>
> Simple does not mean short. Writing in Simple English means that simple words are used. It does not mean readers want basic information. Articles do not have to be short to be simple; expand articles, add details, but use basic vocabulary.

https://simple.wikipedia.org/wiki/Main_Page

It's true you might find an explanation on SEW that you find simpler or easier than on the regular English Wikipedia page, and this might (or might not) be because the constraint of using simple English forces it. However, there is no reason this explanation could not also be put in the regular English Wikipedia page. So you should be bold and consider improving the regular English Wikipedia if you find it inaccessible to its intended audience.


> does not refer to the simplicity of the explanation/exposition, only to the simplicity of the language

Sure, but the use of simple language commonly seems to play out in easier to follow explanations of things on Simple Wikipedia.

> So you should be bold and consider improving the regular English Wikipedia if you find it inaccessible to the intended audience.

I appreciate your confidence, but I find the topic almost unintelligible. I dare say a science educator or the like would make a more reliable job of it. Plus, I'm not sure articles should be dumbed down, because for topics I do understand, I would find it annoying if Wikipedia entries were boiled down into layman's terms.


But it's not Simple Wikipedia. It's Wikipedia in Simple English.

I do agree the two correlate, though, even though I think you could use simple language to describe complex topics without losing information.


I appear to have been caught out by Google's rewriting of page titles – https://zyppy.com/seo/google-title-rewrite-study/ – since if you Google "simple wikipedia" it gives the title of the site as "Simple Wikipedia". But if you don't, it doesn't(!)


This is unfortunate and indeed is going to contribute to confusion about what Simple English Wikipedia is. Ugh.


Jeez, I had no idea they did that! You are correct, I get the same result.


> does not refer to the simplicity of the explanation/exposition, only to the simplicity of the language
>
> Sure, but the use of simple language commonly seems to play out in easier to follow explanations of things on Simple Wikipedia.

I explicitly acknowledged this in my comment: "It's true you might find an explanation on SEW that you find simpler or easier than on the regular English Wikipedia page, and this might (or might not) be because the constraint of using simple English forces it."

> I appreciate your confidence, but I find the topic almost unintelligible. I dare say a science educator or the like would make a more reliable job of it. Plus, I'm not sure articles should be dumbed down, because for topics I do understand, I would find it annoying if Wikipedia entries were boiled down into layman's terms.

The article should be intelligible to its intended audience. If that audience includes non-experts, the article should include descriptions that are intelligible to non-experts. This does not require removing expert-level descriptions, although it is good to warn the reader of what sections can be understood at what level. Depending on the circumstance, it may also be appropriate to split the discussion into two articles, as is done with the Introduction to Quantum Mechanics and Quantum Mechanics articles:

https://en.wikipedia.org/wiki/Introduction_to_quantum_mechan...

https://en.wikipedia.org/wiki/Quantum_mechanics

There are some articles on Wikipedia that are essentially only of interest to experts, and there it is probably not worth the effort to make them accessible, but the Negative Temperature article is not one of them. It should include descriptions understandable to laymen in addition to ones more useful to experts.

I must again emphasize: Simple English Wikipedia is for Simple English language. It's not Non-Technical Wikipedia and it's not for dumbing down anything. If something's not appropriate for the regular English Wikipedia, it's not appropriate for Simple English Wikipedia.


Thanks for the link; I glazed over when the article said "Confined point vortices are a system with bounded phase space as their canonical momenta are not independent degrees of freedom from their canonical position coordinates."

It's like I understand the individual words like phase, canonical and coordinates, but the combination is... an entirely different field of study.


Haha, no yeah, that's a very jargony sentence indeed.

Phase space is jargon for the set of all allowed configurations of a system. In the simplest case, an ideal gas of n spherical atoms, each atom is specified by 3 independent numbers for position and 3 for momentum, so you need 6n numbers total to specify the whole configuration... that collection of possibilities is the phase space. This phase space is only partly bounded: it is bounded in position (V^n, where V is the volume each particle can occupy) but not in momentum (the temperature can always go higher). Because each of these numbers can vary independently of the others, they are called "independent degrees of freedom."

Now, point vortices are an abstract mathematical limit of physical vortices as they get really small. Generally we take these mathematical limits because they are much easier to work with and we are lazy. The article is saying that Onsager chose to explain his theory with point vortices because you can't give them arbitrarily high momenta; in fact their momenta are tied up with their positions, so the two are not independent anymore like they were for the gas.

So Onsager knew that if he chose this system, the bound on volume (well, area; the vortices live in 2D) would become a bound on momentum space, and then you couldn't dump infinite energy into the system. This is important because it means that when you add more energy, the number of allowed configurations with that energy eventually shrinks. That is the negativity of negative temperature: increase something, and something else decreases. The thing that is decreasing, a volume in phase space, is also called the entropy. Increase energy and entropy decreases; most normal systems work the opposite way, with entropy increasing.

As systems share things to maximize total entropy, this overcharged vortex system is going to share energy with any normal system at any temperature, because that normal system will increase its entropy with energy coming into it, and the overcharged vortex system will gain entropy when it loses energy. So it is hotter than any other thing in the world.

(I am erasing the jargon word “canonical” from this explanation... It just refers to a particular procedure we use to define momentum coordinates based on your position coordinates. When you are not dealing with a point particle with defined mass in a flat space, you have to make these sorts of choices.)


Huh, I stopped trying to understand at that sentence. Maybe Wikipedia could start linking to "simple" articles for these topics? Or at least include some more links to more basic concepts. That sentence is a fortress even for a curious person.


I do think Wikipedia should make the link to the "simple" branch more obvious on standard English Wikipedia. Currently they treat it merely as an obscure alternative language, but it could really help a lot of people if it were more prominently linked on each article that has a simple variant.


Thanks for the link!

PS If you ever see something like that on regular Wikipedia, please add the {{Jargon}} tag to it. In theory, at some point, some editor might come and make it better.


Thank you. The key is the last sentence:

     Only very small things discussed in quantum mechanics can reach this state.


This is not really true: lots of macroscopic objects have negative temperatures, like lasers in their pumped state.


"Temperature is loosely interpreted as the average kinetic energy of the system's particles. The existence of negative temperature, let alone negative temperature representing "hotter" systems than positive temperature, would seem paradoxical in this interpretation. The paradox is resolved by considering the more rigorous definition of thermodynamic temperature as the tradeoff between internal energy and entropy contained in the system, with "coldness", the reciprocal of temperature, being the more fundamental quantity. Systems with a positive temperature will increase in entropy as one adds energy to the system, while systems with a negative temperature will decrease in entropy as one adds energy to the system."


It's very common to have apparent paradoxes when you have multiple definitions, one more intuitive and one more rigorous or specific.

Popular science journals use the confusion between group velocity, wave velocity, and signal velocity to write a "faster than light" article every year.


Not unlike the silly algebra tricks that “prove 0==1” and other such, which some of us played with at school. We had a good maths teacher who joined in, reasoned & unreasoned a few of them, and pointed us to off-curriculum things we might want to read. A nice lesson on the joy of thinking and applying critical thinking.


FTL is easy. Send one object due north at 3/4 c. Send one object due south at 3/4 c. They are now separating at ~1.5c from your point of view.


Isn't the speed at which one of the objects is moving away from the other 24c/25 per the relativistic velocity addition formula?


It's quite possible to have the apparent rate of separation of two particles, from the standpoint of third frame of reference, be superluminal. It happens in astronomical observations. See http://spiff.rit.edu/classes/phys200/lectures/superlum/super... for a nice example and explanation.

But you are quite right that from the frame of reference of either of the two particles sent traveling at .75c in opposite directions, the other particle is receding at subluminal velocity, not FTL.
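The velocity-addition arithmetic behind this is a one-liner (a sketch in units of c = 1; the function name is mine):

```python
def relativistic_add(u, v):
    # Einstein velocity addition in units of c = 1:
    # the speed of one object as seen from the other's rest frame
    return (u + v) / (1.0 + u * v)

w = relativistic_add(0.75, 0.75)
assert abs(w - 24.0 / 25.0) < 1e-12  # 0.96c: subluminal, as stated above
assert w < 1.0                       # while the naive third-frame rate is 1.5c
```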


This reminds me of "negative resistance" in Gunn diodes[1], where current decreases as voltage increases under certain conditions.

[1] https://en.wikipedia.org/wiki/Gunn_diode


Reminds me of a thermodynamics/statistical mechanics class homework problem from way back.

Two systems with negative temperatures A and B, where |A| > |B|, are put in contact. Which way does the heat flow?

As the article alludes, it can be easier to think about inverse temperature (which they called coldness), so heat will flow from less cold system B to more cold system A.

In this context, negative epsilon would be the ‘hottest’ temperature, and a negative infinite temperature the same as a positive infinite temperature.
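With hypothetical concrete values, say A at -10 K and B at -5 K (so |A| > |B|), the coldness bookkeeping goes:

```python
# Hypothetical concrete temperatures satisfying |A| > |B|, both negative:
T_A, T_B = -10.0, -5.0
beta_A, beta_B = 1.0 / T_A, 1.0 / T_B  # -0.1 and -0.2

# larger beta = colder, so A is the colder system here...
assert beta_A > beta_B
# ...and heat flows from the hotter B into the colder A, as stated above
```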


One of my favorite stat mech facts is that you can get to inverse temperature as a Lagrange multiplier using only probability and information theory:

Given a system that has many states each with energy E_i, what is the highest Shannon entropy distribution that has average energy E?

If you define a Lagrange multiplier \beta to enforce the constraint \sum_i p_i E_i = E and maximize the entropy you get the Boltzmann distribution!
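A brute-force sketch for a hypothetical three-state system (energies 0, 1, 2 and target mean energy 0.8, values I chose for illustration): scanning the constrained simplex for maximum entropy recovers the Boltzmann signature p_1^2 = p_0 p_2 that holds for equally spaced levels.

```python
from math import log

def entropy(p):
    # Shannon entropy in nats, skipping zero-probability states
    return -sum(q * log(q) for q in p if q > 0)

# Constraints p1 + 2*p2 = 0.8 and p0 + p1 + p2 = 1 leave one free parameter t:
# p = (0.2 + t, 0.8 - 2t, t), valid for 0 < t < 0.4.
best_t, best_S = None, -1.0
steps = 40000
for i in range(1, steps):
    t = 0.4 * i / steps
    p = (0.2 + t, 0.8 - 2 * t, t)
    s = entropy(p)
    if s > best_S:
        best_S, best_t = s, t

p0, p1, p2 = 0.2 + best_t, 0.8 - 2 * best_t, best_t
# The Boltzmann distribution p_i ~ exp(-beta * E_i) on E = (0, 1, 2) satisfies
# p1^2 = p0 * p2; the entropy maximizer reproduces exactly that signature.
assert abs(p1 * p1 - p0 * p2) < 1e-3
assert abs(p1 + 2 * p2 - 0.8) < 1e-12  # the energy constraint held throughout
```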


One minor nitpick is that in this derivation your choice of coordinates matters.

Luckily (classical) physics provides a canonical set of coordinates which even comes with a measure that is preserved under the equations of motion. This magical property of the phase space can't be emphasized enough if you ask me.


Can you expand on this? Is there a nicer coordinate-free way to define entropy?


The definition of entropy is not coordinate independent. Very briefly the reason for this is that the uniform distribution isn't coordinate independent.

There is a related coordinate-independent way to measure the distance from one distribution to another, the Kullback–Leibler divergence, which looks like:

D_KL(P || Q) = \int dP/dQ log(dP/dQ) dQ = E_P[log(dP/dQ)]

(This is by far the most general definition and uses the Radon-Nikodym derivative to be applicable to discrete, continuous and even weirder distributions. For reasons that I'll try to explain I also view this as a generalization of the notion of entropy)

If you let Q be a uniform distribution this is equivalent to the entropy (up to some sign changes and a constant). However uniform distributions aren't coordinate independent and therefore the notion of entropy isn't.

My personal conclusion is that there's no way of doing statistics without picking a(n improper) prior, since you inevitably need to pick some coordinates which ends up doing the same thing. You can then equivalently talk about the KL-divergence of a distribution from this prior or the entropy of a distribution, both end up being more or less the same thing.
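In the discrete case the entropy/KL relationship is easy to verify (a sketch; natural log, function names mine):

```python
from math import log

def kl(p, q):
    # D_KL(P || Q) = sum_i p_i log(p_i / q_i), the discrete Radon-Nikodym case
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(pi * log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
u = [0.25] * 4  # the uniform reference distribution

# H(P) = log(N) - D_KL(P || uniform): entropy is KL divergence from uniform,
# up to a sign flip and the constant log(N)
assert abs(entropy(p) - (log(4) - kl(p, u))) < 1e-12
```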


I'm not sure this is right: when you make energy flow to the colder system, the entropy decreases (by definition).


I'm also not sure it's right.

Consider two systems of spins in an external magnetic field. Suppose system A has all the spins in their high energy state (wrong way given the field) - so it has the most negative temperature possible. Suppose system B is similar, except a few of the spins are in their low energy state - so it has a slightly less extreme negative temperature. If the two systems interact, the few low-energy spins ought to even out between the two systems, which means energy has flowed from A to B (lower temperature to higher temperature).


Here’s the misunderstanding.

Your example above, where all spins are in their highest possible energy state, really has temperature T = -0, an infinitesimal temperature just below absolute zero. This would not be the most negative temperature possible.

You are correct that energy would flow out of this maximal-energy system. But it corresponds to System B in my example - the system at the less negative temperature.

The animation near the top of the article shows this nicely. The particles approach their maximum-energy state as T flips to negative and approaches zero from below.

Temperatures of positive and negative infinity are statistically equivalent in this example, where states of any energy are equally likely.

It's only at smaller, less-negative temperatures that the population inversion occurs.


Thanks. I see now that you're right.


Two systems at different temperatures, when put into thermal contact, will come to thermal equilibrium at an intermediary temperature. Total entropy (over both systems) will have increased.


Is the early universe, around the Big Bang, an example of high energy and low entropy state?

Seems like we’ve been cooling off since, so we were hotter earlier and negative temperatures would be hotter than non-negative temperatures.


I think the low entropy in the early universe was mostly/entirely due to gravity: a uniformly mixed universe could collapse into denser regions in lots of ways, but (e.g.) one black hole at the other end of time is very high entropy.

"According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve)." https://en.wikipedia.org/wiki/Entropy_as_an_arrow_of_time#Ov...


In the limit of the universe being a point, yes. Entropy must then have been zero since phase space is a point. But that’s just conjecture.


The very early universe more or less must have had much lower entropy than we do today, you don't need to wind back all the way. A uniformly mixed goop of a universe is going to be very low entropy due to all the different ways it could collapse under the influence of gravity.


Can you (or anyone) explain how entropy interacts with gravity? A uniformly mixed goop should be the highest entropy state, with stuff partitioned into clumps being a lower entropy state... apart from gravity. But if you add gravity to the mix, then... what?


My understanding is fully based on pop-sci, but:

Eventually the influence of gravity starts generating black holes, which are really high-entropy objects (in fact, for a given region of space, you can't fit any more entropy into it than a black hole does). So it seems to me that stars and planets and so on are intermediate steps between a hot, low-entropy universe at one end and a cold, high-entropy universe of black holes at the other. Eventually the black holes maybe evaporate, leaving an extremely fine mist of radiation, but by that point space will have stretched out to the point that the undifferentiated radiation cannot collapse again.

I found this from an actual physicist:

"our gravitationally bound ball of gas has a negative specific heat! In other words, the less energy it has, the hotter it gets."

https://math.ucr.edu/home/baez/entropy.html


A uniformly mixed universe sounds like maximal entropy.


Any very slightly denser patches eventually start collapsing under gravity, and as they collapse they're doing work or could be made to do work so entropy begins to rise as the universe starts to sort itself into denser and less dense areas. Eventually some bits get so dense they collapse into black holes, which are maximally entropic regions of space (if you start trying to pack more entropy in, it just gets bigger).


So these negative Kelvin temperatures are just values in excess of +infinite Kelvin?

Reminds me of complexity classes in theoretical CS. It is odd though to join two together on +- scale, because it jams a non-enumerable set into enumeration by making infinity a single tick on the scale.


> It is odd though to join two together on +- scale, because it jams a non-enumerable set into enumeration by making infinity a single tick on the scale.

Can you elaborate? I fail to see what you mean.


Think of it like an unbounded array of integer temperature values: if you keep adding memory you can keep appending more positive values to the array infinitely, each with an indexable location (i.e. enumerating the set). But this concept breaks if we just say we'll throw all the negative values on after infinity; if we're infinitely adding positive values to the array, we'll never get the chance to stop and start on those negative Kelvin values afterward. When this happens it's called non-enumerable (or, more formally, it fails the test of diagonalization). It seems the authors of this system chose to make +infinity an arbitrary enumerable point in order to show the negative Kelvin values in excess of it. Having said that, the set of all integers (negative, 0, positive) should be enumerable (because you can just *=-1 each index), but not when infinity is a member of that set. I think there are Numberphile and Veritasium videos on why not all infinities are equal, if you're curious to explore.


Adding infinity to a set does not change it from countable to uncountable. There are plenty of ways to add infinity to a set; for example, you can take the natural numbers N and just add infinity:

0, 1, 2, 3, ..., ∞

It's also possible to add another copy of the natural numbers after infinity, but in this case infinity is usually called ω: https://en.wikipedia.org/wiki/Ordinal_number

0, 1, 2, 3, ..., ω, ω+1, ω+2, ω+3, ...

Your example with an array is just:

0, 1, 2, 3, ..., ∞, ..., -3, -2, -1

It's possible to define a "<" relationship there and a topology and most of the other usual stuff, but it's still a countable set.

There are similar constructions for uncountable sets, like the real numbers. You can add one infinity shared by both ends and get something topologically equivalent to a circle, or add two infinities (one on each end) and get something topologically equivalent to a closed interval, or add even more infinities: https://en.wikipedia.org/wiki/Compactification_(mathematics)


We have to be careful with multiple infinities when one takes place after another, and in this case the "<" between two infinities is what I've highlighted as odd.

Indeed, as you've illustrated, the union of countable sets is countable, but unions aren't appropriate when order matters. The use of an array instead of a set data structure highlights this difference. The negative temperatures in the post begin after +∞. Because ordinals are an extension of enumerability, we cannot simply drop ∞ into an array location and still call it enumerable. Speaking in terms of Turing recognizability / recursively enumerable languages, there is no way for a machine to accept negative integers after all positive integers have been input.


and 0, 1, 2, 3, ..., ∞, ..., -3, -2, -1 is kind of an "integer projective line"

https://en.wikipedia.org/wiki/Projective_line#Line_extended_...


The "integer projective line" wraps around, and after -1 there is another 0 again, something like

..., 0, 1, 2, 3, ..., ∞, ..., -3, -2, -1, 0, 1, 2, 3, ..., ∞, ..., -3, -2, -1, 0, 1, 2, 3, ..., ∞, ..., -3, -2, -1, ...

where all the repetitions of a number are equivalent. In particular, you cannot define a "<".

The example was

[stop], 0, 1, 2, 3, ..., ∞, ..., -3, -2, -1, [stop]

where you can define "<". This example is more similar to the temperature classification in the article in Wikipedia. You can pass through infinity, but you can't pass through 0. (Perhaps to be more similar, we should remove 0.)


You seem to be conflating concepts.


I was nerd-sniped by the units for thermodynamic beta (inverse temperature): gigabytes per nanojoule. I knew entropy and information science are related but I didn't realize I'd ever find myself looking at GB/nJ.


A practical demonstration of negative temperature from The Action Lab: https://www.youtube.com/watch?v=jdjTYlReE-I


the hottest temperature is negative 0 :)


Hence why the one true temperature scale is thermodynamic beta.

“Boy howdy, it’s a balmy 38 reciprocal electron-Volts outside! who wants to go for a swim?”


Thanks, but I'll be sticking to Kelvin* when setting my oven.

* Or Rankine if you're American.


unsigned int t = 0; t--; t > 0 … makes sense to me! :-P

OTOH, I did learn a new way of thinking about temperature from this article. It’s still counter-intuitive and pretty fascinating.


Beautiful. If computer science and physics had a child, this would be their Euler's Identity.


I always assumed going below absolute zero was undefined behaviour. I guess someone just needs to rewrite all of physics in Rust.



Looks like our universe doesn't have overflow handling for temperatures. It just wraps around.


Obligatory sixtysymbols video: https://www.youtube.com/watch?v=yTeBUpR17Rw



