*Orthogonal*

How many different ways can a particle have a given amount of energy, *E*? The energy of a particle fixes the magnitude, *p*, of its momentum vector **p**, but that vector might still point
in any direction in three-dimensional space, so it’s free to lie anywhere on the surface of a sphere. If we don’t know the particle’s
energy precisely, only that it’s between *E* and *E*+δ*E* for some small quantity δ*E*, then the magnitude of the momentum will lie between two values *p* and *p*+δ*p*
determined by those energies, and the momentum vector will lie somewhere within a spherical shell. Of course there are an infinite number of distinct vectors in this shell, so
asking “how many ways” the particle can have an energy in this range isn’t quite the right question, at least in classical mechanics; quantum mechanics transforms the situation so
that it really *does* involve a number of discrete possibilities. But in classical physics, it turns out to be useful to consider the *volume* of the region in which we know the particle
lies — whether that’s a volume in momentum space, or in ordinary space, or both.

If we think of the state of the particle as a point in an abstract space known as phase space, where the coordinates of every point
include both the particle’s position *and* its momentum
in all three dimensions, then knowing a particle has an energy between *E* and *E*+δ*E* tells us that its state lies within a certain region of phase space. If we assume that the
particle is confined to a fixed volume of ordinary space regardless of its energy, then the volume of *phase space* that it lies within will depend on the possible momentum vectors compatible with what
we know about its energy — those that lie within the spherical shell we’ve just described.

Why should we care about volumes in phase space, though, rather than some other description of the state of the particle?
The reason is a result in applied mathematics known as Liouville’s Theorem, which concerns
the way the probability of finding a system’s state at a given point in phase space changes over time.
Using Liouville’s Theorem it can be shown
that if we have a large number of versions of a system, and their states are scattered uniformly throughout phase space, then as time progresses and the points describing the different
systems move about, they will continue to be distributed in the same uniform way. If we’d chosen any old description of the states of the system,
this property wouldn’t hold; it’s only the particular mathematics of the way states move around in *phase space* that makes this true.

This becomes important when we want to deal with systems in a statistical fashion.
When we do something with a complicated system with billions of particles – such as putting a hot gas in contact with a cold solid – and then wait for
everything to stop changing and settle down into an equilibrium, we don’t expect to track the location and momentum of every particle. But once the system is in equilibrium,
Liouville’s Theorem tells us that we can treat it as being *equally likely* to lie at any location in phase space that’s compatible with what we do know about it.
If that sounds obvious, keep in mind that there are lots of equally “obvious-sounding” alternatives that
are false: for example, a system usually *isn’t* equally likely to have any energy within the range we know it occupies.

Returning to the example of a single particle, let’s calculate the volume of the region in phase space in which a particle must lie if its
energy is known to be between *E* and *E*+δ*E*, and its location is known to be confined to some region of ordinary space with volume
*V*.

In *Lorentzian* physics, the total energy of a free particle is greater than or equal to its rest mass *m*, and the relationship between energy and momentum is:

p^{2} = E^{2} – m^{2}

from which we can derive:

dp/dE = E/p = E/√(E^{2} – m^{2})

If the particle’s energy lies in a small range of values between *E* and *E*+δ*E*, the volume of the region in phase space where the particle’s state must be is:

Ω_{Lorentzian} = 4πp^{2} δp V

= 4πp^{2} (dp/dE) δE V

= 4π√(E^{2} – m^{2}) E δE V

In *Riemannian* physics, the total energy of a free particle lies between zero and the particle’s rest mass, and we have:

p^{2} = m^{2} – E^{2}

dp/dE = –E/p = –E/√(m^{2} – E^{2})

The fact that the momentum falls with increasing energy is important in its own right, but to find the volume in phase space we take the absolute value of δ*p* = (d*p*/d*E*) δ*E*,
to get:

Ω_{Riemannian} = 4πp^{2} |dp/dE| δE V

= 4π√(m^{2} – E^{2}) E δE V
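These two expressions are easy to evaluate numerically. Here is a minimal Python sketch; the function names, and the choice of units in which *m* = 1, are our own, not anything from the text:

```python
import math

def omega_lorentzian(E, m=1.0, dE=0.01, V=1.0):
    """Phase space volume 4*pi*sqrt(E^2 - m^2)*E*dE*V, valid for E >= m."""
    return 4 * math.pi * math.sqrt(E**2 - m**2) * E * dE * V

def omega_riemannian(E, m=1.0, dE=0.01, V=1.0):
    """Phase space volume 4*pi*sqrt(m^2 - E^2)*E*dE*V, valid for 0 <= E <= m."""
    return 4 * math.pi * math.sqrt(m**2 - E**2) * E * dE * V

# Near the rest mass the two functions are close to mirror images:
print(omega_lorentzian(1.05), omega_riemannian(0.95))
```

Both functions vanish at *E* = *m*, where the momentum shell has zero radius.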

As the plots of Ω make clear, these are two very different functions. However, it’s worth noting that for energies close to the rest mass, *m*, they
are very nearly mirror images of each other. This is because at low velocities, particles in either universe are really just obeying Newtonian physics; the only difference in the
Riemannian universe is that the total energy falls as the kinetic energy increases.

The most striking difference is that Ω_{Lorentzian} is always increasing with energy. Whatever the particle’s energy, if you add a bit
more that will always make a larger volume of phase space accessible to it. In contrast to this, for Ω_{Riemannian} there is a maximum volume of phase space that becomes accessible
at a certain energy. If we compute the rates of change of phase space volume with energy for the two cases, we find:

dΩ_{Lorentzian}/dE = 4π δE V (2E^{2} – m^{2}) / √(E^{2} – m^{2})

dΩ_{Riemannian}/dE = 4π δE V (m^{2} – 2E^{2}) / √(m^{2} – E^{2})

so the Riemannian phase space volume has a maximum at *E* = *m*/√2. It might seem that the largest volume in phase space ought to be associated with the largest possible momentum,
*p*=*m*, which occurs at *E* = 0. But although that makes for the largest radius of the spherical shell in momentum space,
we mustn’t forget about the changing thickness of the shell. At *E* = 0, d*p*/d*E* is also zero, so δ*p*, the thickness of the shell, is zero.
The actual maximum volume occurs as a compromise between the radius of the shell and its thickness.
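A brute-force scan confirms where this compromise lands; this is our own sanity check in Python, with *m* = 1 and an arbitrary grid resolution:

```python
import math

m = 1.0
# Scan Omega_Riemannian, proportional to sqrt(m^2 - E^2) * E,
# over 0 < E < m and locate the peak on a fine grid.
energies = [i / 100000 for i in range(1, 100000)]
peak_E = max(energies, key=lambda E: math.sqrt(m**2 - E**2) * E)
# peak_E comes out within a grid step of m/sqrt(2), about 0.70711.
```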

What’s the significance of the way phase space volume changes with energy? To understand this, suppose we have two systems, and we know their energies, *E*_{1} and *E*_{2},
and also the way their individual phase space volumes change with energy, Ω_{1}(*E*_{1}) and Ω_{2}(*E*_{2}). If we think of the two systems
as a single system, we can define a new phase space with position and momentum coordinates for all the particles in both systems. The volume within that phase space that the combined system occupies
is then just the product of the individual volumes:

Ω_{total}(E_{1},E_{2}) = Ω_{1}(E_{1}) Ω_{2}(E_{2})

Now suppose that we allow energy to move between the original two systems. The total energy *E*_{1} + *E*_{2} must remain fixed, but the energies of the
individual systems can change. If an amount of energy *Q* flows from system 1 to system 2, we have:

Ω_{total}(E_{1} – Q, E_{2} + Q) = Ω_{1}(E_{1} – Q) Ω_{2}(E_{2} + Q)

and the way this affects the total phase space volume is described by:

dΩ_{total}(E_{1} – Q, E_{2} + Q)/dQ = –(dΩ_{1}/dE_{1}) Ω_{2} + Ω_{1} (dΩ_{2}/dE_{2})

where we’ve used the rule that the derivative of a product is found by taking the derivative of each factor in turn and adding the results.
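As a check, the product rule result can be compared against a finite difference in *Q*. The Ω functions below are toy stand-ins of our own choosing, purely for illustration:

```python
# Toy phase space volumes (our own choice, not from the text):
def omega1(E): return E**2
def omega2(E): return E**3

def d_omega1(E): return 2 * E
def d_omega2(E): return 3 * E**2

E1, E2, h = 2.0, 2.0, 1e-6

# Product rule: dOmega_total/dQ = -(dOmega1/dE1)*Omega2 + Omega1*(dOmega2/dE2)
analytic = -d_omega1(E1) * omega2(E2) + omega1(E1) * d_omega2(E2)

# Finite difference of Omega1(E1 - Q) * Omega2(E2 + Q) at Q = 0:
numeric = (omega1(E1 - h) * omega2(E2 + h) - omega1(E1) * omega2(E2)) / h
```

The analytic and numerical values agree to within the truncation error of the finite difference.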

Does the total phase space volume Ω_{total} go up, go down, or stay the same if energy flows from system 1 to system 2? It will be easier to determine this if we define a new quantity for each sub-system,
its **temperature**, *T*:

T_{1} = Ω_{1} / (dΩ_{1}/dE_{1})

T_{2} = Ω_{2} / (dΩ_{2}/dE_{2})

If we know the relationship between *T*_{1} and *T*_{2}, we can multiply through by the derivatives that appear in their denominators to find out whether dΩ_{total} / d*Q*
is positive or negative. First, let’s assume that dΩ_{1} / d*E*_{1} and dΩ_{2} / d*E*_{2} are **both positive**;
because Ω itself is always positive this is equivalent to saying that *T*_{1} and *T*_{2} are both positive. Then we have the following:

- If *T*_{1} > *T*_{2} > 0, then Ω_{1} (dΩ_{2}/d*E*_{2}) > Ω_{2} (dΩ_{1}/d*E*_{1}), and dΩ_{total}/d*Q* is positive.
- If *T*_{2} > *T*_{1} > 0, then Ω_{1} (dΩ_{2}/d*E*_{2}) < Ω_{2} (dΩ_{1}/d*E*_{1}), and dΩ_{total}/d*Q* is negative.
- If *T*_{1} = *T*_{2}, then Ω_{1} (dΩ_{2}/d*E*_{2}) = Ω_{2} (dΩ_{1}/d*E*_{1}), and dΩ_{total}/d*Q* is zero.
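In code, with toy Ω functions of our own choosing (nothing here beyond the definitions in the text), the temperature definition and the sign rule look like this:

```python
def temperature(omega, d_omega, E):
    """T = Omega / (dOmega/dE), as defined above."""
    return omega(E) / d_omega(E)

# Toy systems (our own illustrative choice): Omega1 ~ E^2, Omega2 ~ E^3.
omega1, d_omega1 = lambda E: E**2, lambda E: 2 * E
omega2, d_omega2 = lambda E: E**3, lambda E: 3 * E**2

E1, E2 = 4.0, 3.0
T1 = temperature(omega1, d_omega1, E1)  # E1/2 = 2.0
T2 = temperature(omega2, d_omega2, E2)  # E2/3 = 1.0

# T1 > T2 > 0, so dOmega_total/dQ should be positive
# (energy flowing from system 1 to system 2 enlarges the total volume):
d_total_dQ = -d_omega1(E1) * omega2(E2) + omega1(E1) * d_omega2(E2)
```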

We can translate this into the following, more physical statements:

- If two systems both have positive temperatures, and energy flows from the hotter system to the cooler, the total volume of the region in phase space in which the combined system lies will **increase**.
- If two systems both have positive temperatures, and energy flows from the cooler system to the hotter, the total volume of the region in phase space in which the combined system lies will **decrease**.
- If two systems have equal temperatures, the total volume of the region in phase space in which the combined system lies will be **unaffected** by energy moving between them.

It’s tempting to look at these statements and blithely declare “energy will flow from hot to cold”, but on reflection it can’t actually be that simple, because whatever direction we chose for time, we’re always equally entitled to choose the opposite direction ... and the conclusion can’t possibly hold in both directions! So we need to be clear that the intuitive expectation that the passage of time will lead the combined system to “escape” from a small region of phase space into a larger one depends on there being an unambiguous arrow of time, with entropy increasing in the direction we’ve chosen as the future.

So let’s make that assumption, which is certainly true in our own universe, and might well hold in parts of the Riemannian universe as well. We are now entitled to say
that we’ve managed to define a useful property called *temperature*, and that when two systems with positive temperatures are brought together, energy will flow from the one with the higher temperature
to the one with the lower temperature.

For ordinary systems in the Lorentzian universe, that’s the end of the story, but there are actually some exotic systems in our universe that exhibit *negative* temperatures ... and in the
Riemannian universe negative temperatures are in some sense the norm, since they occur when particles are moving at unspectacular speeds between zero and, for a single particle, the speed *v*_{0}
at which the total energy equals *m*/√2 (the energy at which the phase space volume is at a maximum):

E = m/√(1 + v_{0}^{2}) = m/√2

v_{0} = 1
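Since the Riemannian energy–velocity relation *E* = *m*/√(1 + *v*²) appears here, a short check (with *m* = 1; the function name is our own) confirms the value of *v*_{0}:

```python
import math

def riemannian_energy(m, v):
    """Total energy of a free Riemannian particle: E = m/sqrt(1 + v^2)."""
    return m / math.sqrt(1 + v**2)

m = 1.0
# At rest the energy equals the rest mass; at v0 = 1 it equals m/sqrt(2),
# exactly where the phase space volume peaks.
```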

You might have heard that temperature is an “emergent phenomenon” that only makes sense for systems with a huge number of particles, but strictly speaking that’s not true;
we’ve defined temperature in a way that makes perfect sense for any system at all, even a single particle. What depends on there being many particles is the near certainty of the energy flow we associate with
a temperature difference. If two lone particles interact, it wouldn’t be all that strange if, by chance, the hotter particle actually took energy from the colder one. But for the
energy flow to go from cold to hot in a system with 10^{23} particles would be astronomically unlikely.

What are the consequences of having a negative temperature? The argument about the direction of energy flows that we made for two systems with positive temperature depended on multiplying an inequality between them by
(dΩ_{1} / d*E*_{1}) (dΩ_{2} / d*E*_{2}). If two systems *both* have negative temperatures, the product of those two derivatives will again be positive, so exactly
the same argument still holds.

This means that if *T*_{2} < *T*_{1} < 0, we will expect energy to flow from system 1 to system 2. A gas of relatively slow particles in the Riemannian universe will have a
negative temperature that is higher (that is, closer to zero) than another gas with slightly faster-moving particles, so energy will flow from the slower-moving particles to the slightly faster ones. But the energy we’re talking about
is *total energy*, which has the opposite sense to kinetic energy. So the slower-moving particles that lose total energy will *gain* kinetic energy at the expense of the slightly faster ones.
In other words, the flow of kinetic energy will be just the same as it is in our universe.

But a gas of sufficiently fast-moving particles will have a *positive* temperature by the Riemannian definition, so however we choose to define energies, its temperature will have the opposite sign to that of the more ordinary gas. How does the energy flow if these two systems come into contact? This time, the product of the derivatives (dΩ_{1}/d*E*_{1}) (dΩ_{2}/d*E*_{2})
will be negative, so when we multiply by it, it reverses the direction of the inequality in temperatures. It follows that if *T*_{1} < 0 < *T*_{2}, then
dΩ_{total} / d*Q* will be positive — and so we expect energy to flow from a system with negative temperature to a system with positive temperature.
So we can think of a system with negative temperature as **hotter than infinitely hot**: it will give energy to *any* system with a positive temperature, however hot that second system might be.

Of course again we’re talking about total energy, and in the Riemannian universe the kinetic energy flows the other way: a system with a negative temperature will *gain* kinetic energy from any system
with a positive temperature. This leads us to the eminently sensible result that a gas with particles moving so fast that it has a positive temperature will always pass kinetic energy to any gas with slower-moving particles.

So we have a temperature scale in the Riemannian universe in which a gas with slow-moving particles has a temperature slightly below absolute zero.
The temperature *drops* as the particles speed up, reaches minus infinity at
relativistic speeds, crosses into very high positive values, then, as the velocities of the particles rise towards infinity, the temperature comes down towards absolute zero, this time from above.
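Dividing Ω_{Riemannian} by dΩ_{Riemannian}/d*E* gives, for a single particle, *T* = *E*(*m*² – *E*²)/(*m*² – 2*E*²); substituting *E* = *m*/√(1 + *v*²) lets us trace this doubly-infinite scale numerically. The sketch below assumes *m* = 1:

```python
import math

def temp_riemannian(v, m=1.0):
    """Single-particle Riemannian temperature T = Omega/(dOmega/dE),
    which simplifies to E*(m^2 - E^2)/(m^2 - 2*E^2), E = m/sqrt(1 + v^2)."""
    E = m / math.sqrt(1 + v**2)
    return E * (m**2 - E**2) / (m**2 - 2 * E**2)

# Slow particles: small negative T. Approaching v = 1 it plunges toward
# minus infinity, crosses to large positive values just above v = 1,
# then falls toward zero from above as v grows without bound.
for v in [0.1, 0.9, 1.1, 10.0]:
    print(v, temp_riemannian(v))
```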

All the way across this bizarre, doubly-infinite temperature range, we do still have the perfectly normal fact that heat will flow from a gas with faster-moving particles to one with slower-moving particles.
This might make you wonder if anything is all that different from our own universe, and if the whole distinction between positive and negative temperatures is just a mathematical abstraction.
But a gas comprising a fixed number of particles is not the only kind of system there is, either in our universe or the Riemannian universe. In any system where *light* can be freely created,
we can ask what temperature the associated “gas of photons” will have, when the number of particles it contains is not fixed.
The answer is: a positive temperature. Intuitively it’s easy to see why this is the case. Turning energy into a whole new particle can be expected to open up new possibilities, making a larger region in
phase space accessible to the system. This means dΩ/d*E* will be positive, so the temperature will be positive.

An ordinary system in the Riemannian universe, such as a gas with a fixed number of slow-moving particles, can never be in thermal equilibrium with light. With its negative temperature, the ordinary system will always be disposed to give some total energy to the positive-temperature light, and in doing so gain kinetic energy.

This is what makes the ability to create light so dangerous in the Riemannian universe. If it can be done in a controlled way, there is no reason for the light source to be in thermal equilibrium with the light; incandescence — the glow of a hot object due to the random exchange of energy with photons — is not the only way to create illumination. But if the process gets out of control and does turn into such a random exchange, the light source itself risks ending up with a positive temperature — which in the Riemannian universe entails being torn apart into a relativistic gas.

There is extra material on this topic for readers who don’t mind a slightly higher level of mathematics.