# Orthogonal

## Riemannian Electromagnetism [Extra]

### The Riemannian Proca Equation

How would electromagnetism in our own universe be different, if the photon had mass? In the 1930s, the Romanian physicist Alexandru Proca generalised Maxwell’s equations to develop a theory of massive particles producing a force analogous to electromagnetism, in ground-breaking work aimed at explaining the weak nuclear force. Proca doesn’t seem to be as well-known as he should be, but his results were mentioned by Wolfgang Pauli in his 1946 Nobel Prize lecture. As you might guess from the connection with the weak force, giving the force-carrying particle rest mass diminishes its range. If photons were heavy in our universe, the Coulomb potential would experience an exponential fall-off with distance.

But as we’ve seen, the Riemannian Coulomb potential doesn’t suffer from exponential decay; instead, it undergoes oscillations across space. The change from Lorentzian to Riemannian geometry makes all the difference.

To obtain the Riemannian version of Proca’s equation, we start with the Riemannian Vector Wave equation with a source term, j, which we call the four-current, plus the transverse condition that we impose on any vector wave A in order to rule out solutions that are scalar waves in disguise.

$$\partial_x^2 A + \partial_y^2 A + \partial_z^2 A + \partial_t^2 A + \omega_m^2 A + j = 0 \qquad \text{(RVWS)}$$
$$\partial_x A^x + \partial_y A^y + \partial_z A^z + \partial_t A^t = 0 \qquad \text{(Transverse)}$$
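As a quick numerical sanity check (not part of the original derivation), we can verify that a transverse plane wave $A = \varepsilon \cos(k \cdot x)$, with $|k| = \omega_m$ and $\varepsilon \cdot k = 0$, satisfies the source-free version of both equations. The particular $k$, $\varepsilon$ and sample point below are arbitrary illustrative choices:

```python
import math

om = 1.0                      # ωm (illustrative units)
k = (0.6, 0.0, 0.0, 0.8)      # wave four-vector: kx² + ky² + kz² + kt² = ωm²
eps = (0.0, 1.0, 0.0, 0.0)    # polarisation, with ε·k = 0 (transverse)
h = 1e-4                      # finite-difference step
x0 = (0.3, -1.2, 0.7, 0.5)    # an arbitrary sample event (x, y, z, t)

def A(x, a):
    """Component a of the plane wave A = ε cos(k·x)."""
    return eps[a] * math.cos(sum(k[i] * x[i] for i in range(4)))

def d1(a, i):
    """First partial derivative of A_a along coordinate i (central difference)."""
    xp = list(x0); xm = list(x0)
    xp[i] += h; xm[i] -= h
    return (A(tuple(xp), a) - A(tuple(xm), a)) / (2 * h)

def d2(a, i):
    """Second partial derivative of A_a along coordinate i (central difference)."""
    xp = list(x0); xm = list(x0)
    xp[i] += h; xm[i] -= h
    return (A(tuple(xp), a) + A(tuple(xm), a) - 2 * A(x0, a)) / h**2

# Source-free RVWS residual for each component, and the transverse residual
rvws = [sum(d2(a, i) for i in range(4)) + om**2 * A(x0, a) for a in range(4)]
trans = sum(d1(a, a) for a in range(4))
```

Both residuals vanish up to finite-difference error, and the check fails if $|k| \ne \omega_m$ or $\varepsilon \cdot k \ne 0$.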

One nice result we can get immediately from this pair of equations is:

$$\partial_x j^x + \partial_y j^y + \partial_z j^z + \partial_t j^t = 0 \qquad (1)$$

which follows from the transverse condition, and the fact that the four-current j is equal to a linear combination of A and its derivatives. This amounts to a statement of conservation of charge: the rate at which the density of charge is increasing over time at some point, $\partial_t j^t$, is the opposite of the divergence of the current density, $\partial_x j^x + \partial_y j^y + \partial_z j^z$, which describes the net amount of charge flowing out of a small region around that point.

We previously noted that there are big problems with the energy-momentum four-vector when it’s computed by different observers, because there’s no objective means to decide which way it should point along an object’s world line. The four-current doesn’t suffer from that problem, because it’s defined as j = ρ u where ρ is the charge density in the charged material’s rest frame, and if we swap the sign of u we also swap the sign of ρ, since a time reversed positive charge looks negative, and vice versa. (Of course the assignment of the labels “positive” and “negative” to charges is just a matter of convention, but that’s a choice that can be made globally, once and for all.)

Just as in ordinary electromagnetism, we define the electromagnetic field, F, in terms of A:

$$F_{ab} = \partial_a A_b - \partial_b A_a \qquad (2)$$

The quantities $A_a$ here are the components of the dual vector corresponding to the vector A. It’s a good idea to keep track of this distinction, though in orthonormal rectangular coordinates in Riemannian space, components of vectors, such as $A^a$, and components of the corresponding dual vectors, such as $A_a$, are identical. In Lorentzian space-time, that’s almost true, but not quite; we have $A^x = A_x$, $A^y = A_y$ and $A^z = A_z$, but $A^t = -A_t$.

Suppose we pick three coordinates, and call them a, b and c. Then simply as a matter of the definition of F, and the fact that derivatives commute (that is, $\partial_a \partial_b = \partial_b \partial_a$), we have:

$$\partial_a F_{bc} + \partial_b F_{ca} + \partial_c F_{ab} = \partial_a (\partial_b A_c - \partial_c A_b) + \partial_b (\partial_c A_a - \partial_a A_c) + \partial_c (\partial_a A_b - \partial_b A_a) = 0 \qquad (3)$$
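Since this identity relies only on derivatives commuting, it can be spot-checked numerically: central finite differences also commute, so the cyclic sum below vanishes (to rounding error) for an arbitrary smooth test potential, a hypothetical choice made purely for illustration:

```python
import math

h = 1e-3  # finite-difference step

def A(x, a):
    """Component a of an arbitrary smooth test four-potential (hypothetical)."""
    t, u, v, w = x
    return [math.sin(u + 2*v), math.cos(t*w), u*v + w**2,
            math.exp(0.1*t - 0.2*u)][a]

def dA(x, a, i):
    """∂i A_a by central differences."""
    xp = list(x); xm = list(x)
    xp[i] += h; xm[i] -= h
    return (A(tuple(xp), a) - A(tuple(xm), a)) / (2 * h)

def F(x, b, c):
    """F_bc = ∂b A_c − ∂c A_b, equation (2)."""
    return dA(x, c, b) - dA(x, b, c)

def dF(x, a, b, c):
    """∂a F_bc by central differences."""
    xp = list(x); xm = list(x)
    xp[a] += h; xm[a] -= h
    return (F(tuple(xp), b, c) - F(tuple(xm), b, c)) / (2 * h)

x0 = (0.4, 0.9, -0.3, 0.2)
a, b, c = 0, 1, 3
cyclic = dF(x0, a, b, c) + dF(x0, b, c, a) + dF(x0, c, a, b)
```

Because discrete shift operators commute exactly, the cancellation here is exact up to floating-point roundoff, mirroring the analytic argument.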

It also follows from the definition of F that:

$$\partial_b F^{ab} = \partial_b \left(\partial^a A^b - \partial^b A^a\right) = \partial^a \left(\partial_b A^b\right) - \partial_b \partial^b A^a = -\partial_b \partial^b A^a \qquad (4)$$

where we’re using the Einstein Summation Convention, and $\partial_b A^b$ vanishes by the transverse condition.

Inserting the result (4) into the Riemannian Vector Wave equation with a source term, (RVWS), we get the Riemannian Proca Equation. We also have equation (3) which follows solely from the definition of F, and is consequently shared between the Riemannian and Lorentzian versions of electromagnetism. Maxwell’s Equations in their four-dimensional form are shown for comparison. In this and everything that follows, we are choosing units for the Lorentzian equations where the speed of light is 1, and the permittivity of the vacuum, ε0, is 1. We are also using a (– + + +) signature for the Lorentzian metric, as opposed to the (+ – – –) signature used in some literature.

**Riemannian Proca Equation**
$$\partial_b F^{ab} - \omega_m^2 A^a - j^a = 0 \qquad \text{(Riemannian)}$$
$$\partial_a F_{bc} + \partial_b F_{ca} + \partial_c F_{ab} = 0 \qquad \text{(Common)}$$
**Maxwell’s Equations**
$$\partial_b F^{ab} - j^a = 0 \qquad \text{(Lorentzian)}$$
$$\partial_a F_{bc} + \partial_b F_{ca} + \partial_c F_{ab} = 0 \qquad \text{(Common)}$$

### What Becomes of Gauss and Ampère

The four-dimensional equations for Riemannian electromagnetism are concise, but to see clearly what’s going on in a variety of situations, and compare them with the Lorentzian equivalents, it will help to give three-dimensional versions, where instead of talking about the electromagnetic field F we describe everything in terms of two three-dimensional vector fields: an electric field E and a magnetic field B.

#### Common Ground

We start with some definitions. The components of the electric field E are taken to be the components of the electromagnetic field F with the same spatial index first and t as the second index, while each component of the magnetic field B is the component of F whose indices are the other two spatial directions, preserving the cyclic order xyz.

**Electric Field, E**
$$(E_x, E_y, E_z) = (F_{xt}, F_{yt}, F_{zt}) \qquad \text{(Common)}$$
**Magnetic Field, B**
$$(B_x, B_y, B_z) = (F_{yz}, F_{zx}, F_{xy}) \qquad \text{(Common)}$$
**Electromagnetic Field, F** (using the (– + + +) signature for the Lorentzian metric)
$$F_{ab} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & B_z & -B_y \\ E_y & -B_z & 0 & B_x \\ E_z & B_y & -B_x & 0 \end{pmatrix} \qquad \text{(Common)}$$

Note that in the matrix above, the first index on F refers to the row, the second to the column, and the t components are shown in the first row and column. So for example, $F_{xt}$ is the entry in the first column of the second row.

We define the electric scalar potential, φ, to be the opposite of the time component of the (dual vector) four-potential A, and the three-dimensional magnetic vector potential, A(3), to consist of the remaining part of A.

**Electric potential, φ**
$$\phi = -A_t \qquad \text{(Common)}$$
**Magnetic Potential, A(3)**
$$\left(A^{(3)}_x, A^{(3)}_y, A^{(3)}_z\right) = \left(A_x, A_y, A_z\right) \qquad \text{(Common)}$$

And finally we define the charge density, ρ, and the three-dimensional current-density, j(3), in terms of the four-current vector j.

**Charge Density, ρ**
$$\rho = j^t \qquad \text{(Common)}$$
**Current Density, j(3)**
$$\left(j^{(3)}_x, j^{(3)}_y, j^{(3)}_z\right) = \left(j^x, j^y, j^z\right) \qquad \text{(Common)}$$

These definitions — as we’ve given them, with upper and lower indices exactly like this — are the same regardless of whether we’re doing Riemannian or Lorentzian physics. But if you want to make comparisons with the literature on the Lorentzian version, remember that raising or lowering a t index will produce the opposite of the original quantity. Also, note that some of the literature uses a (+ – – –) signature for the Lorentzian metric, whereas the Lorentzian formulas here use (– + + +).

Along with the definition of F in terms of the four-potential A via equation (2), these definitions let us describe the electric field as the opposite of the gradient of its potential φ minus the time rate of change of the magnetic potential, and the magnetic field as the curl of its potential, A(3). Again, this is just as in conventional electromagnetism.

**Fields From Potentials**
$$E = -\nabla\phi - \partial_t A^{(3)} \qquad \text{(Common)}$$
$$B = \nabla \times A^{(3)} \qquad \text{(Common)}$$

Next, consider the four-dimensional force f on a particle with charge q and four-velocity u:

$$f = q\, F\, u \qquad (5)$$

The four-force is the rate of change with respect to proper time τ of the particle’s energy-momentum vector, P, so this is equivalent to:

$$\partial_\tau P = q\, F\, u \qquad (6)$$

Now, the spatial part of P is just the three-dimensional momentum p, whereas the spatial part of the four-velocity u isn’t quite the ordinary velocity v. The ordinary velocity describes the particle’s rate of change of spatial coordinates with respect to coordinate time, t, whereas the spatial part of u gives rates of change with respect to proper time, τ — so the spatial part of u is (dt/dτ) v. However, we can absorb that factor of (dt/dτ) by switching to a rate of change of p with respect to coordinate time.

What we end up with is known as the Lorentz force law. Again this is common to the Riemannian and Lorentzian versions (although the effects of relativistic motion on the particle’s momentum p are of course not the same).

**Lorentz Force Law**
$$\partial_t p = q \left(E + v \times B\right) \qquad \text{(Common)}$$
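To see the law in action, here is a minimal sketch (with illustrative values, and the nonrelativistic approximation p = m v) integrating the Lorentz force for a charge in a uniform magnetic field with E = 0. The motion is circular gyration at angular frequency q B/m, and the speed is conserved because magnetic forces do no work:

```python
import math

q, m, Bz = 1.0, 1.0, 2.0   # charge, mass, uniform field B = Bz ez (illustrative)
dt, steps = 1e-4, 10000    # total time T = dt * steps = 1.0

def accel(v):
    """dv/dt = (q/m) v × B for B = (0, 0, Bz) and E = 0."""
    return ((q/m) * v[1] * Bz, -(q/m) * v[0] * Bz, 0.0)

v = (1.0, 0.0, 0.0)
for _ in range(steps):     # classic fourth-order Runge-Kutta
    k1 = accel(v)
    k2 = accel(tuple(v[i] + 0.5*dt*k1[i] for i in range(3)))
    k3 = accel(tuple(v[i] + 0.5*dt*k2[i] for i in range(3)))
    k4 = accel(tuple(v[i] + dt*k3[i] for i in range(3)))
    v = tuple(v[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6 for i in range(3))

speed = math.hypot(v[0], v[1])   # |v| stays constant
angle = math.atan2(-v[1], v[0])  # total rotation; expect ω T with ω = q Bz / m
```

With these values ω = 2, so after T = 1 the velocity has rotated clockwise through 2 radians while keeping unit speed.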

Next, we translate the conditions forced upon F by its definition, equation (3), into the consequences for E and B. When we take a, b, c in equation (3) to be x, y and z it tells us that the divergence of B must be zero. This is known as Gauss’s Law For Magnetism, and states that there are no magnetic monopoles (which is true in conventional electromagnetism, though of course there are speculative theories where such monopoles do exist).

When we take a, b, c in equation (3) to be t and two spatial coordinates — for each of the three pairs of spatial coordinates — that tells us that the sum of the curl of E and the time rate of change of B is zero. This result is known as the Maxwell-Faraday Equation, and describes the way an electric field is created when a magnetic field is varying in time.

**Gauss’s Law For Magnetism**
$$\nabla \cdot B = 0 \qquad \text{(Common)}$$
**Maxwell-Faraday Equation**
$$\nabla \times E + \partial_t B = 0 \qquad \text{(Common)}$$

#### Changes

Finally we come to the differences between Riemannian and Lorentzian electromagnetism, which arise from replacing the equation of Maxwell’s that involves the source of the field with the Riemannian Proca equation.

When we set the index a in the Riemannian Proca or Maxwell equation to t, we get two versions of Gauss’s Law, which in the Maxwell case tells us that lines of electric flux only begin and end on charges. In the Riemannian Proca case, this no longer holds: flux lines appear out of the vacuum, with the electric potential acting just like a charge density in that respect.

**Gauss’s Law**
$$\nabla \cdot E = \omega_m^2 \phi - \rho \qquad \text{(Riemannian)}$$
$$\nabla \cdot E = \rho \qquad \text{(Lorentzian)}$$

When we set the index a in the Riemannian Proca or Maxwell equation to each of the spatial coordinates, we get two versions of the Ampère-Maxwell Law, which describes the creation of a magnetic field by a current, a changing electric field — and, in the Riemannian case, directly from the magnetic vector potential.

**Ampère-Maxwell Law**
$$\nabla \times B + \partial_t E = \omega_m^2 A^{(3)} + j^{(3)} \qquad \text{(Riemannian)}$$
$$\nabla \times B - \partial_t E = j^{(3)} \qquad \text{(Lorentzian)}$$

### Examples from Electrostatics

#### The Coulomb Potential

We discussed the Riemannian Coulomb potential on the main page of the notes on electromagnetism. We now have the tools to derive that potential. What we are interested in is the field around a point charge, q, which is motionless in our coordinates. The situation is unchanging in time and perfectly radially symmetric in space, so it’s really a one-dimensional problem where everything is a function of the distance, r, from the charge.

The curious twist that Riemannian electromagnetism brings to this is that lines of electric flux — which in Lorentzian electromagnetism always start and end on charges — can now terminate in the middle of the vacuum, an effect that depends on the potential. In the diagram on the right, the arrows indicate the direction of the electric field — but what’s drawn here are flux lines, not vectors: the strength of the field is indicated by how closely packed the lines are, not their length.

The main mathematical difficulty, shared with the Lorentzian case, is the fact that we have an infinite density of charge at the location of the particle. The way around that is to work with an integral over space of the charge density; integrating over a region that includes the particle will yield a finite value of q. But our equations are all differential equations, so first we have to convert one of them to a suitable form. To do that, we make use of the divergence theorem, a result in pure mathematics which says that for any vector field E and any region of space, the integral of the dot product of E with the outward normal to the surface of the region is equal to the volume integral of the divergence of E:

$$\int_{\text{Surface}} E \cdot n = \int_{\text{Volume}} \nabla \cdot E \qquad (7)$$

We choose the vector field E to be the electric field, and we choose the region of integration to be a sphere of radius r around our point charge. We expect the electric potential φ to be a function only of r, and from the radial symmetry of the problem we expect the electric field to point radially towards or away from the charge. Given that the problem is static, we can express E in terms of the electric potential alone, as:

$$E(r) = -\nabla\phi(r) = -\phi'(r)\, e_r \qquad (8)$$

where er is a unit vector pointing away from the charge. The Riemannian version of Gauss’s Law gives us ∇ · E as a function of the charge density ρ and the electric potential φ. We make use of that, along with (8), in the integrals of (7) applied to our spherical region around the charge. We also use the fact that any volume integral of ρ over a region containing the charge simply yields q. After dividing both sides by 4 π, we end up with:

$$-r^2 \phi'(r) = \omega_m^2 \int_0^r \phi(s)\, s^2\, ds - \frac{q}{4\pi} \qquad (9)$$

Now, we make an educated guess on the basis of our experience of conventional electrostatics that things will be simpler if we write φ in terms of a new function, f, divided by r:

$$\phi(r) = \frac{f(r)}{r} \qquad (10)$$
$$\phi'(r) = \frac{f'(r)}{r} - \frac{f(r)}{r^2} \qquad (11)$$

In terms of the new function f, equation (9) becomes (12); we evaluate (12) at r = 0 to get (12a):

$$f(r) - r f'(r) = \omega_m^2 \int_0^r f(s)\, s\, ds - \frac{q}{4\pi} \qquad (12)$$
$$f(0) = -\frac{q}{4\pi} \qquad \text{(12a)}$$

The derivative of (12) with respect to r gives us (13), and some simple rearrangement gives us (14):

$$-r f''(r) = \omega_m^2 f(r)\, r \qquad (13)$$
$$f''(r) + \omega_m^2 f(r) = 0 \qquad (14)$$

Equation (14) is a very well-known differential equation, whose general real-valued solution is:

$$f(r) = C_1 \cos(\omega_m r) + C_2 \sin(\omega_m r) \qquad (15)$$

Equation (12a) tells us that $C_1 = -q/(4\pi)$.

What about C2? Any value we choose for C2 will yield a valid solution to the problem, but since this term has nothing to do with the point charge, q, we set C2 to zero. As in conventional electromagnetism, the most general solution to a problem often includes some form of radiation that’s merely passing through the region of interest — in this case, radially symmetric radiation that happens to be motionless in the rest frame of the charge.

So we have derived the Riemannian Coulomb potential. We also give the corresponding electric field, E = –∇φ, below.

**Coulomb potential**
$$\phi(r) = -\frac{q}{4\pi r} \cos(\omega_m r) \qquad \text{(Riemannian)}$$
$$\phi(r) = \frac{q}{4\pi r} \qquad \text{(Lorentzian)}$$
**Coulomb field**
$$E(r) = -\frac{q}{4\pi r^2} \left[\cos(\omega_m r) + \omega_m r \sin(\omega_m r)\right] e_r \qquad \text{(Riemannian)}$$
$$E(r) = \frac{q}{4\pi r^2}\, e_r \qquad \text{(Lorentzian)}$$
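A short numerical check (with illustrative values for q and ωm) that the Riemannian pair is self-consistent: the field is minus the gradient of the potential, and equation (9), the integral form of Gauss’s Law, holds:

```python
import math

om, q = 1.5, 2.0   # ωm and the charge q (illustrative values)

def phi(r):
    """Riemannian Coulomb potential φ(r) = −[q/(4πr)] cos(ωm r)."""
    return -q * math.cos(om*r) / (4*math.pi*r)

def E(r):
    """Radial component of the Riemannian Coulomb field."""
    return -q * (math.cos(om*r) + om*r*math.sin(om*r)) / (4*math.pi*r**2)

r, h = 2.3, 1e-5

# E = −φ'(r), checked by a central difference
dphi = (phi(r + h) - phi(r - h)) / (2*h)

# Equation (9): −r² φ'(r) = ωm² ∫₀ʳ φ(s) s² ds − q/(4π), by the midpoint rule
n = 20000
ds = r / n
integral = sum(phi((i + 0.5)*ds) * ((i + 0.5)*ds)**2 for i in range(n)) * ds
lhs = -r**2 * dphi
rhs = om**2 * integral - q / (4*math.pi)
```

The integrand $\phi(s) s^2$ stays finite as $s \to 0$, which is why the midpoint rule behaves despite the potential’s $1/r$ singularity.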

#### A Green’s Function for Riemannian Electromagnetism

The Coulomb potential for a single, motionless point charge allows us, in principle, to find the electric field of any static distribution of charge, simply by integrating over the source of the field. However, it will be useful to have an even more fundamental solution to the equations of Riemannian electrodynamics: one that is associated with an instantaneous “blip” of charge that comes into existence at a certain event in four-space, and then immediately vanishes. Obviously that behaviour violates conservation of charge, but by integrating the solution over the world lines of any number of charges with complete histories, a solution that respects conservation of charge can be found.

A fundamental solution like this is known as a Green’s function.

We begin by looking for four-dimensional rotationally symmetric solutions to the Riemannian Scalar Wave Equation, with no source term. This is the Helmholtz equation in four dimensions, and when we impose four-rotational symmetry we get an ordinary differential equation for a function G of a single variable s, the distance in four-space from the origin:

$$G''(s) + \frac{3}{s}\, G'(s) + \omega_m^2 G(s) = 0 \qquad (16)$$

The general solution to this equation is:

$$G(s) = C_1\, \frac{J_1(\omega_m s)}{s} + C_2\, \frac{Y_1(\omega_m s)}{s} \qquad (17)$$

where $J_1$ and $Y_1$ are Bessel functions of the first and second kind. Although this is a solution to the sourceless equation for $s > 0$, the Bessel function $Y_1$ goes to minus infinity as s approaches zero, which suggests the kind of singular behaviour we would expect for a Green’s function associated with a point charge.

We can explicitly integrate G for a motionless point charge along its entire world line with the help of a change of variable from t, the time coordinate along the world line, to $s = \sqrt{r^2 + t^2}$, the four-space distance from an event on that world line to an event a spatial distance r from the point charge. Using $t = \sqrt{s^2 - r^2}$ and $dt = (s/t)\, ds$, we have:

$$\int_{-\infty}^{\infty} G\left(\sqrt{r^2 + t^2}\right) dt = 2 \int_r^{\infty} G(s)\, \frac{s}{\sqrt{s^2 - r^2}}\, ds = 2 \int_r^{\infty} \frac{C_1 J_1(\omega_m s) + C_2 Y_1(\omega_m s)}{\sqrt{s^2 - r^2}}\, ds = \frac{2 \left[C_1 \sin(\omega_m r) - C_2 \cos(\omega_m r)\right]}{\omega_m r} \qquad (18)$$

We can match this with the Riemannian Coulomb potential of a point particle with charge q by setting $C_1 = 0$ and $C_2 = q\, \omega_m / (8\pi)$.

We’ve done this calculation for a scalar potential, φ, but the result will be most useful if we express it in terms of four-vectors. In those terms, each infinitesimal segment of a particle’s world line makes a contribution to the four-potential A that is parallel to the particle’s four-velocity u. We add a minus sign because $\phi = -A_t$.

**Green’s function**
The particle has charge q. Its world line $y(\tau)$ is parameterised by proper time τ, and its four-velocity is $u(\tau) = \partial_\tau y(\tau)$. The four-potential A is evaluated at event x.
$$dA(x) = -u(\tau)\, \frac{q\, \omega_m}{8\pi}\, \frac{Y_1\!\left(\omega_m |x - y(\tau)|\right)}{|x - y(\tau)|}\, d\tau \qquad \text{(Riemannian)}$$

We won’t give the Lorentzian equivalent here, as it would require a substantial detour to explain all the details and differences. We’ll just note that what’s known as the Liénard-Wiechert potential at a given event depends only on the location and four-velocity of the charge on the intersection of its world line with the past light cone of the event where we’re evaluating A. In other words, as you might expect, in Lorentzian physics A is only affected by information about the particle propagating from the past, at the speed of light.

The Riemannian Green’s function we’ve given here makes no distinction between the past and the future. That will be fine for problems in electrostatics and magnetostatics, but we need to keep in mind that if it’s applied to situations where electromagnetic waves are generated, it will produce solutions containing both incoming and outgoing waves.

#### Electric Dipoles

An electric dipole consists of two point charges of equal strength, one positive and one negative, which are held a fixed distance apart. If the charges are close to each other, then they’ll tend to cancel each other’s Coulomb potential, but there will be a characteristic dipole field remaining. We can simplify the way we think about the shape of this field by studying the limiting case where the two charges are moved ever closer to each other, while the strength of each charge increases. If we define a vector p, the dipole moment, to be the displacement vector pointing from the negative charge to the positive charge multiplied by the (positive) strength of the charge, then we take the limit where p remains constant and finite, but the separation goes to zero while the strength of each charge goes to infinity.

The easiest way to obtain this limit is by taking the derivative of the Coulomb potential along the opposite direction to the chosen dipole moment. The resulting potential is shown in the diagram on the right, and the formulas for the potential and electric field are given in the table below. Here r is a three-dimensional vector from the location of the dipole to the point where we’re evaluating the field, and r is its magnitude.

**Electric Dipole potential**
$$\phi(r) = -\frac{p \cdot r}{4\pi r^3} \left[\cos(\omega_m r) + \omega_m r \sin(\omega_m r)\right] \qquad \text{(Riemannian)}$$
$$\phi(r) = \frac{p \cdot r}{4\pi r^3} \qquad \text{(Lorentzian)}$$
**Electric Dipole field**
$$E(r) = -\frac{3 (p \cdot r)\, r - r^2 p}{4\pi r^5} \left[\cos(\omega_m r) + \omega_m r \sin(\omega_m r)\right] + \frac{\omega_m^2 (p \cdot r)\, r}{4\pi r^3} \cos(\omega_m r) \qquad \text{(Riemannian)}$$
$$E(r) = \frac{3 (p \cdot r)\, r - r^2 p}{4\pi r^5} \qquad \text{(Lorentzian)}$$

If you experienced a sense of déjà vu at the sight of the Riemannian dipole potential, then you’ve probably seen a very similar drawing for the potential of an oscillating dipole in conventional electromagnetism. The static Riemannian dipole’s field is, in fact, precisely the same as the spatial part of the standing wave that can be constructed in conventional electromagnetism by summing incoming and outgoing radiation associated with an oscillating dipole.
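We can also check the limiting procedure numerically: two opposite point charges a small distance apart, each carrying the Riemannian Coulomb potential, reproduce the closed-form dipole potential. The dipole moment, separation and sample point below are illustrative choices:

```python
import math

om = 1.2
p = (0.0, 0.0, 1.0)   # dipole moment of unit strength along z (illustrative)

def phi_point(q, r):
    """Riemannian Coulomb potential of a point charge q at distance r."""
    return -q * math.cos(om*r) / (4*math.pi*r)

def phi_dipole(x):
    """Closed-form Riemannian dipole potential from the table above."""
    r = math.sqrt(sum(c*c for c in x))
    pdotr = sum(p[i]*x[i] for i in range(3))
    return -pdotr * (math.cos(om*r) + om*r*math.sin(om*r)) / (4*math.pi*r**3)

# Opposite charges ±q a small distance d apart along p̂, with q d = |p| = 1
d = 1e-4
q = 1.0 / d
x = (0.7, -0.4, 1.1)
r_plus = math.sqrt(sum((x[i] - 0.5*d*p[i])**2 for i in range(3)))
r_minus = math.sqrt(sum((x[i] + 0.5*d*p[i])**2 for i in range(3)))
approx = phi_point(q, r_plus) + phi_point(-q, r_minus)
```

Shrinking d while holding q d fixed drives the finite-separation error, which scales as d², to zero.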

#### Charged Spherical Shells

Suppose we have a total charge of Q distributed uniformly over a spherical shell of radius R. It’s a well-known result in Lorentzian electromagnetism that the potential outside the sphere is exactly the same as that due to a point charge at the centre of the sphere, while in the interior the potential is constant. However, in Riemannian electromagnetism the result is quite different! Either by explicitly integrating the contributions from across the surface, or by using the appropriate form of Gauss’s Law, we get the following:

**Uniformly charged spherical shell** (shell of radius R, total charge Q)

Potential:

$$\phi(r) = \begin{cases} -\dfrac{Q \cos(\omega_m R) \sin(\omega_m r)}{4\pi \omega_m R\, r} & r < R \\[1ex] -\dfrac{Q \sin(\omega_m R) \cos(\omega_m r)}{4\pi \omega_m R\, r} & r > R \end{cases} \qquad \text{(Riemannian)}$$

$$\phi(r) = \begin{cases} \dfrac{Q}{4\pi R} & r < R \\[1ex] \dfrac{Q}{4\pi r} & r > R \end{cases} \qquad \text{(Lorentzian)}$$

Field:

$$E(r) = \begin{cases} -\dfrac{Q \cos(\omega_m R)}{4\pi \omega_m R\, r^2} \left[\sin(\omega_m r) - \omega_m r \cos(\omega_m r)\right] e_r & r < R \\[1ex] -\dfrac{Q \sin(\omega_m R)}{4\pi \omega_m R\, r^2} \left[\cos(\omega_m r) + \omega_m r \sin(\omega_m r)\right] e_r & r > R \end{cases} \qquad \text{(Riemannian)}$$

$$E(r) = \begin{cases} 0 & r < R \\[1ex] \dfrac{Q}{4\pi r^2}\, e_r & r > R \end{cases} \qquad \text{(Lorentzian)}$$

In the Riemannian case, the exterior potential for the shell is that of a point charge multiplied by a factor of $\sin(\omega_m R)/(\omega_m R)$, while the interior potential has the roles of r and R exchanged.

Though the interior potential is generally not constant, for certain values of R either the interior or exterior potential will be zero. When ωm R is an odd multiple of π/2, or equivalently, when R is an odd multiple of one quarter the minimum wavelength of light, λmin, the interior potential will be zero. When ωm R is a multiple of π, or equivalently, when R is a multiple of half λmin, the exterior potential will be zero.
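A quick numerical illustration of these special radii, taking ωm = 2π so that λmin = 1 (an arbitrary choice of units):

```python
import math

Q = 1.0
om = 2 * math.pi        # choose units with ωm = 2π, so that λmin = 1
lam = 2 * math.pi / om  # the minimum wavelength λmin

def phi_shell(r, R):
    """Riemannian potential of a uniform shell of radius R, total charge Q."""
    if r < R:
        return -Q * math.cos(om*R) * math.sin(om*r) / (4*math.pi*om*R*r)
    return -Q * math.sin(om*R) * math.cos(om*r) / (4*math.pi*om*R*r)

# R an odd multiple of λmin/4: the interior potential vanishes
R1 = 3 * lam / 4
inside = phi_shell(0.3 * R1, R1)

# R a multiple of λmin/2: the exterior potential vanishes
R2 = 2 * lam / 2
outside = phi_shell(3.0 * R2, R2)
```

The cancellations come from cos(ωm R) and sin(ωm R) hitting exact zeros, so `inside` and `outside` are zero to machine precision.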

Of course these exact cancellations are very sensitive to the precise geometry of the charge distribution. In general, though, the exterior potential will be substantially diminished compared to that of a point charge.

#### Charged Solid Spheres

We can integrate our results for spherical shells to obtain the potential and electric field due to a charge Q uniformly distributed throughout a solid sphere.

**Uniformly charged solid sphere** (sphere of radius R, total charge Q)

Potential:

$$\phi(r) = \begin{cases} \dfrac{3Q}{4\pi \omega_m^2 R^3} \left[1 - \left(\cos(\omega_m R) + \omega_m R \sin(\omega_m R)\right) \dfrac{\sin(\omega_m r)}{\omega_m r}\right] & r < R \\[1ex] -\dfrac{3Q \left[\sin(\omega_m R) - \omega_m R \cos(\omega_m R)\right] \cos(\omega_m r)}{4\pi \omega_m^3 R^3\, r} & r > R \end{cases} \qquad \text{(Riemannian)}$$

$$\phi(r) = \begin{cases} \dfrac{Q}{8\pi R} \left[3 - (r/R)^2\right] & r < R \\[1ex] \dfrac{Q}{4\pi r} & r > R \end{cases} \qquad \text{(Lorentzian)}$$

Field:

$$E(r) = \begin{cases} -\dfrac{3Q \left[\cos(\omega_m R) + \omega_m R \sin(\omega_m R)\right]}{4\pi \omega_m^3 R^3 r^2} \left[\sin(\omega_m r) - \omega_m r \cos(\omega_m r)\right] e_r & r < R \\[1ex] -\dfrac{3Q \left[\sin(\omega_m R) - \omega_m R \cos(\omega_m R)\right]}{4\pi \omega_m^3 R^3 r^2} \left[\cos(\omega_m r) + \omega_m r \sin(\omega_m r)\right] e_r & r > R \end{cases} \qquad \text{(Riemannian)}$$

$$E(r) = \begin{cases} \dfrac{Q\, r}{4\pi R^3}\, e_r & r < R \\[1ex] \dfrac{Q}{4\pi r^2}\, e_r & r > R \end{cases} \qquad \text{(Lorentzian)}$$

In the Lorentzian case, as with a spherical shell, the potential and field outside the solid sphere are simply those of a point charge concentrated at the centre of the sphere. Inside the solid sphere, the field is that due to whatever part of the sphere lies closer to the centre than you are, so it increases linearly with the distance from the centre, while the potential is quadratic in the distance from the centre.

In the Riemannian case, the exterior potential and field are those of a point charge multiplied by a factor depending on the size of the sphere:

$$\frac{3 \left[\sin(\omega_m R) - \omega_m R \cos(\omega_m R)\right]}{\omega_m^3 R^3}$$

This factor oscillates with R, and has its first zero at R ≈ 0.715 λmin.

The interior potential consists of a flat term that depends on R but doesn’t oscillate, plus a term that’s oscillatory in both R and r. The oscillating part can be made zero by the right choice of R, leaving the potential flat throughout the sphere, with the first zero at R ≈ 0.445 λmin.
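Both of those numbers come from the first roots of simple transcendental equations, which we can confirm with a few lines of root-finding. With x = ωm R = 2π R/λmin, the exterior factor vanishes where sin x − x cos x = 0 (equivalently, tan x = x), and the interior oscillatory coefficient vanishes where cos x + x sin x = 0:

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Bisection root finder; f(a) and f(b) must have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m
        if (fm > 0) == (fa > 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

# First positive root of sin x − x cos x = 0 beyond x = 0 (exterior factor)
x_ext = bisect(lambda x: math.sin(x) - x*math.cos(x), 3.0, 4.6)
# First root of cos x + x sin x = 0 (interior oscillatory coefficient)
x_int = bisect(lambda x: math.cos(x) + x*math.sin(x), 2.0, 3.0)

R_ext = x_ext / (2*math.pi)  # in units of λmin; ≈ 0.715
R_int = x_int / (2*math.pi)  # in units of λmin; ≈ 0.445
```

The bracketing intervals were chosen by inspection of the sign changes; the recovered radii match the values quoted above.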

#### Capacitors

Suppose we have two concentric charged shells, bearing equal and opposite charges. This setup constitutes a charged capacitor. Real-world capacitors in electronic circuits are usually much more complex than this, but this simple geometry will allow us to make some exact calculations that demonstrate how capacitance works in the Riemannian universe.

**Spherical capacitor** (inner shell of radius $R_1$, total charge –Q; outer shell of radius $R_2$, total charge +Q)

Potential:

$$\phi(r) = \begin{cases} \dfrac{Q \sin(\omega_m r) \left[R_2 \cos(\omega_m R_1) - R_1 \cos(\omega_m R_2)\right]}{4\pi \omega_m r R_1 R_2} & r < R_1 \\[1ex] \dfrac{Q \left[R_2 \cos(\omega_m r) \sin(\omega_m R_1) - R_1 \sin(\omega_m r) \cos(\omega_m R_2)\right]}{4\pi \omega_m r R_1 R_2} & R_1 < r < R_2 \\[1ex] \dfrac{Q \cos(\omega_m r) \left[R_2 \sin(\omega_m R_1) - R_1 \sin(\omega_m R_2)\right]}{4\pi \omega_m r R_1 R_2} & r > R_2 \end{cases} \qquad \text{(Riemannian)}$$

$$\phi(r) = \begin{cases} \dfrac{Q (R_1 - R_2)}{4\pi R_1 R_2} & r < R_1 \\[1ex] \dfrac{Q (r - R_2)}{4\pi r R_2} & R_1 < r < R_2 \\[1ex] 0 & r > R_2 \end{cases} \qquad \text{(Lorentzian)}$$

Field:

$$E(r) = \begin{cases} \dfrac{Q \left[\sin(\omega_m r) - \omega_m r \cos(\omega_m r)\right] \left[R_2 \cos(\omega_m R_1) - R_1 \cos(\omega_m R_2)\right]}{4\pi \omega_m r^2 R_1 R_2}\, e_r & r < R_1 \\[1ex] \dfrac{Q \left[R_1 \cos(\omega_m R_2) \left(\omega_m r \cos(\omega_m r) - \sin(\omega_m r)\right) + R_2 \sin(\omega_m R_1) \left(\omega_m r \sin(\omega_m r) + \cos(\omega_m r)\right)\right]}{4\pi \omega_m r^2 R_1 R_2}\, e_r & R_1 < r < R_2 \\[1ex] \dfrac{Q \left[\omega_m r \sin(\omega_m r) + \cos(\omega_m r)\right] \left[R_2 \sin(\omega_m R_1) - R_1 \sin(\omega_m R_2)\right]}{4\pi \omega_m r^2 R_1 R_2}\, e_r & r > R_2 \end{cases} \qquad \text{(Riemannian)}$$

$$E(r) = \begin{cases} 0 & r < R_1 \\[1ex] -\dfrac{Q}{4\pi r^2}\, e_r & R_1 < r < R_2 \\[1ex] 0 & r > R_2 \end{cases} \qquad \text{(Lorentzian)}$$

Capacitance:

$$C = \frac{8\pi \omega_m R_1^2 R_2^2}{4 R_1 R_2 \sin(\omega_m R_1) \cos(\omega_m R_2) - R_1^2 \sin(2\omega_m R_2) - R_2^2 \sin(2\omega_m R_1)} \qquad \text{(Riemannian)}$$

$$C = \frac{4\pi R_1 R_2}{R_2 - R_1} \qquad \text{(Lorentzian)}$$

In the Lorentzian case, the potential will always rise from a negative value on the inner shell to zero on the outer shell, and the voltage across the device is defined as a positive value:

$$V_{\text{Lorentzian}} = \phi(R_2) - \phi(R_1) = \frac{Q\, (R_2 - R_1)}{4\pi R_1 R_2}$$

The constant of proportionality between the total positive charge and the voltage difference is known as the capacitance of the device, C.

$$C_{\text{Lorentzian}} = \frac{Q}{V_{\text{Lorentzian}}} = \frac{4\pi R_1 R_2}{R_2 - R_1}$$

In the Riemannian case, the voltage difference between the shells will still be proportional to the total charge, and we can define the capacitance in the same way, but the formula (given in the table above) is quite a bit more complex, being sensitive to the length scale set by the minimum wavelength of light. In principle the Riemannian capacitance can be either positive or negative, and even infinite. Infinite capacitance means you can pour as much charge as you want into the device without building up a voltage between the shells themselves, though the electric field will still increase. Negative capacitance implies that the shell with an excess of positive charge is at a lower potential than the shell with an excess of negative charge, so given a connection between the two, the positive shell will draw in yet more positive charge. When you short-circuit an ordinary capacitor, it discharges; when you short-circuit a capacitor with negative capacitance, it increases its charge.

Clearly this could lead to a runaway process, and there’s nothing in our (highly simplified) analysis to indicate when it would come to an end. But in a more detailed model of a circuit with a negative capacitor that included the properties of all the materials involved, there would eventually be complications that cut short the build-up of charge. Similarly, the mere fact that the Riemannian Coulomb potential allows situations in which like charges attract seems to threaten the possibility that all the positive charge in the universe could end up clumped together in one place — but that scenario neglects quantum-mechanical effects that put limits on the agglomeration of identical charged particles.

It’s also important to note that the situation we’ve studied is an idealisation where the shells are perfectly smooth and their charge evenly distributed, on a scale much smaller than the minimum wavelength of light. Any bumps of a greater size than that will produce a device with a mixture of positive and negative capacitance, leading to the kind of cancellations that moderate all electrostatic phenomena in the Riemannian universe.

Furthermore, this whole analysis assumes that any changes in the charge and voltage occur very slowly. We treat capacitors in an alternating current in a later section.
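As a numerical sketch of the Riemannian capacitance formula (with illustrative shell radii): in the limit ωm → 0 it tends to the negative of the Lorentzian capacitance, consistent with the overall sign flip of the Riemannian Coulomb potential at short range, and at finite ωm the sign depends on the geometry:

```python
import math

def C_riemannian(R1, R2, om):
    """Riemannian capacitance of the concentric-shell capacitor."""
    denom = (4*R1*R2*math.sin(om*R1)*math.cos(om*R2)
             - R1**2 * math.sin(2*om*R2)
             - R2**2 * math.sin(2*om*R1))
    return 8*math.pi*om * R1**2 * R2**2 / denom

R1, R2 = 1.0, 2.0                    # illustrative shell radii
C_lor = 4*math.pi*R1*R2 / (R2 - R1)  # Lorentzian capacitance

# ωm → 0 limit: the Riemannian value tends to −C_lor
C_small = C_riemannian(R1, R2, 1e-5)

# At finite ωm the sign depends on the geometry; here it comes out negative
C_finite = C_riemannian(R1, R2, 2.0)
```

The limit works because the denominator is an odd function of ωm whose leading term is $2\omega_m R_1 R_2 (R_1 - R_2)$.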

### Examples from Magnetostatics

#### Linear Current

Suppose we have a steady current I running through a long, thin, straight wire. The Riemannian version of the Ampère-Maxwell Law gives us the curl of the magnetic field, ∇ × B, as a function of the current density and the three-dimensional magnetic potential, A(3). But because we want to think of the current as being concentrated along an infinitesimally thin wire, it’s convenient to convert this law to an integral form, by means of the Kelvin-Stokes theorem, which relates an integral of the curl of a vector field over a surface to a line integral around the boundary of that surface:

$$\int_{\text{Surface}} (\nabla \times B) \cdot n = \int_{\text{Boundary}} B \cdot t \qquad (19)$$

Here n is a unit normal to the surface, and t is a unit tangent to the curve that forms the boundary of the surface, running counterclockwise around the surface when viewed from “above” if our choice for n defines what we mean by “up”.

If we choose as our surface a disk of radius r centred on the wire and perpendicular to it, by symmetry we expect the magnetic potential A(3) to point parallel to the wire and to be a function only of the distance r from the wire. If we choose to have the wire run along the z-axis, we have:

$$A^{(3)}(r) = A(r)\, e_z \qquad (20)$$
$$B(r) = \nabla \times A^{(3)}(r) = \partial_y A(r)\, e_x - \partial_x A(r)\, e_y = A'(r) \left[\frac{y}{r}\, e_x - \frac{x}{r}\, e_y\right] = -A'(r)\, e_\varphi \qquad (21)$$

where $e_\varphi$ is a unit vector field that points counterclockwise around the wire. If we then apply the Kelvin-Stokes theorem, equation (19), and the Ampère-Maxwell Law, we get:

$$I + 2\pi \omega_m^2 \int_0^r A(s)\, s\, ds = -2\pi r A'(r) \qquad (22)$$

Dividing through by 2 π, taking the derivative of this with respect to r, and rearranging slightly we have:

$$A''(r) + \frac{A'(r)}{r} + \omega_m^2 A(r) = 0 \qquad (23)$$

The general solution to this differential equation is:

$$A(r) = C_1 J_0(\omega_m r) + C_2 Y_0(\omega_m r) \qquad (24)$$

where $J_0$ and $Y_0$ are Bessel functions of the first and second kind. The derivatives of these Bessel functions give us:

$$A'(r) = -\omega_m \left[C_1 J_1(\omega_m r) + C_2 Y_1(\omega_m r)\right] \qquad (25)$$

Now, given this result, the limit as r→0 of the right-hand side of equation (22) is:

$$\lim_{r \to 0} \left(-2\pi r A'(r)\right) = -4 C_2 \qquad (26)$$

while the same limit of the left-hand side of equation (22) is simply the current, I. So we have $C_2 = -I/4$. This leaves $C_1$ undetermined, but as with our derivation of the Coulomb potential, we take the $C_1$ term to be a motionless radiation field coming in from the past that has nothing to do with the current I.

**Linear current magnetic potential**
$$A^{(3)}(r) = -\frac{I}{4}\, Y_0(\omega_m r)\, e_z \qquad \text{(Riemannian)}$$
$$A^{(3)}(r) = -\frac{I}{2\pi} \log(r)\, e_z \qquad \text{(Lorentzian)}$$
**Linear current magnetic field**
$$B(r) = -\frac{I \omega_m}{4}\, Y_1(\omega_m r)\, e_\varphi \qquad \text{(Riemannian)}$$
$$B(r) = \frac{I}{2\pi r}\, e_\varphi \qquad \text{(Lorentzian)}$$

The Bessel functions are oscillatory, so the magnetic field around the current reverses direction on a similar length scale to the reversals of the electric field around a point charge. Because the magnetic field has the same direction very close to the current in both the Lorentzian and Riemannian cases, and because the Lorentz force law is also the same in both cases, in theory two sufficiently close (and narrow) wires with currents running in parallel will experience an attractive force. However, as with the electrostatic force the spatial oscillation of the field will lead to significant cancellations over any objects whose width exceeds the wavelength of the oscillation.

In this example we can once again see a link between static Riemannian solutions and the spatial part of oscillating Lorentzian solutions. The Riemannian field around the current is the same as the spatial part of the standing wave around an oscillating current in conventional electromagnetism. Of course an oscillating current in the real world is usually associated with a purely outgoing wave, but in the presence of an incoming wave of the same strength a standing wave will be produced with exactly this form.
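We can verify numerically that a solution of equation (23) behaves as claimed. The regular solution $J_0(\omega_m r)$ is easy to evaluate from its power series; $Y_0$, the piece that actually carries the current’s field, satisfies the same equation but has a more involved series, so this sketch checks the ODE on $J_0$:

```python
import math

om = 1.3   # ωm, illustrative

def J0(x, terms=40):
    """Bessel J0 by its power series (adequate for moderate x)."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -(x/2)**2 / (k + 1)**2
    return total

def A(r):
    """The regular solution of (23): A(r) = J0(ωm r)."""
    return J0(om * r)

r, h = 1.7, 1e-4
A1 = (A(r + h) - A(r - h)) / (2*h)          # A'(r) by central difference
A2 = (A(r + h) + A(r - h) - 2*A(r)) / h**2  # A''(r) by central difference
residual = A2 + A1/r + om**2 * A(r)         # vanishes by equation (23)
```

The residual is zero up to finite-difference error, confirming that (24) solves (23) for the $C_1$ branch.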

#### The Biot-Savart Law

In conventional magnetostatics, the Biot-Savart Law gives the magnetic field produced by a steady current I flowing along a thin wire:

 B = [I / (4 π)] ∫ t × r / r3 dl (27)

Here the variable of integration, l, is the length along the wire, r is a three-dimensional displacement vector from an element of the wire to the point where the field B is being evaluated, and t is a unit tangent vector to the wire.

We will obtain the Riemannian equivalent by making use of the Riemannian Green’s function we derived earlier.

Each element of the wire of length dl will be taken to contain both moving and stationary charges of magnitude dq = ρ dl, where ρ is the linear charge density in the wire. The moving charges will contribute dq u dτ to the Green’s function integral — where u is the four-velocity of each moving charge in this element of wire — but we know that the time component of this vector will be cancelled exactly by an opposite amount of stationary charge present in the wire, which is assumed to be electrically neutral overall. The spatial part of u dτ is just v dt, where v is the ordinary velocity of the moving charges and t is the coordinate time in a frame in which the wire is stationary. And since the current I flowing through the wire is ρ v — or in vector terms, I t = ρ v, where t is a unit tangent vector to the wire — we have:

 (dq u dτ)net = ρ dl v dt = I t dt dl (28)

We can then integrate the Green’s function over t with I t dl as a constant; the integral is the same as that which we used to obtain the Coulomb potential from the Green’s function. Not surprisingly, then, the magnetic potential we get from this integral looks just like a Coulomb potential, and the magnetic field we get by taking the curl of it has the same magnitude (but not direction) as the Coulomb electric field.

Biot-Savart Law for magnetic potential
A(3)(r) = [I / (4 π)] ∫ t cos(ωm r) / r dl (Riemannian)
A(3)(r) = [I / (4 π)] ∫ t / r dl (Lorentzian)
Biot-Savart Law for magnetic field
B(r) = [I / (4 π)] ∫ [cos(ωm r) + ωm r sin(ωm r)] t × r / r3 dl (Riemannian)
B(r) = [I / (4 π)] ∫ t × r / r3 dl (Lorentzian)

An explicit integral of the magnetic potential around an infinite straight wire using the Biot-Savart Law gives a result in agreement with the formula we obtained previously.
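That agreement can be spot-checked numerically. The sketch below (our own construction, for sample values of ωm, I and r) sums the oscillatory Biot-Savart integrand between successive zeros of the cosine and accelerates the resulting alternating series by repeatedly averaging its partial sums, reproducing the closed form –[I / 4] Y0(ωm r):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import y0

# Numerical Biot-Savart integral along an infinite straight wire, compared with
# the closed form -(I/4) Y0(wm r).  The integrand oscillates and decays slowly,
# so we integrate piece by piece between successive zeros of the cosine and
# accelerate the resulting alternating series (Euler-style repeated averaging).
wm, I, r = 1.0, 1.0, 1.0
f = lambda l: np.cos(wm * np.sqrt(r * r + l * l)) / np.sqrt(r * r + l * l)

phases = np.pi / 2 + np.pi * np.arange(200)        # cosine vanishes at these phases
breaks = np.sqrt((phases[phases > wm * r] / wm) ** 2 - r * r)
head = quad(f, 0.0, breaks[0], epsabs=1e-12)[0]
pieces = [quad(f, a, b, epsabs=1e-12)[0] for a, b in zip(breaks[:-1], breaks[1:])]
partial = np.cumsum(pieces)
for _ in range(12):                                # Euler-style acceleration
    partial = 0.5 * (partial[:-1] + partial[1:])
A_numeric = (I / (4 * np.pi)) * 2 * (head + partial[-1])   # wire symmetric in l
A_closed = -(I / 4) * y0(wm * r)
```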

#### Magnetic Dipoles

A magnetic dipole is a system that produces a certain kind of simple, highly symmetrical magnetic field. A small loop of circulating current and a charged particle with quantum-mechanical spin are examples of this, but many systems that possess more complicated fields will look like magnetic dipoles from a distance. For a loop of current, the magnetic moment, which we’ll call μ, is defined as a vector normal to the loop whose magnitude is the product of the area of the loop and the strength of the circulating current. The convention is that the current circulates in the direction of the fingers of the right hand when the thumb is aligned with the magnetic moment vector. The pure dipole field can be taken either as the dominant term (that is, the term that drops off most slowly with distance) in the field from a finite loop, or as the field in the limiting case when the area of the loop shrinks to zero while the current goes to infinity, with the product of the two remaining finite.

In Lorentzian electromagnetism, it turns out that the magnetic field of a magnetic dipole takes precisely the same mathematical form as the electric field of an electric dipole. However, that’s impossible in the Riemannian case, because the magnetic field B must satisfy ∇ · B = 0 everywhere — which is to say that lines of magnetic flux form unbroken loops — but that isn’t true of the Riemannian electric field even in a vacuum, and the electric dipole field has lines of electric flux starting and ending far from the dipole itself.

We can use the Biot-Savart Law to find the magnetic dipole potential in the limiting case of a small current loop. As with the electric dipole, we take the derivative of an appropriate quantity to obtain the limit. In this case, we integrate — over half the current loop — the sum of the contribution from an element of the loop and the element directly opposite it, where the current will be flowing in the opposite direction. In the limit of a small loop, that sum is just the directional derivative across the loop of 1/r or cos(ωm r)/r, evaluated at the centre of the loop, then multiplied by the diameter of the loop and the tangent vector to the loop.

Magnetic Dipole potential
μ is magnetic dipole moment
A(3)(r) = [μ × r / (4 π r3)] [cos(ωm r) + ωm r sin(ωm r)] (Riemannian)
A(3)(r) = μ × r / (4 π r3) (Lorentzian)
Magnetic Dipole field
B(r) = [(3 (μ · r) r – r2 μ) / (4 π r5)] [cos(ωm r) + ωm r sin(ωm r)] – [ωm2 (μ × r) × r / (4 π r3)] cos(ωm r) (Riemannian)
B(r) = (3 (μ · r) r – r2 μ) / (4 π r5) (Lorentzian)

In Lorentzian electromagnetism, although not all materials can be magnetised, the conditions that allow large numbers of magnetic dipoles (generally, the spins of electrons) to combine to produce a much stronger field are not all that stringent. So long as the magnetic moment vectors of a collection of dipoles are parallel, all their contributions to the external magnetic field will reinforce each other. But because the Riemannian magnetic dipole field switches directions on a very small length scale, in any collection of dipoles there will be a huge amount of cancellation between their fields — and the combined field will again have the same kind of spatial oscillations. In the Riemannian universe, there can be no equivalent of our permanent magnets with fields that sustain a force in a single direction over a long distance.

#### Solenoids and Inductance

A solenoid is a helical coil of wire. We will approximate the field inside and outside the coil when there is a steady current flowing through it, assuming that the solenoid is so long that we can neglect precisely what happens at the ends. In effect, what we will analyse is an infinitely long solenoid, which is easier to deal with than a finite one because we can approximate it as having both translational symmetry along its axis and rotational symmetry around the axis.

The most general solution for the magnetic potential and magnetic field with this kind of cylindrical symmetry, and with the magnetic field pointing along the z-axis, is:

 A(3)(r) = [a J1(ωm r) + b Y1(ωm r)] eφ (29a)
 B(r) = [a ωm J0(ωm r) + b ωm Y0(ωm r)] ez (29b)

However, we need to allow the solutions to be different inside and outside the coil, so we will have four coefficients, aint, bint, aext and bext, to find. The need for the solution to be finite at r = 0 means bint = 0, and we require A(3) to be continuous at r = R, the radius of the coil. We get a third relationship by applying Ampère’s Law to a thin vertical rectangle that encloses the current flowing through the n windings along a unit height of the solenoid; this tells us that the difference between the B field immediately inside and outside the coil is equal to that current.

Obtaining a fourth equation to completely fix the solution takes a bit more work. It’s not hard to integrate the contribution to A(3) from the Biot-Savart Law along a vertical strip of the coil, but then a precise expression for the integral around the coil is intractable. But we can obtain a first-order Taylor series, in r, for the contribution to A(3) at a point a small distance from the centre of the coil, and then integrate that around the entire coil. Matching that Taylor series to an equivalent Taylor series obtained from our general solution gives us the value of aint, and then we can solve the other equations to determine all the coefficients. It turns out that aext = 0, so we have a single term in both the interior and exterior solutions.

In conventional electromagnetism, the magnetic field outside an infinite solenoid is zero, but that is not generally true in the Riemannian case.

Long solenoid
Solenoid has radius R, current I and n windings per unit length.
Axis of solenoid coincides with the z-axis.
Long solenoid, magnetic potential
A(3)(r) =
 –½ n I π R Y1(ωm R) J1(ωm r) eφ r < R –½ n I π R J1(ωm R) Y1(ωm r) eφ r > R
(Riemannian)
A(3)(r) =
 (n I r)/2 eφ r < R (n I R2)/(2 r) eφ r > R
(Lorentzian)
Long solenoid, magnetic field
B(r) =
 –½ n I π ωm R Y1(ωm R) J0(ωm r) ez r < R –½ n I π ωm R J1(ωm R) Y0(ωm r) ez r > R
(Riemannian)
B(r) =
 n I ez r < R 0 r > R
(Lorentzian)
Long solenoid, total magnetic flux within coil
Φ = –n I π2 R2 J1m R) Y1m R) (Riemannian)
Φ = n I π R2 (Lorentzian)
Long solenoid, inductance
For solenoid of length l.
L = –n2 π2 R2 l J1m R) Y1m R) (Riemannian)
L = n2 π R2 l (Lorentzian)

In the table above, we’ve included the total magnetic flux that threads through the solenoid; this is the area integral of the magnetic field B over a cross-section perpendicular to the axis.
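As a consistency check on the Riemannian solution in the table, the interior and exterior potentials agree at r = R, and the Bessel Wronskian J1(x) Y0(x) – J0(x) Y1(x) = 2/(π x) makes the jump in B across the windings exactly n I, as Ampère's Law requires. A sketch with arbitrary sample parameters (assuming SciPy is available):

```python
import numpy as np
from scipy.special import j0, j1, y0, y1

# Consistency checks on the Riemannian long-solenoid solution, for arbitrary
# sample values of n, I, R and wm.  Continuity of A at r = R, and the Ampere
# jump B_in(R) - B_out(R) = n I via the Wronskian J1 Y0 - J0 Y1 = 2/(pi x).
n, I, R, wm = 50.0, 2.0, 3.0, 1.3

A_in = lambda r: -0.5 * n * I * np.pi * R * y1(wm * R) * j1(wm * r)
A_out = lambda r: -0.5 * n * I * np.pi * R * j1(wm * R) * y1(wm * r)
B_in = lambda r: -0.5 * n * I * np.pi * wm * R * y1(wm * R) * j0(wm * r)
B_out = lambda r: -0.5 * n * I * np.pi * wm * R * j1(wm * R) * y0(wm * r)

continuity_gap = A_in(R) - A_out(R)      # should vanish
ampere_jump = B_in(R) - B_out(R)         # should equal n I
```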

If the current flowing through the solenoid starts changing, then so will the magnetic field, so via the Maxwell-Faraday Law an electric field will develop, with a curl proportional to the time rate of change of the magnetic field. Then by the Kelvin-Stokes theorem, the integral of the electric field around any loop that encloses that changing magnetic field will be proportional to the integral over the area of the loop of the rate of change of the magnetic field. But that area integral is just the time rate of change of the total magnetic flux through the loop. So each loop enclosing a changing quantity of flux will have an electromotive force around it that is proportional to the rate of change of flux. In fact, the constant of proportionality is simply minus 1.

EMF = –dΦ/dt

Applying this argument to the coils that constitute our solenoid, if the current flowing through the solenoid changes then a voltage will be produced across the leads of the solenoid that is proportional to the current’s rate of change. The opposite of the constant of proportionality is known as the inductance of the solenoid, L.

EMF = –L dI/dt

The Riemannian inductance for a solenoid of length l (and hence with a total of nl coils) is:

LRiemannian = n l Φ / I = –n2 π2 R2 l J1m R) Y1m R)

while the Lorentzian value is:

LLorentzian = n l Φ / I = n2 π R2 l

The product of Bessel functions in the Riemannian inductance can be either positive or negative, allowing an inductance of either sign. Negative inductance, like negative capacitance, can lead to runaway effects: an increase in the current through a negative inductor will produce a voltage that drives the current even higher, until damage to the materials or other effects put a brake on the current’s growth.
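A short sketch (sample radii of our own choosing, with ωm set to 1) shows that the product J1(ωm R) Y1(ωm R) really does take both signs as R varies, so coils differing only slightly in radius can be positive or negative inductors:

```python
import numpy as np
from scipy.special import j1, y1

# The product J1(wm R) Y1(wm R) takes both signs as the coil radius R varies,
# so the Riemannian inductance formula allows either sign.
wm = 1.0
product = lambda R: j1(wm * R) * y1(wm * R)
p_neg = product(1.0)    # J1(1) > 0, Y1(1) < 0: product negative
p_pos = product(3.0)    # J1(3) > 0, Y1(3) > 0: product positive
```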

But as with the capacitor, our model here is highly idealised. The difference in the geometry of the coil between a positive and negative inductor is about one minimum wavelength of light, so if the wire in the coil is thicker than that, or deviates from a perfect circle by more than that distance, the solenoid will effectively consist of both positive and negative inductors — leading, as usual, to a significant degree of cancellation between the two.

What’s more, all our formulas here assume a situation that can be approximated as a steady current. We treat solenoids carrying an alternating current in a later section.

### Electromagnetic Energy Flow

Runaway effects of the kind we see in systems with negative capacitance or inductance would clearly violate conservation of energy in our own universe, but in the Riemannian universe, where the energy associated with matter (including the electromagnetic field) has the opposite sense to kinetic and potential energy, it’s trickier to follow exactly what’s going on. We need to be able to quantify the energy stored in, and transported by, the electromagnetic field. But in order to do this, first we need to take a short detour into a Lagrangian treatment of Riemannian electromagnetism.

#### Lagrangian for Riemannian Electromagnetism

The Lagrangian for a field theory such as electromagnetism is a quantity L that is a function of the field and its derivatives, whose integral over a region of four-space is stationary under variations of the field, when the field satisfies the appropriate equations. If we integrate L to obtain what’s known as the action, S:

S(Ak) = ∫ L(Ak)

then when A satisfies the field equations, S should be, to first order, unchanged by any small variation in A, just like a function of an ordinary variable at a local maximum or minimum.

If the Lagrangian is expressed as a function of the field components Ak and their derivatives ∂j Ak, then — so long as the field vanishes on the boundary of the region of integration, or there are cyclic boundary conditions — the requirement for the action to be stationary is equivalent to the Euler-Lagrange equations:

j [ ∂∂j Ak L ] = ∂Ak L

We will define the Riemannian Proca Lagrangian, LRP, in two parts: a field Lagrangian, Lfield, and an interaction term, Linter. Below we also give the Lorentzian equivalents.

Riemannian Proca Lagrangian
Lfield = ¼ Fij Fij – ½ ωm2 Aa Aa = ½ (|B|2 + |E|2) – ½ ωm2 (|A(3)|2 + φ2)
Linter = –Ak jk = –A(3) · j(3) + φ ρ
LRP = Lfield + Linter (Riemannian)
Maxwell Lagrangian
Lfield = –¼ Fij Fij = –½ (|B|2 – |E|2)
Linter = Ak jk = A(3) · j(3) – φ ρ
LMaxwell = Lfield + Linter (Lorentzian)

The Euler-Lagrange equations for the full Lagrangians correspond to the Riemannian Proca equation or Maxwell’s equation, respectively.

#### The Stress-Energy Tensor for Electromagnetism

We can find the stress-energy tensor for the Riemannian electromagnetic field, which we will call T, by means of the formula:

Stress-Energy Tensor From Field Lagrangian
Tab = –Lfield gab + 2 ∂gab Lfield (Riemannian)
Tab = Lfield gab – 2 ∂gab Lfield (Lorentzian)

Here gab and gab are components of the metric tensor for four-space, with either two lower or two upper indices. In orthonormal coordinates, the matrices of these components are just the 4×4 identity matrix — that is, 1 when a=b and 0 otherwise. But if we think of the components of the dual vector version of our four-potential field, Ak, as the fundamental variables for the Lagrangian, then every time we raise an index to get something like the term Aa Aa, we’re making use of gab (using the Einstein Summation Convention):

Aa Aa = Aa (gab Ab)

So if we view the Lagrangian as a function of the components Ak of the four-potential and the components gab of the metric tensor, the derivative in terms of the metric, evaluated at the actual metric, gives us the second term in the stress-energy tensor.

It would be too much of a detour to explain in any detail why this construction works, but it ultimately fits in with the way Einstein’s equation for gravity — which relates a tensor derived from the metric to the stress-energy tensor of any matter present — can itself be derived from an appropriate Lagrangian. The crucial point is that the complete stress-energy tensor constructed this way (one that includes all matter) will have zero divergence, which means energy and momentum will be conserved.

We will express the result of this calculation both in terms of the electromagnetic field F and the four-potential A, and in terms of the three-dimensional fields B, E, φ and A(3).

Riemannian Electromagnetic Stress-Energy Tensor
Tab = –Lfield gab + Fac Fbc – ωm2 Aa Ab
In components (the tensor is symmetric):
Ttt = [|E|2–|B|2+ ωm2(|A(3)|2–φ2)]/2
Ttx = ByEz–BzEy+ ωm2Axφ
Tty = BzEx–BxEz+ ωm2Ayφ
Ttz = BxEy–ByEx+ ωm2Azφ
Txx = [|B|2–|E|2+ ωm2(|A(3)|2+φ2)]/2 + Ex2–Bx2–ωm2Ax2
Txy = ExEy–BxBy– ωm2AxAy
Txz = ExEz–BxBz– ωm2AxAz
Tyy = [|B|2–|E|2+ ωm2(|A(3)|2+φ2)]/2 + Ey2–By2–ωm2Ay2
Tyz = EyEz–ByBz– ωm2AyAz
Tzz = [|B|2–|E|2+ ωm2(|A(3)|2+φ2)]/2 + Ez2–Bz2–ωm2Az2
Lorentzian Electromagnetic Stress-Energy Tensor
Tab = Lfield gab + Fac Fbc
In components (the tensor is symmetric):
Ttt = [|E|2+|B|2]/2
Ttx = ByEz–BzEy
Tty = BzEx–BxEz
Ttz = BxEy–ByEx
Txx = [|E|2+|B|2]/2 – Bx2 – Ex2
Txy = –ExEy–BxBy
Txz = –ExEz–BxBz
Tyy = [|E|2+|B|2]/2 – By2 – Ey2
Tyz = –EyEz–ByBz
Tzz = [|E|2+|B|2]/2 – Bz2 – Ez2

The divergence of T for the electromagnetic field alone is not zero when j is not zero. Rather, we have:

bTab + Fac jc = 0

The second term corresponds to the density of the four-force acting on the current, which in turn will be the divergence of the charged matter’s own stress-energy tensor. So the divergence of the sum of the stress-energy tensors for the electromagnetic field and the matter on which it acts will be zero.

#### Energy Density and the Poynting Vector

The stress-energy tensors can look a bit intimidating, but for now let’s ignore the terms that lie beyond the first row and column, which describe pressure and shear stress. The terms we’re interested in are Ttt, which gives the energy density u in the electromagnetic field, and the vector S = (Ttx, Tty, Ttz), known as the Poynting vector, which describes the rate of energy flow across a unit area. (Note that we have to raise a t index to get the Poynting vector, which changes the sign in the Lorentzian case).

Electromagnetic energy density
u = [|E|2–|B|2 + ωm2(|A(3)|2–φ2)]/2 (Riemannian)
u = [|E|2+|B|2]/2 (Lorentzian)
Poynting vector
S = B × E + ωm2 φ A(3) (Riemannian)
S = E × B (Lorentzian)

Let’s look at the energy density and flow in a few simple examples.

##### Energy in Plane Waves

For a plane wave, we have the description in four-space:

A(x) = A0 sin(k · x)
F(x) = (k ∧ A0) cos(k · x)

where |k| = ωm and A0 · k = 0. From this, we can compute the stress-energy tensor:

Tab = –Lfield gab + Fac Fbc – ωm2 Aa Ab
T = A02 kk cos(k · x)2 + ωm2 [A0A0 – (A02 / 2) I4] cos(2 k · x)

If we average T over one cycle, cos(2 k · x) becomes zero while cos(k · x)2 becomes 1/2, so we have:

<T> = ½ A02 kk

That’s just the stress-energy tensor we’d expect of a uniform cloud of matter with a four-velocity u = k / ωm and a mass-energy density (in its rest frame) of ½ A02 ωm2. If we define u that way, and also define a unit vector a0 = A0/A0, we can write the stress-energy tensor as:

T = A02 ωm2 [uu cos(k · x)2 + (a0a0 – ½ I4) cos(2 k · x)]

Suppose the light has an angular time frequency of ω = kt = ωm ut. Then the energy density u (not to be confused with the four-velocity u or any of its components) is:

u = Ttt = A022 cos(k · x)2 + ωm2 (a0, t2 – ½) cos(2 k · x)]
= ½ A022 + (ω2 + (2 a0, t2 – 1) ωm2) cos(2 k · x)]

Clearly there are values for ω and a0, t such that the energy density will be negative some of the time: for example, if a0, t = 0 and ω < ωm / √2. But the average energy density over any cycle will still be positive:

<u> = ½ A02 ω2

We can see from <T> that the same kind of average of the Poynting vector S will be parallel to the spatial projection of the propagation vector k, which in turn is parallel to the ordinary velocity v that corresponds to the four-velocity u = k / ωm. Specifically:

<S> = ½ A02 ω2 v
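Averaging the energy density numerically over one cycle illustrates both points. For sample parameters of our own choosing with a0, t = 0.3 and ω < ωm/√2, the density dips below zero at some phases, yet its cycle average is exactly ½ A02 ω2:

```python
import numpy as np

# Cycle average of the plane-wave energy density u, sampled uniformly over one
# full cycle of the phase k.x.  Sample values: a0_t = 0.3 and w < wm/sqrt(2),
# so u goes negative at some phases even though its average is (1/2) A0^2 w^2.
A0, w, wm, a0_t = 1.5, 0.6, 1.0, 0.3
phase = np.linspace(0.0, 2.0 * np.pi, 100001)[:-1]    # drop duplicate endpoint
u = A0**2 * (w**2 * np.cos(phase)**2
             + wm**2 * (a0_t**2 - 0.5) * np.cos(2 * phase))
u_avg = u.mean()
u_min = u.min()
```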
##### Energy in Capacitors

We can apply our formula for the energy density in an electric field to the spherical capacitor that we analysed earlier. In the Lorentzian case, the electric field is zero outside the capacitor, and the energy density depends only on the field, so we can get a finite answer from a straightforward integration.

In the Riemannian case, the situation is a bit trickier. The potential and the electric field extend beyond the capacitor, and the energy density computed from them is non-zero, out to infinity. The energy contained within a sphere of a given radius S >> R2 is cyclic in S, and the peak-to-peak distance of these cycles does not grow smaller with distance, so the integral to infinity is undefined. But we can get a sensible finite answer by setting the cyclic part to zero and taking the asymptotic value of the remainder.

Spherical capacitor
Inner shell of radius R1, total charge –Q
Outer shell of radius R2, total charge +Q
Spherical capacitor, capacitance
C = 8 π ωm R12 R22 /
[4 R1R2 sin(ωm R1) cos(ωm R2) – R12 sin(2 ωm R2) – R22 sin(2 ωm R1)]
(Riemannian)
C = (4 π R1R2) / (R2R1) (Lorentzian)
Spherical capacitor, energy density in electric field
u(r) =
 (|E(r)|2 – ωm2 φ(r)2) / 2 See capacitor field calculations.
(Riemannian)
u(r) =
 0 r < R1 Q2 / (32 π2 r4) R1 < r < R2 0 r > R2
(Lorentzian)
Spherical capacitor, total energy in electric field
<U> =
 ∫0R2 4 π r2 u(r) dr + ∫R2S 4 π r2 u(r) dr Averaged for S >> R2
= [Q2 / (16 π ωm R12 R22) ][R12 sin(2 ωm R2)+R22 sin(2 ωm R1) –4 R1R2 sin(ωm R1) cos(ωm R2)]
= –Q2 / (2 C) (Riemannian)
U = ∫R1R2 4 π r2 u(r) dr
= [Q2 / (8 π)] [1/R1 – 1/R2]
= Q2 / (2 C) (Lorentzian)

The answers we get in both the Riemannian and Lorentzian cases are compatible with the potential energy that we expect for the capacitor, if we integrate the energy required to charge it up from zero charge to a total charge of Q:

Potential energy = ∫0Q V(q) dq = ∫0Q (q/C) dq = Q2 / (2 C)

In the Lorentzian case, this is exactly the energy stored in the electric field. In the Riemannian case, it’s the opposite! The reason, of course, is that potential energy in the Riemannian universe has the opposite sense to electromagnetic field energy.
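The sign relationship can be confirmed directly from the tabulated formulas. For arbitrary sample values of R1, R2, ωm and Q (our own choices), the Riemannian field energy evaluates to the opposite of Q2 / (2 C):

```python
import numpy as np

# Cross-check of the spherical-capacitor table: the tabulated Riemannian field
# energy equals the opposite of the potential energy Q^2/(2C), since field
# energy has the opposite sense in the Riemannian universe.  Sample values.
R1, R2, wm, Q = 1.0, 2.5, 1.7, 3.0
denom = (4 * R1 * R2 * np.sin(wm * R1) * np.cos(wm * R2)
         - R1**2 * np.sin(2 * wm * R2) - R2**2 * np.sin(2 * wm * R1))
C = 8 * np.pi * wm * R1**2 * R2**2 / denom
U_field = (Q**2 / (16 * np.pi * wm * R1**2 * R2**2)) * (
    R1**2 * np.sin(2 * wm * R2) + R2**2 * np.sin(2 * wm * R1)
    - 4 * R1 * R2 * np.sin(wm * R1) * np.cos(wm * R2))
```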

##### Energy in Inductors

The calculations for the energy stored in a solenoid follow the same general pattern as that for a capacitor. In the Lorentzian case, there is a constant magnetic field over a finite volume, making the total energy in the field very easy to compute.

In the Riemannian case, we can’t neglect the field outside the solenoid, and the integral over an infinite region doesn’t converge, but if we integrate out to a radius S the total energy enclosed cycles between maxima and minima that, in the limit of large S, approach fixed values. In the table below, we use an asymptotic expression for a product of Bessel functions of S in terms of a cosine function. The average value over a cycle of this cosine term (which we can easily find, just by setting that term to zero) then gives a result that accords with the energy from the inductance.

Long solenoid
Solenoid has radius R, length l, current I and n windings per unit length.
Long solenoid, inductance
L = –n2 π2 R2 l J1m R) Y1m R) (Riemannian)
L = n2 π R2 l (Lorentzian)
Long solenoid, energy density in magnetic field
u(r) =
 1/8 n2 I2 π2 R2 ωm2 Y1(ωm R)2 (J1(ωm r)2 – J0(ωm r)2) r < R 1/8 n2 I2 π2 R2 ωm2 J1(ωm R)2 (Y1(ωm r)2 – Y0(ωm r)2) r > R
(Riemannian)
u(r) =
 ½ n2 I2 r < R 0 r > R
(Lorentzian)
Long solenoid, total energy in magnetic field
<U> =
 2 π l [ ∫0R u(r) r dr + ∫RS u(r) r dr ] Averaged for S >> R
= –1/4 n2 I2 π3 R3 l ωm [Y1m R)2 J0m R) J1m R) – J1m R)2 [Y0m R) Y1m R) – (S/R) Y0m S) Y1m S)]]
≈ –1/4 n2 I2 π3 R3 l ωm [Y1m R)2 J0m R) J1m R) – J1m R)2 [Y0m R) Y1m R) – cos(2 ωm S) / (π ωm R)]]
= ½ n2 I2 π2 R2 l J1m R) Y1m R)
= –½ L I2 (Riemannian)
U = π R2 l u(0)
= ½ n2 I2 π R2 l
= ½ L I2 (Lorentzian)

For an inductor, the potential energy is found by computing the work we need to do to bring the current up from zero to some final steady value I. As we change the current from i to i+di in a time dt, we move a charge i dt against a voltage V = L di/dt. So we have:

Potential energy = ∫0I V(t) i dt = ∫0I L (di/dt) i dt = ½ L I2

As we’d expect, the potential energy computed this way agrees with the total energy in the magnetic field in the Lorentzian case, but is the opposite of the energy in the magnetic field in the Riemannian case.
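The collapse of the averaged integral down to the inductance result is just the Bessel Wronskian at work, as this sketch with sample parameters (assuming SciPy is available) confirms:

```python
import numpy as np
from scipy.special import j0, j1, y0, y1

# With the cyclic cosine term set to zero, the averaged solenoid field energy
# reduces, via the Wronskian J0(x) Y1(x) - J1(x) Y0(x) = -2/(pi x), to
# (1/2) n^2 I^2 pi^2 R^2 l J1(wm R) Y1(wm R).  Sample parameters throughout.
n, I, R, l, wm = 40.0, 1.5, 2.0, 10.0, 1.3
x = wm * R
U_avg = -0.25 * n**2 * I**2 * np.pi**3 * R**3 * l * wm * (
    y1(x)**2 * j0(x) * j1(x) - j1(x)**2 * y0(x) * y1(x))
U_closed = 0.5 * n**2 * I**2 * np.pi**2 * R**2 * l * j1(x) * y1(x)
```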

### Oscillating Solutions Derived From Magnetostatic Ones

Suppose we have a magnetostatic solution of the Riemannian Proca equation, with a four-potential AMS and a source four-current jMS. What we mean by “magnetostatic” is that both AMS and jMS are unchanging in time, and that the fields are solely magnetic, AMSt = 0. We’ve looked at three such solutions: a steady linear current, a magnetic dipole, and a solenoid with a steady current.

Now suppose we take that solution and in place of ωm, the maximum angular frequency of Riemannian light, we substitute a smaller value k, giving us AMS, k and jMS, k, which satisfy the equation:

x2AMS, k + ∂y2AMS, k + ∂z2AMS, k + k2 AMS, k + jMS, k = 0

We then form an oscillating solution:

A = AMS, k cos(ωt)
j = jMS, k cos(ωt)

with an angular time frequency of ω, such that:

k2 + ω2 = ωm2

The new A and j will satisfy the RVWS equation:

x2A + ∂y2A + ∂z2A + ∂t2A + ωm2 A + j
= cos(ωt) [∂x2AMS, k + ∂y2AMS, k + ∂z2AMS, k – ω2 AMS, k + ωm2 AMS, k + jMS, k]
= cos(ωt) [∂x2AMS, k + ∂y2AMS, k + ∂z2AMS, k + k2 AMS, k + jMS, k]
= 0

What about the transverse condition? Our magnetostatic solution satisfies that, with no time component:

x AMS, kx + ∂y AMS, ky + ∂z AMS, kz = 0

After multiplying AMS, k by cos(ωt) this will still be true, and of course we still have At=0. So our new oscillatory solution is a genuine solution of the Riemannian Proca equation.
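A finite-difference spot check (our own construction, with arbitrary sample values of k and of the evaluation point, assuming SciPy supplies Y0) confirms that the oscillating linear-current solution satisfies the source-free RVWS equation away from the wire:

```python
import numpy as np
from scipy.special import y0

# Finite-difference check that A_z(r, t) = -(I/4) Y0(k r) cos(w t) satisfies
# the source-free RVWS equation away from the wire when k^2 + w^2 = wm^2.
# In cylindrical coordinates the spatial Laplacian of A_z(r) is A_z'' + A_z'/r.
wm, k, I = 1.0, 0.8, 1.0
w = np.sqrt(wm**2 - k**2)
A_z = lambda r, t: -(I / 4) * y0(k * r) * np.cos(w * t)

r0, t0, h = 2.3, 0.7, 1e-4
lap = ((A_z(r0 + h, t0) - 2 * A_z(r0, t0) + A_z(r0 - h, t0)) / h**2
       + (A_z(r0 + h, t0) - A_z(r0 - h, t0)) / (2 * h * r0))
dtt = (A_z(r0, t0 + h) - 2 * A_z(r0, t0) + A_z(r0, t0 - h)) / h**2
residual = lap + dtt + wm**2 * A_z(r0, t0)
```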

In all of the above, we could just as well have used sin(ωt) rather than cos(ωt). It also makes no difference whether we use the t direction in this construction, or any other direction in four-space along which the solution is unchanging and the four-potential’s component is zero.

We can get the same kind of oscillating Lorentzian solution from our original magnetostatic Riemannian solution by a very similar process. In Lorentzian electromagnetism, the four-potential doesn’t appear in Maxwell’s equations, and its only physical significance comes through the electromagnetic field F. But different four-potentials A can give rise to exactly the same F, so we’re free to make certain kinds of changes to A without changing the physics; this is known as gauge freedom. One convenient approach to gauge freedom is to choose an extra condition that A must satisfy, and there are various choices that make the calculations easier in various contexts. One such choice is known as the Lorenz gauge condition — that’s “Lorenz” not “Lorentz”, they’re two completely different people! — which requires:

x Ax + ∂y Ay + ∂z Az + ∂t At = 0

This is a Lorentzian version of the transverse condition that we impose on every Riemannian vector wave. So the connections between the two kinds of electromagnetism become much clearer if we do our Lorentzian electromagnetism in Lorenz gauge, where Maxwell’s equations are equivalent to the following equations for the four-potential:

Maxwell’s Equations for Four-Potential in Lorenz Gauge
x2A + ∂y2A + ∂z2A – ∂t2A + j = 0 (LVWS)
x Ax + ∂y Ay + ∂z Az + ∂t At = 0 (Lorenz)

If we take our original Riemannian magnetostatic solution, AMS, for a four-current jMS, we can get an oscillating Lorentzian solution as follows. We substitute any frequency ω for ωm, to obtain AMS, ω and jMS, ω, then we multiply them by cos(ωt):

AL = AMS, ω cos(ωt)
jL = jMS, ω cos(ωt)

These functions will then satisfy the Lorentzian vector wave equation with source (LVWS):

x2AL + ∂y2AL + ∂z2AL – ∂t2AL + jL
= cos(ωt) [∂x2AMS, ω + ∂y2AMS, ω + ∂z2AMS, ω + ω2 AMS, ω + jMS, ω]
= 0

Since there is no time component to either four-potential, the fact that AMS, ω meets the transverse condition is enough for AL to meet the Lorenz condition.

#### Linear Alternating Current

If we apply the method we have just described to the four-potential for a steady current through a linear conductor, we obtain the solution for an oscillating standing wave field around a linear conductor carrying an alternating current.

Linear Alternating Current, Standing Wave Solution
Current I0 cos(ωt) runs along the z-axis
For the Riemannian solution, k2 + ω2 = ωm2
Linear AC magnetic potential
A(3)(r) = –[I0 / 4] Y0(kr) cos(ωt) ez (Riemannian)
A(3)(r) = –[I0 / 4] Y0(ωr) cos(ωt) ez (Lorentzian)
Linear AC, magnetic and electric fields
B(r) = –[I0 k / 4] Y1(kr) cos(ωt) eφ
E(r) = –[I0 ω / 4] Y0(kr) sin(ωt) ez (Riemannian)
B(r) = –[I0 ω / 4] Y1(ωr) cos(ωt) eφ
E(r) = –[I0 ω / 4] Y0(ωr) sin(ωt) ez (Lorentzian)

A standing wave solution has a fixed form in space and simply oscillates in time. This is the kind of wave we’d expect if the wire was sitting in a cylindrical cavity. But what if we want a travelling wave solution instead? A standing wave can be formed as the sum or difference of ingoing and outgoing travelling waves, and conversely the ingoing and outgoing waves can be recovered as the sum or difference of those standing waves, so if we can find a second standing wave solution, we should be able to construct the travelling waves.

For the second standing wave solution, we go back to our original calculation for a linear current, and use the sourceless solution that is completely independent of the strength of the current. This amounts to changing the Bessel function Y0 into J0 in our potential above. If we also make the new solution 90 degrees out of phase with the original, by changing the cos(ωt) factor to sin(ωt), then add the two solutions together, we end up with an outgoing travelling wave. Since the second solution that we’ve added is sourceless, there’s no need to change the current; this is simply the wave around the same wire with the same current, under different boundary conditions.

For the Lorentzian case, we need to subtract the second solution, not add it, in order to get an outgoing wave.

Linear Alternating Current, Outgoing Travelling Wave Solution
Current I0 cos(ωt) runs along the z-axis
For the Riemannian solution, k2 + ω2 = ωm2
Linear AC magnetic potential
A(3)(r) = –[I0 / 4] [Y0(kr) cos(ωt) + J0(kr) sin(ωt)] ez (Riemannian)
A(3)(r) = –[I0 / 4] [Y0(ωr) cos(ωt) – J0(ωr) sin(ωt)] ez (Lorentzian)
Linear AC, magnetic and electric fields
B(r) = –[I0 k / 4] [Y1(kr) cos(ωt) + J1(kr) sin(ωt)] eφ
E(r) = –[I0 ω / 4] [Y0(kr) sin(ωt) – J0(kr) cos(ωt)] ez (Riemannian)
B(r) = –[I0 ω / 4] [Y1(ωr) cos(ωt) – J1(ωr) sin(ωt)] eφ
E(r) = –[I0 ω / 4] [Y0(ωr) sin(ωt) + J0(ωr) cos(ωt)] ez (Lorentzian)
Linear AC, Poynting vector
S(r) = [I02 k ω / 16] [Y1(kr) cos(ωt) + J1(kr) sin(ωt)] [Y0(kr) sin(ωt) – J0(kr) cos(ωt)] er (Riemannian)
S(r) = –[I02 ω2 / 16] [Y1(ωr) cos(ωt) – J1(ωr) sin(ωt)] [Y0(ωr) sin(ωt) + J0(ωr) cos(ωt)] er (Lorentzian)
Linear AC, Poynting vector averaged over one cycle
= [I02 ω / (16 π r)] er (Common)
Linear AC, average power radiated (per unit length of wire)
= I02 ω / 8 (Common)

We can see most clearly that these are outgoing travelling waves from <S(r)>, the Poynting vector averaged over one time cycle, where it’s an obviously positive value times the unit vector pointing radially out from the wire.
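Sampling the exact Riemannian Poynting vector uniformly over one time cycle recovers the common closed form; the cross terms average away, and the Wronskian J1 Y0 – J0 Y1 = 2/(π x) removes the Bessel functions. A sketch with arbitrary sample values of I0, ωm, k and r:

```python
import numpy as np
from scipy.special import j0, j1, y0, y1

# Average the exact Riemannian Poynting vector over one time cycle by uniform
# sampling, and compare with the common closed form I0^2 w / (16 pi r).
I0, wm, k, r = 2.0, 1.0, 0.6, 3.0
w = np.sqrt(wm**2 - k**2)
t = np.linspace(0.0, 2 * np.pi / w, 20001)[:-1]    # one cycle, endpoint dropped
x = k * r
S = (I0**2 * k * w / 16) * (y1(x) * np.cos(w * t) + j1(x) * np.sin(w * t)) \
    * (y0(x) * np.sin(w * t) - j0(x) * np.cos(w * t))
S_avg = S.mean()
S_closed = I0**2 * w / (16 * np.pi * r)
```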

It might seem a bit puzzling that in the Riemannian case the angular spatial frequency k vanishes from the final results; after all, we expect the speed of these waves to be k / ω, and the density of energy flow to be that speed times the energy density. But it turns out that the energy density is inversely proportional to k, which is not hard to see when you look at the four-potential for large r, which is inversely proportional to the square root of k thanks to the asymptotic expansion of the Bessel functions:

A(r) ≈ I0 cos(kr+ωt+π/4) / [2 √(2 π kr)] ez

The analysis of energy flow in a plane wave we carried out previously then gives the same average Poynting vector from this plane wave as we derived from the precise solution.

In the Lorentzian case, the power being radiated means that work must be done to maintain the current at a fixed amplitude. In the Riemannian case, work in the conventional sense must be done by the current, to keep it from growing larger! Strange as this is, it’s exactly what we’d expect, given that energy in the electromagnetic field will have the opposite sense to kinetic and potential energy.

#### Oscillating Magnetic Dipoles

We’ll use the same method to construct the field for an oscillating magnetic dipole, based on our previous result for a static dipole. We won’t show either of the standing wave solutions, we’ll skip straight to the outgoing travelling wave.

 Oscillating Magnetic Dipole Outgoing Travelling Wave Solution Magnetic dipole moment is μ cos(ωt) For the Riemannian solution, k2 + ω2 = ωm2 Oscillating Magnetic Dipole potential A(3)(r) = [μ × r / (4 π r3)] [cos(kr+ωt) + kr sin(kr+ωt)] (Riemannian) A(3)(r) = [μ × r / (4 π r3)] [cos(ω(r–t)) + ωr sin(ω(r–t))] (Lorentzian) Oscillating Magnetic Dipole, magnetic and electric fields B(r) = [(3 (μ · r) r – r2 μ) / (4 π r5)] [cos(kr+ωt) + kr sin(kr+ωt)] – [k2 (μ × r) × r / (4 π r3)] cos(kr+ωt) E(r) = [ω μ × r / (4 π r3)] [sin(kr+ωt) – kr cos(kr+ωt)] (Riemannian) B(r) = [(3 (μ · r) r – r2 μ) / (4 π r5)] [cos(ω(r–t)) + ωr sin(ω(r–t))] – [ω2 (μ × r) × r / (4 π r3)] cos(ω(r–t)) E(r) = [ω μ × r / (4 π r3)] [ωr cos(ω(r–t)) – sin(ω(r–t))] (Lorentzian) Oscillating Magnetic DipolePoynting vector averaged over one cycle = [ k3 ω ((μ · μ) – (μ · er)2) / (32 π2 r2)] er (Riemannian) = [ ω4 ((μ · μ) – (μ · er)2) / (32 π2 r2)] er (Lorentzian) Oscillating Magnetic DipoleTotal power averaged over one cycle

= k3 ω (μ · μ) / (12 π) (Riemannian)

= ω4 (μ · μ) / (12 π) (Lorentzian)

If we look at the asymptotic form of the Riemannian four-potential for large r, we have:

A(r) ≈ [ k sin(kr+ωt) / (4 π r) ] μ × er

The polarisation is always transverse, with the four-potential pointing around the dipole axis. The magnitude is greatest perpendicular to the dipole, and drops off to zero on the axis itself. The angular distribution of the radiated power is precisely the same as in the Lorentzian case.

In the Riemannian case, the energy density averaged over a cycle is proportional to k2 ω2. Multiplied by the speed of the wave, k / ω, that gives the k3 ω frequency-dependence for the power that we see in the table, and plotted on the right.
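To see the shape of this k3 ω power law, here’s a small sketch (in natural units with ωm = 1 and μ · μ = 1, values chosen purely for illustration) comparing the Riemannian and Lorentzian dipole powers across the frequency range; the Riemannian power vanishes at both ω = 0 and ω = ωm, peaking at ω = ωm/2:

```python
import math

def riemannian_dipole_power(omega, omega_m=1.0, mu_sq=1.0):
    # <P> = k^3 omega (mu . mu) / (12 pi), with k fixed by k^2 + omega^2 = omega_m^2
    k = math.sqrt(omega_m**2 - omega**2)
    return k**3 * omega * mu_sq / (12 * math.pi)

def lorentzian_dipole_power(omega, mu_sq=1.0):
    # <P> = omega^4 (mu . mu) / (12 pi)
    return omega**4 * mu_sq / (12 * math.pi)

for omega in (0.1, 0.25, 0.5, 0.75, 0.9, 0.99):
    print(f"omega = {omega:4.2f}   Riemannian = {riemannian_dipole_power(omega):.5f}"
          f"   Lorentzian = {lorentzian_dipole_power(omega):.5f}")
```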

#### Alternating Current in a Solenoid

We can apply the same method to adapt our magnetostatic description of a solenoid carrying a steady current to one carrying an alternating current. To get a source-free magnetostatic solution, we change the factor of Y1 in the exterior part of the steady-current solenoid solution to J1, and continue the same solution all the way in to the z-axis. Combining the two standing wave solutions gives us an outgoing travelling wave solution.

Long solenoid (AC)
Outgoing Travelling Wave Solution
Solenoid has radius R, current I0 cos(ωt) and n windings per unit length.
Axis of solenoid coincides with the z-axis.
For the Riemannian solution, k2 + ω2 = ωm2
Long solenoid (AC), magnetic potential
A(3)(r) =
–½ π I0 n R J1(kr) [J1(kR) sin(ωt) + Y1(kR) cos(ωt)] eφ   (r < R)
–½ π I0 n R J1(kR) [J1(kr) sin(ωt) + Y1(kr) cos(ωt)] eφ   (r > R)
(Riemannian)
A(3)(r) =
½ π I0 n R J1(ωr) [J1(ωR) sin(ωt) – Y1(ωR) cos(ωt)] eφ   (r < R)
½ π I0 n R J1(ωR) [J1(ωr) sin(ωt) – Y1(ωr) cos(ωt)] eφ   (r > R)
(Lorentzian)
Long solenoid (AC), magnetic and electric fields
B(r) =
–½ π k I0 n R J0(kr) [J1(kR) sin(ωt) + Y1(kR) cos(ωt)] ez   (r < R)
–½ π k I0 n R J1(kR) [J0(kr) sin(ωt) + Y0(kr) cos(ωt)] ez   (r > R)
E(r) =
½ π I0 n ωR J1(kr) [J1(kR) cos(ωt) – Y1(kR) sin(ωt)] eφ   (r < R)
½ π I0 n ωR J1(kR) [J1(kr) cos(ωt) – Y1(kr) sin(ωt)] eφ   (r > R)
(Riemannian)
B(r) =
½ π I0 n ωR J0(ωr) [J1(ωR) sin(ωt) – Y1(ωR) cos(ωt)] ez   (r < R)
½ π I0 n ωR J1(ωR) [J0(ωr) sin(ωt) – Y0(ωr) cos(ωt)] ez   (r > R)
E(r) =
–½ π I0 n ωR J1(ωr) [J1(ωR) cos(ωt) + Y1(ωR) sin(ωt)] eφ   (r < R)
–½ π I0 n ωR J1(ωR) [J1(ωr) cos(ωt) + Y1(ωr) sin(ωt)] eφ   (r > R)
(Lorentzian)
Long solenoid (AC)
Poynting vector averaged over one cycle
<S(r)> =
0   (r < R)
[π I02 n2 R2 ω J1(kR)2 / (4r)] er   (r > R)
(Riemannian)
<S(r)> =
0   (r < R)
[π I02 n2 R2 ω J1(ωR)2 / (4r)] er   (r > R)
(Lorentzian)
Long solenoid (AC)
Total radiated power averaged over one cycle, for a coil of length l
<PRadiated> = ½ π2 I02 l n2 R2 ω J1(kR)2
≈ (1/8) π2 I02 l n2 R4 ωm k2 (low k limit)
(Riemannian)
<PRadiated> = ½ π2 I02 l n2 R2 ω J1(ωR)2
≈ (1/8) π2 I02 l n2 R4 ω3 (low ω limit)
(Lorentzian)
Long solenoid (AC)
Total magnetic flux within coil
Φ = –π2 I0 n R2 J1(kR) [J1(kR) sin(ωt)+Y1(kR) cos(ωt)] (Riemannian)
Φ = π2 I0 n R2 J1(ωR) [J1(ωR) sin(ωt) – Y1(ωR) cos(ωt)] (Lorentzian)
Long solenoid (AC)
Voltage across a coil of length l
V = π2 I0 l n2 R2 ω J1(kR) [Y1(kR) sin(ωt) – J1(kR) cos(ωt)]
≈ –π I0 l n2 R2 ωm sin(ωt) (low k limit)
(Riemannian)
V = π2 I0 l n2 R2 ω J1(ωR) [Y1(ωR) sin(ωt) + J1(ωR) cos(ωt)]
≈ –π I0 l n2 R2 ω sin(ωt) (low ω limit)
(Lorentzian)
Long solenoid (AC)
Average electrical power expended on a coil of length l
<PElectrical> = –½ π2 I02 l n2 R2 ω J1(kR)2
≈ –(1/8) π2 I02 l n2 R4 ωm k2 (low k limit)
(Riemannian)
<PElectrical> = ½ π2 I02 l n2 R2 ω J1(ωR)2
≈ (1/8) π2 I02 l n2 R4 ω3 (low ω limit)
(Lorentzian)

The first interesting feature of the Riemannian solution is that the spatial angular frequency k now sets the scale for the geometry of the solenoid, in place of ωm when the current is unchanging. While the direct-current behaviour of a solenoid would be extremely sensitive to any imperfections comparable to the minimum wavelength of light — and a realistic device might have a wire whose width spanned several wavelengths, so that the whole structure would include a series of negative and positive inductances that largely cancelled each other out — we now have the possibility of a much larger wavelength, and a system that’s both free of cancellations and less sensitive to the precise shape of the coil. When we treated the DC solenoid, we noted that it could possess either a positive or negative inductance, and hence it could either oppose or assist changes in current flow. However, in an AC context that distinction is less important; what matters is the power expended over a full cycle, and it’s guaranteed by our choice of an outgoing wave that the Riemannian solenoid will act as a source of electrical power, while the Lorentzian equivalent will require an expenditure of power.
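We can check the low-k limit quoted in the table against the exact J1 expression with a few lines of code (the natural-unit values below are arbitrary illustrations; the J1 power series is standard):

```python
import math

def bessel_j1(x, terms=30):
    # Standard power series for J1(x); converges quickly for the small
    # arguments used here.
    return sum((-1)**m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2)**(2 * m + 1) for m in range(terms))

def power_exact(I0, l, n, R, omega, k):
    # <P> = 1/2 pi^2 I0^2 l n^2 R^2 omega J1(kR)^2   (from the table)
    return 0.5 * math.pi**2 * I0**2 * l * n**2 * R**2 * omega * bessel_j1(k * R)**2

def power_low_k(I0, l, n, R, omega_m, k):
    # Low-k limit: (1/8) pi^2 I0^2 l n^2 R^4 omega_m k^2
    return 0.125 * math.pi**2 * I0**2 * l * n**2 * R**4 * omega_m * k**2

I0, l, n, R, omega_m = 1.0, 1.0, 1.0, 1.0, 1.0   # illustrative natural units
for k in (0.3, 0.1, 0.01):
    omega = math.sqrt(omega_m**2 - k**2)
    ratio = power_exact(I0, l, n, R, omega, k) / power_low_k(I0, l, n, R, omega_m, k)
    print(f"k = {k:5.2f}   exact / low-k limit = {ratio:.6f}")
```

As k shrinks, the ratio approaches 1, confirming the limiting form.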

The graph on the right shows the current, voltage and power for the Riemannian and Lorentzian cases, for three sizes of coil. Here J1 and Y1 are abbreviations for J1(kR) and Y1(kR) in the Riemannian case or J1(ωR) and Y1(ωR) in the Lorentzian case. The sign convention we’re using for the voltage here is such that an ordinary resistor would have a voltage exactly in phase with the current, so the power computed as the product VI is electrical energy being dissipated.

In the Lorentzian case, the voltage is never more than 90 degrees out of phase with the current. In the low-frequency DC limit, J1(ωR) Y1(ωR) ≈ –1/π to first order, and the voltage leads the current by exactly 90 degrees. If we think of an inductor at least a few millimetres across, carrying AC frequencies in the kilohertz range or less, the wavelength is vastly larger than the size of the solenoid, so that “limiting case” is actually a good approximation for a lot of common AC circuits. As the frequency becomes higher, though, Y1(ωR) eventually becomes zero, putting the voltage in phase with the current, and then positive, so that the voltage lags the current. But whatever the values of J1(ωR) and Y1(ωR), the average power dissipated over each cycle is always either positive or zero.

In the Riemannian case, the voltage is never less than 90 degrees out of phase with the current. The DC limit involves the maximum possible spatial frequency and behaviour that’s highly sensitive to the coil’s geometry. It’s only in the high (time) frequency limit that the wavelength becomes large, J1(kR) Y1(kR) ≈ –1/π, and the voltage leads the current by 90 degrees. But at all frequencies and coil sizes, the average power dissipated is negative or zero – because any field energy radiated away must be accompanied, in the Riemannian case, by an increase in conventional energy.

### Oscillating Solutions Derived From Electrostatic Ones

#### Oscillating Electric Dipoles

The trick we used to get oscillating solutions from magnetostatic ones won’t work quite so easily for electrostatic solutions. If we take a pure electrostatic potential, φES, adapt it to a new constant, k rather than ωm, and multiply it by cos(ωt), then it will solve the RVWS for a source equal to the original charge density multiplied by cos(ωt), where, as always, we have k2 + ω2 = ωm2. But it won’t satisfy the transverse condition, because the time component of the four-potential, which is now –φES cos(ωt), has a non-zero time derivative, but there are no spatial components to the four-potential with derivatives of their own that can make the divergence sum to zero.

In the case of a static electric dipole, though, there’s a fairly easy trick to get around this. The static dipole potential is just the opposite of the spatial derivative of the Coulomb potential along the dipole axis, say the z-axis. So if we make the z-component of the four-potential equal to the Coulomb potential (also adapted for the constant k rather than ωm) times ω sin(ωt), its spatial derivative in the z direction will cancel out the time derivative of the four-potential’s time component, satisfying the transverse condition. The extra term will also satisfy the RVWS with a source modified in the same way, and charge will automatically be conserved. Specifically, this adds a pointlike oscillating current to the source, ninety degrees out of phase from the oscillations in the strength of the dipole.

As before, we can build two standing waves with this approach, and then combine them to get an outgoing travelling wave. And as before, we can adapt the method to get Lorentzian solutions as well.

Oscillating Electric Dipole
Outgoing Travelling Wave Solution
Electric dipole moment is p cos(ωt)
For the Riemannian solution, k2 + ω2 = ωm2
Oscillating Electric Dipole, potentials
φ(r) = –[p · r / (4 π r3)] [cos(kr+ωt) + kr sin(kr+ωt)]
A(3)(r) = –[p ω / (4 π r)] sin(kr+ωt)
(Riemannian)
φ(r) = [p · r / (4 π r3)] [cos(ω(r–t)) + ωr sin(ω(r–t))]
A(3)(r) = [p ω / (4 π r)] sin(ω(r–t))
(Lorentzian)
Oscillating Electric Dipole, electric and magnetic fields
E(r) = –[(3 (p · r) r – r2 p) / (4 π r5)] [cos(kr+ωt) + kr sin(kr+ωt)] + [(k2 (p · r) r + ω2 r2 p) / (4 π r3)] cos(kr+ωt)
B(r) = [ω p × r / (4 π r3)] [kr cos(kr+ωt) – sin(kr+ωt)]
(Riemannian)
E(r) = [(3 (p · r) r – r2 p) / (4 π r5)] [cos(ω(r–t)) + ωr sin(ω(r–t))] – [ω2 (p × r) × r / (4 π r3)] cos(ω(r–t))
B(r) = [ω p × r / (4 π r3)] [sin(ω(r–t)) – ωr cos(ω(r–t))]
(Lorentzian)
Oscillating Electric Dipole, Poynting vector averaged over one cycle
<S(r)> = [(k ω3 (p · p) + k3 ω (p · er)2) / (32 π2 r2)] er (Riemannian)
<S(r)> = [ω4 ((p · p) – (p · er)2) / (32 π2 r2)] er (Lorentzian)
Oscillating Electric Dipole, total power averaged over one cycle

= (k3 ω + 3 k ω3) (p · p) / (24 π) (Riemannian)

= ω4 (p · p) / (12 π) (Lorentzian)

The Riemannian solution here has a somewhat different power-frequency relationship than the oscillating magnetic dipole. It also provides the first explicit source we’ve seen of longitudinally polarised waves.

We can write the Riemannian four-potential for large r as:

A(r) ≈ [sin(kr+ωt) / (4 π r)] [k (p · er) et – ω p]

We can split this into transverse and longitudinal parts:

AT(r) ≈ [sin(kr+ωt) / (4 π r)] ω [(p · er) er – p]
AL(r) ≈ [sin(kr+ωt) / (4 π r)] (p · er) [k et – ω er]

The transverse part has no time component and is orthogonal to er, the direction in space in which the wave is propagating. Using our analysis of energy in plane waves, if we write θ for the angle between the dipole vector and the direction of the wave in space, the local energy density in the transverse and longitudinal modes, averaged over one time cycle, is:

<uT(r)> ≈ [1 / (32 π2 r2)] (p · p) ω4 sin2(θ)
<uL(r)> ≈ [1 / (32 π2 r2)] (p · p) (ω2+k2) ω2 cos2(θ)

This shows us that the transverse waves are strongest perpendicular to the dipole axis, dropping to zero on the axis itself, while the longitudinal waves have the opposite pattern: strongest on the axis, dropping to zero perpendicular to it. The angular distribution of energy for the transverse waves matches that of the Lorentzian case.

If we multiply these energy densities by the speed of the wave, k / ω, and integrate over the whole sphere, we get the total power in each form:

<PT> = k ω3 (p · p) / (12 π)
<PL> = (k ω3 + k3 ω) (p · p) / (24 π)

Of course these two values add up to give the total power radiated, shown in the table.
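As a consistency check, we can integrate the two angular energy distributions over a sphere numerically, multiply by the wave speed k/ω, and recover PT and PL (natural units, with an arbitrary choice of ω and ωm for illustration):

```python
import math

def powers_by_mode(p_sq, omega, k, n=2000):
    # Integrate <u> * (k/omega) over a large sphere; the 1/r^2 in the
    # energy densities cancels the r^2 in the area element.
    speed = k / omega
    PT = PL = 0.0
    h = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * h
        d_area = 2 * math.pi * math.sin(theta) * h
        uT = p_sq * omega**4 * math.sin(theta)**2 / (32 * math.pi**2)
        uL = p_sq * (omega**2 + k**2) * omega**2 * math.cos(theta)**2 / (32 * math.pi**2)
        PT += speed * uT * d_area
        PL += speed * uL * d_area
    return PT, PL

omega_m, omega, p_sq = 1.0, 0.6, 1.0          # arbitrary natural-unit choices
k = math.sqrt(omega_m**2 - omega**2)          # k = 0.8
PT, PL = powers_by_mode(p_sq, omega, k)
print("transverse:  ", PT, "vs", k * omega**3 * p_sq / (12 * math.pi))
print("longitudinal:", PL, "vs", (k * omega**3 + k**3 * omega) * p_sq / (24 * math.pi))
```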

#### Alternating Current in a Capacitor

To look at the behaviour of our spherical capacitor with an alternating current charging and discharging the two shells, we need to fit the current somewhere into the picture. The only way we can do this without breaking the spherical symmetry is by having a symmetrically distributed current run directly from shell to shell, through the gap between them. To do this literally would obviously disrupt the functioning of the capacitor, but we can treat this model as an approximation to an arrangement where a large number of flat-plate capacitors are being charged and discharged through wires that run beside, but not actually within, the devices themselves. Arranging a large number of such circuits so they all fan out around a central point will produce fields very similar to those in our model.

We find the Riemannian four-potential due to the oscillating charge on the spheres by modifying the electrostatic solution, substituting k for ωm and multiplying by cos(ωt). Then we add in a four-potential for the current flowing back and forth between the shells, which we can find first as a magnetostatic solution with the Biot-Savart Law, and then convert to an oscillatory solution with our standard method. Charge is conserved, since the current we’ve added accounts for the oscillating charge on the shells, and so the four-potential obeys the transverse condition and gives us a valid solution. Then as usual, we need to combine this with a sourceless standing wave to get the outgoing travelling wave solution.

Because the four-potential is radially symmetrical, there is no magnetic field. In the Lorentzian case, that means there can be no radiation, while in the Riemannian case there is purely longitudinal radiation.

Spherical capacitor (AC)
Outgoing Travelling Wave Solution
Inner shell of radius R1, total charge –Q0 cos(ωt)
Outer shell of radius R2, total charge +Q0 cos(ωt)
For the Riemannian solution, k2 + ω2 = ωm2
Spherical capacitor (AC), potentials
φ(r) =
[Q0 sin(kr) / (4 π kr R1R2)] [R2 cos(kR1+ωt) – R1 cos(kR2+ωt)]   (r < R1)
[Q0 / (4 π kr R1R2)] [R2 sin(kR1) cos(kr+ωt) – R1 sin(kr) cos(kR2+ωt)]   (R1 < r < R2)
[Q0 cos(kr+ωt) / (4 π kr R1R2)] [R2 sin(kR1) – R1 sin(kR2)]   (r > R2)
A(3)(r) =
[Q0 ω (kr cos(kr) – sin(kr)) / (4 π k3r2 R1R2)] [R2 sin(kR1+ωt) – R1 sin(kR2+ωt)] er   (r < R1)
[Q0 ω / (4 π k3r2 R1R2)] [cos(ωt) (kr cos(kr) – sin(kr)) (R2 sin(kR1) – R1 sin(kR2)) + sin(ωt) (R1 cos(kR2) (sin(kr) – kr cos(kr)) – R2 sin(kR1) (kr sin(kr) + cos(kr)) + kR1R2)] er   (R1 < r < R2)
[Q0 ω (kr cos(kr+ωt) – sin(kr+ωt)) / (4 π k3r2 R1R2)] [R2 sin(kR1) – R1 sin(kR2)] er   (r > R2)
(Riemannian)
φ(r) =
Q0 (R1–R2) cos(ωt) / (4 π R1R2)   (r < R1)
Q0 (r–R2) cos(ωt) / (4 π r R2)   (R1 < r < R2)
0   (r > R2)
A(3)(r) = 0 (Lorentzian)
Spherical capacitor (AC), electric field
E(r) =
[Q0 ωm2 (kr cos(kr) – sin(kr)) / (4 π k3r2 R1R2)] [R1 cos(kR2+ωt) – R2 cos(kR1+ωt)] er   (r < R1)
[Q0 / (4 π k3 r2 R1R2)] [cos(ωt) (ωm2 (R1 cos(kR2) (kr cos(kr) – sin(kr)) + R2 sin(kR1) (kr sin(kr) + cos(kr))) – ω2 kR1R2) + ωm2 sin(ωt) (kr cos(kr) – sin(kr)) (R2 sin(kR1) – R1 sin(kR2))] er   (R1 < r < R2)
[Q0 ωm2 (kr sin(kr+ωt) + cos(kr+ωt)) / (4 π k3r2 R1R2)] [R2 sin(kR1) – R1 sin(kR2)] er   (r > R2)
(Riemannian)
E(r) =
0   (r < R1)
–Q0 cos(ωt) / (4 π r2) er   (R1 < r < R2)
0   (r > R2)
(Lorentzian)
Spherical capacitor (AC)
Poynting vector averaged over one cycle
<S(r)> =
0   (r < R1)
[Q02 ωm2 ω / (32 π2 k3r3 R12 R2)] [(r sin(kR1) – R1 sin(kr)) (R2 sin(kR1) – R1 sin(kR2))] er   (R1 < r < R2)
[Q02 ωm2 ω / (32 π2 k3r2 R12 R22)] [(R2 sin(kR1) – R1 sin(kR2))2] er   (r > R2)
(Riemannian)
<S(r)> = 0 (Lorentzian)
Spherical capacitor (AC)
Total radiated power averaged over one cycle
<PRadiated> = [Q02 ωm2 ω / (8 π k3 R12 R22)] [(R2 sin(kR1)–R1 sin(kR2))2]
≈ Q02 (R22–R12)2 ωm3 k3 / (288 π) (low k limit)
(Riemannian)
Spherical capacitor (AC), voltage between shells
V = [Q0 / (4 π k3 R12 R22)] [ ωm2 sin(ωt) (R2 sin(kR1) – R1 sin(kR2))2
– cos(ωt) (ωm2 (R22 sin(kR1) cos(kR1) + R1 cos(kR2) (R1 sin(kR2) – 2 R2 sin(kR1))) + ω2 kR1R2 (R1–R2)) ]
≈ –Q0 (R2–R1) (3 + ωm2 R1 (R2–R1)) cos(ωt) / (12 π R1 R2) (low k limit)
(Riemannian)
V = Q0 (R2–R1) cos(ωt) / (4 π R1 R2) (Lorentzian)
Spherical capacitor (AC)
Average electrical power expended
<PElectrical> = –[Q02 ωm2 ω / (8 π k3 R12 R22)] [(R2 sin(kR1)–R1 sin(kR2))2]
≈ –Q02 (R22–R12)2 ωm3 k3 / (288 π) (low k limit)
(Riemannian)
<PElectrical> = 0 (Lorentzian)

As usual, the electrical power expended in the Riemannian case is the opposite of the total power radiated, so work needs to be extracted from the circuit to keep the peak amplitude of the oscillating current unchanged. The exact relationship between the voltage/current phase difference and the frequency of the oscillations will be complicated, but the fact that there is always a negative (or at worst, zero) power expenditure by the circuit means that the voltage will always be at least 90 degrees out of phase with the current.

In the Lorentzian case, because the geometry prevents any radiative loss, the voltage and current will always be precisely 90 degrees out of phase.
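The low-k limit of the radiated power can be checked against the exact expression in the table (the shell radii and natural-unit values below are arbitrary illustrations):

```python
import math

def capacitor_power_exact(Q0, R1, R2, omega_m, k):
    # <P> = Q0^2 w_m^2 w (R2 sin(k R1) - R1 sin(k R2))^2 / (8 pi k^3 R1^2 R2^2)
    omega = math.sqrt(omega_m**2 - k**2)
    S = R2 * math.sin(k * R1) - R1 * math.sin(k * R2)
    return Q0**2 * omega_m**2 * omega * S**2 / (8 * math.pi * k**3 * R1**2 * R2**2)

def capacitor_power_low_k(Q0, R1, R2, omega_m, k):
    # Low-k limit: Q0^2 (R2^2 - R1^2)^2 w_m^3 k^3 / (288 pi)
    return Q0**2 * (R2**2 - R1**2)**2 * omega_m**3 * k**3 / (288 * math.pi)

Q0, R1, R2, omega_m = 1.0, 1.0, 1.5, 1.0      # arbitrary natural-unit values
for k in (0.2, 0.05, 0.01):
    ratio = (capacitor_power_exact(Q0, R1, R2, omega_m, k)
             / capacitor_power_low_k(Q0, R1, R2, omega_m, k))
    print(f"k = {k:5.2f}   exact / low-k limit = {ratio:.6f}")
```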

### Resonant Circuits

#### Resonance in Lorentzian Circuits

In basic circuit theory as applied in our own universe, it’s usually assumed that capacitors and inductors have fixed values of capacitance and inductance that are independent of the frequency of the current passing through them. This is a reasonable assumption, because in the Lorentzian universe moderate time frequencies correspond to wavelengths much larger than the dimensions of typical electronic components.

But that’s not to say that the behaviour of a circuit containing these devices is itself independent of frequency. For a capacitor, what the capacitance C fixes is the ratio between the charge stored in the device and the voltage across the plates, but when we look at the relationship between voltage and current, rather than charge, the frequency of the current enters into the relationship through the derivative of the oscillating charge. In what follows, we will write Q0, I0 and V0 for the amplitude of an oscillating charge, current or voltage whose instantaneous value follows a harmonic wave.

Q = Q0 sin(ωt)
V = Q / C = [Q0 / C] sin(ωt)
I = dQ/dt = [ω Q0] cos(ωt)
V0 = I0 / (ωC)

With an inductor, L fixes the ratio between voltage and the rate of change of current, so we have:

I = I0 cos(ωt)
dI/dt = [–ω I0] sin(ωt)
V = L dI/dt = [–Lω I0] sin(ωt)
V0 = (Lω) I0

If we define the capacitative reactance XC and inductive reactance XL as follows:

XC = 1 / (ωC)
XL = ωL

then XC and XL play a role analogous to resistance, with:

V0 = I0 R, for a resistor
V0 = I0 XC, for a capacitor
V0 = I0 XL, for an inductor.

However, the instantaneous values of the voltages are different in these three cases: for a resistor the voltage is in phase with the current, for a capacitor the voltage lags the current by 90 degrees (it’s a positive multiple of sine, if the current is a cosine), and for an inductor the voltage leads the current by 90 degrees (it’s a negative multiple of sine, if the current is a cosine). This means that if all three devices are connected in series, and so the same current is flowing through all of them, the voltage across the capacitor will be 180 degrees out of phase with that across the inductor, which is to say it will precisely oppose it. So the net reactance:

X = XLXC

dictates the combined voltage for those two components, 90 degrees out of phase with the current.

Next, we define the impedance, which includes the effect of resistance, and the overall phase difference φ:

Z = √(R2 + X2)
φ = arctan(X / R)
cos(φ) = R / Z
sin(φ) = X / Z

These two quantities let us describe the combined voltage across the three devices, the capacitor, inductor and resistor wired in series:

V = R I0 cos(ωt) – X I0 sin(ωt)
= I0 Z [cos(φ) cos(ωt) – sin(φ) sin(ωt)]
= I0 Z cos(ωt+φ)

Clearly the impedance will be at a minimum when X is zero. If we call the angular frequency when this happens ωres, then:

X = XLXC = 0
ωresL – 1 / (ωresC) = 0
ωres = 1 / √(LC)

At the resonant frequency ωres, the inductance and capacitance cancel each other exactly, and the amplitude of the current, I0, hits a peak that is determined solely by the resistance.
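The algebra above is easy to confirm numerically; the following sketch (with arbitrary illustrative values of R, X and I0) checks that the phasor form I0 Z cos(ωt+φ) reproduces the sum of the in-phase and quadrature voltages:

```python
import math

R, X, I0 = 50.0, 120.0, 2.0        # arbitrary illustrative values
Z = math.hypot(R, X)               # Z = sqrt(R^2 + X^2)
phi = math.atan2(X, R)             # phi = arctan(X / R) for R > 0

def series_voltage(wt):
    # In-phase (resistive) plus quadrature (reactive) contributions.
    return R * I0 * math.cos(wt) - X * I0 * math.sin(wt)

def phasor_voltage(wt):
    return I0 * Z * math.cos(wt + phi)

for wt in (0.0, 0.7, 1.9, 3.1):
    print(f"wt = {wt:3.1f}   {series_voltage(wt):+9.4f}   {phasor_voltage(wt):+9.4f}")
```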

To give an example, suppose we have a solenoid 10 cm long, 5 cm in radius, and with one turn every millimetre. In SI units, its DC inductance will be 0.987 milliHenries.

In series with this we add a spherical capacitor, with inner radius 5 cm and outer radius 5.01 cm. Its DC capacitance will be 2.787 nanoFarads.

We connect the solenoid and the capacitor in series, along with a 1000 ohm resistor. Our formula gives us the angular frequency of the resonance, which corresponds to an ordinary frequency of ν=95.96 kiloHertz.
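These numbers can be reproduced with the standard ideal-solenoid and spherical-capacitor formulas (L = μ0 n2 π R2 l and C = 4 π ε0 R1R2 / (R2–R1), in SI units):

```python
import math

mu0 = 4e-7 * math.pi            # vacuum permeability (SI)
eps0 = 8.8541878128e-12         # vacuum permittivity (SI)

# Solenoid: 10 cm long, 5 cm radius, one turn per millimetre (n = 1000 / m).
l, R, n = 0.10, 0.05, 1000.0
L = mu0 * n**2 * math.pi * R**2 * l                 # ideal long-solenoid inductance

# Spherical capacitor: inner radius 5 cm, outer radius 5.01 cm.
R1, R2 = 0.05, 0.0501
C = 4 * math.pi * eps0 * R1 * R2 / (R2 - R1)

omega_res = 1 / math.sqrt(L * C)
nu_res = omega_res / (2 * math.pi)
print(f"L = {L * 1e3:.3f} mH   C = {C * 1e9:.3f} nF   resonance = {nu_res / 1e3:.2f} kHz")
```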

The plot shows the current that will flow through these three components for a given voltage as the frequency is varied; the frequency scale is logarithmic, and the vertical axis has been normalised so that the current through the 1000 ohm resistor alone would give a value of 1. Just as we’d expect, there’s a peak around 10⁵ Hertz. There will be other resonances that aren’t accounted for in the approximation where we treat the inductance and capacitance as frequency-independent, but they won’t appear until the GHz range, where the wavelength starts to approach the dimensions of the solenoid.

Now, suppose that instead of connecting our three components to an oscillating voltage, we charge up the capacitor to a charge of Qi and then just close the circuit, allowing current to flow through it. What happens?

The sum of all the voltages around the circuit must be zero:

VC + VL + VR = 0
Q / C + L dI/dt + R I = 0
Q / C + L d2Q/dt2 + R dQ/dt = 0
d2Q/dt2 + 2 β dQ/dt + ωres2 Q = 0

where we have defined β=R/(2L). Given Q=Qi and dQ/dt = 0 at t=0, and assuming β < ωres, this differential equation has the solution:

Q(t) = Qi exp(–βt) [cos(√(ωres2 – β2) t) + (β / √(ωres2 – β2)) sin(√(ωres2 – β2) t)]

This describes an oscillating function undergoing an exponential decay. The frequency of the oscillations will be less than the resonant frequency ωres at which the circuit responds with the least impedance to a driving voltage, though as the resistance is reduced the oscillations will approach that frequency.
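We can verify that this closed-form ringdown satisfies the circuit equation by integrating the ODE numerically and comparing (the component values below are arbitrary, chosen so that β < ωres):

```python
import math

# Illustrative component values chosen so that beta < omega_res.
L, C, R, Qi = 1.0e-3, 1.0e-6, 5.0, 1.0
omega_res = 1 / math.sqrt(L * C)
beta = R / (2 * L)
omega_d = math.sqrt(omega_res**2 - beta**2)    # damped oscillation frequency

def q_exact(t):
    # The closed-form solution quoted above.
    return Qi * math.exp(-beta * t) * (math.cos(omega_d * t)
            + (beta / omega_d) * math.sin(omega_d * t))

def q_simulated(t_end, steps=100000):
    # RK4 integration of Q'' + 2 beta Q' + omega_res^2 Q = 0.
    q, dq = Qi, 0.0                            # Q = Qi, dQ/dt = 0 at t = 0
    h = t_end / steps
    f = lambda q, dq: (dq, -2 * beta * dq - omega_res**2 * q)
    for _ in range(steps):
        k1 = f(q, dq)
        k2 = f(q + h / 2 * k1[0], dq + h / 2 * k1[1])
        k3 = f(q + h / 2 * k2[0], dq + h / 2 * k2[1])
        k4 = f(q + h * k3[0], dq + h * k3[1])
        q += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dq += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return q

t = 3.0e-3
print(q_exact(t), q_simulated(t))
```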

#### Resonance in Riemannian Circuits

The concepts we discussed in the previous section can be adapted to analogous situations in the Riemannian universe, but there are some very significant changes. The first is that it will very rarely be reasonable to assume that L and C themselves are independent of ω. In the Riemannian universe wavelengths are at their minimum for static fields, and only become larger with increasing time frequencies. The increase in wavelength comes late, and then occurs very abruptly; the wavelength isn’t double the minimum until ω = 0.866 ωm, hits ten times the minimum at ω = 0.995 ωm, and a hundred times at ω = 0.99995 ωm. So an inductor or capacitor in a circuit operating at any but the very highest frequencies will have a current-voltage relationship dictated by the interaction of the field’s wavelength with the geometry of the component, and hence dependent on the frequency in a far more complex fashion than the reactance-frequency formulas we’ve given above for the Lorentzian case.
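The wavelength ratios quoted here follow from inverting λ/λmin = ωm/k = 1/√(1 – (ω/ωm)2); a few lines of code confirm them:

```python
import math

def omega_for_wavelength_ratio(ratio):
    # Invert lambda/lambda_min = 1 / sqrt(1 - (omega/omega_m)^2);
    # returns omega as a fraction of omega_m.
    return math.sqrt(1 - 1 / ratio**2)

for ratio in (2, 10, 100):
    print(f"wavelength = {ratio:3d} x minimum at omega = "
          f"{omega_for_wavelength_ratio(ratio):.5f} omega_m")
```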

Furthermore, the electromagnetic radiation from Riemannian inductors and capacitors will give them a significant frequency-dependent negative resistance. This puts a frequency-dependent term into the resistance part of the impedance and the phase difference:

X = XLXC
Z = √(Rtot2 + X2)
φ = arctan(X / Rtot)

where Rtot combines the ordinary resistance R with the frequency-dependent (negative) radiative resistance, so everything here except R itself is now frequency-dependent (and even treating R as constant is a simplifying assumption). So we can no longer guarantee that the frequency at which X = 0 will give us the minimum impedance, Z. Nevertheless, these extra complications mean that even a very simple circuit can have interesting behaviour. Suppose we have a solenoid, identical to the one we described in the previous section: 10 cm long, 5 cm in radius, and with one turn every millimetre. To apply Riemannian physics to it, we will assume a value for ωm of 2π × 10¹⁵ Hz.

The reactance and resistance of the solenoid are plotted in the diagram on the right. Note that all the wavelengths here correspond to time frequencies extremely close to ωm. Because the reactance crosses zero for the solenoid alone, there is no need to add a capacitor to the circuit; if we wired up this solenoid with an ordinary resistor that balanced the solenoid’s negative resistance at the longest of the wavelengths where its reactance was zero, a closed circuit containing just those two components would resonate at that wavelength, in principle sustaining the current indefinitely. The solenoid would emit electromagnetic waves, bringing ordinary energy into the circuit, and the resistor would turn that energy into heat. This would not violate any of the laws of thermodynamics: energy is conserved, because electromagnetic field energy has the opposite sense to thermal/kinetic energy, and entropy increases, because of the radiation produced.

Amazingly enough, there is even a degree of stability built into the behaviour of this extremely simple circuit. If the current began to increase exponentially, that would entail its frequency spreading out, and although the resonance point isn’t quite at the wavelength of minimum resistance, the difference in time frequency here is so tiny that it would only take a very small growth constant in the exponential to spread out the frequency sufficiently to lower the rate at which the solenoid was feeding energy into the circuit. Of course the same effect would exacerbate the damping of the current if it began to drop, so it would require an additional regulatory mechanism (such as a non-linear resistor, with a resistance that increased at higher currents) to keep the circuit harvesting energy at a constant rate.

### Electromagnetism in Curved Riemannian Space

So far, everything we’ve said about electromagnetism has been expressed in terms of Cartesian coordinates in flat space (or in the Lorentzian case, flat space-time). But since we don’t actually expect the Riemannian universe to be perfectly flat, any more than our own universe, it will be helpful to understand how the equations can be reformulated to work in curved space. This will have the added benefit of allowing us to deal easily with non-Cartesian coordinates in flat space.

If you haven’t done any calculations in curved space-time before, the quick summary that follows might be bewildering. For a much gentler introduction, try this article on the basics of general relativity.

In general-relativistic Lorentzian physics, when converting an equation from flat space-time to curved space-time, the rule of thumb is to convert partial derivatives to covariant derivatives. When we take a derivative of a vector field in flat space, we are implicitly treating vectors at different points as belonging to the same vector space; if we say a vector field has zero derivative, and hence is constant, that claim really only makes sense if we can take a vector at point A and compare it with another vector at point B. But on the curved surface of the Earth, say, how do we compare the vector space of possible velocities across the ground in London with the same kind of vector space in Nairobi? Even if we step away from the Earth’s surface and think of these vectors as three-dimensional, that doesn’t let us match up all the velocities at one location with velocities at another — and in a curved universe, we can’t “step away” at all.

The resolution involves supplementing the idea of a derivative to include a geometrical structure known as the Levi-Civita connection, which gives us a notion of parallel transport of vectors along a curve: that is, if we travel along a curve, we can “carry” a vector from the start of the curve along with us, keeping it “parallel” with its original direction, according to the connection. The Levi-Civita connection has the virtue of being compatible with the metric; the metric defines a dot product on curved space, and the Levi-Civita connection lets you parallel-transport two vectors while preserving their dot product. The covariant derivative computes the derivative of a vector field relative to the Levi-Civita connection: if you parallel-transport a vector along a curve using the Levi-Civita connection, that is the standard that says “this vector is unchanging” against which any change is identified by the covariant derivative.

To make this concrete, suppose we have a vector field v on a curved space, with components in some coordinate basis of vb. Then the covariant derivative of this vector field in one of the coordinate directions, a, is given by:

a vb = ∂a vb + Γbca vc

where Γ is the Levi-Civita connection, telling us how to correct the partial derivative to produce a derivative that respects parallel transport. If gab and gab are the components of the metric in our coordinate system, the Levi-Civita connection Γ has components (often referred to as Christoffel symbols):

Γbca = ½ gbk [ ∂agkc + ∂cgka – ∂kgca]

Note that Γ is symmetric in its last two indices: Γbca = Γbac.
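The Christoffel formula is easy to exercise on a familiar example. The sketch below (our own illustration, using the flat plane in polar coordinates with metric g = diag(1, r2)) evaluates Γ from numerical derivatives of the metric and recovers the textbook values Γrφφ = –r and Γφrφ = 1/r:

```python
# Christoffel symbols from Gamma^b_ca = 1/2 g^{bk} (d_a g_kc + d_c g_ka - d_k g_ca),
# with the metric derivatives taken numerically. Coordinates: (r, phi).

def metric(r, phi):
    return [[1.0, 0.0], [0.0, r * r]]           # g_ab for the flat plane, polar coords

def inverse_metric(r, phi):
    return [[1.0, 0.0], [0.0, 1.0 / (r * r)]]   # g^ab (the metric is diagonal)

def d_metric(i, r, phi, h=1e-6):
    # Central-difference partial derivative of g_ab in direction i (0 = r, 1 = phi).
    p = [r, phi]; p[i] += h
    gp = metric(*p)
    p[i] -= 2 * h
    gm = metric(*p)
    return [[(gp[a][b] - gm[a][b]) / (2 * h) for b in range(2)] for a in range(2)]

def christoffel(r, phi):
    ginv = inverse_metric(r, phi)
    dg = [d_metric(i, r, phi) for i in range(2)]
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    for b in range(2):
        for c in range(2):
            for a in range(2):
                G[b][c][a] = 0.5 * sum(
                    ginv[b][k] * (dg[a][k][c] + dg[c][k][a] - dg[k][c][a])
                    for k in range(2))
    return G   # G[b][c][a] = Gamma^b_ca

G = christoffel(2.0, 0.3)
print("Gamma^r_phiphi =", G[0][1][1])   # expect -r = -2
print("Gamma^phi_rphi =", G[1][0][1])   # expect 1/r = 0.5
```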

We can extend the idea of parallel transport from vectors to any kind of tensor. For example, if we parallel transport the vectors v and w from point A to point B with the Levi-Civita connection, obtaining v' and w' at B, then parallel transport of rank-(2,0) tensors from A to B is defined so that vw at A becomes v' ⊗ w' at B. For dual vectors, we require that if a dual vector α at point A has α(v) = c, parallel transport of α from A to B yields α' such that α'(v') = c.

These requirements give us the following formulas for the covariant derivatives of the kind of tensors we’ll need:

a Ab = ∂a Ab – Γhba Ah
a Fbc = ∂aFbc – Γhba Fhc – Γhca Fbh
a Fbc = ∂aFbc + Γbha Fhc + Γcha Fbh

Applying the second of these equations to the metric, and making use of the definition of Γ, gives us ∇a gbc = 0. Essentially our definition of Γ has been chosen to get this result: Γ is the connection with respect to which the metric itself is judged to be constant.

If we replace the partial derivatives in our equations of electromagnetism with covariant derivatives, we obtain the following:

Riemannian Proca Equation in Curved Space
∇b Fab – ωm2 Aa – ja = 0 (Riemannian)
∂a Fbc + ∂b Fca + ∂c Fab = 0 (Common)
Maxwell’s Equations in Curved Spacetime
∇b Fab – ja = 0 (Lorentzian)
∂a Fbc + ∂b Fca + ∂c Fab = 0 (Common)

Why are there still partial derivatives rather than covariant derivatives in the common equation shared by Riemannian and Lorentzian electromagnetism? If we write out the equation with covariant derivatives and use the fact that Fbc is antisymmetric while Γ is symmetric in its last two indices, all the correction terms cancel each other out, and we’re left with just the partial derivatives.

In the relationship between the electromagnetic field F and the four-potential A, the correction terms for the covariant derivative again cancel out.

Field From Four-Potential
Fab = ∇a Ab – ∇b Aa = ∂a Ab – ∂b Aa (Common)

It follows that the common equation in the Riemannian Proca and the Maxwell Equations will again be satisfied merely by defining F in terms of A this way, since nothing has changed and exactly the same partial derivatives appear as in the flat space-time case.

Now, the next step is where things get a little tricky. In flat space or space-time, partial derivatives commute: if you take two derivatives, it doesn’t matter which order you do it in. This is not the case for covariant derivatives in curved space, and indeed the whole idea of curvature is tied up with the fact that covariant derivatives don’t commute.

Suppose we take the covariant derivative of a vector field v along two different coordinate directions, indexed by a and b, in both orders. The difference between the two is given by:

∇a∇b v – ∇b∇a v = Rhcab vc eh

where eh is the basis vector in the coordinate direction indexed by h, and the four-index tensor R is what’s known as the Riemann curvature tensor (named, of course, after the same Georg Friedrich Bernhard Riemann as we’ve been referring to all along, though this tensor is just as useful in Lorentzian curved space-time as in Riemannian curved space). By explicitly calculating the covariant derivatives in terms of the Levi-Civita connection, we can express the components of the Riemann curvature tensor as:

Rhcab = ∂a Γhcb – ∂b Γhca + Γhka Γkbc – Γhkb Γkca

Now, suppose the four-potential A satisfies a covariant-derivative version of the transverse condition or the Lorenz gauge condition:

b Ab = 0

where as usual we’re using the Einstein Summation Convention on repeated indices. Then the expression ∇b∇a Ab would be zero if covariant derivatives commuted ... but they don’t commute, so instead we have:

∇ba Ab = ∇ba Ab – ∇ab Ab = Rbcba Ac = Rca Ac

where the two-index tensor R, known as the Ricci curvature tensor, is found by “contracting” the Riemann curvature tensor, that is summing over two of its indices.

If we make use of this result to evaluate ∇b Fab — which appears in both the Riemannian Proca equation and Maxwell’s Equations — in terms of the four-potential A, we get:

∇b Fab
= gαa gβb ∇b Fαβ
= gαa gβbb (∇α Aβ – ∇β Aα)
= gαa gβb (∇bα Aβ – ∇bβ Aα)
= gαa (∇bα Ab – ∇bb Aα)
= gαa (Rcα Ac – ∇bb Aα)
= Rca Ac – ∇bb Aa

We can now express everything in terms of the four-potential A:

 Riemannian Vector Wave Equations in Curved Space ∇b ∇b Aa – Rca Ac + ωm2 Aa + ja = 0 (RVWS) ∇c Ac = 0 (Transverse)

 Maxwell’s Equations for Four-Potential in Lorenz Gauge in Curved Space-time ∇b ∇b Aa – Rca Ac + ja = 0 (LVWS) ∇c Ac = 0 (Lorenz)

While we’re on the subject of wave equations in curved space, we can also give a modified scalar wave equation. A covariant derivative of a scalar is just the partial derivative in the same direction, and the gradient of a scalar can be defined without reference to the metric or the Levi-Civita connection. However, the sum of the second derivatives in all the coordinate directions that appears in the Riemannian Scalar Wave equation will only be independent of the coordinate system if we compute it, in the general case, as the divergence of the gradient, using the covariant derivative:

div grad A = gij (∂i ∂j A – Γkji ∂k A)

This operation, which is a generalisation of the Laplacian, is known as the Laplace-Beltrami operator. When the metric only has components on the diagonal, which is true for many coordinate systems, it’s very easy to compute the determinant of the metric as the product of those diagonal entries. If we write |g| for the absolute value of the determinant of the metric, it can be shown that an alternative expression for the Laplace-Beltrami operator is:

div grad A = (1/√|g|) ∂i [(√|g|) gij ∂j A]

which is easier to use than going to the trouble of computing the Christoffel symbols. Even if you haven’t encountered this equation before, if you stare at it long enough you’ll probably recognise it as lying behind the formulas you’ve seen for the Laplacian in spherical or cylindrical coordinates.
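To make that connection concrete, here is an illustrative SymPy sketch (using ordinary 3D spherical coordinates rather than this article's 4D case) confirming that the √|g| form of the Laplace-Beltrami operator reproduces the familiar spherical Laplacian:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
coords = [r, th, ph]
# Diagonal metric components for 3D spherical coordinates
g_diag = [1, r**2, (r*sp.sin(th))**2]
sqrtg = r**2*sp.sin(th)   # square root of the metric determinant

A = sp.Function('A')(r, th, ph)

# div grad A = (1/sqrt|g|) d_i [ sqrt|g| g^{ij} d_j A ]
lap = sum(sp.diff(sqrtg/g_diag[i]*sp.diff(A, coords[i]), coords[i])
          for i in range(3)) / sqrtg

# The textbook Laplacian in spherical coordinates
known = (sp.diff(r**2*sp.diff(A, r), r)/r**2
         + sp.diff(sp.sin(th)*sp.diff(A, th), th)/(r**2*sp.sin(th))
         + sp.diff(A, ph, 2)/(r*sp.sin(th))**2)

assert sp.simplify(lap - known) == 0
```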

 Riemannian Scalar Wave Equation With Source, in Curved Space div grad A + ωm2 A + j = 0 (RSWS)

 Lorentzian Scalar Wave Equation With Source, in Curved Space-time div grad A + j = 0 (LSWS)

Further reading: Sections 16.3 and 22.4 of Gravitation by Charles Misner, Kip Thorne and John Wheeler, W.H. Freeman, San Francisco, 1973.

### Boundary Conditions

As we mentioned when first discussing the Riemannian wave equations, there is a serious problem with these equations: they allow for solutions that have an angular frequency higher than ωm in one direction, along with exponential change in another. The nice, well-behaved plane waves from which we derived the Riemannian Scalar Wave Equation have the sum of the squares of their frequencies in the four dimensions equal to a fixed total, νmax2, and so none of those individual frequencies can exceed νmax, but the equation itself can’t rule out solutions with an exponential factor, such as cos (kx) exp(αt), which will satisfy the RSW equation so long as k2 – α2 = ωm2.

If you’ve read the first volume of Orthogonal you’ll know how these exponential solutions can be avoided. If you haven’t read the book but have read this far into these notes despite the spoiler warnings, this is your last chance to decide not to read on.

If the Riemannian universe is finite but has no boundary, the requirement that solutions of the wave equations are continuous, and possess continuous derivatives, will rule out solutions with an exponential factor. While a cyclic function can, by its very nature, join up smoothly with itself when followed around a closed curve, an exponential function can’t do that. (Things become a bit more subtle when we go from a free wave in the vacuum to a field with a source, and we’ll look at some examples of that in the following sections.)

So far we’ve mostly been treating the Riemannian universe as an infinite, perfectly flat four-space, while noting that this is just an approximation, akin to the useful approximation of the Lorentzian universe as flat Minkowski space-time. In the same spirit, we can look at two idealised models of the Riemannian universe which are finite, but which still make simplifying assumptions about the curvature. In one of these models, the 4-torus, the Riemannian universe remains perfectly flat. In the other model, the 4-sphere, the universe has a constant, positive curvature.

#### The 4-torus

Suppose we take a region of flat four-space in the shape of a rectangular hyperprism. We put coordinates (x, y, z, t) on this region that range from –Lx/2 to Lx/2, –Ly/2 to Ly/2, –Lz/2 to Lz/2 and –Lt/2 to Lt/2. Then we declare that all eight of the three-dimensional hyperfaces of this hyperprism are “glued” to the opposite face. For example, all points (x, y, z, –Lt/2) are identified with the corresponding points (x, y, z, Lt/2). This is the four-dimensional equivalent of taking a rectangle in the plane and identifying its opposite edges to make a torus.

We should stress, though, that the whole four-space remains perfectly flat; we are not “rolling up” the hyperprism in any higher-dimensional space, we are just decreeing that this model of the Riemannian universe is finite in all directions, and that its topology takes the form we have described, which is known as a 4-torus. Our choice of topology doesn’t require the curvature of the four-space to be zero everywhere, but it certainly allows it.

In what follows, we will call this model universe T4. We will take it as given that the whole four-space is flat, and that we’ve chosen coordinates like those described above. There is, of course, nothing physically special about the choice of origin or the points where the coordinates jump from Li/2 to –Li/2, and any solution to the equations of Riemannian physics that we find using our original coordinates will still be valid if we translate everything by an arbitrary displacement vector. However, the boundary conditions imposed by the shape of the T4 universe are not rotationally symmetrical, so if we take a solution and apply an arbitrary rotation, it will no longer satisfy those boundary conditions.

Any sufficiently well-behaved scalar function A(x, y, z, t) on T4 can be written as a Fourier series:

A(x, y, z, t) = Σi, j, k, l ai, j, k, l fi, j, k, l(x, y, z, t)

where the sum is over all integer values (positive, negative and zero) for i, j, k, l, and:

fi, j, k, l(x, y, z, t) = fi(x / Lx) fj(y / Ly) fk(z / Lz) fl(t / Lt)
fn(u) = sin(2 π n u),   n > 0
fn(u) = cos(2 π n u),   n < 0
f0(u) = 1/√2

We will refer to the functions fi, j, k, l as the Fourier basis functions for T4. With the integral over T4 as the inner product between functions:

<f, g> = ∫T4 fg

the different basis functions are orthogonal to each other, and they all have the same squared norm: V / 16, where V = Lx Ly Lz Lt is the total 4-volume of T4.

Each basis function is a standing wave that undergoes |i|, |j|, |k| and |l| cycles, respectively, in the x, y, z and t directions, around the entire width of the universe. Given a function A(x, y, z, t), we can explicitly compute the Fourier coefficients ai, j, k, l as follows:

ai, j, k, l = (16 / V) ∫T4 fi, j, k, l(x, y, z, t) A(x, y, z, t)

Now we’d like to know which, if any, of the fi, j, k, l satisfy the sourceless Riemannian Scalar Wave equation. Applying that differential equation to a Fourier basis function, we get the algebraic equation:

(i / Lx)2 + (j / Ly)2 + (k / Lz)2 + (l / Lt)2 = νmax2

where νmax = ωm / (2 π). If the Li and νmax are just randomly chosen numbers, no integer values for i, j, k, l will satisfy this equation. So we have two possibilities to consider: the generic case, where there are no solutions to the sourceless RSW equation, and the special case, where the Li and νmax have values that allow some solutions to exist.

##### Special Case Allowing Sourceless Solutions

To give an example of the special case, suppose all the Li = 1, and νmax = √90. Then any integers i, j, k, l whose sum of squares is 90 will provide a Fourier basis function fi, j, k, l that satisfies the RSW equation. There are 1872 such quadruples of integers, if we count all the permutations and choices of positive and negative signs, but they can all be derived from these nine equations:

02+02+32+92 = 90
02+12+52+82 = 90
02+42+52+72 = 90
12+22+22+92 = 90
12+22+62+72 = 90
12+32+42+82 = 90
22+52+52+62 = 90
32+32+62+62 = 90
32+42+42+72 = 90
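The count of 1872 quadruples quoted above is easy to confirm by brute force; this short Python sketch (an illustration, not part of the original notes) enumerates every signed, ordered quadruple:

```python
# Count all integer quadruples (i, j, k, l) with i^2 + j^2 + k^2 + l^2 = 90,
# including every choice of sign and every ordering.
target = 90
bound = 9  # 10^2 > 90, so no index can exceed 9 in magnitude
count = sum(1 for i in range(-bound, bound + 1)
              for j in range(-bound, bound + 1)
              for k in range(-bound, bound + 1)
              for l in range(-bound, bound + 1)
            if i*i + j*j + k*k + l*l == target)
assert count == 1872
```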

In a Riemannian universe where the ratio between the size of the universe and the minimum wavelength of light was comparable to, say, the size of our observable universe measured in wavelengths of far ultraviolet light, or about 1034, the number of solutions for suitable choices of Li and νmax would be extremely large. We won’t go into the number theory involved in counting the solutions (see Mathworld’s Sum of Squares function page for a taste of that), but it’s intuitively plausible that on a cosmic scale the number of discrete solutions could easily be so large as to appear continuous. In other words, although sourceless plane waves in such a Riemannian universe could only have a finite number of specific propagation vectors, the actual choices would be so numerous as to look like a continuum that included all directions.

Since the sourceless solutions are all built from a finite number of Fourier basis functions, they will be smooth and finite everywhere. None of their directional frequencies can exceed νmax, and they could equally well be written as a superposition of a finite number of plane waves, which is how we originally envisioned constructing general solutions to the wave equation.

What can we say about solutions to the scalar wave equation with a source, which we’ll call H?

 ∂x2A(x) + ∂y2A(x) + ∂z2A(x) + ∂t2A(x) + ωm2 A(x) + H(x) = 0 (RSWS)

If we Fourier-expand both the function A with coefficients a and the scalar source H with coefficients h, we have:

ai, j, k, l [(i / Lx)2 + (j / Ly)2 + (k / Lz)2 + (l / Lt)2 – νmax2] = hi, j, k, l / (4 π2)

For those values of i, j, k, l that satisfy the sourceless equation — making the expression in square brackets zero — the source’s Fourier coefficient hi, j, k, l must be zero in order for a solution to exist at all, while ai, j, k, l can be chosen freely. For all other values, we solve the equation above to obtain:

ai, j, k, l = hi, j, k, l / [(4 π2) ((i / Lx)2 + (j / Ly)2 + (k / Lz)2 + (l / Lt)2 – νmax2)]

So the source will determine all the coefficients that do not correspond to sourceless solutions, and then we’re free to add any additional, sourceless solution we wish.

##### Generic Case With No Sourceless Solutions

For generic values of Li and νmax, none of the Fourier basis functions will solve the sourceless Riemannian Scalar Wave equation. In this case, there are no Fourier components of the source that are required to be zero, and we can always use:

ai, j, k, l = hi, j, k, l / [(4 π2) ((i / Lx)2 + (j / Ly)2 + (k / Lz)2 + (l / Lt)2 – νmax2)]

to obtain a solution, assuming the Fourier series converges.

##### Planar Charge

To give a very simple example, suppose we have a motionless planar sheet of unit charge density that bisects the T4 Riemannian universe, lying in the yz-plane. The source for the time component of the four-potential is then a one-dimensional Dirac delta function in the x coordinate. Since everything will be a function of x alone, we will drop the other three dimensions from the Fourier coefficient subscripts and integrals, and we’ll simply write L for Lx.

The non-zero Fourier coefficients of the source are then:

h0 = (√2)/L
hi = 2/L,   i < 0

This precise source will only be possible if νmax L is not an integer, so we’ll assume that’s the case. The non-zero Fourier coefficients of the solution for the time component of the four-potential are then:

a0 = –1 / [2(√2) π2 L νmax2]
ai = L / [2 π2 (i2 – L2 νmax2)],   i < 0

Rather than attempting to explicitly sum the Fourier series, we will find the solution by another method. By using the symmetry of the problem and the Riemannian version of Gauss’s Law, we can easily establish that the four-potential associated with a unit planar charge when there are no boundary conditions imposed is:

At, src = –sin(ωm |x|) / (2 ωm)

But there is also a sourceless solution with the same symmetry that we’re free to add in any multiple we wish:

At, nsrc = cos(ωm x) / (2 ωm)

Both functions are even in x (i.e. they have the same value at ±x for all x), so any solution will be continuous at x = ±L/2. But an even function has opposite derivatives at ±x, so the solution can only meet itself at x = ±L/2 smoothly if the derivative there is zero. By adjusting the constant C in the general solution At, src + C At, nsrc we can ensure a derivative of zero at x = ±L/2. The result simplifies to:

At, bc = –cos(π νmax (L – 2|x|)) / [4 π νmax sin(π νmax L)]

The Fourier coefficients of At, bc are precisely those we’ve already written above, so the two methods are in agreement. What this solution describes is a phase shift in the potential that allows it to wrap around the universe smoothly, while still having just the right discontinuity on the planar charge to satisfy Gauss’s Law there.
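The agreement between the two methods can also be checked numerically; this Python sketch (with illustrative values L = 1 and νmax = 0.25, chosen so that νmax L is not an integer) compares a partial Fourier sum built from the coefficients above with the closed form At, bc:

```python
import math

L = 1.0
nu = 0.25  # nu_max; nu*L deliberately not an integer

def fourier_At(x, nterms=20000):
    # a0 coefficient, times f0 = 1/sqrt(2)
    a0 = -1.0 / (2*math.sqrt(2)*math.pi**2*L*nu**2)
    total = a0 / math.sqrt(2)
    # Negative indices correspond to cosine basis functions
    for n in range(1, nterms):
        an = L / (2*math.pi**2*(n*n - L*L*nu*nu))
        total += an * math.cos(2*math.pi*n*x/L)
    return total

def closed_At(x):
    return (-math.cos(math.pi*nu*(L - 2*abs(x)))
            / (4*math.pi*nu*math.sin(math.pi*nu*L)))

for x in (0.0, 0.1, 0.3):
    assert abs(fourier_At(x) - closed_At(x)) < 1e-4
```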

##### Linear Charge

Suppose we have a motionless line of unit charge density located on the z-axis of the T4 Riemannian universe. The source for the time component of the four-potential will be a Dirac delta function in the x and y coordinates. We’ll drop the z and t coordinates from the Fourier coefficients, and for simplicity we’ll assume Lx = Ly = L. The non-zero Fourier coefficients of the source are:

h0, 0 = 2 / L2
hi, 0 = h0, i = (2√2) / L2,    i < 0
hi, j = 4 / L2,    i, j < 0

This source will only be possible if L2 νmax2 is not a sum of squares of two integers, so we’ll assume that it’s not. The non-zero Fourier coefficients of the solution for the time component of the four-potential are:

a0, 0 = –1 / [2 π2 L2 νmax2]
ai, 0 = a0, i = 1 / [(√2) π2 (i2 – L2 νmax2)],    i < 0
ai, j = 1 / [π2 (i2 + j2 – L2 νmax2)],    i, j < 0

It’s possible to explicitly evaluate the sum over one index and reduce the Fourier series to a sum over the other index. We can’t get a closed form for the whole expression, but halving the number of indices makes the result much easier to work with numerically.

At = Σj≤0 fj(0) βj(x / L) fj(y / L)
βj(u) = cosh(αj (1 – 2|u|)) / [2 αj sinh(αj)]
αj = π √(j2 – L2 νmax2)

Note that αj will be imaginary at first — until j / L exceeds νmax — and while it’s imaginary, the functions βj(u) will be oscillatory, since the cosh of an imaginary number ix is simply the cosine of x.

Once αj is real, the βj(u) decrease monotonically from a positive maximum at u = 0 to a minimum (also positive) at u = 1/2, which corresponds to the point half a universe away from the source. The drop isn’t literally an exponential decay — since exponential decay never flattens out to a minimum — but it’s very similar. So these non-oscillatory terms decay rapidly with distance from the source.
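A quick numerical illustration (not from the original notes) confirms this behaviour of βj for a real αj, chosen here as an arbitrary example value:

```python
import math

alpha = 2.5  # illustrative value, corresponding to j/L > nu_max

def beta(u):
    # beta_j(u) = cosh(alpha_j (1 - 2|u|)) / (2 alpha_j sinh(alpha_j))
    return math.cosh(alpha*(1 - 2*abs(u))) / (2*alpha*math.sinh(alpha))

# Sample from the source (u = 0) to half a universe away (u = 1/2)
vals = [beta(u/100) for u in range(0, 51)]

assert all(v > 0 for v in vals)                     # always positive
assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly decreasing
```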

The diagrams on the right show the contours of zero potential in a plane perpendicular to the line of charge, demonstrating how the shape of the field is distorted by the boundary conditions. [Since non-zero contours aren’t shown, there is no information here about the field strength — the contours’ spacing here is basically just the wavelength.] The top image shows the entire universe, for a choice of parameters where L is just a few wavelengths, and the effect is very pronounced. The bottom image shows a region of the same size (in wavelengths), but in this case it is only a small portion of a universe that is a thousand times wider, and the field is already beginning to grow more radially symmetrical close to the charge. So, although it’s interesting to see how the field loses radial symmetry in order to satisfy the boundary conditions, in a realistically-sized universe — at least 1030 or so wavelengths wide — these effects aren’t likely to be empirically detectable.

##### Linear Alternating Current

The original motivation for introducing these boundary conditions was to avoid exponential blow-ups in high-frequency waves. We’ve seen that if sourceless waves can exist at all in the T4 universe, then they are guaranteed not to exceed the notional maximum frequency that appears in the wave equation. So an obvious question to ask is: what happens in the T4 universe if we have some kind of source that oscillates at a frequency greater than the maximum? The simplest kind of source to analyse is a linear alternating current. If the current runs along the z-axis of the T4 Riemannian universe, and oscillates with a frequency lAC / Lt for some integer lAC, then both the source and the solution will share a single Fourier component in the time direction and we can factor that out and deal with the spatial dependence of the solution in an almost identical fashion to the previous problem. The difference is that a constant term, lAC2, will be added to the sum of squared indices, which previously had only the term j2. As before, for the sake of simplicity we’ll assume that the universe has the same width, L, in all directions (including our chosen time direction). We then have:

Az = cos(2 π lAC t / L) Σj≤0 fj(0) βj(x / L) fj(y / L)
βj(u) = cosh(αj (1 – 2|u|)) / [2 αj sinh(αj)]
αj = π √(j2 + lAC2 – L2 νmax2)

If the frequency of the current’s oscillations, lAC / L, exceeds νmax, the expression inside the square root in the definition of αj will always be positive, so αj will be real for all j. As we discussed in the previous section, when αj is real the functions βj(u) drop away in a manner very similar to exponential decay, while flattening out to reach a derivative of zero half-way across the universe.

We can’t produce a closed expression for the infinite sum over j, but the diagram shows the sum of a large number of terms. It’s apparent that a high-frequency source will be accompanied by a field that is only significant very close to the source itself, dropping off far more rapidly with distance than the radiation field around a linear alternating current with a frequency less than νmax.

##### Cauchy Data and Predictions

In our universe, where light in a vacuum is governed by a Lorentzian wave equation, if we know both the value and the time derivative of the electromagnetic field throughout a region R of space at some instant in time, t0, we can predict the value of the field some way into the future. Of course, electromagnetic waves can always enter the region from the sides, so as time moves on from t0 the region where we can make predictions will shrink at the speed of light, but in principle there will be a certain, definite portion of space-time where our initial data lets us predict what the field will be.

This kind of data — the value of a function and its time derivative, throughout a region of space at a particular moment — is known as Cauchy data. It’s in the nature of Lorentzian wave equations — which are second-order hyperbolic differential equations — that we can use Cauchy data to obtain their solution some way into the future.

Another example of a hyperbolic equation where we can make use of Cauchy data would be the wave equation for small displacements of an elastic string. Suppose the string is finite and anchored at both ends. Then if we know the displacement of the string and the time derivative of the displacement, along the entire string at some instant in time, then in principle we can predict the entire future of the string’s motion. What’s more, even if our knowledge was limited to just part of the string, since the waves it carried would have a certain maximum speed, cmax, we could still confidently make predictions about a region of the string that gradually shrank down from the portion about which we had data, with the ends being nibbled away at the rate cmax.

In contrast to this, the Riemannian wave equations are elliptic differential equations. To solve an elliptic differential equation in some region, we usually need data about the value of the solution on the entire boundary of the region. Examples of elliptic differential equations in our own universe involve regions of space, rather than of space-time. For instance, the equilibrium temperature reached in a solid material obeys an elliptic differential equation — Laplace’s equation — and to determine the temperature throughout some region of the material, we generally need to know the temperature on the entire boundary of that region. Being told the temperature on, say, one face of an iron cube — along with the temperature’s derivative in the direction pointing into the cube from that face, giving us Cauchy data — is not a reliable way to compute the temperature throughout the cube.

For example, suppose the opposite face of the cube to the one where we have data is covered in a pattern of closely spaced stripes of alternating high and low temperature. Our data might then describe an extremely weak, washed-out version of those stripes. The progression of temperature from our face to the opposite face will involve an exponential rise in the temperature difference, which will amplify enormously any imprecision in our data, to the point where just having our washed-out stripes and their derivative provides a very poor guide to the exact values the temperature reaches on the other face. But if instead we were supplied with the temperature on every face of the cube, interpolating the temperature distribution within the region that satisfied Laplace’s equation would be a much more reliable process.

In an infinite Riemannian universe, the problem of making predictions for the Riemannian wave equation from Cauchy data would be as difficult as trying to compute the temperature in a cube from data on just one face. Given that the equation is elliptic, we might conclude that we could only make postdictions about its solutions: gathering data about both the initial values of the field in some region of space and the final values after some interval of time had passed, along with data about what happened during that interval on the borders of the region, and then using all that information on the boundary of the relevant portion of four-space to compute the time course of the field in the region’s interior, after the fact. Such a situation would allow the laws of physics to be tested, but it would make it very hard to anticipate and prepare for the future.

But in a finite Riemannian universe such as T4, the situation isn’t so bad. For sourceless waves in T4, there are only a finite number of Fourier basis functions that can contribute to the total wave, so if we are able to determine the coefficients for all of them, we will know the entire history of the wave. If we include a source — which itself ought to obey an equation of the same general form — then the problem becomes more complex, but the principle is the same.

For simplicity, let’s work with a sourceless scalar wave. Suppose we know the value and the time derivative of the wave, throughout all of space at one moment in time. We will choose coordinates so that the moment of time for which we have data is t=0, and of course “time” can be any of the four directions in which the torus can be circumnavigated.

Suppose some Fourier basis function fi, j, k, l satisfies the sourceless wave equation. If l ≤ 0, then the time-dependent factor of this function, fl(t / Lt), will be a cosine or a constant function, and hence non-zero at t=0, so we can identify the coefficient of fi, j, k, l simply by performing a three-dimensional Fourier analysis of our data for t=0. If l > 0, the time-dependent factor will be a sine, so it will be zero at t=0. But its time derivative will be non-zero at t=0, and so we can identify its coefficient from a Fourier analysis of the time derivative data we have for t=0. So between the data and its time derivative, we can compute the coefficient of every basis function that contributes to a sourceless wave, which will allow us to compute the value of that wave at any time, future or past.
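The essence of this argument can be sketched in one (time) dimension with a toy Python example (purely illustrative): a single mode c1 cos(2πlt) + c2 sin(2πlt) is fully pinned down by its value and time derivative at t = 0.

```python
import math

l = 3                     # mode number (an arbitrary example)
c_cos, c_sin = 0.7, -1.2  # the coefficients we hope to recover

def w(t):
    return c_cos*math.cos(2*math.pi*l*t) + c_sin*math.sin(2*math.pi*l*t)

def wdot(t, h=1e-6):
    # Numerical time derivative, standing in for measured Cauchy data
    return (w(t + h) - w(t - h)) / (2*h)

rec_cos = w(0.0)                      # cosine coefficient: value at t = 0
rec_sin = wdot(0.0) / (2*math.pi*l)   # sine coefficient: derivative at t = 0

assert abs(rec_cos - c_cos) < 1e-6
assert abs(rec_sin - c_sin) < 1e-4
```

In the full problem the same recovery is done mode by mode, with a three-dimensional Fourier analysis of the t=0 data taking the place of simply reading off the value.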

Now, of course it’s absurd to expect anyone in the Riemannian universe to have information about the electromagnetic field across the entire universe. But then, when we make predictions in our own universe about what will happen over the next five minutes, we never have perfect information about our surroundings out to a distance of five light-minutes (about 90 million kilometres). Yet we’ve managed to test scientific theories, and to predict the future well enough to survive, so far. The fact is, we live in a sufficiently orderly and calm time and place that we can usually assume that the most important sources of electromagnetic radiation around us are nearby objects like the sun, whose behaviour is well-known and fairly predictable. That the laws of physics allow sudden, massive inflows of radiation from unknown sources that would take us completely by surprise hasn’t ruined our ability to do science or plan for the future.

In Max Tegmark’s classic paper, “Is ‘the Theory of Everything’ Merely the Ultimate Ensemble Theory?”, Tegmark suggests that the elliptic partial differential equations governing a universe with no timelike dimensions would render it impossible to make predictions, and hence very difficult for what he calls “self-aware substructures” to function effectively. But in a finite Riemannian universe, if there are regions where the local environment is relatively calm and orderly — the kind of conditions that our own evolution and thriving have relied upon — then the strict need for Cauchy data spanning the whole universe in order to make predictions will no more be the determining factor governing what life can achieve than the strict need in our own universe to have Cauchy data for a region 90 million kilometres in radius in order to know what will happen in the next five minutes.

#### The 4-sphere

The boundary of a solid hypersphere in five-dimensional space is a finite, borderless four-dimensional space known as the 4-sphere, or S4. A 4-sphere need not be embedded in any higher-dimensional space, and it need not have uniform curvature, but for the sake of simplicity we’ll consider a Riemannian universe with this topology that does have all the geometric properties that a 4-sphere embedded in flat five-dimensional space would inherit from that space. If we take the radius of the hypersphere to be R, that fixes the total 4-volume at:

V = (8/3) π2 R4

and fixes the maximum length of any geodesic within the 4-sphere to 2 π R. The Ricci scalar curvature — which measures the degree to which the volume of a solid ball within the space grows less rapidly with increasing radius than it would in Euclidean space — is 12 / R2 at every point.

The nice thing about S4 as a model universe is that it is more symmetrical than T4. If we look at the symmetries of S4 that leave a point fixed, they are exactly the same group, O(4), as applies in Euclidean four-space. And in place of translations of Euclidean space, we simply extend the group to O(5).

The cost of this is that we have to deal with a curved four-space. Unlike T4, it’s impossible for a space with the topology of S4 to be perfectly flat everywhere. Why? The Euler characteristic of Sn for even n is always 2 (this can be proved quite simply by counting the parts of a hypercube). The Generalised Gauss-Bonnet Theorem equates an integral of a function relating to the curvature of the space to the Euler characteristic, and if the curvature were zero, that integral would be zero — contradicting the known value of the Euler characteristic.

We can put a form of polar coordinates on S4, with four angular coordinates:

0 ≤ ξ ≤ π
0 ≤ ψ ≤ π
0 ≤ θ ≤ π
0 ≤ φ ≤ 2π

which parameterise a point on the 4-sphere of radius R in flat five-dimensional space as:

(R cos(ξ), R sin(ξ) cos(ψ), R sin(ξ) sin(ψ) cos(θ), R sin(ξ) sin(ψ) sin(θ) cos(φ), R sin(ξ) sin(ψ) sin(θ) sin(φ))

In terms of these coordinates, the metric is diagonal, with non-zero components:

gξξ = R2
gψψ = R2 sin(ξ)2
gθθ = R2 sin(ξ)2 sin(ψ)2
gφφ = R2 sin(ξ)2 sin(ψ)2 sin(θ)2

giving us the square root of the determinant of the metric as:

√|g| = R4 sin(ξ)3 sin(ψ)2 sin(θ)

In much the same fashion as a scalar function on T4 can be written as a Fourier series, a well-behaved scalar function on S4 can be expanded as a sum of four-dimensional spherical harmonics:

Yj, k, lm(ξ, ψ, θ, φ) = Φm(φ) Θmj(θ) Ψjk(ψ) Ξkl(ξ) / R4
Φm(φ) = sin(mφ)/√π,   m > 0
Φm(φ) = cos(mφ)/√π,   m < 0
Φ0(φ) = 1/√(2π)
Θmj(θ) = √[(j+½) (j–|m|)! / (j+|m|)!] P|m|j(cos θ)
Ψjk(ψ) = √[(k+1) (k+j+1)! / (k–j)!] P–j–½k+½(cos ψ) / [√sin(ψ)]
Ξkl(ξ) = √[(l+3/2) (l–k)! / (l+k+2)!] Pk+1l+1(cos ξ) / sin(ξ)

Here P is an associated Legendre function of the first kind. The indices m, j, k, l are integers, with the following constraints:

0 ≤ |m| ≤ j ≤ k ≤ l

The function Φm(φ) has a simple trigonometric form, but what do the functions of the other three coordinates look like? They all follow much the same pattern: when their upper index is at its highest possible value, they range from zero when the coordinate is zero, to a single maximum or minimum at π/2, then back to zero when the coordinate reaches π. As the value of that index drops, they gain one more extremum as they go from zero back to zero. When the index reaches zero, the count of extrema is incremented as always, but this time the function is no longer zero at the endpoints of its range.

The total number of these four-dimensional spherical harmonics, for a given l, can be found from the constraints on the other indices to be:

N(l)=(l+1)(l+2)(2l+3) / 6
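This formula follows by summing over the index constraints, and a brute-force enumeration (an illustrative sketch, not part of the original notes) confirms it:

```python
def count_harmonics(l):
    # Enumerate integer indices with 0 <= |m| <= j <= k <= l
    return sum(1 for k in range(l + 1)
                 for j in range(k + 1)
                 for m in range(-j, j + 1))

for l in range(20):
    assert count_harmonics(l) == (l + 1)*(l + 2)*(2*l + 3)//6
```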

All the spherical harmonics with different indices are orthogonal to each other, i.e. their products integrated over S4, weighted by the volume √|g|, are zero. We’ve included factors here that also ensure that the integral of each harmonic squared is one.

Which spherical harmonics satisfy the sourceless Riemannian Scalar Wave Equation on the sphere, which we derived in the section on curved space? It’s not hard to show that the spherical harmonics are eigenfunctions of the Laplace-Beltrami operator, with:

div grad Yj, k, lm = [–l(l+3) / R2] Yj, k, lm
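For the simplest nontrivial case this eigenvalue relation is easy to verify symbolically; the sketch below (an illustration, not part of the original notes) checks the l = 1 harmonic that depends only on ξ, which is proportional to cos(ξ):

```python
import sympy as sp

xi, R = sp.symbols('xi R', positive=True)

def lap_S4(A):
    # For a function of xi alone on the 4-sphere, the sqrt|g| form of the
    # Laplace-Beltrami operator reduces to this (sqrt|g| is prop. to sin(xi)^3,
    # and g^{xi xi} = 1/R^2).
    return sp.simplify(sp.diff(sp.sin(xi)**3 * sp.diff(A, xi), xi)
                       / (R**2 * sp.sin(xi)**3))

# l = 1: cos(xi) should be an eigenfunction with eigenvalue -l(l+3)/R^2 = -4/R^2
A = sp.cos(xi)
assert sp.simplify(lap_S4(A) + 4*A/R**2) == 0
```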

So if ωm2 R2 is an integer of the form l(l+3), then all N(l) spherical harmonics for that value of l will satisfy the sourceless equation. If ωm2 R2 is not an integer of that form, then there will be no sourceless solutions. So we have a situation very similar to that on T4, where generic values for the maximum frequency and the size of the universe will not permit sourceless solutions, but if the geometry permits sourceless solutions to exist at all, they will be constructed from a finite number of modes. Here, we can count the modes very easily, without worrying about any of the number-theoretic subtleties required to do so for T4. Since there are N(l) modes if ωm2 R2 = l(l+3), for large l we have:

l ≈ ωm R
N(l) ≈ N(ωm R) ≈ (ωm R)3 / 3

This will be a very large number, of course, in any universe whose scale is even roughly comparable to our own observable universe. But in fact, the symmetry of S4 means that if there are sourceless solutions at all, there are solutions that look locally like plane waves with literally any propagation vector, rather than a large but discrete set of choices. For an observer located at ξ=ψ=θ=π/2, and any value for their φ coordinate, consider the harmonic Yl, l, ll, where l here is the specific integer such that l(l+3) = ωm2 R2. The observer will be at a very wide, flat extremum for the functions of ξ, ψ and θ, while the function will vary in the φ direction as cos(l φ) ≈ cos(ωm R φ), which will look locally just like a plane wave of the kind we’ve described for Euclidean four-space. But for any choice of the observer’s location and any choice of propagation vector, we can simply pick coordinates that meet the conditions we’ve described, and construct the same solution in those coordinates.

If we consider the RSW equation with a source H, and we write the spherical harmonic coefficients of the source as hj, k, lm and those of the solution A as aj, k, lm, we have:

aj, k, lm [l(l+3) – ωm2 R2] = hj, k, lm R2

If there is an l such that l(l+3) = ωm2 R2, the source cannot contain any spherical harmonics with that value for l, and the solution is free to contain those harmonics in any amounts. For other values of l, the source’s coefficient fixes the solution’s coefficient:

aj, k, lm = hj, k, lm R2 / [l(l+3) – ωm2 R2]
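
Solving the RSW equation on S4 is then just a matter of dividing coefficients mode by mode. A minimal sketch of this (the function name and the sample values are hypothetical):

```python
def solve_mode(h, l, wmR, R=1.0):
    """Solution coefficient a_{j,k,l}^m for a source coefficient h_{j,k,l}^m,
    from a [l(l+3) - ω_m² R²] = h R².  Here wmR stands for ω_m R."""
    denom = l * (l + 3) - wmR ** 2
    if denom == 0:
        # Resonant degree: the source must contain no harmonics with this l,
        # and the solution may contain them in any amount.
        raise ValueError("resonant degree l: coefficient undetermined")
    return h * R ** 2 / denom
```

For example, with ωm2 R2 = 30.25 (which is not of the form l(l+3)), solve_mode(1.0, 4, 5.5) gives 1/(28 − 30.25) ≈ −0.444.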

##### A Green’s Function for the 4-Sphere

Suppose we have a point-like, delta function blip of source on S4. What is the solution associated with that source? We’d expect it to look a bit like the Green’s function we previously found for a momentary blip of charge on Euclidean four-space, which was proportional to Y1(ωm s) / s, where Y1 is a Bessel function of the second kind.

To keep things simple, we’ll confine ourselves to the scalar wave equation. If we place the source at a pole of our coordinate system, where ξ=ψ=θ=0 and φ is undefined (just as longitude is undefined at the Earth’s north and south poles), then the only spherical harmonic coefficients that will be non-zero will have m = j = k = 0, since all other values make the harmonic Yj, k, lm equal to zero at the pole. The non-zero coefficients are then:

h0, 0, l0 = – √[(l+1)(l+2)(2l+3)] / (4 π)
a0, 0, l0 = – R2 √[(l+1)(l+2)(2l+3)] / [(4 π) (l(l+3) – ωm2 R2)]

We are assuming that there is no integer l such that l(l+3) = ωm2 R2. The solution is:

A(ξ) = –R2 Σl=0∞ (2l+3) P1l+1(cos ξ) / [(8 π2) (l(l+3) – ωm2 R2) sin(ξ)]

The diagram on the right shows a numerical approximation to this sum. The function behaves much as we’d expect for the first half of its domain, oscillating and declining with distance from the source, but then undergoes a disconcerting resurgence as it approaches the opposite pole. This is an artifact of the symmetry of S4; in a less homogeneous space with the same topology the effect would be much less prominent.
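
A numerical approximation of this kind can be sketched in a few lines of Python, computing P1l+1 by the standard three-term recurrence. The truncation level and the sample value ωm R = 5.5 are arbitrary choices here, and the Condon–Shortley sign convention for P1n is an assumption, so the overall sign may differ from the convention used above:

```python
import math

def assoc_legendre_P1(nmax, x):
    """P^1_n(x) for n = 0..nmax via the three-term recurrence
    n P^1_{n+1} = (2n+1) x P^1_n - (n+1) P^1_{n-1},
    with P^1_1(x) = -sqrt(1 - x^2) (Condon-Shortley phase)."""
    P = [0.0] * (nmax + 1)
    if nmax >= 1:
        P[1] = -math.sqrt(max(0.0, 1.0 - x * x))
    for n in range(1, nmax):
        P[n + 1] = ((2 * n + 1) * x * P[n] - (n + 1) * P[n - 1]) / n
    return P

def green_S4(xi, wmR, R=1.0, lmax=4000):
    """Partial sum for the scalar Green's function A(ξ) on S4:
    A(ξ) = -R²/(8π²) Σ_l (2l+3) P^1_{l+1}(cos ξ) / [(l(l+3) - (ω_m R)²) sin ξ].
    Only valid away from the poles, where sin ξ ≠ 0."""
    P = assoc_legendre_P1(lmax + 1, math.cos(xi))
    s = 0.0
    for l in range(lmax + 1):
        s += (2 * l + 3) * P[l + 1] / (l * (l + 3) - wmR ** 2)
    return -R ** 2 * s / (8 * math.pi ** 2 * math.sin(xi))

# e.g. the field a quarter of the way around the sphere, for ω_m R = 5.5:
print(green_S4(math.pi / 2, wmR=5.5))
```

The series is only conditionally convergent, so the truncated sum converges slowly; plotting it over 0 < ξ < π reproduces the oscillation, decline and antipodal resurgence described above.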

It’s worth noting that even with this perfect symmetry, the field at the antipodal point is finite, unlike that at the source itself. The sum for ξ=π can be computed explicitly:

A(π) = –R2 (ωm2 R2 + 2) / (16 π cos((π/2) √[4 ωm2 R2 + 9]))

The cosine in the denominator could only be zero if √[4 ωm2 R2 + 9] were an odd integer, 2l+3. Since 4 l(l+3) + 9 = (2l+3)2, that happens precisely when ωm2 R2 = l(l+3), violating our assumption, so the field at the antipodal point is indeed finite.
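
The cosine vanishes exactly when √[4 ωm2 R2 + 9] is an odd integer 2l+3, i.e. when ωm2 R2 = l(l+3). A quick check of this, with an arbitrary non-resonant sample value:

```python
import math

# 4·l(l+3) + 9 = (2l+3)², so √[4 ω_m² R² + 9] is an odd integer
# exactly when ω_m² R² has the excluded form l(l+3).
for l in range(100):
    assert 4 * l * (l + 3) + 9 == (2 * l + 3) ** 2

# A non-resonant sample value (arbitrary choice): ω_m² R² = 12.5,
# which lies between l=2 (10) and l=3 (18).
x = 12.5
cosine = math.cos((math.pi / 2) * math.sqrt(4 * x + 9))
print(cosine)  # non-zero, so A(π) is finite
```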

##### Cauchy Data and Predictions

For the T4 universe, we found that if we had Cauchy data across the width of the universe at a single instant of time (where the time axis could be any of the four coordinates that wrapped around the 4-torus), we could determine the values of the finite number of coefficients of a free wave, and thus reconstruct its history for all time.

For S4, we can do the same thing with Cauchy data on any “great 3-sphere”, i.e. any 3-sphere of radius R. Assuming the geometry allows sourceless waves, all the spherical harmonics Yj, k, lm(ξ, ψ, θ, φ) that satisfy the sourceless wave equation will share the same value of l. We choose a coordinate system in which ξ=π/2 on the 3-sphere for which we have data. For those harmonics that reach a maximum or minimum at ξ=π/2, we can find their coefficients from the field’s value on the 3-sphere, while those harmonics that are zero there will have maxima or minima in their derivatives in the ξ direction, and we can find their coefficients from the field’s derivative. So from Cauchy data on the 3-sphere, we can reconstruct the entire history of the solution.

What if we have data on a smaller 3-sphere, which we could describe as a hypersurface ξ=ξ0 for some ξ0 < π/2? So long as we actually know the value of ξ0, the factors Ξkl(ξ) and ∂ξΞkl(ξ) will be known quantities on the 3-sphere (and they will never both be zero at once), so in principle we should always be able to compute all the coefficients of the solution.

This leads to the curious observation that in principle we could reconstruct the entire solution from Cauchy data on even the smallest 3-sphere. After all, such a 3-sphere is a boundary of two finite regions: its interior as normally construed, and also the rest of the S4 universe, just as the Arctic Circle is a boundary for the region around the north pole and also for the remainder of the Earth’s surface. But in practice, for ξ much less than π/2 the values of Ξkl(ξ) become extremely small compared to the values at π/2, and also the peaks of the other factors in the harmonics become increasingly close, to the point where extrapolating outwards from a small 3-sphere to the whole universe would demand a prohibitive degree of accuracy in the data.

### References

 Classical Electrodynamics by John David Jackson, John Wiley & Sons, 1999. Section 12.7 gives the Lagrangian for ordinary electromagnetism, and Section 12.8 gives a Lagrangian for Lorentzian Proca electrodynamics. (Note that Jackson uses different units than those we’ve adopted, and also a (+ – – –) signature for the Lorentzian metric, so it takes some care to compare these formulas.)

 Gravitation by Charles Misner, Kip Thorne and John Wheeler, W.H. Freeman, San Francisco, 1973. Section 21.3.

created Wednesday, 6 April 2011