The surface of a hypersphere in 5 dimensions can be described by the equation:
x^{2} + y^{2} + z^{2} + u^{2} + w^{2}  =  R^{2}  (1) 
where x, y, z, u, w are the 5 spatial coordinates, and the origin of the coordinate system lies at the centre of the hypersphere.
Suppose this hypersphere is rotating as a rigid body. In general (in any number of dimensions) the velocity v of any point of a rotating body is given by the product of the body’s angular velocity matrix, Ω, with the vector for the point in question, r.
v  =  Ω r  (2) 
The angular velocity matrix must be antisymmetric: Ω_{ij} = –Ω_{ji}. To see this, note that Ω is the derivative with respect to time of the linear transformation that takes any point from its original position at t = 0 to its rotated position; let’s call the matrix for this transformation M(t). Since the motion is rigid, if we take two basis vectors, e_{i} and e_{j}, and rotate them with M(t), the dot product of the resulting vectors (which measures the angle between them) must remain constant, and so the rate of change of this with respect to time must be zero.
d/dt [(M(t) e_{i}) · (M(t) e_{j})]  =  0  (3a) 
d/dt [(M(t)_{ki}e_{k}) · (M(t)_{rj}e_{r})]  =  0  (3b) 
d/dt [M(t)_{ki}M(t)_{kj}]  =  0  (3c) 
M(t)_{ki}Ω(t)_{kj} + Ω(t)_{ki}M(t)_{kj}  =  0  (3d) 
Ω(0)_{ij} + Ω(0)_{ji}  =  0  (3e) 
Ω_{ij}  =  –Ω_{ji}  (3f) 
Here we’ve used the Einstein summation convention of summing over all values of repeated indices, such as k and r. To get from (3d) to (3e), note that M(0) is just the identity matrix. In writing (3f) we’ve dropped any dependence on time from the angular velocity matrix, since we’re assuming that the body isn’t subject to external forces, leaving Ω constant.
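The argument above is easy to check numerically. The sketch below (an illustration, not part of the derivation) builds a one-parameter family of rigid rotations M(t) with two illustrative rates, differentiates it at t = 0, and confirms that the result is antisymmetric:

```python
import numpy as np

def M(t, w1=0.7, w2=0.3):
    """Block-diagonal 5x5 rotation: angle w1*t in the x-y plane, w2*t in z-u."""
    m = np.eye(5)
    c1, s1 = np.cos(w1 * t), np.sin(w1 * t)
    c2, s2 = np.cos(w2 * t), np.sin(w2 * t)
    m[0:2, 0:2] = [[c1, s1], [-s1, c1]]
    m[2:4, 2:4] = [[c2, s2], [-s2, c2]]
    return m

h = 1e-6
Omega = (M(h) - M(-h)) / (2 * h)     # central-difference dM/dt at t = 0
assert np.allclose(Omega, -Omega.T)  # antisymmetric, as Eqn (3f) requires
```

The nonzero entries of the resulting Omega are (up to rounding) just the two rates w1 and w2, with opposite signs across the diagonal.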
In 5 dimensions, a completely general 5×5 antisymmetric matrix Ω will have 10 independent parameters, but it’s always possible to choose a basis that reduces it to the “canonical form”:
Ω  =
⎡    0     ω_{1}     0       0      0 ⎤
⎢ –ω_{1}     0       0       0      0 ⎥
⎢    0       0       0     ω_{2}    0 ⎥
⎢    0       0    –ω_{2}     0      0 ⎥
⎣    0       0       0       0      0 ⎦
  (4) 
Here, the x and y coordinates have been chosen to lie in one plane of rotation, the z and u coordinates in the other, and the w coordinate lies perpendicular to both planes. To see why it’s always possible to choose a basis that puts the matrix in this form, first note that the determinant of any antisymmetric N × N matrix must be zero when N is odd. This is because det Ω = det Ω^{T} = det(–Ω) = (–1)^{N} det Ω, yielding det Ω = –(det Ω) for N odd. This means that there must be at least one nonzero vector in the null space of Ω. Choosing this as the direction for the w coordinate, the axis of rotation, makes the last row and the last column of Ω all zeros, and reduces the problem to 4 dimensions. The 4-dimensional case is described in detail below.
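A quick numerical illustration of this determinant fact (the random matrix here is just an example, not anything from the derivation): antisymmetrise a generic 5 × 5 matrix and its determinant vanishes up to rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
K = X - X.T                  # a generic 5x5 antisymmetric matrix
print(np.linalg.det(K))      # effectively zero (rounding-level)
```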
Multiplying the vector for a general point, r = (x,y,z,u,w), by this canonical matrix for Ω yields a velocity vector of:
v  =  (ω_{1}y, –ω_{1}x, ω_{2}u, –ω_{2}z, 0)  (5) 
So, for any point with x = y = z = u = 0, the velocity is zero. The set of points that meet this condition is just the w axis, the body’s axis of rotation. This intersects the surface of the hypersphere at w = ±R, giving two poles.
In the physically possible (but cosmologically unlikely) case that ω_{2} = 0, merely setting x = y = 0 is enough to make the velocity zero, and since the other three coordinates are free to take any values, they trace out a 3-dimensional volume. This volume intersects the hypersphere to form a single connected “pole”, the 2-sphere z^{2} + u^{2} + w^{2} = R^{2}.
The two equatorial circles are {z = u = w = 0; x^{2} + y^{2} = R^{2}}, where the speed at which the surface is moving is ω_{1}R, and {x = y = w = 0; z^{2} + u^{2} = R^{2}}, where the speed is ω_{2}R.
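These claims can be verified directly from Eqn (5). A short sketch, with illustrative values for ω_{1}, ω_{2} and R:

```python
import numpy as np

w1, w2, R = 0.7, 0.3, 2.0
Omega = np.zeros((5, 5))
Omega[0, 1], Omega[1, 0] = w1, -w1   # x-y plane of rotation
Omega[2, 3], Omega[3, 2] = w2, -w2   # z-u plane of rotation

pole = np.array([0, 0, 0, 0, R])       # on the w axis
equator1 = np.array([R, 0, 0, 0, 0])   # on the x^2 + y^2 = R^2 circle
equator2 = np.array([0, 0, R, 0, 0])   # on the z^2 + u^2 = R^2 circle

print(np.linalg.norm(Omega @ pole))      # 0.0: the poles are stationary
print(np.linalg.norm(Omega @ equator1))  # 1.4 = w1*R
print(np.linalg.norm(Omega @ equator2))  # 0.6 = w2*R
```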
In 4 dimensions, a completely general angular velocity matrix is described by 6 parameters:
A  =
⎡  0    a    b    c ⎤
⎢ –a    0    d    e ⎥
⎢ –b   –d    0    f ⎥
⎣ –c   –e   –f    0 ⎦
  (6) 
It’s always possible to reorient the coordinate system in such a way that A is converted to the canonical form. One method of doing this is to find the eigenvectors of AA, the matrix product of A with itself. This is a real symmetric matrix, and hence it must have 4 orthogonal eigenvectors; it turns out that they come in two pairs, with eigenvalues –ω_{1}^{2} and –ω_{2}^{2}. This makes sense geometrically: applying A to any vector that lies in one of the planes of rotation simply rotates that vector by 90° and multiplies it by the appropriate ω, so applying A twice reverses the vector’s direction and multiplies it by ω^{2}. So each of these pairs of eigenvectors spans one of the planes of rotation.
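This eigenvector method is easy to try numerically. In the sketch below the six components a..f are arbitrary illustrative values, labelled in row-major order along the upper triangle (an assumption of this example):

```python
import numpy as np

a, b, c, d, e, f = 1.0, 0.5, -0.3, 0.2, 0.8, -0.6
A = np.array([[ 0,  a,  b,  c],
              [-a,  0,  d,  e],
              [-b, -d,  0,  f],
              [-c, -e, -f,  0]])

# AA is real symmetric, so eigh applies; its eigenvalues should come in
# two pairs, -w1^2 and -w2^2.
vals, vecs = np.linalg.eigh(A @ A)
print(np.round(vals, 4))
```

With these values the four eigenvalues do pair up, and each pair of eigenvectors spans one plane of rotation.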
Another way to find the planes of rotation involves a linear operator called the Hodge dual; the Hodge dual of a matrix M is usually written as ^{*}M. In general, this operator takes an algebraic description of a geometrical object, such as a plane, and produces the corresponding description of the perpendicular object; in the context of 4-dimensional Euclidean geometry, it maps planes to other planes. For example, if the 4 coordinates we’re using are called x, y, z and u, the Hodge dual of the x–y plane is the z–u plane, and similarly the dual of the plane spanned by any two coordinates is the plane spanned by the other two. With the small added complication that you need to stick to a consistent orientation scheme (to decide between the two possible directions of rotation within each plane) this is enough for us to write the Hodge dual of the matrix A. We just treat A as a sum of rotations in all 6 coordinate planes, and take the duals of each of them; this amounts to swapping the x–y coordinate of A with the z–u coordinate, etc., and changing a few signs to keep the orientation consistent.
^{*}A  =
⎡  0    f   –e    d ⎤
⎢ –f    0    c   –b ⎥
⎢  e   –c    0    a ⎥
⎣ –d    b   –a    0 ⎦
  (7) 
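The component swap and sign changes can be written as a small helper. This is a sketch under the assumption that the six components a..f fill the upper triangle in row-major order; the function name is ours:

```python
import numpy as np

def hodge_dual(A):
    """Map each coordinate plane of a 4x4 antisymmetric matrix to its
    perpendicular plane: x-y <-> z-u, x-z <-> y-u, x-u <-> y-z,
    with signs chosen for a consistent orientation."""
    a, b, c = A[0, 1], A[0, 2], A[0, 3]
    d, e, f = A[1, 2], A[1, 3], A[2, 3]
    return np.array([[ 0.,   f,  -e,   d],
                     [-f,   0.,   c,  -b],
                     [ e,  -c,   0.,   a],
                     [-d,   b,  -a,   0.]])

A = np.array([[ 0.0,  1.0,  0.5, -0.3],
              [-1.0,  0.0,  0.2,  0.8],
              [-0.5, -0.2,  0.0, -0.6],
              [ 0.3, -0.8,  0.6,  0.0]])
B = hodge_dual(A)
assert np.allclose(B, -B.T)           # the dual is again antisymmetric
assert np.allclose(hodge_dual(B), A)  # applying the dual twice is the identity
```

The second assertion is the involution property used below in passing from Eqn (8) to Eqn (9).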
Now, what we want to do is write A as a sum of rotations in just two planes: one plane whose matrix we’ll call S, and the plane perpendicular to it, whose matrix will be ^{*}S. In other words, we want to find S, ω_{1}, and ω_{2} such that:
A  =  ω_{1}S + ω_{2}^{*}S  (8) 
Taking the dual of this, and noting that applying the dual twice to anything just gives you back the original matrix, yields:
^{*}A  =  ω_{1}^{*} S + ω_{2}S  (9) 
Multiplying Eqn (8) by ω_{1} and Eqn (9) by ω_{2} and taking the difference allows us to find S in terms of A, ω_{1} and ω_{2}:
S  =  (ω_{1}A – ω_{2}^{*}A) / (ω_{1}^{2} – ω_{2}^{2})  (10) 
To find the values of ω_{1} and ω_{2}, note that if S is applied to a vector perpendicular to its plane, the result must be zero, and this is only possible if the determinant of S is zero. Writing out S in full, and then computing its determinant, gives:
S  =  1/(ω_{1}^{2} – ω_{2}^{2})  ×
⎡          0           ω_{1}a – ω_{2}f     ω_{1}b + ω_{2}e     ω_{1}c – ω_{2}d ⎤
⎢ –(ω_{1}a – ω_{2}f)           0           ω_{1}d – ω_{2}c     ω_{1}e + ω_{2}b ⎥
⎢ –(ω_{1}b + ω_{2}e)   –(ω_{1}d – ω_{2}c)          0           ω_{1}f – ω_{2}a ⎥
⎣ –(ω_{1}c – ω_{2}d)   –(ω_{1}e + ω_{2}b)  –(ω_{1}f – ω_{2}a)          0       ⎦
  (11) 
det S  =  [ω_{2}/(ω_{1}^{2} – ω_{2}^{2})]^{4} [D(ω_{1}/ω_{2})^{2} + A^{2}(ω_{1}/ω_{2}) + D]^{2}  (12) 
where A^{2} = (a^{2}+b^{2}+c^{2}+d^{2}+e^{2}+f^{2}), the square of the magnitude of A, and D = (be – cd – af) is a square root of the determinant of A, i.e. D^{2} = det A. Eqn (12) can be solved to find the value of ω_{1}/ω_{2} that makes det S equal to zero; call this value r.
r  =  [±√(A^{4} – 4 D^{2}) – A^{2}] / 2 D  (13a) 
1+r^{2}  =  A^{2} [A^{2} – √(A^{4} – 4 D^{2})] / 2 D^{2}  (13b) 
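Eqns (13) and (14) are straightforward to evaluate numerically. In this sketch the components a..f are arbitrary illustrative values:

```python
import numpy as np

a, b, c, d, e, f = 1.0, 0.5, -0.3, 0.2, 0.8, -0.6
A2 = a*a + b*b + c*c + d*d + e*e + f*f   # squared magnitude of A
D = b*e - c*d - a*f                      # a square root of det A

disc = np.sqrt(A2**2 - 4*D**2)
r = (disc - A2) / (2*D)                  # Eqn (13a), taking the + root
w1_sq = (A2 - disc) / 2                  # Eqn (14b)
w2_sq = (A2 + disc) / 2                  # Eqn (14c)
print(w1_sq, w2_sq)
```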
Then if S is normalised by requiring that S^{2} = (^{*}S)^{2} = 1, where S^{2} denotes the squared magnitude of S (the sum of the squares of its six independent components, defined just as A^{2} was above), ω_{1} and ω_{2} can be found individually. There’s a nice Pythagorean relationship between the squares of the magnitudes of the matrices involved, once A is split into dual parts, which makes it easy to find the individual rates of rotation.
A^{2}  =  ω_{1}^{2}S^{2} + ω_{2}^{2}(^{*}S)^{2}
       =  ω_{1}^{2} + ω_{2}^{2}  (14a) 
ω_{1}^{2}  =  A^{2}r^{2} / (1 + r^{2})
           =  [A^{2} – √(A^{4} – 4 D^{2})] / 2  (14b) 
ω_{2}^{2}  =  A^{2} / (1 + r^{2})
           =  2 D^{2} / [A^{2} – √(A^{4} – 4 D^{2})]
           =  [A^{2} + √(A^{4} – 4 D^{2})] / 2  (14c) 
Choosing new coordinates that put A into canonical form then requires picking one pair of orthogonal vectors spanning the plane defined by S, and then another pair spanning the plane defined by ^{*}S. There are standard linear algebra techniques for doing this, starting with a pair of row or column vectors from each matrix.
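The whole procedure can be put together in one numerical sketch: build A from illustrative components, find the rates, recover S via Eqn (10), and check the decomposition of Eqn (8). One detail worth flagging: since the root of Eqn (13a) is negative for these values, ω_{1} must be given the sign of r for det S to vanish.

```python
import numpy as np

a, b, c, d, e, f = 1.0, 0.5, -0.3, 0.2, 0.8, -0.6   # illustrative components
A = np.array([[ 0,  a,  b,  c],
              [-a,  0,  d,  e],
              [-b, -d,  0,  f],
              [-c, -e, -f,  0]])
Astar = np.array([[ 0,  f, -e,  d],                 # Hodge dual of A
                  [-f,  0,  c, -b],
                  [ e, -c,  0,  a],
                  [-d,  b, -a,  0]])

A2 = a*a + b*b + c*c + d*d + e*e + f*f
D = b*e - c*d - a*f
disc = np.sqrt(A2**2 - 4*D**2)
r = (disc - A2) / (2*D)              # Eqn (13a)
w2 = np.sqrt((A2 + disc) / 2)        # Eqn (14c)
w1 = r * w2                          # carries the sign of r; w1^2 matches Eqn (14b)

S = (w1*A - w2*Astar) / (w1**2 - w2**2)       # Eqn (10)
Sstar = (w1*Astar - w2*A) / (w1**2 - w2**2)   # Hodge dual of S, by linearity

assert np.isclose(np.linalg.det(S), 0)        # S annihilates its perpendicular plane
assert np.isclose((S**2).sum() / 2, 1)        # normalisation S^2 = 1
assert np.allclose(w1*S + w2*Sstar, A)        # Eqn (8)
```

With S and ^{*}S in hand, orthonormalising a pair of rows from each matrix yields the new coordinates that put A into canonical form.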