======================================================================
= 3D rotation group =
======================================================================

Introduction
======================================================================

In mechanics and geometry, the 3D rotation group, often denoted SO(3), is the group of all rotations about the origin of three-dimensional Euclidean space R3 under the operation of composition. By definition, a rotation about the origin is a transformation that preserves the origin, Euclidean distance (so it is an isometry), and orientation (i.e. 'handedness' of space). Every non-trivial rotation is determined by its axis of rotation (a line through the origin) and its angle of rotation. Composing two rotations results in another rotation; every rotation has a unique inverse rotation; and the identity map satisfies the definition of a rotation. Owing to the above properties (along with the associative property of composing rotations), the set of all rotations is a group under composition. Rotations are not commutative (for example, rotating 'R' 90° in the x-y plane followed by 'S' 90° in the y-z plane is not the same as 'S' followed by 'R'), making it a nonabelian group. Moreover, the rotation group has a natural structure as a manifold for which the group operations are smoothly differentiable, so it is in fact a Lie group. It is compact and has dimension 3.

Rotations are linear transformations of R3 and can therefore be represented by matrices once a basis of R3 has been chosen. Specifically, if we choose an orthonormal basis of R3, every rotation is described by an orthogonal 3×3 matrix (i.e. a 3×3 matrix with real entries which, when multiplied by its transpose, results in the identity matrix) with determinant 1. The group SO(3) can therefore be identified with the group of these matrices under matrix multiplication. These matrices are known as "special orthogonal matrices", explaining the notation SO(3).

The group SO(3) is used to describe the possible rotational symmetries of an object, as well as the possible orientations of an object in space. Its representations are important in physics, where they give rise to the elementary particles of integer spin.

Length and angle
======================================================================

Besides just preserving length, rotations also preserve the angles between vectors. This follows from the fact that the standard dot product between two vectors u and v can be written purely in terms of length:

:\mathbf{u}\cdot\mathbf{v} = \tfrac{1}{2}\left(\|\mathbf{u}+\mathbf{v}\|^2 - \|\mathbf{u}\|^2 - \|\mathbf{v}\|^2\right).

It follows that any length-preserving transformation in R3 preserves the dot product, and thus the angle between vectors. Rotations are often defined as linear transformations that preserve the inner product on R3, which is equivalent to requiring them to preserve length. See classical group for a treatment of this more general approach, where SO(3) appears as a special case.

Orthogonal and rotation matrices
======================================================================

Every rotation maps an orthonormal basis of R3 to another orthonormal basis. Like any linear transformation of finite-dimensional vector spaces, a rotation can always be represented by a matrix. Let R be a given rotation. With respect to the standard basis e1, e2, e3 of R3, the columns of R are given by (Re1, Re2, Re3). Since the standard basis is orthonormal, and since R preserves angles and lengths, the columns of R form another orthonormal basis.
This orthonormality condition can be expressed in the form

:R^\mathsf{T}R = RR^\mathsf{T} = I,

where R^T denotes the transpose of R and I is the identity matrix. Matrices for which this property holds are called orthogonal matrices. The group of all 3×3 orthogonal matrices is denoted O(3), and consists of all proper and improper rotations.

In addition to preserving length, proper rotations must also preserve orientation. A matrix will preserve or reverse orientation according to whether the determinant of the matrix is positive or negative. For an orthogonal matrix R, note that det R^T = det R implies (det R)^2 = 1, so that det R = ±1. The subgroup of orthogonal matrices with determinant +1 is called the 'special orthogonal group', denoted SO(3).

Thus every rotation can be represented uniquely by an orthogonal matrix with unit determinant. Moreover, since composition of rotations corresponds to matrix multiplication, the rotation group is isomorphic to the special orthogonal group SO(3). Improper rotations correspond to orthogonal matrices with determinant −1, and they do not form a group because the product of two improper rotations is a proper rotation.

Group structure
======================================================================

The rotation group is a group under function composition (or equivalently the product of linear transformations). It is a subgroup of the general linear group consisting of all invertible linear transformations of the real 3-space R3. Furthermore, the rotation group is nonabelian. That is, the order in which rotations are composed makes a difference. For example, a quarter turn around the positive 'x'-axis followed by a quarter turn around the positive 'y'-axis is a different rotation than the one obtained by first rotating around 'y' and then 'x'.

The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan-Dieudonné theorem.

Axis of rotation
======================================================================

Every nontrivial proper rotation in 3 dimensions fixes a unique 1-dimensional linear subspace of R3 which is called the 'axis of rotation' (this is Euler's rotation theorem). Each such rotation acts as an ordinary 2-dimensional rotation in the plane orthogonal to this axis. Since every 2-dimensional rotation can be represented by an angle φ, an arbitrary 3-dimensional rotation can be specified by an axis of rotation together with an angle of rotation about this axis. (Technically, one needs to specify an orientation for the axis and whether the rotation is taken to be clockwise or counterclockwise with respect to this orientation.) For example, counterclockwise rotation about the positive 'z'-axis by angle φ is given by

:R_z(\varphi) = \begin{bmatrix}\cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1\end{bmatrix}.

Given a unit vector n in R3 and an angle φ, let 'R'(φ, n) represent a counterclockwise rotation about the axis through n (with orientation determined by n). Then
* 'R'(0, n) is the identity transformation for any n
* 'R'(φ, n) = 'R'(−φ, −n)
* 'R'(π + φ, n) = 'R'(π − φ, −n).
Using these properties one can show that any rotation can be represented by a unique angle φ in the range 0 ≤ φ ≤ π and a unit vector n such that
* n is arbitrary if φ = 0
* n is unique if 0 < φ < π
* n is unique up to a sign if φ = π (that is, the rotations 'R'(π, ±n) are identical).
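As a concrete illustration of the matrix and axis-angle descriptions above, here is a minimal numerical sketch in Python (assuming NumPy, which is not part of the article; the trace identity tr R = 1 + 2 cos φ used at the end is a standard fact not derived in the text):

    import numpy as np

    def Rz(phi):
        """Counterclockwise rotation about the positive z-axis by angle phi."""
        c, s = np.cos(phi), np.sin(phi)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    R = Rz(0.3)
    # R is special orthogonal: R^T R = I and det R = +1.
    print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))

    # The rotation axis is the eigenvector with eigenvalue 1 (Euler's rotation theorem).
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.isclose(w, 1.0)]).ravel()
    print(axis / np.linalg.norm(axis))        # +-(0, 0, 1)

    # The angle follows from the trace: tr R = 1 + 2 cos(phi).
    print(np.arccos((np.trace(R) - 1) / 2))   # 0.3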
In the next section, this representation of rotations is used to identify SO(3) topologically with three-dimensional real projective space.

Topology
======================================================================

The Lie group SO(3) is diffeomorphic to the real projective space RP3.

Consider the solid ball in R3 of radius π (that is, all points of R3 of distance π or less from the origin). Given the above, for every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The identity rotation corresponds to the point at the center of the ball. Rotations through angles between 0 and −π correspond to points on the same axis and at the same distance from the origin but on the opposite side of the origin. The one remaining issue is that the two rotations through π and through −π are the same. So we identify (or "glue together") antipodal points on the surface of the ball. After this identification, we arrive at a topological space homeomorphic to the rotation group.

Indeed, the ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space RP3, so the latter can also serve as a topological model for the rotation group.

These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with antipodal surface points identified, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the 'z'-axis starting and ending at the identity rotation (i.e. a series of rotations through an angle φ where φ runs from 0 to 2π).

Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which 'can' be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The plate trick and similar tricks demonstrate this practically.

The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin-statistics theorem.

The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary group SU(2); it is also diffeomorphic to the unit 3-sphere 'S'3 and can be understood as the group of versors (quaternions with absolute value 1).
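To make the double covering concrete, the following minimal sketch (Python with NumPy; the helper names qmul and rotate are ad hoc, not from any quaternion library) lets a versor q act on a vector v by conjugation, v ↦ q v q⁻¹ (made explicit in the next section), and checks that q and −q produce the same rotation:

    import numpy as np

    def qmul(p, q):
        """Hamilton product of quaternions stored as (w, x, y, z)."""
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate(q, v):
        """v -> q v q^{-1}, with v embedded as the pure quaternion (0, v)."""
        qinv = q * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate = inverse for unit q
        return qmul(qmul(q, np.concatenate(([0.0], v))), qinv)[1:]

    theta = 0.4
    q = np.array([np.cos(theta/2), np.sin(theta/2), 0.0, 0.0])  # versor with axis i
    v = np.array([0.0, 1.0, 0.0])
    print(rotate(q, v))                              # ~ (0, cos 0.4, sin 0.4)
    print(np.allclose(rotate(q, v), rotate(-q, v)))  # True: q and -q give the same rotation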
The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotations. The map from 'S'3 onto SO(3) that identifies antipodal points of 'S'3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a two-to-one covering map. (See the plate trick.)

Connection between SO(3) and SU(2)
======================================================================

In this section, we give two different constructions of a two-to-one and onto homomorphism of SU(2) onto SO(3).

Using quaternions of unit norm
================================

The group SU(2) is isomorphic to the quaternions of unit norm via a map given by

:q = a\mathbf{1} + b\mathbf{i} + c\mathbf{j} + d\mathbf{k} = \alpha + j\beta \leftrightarrow \begin{bmatrix}\alpha & -\overline \beta \\ \beta & \overline \alpha\end{bmatrix} = U, \quad q \in \mathbb{H},\quad a,b,c,d \in \mathbb{R}, \quad \alpha, \beta \in \mathbb{C},\quad U \in \mathrm{SU}(2).

Let us now identify \mathbb R^3 with the span of \mathbf{i},\mathbf{j},\mathbf{k}. One can then verify that if v is in \mathbb R^3 and q is a unit quaternion, then

:qvq^{-1}\in \mathbb R^3.

Furthermore, the map v\mapsto qvq^{-1} is a rotation of \mathbb R^3. Moreover, (-q)v(-q)^{-1} is the same as qvq^{-1}. This means that there is a homomorphism from quaternions of unit norm to SO(3). One can work this homomorphism out explicitly: the unit quaternion q with

:\begin{align} q &{}= w + \mathbf{i}x + \mathbf{j}y + \mathbf{k}z , \\ 1 &{}= w^2 + x^2 + y^2 + z^2 , \end{align}

is mapped to the rotation matrix

: Q = \begin{bmatrix} 1 - 2 y^2 - 2 z^2 & 2 x y - 2 z w & 2 x z + 2 y w \\ 2 x y + 2 z w & 1 - 2 x^2 - 2 z^2 & 2 y z - 2 x w \\ 2 x z - 2 y w & 2 y z + 2 x w & 1 - 2 x^2 - 2 y^2 \end{bmatrix}.

This is a rotation around the vector (x, y, z) by an angle 2θ, where cos θ = w and |sin θ| = ‖(x, y, z)‖. The proper sign for sin θ is implied, once the signs of the axis components are fixed. The two-to-one nature is apparent since both q and −q map to the same Q.

Using Möbius transformations
==============================

[Figure] Stereographic projection from the sphere of radius 1/2, from the north pole N = (0, 0, 1/2), onto the plane M given by z = −1/2, coordinatized by (ξ, η); shown here in cross section.

The general reference for this section is . The points on the sphere

:\mathbf{S} = \{(x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = \tfrac{1}{4}\}

can, barring the north pole N, be put into one-to-one bijection with points on the plane M defined by z = −1/2, see figure. The map is called stereographic projection.

Let the coordinates on M be (ξ, η). The line L passing through N and P can be parametrized as

:L(t) = N + t(N - P) = \left(0,0,\tfrac{1}{2}\right) + t\left( \left(0,0,\tfrac{1}{2}\right) - (x, y, z) \right), \quad t\in \mathbb{R}.

Demanding that the z-coordinate of L(t_0) equals -\tfrac{1}{2}, one finds

:t_0= \frac{1}{z-\frac{1}{2}}.

We have L(t_0)=(\xi,\eta,-1/2). Hence the map

:S:\mathbf{S} \rightarrow M;\qquad P \mapsto P'

is given by

:(x,y,z) \mapsto (\xi, \eta) = \left(\frac{x}{\frac{1}{2} - z}, \frac{y}{\frac{1}{2} - z}\right) \equiv \zeta = \xi + i\eta,

where, for later convenience, the plane M is identified with the complex plane \mathbb{C}.

For the inverse, write L as

:L = N + s(P'-N) = \left(0,0,\frac{1}{2}\right) + s\left( \left(\xi, \eta, -\frac{1}{2}\right) - \left(0,0,\frac{1}{2}\right)\right),

and demand x^2 + y^2 + z^2 = \tfrac{1}{4} to find s = \frac{1}{1 + \xi^2 + \eta^2} and thus

:S^{-1}:M \rightarrow \mathbf{S};\qquad P' \mapsto P;\qquad(\xi, \eta) \mapsto (x,y,z) = \left(\frac{\xi}{1 + \xi^2 + \eta^2}, \frac{\eta}{1 + \xi^2 + \eta^2}, \frac{-1 + \xi^2 + \eta^2}{2 + 2\xi^2 + 2\eta^2}\right).

If g ∈ SO(3) is a rotation, then it will take points on S to points on S by its standard action Π_s(g) on the embedding space R3.
By composing this action with the projection S one obtains a transformation S ∘ Π_s(g) ∘ S^{−1} of M,

:\zeta=P' \quad\mapsto\quad P \quad\mapsto\quad \Pi_s(g)P = gP \quad\mapsto\quad S(gP) \equiv \Pi_u(g)\zeta = \zeta'.

Thus Π_u(g) is a transformation of \mathbb{C} associated to the transformation Π_s(g) of R3.

It turns out that g ∈ SO(3) represented in this way by Π_u(g) can be expressed as a matrix Π_u(g) ∈ SU(2) (where the notation is recycled to use the same name for the matrix as for the transformation of \mathbb{C} it represents). To identify this matrix, consider first a rotation g_φ about the 'z'-axis through an angle φ,

:\begin{align}x' &= x\cos \varphi - y \sin \varphi,\\ y' &= x\sin \varphi + y \cos \varphi,\\ z' &= z.\end{align}

Hence

:\zeta' = \frac{x' + iy'}{\frac{1}{2} - z'} = \frac{e^{i\varphi}(x + iy)}{\frac{1}{2} - z} = e^{i\varphi}\zeta = \frac{e^{\frac{i\varphi}{2}} \zeta + 0 }{0 \zeta + e^{-\frac{i\varphi}{2}}},

which, unsurprisingly, is a rotation in the complex plane. In an analogous way, if g_θ is a rotation about the 'x'-axis through an angle θ, then

:w' = e^{i\theta}w, \quad w = \frac{y + iz}{\frac{1}{2} - x},

which, after a little algebra, becomes

:\zeta' = \frac{\cos \frac{\theta}{2}\zeta +i\sin \frac{\theta}{2} }{i \sin\frac{\theta}{2}\zeta + \cos\frac{\theta}{2}}.

These two rotations, g_φ and g_θ, thus correspond to bilinear transforms of \mathbb{C}, namely, they are examples of Möbius transformations.

A general Möbius transformation is given by

:\zeta' = \frac{\alpha \zeta + \beta}{\gamma \zeta + \delta}, \quad \alpha\delta - \beta\gamma \ne 0.

The rotations g_φ and g_θ generate all of SO(3), and the composition rules of the Möbius transformations show that any composition of g_φ and g_θ translates to the corresponding composition of Möbius transformations. The Möbius transformations can be represented by matrices

:\left(\begin{matrix}\alpha & \beta\\ \gamma & \delta\end{matrix}\right), \quad \quad \alpha\delta - \beta\gamma = 1,

since a common factor cancels. For the same reason, the matrix is 'not' uniquely defined, since multiplication by −I has no effect on either the determinant or the Möbius transformation. The composition law of Möbius transformations follows that of the corresponding matrices. The conclusion is that each Möbius transformation corresponds to two matrices ±g.

Using this correspondence one may write

:\begin{align}\Pi_u(g_\varphi) &= \Pi_u\left[\left(\begin{matrix} \cos \varphi & -\sin \varphi & 0\\ \sin \varphi & \cos \varphi & 0\\ 0 & 0 & 1 \end{matrix}\right)\right] = \pm \left(\begin{matrix} e^{i\frac{\varphi}{2}} & 0\\ 0 & e^{-i\frac{\varphi}{2}} \end{matrix}\right),\\ \Pi_u(g_\theta) &= \Pi_u\left[\left(\begin{matrix} 1 & 0 & 0\\ 0 & \cos \theta & -\sin \theta\\ 0 & \sin \theta & \cos \theta \end{matrix}\right)\right] = \pm \left(\begin{matrix} \cos\frac{\theta}{2} & i\sin\frac{\theta}{2}\\ i\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{matrix}\right).\end{align}

These matrices are unitary, and thus Π_u(SO(3)) ⊂ SU(2).
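As a numerical sanity check of this correspondence, the following sketch (Python with NumPy; function names are ad hoc) implements the stereographic projection and its inverse from the formulas above and verifies that rotating a point of the sphere about the z-axis by φ multiplies ζ by e^{iφ}:

    import numpy as np

    def stereo(p):
        """Stereographic projection from the radius-1/2 sphere (minus the north pole)
        to the plane z = -1/2, returned as the complex number zeta = xi + i*eta."""
        x, y, z = p
        return (x + 1j * y) / (0.5 - z)

    def stereo_inv(zeta):
        """Inverse projection back onto the sphere of radius 1/2."""
        xi, eta = zeta.real, zeta.imag
        d = 1.0 + xi**2 + eta**2
        return np.array([xi / d, eta / d, (-1.0 + xi**2 + eta**2) / (2.0 * d)])

    def Rz(phi):
        c, s = np.cos(phi), np.sin(phi)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    p = np.array([0.3, 0.2, np.sqrt(0.25 - 0.3**2 - 0.2**2)])   # a point with x^2+y^2+z^2 = 1/4
    phi = 0.8
    # Rotation about z corresponds to the Moebius map zeta -> e^{i phi} zeta.
    print(np.isclose(stereo(Rz(phi) @ p), np.exp(1j * phi) * stereo(p)))   # True
    print(np.allclose(stereo_inv(stereo(p)), p))                           # round trip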
In terms of Euler angles one finds for a general rotation

:\begin{align}g(\varphi, \theta, \psi) &= g_\varphi g_\theta g_\psi = \left(\begin{matrix} \cos \varphi & -\sin \varphi & 0\\ \sin \varphi & \cos \varphi & 0\\ 0 & 0 & 1 \end{matrix}\right) \left(\begin{matrix} 1 & 0 & 0\\ 0 & \cos \theta & -\sin \theta\\ 0 & \sin \theta & \cos \theta \end{matrix}\right) \left(\begin{matrix} \cos \psi & -\sin \psi & 0\\ \sin \psi & \cos \psi & 0\\ 0 & 0 & 1 \end{matrix}\right)\\ &= \left(\begin{matrix} \cos\varphi\cos\psi - \cos\theta\sin\varphi\sin\psi & -\cos\varphi\sin\psi - \cos\theta\sin\varphi\cos\psi & \sin\varphi\sin\theta\\ \sin\varphi\cos\psi + \cos\theta\cos\varphi\sin\psi & -\sin\varphi\sin\psi + \cos\theta\cos\varphi\cos\psi & -\cos\varphi\sin\theta\\ \sin\psi\sin\theta & \cos\psi\sin\theta & \cos\theta \end{matrix}\right),\end{align}    (1)

one has

:\begin{align}\Pi_u(g(\varphi, \theta, \psi)) &= \pm \left(\begin{matrix} e^{i\frac{\varphi}{2}} & 0\\ 0 & e^{-i\frac{\varphi}{2}} \end{matrix}\right) \left(\begin{matrix} \cos\frac{\theta}{2} & i\sin\frac{\theta}{2}\\ i\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{matrix}\right) \left(\begin{matrix} e^{i\frac{\psi}{2}} & 0\\ 0 & e^{-i\frac{\psi}{2}} \end{matrix}\right)\\ &= \pm \left(\begin{matrix} \cos\frac{\theta}{2}e^{i\frac{\varphi + \psi}{2}} & i\sin\frac{\theta}{2}e^{i\frac{\varphi - \psi}{2}}\\ i\sin\frac{\theta}{2}e^{-i\frac{\varphi - \psi}{2}} & \cos\frac{\theta}{2}e^{-i\frac{\varphi + \psi}{2}} \end{matrix}\right).\end{align}    (2)

For the converse, consider a general matrix

:\pm\Pi_u(g_{\alpha,\beta}) = \pm\left(\begin{matrix} \alpha & \beta\\ -\overline{\beta} & \overline{\alpha} \end{matrix}\right) \in \mathrm{SU}(2).

Make the substitutions

:\begin{align}\cos\frac{\theta}{2} &= |\alpha|,\quad \sin\frac{\theta}{2} = |\beta|, \quad (0 \le \theta \le \pi),\\ \frac{\varphi + \psi}{2} &= \arg \alpha, \quad \frac{\psi - \varphi}{2} = \arg \beta.\end{align}

With these substitutions, \pm\Pi_u(g_{\alpha,\beta}) assumes the form of the right hand side (RHS) of (2), which corresponds under \Pi_u to a matrix of the form of the RHS of (1) with the same \varphi, \theta, \psi. In terms of the complex parameters \alpha, \beta,

:g_{\alpha,\beta} = \left(\begin{matrix} \frac{1}{2}(\alpha^2 - \beta^2 + \overline{\alpha^2} - \overline{\beta^2}) & \frac{i}{2}(-\alpha^2 - \beta^2 + \overline{\alpha^2} + \overline{\beta^2}) & -\alpha\beta-\overline{\alpha}\overline{\beta}\\ \frac{i}{2}(\alpha^2 - \beta^2 - \overline{\alpha^2} + \overline{\beta^2}) & \frac{1}{2}(\alpha^2 + \beta^2 + \overline{\alpha^2} + \overline{\beta^2}) & -i(+\alpha\beta-\overline{\alpha}\overline{\beta})\\ \alpha\overline{\beta} + \overline{\alpha}\beta & i(-\alpha\overline{\beta} + \overline{\alpha}\beta) & \alpha\overline{\alpha} - \beta\overline{\beta} \end{matrix}\right).

To verify this, substitute for \alpha, \beta the elements of the matrix on the RHS of (2). After some manipulation, the matrix assumes the form of the RHS of (1).

It is clear from the explicit form in terms of Euler angles that the map just described is a smooth, two-to-one, and onto group homomorphism. It is hence an explicit description of the universal covering map of SO(3) from the universal covering group SU(2).

Lie algebra
======================================================================

Associated with every Lie group is its Lie algebra, a linear space of the same dimension as the Lie group, closed under a bilinear alternating product called the Lie bracket. The Lie algebra of SO(3) is denoted by \mathfrak{so}(3) and consists of all skew-symmetric 3×3 matrices.
This may be seen by differentiating the orthogonality condition, A^\mathsf{T}A = I, A ∈ SO(3). The Lie bracket of two elements of \mathfrak{so}(3) is, as for the Lie algebra of every matrix group, given by the matrix commutator, [A_1, A_2] = A_1A_2 − A_2A_1, which is again a skew-symmetric matrix. The Lie algebra bracket captures the essence of the Lie group product in a sense made precise by the Baker–Campbell–Hausdorff formula.

The elements of \mathfrak{so}(3) are the "infinitesimal generators" of rotations, i.e. they are the elements of the tangent space of the manifold SO(3) at the identity element. If 'R'(φ, n) denotes a counterclockwise rotation with angle φ about the axis specified by the unit vector n, then

:\left.{\operatorname{d}\over\operatorname{d}\varphi} \right|_{\varphi=0} R(\varphi,\boldsymbol{n}) \boldsymbol{x} = \boldsymbol{n} \times \boldsymbol{x}

for every vector x in R3. This can be used to show that the Lie algebra \mathfrak{so}(3) (with commutator) is isomorphic to the Lie algebra R3 (with cross product). Under this isomorphism, an Euler vector \boldsymbol{\omega}\in\mathbb R^3 corresponds to the linear map \mathbf{\tilde\omega} defined by \mathbf{\tilde\omega}(\boldsymbol{x})=\boldsymbol{\omega}\times\boldsymbol{x}.

In more detail, a most often suitable basis for \mathfrak{so}(3) as a vector space is

: L_{\mathbf{x}} = \begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix} , \quad L_{\mathbf{y}} = \begin{bmatrix}0&0&1\\0&0&0\\-1&0&0\end{bmatrix} , \quad L_{\mathbf{z}} = \begin{bmatrix}0&-1&0\\1&0&0\\0&0&0\end{bmatrix}.

The commutation relations of these basis elements are

: [L_{\mathbf{x}}, L_{\mathbf{y}}] = L_{\mathbf{z}}, \quad [L_{\mathbf{z}}, L_{\mathbf{x}}] = L_{\mathbf{y}}, \quad [L_{\mathbf{y}}, L_{\mathbf{z}}] = L_{\mathbf{x}},

which agree with the relations of the three standard unit vectors of R3 under the cross product.

As announced above, one can identify any matrix in this Lie algebra with an Euler vector in R3,

:\begin{align} \boldsymbol{\omega} &= (x,y,z) \in \mathbb{R}^3,\\ \boldsymbol{\tilde{\omega}} &=\boldsymbol{\omega\cdot L} = x L_{\mathbf{x}} + y L_{\mathbf{y}} + z L_{\mathbf{z}} = \begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix} \in \mathfrak{so}(3). \end{align}

This identification is sometimes called the hat-map. Under this identification, the \mathfrak{so}(3) bracket corresponds to the cross product,

: [\tilde{\mathbf{u}},\tilde{\mathbf{v}}] = \widetilde{\mathbf{u}\!\times\!\mathbf{v} }.

The matrix identified with a vector \mathbf{u} has the property that

: \tilde{\mathbf{u}} \mathbf{v} = \mathbf{u} \times \mathbf{v},

where ordinary matrix multiplication is implied on the left hand side. This implies that \mathbf{u} is in the null space of the skew-symmetric matrix with which it is identified, because \mathbf{u} \times \mathbf{u} = \mathbf{0}.

A note on Lie algebras
=======================

In Lie algebra representations, the group SO(3) is compact and simple of rank 1, and so it has a single independent Casimir element, a quadratic invariant function of the three generators which commutes with all of them. The Killing form for the rotation group is just the Kronecker delta, and so this Casimir invariant is simply the sum of the squares of the generators, J_x,\, J_y,\, J_z, of the algebra

: [J_{\mathbf{x}}, J_{\mathbf{y}}] = J_{\mathbf{z}}, \quad [J_{\mathbf{z}}, J_{\mathbf{x}}] = J_{\mathbf{y}}, \quad [J_{\mathbf{y}}, J_{\mathbf{z}}] = J_{\mathbf{x}}.

That is, the Casimir invariant is given by

:J^2\equiv \boldsymbol{J\cdot J} = J_x^2+J_y^2+J_z^2 \propto I~.
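A quick numerical check of these commutation relations and of the Casimir statement, in a minimal sketch (Python with NumPy; the matrices are the basis L_x, L_y, L_z displayed above):

    import numpy as np

    Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
    Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
    Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

    def comm(a, b):
        return a @ b - b @ a

    # [Lx, Ly] = Lz, [Lz, Lx] = Ly, [Ly, Lz] = Lx
    print(np.allclose(comm(Lx, Ly), Lz),
          np.allclose(comm(Lz, Lx), Ly),
          np.allclose(comm(Ly, Lz), Lx))

    # The sum of squares of the generators is proportional to the identity:
    # here -2 I, which becomes +2 I = j(j+1) I (j = 1) for the hermitian J = iL used below.
    print(np.allclose(Lx @ Lx + Ly @ Ly + Lz @ Lz, -2 * np.eye(3)))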
For unitary irreducible representations, the eigenvalues of this invariant are real and discrete, and characterize each representation, which is finite dimensional, of dimensionality 2j+1. That is, the eigenvalues of this Casimir operator are

:J^2=- j(j+1) ~I_{2j+1} ~,

where j is integer or half-integer, and referred to as the spin or angular momentum.

So, above, the 3×3 generators 'L' displayed act on the triplet (spin 1) representation, while the 2×2 ones ('t') act on the doublet (spin-½) representation. By taking Kronecker products of the doublet representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting generators for higher spin systems in three spatial dimensions, for arbitrarily large j, can be calculated using these spin operators and ladder operators.

For every unitary irreducible representation there is an equivalent one. All infinite-dimensional irreducible representations must be non-unitary, since the group is compact.

In quantum mechanics, the Casimir invariant is the "angular-momentum-squared" operator; integer values of spin characterize bosonic representations, while half-integer values characterize fermionic representations. The antihermitian matrices used above are utilized as spin operators, after they are multiplied by i, so they are now hermitian (like the Pauli matrices). Thus, in this language,

: [J_{\mathbf{x}}, J_{\mathbf{y}}] = iJ_{\mathbf{z}}, \quad [J_{\mathbf{z}}, J_{\mathbf{x}}] = iJ_{\mathbf{y}}, \quad [J_{\mathbf{y}}, J_{\mathbf{z}}] = iJ_{\mathbf{x}},

and hence

:J^2= j(j+1) ~I_{2j+1} ~.

Explicit expressions for these generators are

:\begin{align} \left ( J_z^{(j)}\right ) _{ba} &= (j+1-a)~\delta_{b,a}\\ \left (J_x^{(j)}\right )_{ba} &=\frac{1}{2}(\delta_{b,a+1}+\delta_{b+1,a} ) \sqrt{(j+1)(a+b-1)-ab}\\ \left (J_y^{(j)}\right )_{ba} &=\frac{1}{2i}(\delta_{b,a+1}-\delta_{b+1,a} ) \sqrt{(j+1)(a+b-1)-ab}\\ &1 \le a, b \le 2j+1~, \end{align}

for arbitrary j. For example, the resulting spin matrices for spin 1, spin 3/2, and spin 5/2 are:

For j = 1:

:\begin{align} J_x &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 &1 &0\\ 1 &0 &1\\ 0 &1 &0 \end{pmatrix} \\ J_y &= \frac{1}{\sqrt{2}} \begin{pmatrix} 0 &-i &0\\ i &0 &-i\\ 0 &i &0 \end{pmatrix} \\ J_z &= \begin{pmatrix} 1 &0 &0\\ 0 &0 &0\\ 0 &0 &-1 \end{pmatrix} \end{align}

(Note, however, how these are in an equivalent, but different basis, the spherical basis, than the above iL in the Cartesian basis.)

For j = \textstyle\frac{3}{2}:

:\begin{align} J_x &= \frac{1}{2} \begin{pmatrix} 0 &\sqrt{3} &0 &0\\ \sqrt{3} &0 &2 &0\\ 0 &2 &0 &\sqrt{3}\\ 0 &0 &\sqrt{3} &0 \end{pmatrix} \\ J_y &= \frac{1}{2} \begin{pmatrix} 0 &-i\sqrt{3} &0 &0\\ i\sqrt{3} &0 &-2i &0\\ 0 &2i &0 &-i\sqrt{3}\\ 0 &0 &i\sqrt{3} &0 \end{pmatrix} \\ J_z &=\frac{1}{2} \begin{pmatrix} 3 &0 &0 &0\\ 0 &1 &0 &0\\ 0 &0 &-1 &0\\ 0 &0 &0 &-3 \end{pmatrix}. \end{align}

For j = \textstyle\frac{5}{2}:

:\begin{align} J_x &= \frac{1}{2} \begin{pmatrix} 0 &\sqrt{5} &0 &0 &0 &0 \\ \sqrt{5} &0 &2\sqrt{2} &0 &0 &0 \\ 0 &2\sqrt{2} &0 &3 &0 &0 \\ 0 &0 &3 &0 &2\sqrt{2} &0 \\ 0 &0 &0 &2\sqrt{2} &0 &\sqrt{5} \\ 0 &0 &0 &0 &\sqrt{5} &0 \end{pmatrix} \\ J_y &= \frac{1}{2} \begin{pmatrix} 0 &-i\sqrt{5} &0 &0 &0 &0 \\ i\sqrt{5} &0 &-2i\sqrt{2} &0 &0 &0 \\ 0 &2i\sqrt{2} &0 &-3i &0 &0 \\ 0 &0 &3i &0 &-2i\sqrt{2} &0 \\ 0 &0 &0 &2i\sqrt{2} &0 &-i\sqrt{5} \\ 0 &0 &0 &0 &i\sqrt{5} &0 \end{pmatrix} \\ J_z &= \frac{1}{2} \begin{pmatrix} 5 &0 &0 &0 &0 &0 \\ 0 &3 &0 &0 &0 &0 \\ 0 &0 &1 &0 &0 &0 \\ 0 &0 &0 &-1 &0 &0 \\ 0 &0 &0 &0 &-3 &0 \\ 0 &0 &0 &0 &0 &-5 \end{pmatrix}.
\end{align}

and so on.

Isomorphism with su(2)
========================

The Lie algebras \mathfrak{so}(3) and \mathfrak{su}(2) are isomorphic. One basis for \mathfrak{su}(2) is given by

:t_1 = \frac{1}{2}\begin{bmatrix}0 & -i\\ -i & 0\end{bmatrix}, \quad t_2 = \frac{1}{2}\begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}, \quad t_3 = \frac{1}{2}\begin{bmatrix}-i & 0\\ 0 & i\end{bmatrix}.

These are related to the Pauli matrices by t_i = \frac{1}{2i}\sigma_i. The Pauli matrices abide by the physicists' convention for Lie algebras. In that convention, Lie algebra elements are multiplied by i, the exponential map (below) is defined with an extra factor of i in the exponent, and the structure constants remain the same, but the 'definition' of them acquires a factor of i. Likewise, commutation relations acquire a factor of i. The commutation relations for the t_i are

:[t_i, t_j] = \varepsilon_{ijk}t_k,

where \varepsilon_{ijk} is the totally anti-symmetric Levi-Civita symbol with \varepsilon_{123} = 1. The isomorphism between \mathfrak{so}(3) and \mathfrak{su}(2) can be set up in several ways. For later convenience, \mathfrak{so}(3) and \mathfrak{su}(2) are identified by mapping

:L_x \leftrightarrow t_1, \quad L_y \leftrightarrow t_2, \quad L_z \leftrightarrow t_3,

and extending by linearity.

Exponential map
======================================================================

The exponential map for SO(3) is, since SO(3) is a matrix Lie group, defined using the standard matrix exponential series,

: \exp \colon \mathfrak{so}(3) \to SO(3); \quad A \mapsto e^A = \sum_{k=0}^\infty \frac{1}{k!} A^k = I + A + \tfrac{1}{2} A^2 + \cdots.

For any skew-symmetric matrix A ∈ \mathfrak{so}(3), e^A is always in SO(3). The proof uses the elementary properties of the matrix exponential,

: (e^A)^\mathsf{T} e^A = e^{A^\mathsf{T}} e^A = e^{A^\mathsf{T} + A} = e^{-A+A} = e^{A-A} = e^A (e^A)^\mathsf{T} = e^0 = I,

since the matrices A and A^T commute; this is easily proven with the skew-symmetric matrix condition A^T = −A. This is not enough to show that \mathfrak{so}(3) is the corresponding Lie algebra for SO(3), and shall be proven separately.

The level of difficulty of the proof depends on how a matrix group Lie algebra is defined. One definition takes the Lie algebra to be the set of matrices A for which e^{tA} ∈ SO(3) for all real t, in which case it is trivial. Another uses as a definition the derivatives of smooth curve segments in SO(3) through the identity, taken at the identity, in which case it is harder.

For a fixed A ≠ 0, t ↦ e^{tA}, t ∈ R, is a one-parameter subgroup along a geodesic in SO(3). That this gives a one-parameter subgroup follows directly from properties of the exponential map.

The exponential map provides a diffeomorphism between a neighborhood of the origin in \mathfrak{so}(3) and a neighborhood of the identity in SO(3). For a proof, see Closed subgroup theorem.

The exponential map is surjective. This follows from the fact that every R ∈ SO(3), since every rotation leaves an axis fixed (Euler's rotation theorem), is conjugate to a block diagonal matrix of the form

:D=\left(\begin{matrix}\cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1\end{matrix}\right) = e^{\theta L_z},

such that R = BDB^{-1}, and that

:Be^{\theta L_z}B^{-1} = e^{B\theta L_zB^{-1}},

together with the fact that \mathfrak{so}(3) is closed under the adjoint action of SO(3), meaning that B\theta L_zB^{-1} ∈ \mathfrak{so}(3). Thus, e.g., it is easy to check the popular identity

:e^{-\pi L_x/2} ~e^{\theta L_z}~e^{\pi L_x/2}=e^{\theta L_y} ~.

As shown above, every element A ∈ \mathfrak{so}(3) is associated with a vector \boldsymbol{\omega} = \theta\,\boldsymbol{u}, where \boldsymbol{u} is a unit magnitude vector. Since \boldsymbol{u} is in the null space of A, if one now rotates to a new basis, through some other orthogonal matrix, with \boldsymbol{u} as the 'z'-axis, the final column and row of the rotation matrix in the new basis will be zero. Thus, we know in advance from the formula for the exponential that \exp(A) must leave \boldsymbol{u} fixed.
It is mathematically impossible to supply a straightforward formula for such a basis as a function of \boldsymbol{u}, because its existence would violate the hairy ball theorem; but direct exponentiation is possible, and yields

:\begin{align} \exp( \tilde{\boldsymbol{\omega}} ) &{}= \exp( \theta ~(\boldsymbol{u\cdot L}) ) = \exp \left( \theta \begin{bmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{bmatrix} \right)\\[4pt] &{}= \boldsymbol{I} + 2cs~(\boldsymbol{u\cdot L}) + 2s^2 ~(\boldsymbol{u\cdot L} )^2 \\[4pt] &{}= \begin{bmatrix} 2 (x^2 - 1) s^2 + 1 & 2 x y s^2 - 2 z c s & 2 x z s^2 + 2 y c s \\ 2 x y s^2 + 2 z c s & 2 (y^2 - 1) s^2 + 1 & 2 y z s^2 - 2 x c s \\ 2 x z s^2 - 2 y c s & 2 y z s^2 + 2 x c s & 2 (z^2 - 1) s^2 + 1 \end{bmatrix} , \end{align}

where c = \cos\frac{\theta}{2}, s = \sin\frac{\theta}{2}. This is recognized as a matrix for a rotation around the axis \boldsymbol{u} by the angle \theta: cf. Rodrigues' rotation formula.

Logarithm map
======================================================================

Given R ∈ SO(3), let

:A = \frac{R - R^{\mathrm{T}}}{2}

denote its antisymmetric part and let \|A\|=\sqrt{-\text{Tr}(A^2)/2}. Then the logarithm of R is given by

:\log R = \frac{\sin^{-1}\|A\|}{\|A\|}A.

This is manifest by inspection of the mixed symmetry form of Rodrigues' formula,

:e^X = I + \frac{\sin \theta}{\theta}X + 2\frac{\sin^2\frac{\theta}{2}}{\theta^2}X^2, \quad \theta = \|X\|,

where the first and last term on the right-hand side are symmetric.

Baker–Campbell–Hausdorff formula
======================================================================

Suppose X and Y in the Lie algebra are given. Their exponentials, exp(X) and exp(Y), are rotation matrices, which can be multiplied. Since the exponential map is a surjection, exp(X)exp(Y) = exp(Z) for some Z in the Lie algebra, and one may tentatively write

: Z = C(X, Y),

for C some expression in X and Y. When X and Y commute, then Z = X + Y, mimicking the behavior of complex exponentiation. The general case is given by the more elaborate BCH formula, a series expansion of nested Lie brackets. For matrices, the Lie bracket is the same operation as the commutator, which monitors lack of commutativity in multiplication. This general expansion unfolds as follows,

: Z = C(X, Y) = X + Y + \tfrac12 [X, Y] + \tfrac{1}{12} [X,[X,Y]] - \tfrac{1}{12} [Y,[X,Y]] + \cdots ~.

The infinite expansion in the BCH formula for \mathfrak{so}(3) reduces to a compact form,

:Z = \alpha X + \beta Y + \gamma[X , Y],

for suitable trigonometric function coefficients \alpha, \beta, \gamma. These are given by

: \alpha = \varphi \cot(\varphi/2) ~ \gamma, \qquad \beta = \theta \cot(\theta/2) ~\gamma, \qquad \gamma = \frac{\sin^{-1}d}{d}\frac{c}{\theta \varphi}~~,

where

:\begin{align}c &= \frac{1}{2}\sin\theta\sin\varphi -2 \sin^2\frac{\theta}{2}\sin^2\frac{\varphi}{2}\cos(\angle(u,v)) ,\quad a = c \cot(\varphi/2), \quad b = c \cot(\theta/2), \\ d &= \sqrt{a^2 + b^2 +2ab\cos(\angle(u,v)) + c^2 \sin^2(\angle(u,v))}~~,\end{align}

for

: \theta = \frac{1}{\sqrt{2}}\|X\| ~, \quad \varphi = \frac{1}{\sqrt{2}}\|Y\|~, \quad \angle(u,v) = \cos^{-1}\frac{\langle X, Y\rangle}{\|X\|\|Y\|}~.

The inner product is the Hilbert–Schmidt inner product and the norm is the associated norm. Under the hat-isomorphism,

: \langle u, v\rangle = \frac{1}{2}\operatorname{Tr}X^{\mathrm{T}}Y,

which explains the factors of \frac{1}{\sqrt{2}} for \theta and \varphi. This factor drops out in the expression for the angle.

It is worthwhile to write this composite rotation generator as

:\alpha X + \beta Y + \gamma[X , Y] ~\underset{\mathfrak{so}(3)}{=} ~ X + Y + \tfrac12 [X, Y] + \tfrac{1}{12} [X,[X,Y]] - \tfrac{1}{12} [Y,[X,Y]] + \cdots ,

to emphasize that this is a 'Lie algebra identity'.
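The exponential and logarithm formulas are easy to exercise numerically; a minimal sketch (Python with NumPy; helper names are ad hoc, and the arcsin-based logarithm is only used here for rotation angles below π/2, where the principal branch applies):

    import numpy as np

    def hat(u):
        """The hat-map: skew-symmetric matrix with hat(u) @ v = u x v."""
        x, y, z = u
        return np.array([[0.0, -z,  y],
                         [ z, 0.0, -x],
                         [-y,  x, 0.0]])

    def exp_so3(u, theta):
        """Rodrigues form exp(theta (u.L)) = I + 2cs K + 2 s^2 K^2, K = hat(u)."""
        K = hat(u)
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.eye(3) + 2 * c * s * K + 2 * s**2 * (K @ K)

    def log_so3(R):
        """log R = (arcsin ||A|| / ||A||) A, with A the antisymmetric part of R."""
        A = (R - R.T) / 2
        nA = np.sqrt(-np.trace(A @ A) / 2)
        return np.arcsin(nA) / nA * A

    u = np.array([1.0, 2.0, 2.0]) / 3.0          # unit axis
    R = exp_so3(u, 0.7)
    print(np.allclose(R.T @ R, np.eye(3)))       # exp lands in SO(3)
    print(np.allclose(log_so3(R), 0.7 * hat(u))) # log recovers theta * hat(u)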
The Lie algebra identity above holds for all faithful representations of \mathfrak{so}(3). The kernel of a Lie algebra homomorphism is an ideal, but \mathfrak{so}(3), being simple, has no nontrivial ideals, and all nontrivial representations are hence faithful. It holds in particular in the doublet or spinor representation. The same explicit formula thus follows in a simpler way through Pauli matrices, cf. the 2×2 derivation for SU(2).

The Pauli vector version of the same BCH formula is the somewhat simpler group composition law of SU(2),

:e^{i a'(\hat{u} \cdot \vec{\sigma})}e^{i b'(\hat{v} \cdot \vec{\sigma})} = \exp\left (\frac{c'}{\sin c'} \sin a' \sin b' ~ \left((i\cot b'\hat{u}+ i \cot a' \hat{v})\cdot\vec{\sigma} +\frac{1}{2} [i \hat{u} \cdot \vec{\sigma} , i \hat{v} \cdot \vec{\sigma} ]\right )\right) ~,

where

:\cos c' = \cos a' \cos b' - \hat{u} \cdot\hat{v} \sin a' \sin b'~,

the spherical law of cosines. (Note that a', b', c' are angles, not the a, b, c above.)

This is manifestly of the same format as above,

:Z = \alpha' X + \beta' Y + \gamma' [X, Y],

with

:X = i a'\hat{u} \cdot \vec{\sigma}, \quad Y = ib'\hat{v} \cdot \vec{\sigma} ~\in \mathfrak{su}(2),

so that

:\begin{align}\alpha' &= \frac{c'}{\sin c'}\frac{\sin a'}{a'}\cos b'\\ \beta' &= \frac{c'}{\sin c'}\frac{\sin b'}{b'}\cos a'\\ \gamma' &= \frac{1}{2}\frac{c'}{\sin c'}\frac{\sin a'}{a'}\frac{\sin b'}{b'}~. \end{align}

For uniform normalization of the generators in the Lie algebras involved, express the Pauli matrices in terms of the t-matrices, \sigma \to 2i t, so that

:a' \mapsto -\frac{\theta}{2}, \quad b' \mapsto -\frac {\varphi}{2}.

To verify that these are the same coefficients as above, compute the ratios of the coefficients,

:\begin{align}\frac{\alpha'}{\gamma'} &= {\theta}\cot\frac{\theta}{2} &= \frac{\alpha}{\gamma}\\ \frac{\beta'}{\gamma'} &= \varphi\cot\frac{\varphi}{2} &= \frac{\beta}{\gamma}~.\end{align}

Finally, \gamma = \gamma' given the identity d = \sin 2c'. For the general case, one might use Ref.

The quaternion formulation of the composition of two rotations R_B and R_A also yields directly the rotation axis and angle of the composite rotation R_C = R_B R_A.

Let the quaternion associated with a spatial rotation R be constructed from its rotation axis S and the rotation angle \varphi about this axis. The associated quaternion is given by

: S = \cos\frac{\varphi}{2} + \sin\frac{\varphi}{2} \mathbf{S}.

Then the composition of the rotation R_B with R_A is the rotation R_C = R_B R_A with rotation axis and angle defined by the product of the quaternions

:A=\cos\frac{\alpha}{2}+ \sin\frac{\alpha}{2}\mathbf{A}\quad \text{and}\quad B=\cos\frac{\beta}{2}+ \sin\frac{\beta}{2}\mathbf{B},

that is

: C = \cos\frac{\gamma}{2}+\sin\frac{\gamma}{2}\mathbf{C} = \Big(\cos\frac{\beta}{2}+\sin\frac{\beta}{2}\mathbf{B}\Big) \Big(\cos\frac{\alpha}{2}+ \sin\frac{\alpha}{2}\mathbf{A}\Big).

Expand this product to obtain

: \cos\frac{\gamma}{2}+\sin\frac{\gamma}{2} \mathbf{C} = \Big(\cos\frac{\beta}{2}\cos\frac{\alpha}{2} - \sin\frac{\beta}{2}\sin\frac{\alpha}{2} \mathbf{B}\cdot \mathbf{A}\Big) + \Big(\sin\frac{\beta}{2}\cos\frac{\alpha}{2} \mathbf{B} + \sin\frac{\alpha}{2}\cos\frac{\beta}{2} \mathbf{A} + \sin\frac{\beta}{2}\sin\frac{\alpha}{2} \mathbf{B}\times \mathbf{A}\Big).
Divide both sides of this equation by the identity

: \cos\frac{\gamma}{2} = \cos\frac{\beta}{2}\cos\frac{\alpha}{2} - \sin\frac{\beta}{2}\sin\frac{\alpha}{2} \mathbf{B}\cdot \mathbf{A},

which is the law of cosines on a sphere, and compute

: \tan\frac{\gamma}{2} \mathbf{C} = \frac{\tan\frac{\beta}{2}\mathbf{B} + \tan\frac{\alpha}{2} \mathbf{A} + \tan\frac{\beta}{2}\tan\frac{\alpha}{2} \mathbf{B}\times \mathbf{A}}{1 - \tan\frac{\beta}{2}\tan\frac{\alpha}{2} \mathbf{B}\cdot \mathbf{A}}.

This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two rotations. Rodrigues derived this formula in 1840 (see page 408).

The three rotation axes A, B, and C form a spherical triangle, and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles.

Infinitesimal rotations
======================================================================

The matrices in the Lie algebra are not themselves rotations; the skew-symmetric matrices are derivatives. An actual "differential rotation", or 'infinitesimal rotation matrix', has the form

: I + A \, d\theta ~,

where d\theta is vanishingly small and A ∈ \mathfrak{so}(3).

These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. To understand what this means, consider

: dA_{\mathbf{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix}~ .

First, test the orthogonality condition, dA_{\mathbf{x}}^{\mathrm{T}}\,dA_{\mathbf{x}} = I. The product is

: dA_{\mathbf{x}}^T \, dA_{\mathbf{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1+d\theta^2 & 0 \\ 0 & 0 & 1+d\theta^2 \end{bmatrix} ,

differing from the identity matrix by second order infinitesimals, discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix.

Next, examine the square of the matrix,

: dA_{\mathbf{x}}^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1-d\theta^2 & -2d\theta \\ 0 & 2\,d\theta & 1-d\theta^2 \end{bmatrix}~.

Again discarding second order effects, note that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation,

: dA_{\mathbf{y}} = \begin{bmatrix} 1 & 0 & d\varphi \\ 0 & 1 & 0 \\ -d\varphi & 0 & 1 \end{bmatrix} .

Compare the products dA_{\mathbf{x}}\,dA_{\mathbf{y}} and dA_{\mathbf{y}}\,dA_{\mathbf{x}},

:\begin{align} dA_{\mathbf{x}}\,dA_{\mathbf{y}} &{}= \begin{bmatrix} 1 & 0 & d\varphi \\ d\theta\,d\varphi & 1 & -d\theta \\ -d\varphi & d\theta & 1 \end{bmatrix} \\ dA_{\mathbf{y}}\,dA_{\mathbf{x}} &{}= \begin{bmatrix} 1 & d\theta\,d\varphi & d\varphi \\ 0 & 1 & -d\theta \\ -d\varphi & d\theta & 1 \end{bmatrix}. \\ \end{align}

Since d\theta \, d\varphi is second order, we discard it: thus, to first order, multiplication of infinitesimal rotation matrices is 'commutative'. In fact,

: dA_{\mathbf{x}}\,dA_{\mathbf{y}} = dA_{\mathbf{y}}\,dA_{\mathbf{x}} , \,\!

again to first order. In other words, the order in which infinitesimal rotations are applied is irrelevant. This useful fact makes, for example, the derivation of rigid body rotation relatively simple. But one must always be careful to distinguish (the first order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from Lie algebra elements. When contrasting the behavior of finite rotation matrices in the BCH formula above with that of infinitesimal rotation matrices, where all the commutator terms will be second order infinitesimals, one finds a bona fide vector space.
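A short numerical illustration of this first-order commutativity (Python with NumPy): the difference of the two orderings is exactly dθ dφ [L_x, L_y] = dθ dφ L_z, which is second order and is precisely what the first-order treatment discards.

    import numpy as np

    Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
    Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
    Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

    dtheta, dphi = 1e-4, 3e-4
    dAx = np.eye(3) + Lx * dtheta          # infinitesimal rotation about x
    dAy = np.eye(3) + Ly * dphi            # infinitesimal rotation about y

    diff = dAx @ dAy - dAy @ dAx
    print(np.max(np.abs(diff)))                   # ~3e-8: second order only
    print(np.allclose(diff, dtheta * dphi * Lz))  # exactly dtheta*dphi*[Lx, Ly]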
Technically, this dismissal of any second order terms amounts to group contraction.

Realizations of rotations
======================================================================

We have seen that there are a variety of ways to represent rotations:
* as orthogonal matrices with determinant 1,
* by axis and rotation angle,
* in quaternion algebra with versors and the map 3-sphere 'S'3 → SO(3) (see quaternions and spatial rotations),
* in geometric algebra as a rotor,
* as a sequence of three rotations about three fixed axes; see Euler angles.

Spherical harmonics
======================================================================

See also Representations of SO(3).

The group of three-dimensional Euclidean rotations has an infinite-dimensional representation on the Hilbert space

: L^2 (\mathbf{S}^2) = \operatorname{span} \left \{ Y^\ell_m, \ell \in \mathbf{N}^+, -\ell \leqslant m \leqslant \ell \right \},

where Y^\ell_m are spherical harmonics. Its elements are square integrable complex-valued functions on the sphere. The inner product on this space is given by

:\langle f,g\rangle = \int_{\mathbf{S}^2}\overline{f}g\,d\Omega = \int_0^{2\pi} \int_0^\pi \overline{f}g \sin\theta \, d\theta \, d\varphi.

If f is an arbitrary square integrable function defined on the unit sphere \mathbf{S}^2, then it can be expressed as

:|f\rangle = \sum_{\ell = 1}^\infty\sum_{m = -\ell}^{m = \ell} |Y_m^\ell\rangle\langle Y_m^\ell|f\rangle, \qquad f(\theta, \varphi) = \sum_{\ell = 1}^\infty\sum_{m = -\ell}^{m = \ell} f_{\ell m} Y^\ell_m(\theta, \varphi),

where the expansion coefficients are given by

:f_{\ell m} = \langle Y_m^\ell, f \rangle = \int_{\mathbf{S}^2}\overline{Y_m^\ell}\,f \, d\Omega = \int_0^{2\pi} \int_0^\pi \overline{Y_m^\ell}(\theta, \varphi)f(\theta, \varphi)\sin \theta \, d\theta \, d\varphi.

The Lorentz group action restricts to that of SO(3) and is expressed as

:(\Pi(R)f)(\theta(x), \varphi(x)) = \sum_{\ell = 1}^\infty\sum_{m = -\ell}^{m = \ell}\sum_{m' = -\ell}^{m' = \ell} D^{(\ell)}_{mm'} (R) f_{\ell m'}Y^\ell_m \left (\theta(R^{-1}x), \varphi(R^{-1}x) \right ), \qquad R \in \mathrm{SO}(3), \quad x \in \mathbf{S}^2.

This action is unitary, meaning that

:\langle \Pi(R)f,\Pi(R)g\rangle = \langle f,g\rangle \qquad \forall f,g \in L^2(\mathbf{S}^2), \quad \forall R \in \mathrm{SO}(3).

The D^{(\ell)}_{mm'} can be obtained from the representations above using the Clebsch-Gordan decomposition, but they are more easily directly expressed as an exponential of an odd-dimensional representation of the Lie algebra (the 3-dimensional one is exactly \mathfrak{so}(3)). In this case the space decomposes neatly into an infinite direct sum of irreducible odd finite-dimensional representations according to

:L^2(\mathbf{S}^2) = \sum_{i = 0}^\infty V_{2i + 1} \equiv \bigoplus_{i=0}^\infty \operatorname{span}\left \{Y_m^{2i+1} \right \}.

This is characteristic of infinite-dimensional unitary representations of SO(3). If Π is an infinite-dimensional unitary representation on a separable Hilbert space, then it decomposes as a direct sum of finite-dimensional unitary representations. Such a representation is thus never irreducible.
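As a small numerical check of the inner product and the orthonormal basis above, the following sketch uses SciPy's spherical harmonics together with a simple trapezoidal quadrature (note that scipy.special.sph_harm takes its arguments in the order (m, ℓ, azimuth, polar), so the angles are passed in the opposite order to the (θ, φ) convention used here; helper names are ad hoc):

    import numpy as np
    from scipy.special import sph_harm

    def inner(f, g, n=200):
        """<f, g> = integral of conj(f) g sin(theta) dtheta dphi over the sphere."""
        theta = np.linspace(0.0, np.pi, n)           # polar angle
        phi = np.linspace(0.0, 2.0 * np.pi, 2 * n)   # azimuth
        T, P = np.meshgrid(theta, phi, indexing="ij")
        vals = np.conj(f(T, P)) * g(T, P) * np.sin(T)
        return np.trapz(np.trapz(vals, phi, axis=1), theta)

    def Y(l, m):
        """Y_m^l as a function of (theta, phi) in the article's convention."""
        return lambda T, P: sph_harm(m, l, P, T)

    # Orthonormality of the basis of L^2(S^2)
    print(np.round(inner(Y(2, 1), Y(2, 1)).real, 3))   # 1.0
    print(np.round(abs(inner(Y(2, 1), Y(3, 1))), 3))   # 0.0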
All irreducible finite-dimensional representations can be made unitary by an appropriate choice of inner product,

:\langle f, g\rangle_U \equiv \int_{\mathrm{SO}(3)}\langle \Pi(R)f, \Pi(R)g\rangle \, dg = \frac{1}{8\pi^2} \int_0^{2\pi} \int_0^\pi \int_0^{2\pi} \langle \Pi(R)f, \Pi(R)g\rangle \sin \theta \, d\varphi \, d\theta \, d\psi, \quad f,g \in V,

where the integral is the unique invariant integral over SO(3) normalized to 1, here expressed using the Euler angles parametrization. The inner product inside the integral is any inner product on V.

Generalizations
======================================================================

The rotation group generalizes quite naturally to 'n'-dimensional Euclidean space, R'n' with its standard Euclidean structure. The group of all proper and improper rotations in 'n' dimensions is called the orthogonal group O('n'), and the subgroup of proper rotations is called the special orthogonal group SO('n'), which is a Lie group of dimension 'n'('n' − 1)/2.

In special relativity, one works in a 4-dimensional vector space, known as Minkowski space, rather than 3-dimensional Euclidean space. Unlike Euclidean space, Minkowski space has an inner product with an indefinite signature. However, one can still define 'generalized rotations' which preserve this inner product. Such generalized rotations are known as Lorentz transformations, and the group of all such transformations is called the Lorentz group.

The rotation group SO(3) can be described as a subgroup of E+(3), the Euclidean group of direct isometries of Euclidean R3. This larger group is the group of all motions of a rigid body: each of these is a combination of a rotation about an arbitrary axis and a translation along the axis, or put differently, a combination of an element of SO(3) and an arbitrary translation.

In general, the rotation group of an object is the symmetry group within the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group.

See also
======================================================================

*Orthogonal group
*Angular momentum
*Coordinate rotations
*Charts on SO(3)
*Representations of SO(3)
*Euler angles
*Rodrigues' rotation formula
*Infinitesimal rotation
*Pin group
*Quaternions and spatial rotations
*Rigid body
*Spherical harmonics
*Plane of rotation
*Lie group
*Pauli matrix
*Plate trick

License
=========

All content on Gopherpedia comes from Wikipedia, and is licensed under CC-BY-SA
License URL: http://creativecommons.org/licenses/by-sa/3.0/
Original Article: http://en.wikipedia.org/wiki/3D_rotation_group