Coordinate Systems¶
Hemispherical coordinates are essentially spherical coordinates restricted to the non-negative half-space defined by a differential surface’s normal.
Notice that Fig. 1 depicts \((\theta, \phi)\) in a right-handed coordinate system where counterclockwise rotations are positive.
Under the mapping \(Y \rightarrow Z\), \(Z \rightarrow X\), and \(X \rightarrow Y\), the two coordinate systems are equivalent; what matters is staying consistent with whichever one is at hand. Fig. 2 is assumed in some software packages (e.g. RenderMan), while Fig. 3 is favored by graphics APIs (e.g. OpenGL and Vulkan). Before delving into coordinate system transformations, it will be helpful to have some examples to work with.
Cylindrical Coordinates¶
Fig. 4 extends polar coordinates into three dimensions. The Cartesian coordinates are defined in terms of cylindrical coordinates as
Notice that, by the convention of polar coordinates, \(\phi\) starts from the \(X\)-axis and sweeps towards the \(Z\)-axis. The inverse transformation is then
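As a concrete illustration, here is a minimal Python sketch of both directions, assuming \(Y\) is the cylinder axis (so the polar plane is \(XZ\)) and \(\phi\) is measured from \(+X\) toward \(+Z\) as described above; the function names are illustrative only.

```python
import math

def cylindrical_to_cartesian(r, phi, y):
    # phi is measured from +X toward +Z; Y is assumed to be the cylinder axis.
    x = r * math.cos(phi)
    z = r * math.sin(phi)
    return x, y, z

def cartesian_to_cylindrical(x, y, z):
    # Inverse transformation; atan2 keeps phi in the correct quadrant.
    r = math.hypot(x, z)
    phi = math.atan2(z, x)
    return r, phi, y
```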
Spherical Coordinates¶
One intuitive way to think about Fig. 5 is to relate it to Fig. 4, since they share the azimuthal angle \(\phi\). The cylindrical coordinates are defined in terms of spherical coordinates as
Substituting (2) into (1) yields
The inverse transformation from Cartesian to spherical coordinates is then
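Composing the spherical-to-cylindrical map with the cylindrical-to-Cartesian map gives a direct conversion. The sketch below assumes \(\theta\) is the polar angle measured from the \(+Y\) (zenith) axis and reuses the azimuth convention above; the names are again illustrative.

```python
import math

def spherical_to_cartesian(rho, theta, phi):
    # theta: polar angle from the +Y (zenith) axis; phi: azimuth from +X toward +Z.
    r = rho * math.sin(theta)   # cylindrical radius
    y = rho * math.cos(theta)   # height along the zenith axis
    x = r * math.cos(phi)
    z = r * math.sin(phi)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(y / rho) if rho > 0.0 else 0.0
    phi = math.atan2(z, x)
    return rho, theta, phi
```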
Change of Coordinates¶
A vector space over a field \(F\) is a set \(V\) together with the operations of vector addition and scalar multiplication that adhere to eight axioms. The elements of \(V\) and \(F\) are commonly called vectors and scalars respectively. A set \(\beta = \{ \mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n \} \subset V\) is a basis if it satisfies the following properties:
- Linear Independence
For all \(x_1, x_2, \ldots, x_n \in F\), if \(x_1 \mathbf{b}_1 + x_2 \mathbf{b}_2 + \cdots + x_n \mathbf{b}_n = \boldsymbol{0}\), then \(x_1 = x_2 = \cdots = x_n = 0\).
- Linear Span
For all \(\mathbf{v} \in V\), there exist \(x_1, x_2, \ldots, x_n \in F\) such that \(\mathbf{v} = x_1 \mathbf{b}_1 + x_2 \mathbf{b}_2 + \cdots + x_n \mathbf{b}_n\).
The coordinates of a vector \(\mathbf{v} \in V\) are those coefficients \(c_1, c_2, \ldots, c_n \in F\) which uniquely express \(\mathbf{v} = c_1 \mathbf{b}_1 + c_2 \mathbf{b}_2 + \cdots + c_n \mathbf{b}_n\).
Suppose \(\beta\) is a basis of \(V\). The dimension of \(V\), denoted \(\text{dim}_F(V)\), is the cardinality of \(\beta\) and represents the maximum number of linearly independent vectors in \(V\), which in this case is \(n\). In an \(n\)-dimensional vector space, any set of \(n\) linearly independent vectors forms a basis for the space.
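For example, in \(\mathbb{R}^n\) this boils down to checking that the matrix whose columns are the candidate vectors is square and has full rank. A minimal NumPy sketch, assuming real coordinate vectors:

```python
import numpy as np

def is_basis(vectors):
    # n vectors in R^n form a basis exactly when the matrix having them as
    # columns is square and has full rank (linear independence + span).
    B = np.column_stack(vectors)
    n = B.shape[0]
    return B.shape[1] == n and np.linalg.matrix_rank(B) == n

print(is_basis([np.array([1.0, 0.0, 0.0]),
                np.array([0.0, 1.0, 0.0]),
                np.array([1.0, 1.0, 0.0])]))   # False: third vector is b_1 + b_2
print(is_basis([np.array([1.0, 0.0, 0.0]),
                np.array([0.0, 1.0, 0.0]),
                np.array([0.0, 0.0, 1.0])]))   # True: the standard basis
```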
[Joy] denotes the column vector of the coordinates \(c_1, c_2, \ldots, c_n\) as
Let \(\gamma = \{ \mathbf{g}_1, \mathbf{g}_2, \ldots, \mathbf{g}_n \}\) denote another basis of \(V\). Observe that
where \(P_{\gamma \leftarrow \beta}\) is the transition matrix whose \(j\)-th column holds the coordinates of \(\mathbf{b}_j\) in basis \(\gamma\) [Min]. Therefore, the transition matrix \(P_{\gamma \leftarrow \beta}\) converts from \(\beta\)-coordinates to \(\gamma\)-coordinates.
To find the coordinates of \(\mathbf{b}_j\) in basis \(\gamma\), one needs to express \(\mathbf{b}_j\) as a linear combination of the \(\gamma\)-basis vectors i.e. solve the linear system
[Bura]. The solution set to each system can be computed by reducing the associated augmented matrix \(\begin{bmatrix} \mathbf{g}_1 & \mathbf{g}_2 & \cdots & \mathbf{g}_n & \mid & \mathbf{b}_j \end{bmatrix}\) to reduced echelon form, which can be accomplished through Gauss-Jordan elimination [Burb]. Notice that when \(\mathbf{b}_j = \boldsymbol{0}\), solving the linear system is equivalent to testing for linear independence. Since both the \(\gamma\)-basis and \(\beta\)-basis vectors are fixed, i.e. not varying, one can compute the solution sets for all the systems simultaneously by producing the reduced echelon form of
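Numerically, reducing this augmented matrix amounts to solving the matrix equation \(G X = B\), where \(G\) and \(B\) hold the \(\gamma\)- and \(\beta\)-basis vectors as columns and \(X = P_{\gamma \leftarrow \beta}\). A NumPy sketch, with a generic solver standing in for explicit Gauss-Jordan elimination:

```python
import numpy as np

def transition_matrix(gamma, beta):
    # Column j of the result holds the gamma-coordinates of b_j, i.e. the
    # solution of G x = b_j; solving G X = B handles all systems at once.
    G = np.column_stack(gamma)
    B = np.column_stack(beta)
    return np.linalg.solve(G, B)

gamma = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
beta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
P = transition_matrix(gamma, beta)        # P_{gamma <- beta}
v_gamma = P @ np.array([1.0, 2.0])        # beta-coordinates -> gamma-coordinates
```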
When the \(\beta\)-basis vectors are varying, reframing the problem as
and applying LU factorization instead of Gauss-Jordan elimination results in less computation. Evidently, when \(\gamma = \beta\), the transition matrix must be the identity matrix to satisfy \([\mathbf{v}]_\beta = P_{\beta \leftarrow \beta} [\mathbf{v}]_\beta\).
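A sketch of that reuse pattern, assuming SciPy is available: factor \(G\) once, then solve against the cached factors for each new \(\mathbf{b}_j\).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# The gamma-basis is fixed, so factor G once and reuse the factors for every
# new beta-basis vector instead of re-running elimination from scratch.
G = np.array([[1.0, 1.0],
              [1.0, -1.0]])
lu, piv = lu_factor(G)

def gamma_coordinates(b):
    # Forward/back substitution only: solves G x = b using the cached LU factors.
    return lu_solve((lu, piv), b)

print(gamma_coordinates(np.array([1.0, 0.0])))   # gamma-coordinates of this b_j
```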
Let \(\epsilon = \{ \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n \}\) denote the standard basis. When \(\gamma = \epsilon\), the augmented matrix is already in reduced echelon form. Hence the transition matrix
Since the transition matrices \(P_{\epsilon \leftarrow \gamma}\) and \(P_{\epsilon \leftarrow \beta}\) have linearly independent columns,
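That is, both matrices are invertible, so one can route through the standard basis via \(P_{\gamma \leftarrow \beta} = P_{\epsilon \leftarrow \gamma}^{-1} P_{\epsilon \leftarrow \beta}\). A small NumPy sketch with made-up two-dimensional bases:

```python
import numpy as np

# Hypothetical bases given by their standard-basis coordinates (columns).
P_eps_gamma = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
P_eps_beta = np.column_stack([[2.0, 0.0], [0.0, 3.0]])

# P_{gamma <- beta} = P_{eps <- gamma}^{-1} P_{eps <- beta}; solve() avoids
# forming the inverse explicitly.
P_gamma_beta = np.linalg.solve(P_eps_gamma, P_eps_beta)
```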
Applying Change of Coordinates¶
Some derivations define cylindrical coordinates (1) and spherical coordinates (3) as
and
respectively, which is essentially defining
Likewise, Fig. 2 can be converted to Fig. 3 by using the transition matrix
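Under the relabeling \(Y \rightarrow Z\), \(Z \rightarrow X\), \(X \rightarrow Y\) mentioned earlier, that transition matrix is a permutation matrix. A sketch of one direction (which of \(P\) or \(P^\top\) maps Fig. 2 to Fig. 3 depends on which figure is taken as the source, so treat the orientation here as an assumption):

```python
import numpy as np

# One direction of the axis relabeling Y -> Z, Z -> X, X -> Y: old (x, y, z)
# becomes new (z, x, y).
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

p_new = P @ np.array([1.0, 2.0, 3.0])   # -> [3., 1., 2.]
p_old = P.T @ p_new                     # P is a permutation matrix: P^{-1} = P^T
```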
References
- Bura
James V. Burke. Math 308 review. https://www.math.washington.edu/~burke/crs/407/308rev/308rev.pdf. Accessed: 2016-10-06.
- Burb
James V. Burke. Matrices, block structures and Gaussian elimination. https://www.math.washington.edu/~burke/crs/407/notes/LA-and-blocks.pdf. Accessed: 2016-10-06.
- Dawa
Paul Dawkins. Cylindrical coordinates. http://tutorial.math.lamar.edu/Classes/CalcIII/CylindricalCoords.aspx. Accessed: 2016-10-06.
- Dawb
Paul Dawkins. Spherical coordinates. http://tutorial.math.lamar.edu/Classes/CalcIII/SphericalCoords.aspx. Accessed: 2016-10-06.
- Joy
David E. Joyce. Change of coordinates. http://aleph0.clarku.edu/~djoyce/ma130/change.pdf. Accessed: 2016-10-06.
- Min
Andrey Minchenko. Transition matrix. http://www.math.cornell.edu/~andreim/Lec26.pdf. Accessed: 2016-10-06.