(i) Computing Homographies of \(\mathbb{P}^n\)
Each point correspondence
\(\mathbf{x} \leftrightarrow \mathbf{x}'\) with
\(\mathbf{x}, \mathbf{x}' \in \mathbb{R}^m\) (homogeneous coordinates, so
\(m = n + 1\)) is related by a homography
\(\mathbf{H} \in \mathbb{R}^{m \times m}\) such that
\(\mathbf{x}' \sim \mathbf{H} \mathbf{x}\). Let \(\mathbf{h}_i^\top\)
denote the \(i^\text{th}\) row of \(\mathbf{H}\).
Since equality only holds up to scale in homogeneous coordinates, set
\(x'_m = 1\) so that for \(i = 1, \ldots, m - 1\)
\[\begin{split}\frac{x'_i}{x'_m}
&= \frac{\mathbf{h}_i^\top \mathbf{x}}{\mathbf{h}_m^\top \mathbf{x}}\\
0 &= \mathbf{x} \cdot \mathbf{h}_i -
\frac{x'_i}{x'_m} \mathbf{x} \cdot \mathbf{h}_m\\
0 &= \mathbf{x} \cdot \mathbf{h}_i - x'_i \mathbf{x} \cdot \mathbf{h}_m.\end{split}\]
Given a set of \(n\) point correspondences, organize them as
\(\mathbf{X}, \mathbf{X}' \in \mathbb{R}^{n \times m}\). The previous
equations can be rearranged into the following sparse linear system
\[\begin{split}\DeclareMathOperator{\diag}{diag}
\boldsymbol{0} =
\begin{bmatrix}
\mathbf{X} & & & & -\diag(\mathbf{X}'_{\colon 1}) \mathbf{X}\\
& \mathbf{X} & & & -\diag(\mathbf{X}'_{\colon 2}) \mathbf{X}\\
& & \ddots & & \vdots\\
& & & \mathbf{X} & -\diag(\mathbf{X}'_{\colon (m - 1)}) \mathbf{X}
\end{bmatrix}
\begin{bmatrix} \mathbf{h}_1\\ \mathbf{h}_2\\ \vdots\\ \mathbf{h}_m \end{bmatrix}.\end{split}\]
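As a concrete illustration, a minimal numpy sketch (the function name and the synthetic test are my own) that assembles the system above and takes \(\mathbf{h}\) as the right singular vector associated with the smallest singular value:

    import numpy as np

    def dlt_homography(X, Xp):
        """Estimate H (m x m) from n >= m + 1 correspondences.

        X and Xp are (n, m) arrays of homogeneous points; the last
        coordinate of Xp is rescaled to 1 to match the derivation above.
        """
        n, m = X.shape
        Xp = Xp / Xp[:, -1:]                  # enforce x'_m = 1
        A = np.zeros(((m - 1) * n, m * m))
        for i in range(m - 1):                # block row i constrains h_{i+1} and h_m
            rows = slice(i * n, (i + 1) * n)
            A[rows, i * m:(i + 1) * m] = X
            A[rows, (m - 1) * m:] = -Xp[:, i:i + 1] * X
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(m, m)           # null vector, reshaped row-wise into H

    # sanity check: recover a homography of P^2 from noiseless correspondences
    rng = np.random.default_rng(0)
    H_true = np.eye(3) + 0.1 * rng.normal(size=(3, 3))
    X = np.column_stack([rng.normal(size=(20, 2)), np.ones(20)])
    Xp = X @ H_true.T
    H_est = dlt_homography(X, Xp)
    scale = H_true[0, 0] / H_est[0, 0]        # H is only recovered up to scale
    print(np.allclose(scale * H_est, H_true))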
(iii) Scaling Unbounded Point Sets
In mathematics, a moment is a quantitative measure of the shape of a set of
points. The zeroth moment is the total count (or mass) of the measurements,
the first moment is the mean, and the second moment centered about the mean
(the second central moment) is the variance of the distribution.
Let \(\mathbf{X} \in \mathbb{R}^{n \times m}\) denote a data matrix
representing a set of \(n\) observations measuring \(m\) variables, where
\(\left\{ \mathbf{x}_i \in \mathbb{R}^m \right\}_{i = 1}^n\) is a set of
independent and identically distributed real-valued random vectors.
The centroid (a.k.a. first moment, sample mean) over the random vectors is
defined as \(\overline{\mathbf{x}} = n^{-1} \sum_i \mathbf{x}_i\).
Isotropic Scaling
The goal of isotropic scaling is to compute a similarity transformation
\[\begin{split}\mathrm{T} =
\begin{bmatrix}
s & 0 & t_x\\
0 & s & t_y\\
0 & 0 & 1
\end{bmatrix}\end{split}\]
such that
\[\begin{split}\mathrm{T} \overline{\mathbf{x}} =
\begin{bmatrix}
s & 0 & t_x\\
0 & s & t_y\\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} \overline{x}\\ \overline{y}\\ 1 \end{bmatrix} =
\begin{bmatrix}
s \overline{x} + t_x\\
s \overline{y} + t_y\\
1
\end{bmatrix} =
(0, 0, 1)^\top =
\mathbf{e}_3\end{split}\]
and
\[\DeclareMathOperator{\dist}{dist}
n^{-1} \sum_i \dist\left( \mathrm{T} \mathbf{x}_i, \mathbf{e}_3 \right) =
n^{-1} \sum_i \sqrt{(s x_i + t_x)^2 + (s y_i + t_y)^2} =
\sqrt{2}.\]
By inspection, assigning \(t_x = -s \overline{x}\) and
\(t_y = -s \overline{y}\) accomplishes the former objective and restricts
the latter criterion to
\[\begin{split}n^{-1} \sum_i \sqrt{
(s x_i - s \overline{x})^2 + (s y_i - s \overline{y})^2
} &= \sqrt{2}\\
s &= \frac{n \sqrt{2}}{
\sum_i \sqrt{(x_i - \overline{x})^2 + (y_i - \overline{y})^2}
}.\end{split}\]
The normalized DLT can be understood as
\[\begin{split}\mathrm{T}' \mathbf{x}'_i &= \tilde{\mathrm{H}} \mathrm{T} \mathbf{x}_i\\
\mathbf{x}'_i &= \mathrm{H} \mathbf{x}_i\end{split}\]
where \(\mathrm{H} = {\mathrm{T}'}^{-1} \tilde{\mathrm{H}} \mathrm{T}\).
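A minimal numpy sketch of the isotropic normalization (function name mine): compute \(\mathrm{T}\) from the centroid and the mean distance to it, then, after running the DLT on the normalized correspondences, denormalize via \(\mathrm{H} = {\mathrm{T}'}^{-1} \tilde{\mathrm{H}} \mathrm{T}\):

    import numpy as np

    def isotropic_scaling(points):
        """Similarity T sending the centroid of 2D points to the origin and
        making the mean distance from the origin equal sqrt(2)."""
        centroid = points.mean(axis=0)
        mean_dist = np.linalg.norm(points - centroid, axis=1).mean()
        s = np.sqrt(2) / mean_dist
        return np.array([[s, 0.0, -s * centroid[0]],
                         [0.0, s, -s * centroid[1]],
                         [0.0, 0.0, 1.0]])

    # check the two criteria on a small point set
    pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
    T = isotropic_scaling(pts)
    normed = (T @ np.column_stack([pts, np.ones(len(pts))]).T).T
    print(normed[:, :2].mean(axis=0))                    # ~ (0, 0)
    print(np.linalg.norm(normed[:, :2], axis=1).mean())  # ~ sqrt(2) = 1.414...
    # after estimating H_tilde from normalized points: H = inv(T_prime) @ H_tilde @ T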
Non-Isotropic Scaling
The goal of non-isotropic scaling (a.k.a. decorrelating and whitening) is to
produce a normalized point set with zero mean and identity covariance. Both
moments are unknown and have to be estimated.
Recall that the covariance matrix of two random vectors is defined as
\[\begin{split}\DeclareMathOperator{\E}{\mathbb{E}}
\DeclareMathOperator{\Cov}{\mathrm{Cov}}
\Cov(X, Y)
&= \E\left[ \left( X - \E[X] \right) \left( Y - \E[Y] \right)^\top \right]\\
&= \E\left[ X Y^\top \right] - \E\left[ X \E[Y]^\top \right] -
\E\left[ \E[X] Y^\top \right] + \E\left[ \E[X] \E[Y]^\top \right]\\
&= \E\left[ X Y^\top \right] - 2 \E[X] \E[Y]^\top + \E[X] \E[Y]^\top\\
&= \E\left[ X Y^\top \right] - \E[X] \E[Y]^\top.\end{split}\]
A special case of the covariance matrix is the correlation matrix
\[\DeclareMathOperator{\diag}{\mathrm{diag}}
\DeclareMathOperator{\Corr}{\mathrm{Corr}}
\DeclareMathOperator{\sd}{\mathrm{sd}}
\Corr(X, Y) =
\sd(X)^{-1} \Cov(X, Y) \sd(Y)^{-1}\]
where
\(\sd(\cdot) \triangleq \diag\left( \Cov(\cdot, \cdot) \right)^{1 / 2}\)
represents the standard deviation.
The covariance matrix of a random vector with itself is called the
variance-covariance matrix:
\[\begin{split}\DeclareMathOperator*{\Var}{\mathrm{Var}}
\Var(X) =
\Cov(X, X) =
\E\left[ X X^\top \right] - \E[X] \E[X]^\top =
\begin{bmatrix}
\Var(X_1) & \Cov(X_1, X_2) & \cdots & \Cov(X_1, X_m)\\
\Cov(X_2, X_1) & \Var(X_2) & \cdots & \Cov(X_2, X_m)\\
\vdots & \vdots & \ddots & \vdots\\
\Cov(X_m, X_1) & \Cov(X_m, X_2) & \cdots & \Var(X_m)
\end{bmatrix}.\end{split}\]
The sample mean \(\overline{\mathbf{x}}\) is assumed to be an estimate of
the population mean such that
\(\E\left[ \mathbf{x}_i \right] = \overline{\mathbf{x}}\).
To estimate the population covariance, the sample mean must be made
independent of the samples used to compute it. Hence
\[\begin{split}\Var(\mathbf{X})
&= (n - 1)^{-1} \sum_i \Var(\mathbf{x}_i)\\
&= (n - 1)^{-1} \sum_i \left(
\E\left[ \mathbf{x}_i \mathbf{x}_i^\top \right] -
\overline{\mathbf{x}} \overline{\mathbf{x}}^\top \right)\\
&= (n - 1)^{-1} \left(
\E\left[ \sum_i \mathbf{x}_i \mathbf{x}_i^\top \right] -
n \overline{\mathbf{x}} \overline{\mathbf{x}}^\top
\right)
& \quad & \E[X + Y] = \E[X] + \E[Y]\\
&= (n - 1)^{-1} \left(
\E\left[ \mathbf{X}^\top \mathbf{X} \right] -
n \overline{\mathbf{x}} \overline{\mathbf{x}}^\top
\right)
& \quad &
\sum_i \mathbf{x}_i \mathbf{x}_i^\top =
\begin{bmatrix} \mathbf{x}_1 & \cdots & \mathbf{x}_n \end{bmatrix}
\begin{bmatrix}
\mathbf{x}_1^\top\\ \vdots\\ \mathbf{x}_n^\top
\end{bmatrix} = \mathbf{X}^\top \mathbf{X}\\
&= (n - 1)^{-1} \E\left[
\mathbf{X}^\top \mathbf{X} -
n \overline{\mathbf{x}} \overline{\mathbf{x}}^\top
\right]\end{split}\]
reserves one of the observations (“-1”) to account for the loss of a degree of
freedom (given the sample mean, only \(n - 1\) observations are needed to
reconstruct the samples). Occasionally the quantity
\(\mathbf{X}^\top \mathbf{X}\) is referred to as the uncorrected sum of
squares and cross products (SSCP) matrix. Likewise, the corrected SSCP
(co-scatter) matrix is defined as
\(\mathbf{X}^\top \mathbf{X} -
n \overline{\mathbf{x}} \overline{\mathbf{x}}^\top\).
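A quick numeric check (synthetic data of my own) that the corrected SSCP matrix divided by \(n - 1\) agrees with the usual sample covariance:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))            # n = 100 observations of m = 3 variables
    n = X.shape[0]
    xbar = X.mean(axis=0)

    sscp_corrected = X.T @ X - n * np.outer(xbar, xbar)  # corrected SSCP (co-scatter)
    print(np.allclose(sscp_corrected / (n - 1), np.cov(X, rowvar=False)))  # True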
While the preceding formulas can be used to achieve non-isotropic scaling, a
simpler approach is to realize that the SSCP matrix is always positive
semi-definite. Since \(m < n\) in this scenario and the observations are
assumed to be in general position,
\[\DeclareMathOperator*{\rank}{\mathrm{rank}}
\rank\left( \mathbf{X}^\top \mathbf{X} \right) =
\rank(\mathbf{X}) =
\min(m, n) = m\]
implies \(\mathbf{X}^\top \mathbf{X}\) is full rank (nonsingular) and thus
positive definite. The desired non-isotropic transformation is
\[\begin{split}\DeclareMathOperator*{\chol}{\mathrm{cholesky}}
\mathrm{T} =
\mathbf{L}^{-1}
\begin{bmatrix}
1 & 0 & -\overline{x}\\
0 & 1 & -\overline{y}\\
0 & 0 & 1
\end{bmatrix}
\quad \text{with} \quad
\mathbf{L} \mathbf{L}^\top =
\chol\left(
\mathbf{X}^\top \mathbf{X} -
n \overline{\mathbf{x}} \overline{\mathbf{x}}^\top
\right).\end{split}\]
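A sketch of the non-isotropic scaling for 2D points (my reading of the construction above: the Cholesky factor of the 2 x 2 corrected SSCP of the inhomogeneous coordinates is embedded in a 3 x 3 matrix with a unit last diagonal entry). The transformed points then have zero mean and identity scatter; taking \(\mathbf{L}\) as the Cholesky factor of the sample covariance (the corrected SSCP divided by \(n - 1\)) would instead yield identity sample covariance:

    import numpy as np

    def nonisotropic_scaling(points):
        """T = L^{-1} @ translation for an (n, 2) array of inhomogeneous points."""
        n = len(points)
        xbar = points.mean(axis=0)
        sscp = points.T @ points - n * np.outer(xbar, xbar)  # corrected SSCP (2x2)
        L = np.eye(3)
        L[:2, :2] = np.linalg.cholesky(sscp)                 # L2 @ L2.T = sscp
        translate = np.array([[1.0, 0.0, -xbar[0]],
                              [0.0, 1.0, -xbar[1]],
                              [0.0, 0.0, 1.0]])
        return np.linalg.inv(L) @ translate

    # the transformed points are centered and decorrelated
    rng = np.random.default_rng(2)
    pts = rng.normal(size=(50, 2)) @ np.array([[2.0, 0.7], [0.0, 0.5]])
    T = nonisotropic_scaling(pts)
    Z = (T @ np.column_stack([pts, np.ones(len(pts))]).T).T[:, :2]
    print(np.allclose(Z.mean(axis=0), 0.0, atol=1e-9),
          np.allclose(Z.T @ Z, np.eye(2), atol=1e-9))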
Scaling with Points Near Infinity
Given a set of points \(\mathbf{x}_i = (x_i, y_i, w_i)^\top\), the goal is
to compute an affine transformation
\[\begin{split}\mathrm{T}_0 =
\begin{bmatrix}
1 & 0 & 0 & t_x\\
0 & 1 & 0 & t_y\\
0 & 0 & c & 0
\end{bmatrix}\end{split}\]
such that
\[\sum_i (x_i + t_x) = \sum_i (y_i + t_y) = 0; \quad
\sum_i \left[ (x_i + t_x)^2 + (y_i + t_y)^2 \right] = 2 \sum_i (c w_i)^2; \quad
(x_i + t_x)^2 + (y_i + t_y)^2 + (c w_i)^2 = 1 \;\; \forall i.\]
By inspection, assigning \(t_x = -\overline{x}\) and
\(t_y = -\overline{y}\) satisfies the first condition, and defining
\[c^2 = \frac{\sum_i \left[ (x_i + t_x)^2 + (y_i + t_y)^2 \right]}{2 \sum_i w_i^2}\]
fulfills the second criterion. To meet the last specification, one should not
directly normalize the result of
\(\mathrm{T}_0 \begin{bmatrix} \mathbf{x}_i\\ 1 \end{bmatrix}\) because that
will invalidate the previous conditions.
Notice that summing the last condition over all points yields
\[\sum_i \left[ (x_i + t_x)^2 + (y_i + t_y)^2 + (c w_i)^2 \right] =
\sum_i 1 = n.\]
While this aggregate normalization is not equivalent to the original
per-point condition, the resulting points form an approximately symmetric,
circular point cloud of radius one. The desired transformation is
\[\mathrm{T} = \mathbf{L}^{-1} \mathrm{T}_0
\quad \text{with} \quad
\mathbf{L} \mathbf{L}^\top = \chol\left( \mathbf{Y}^\top \mathbf{Y} \right)\]
where \(\mathbf{Y}\) represents \(\mathbf{X}\) already transformed to
satisfy the first two conditions.
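A hedged sketch of this normalization (function name and test points mine; the check only verifies that the transformed scatter \(\mathbf{Z}^\top \mathbf{Z}\) becomes the identity, which is what the Cholesky step produces):

    import numpy as np

    def scaling_near_infinity(points_h):
        """points_h is (n, 3) with rows (x_i, y_i, w_i).  Returns the 3x4
        transform T = L^{-1} @ T0 described above, applied to (x, y, w, 1)."""
        n = len(points_h)
        x, y, w = points_h[:, 0], points_h[:, 1], points_h[:, 2]
        tx, ty = -x.mean(), -y.mean()
        c = np.sqrt(np.sum((x + tx) ** 2 + (y + ty) ** 2) / (2.0 * np.sum(w ** 2)))
        T0 = np.array([[1.0, 0.0, 0.0, tx],
                       [0.0, 1.0, 0.0, ty],
                       [0.0, 0.0, c, 0.0]])
        Y = (T0 @ np.column_stack([points_h, np.ones(n)]).T).T  # first two conditions hold
        L = np.linalg.cholesky(Y.T @ Y)
        return np.linalg.inv(L) @ T0

    # one point is (nearly) at infinity, i.e. w ~ 0
    pts = np.array([[0.0, 0.0, 1.0], [1.0, 2.0, 1.0], [3.0, 1.0, 1.0], [5.0, 5.0, 1e-8]])
    T = scaling_near_infinity(pts)
    Z = (T @ np.column_stack([pts, np.ones(len(pts))]).T).T
    print(np.allclose(Z.T @ Z, np.eye(3)))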
(vi) Sampson Error with Non-Isotropic Error Distributions
Assuming (4.10) still holds and that the point
\(\mathbf{X} = (x, y, x', y')^\top\) is
measured with covariance matrix \(\Sigma_{\mathbf{X}}\), the minimization
problem becomes
\[\begin{split}\min_{\boldsymbol{\delta}_{\mathbf{X}}} \quad
\left\Vert
\boldsymbol{\delta}_{\mathbf{X}}
\right\Vert^2_{\Sigma_{\mathbf{X}}} &\\
\text{subject to} \quad
\mathtt{J} \boldsymbol{\delta}_{\mathbf{X}} &= -\boldsymbol{\epsilon}.\end{split}\]
The corresponding Lagrangian is
\[\mathcal{L} =
\boldsymbol{\delta}_{\mathbf{X}}^\top
\Sigma_{\mathbf{X}}^{-1}
\boldsymbol{\delta}_{\mathbf{X}} -
2 \boldsymbol{\lambda}^\top \left(
\mathtt{J} \boldsymbol{\delta}_{\mathbf{X}} + \boldsymbol{\epsilon}
\right).\]
The extrema of \(\mathcal{L}\) must satisfy
\[\begin{split}\boldsymbol{0}
&= \frac{\partial \mathcal{L}}{\partial \boldsymbol{\delta}_{\mathbf{X}}}\\
&= 2 \Sigma_{\mathbf{X}}^{-1} \boldsymbol{\delta}_{\mathbf{X}} -
2 \mathtt{J}^\top \boldsymbol{\lambda}
& \quad & \text{covariance matrix is always symmetric}\\
\boldsymbol{\delta}_{\mathbf{X}}
&= \Sigma_{\mathbf{X}} \mathtt{J}^\top \boldsymbol{\lambda}\end{split}\]
and
\[\begin{split}\boldsymbol{0}
&= \frac{\partial \mathcal{L}}{\partial \boldsymbol{\lambda}}\\
&= \mathtt{J} \boldsymbol{\delta}_{\mathbf{X}} + \boldsymbol{\epsilon}\\
\mathtt{J} \Sigma_{\mathbf{X}} \mathtt{J}^\top \boldsymbol{\lambda}
&= -\boldsymbol{\epsilon}\\
\boldsymbol{\lambda}
&= -\left(
\mathtt{J} \Sigma_{\mathbf{X}} \mathtt{J}^\top
\right)^{-1} \boldsymbol{\epsilon}.\end{split}\]
Therefore,
\[\boldsymbol{\delta}_{\mathbf{X}} =
-\Sigma_{\mathbf{X}} \mathtt{J}^\top \left(
\mathtt{J} \Sigma_{\mathbf{X}} \mathtt{J}^\top
\right)^{-1} \boldsymbol{\epsilon}\]
and the Sampson error is
\[\begin{split}\left\Vert
\boldsymbol{\delta}_{\mathbf{X}}
\right\Vert^2_{\Sigma_{\mathbf{X}}}
&= \boldsymbol{\delta}_{\mathbf{X}}^\top
\Sigma_{\mathbf{X}}^{-1}
\boldsymbol{\delta}_{\mathbf{X}}\\
&= \boldsymbol{\epsilon}^\top
\left(
\mathtt{J} \Sigma_{\mathbf{X}} \mathtt{J}^\top
\right)^{-\top}
\mathtt{J} \Sigma_{\mathbf{X}}^\top \mathtt{J}^\top
\left(
\mathtt{J} \Sigma_{\mathbf{X}} \mathtt{J}^\top
\right)^{-1} \boldsymbol{\epsilon}\\
&= \boldsymbol{\epsilon}^\top
\left(
\mathtt{J} \Sigma_{\mathbf{X}} \mathtt{J}^\top
\right)^{-1}
\boldsymbol{\epsilon}
& \quad & \left( \mathbf{A}^\top \right)^{-1} =
\left( \mathbf{A}^{-1} \right)^\top.\end{split}\]
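A numpy sketch for a single correspondence under a 3 x 3 homography (function name mine; the residual and Jacobian follow (4.3) directly with \(w = w' = 1\), without the row swap used later in the text):

    import numpy as np

    def sampson_error_cov(H, x, xp, Sigma):
        """Sampson correction delta and error for x <-> xp (inhomogeneous
        2-vectors) with 4x4 covariance Sigma on X = (x, y, x', y')."""
        xh = np.array([x[0], x[1], 1.0])
        h1, h2, h3 = H
        eps = np.array([-xh @ h2 + xp[1] * (xh @ h3),     # epsilon = A(X) h
                         xh @ h1 - xp[0] * (xh @ h3)])
        J = np.array([[-h2[0] + xp[1] * h3[0], -h2[1] + xp[1] * h3[1], 0.0, xh @ h3],
                      [ h1[0] - xp[0] * h3[0],  h1[1] - xp[0] * h3[1], -(xh @ h3), 0.0]])
        S = J @ Sigma @ J.T
        lam = np.linalg.solve(S, eps)         # (J Sigma J^T)^{-1} epsilon
        delta = -Sigma @ J.T @ lam            # first-order correction to X
        return delta, eps @ lam               # error = eps^T (J Sigma J^T)^{-1} eps

    # with Sigma = I this reduces to the usual (isotropic) Sampson error
    H = np.array([[1.1, 0.0, 0.5], [0.1, 0.9, -0.2], [0.001, 0.002, 1.0]])
    delta, err = sampson_error_cov(H, np.array([1.0, 2.0]), np.array([1.6, 1.7]),
                                   np.diag([1.0, 1.0, 4.0, 4.0]))
    print(delta, err)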
(vii) Sampson Error Programming Hint
Recall that
\[\begin{split}\newcommand{\variety}[2]{\operatorname{\mathcal{#1}}_\mathtt{#2}}
\variety{C}{H}(\mathbf{X})
&= \mathtt{A}(\mathbf{X}) \mathbf{h}
& \quad & \text{(4.10)}\\
&= \begin{bmatrix}
\boldsymbol{0}^\top & -\mathbf{x}^\top & y' \mathbf{x}^\top\\
\mathbf{x}^\top & \boldsymbol{0}^\top & -x' \mathbf{x}^\top
\end{bmatrix}
\begin{bmatrix} \mathbf{h}_1\\ \mathbf{h}_2\\ \mathbf{h}_3 \end{bmatrix}
& \quad & \text{(4.3)}.\end{split}\]
The partial derivative matrix may be computed as
\[\frac{\partial \variety{C}{H}(\mathbf{X})}{\partial \mathbf{X}} =
\begin{bmatrix}
\frac{\partial \variety{C}{H}(\mathbf{X})}{\partial X_1} &
\cdots &
\frac{\partial \variety{C}{H}(\mathbf{X})}{\partial X_4}
\end{bmatrix} =
\mathtt{J}\]
where \(\mathbf{E}_i\) denotes the \(i^\text{th}\) standard basis vector of
\(\mathbb{R}^4\) and
\[\frac{\partial \variety{C}{H}(\mathbf{X})}{\partial X_i} =
\variety{C}{H}\left( \mathbf{X} + \mathbf{E}_i \right) -
\variety{C}{H}(\mathbf{X})\]
because w.l.o.g.
\[\begin{split}\frac{\partial \variety{C}{H}(\mathbf{X})}{\partial X_1}
&= \frac{\partial}{\partial x}
\begin{bmatrix}
0 & 0 & 0 & -x & -y & -1 & y' x & y' y & y'\\
x & y & 1 & 0 & 0 & 0 & -x' x & -x' y & -x'
\end{bmatrix}
\mathbf{h}\\
&= \begin{bmatrix}
0 & 0 & 0 & -1 & 0 & 0 & y' & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & -x' & 0 & 0
\end{bmatrix}
\mathbf{h}\\
&= \left(
\begin{bmatrix}
0 & 0 & 0 & -(x + 1) & -y & -1 & y' (x + 1) & y' y & y'\\
x + 1 & y & 1 & 0 & 0 & 0 & -x' (x + 1) & -x' y & -x'
\end{bmatrix} -
\begin{bmatrix}
0 & 0 & 0 & -x & -y & -1 & y' x & y' y & y'\\
x & y & 1 & 0 & 0 & 0 & -x' x & -x' y & -x'
\end{bmatrix}
\right)
\mathbf{h}\\
&= \variety{C}{H}\left( \mathbf{X} + \mathbf{E}_1 \right) -
\variety{C}{H}(\mathbf{X}).\end{split}\]
Consider matrices \(A \in \mathbb{R}^{m \times n}\) and
\(B \in \mathbb{R}^{n \times p}\), where \(a_i \in \mathbb{R}^m\) denotes the
\(i^\text{th}\) column of \(A\) and \(b_i^\top \in \mathbb{R}^{1 \times p}\)
the \(i^\text{th}\) row of \(B\).
\[\begin{split}\newcommand{\textemdash}{\unicode{x2014}}
AB =
\begin{bmatrix}
\vert & \vert & & \vert\\
a_1 & a_2 & \cdots & a_n\\
\vert & \vert & & \vert
\end{bmatrix}
\begin{bmatrix}
\textemdash & b_1^\top & \textemdash\\
\textemdash & b_2^\top & \textemdash\\
& \vdots & \\
\textemdash & b_n^\top & \textemdash\\
\end{bmatrix} =
\sum_{i = 1}^n a_i b_i^\top\end{split}\]
Observe that
\[\begin{split}\mathtt{J} \mathtt{J}^\top
&= \sum_i
\frac{\partial \variety{C}{H}}{\partial X_i}
\frac{\partial \variety{C}{H}}{\partial X_i}^\top\\
&= \sum_i
\left[
\variety{C}{H}\left( \mathbf{X} + \mathbf{E}_i \right) -
\variety{C}{H}(\mathbf{X})
\right]
\left[
\variety{C}{H}\left( \mathbf{X} + \mathbf{E}_i \right) -
\variety{C}{H}(\mathbf{X})
\right]^\top.\end{split}\]
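A small numpy illustration of the hint (my own construction, assuming \(w = w' = 1\)): the columns of \(\mathtt{J}\) are exact forward differences because each coordinate of \(\mathbf{X}\) enters \(\mathcal{C}_{\mathtt{H}}\) linearly, and \(\mathtt{J} \mathtt{J}^\top\) is the sum of outer products of those columns:

    import numpy as np

    def A_of_X(X):
        """A(X) from (4.3) for one correspondence X = (x, y, x', y'), w = w' = 1."""
        x, y, xp, yp = X
        xh = np.array([x, y, 1.0])
        zero = np.zeros(3)
        return np.vstack([np.concatenate([zero, -xh, yp * xh]),
                          np.concatenate([xh, zero, -xp * xh])])

    def C_H(X, h):
        return A_of_X(X) @ h

    def jacobian_by_difference(X, h):
        """Column i of J is C_H(X + E_i) - C_H(X); exact because each
        coordinate of X enters C_H linearly."""
        base = C_H(X, h)
        return np.column_stack([C_H(X + e, h) - base for e in np.eye(4)])

    rng = np.random.default_rng(3)
    h = rng.normal(size=9)
    X = np.array([0.3, -1.2, 0.8, 0.5])
    J = jacobian_by_difference(X, h)
    JJt = sum(np.outer(c, c) for c in J.T)   # J J^T as a sum of outer products
    print(np.allclose(JJt, J @ J.T))         # True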
(viii) Minimizing Geometric Error for Affine Transformations
Given a set of correspondences \((x_i, y_i) \leftrightarrow (x'_i, y'_i)\),
the goal is to find an affine transformation \(\mathtt{H}_{\mathrm{A}}\)
that minimizes the reprojection error (4.8):
\[\sum_i
d\left( \mathbf{x}_i, \hat{\mathbf{x}}_i \right)^2 +
d\left( \mathbf{x}'_i, \hat{\mathbf{x}}'_i \right)^2
\quad \text{subject to }
\hat{\mathbf{x}}'_i = \mathtt{H}_{\mathrm{A}} \hat{\mathbf{x}}_i\, \forall i.\]
Since \(\mathbf{h}_3 = \mathbf{e}_3\) in (2.10), the minimization can assign
\(w_i = \hat{w}_i = w'_i = \hat{w}'_i =
\mathbf{h}_3 \cdot \hat{\mathbf{x}}_i = 1\),
resulting in
\[\sum_i
(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 +
(x'_i - \mathbf{h}_1 \cdot \hat{\mathbf{x}}_i)^2 +
(y'_i - \mathbf{h}_2 \cdot \hat{\mathbf{x}}_i)^2.\]
Recall that the reprojection error measured in both images is equivalent to
a geometric distance in \(\mathbb{R}^4\). This means finding the homography
\(\mathtt{H}_{\mathrm{A}}\) and the estimated points
\(\hat{\mathbf{x}}_i\) and \(\hat{\mathbf{x}}'_i\) that minimize (4.8)
is equivalent to finding the variety \(\mathcal{V}_{\mathtt{H}}\) and points
\(\hat{\mathbf{X}}_i\) on \(\mathcal{V}_{\mathtt{H}}\) that minimize
\[\begin{split}\sum_i \left\Vert \mathbf{X}_i - \hat{\mathbf{X}}_i \right\Vert^2 =
\sum_i
\left\Vert
\begin{bmatrix} x_i\\ y_i\\ x'_i\\ y'_i \end{bmatrix} -
\begin{bmatrix}
\hat{x}_i\\ \hat{y}_i\\ \hat{x}'_i\\ \hat{y}'_i
\end{bmatrix}
\right\Vert^2.\end{split}\]
The point \(\hat{\mathbf{X}}\) on \(\mathcal{V}_{\mathtt{H}}\) that lies
closest to a measured point \(\mathbf{X}\) is a point where the line
between \(\mathbf{X}\) and \(\hat{\mathbf{X}}\) is perpendicular to the
tangent plane to \(\mathcal{V}_{\mathtt{H}}\) at \(\hat{\mathbf{X}}\).
In other words, the squared perpendicular distance of each point to the
variety equals the sum of its squared reprojection errors:
\[d_\perp\left( \mathbf{X}_i, \mathcal{V}_{\mathtt{H}} \right)^2 =
d\left( \mathbf{x}_i, \hat{\mathbf{x}}_i \right)^2 +
d\left( \mathbf{x}'_i, \hat{\mathbf{x}}'_i \right)^2.\]
The foregoing are merely different interpretations of the same non-linear
estimation problem. The points \(\hat{\mathbf{X}}_i\) cannot be estimated
directly except via iteration because of the non-linear nature of the variety
\(\mathcal{V}_{\mathtt{H}}\).
The idea of the Sampson error function (4.10) is to estimate a first-order
approximation to the point \(\hat{\mathbf{X}}\), assuming the cost function
is well approximated linearly in the neighborhood of the estimated point.
The variety \(\mathcal{V}_{\mathtt{H}}\) for this problem must satisfy
(4.3), or
\[\mathcal{C}_{\mathtt{H}}(\mathbf{X}) =
\mathtt{A}(\mathbf{X}) \mathbf{h} =
\boldsymbol{0},\]
so that
\(\boldsymbol{\epsilon}_i = \mathcal{C}_{\mathtt{H}}(\mathbf{X}_i)\) and
\[\begin{split}\mathtt{J}_i
&= \frac{
\partial \mathcal{C}_{\mathtt{H}}(\mathbf{X}_i)
}{
\partial \mathbf{X}
}\\
&= \frac{\partial}{\partial \mathbf{X}}
\begin{bmatrix}
\boldsymbol{0}^\top & -w'_i \mathbf{x}_i^\top &
y'_i \mathbf{x}_i^\top\\
w'_i \mathbf{x}_i^\top & \boldsymbol{0}^\top &
-x'_i \mathbf{x}_i^\top
\end{bmatrix}
\begin{bmatrix}
\mathbf{h}_1\\ \mathbf{h}_2\\ \mathbf{h}_3
\end{bmatrix}
& \quad & \text{(4.3)}\\
&= \frac{\partial}{\partial \mathbf{X}}
\begin{bmatrix}
w'_i \mathbf{x}_i^\top \mathbf{h}_1 -
x'_i \mathbf{x}_i^\top \mathbf{h}_3\\
w'_i \mathbf{x}_i^\top \mathbf{h}_2 -
y'_i \mathbf{x}_i^\top \mathbf{h}_3
\end{bmatrix}
& \quad & \text{rows swapped and the former first row negated}\\
&= \begin{bmatrix}
w'_i h_{11} - x'_i h_{31} & w'_i h_{12} - x'_i h_{32} &
-\mathbf{x}_i^\top \mathbf{h}_3 & 0\\
w'_i h_{21} - y'_i h_{31} & w'_i h_{22} - y'_i h_{32} &
0 & -\mathbf{x}_i^\top \mathbf{h}_3
\end{bmatrix}\\
&= \begin{bmatrix}
w'_i h_{11} & w'_i h_{12} & -w_i & 0\\
w'_i h_{21} & w'_i h_{22} & 0 & -w_i
\end{bmatrix}
& \quad & \mathbf{h}_3 = \mathbf{e}_3\\
&= \left[
\begin{array}{c|c} \mathtt{H}_{2 \times 2} & -\mathtt{I} \end{array}
\right]
& \quad & w' = w = 1.\end{split}\]
One benefit of the Sampson error function is that it avoids computing
\(\hat{\mathbf{X}}\), and hence only the estimation of
\(\mathtt{H}_{\mathrm{A}}\) remains. The error over all the point
correspondences is defined as
\[\begin{split}\mathcal{D}_\perp
&= \sum_i \left\Vert \boldsymbol{\delta}_{\mathbf{X}_i} \right\Vert^2
& \quad & \text{(4.12)}\\
&= \sum_i
\boldsymbol{\epsilon}_i^\top
\left( \mathtt{J}_i \mathtt{J}_i^\top \right)^{-1}
\boldsymbol{\epsilon}_i
& \quad & \text{(4.13)}\\
&= \sum_i
\begin{bmatrix}
-\mathbf{x}_i^\top \mathbf{h}_2 + y'_i &
\mathbf{x}_i^\top \mathbf{h}_1 - x'_i
\end{bmatrix}
\left(
\mathtt{H}_{2 \times 2} \mathtt{H}_{2 \times 2}^\top + \mathtt{I}
\right)^{-1}
\begin{bmatrix}
-\mathbf{x}_i^\top \mathbf{h}_2 + y'_i\\
\mathbf{x}_i^\top \mathbf{h}_1 - x'_i
\end{bmatrix}.\end{split}\]
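A numpy sketch of \(\mathcal{D}_\perp\) for an affinity (function name and test data mine):

    import numpy as np

    def affine_sampson_error(H_A, X, Xp):
        """D_perp from (4.13) for a 3x3 affinity H_A and (n, 2) arrays of
        inhomogeneous correspondences X <-> Xp."""
        H2 = H_A[:2, :2]
        S_inv = np.linalg.inv(H2 @ H2.T + np.eye(2))  # (J J^T)^{-1}, same for every point
        Xh = np.column_stack([X, np.ones(len(X))])
        eps = np.column_stack([Xp[:, 1] - Xh @ H_A[1],   # -x_i^T h_2 + y'_i
                               Xh @ H_A[0] - Xp[:, 0]])  #  x_i^T h_1 - x'_i
        return np.sum((eps @ S_inv) * eps)

    rng = np.random.default_rng(4)
    H_A = np.array([[1.2, 0.3, 2.0], [-0.1, 0.8, -1.0], [0.0, 0.0, 1.0]])
    X = rng.normal(size=(30, 2))
    Xp = (np.column_stack([X, np.ones(30)]) @ H_A.T)[:, :2] + 0.01 * rng.normal(size=(30, 2))
    print(affine_sampson_error(H_A, X, Xp))   # close to zero since the noise is small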
(a)
Abstracting away the terms that are not related to \(h_{\cdot 3}\) yields
\[\begin{split}0 = \frac{\partial \mathcal{D}_\perp}{\partial h_{23}}
&= \frac{\partial}{\partial h_{23}}
\sum_i \sum_j \sum_k a_{ijk} \epsilon_{ij} \epsilon_{ik}
& \quad & a_{ijk} =
\left( \mathtt{J}_i \mathtt{J}_i^\top \right)^{-1}_{jk}\\
&= \sum_i
a_{i11} \frac{\partial \epsilon_{i1}^2}{\partial h_{23}} +
2 a_{i21} \epsilon_{i2} \frac{\partial \epsilon_{i1}}{\partial h_{23}}
& \quad & a_{i12} = a_{i21}\\
&= \sum_i
2 a_{i11}
\left( -\mathbf{x}_i^\top \mathbf{h}_2 + y'_i \right) (-w_i) +
2 a_{i21} \left( \mathbf{x}_i^\top \mathbf{h}_1 - x'_i \right) (-w_i)\\
&= (a_{11} h_{21} - a_{21} h_{11}) \sum_i x_i +
(a_{11} h_{22} - a_{21} h_{12}) \sum_i y_i +
a_{11} \sum_i h_{23} -
a_{11} \sum_i y'_i -
a_{21} \sum_i h_{13} +
a_{21} \sum_i x'_i
& \quad & w_i = 1 \text{ and } a_{ijk} = a_{jk} \, \forall i\\
a_{21} h_{13} &= a_{11} h_{23} +
(a_{11} h_{21} - a_{21} h_{11}) \bar{x} +
(a_{11} h_{22} - a_{21} h_{12}) \bar{y} +
a_{21} \bar{x}' - a_{11} \bar{y}'\end{split}\]
and
\[\begin{split}0 = \frac{\partial \mathcal{D}_\perp}{\partial h_{13}}
&= \sum_i
a_{i22} \frac{\partial \epsilon_{i2}^2}{\partial h_{13}} +
2 a_{i12} \epsilon_{i1} \frac{\partial \epsilon_{i2}}{\partial h_{13}}
& \quad & a_{i12} = a_{i21}\\
&= \sum_i
2 a_{i22}
\left( \mathbf{x}_i^\top \mathbf{h}_1 - x'_i \right) (w_i) +
2 a_{i12} \left( -\mathbf{x}_i^\top \mathbf{h}_2 + y'_i \right) (w_i)\\
&= (a_{22} h_{11} - a_{12} h_{21}) \sum_i x_i +
(a_{22} h_{12} - a_{12} h_{22}) \sum_i y_i +
a_{22} \sum_i h_{13} -
a_{22} \sum_i x'_i -
a_{12} \sum_i h_{23} +
a_{12} \sum_i y'_i
& \quad & w_i = 1 \text{ and } a_{jk} = a_{ijk}\, \forall i\\
a_{12} h_{23} &= a_{22} h_{13} +
(a_{22} h_{11} - a_{12} h_{21}) \bar{x} +
(a_{22} h_{12} - a_{12} h_{22}) \bar{y} -
a_{22} \bar{x}' + a_{12} \bar{y}'.\end{split}\]
By inspection, translating the points so that the centroids
\(\bar{\mathbf{x}}\) and \(\bar{\mathbf{x}}'\) are at the origin
restricts \(h_{\cdot 3}\) to
\[\begin{split}\boldsymbol{0}
&= \begin{bmatrix} a_{22} & -a_{12}\\ -a_{21} & a_{11} \end{bmatrix}
\begin{bmatrix} h_{13}\\ h_{23} \end{bmatrix}\\
&= \frac{1}{a_{11} a_{22} - a_{12} a_{21}}
\begin{bmatrix} a_{22} & -a_{12}\\ -a_{21} & a_{11} \end{bmatrix}
\begin{bmatrix} h_{13}\\ h_{23} \end{bmatrix}\\
&= \mathtt{J} \mathtt{J}^\top
\begin{bmatrix} h_{13}\\ h_{23} \end{bmatrix}
& \quad & \mathtt{J} \mathtt{J}^\top =
\mathtt{J}_i \mathtt{J}_i^\top \, \forall i.\end{split}\]
For nondegenerate affinities, \(\mathtt{J} \mathtt{J}^\top\) is
nonsingular, so the null space contains only the zero vector and hence
\(h_{13} = h_{23} = 0\).
(b)
A variety is the simultaneous zero-set of one or more multivariate polynomials
defined in \(\mathbb{R}^N\).
For a given affinity \(\mathtt{H}_{\mathrm{A}}\) (4.9), the image
correspondences \(\mathbf{x} \leftrightarrow \mathbf{x}'\) that satisfy
\[\begin{split}\boldsymbol{0}
&= \mathcal{C}_\mathtt{H}\left( \mathbf{X} \right)
& \quad & \text{(4.10) and (4.11)}\\
&= \begin{bmatrix}
\boldsymbol{0}^\top & -w' \mathbf{x}^\top & y' \mathbf{x}^\top\\
w' \mathbf{x}^\top & \boldsymbol{0}^\top & -x' \mathbf{x}^\top
\end{bmatrix}
\begin{bmatrix}
\mathbf{h}_1\\ \mathbf{h}_2\\ \mathbf{h}_3
\end{bmatrix}
& \quad & \text{(4.3)}\\
&= \begin{bmatrix}
y' - h_{21} x - h_{22} y - h_{23}\\
h_{11} x + h_{12} y + h_{13} - x'
\end{bmatrix}
& \quad & w' = w = 1 \text{ and } \mathbf{h}_3 = \mathbf{e}_3\end{split}\]
define an algebraic variety \(\mathcal{V}_\mathtt{H} \subset \mathbb{R}^4\)
which is the intersection of two hyperplanes. Each hyperplane is in
\(\mathbb{R}^4\) and represents a linear constraint.
Recall that the intersection of two hyperplanes in \(\mathbb{R}^n\) defines
a linear subspace of dimension \(n - 2\), which could be interpreted as the
space of free variables. By definition of codimension,
\(\mathcal{V}_\mathtt{H}\) is a codimension-2 subspace of
\(\mathbb{R}^4\).
To prove the sufficient condition, suppose \(\mathbf{X}\) lies on
\(\mathcal{V}_\mathtt{H}\) and
\(\mathtt{H}_{2 \times 2} \mathbf{x} - \mathbf{x}' \neq \boldsymbol{0}\).
\[\begin{split}\boldsymbol{0}
&= \mathcal{C}_\mathtt{H}\left( \mathbf{X} \right)\\
&= \begin{bmatrix}
h_{11} x + h_{12} y + h_{13} - x'\\
h_{21} x + h_{22} y + h_{23} - y'
\end{bmatrix}
& \quad & \text{negated and swapped first row of (4.3)}\\
&= \mathtt{H}_{2 \times 2} \mathbf{x} - \mathbf{x}' +
\begin{bmatrix} h_{13}\\ h_{23} \end{bmatrix}\\
&= \mathtt{H}_{2 \times 2} \mathbf{x} - \mathbf{x}'\end{split}\]
holds according to (a), but this contradicts the initial assumption that
\(\mathtt{H}_{2 \times 2} \mathbf{x} - \mathbf{x}' \neq \boldsymbol{0}\).
To prove the necessary condition, suppose
\(\mathtt{H}_{2 \times 2} \mathbf{x} - \mathbf{x}' = \boldsymbol{0}\) and
\(\mathbf{X}\) does not lie on \(\mathcal{V}_\mathtt{H}\).
\[\begin{split}\mathcal{C}_\mathtt{H}\left( \mathbf{X} \right)
&= \mathtt{H}_{2 \times 2} \mathbf{x} - \mathbf{x}' +
\begin{bmatrix} h_{13}\\ h_{23} \end{bmatrix}
& \quad & \text{negated and swapped first row of (4.3)}\\
&= \begin{bmatrix} h_{13}\\ h_{23} \end{bmatrix}\\
&= \boldsymbol{0}\end{split}\]
holds according to (a), but this contradicts the initial assumption that
\(\mathbf{X}\) does not lie on \(\mathcal{V}_\mathtt{H}\).
(c)
According to (b), any codimension-2 subspace may be expressed as
\(\left[
\begin{array}{c|c} \mathtt{H}_{2 \times 2} & -\mathtt{I} \end{array} \right]
\mathbf{X} = \boldsymbol{0}\).
Section 4.2.5 shows that the task of estimating the homography
\(\hat{\mathtt{H}}\) and points \(\hat{\mathbf{x}}_i\) and
\(\hat{\mathbf{x}}_i'\) that minimize (4.8) is equivalent to finding the
variety \(\mathcal{V}_\mathtt{H}\) that passes closest to the points
\(\mathbf{X}_i\).
Therefore, the estimation task is equivalent to finding the best-fitting
codimension-2 subspace.
(d)
The matrix \(\mathtt{M}\) with rows \(\mathbf{X}_i^\top\) has a rank
of \(4\). According to (c), the codimension of the best-fitting subspace is
\(2\). Here, best means minimizing the sum of the squares of the
perpendicular distances of the points to the subspace.
Since the (right) singular vectors \(\mathbf{V}_1\) and \(\mathbf{V}_2\)
correspond to the two largest singular values of \(\mathtt{M}\) and
form an orthonormal basis, Theorem 4.1 in [HK12]
guarantees that \(\mathcal{V}_\mathtt{H}\) spanned by
\(\mathbf{V}_1, \mathbf{V}_2\) is the best-fit \(2\)-dimensional
subspace for \(\mathtt{M}\).
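A minimal numeric check of this claim on synthetic data (my own construction): centered correspondences generated by a linear map plus noise; the two dominant right singular vectors of \(\mathtt{M}\) span a subspace on which the noiseless constraint \(\left[ \mathtt{H}_{2 \times 2} \mid -\mathtt{I} \right]\) is nearly zero:

    import numpy as np

    rng = np.random.default_rng(5)
    H2 = np.array([[1.5, 0.2], [-0.3, 0.9]])
    x = rng.normal(size=(100, 2))
    x -= x.mean(axis=0)                       # centroid at the origin, as in (a)
    xp = x @ H2.T + 0.01 * rng.normal(size=(100, 2))
    xp -= xp.mean(axis=0)
    M = np.column_stack([x, xp])              # rows are X_i^T; rank 4 with noise
    _, S, Vt = np.linalg.svd(M, full_matrices=False)
    V1, V2 = Vt[0], Vt[1]
    print(S)                                  # four singular values; the first two dominate
    print(np.linalg.norm(np.hstack([H2, -np.eye(2)]) @ np.column_stack([V1, V2])))  # small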
(e)
Since \(\mathcal{V}_{\mathtt{H}}\) is spanned by \(\mathbf{V}_1\) and
\(\mathbf{V}_2\), each point that lies on \(\mathcal{V}_{\mathtt{H}}\)
can be expressed as
\[\mathbf{X} =
\begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \end{bmatrix} \mathbf{t}\]
where \(\mathbf{t} \in \mathbb{R}^2\). By (b) and (c), each point must also
satisfy
\[\left[
\begin{array}{c|c} \mathtt{H}_{2 \times 2} & -\mathtt{I} \end{array}
\right] \mathbf{X} =
\left[
\begin{array}{c|c} \mathtt{H}_{2 \times 2} & -\mathtt{I} \end{array}
\right]
\begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \end{bmatrix} \mathbf{t} =
\boldsymbol{0}.\]
For non-trivial solutions where \(\mathbf{t} \neq \boldsymbol{0}\),
\[\begin{split}\left[
\begin{array}{c|c} \mathtt{H}_{2 \times 2} & -\mathtt{I} \end{array}
\right]
\begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \end{bmatrix}
&= \boldsymbol{0}\\
\mathtt{H}_{2 \times 2} \mathtt{B}
&= \mathtt{C}
& \quad & \begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \end{bmatrix} =
\begin{bmatrix} \mathtt{B}\\ \mathtt{C} \end{bmatrix}\\
\mathtt{H}_{2 \times 2} &= \mathtt{C} \mathtt{B}^{-1}.\end{split}\]
The desired affinity can be derived as
\[\begin{split}\mathbf{x}'_i + \mathbf{t}'
&= \mathtt{H}_{2 \times 2} \left( \mathbf{x}_i + \mathbf{t} \right)
& \quad & \text{(a)}\\
\mathbf{x}'_i
&= \mathtt{H}_{2 \times 2} \left( \mathbf{x}_i + \mathbf{t} \right) -
\mathbf{t}'\\
\begin{bmatrix} \mathbf{x}'_i\\ 1 \end{bmatrix}
&= \mathtt{H}_{\mathtt{A}}
\begin{bmatrix} \mathbf{x}_i\\ 1 \end{bmatrix}\end{split}\]
where
\[\begin{split}\mathtt{H}_{\mathtt{A}} =
\begin{bmatrix}
\mathtt{H}_{2 \times 2} &
\mathtt{H}_{2 \times 2} \mathbf{t} - \mathbf{t}'\\
\boldsymbol{0}^\top & 1
\end{bmatrix}\end{split}\]
and
\[\begin{split}\hat{\mathbf{X}}_i
&= \mathbf{X}_i + \boldsymbol{\delta}_{\mathbf{X}_i}\\
&= \begin{bmatrix}
\mathtt{B} \mathtt{B}^\top & \mathtt{B} \mathtt{C}^\top\\
\mathtt{C} \mathtt{B}^\top & \mathtt{C} \mathtt{C}^\top
\end{bmatrix}
\mathbf{X}_i
& \quad & \text{(e.1)}\\
&= \begin{bmatrix} \mathtt{B}\\ \mathtt{C} \end{bmatrix}
\begin{bmatrix} \mathtt{B}^\top & \mathtt{C}^\top \end{bmatrix}
\mathbf{X}_i\\
&= \begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \end{bmatrix}
\begin{bmatrix} \mathbf{V}_1^\top\\ \mathbf{V}_2^\top \end{bmatrix}
\mathbf{X}_i\\
&= \left( \sum_{j = 1}^2 \mathbf{V}_j \mathbf{V}_j^\top \right)
\mathbf{X}_i.\end{split}\]
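Putting (d) and (e) together, a sketch (function names mine) that recovers \(\mathtt{H}_{2 \times 2} = \mathtt{C} \mathtt{B}^{-1}\), the full affinity, and the projected points \(\hat{\mathbf{X}}_i\) from centered correspondences:

    import numpy as np

    def affine_from_centered(M):
        """M has rows X_i = (x_i, y_i, x'_i, y'_i) of centered correspondences.
        Returns H_{2x2} = C B^{-1} and X_hat = (V1 V1^T + V2 V2^T) X_i."""
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        V = Vt[:2].T                    # columns V1, V2
        B, C = V[:2], V[2:]             # top and bottom 2x2 blocks
        return C @ np.linalg.inv(B), M @ V @ V.T

    def build_affinity(H2, t, tp):
        """H_A for the original points, where t and tp are the translations
        that centered x and x' respectively."""
        H_A = np.eye(3)
        H_A[:2, :2] = H2
        H_A[:2, 2] = H2 @ t - tp
        return H_A

    # synthetic correspondences (same construction as the previous sketch)
    rng = np.random.default_rng(5)
    H2_true = np.array([[1.5, 0.2], [-0.3, 0.9]])
    x = rng.normal(size=(100, 2))
    xp = x @ H2_true.T + 0.01 * rng.normal(size=(100, 2))
    t, tp = -x.mean(axis=0), -xp.mean(axis=0)
    H2, X_hat = affine_from_centered(np.column_stack([x + t, xp + tp]))
    print(np.round(H2, 3))                    # close to H2_true
    print(np.round(build_affinity(H2, t, tp), 3))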
(e.1)
\[\begin{split}\boldsymbol{\delta}_{\mathbf{X}_i}
&= -\mathtt{J}^\top
\left( \mathtt{J} \mathtt{J}^\top \right)^{-1}
\boldsymbol{\epsilon}_i
& \quad & \text{(4.11)}\\
&= -\begin{bmatrix} \mathtt{H}_{2 \times 2}^\top\\ -\mathtt{I} \end{bmatrix}
\left(
\mathtt{I} + \mathtt{H}_{2 \times 2} \mathtt{H}_{2 \times 2}^\top
\right)^{-1}
\begin{bmatrix} \mathtt{H}_{2 \times 2} & -\mathtt{I} \end{bmatrix}
\mathbf{X}_i\\
&= -\begin{bmatrix}
\mathtt{H}_{2 \times 2}^\top (\bullet) \mathtt{H}_{2 \times 2} &
-\mathtt{H}_{2 \times 2}^\top (\bullet)\\
-(\bullet) \mathtt{H}_{2 \times 2} & (\bullet)
\end{bmatrix} \mathbf{X}_i
& \quad & \text{(e.2) and } \bullet =
\mathtt{I} - \mathtt{C} \mathtt{C}^\top\\
&= \left(
\begin{bmatrix}
\mathtt{B} \mathtt{B}^\top & \mathtt{B} \mathtt{C}^\top\\
\mathtt{C} \mathtt{B}^\top & \mathtt{C} \mathtt{C}^\top
\end{bmatrix} -
\mathtt{I}
\right) \mathbf{X}_i
& \quad & \text{(e.3), (e.4), (e.5), and (e.6)}\end{split}\]
(e.2)
\[\begin{split}\left(
\mathtt{I} + \mathtt{H}_{2 \times 2} \mathtt{H}_{2 \times 2}^\top
\right)^{-1}
&= \mathtt{I} -
\mathtt{H}_{2 \times 2} \left(
\mathtt{I} + \mathtt{H}_{2 \times 2}^\top \mathtt{H}_{2 \times 2}
\right)^{-1} \mathtt{H}_{2 \times 2}^\top
& \quad & \text{Woodbury matrix identity}\\
&= \mathtt{I} -
\mathtt{H}_{2 \times 2}
\mathtt{B} \mathtt{B}^\top \mathtt{H}_{2 \times 2}^\top
& \quad & \text{(e.3)}\\
&= \mathtt{I} - \mathtt{C} \mathtt{C}^\top
& \quad & \mathtt{H}_{2 \times 2} = \mathtt{C} \mathtt{B}^{-1}\end{split}\]
(e.3)
\[\begin{split}\mathtt{I}
&= \begin{bmatrix} \mathbf{V}_1^\top\\ \mathbf{V}_2^\top \end{bmatrix}
\begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \end{bmatrix}\\
&= \begin{bmatrix} \mathtt{B}^\top & \mathtt{C}^\top \end{bmatrix}
\begin{bmatrix} \mathtt{B}\\ \mathtt{C} \end{bmatrix}\\
&= \mathtt{B}^\top \mathtt{B} + \mathtt{C}^\top \mathtt{C}\\
\mathtt{B}^{-\top} \mathtt{B}^{-1}
&= \mathtt{I} +
\mathtt{B}^{-\top} \mathtt{C}^\top \mathtt{C} \mathtt{B}^{-1}\\
&= \mathtt{I} + \mathtt{H}_{2 \times 2}^\top \mathtt{H}_{2 \times 2}\end{split}\]
(e.4)
\[\begin{split}\mathtt{H}_{2 \times 2}^\top (\bullet) \mathtt{H}_{2 \times 2}
&= \mathtt{B}^{-\top} \mathtt{C}^\top
\left( \mathtt{I} - \mathtt{C} \mathtt{C}^\top \right)
\mathtt{C} \mathtt{B}^{-1}
& \quad & \text{(e.2)}\\
&= \mathtt{B}^{-\top} \mathtt{C}^\top \mathtt{C} \mathtt{B}^\top
& \quad & \mathtt{B}^{-\top} \mathtt{C}^\top \mathtt{C}
\mathtt{I} \mathtt{B}^{-1} =
\mathtt{B}^{-\top} \mathtt{C}^\top \mathtt{C}
\mathtt{B}^\top \mathtt{B} \mathtt{B}^{-1} +
\mathtt{B}^{-\top} \mathtt{C}^\top \mathtt{C}
\mathtt{C}^\top \mathtt{C} \mathtt{B}^{-1}\\
&= \mathtt{I} - \mathtt{B} \mathtt{B}^\top
& \quad & \mathtt{B}^{-\top} \mathtt{I} \mathtt{B}^\top =
\mathtt{B}^{-\top} \mathtt{B}^\top \mathtt{B} \mathtt{B}^\top +
\mathtt{B}^{-\top} \mathtt{C}^\top \mathtt{C} \mathtt{B}^\top\end{split}\]
(e.5)
\[\begin{split}-\mathtt{H}_{2 \times 2}^\top (\bullet)
&= -\mathtt{B}^{-\top} \mathtt{C}^\top
\left( \mathtt{I} - \mathtt{C} \mathtt{C}^\top \right)
& \quad & \text{(e.2)}\\
&= -\mathtt{B} \mathtt{C}^\top
& \quad & \mathtt{B}^{-\top} \mathtt{I} \mathtt{C}^{\top} =
\mathtt{B}^{-\top} \mathtt{B}^\top
\mathtt{B} \mathtt{C}^{\top} +
\mathtt{B}^{-\top} \mathtt{C}^\top
\mathtt{C} \mathtt{C}^{\top}\end{split}\]
(e.6)
\[\begin{split}-(\bullet) \mathtt{H}_{2 \times 2}
&= -\left( \mathtt{I} - \mathtt{C} \mathtt{C}^\top \right)
\mathtt{C} \mathtt{B}^{-1}
& \quad & \text{(e.2)}\\
&= -\mathtt{C} \mathtt{B}^\top
& \quad & \mathtt{C} \mathtt{I} \mathtt{B}^{-1} =
\mathtt{C} \mathtt{B}^\top \mathtt{B} \mathtt{B}^{-1} +
\mathtt{C} \mathtt{C}^\top \mathtt{C} \mathtt{B}^{-1}\end{split}\]
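A numeric spot-check of the block identities (e.2) through (e.6): take a fixed \(4 \times 2\) matrix with orthonormal columns (the test matrix is my choice), split it into \(\mathtt{B}\) (top) and \(\mathtt{C}\) (bottom), and set \(\mathtt{H}_{2 \times 2} = \mathtt{C} \mathtt{B}^{-1}\); all five checks print True:

    import numpy as np

    Q, _ = np.linalg.qr(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -1.0]]))
    B, C = Q[:2], Q[2:]                 # [V1 V2] = [B; C] has orthonormal columns
    H = C @ np.linalg.inv(B)
    I = np.eye(2)
    bullet = I - C @ C.T

    print(np.allclose(np.linalg.inv(I + H @ H.T), bullet))       # (e.2)
    print(np.allclose(B.T @ B + C.T @ C, I))                     # (e.3)
    print(np.allclose(H.T @ bullet @ H, I - B @ B.T))            # (e.4)
    print(np.allclose(-H.T @ bullet, -B @ C.T))                  # (e.5)
    print(np.allclose(-bullet @ H, -C @ B.T))                    # (e.6)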
(ix) Computing Homographies of \(\mathbb{P}^3\) from Line Correspondences
Invariants are measurements that remain fixed under collineations. The book
incorrectly references [Har94] instead of
[Har93] for the discussion of isotropy subgroups.
[Har93] considers invariants that can be derived from two
views of an object. The alternative is to consider constrained sets of points
(e.g. solids of revolution) because no invariants of arbitrary point sets in
three dimensions may be computed from a single image.
The invariants of lines in space cannot be computed from two views of lines
alone. Given two images of a line and two arbitrary cameras, there is always a
line in space that corresponds to the two images. Unless the epipolar geometry
of the two views is known, two images of an unknown line do not constrain the
cameras in any way.
Given four lines in \(\mathbb{P}^3\) in general position, there exist
exactly two transversal lines that meet all four of these lines. The cross
ratios of the points of intersection of the lines with each of the transversal
lines give two independent projective invariants of the set of four lines. One
algebraic way of defining the invariants of four lines, each represented by a
\(4 \times 2\) matrix \(\lambda_i\) whose columns are two points spanning the
line, is as a homogeneous vector
\[I(\lambda_1, \lambda_2, \lambda_3, \lambda_4) =
\left(
\left\vert \lambda_1 \lambda_2 \right\vert,
\left\vert \lambda_3 \lambda_4 \right\vert,
\left\vert \lambda_1 \lambda_3 \right\vert,
\left\vert \lambda_2 \lambda_4 \right\vert,
\left\vert \lambda_1 \lambda_4 \right\vert,
\left\vert \lambda_2 \lambda_3 \right\vert
\right)
\quad \text{where} \quad
\left\vert \lambda_i \lambda_j \right\vert =
\det\left( \begin{bmatrix} \lambda_i & \lambda_j \end{bmatrix} \right).\]
Two such computed invariants are deemed equal if they differ by a scalar factor.
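A short numpy sketch (function name mine; each line is represented by a \(4 \times 2\) matrix of two spanning points): the six determinants scale uniformly by \(\det(\mathtt{H})\) under a collineation of \(\mathbb{P}^3\), so the homogeneous vector is unchanged:

    import numpy as np

    def line_invariant(lines):
        """lines: four 4x2 matrices whose columns are points spanning each line.
        Returns (|l1 l2|, |l3 l4|, |l1 l3|, |l2 l4|, |l1 l4|, |l2 l3|)."""
        pairs = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]
        return np.array([np.linalg.det(np.hstack([lines[i], lines[j]]))
                         for i, j in pairs])

    rng = np.random.default_rng(7)
    lines = [rng.normal(size=(4, 2)) for _ in range(4)]
    H = rng.normal(size=(4, 4))                       # a collineation of P^3
    I1 = line_invariant(lines)
    I2 = line_invariant([H @ l for l in lines])
    print(np.allclose(I2, np.linalg.det(H) * I1))     # True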
One application of this formulation is Plücker line coordinates (3.13).
\(\left\vert \lambda_i \lambda_j \right\vert\) will vanish if and only if
the four points involved are coplanar, i.e., when the two lines intersect.
The proposed definition of the invariant avoids the (near-)vanishing denominator
problems of cross ratios. However, if three of the determinants vanish, then
the invariant is undefined. This occurs when
- three of the lines lie in a plane, or
- one of the lines meets the other three.
The configuration where one line meets two of the other lines is not degenerate,
but is not useful because two of the determinants vanish. Any two such
configurations cannot be distinguished and are equivalent under collineation.
A configuration of four lines in \(\mathbb{P}^3\) admits a one-parameter
isotropy subgroup of projectivities fixing the four lines. This
reduces the number of degrees of freedom of the projectivities of
\(\mathbb{P}^3\) acting on sets of four lines in space to 14, which explains
why there are two independent invariants (result 2.16).
There is no natural or canonical way to fix the coordinate system by specifying
line coordinates. Fixing a basis using lines leads to the use of at most
three lines, which have in total \(3 \times 4 = 12\) degrees of freedom.
The remaining degrees of freedom must be determined by the cameras or an
additional point.
[OZAAstrom04] observed that a well-defined parameterization of
structure and motion problems must satisfy
\[\boldsymbol{\pi}_i^\top \mathtt{H} \mathbf{X}_j = 0
\qquad
i = 1, 2; j = 1, 2\]
where \(\mathtt{H}\) transfers the line defined by the two points
\((\mathbf{X}_1, \mathbf{X}_2)\) to the line defined by the
intersection of the two planes
\((\boldsymbol{\pi}_1, \boldsymbol{\pi}_2)\).
References
- Har93
R. Hartley. Invariants of lines in space. In Proc. DARPA Image Understanding Workshop, 737–744, 1993.
- Har94
Richard I Hartley. Projective reconstruction and invariants from multiple images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(10):1036–1041, 1994.
- OZAAstrom04
Magnus Oskarsson, Andrew Zisserman, and Kalle Åström. Minimal projective reconstruction for combinations of points and lines in three views. Image and Vision Computing, 22(10):777–785, 2004.