
Orthogonal projections of multidimensional ellipsoids – V – relation between the inverse matrices of the involved quadratic forms and of respective covariance matrices

This series started with four main questions. The first two were: How do we know that an orthogonal projection of an (n-1)-dimensional ellipsoid from its n-dimensional vector space onto a lower p-dimensional sub-space leads to yet another ellipsoid? And how can we derive the matrix defining the quadratic form of the resulting lower-dimensional ellipsoid from the matrix describing the original ellipsoid? The answers required relatively complicated considerations. The matrix of the quadratic form which defines the hull of the ellipsoid’s projection image was found to be a Schur complement of the original matrix. In this post we will derive an answer to the third question: What is the relation between the inverse matrices of the two quadratic form matrices defining the ellipsoids?

We shall see that the relation between these inverse matrices is surprisingly simple: You can read the elements of the inverse (pxp)-matrix of the projected ellipsoid’s quadratic form directly off the inverse of the (nxn)-matrix defining the original ellipsoid by a simple selection process: Pick out all elements whose indices refer to those base vectors which span the projection’s target space.


Relevant publications and some criticism

Originally, I had planned to refer to a physics publication [1] on the topic. However, with all due respect, the proof offered there for the inverse of the quadratic form matrix of an orthogonal projection of a multidimensional ellipsoid appears somewhat questionable on second sight. Among other things, it disregards that we have to consider special points on the n-dim ellipsoid’s surface to address the hull of the projection image properly. It also treats the respective underlying vectors of the unit sphere improperly. By improperly I mean that the asserted assumptions lead to a contradiction. I will briefly address this point below. Note that the claims, conclusions and applications discussed in the named paper fortunately remain unaffected by this problem.

The fact that the Schur complement appeared in the analysis of the last post in this series already indicates that a proper proof of the relation between the different inverse matrices has to take the properties of this matrix complement into account. Such a proof will therefore become a bit more complicated than what [1] discusses in its chapter III. The interested reader is recommended to read [2], a paper written by Chris Yeh. I will follow his convincing line of argumentation closely. All credits belong to him! Another valuable paper on the same topic was published by J. Gallier; see [3]. Links and paper titles are listed in the last section of this post.

Previous results of this post series

I use the same notation and abbreviations as in previous posts. Readers already familiar with previous posts in this series may hop over this section. We defined the original ellipsoid in the ℝn by a matrix Q (= Σ-1) :

\[ \text{vectors } \pmb{x}_e \text{ of (} n\text{-1)-dim ellipsoid } E\,: \quad \pmb{x}_{e} \in E = \left\{ \pmb{x}_e \,\, | \,\, (\pmb{x}_e)^T \, \pmb{\operatorname{Q}} \, \pmb{x}_{e} \,=\, 1 \, \right\} \subset \mathbb{R}^n \,. \tag{1} \]

I use the notation Σ-1 to remind the reader that the ellipsoids could be contour surfaces of Multivariate Normal Distributions for some variance-covariance matrix Σ. Both the matrices Q (=Σ-1) and Q-1 (=Σ) are symmetric, invertible and positive-definite. Q-1 (=Σ) has a Cholesky decomposition

\[ \pmb{\operatorname{\Sigma}} \,\equiv \, \pmb{\operatorname{Q}}^{-1} \,=\, \pmb{\operatorname{A}} \, \pmb{\operatorname{A}}^T \,. \tag{2} \]

Matrix A generates the ellipsoid when applied to vectors u of the unit sphere 𝕊n-1 (u ∈ 𝕊n-1) :

\[ ||\pmb{u}|| = \sqrt{ u_1^2 + u_2^2 + … + u_n^2 \, } \,=\, 1 \,, \,\, \, \pmb{u} \in \mathbb{S}^{n-1} \, , \tag{3} \]
\[ \pmb{u} \in \mathbb{S}^{n-1} \,\Rightarrow \, \pmb{\operatorname{A}} \, \pmb{u} \in E \,. \tag{4} \]
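Relations (2) to (4) are easy to check numerically. The following small Python sketch (assuming numpy; the matrix Σ is hypothetical example data, not taken from this series) builds a random positive-definite Σ, takes its Cholesky factor A and verifies that A maps a unit vector onto the ellipsoid defined by Q = Σ-1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: a random symmetric, positive-definite Sigma in R^4
n = 4
G = rng.normal(size=(n, n))
Sigma = G @ G.T + n * np.eye(n)      # positive-definite by construction
Q = np.linalg.inv(Sigma)             # Q = Sigma^{-1}, eq. (1)

A = np.linalg.cholesky(Sigma)        # Sigma = A A^T, eq. (2)

# A unit vector u on the sphere S^{n-1} (eq. (3)) is mapped onto the ellipsoid (eq. (4))
u = rng.normal(size=n)
u /= np.linalg.norm(u)
x = A @ u

print(x @ Q @ x)                     # = u^T u = 1 up to rounding
```

The check works for any positive-definite Σ, since xTQx = uTAT(AAT)-1Au = uTu = 1.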

Our orthogonal projection was defined by a well-defined p-dimensional target space SP and its orthogonal complement (SP)⊥, both being orthogonal sub-spaces of the ℝn:

\[ \mathbb{R}^n \,=\, S_P \,\oplus (S_P)^{\perp} \,. \tag{5} \]

We chose a base of unit vectors ei such that SP was covered by the respective unit vectors ej with (1 ≤ j ≤ p). We introduced reduced vectors of the complementary orthogonal sub-spaces, SP with dimensionality p and the complement (SP)⊥ with dimensionality (n-p), such that

\[ \begin{align} \pmb{x}_e \,&=\, \begin{pmatrix} \pmb{y}_{e,p} \\ \pmb{z}_{e,n-p} \end{pmatrix} \,. \tag{6} \\[10pt] \pmb{y}_{e,p} \,&= \, (y_1, ……… , y_p)^T \in S_P \sim \mathbb{R}^p\,, \\[10pt] \pmb{z}_{e,n-p} \,&=\, (z_{p+1}, …, z_n)^T \in (S_P)^{\perp} \sim \mathbb{R}^{(n-p)} \,, \\[10pt] \pmb{x}_e \, &=\, \pmb{y}_e \,+\, \pmb{z}_e \,=\, ( (\pmb{y}_{e,p})^T, 0,..,0)^T + (0,..0, (\pmb{z}_{e,n-p})^T )^T \,. \end{align} \]

Note: We can always choose our coordinate system such that our projection’s target space is spanned by the first p unit vectors of the related base of unit vectors. We saw that the vectors describing the hull of the projection image could be derived by applying the projection to special vectors xet

\[ \pmb{x}_e^t \,=\, \pmb{y}_e^t + \pmb{z}_e^t \,=\, \begin{pmatrix} \pmb{y}_{e,p}^t \\ \pmb{z}_{e,n-p}^t \end{pmatrix} \,. \]

of the original ellipsoid. At these specific points the surface’s normal vectors Qxet have to be perpendicular to (SP)⊥. This in particular means:

\[ \left(\pmb{z}_e^t\right)^T \pmb{\operatorname{Q}} \, \pmb{x}_{e}^t \, =\, 0 \,. \tag{7} \]

This is a generalization from the case of 2-dim ellipsoids in the ℝ3, where the vector giving the line of projection onto an orthogonal target plane must be tangential to the ellipsoid at xet. You could also say that Qxet must become an element of SP. The xet in turn are generated by A from special vectors uet of the unit sphere 𝕊n-1:

\[ \begin{align} & \pmb{u}_e^t \,=\, \pmb{\operatorname{A}}^{-1} \pmb{x}_e^t \,\in\, \mathbb{S}^{n-1} \, \tag{8} \\[10pt] & \pmb{u}_e^t \,\perp\, \pmb{\operatorname{A}}^{-1} \pmb{e}_j \,, \quad \forall \,j \,\,\, \text{with} \,\,\, p+1 \,\le \, j \, \le\, n \,. \tag{9} \end{align} \]

Let us turn to the vectors ye,rt ∈ SP resulting from the projection of xet. In the last post, we partitioned the (n x n) matrix Q = Σ-1 into respective blocks with respect to the orthogonal sub-spaces (SP ⊕ (SP)⊥):

\[ \pmb{\operatorname{\Sigma}}^{-1} \,\equiv \pmb{\operatorname{Q}} \,=\, \begin{pmatrix} \pmb{\operatorname{Q}}_{yy} & \pmb{Q}_{yz} \\ (\pmb{Q}_{yz})^T & \pmb{Q}_{zz} \end{pmatrix} \,. \tag{10} \]
\[ \pmb{\operatorname{Q}}_{yy} \,\in \, \mathbb{R}^{p\operatorname{x}p} \,, \,\, \pmb{\operatorname{Q}}_{yz} \in \mathbb{R}^{p\operatorname{x}(n-p)} \,, \,\, \pmb{\operatorname{Q}}_{zz} \in \mathbb{R}^{(n-p)\operatorname{x}(n-p)} \,. \tag{11} \]

The matrix for the (p-1)-dimensional ellipsoid, i.e. the hull of the projection image in SP, was given by a reduced matrix Qp :

\[ \pmb{\operatorname{Q}}_{p} \,:=\, \pmb{\operatorname{Q}}_{yy} \,-\, \pmb{\operatorname{Q}}_{yz} \pmb{\operatorname{Q}}_{zz}^{-1} \pmb{\operatorname{Q}}_{yz}^T \,\, \in \,\, \mathbb{R}^{p\,x\, p} \,. \tag{12} \]
\[ (\pmb{y}_{e,p})^T \, \pmb{\operatorname{Q}}_{p} \, \pmb{y}_{e,p} \,=\, 1 \,, \quad \pmb{y}_p \in S_P \, . \tag{13} \]

This matrix Qp is just the Schur complement “[Q/Qzz]” of Q (= Σ-1):

\[ \pmb{\operatorname{Q}}_{p} \,=\, [\pmb{\operatorname{Q}}/\pmb{\operatorname{Q}}_{zz}] \,. \tag{14} \]
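For readers who want to see eqs. (10) to (14) in action, here is a minimal numpy sketch (the matrix Q is hypothetical example data): it partitions a random symmetric, positive-definite Q into blocks and forms the Schur complement Qp, which again turns out symmetric and positive-definite:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 4, 2
G = rng.normal(size=(n, n))
Q = np.linalg.inv(G @ G.T + n * np.eye(n))    # symmetric, positive-definite Q = Sigma^{-1}

# Block partition of Q with respect to S_P and its complement (eqs. (10), (11))
Qyy, Qyz, Qzz = Q[:p, :p], Q[:p, p:], Q[p:, p:]

# Schur complement [Q/Qzz] defining the projected ellipsoid (eqs. (12), (14))
Qp = Qyy - Qyz @ np.linalg.inv(Qzz) @ Qyz.T

print(np.allclose(Qp, Qp.T))                  # symmetric
print(np.all(np.linalg.eigvalsh(Qp) > 0))     # positive-definite
```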


So much for the general understanding.

Pitfalls regarding asymmetric projection vectors and assumptions on unit vectors

The starting points are the equations for the special vectors xte,n ∈ ℝn of the original ellipsoid (see post IV) and the reduced vectors ye,p (= ye,r ) ∈ SP (∼ ℝp) for the lower dimensional ellipsoid:

\[ (\pmb{x}_{e,n}^t)^T \, \pmb{\operatorname{Q}} \, \pmb{x}_{e,n}^t \,=\, 1 \,, \tag{15} \]
\[ (\pmb{y}_{e,p}^t)^T \, \pmb{\operatorname{Q}}_p \, \pmb{y}_{e,p}^t \,=\, 1 \,. \tag{16} \]

The (added) second index just indicates the dimensionality. This will be useful to distinguish vectors of different vector spaces below.

Due to the results of posts I to IV we know for sure that eq. (16) holds with a positive-definite, invertible and symmetric matrix Qp. In contrast to my approach in post IV, the authors of [1] use a modified projection operator Pnp to indicate the transition between the involved Euclidean vector spaces:

\[ \pmb{\operatorname{P}}^n_p \,=\, \begin{pmatrix} \pmb{\mathbb{I}}_{p\operatorname{x}p} & \pmb{\mathbb{O}}_{p\operatorname{x}(n-p)} \end{pmatrix} \,\in \, \mathbb{R}^{p\operatorname{x}n}\,. \tag{17} \]
\[ \pmb{\mathbb{I}}_{p\operatorname{x}p} \,\in \, \mathbb{R}^{p\operatorname{x}p} \,, \,\, \pmb{\mathbb{O}}_{p\operatorname{x}(n-p)} \in \mathbb{R}^{p\operatorname{x}(n-p)} \,. \]

The matrix 𝕀pxp is the (pxp) identity matrix. The matrix 𝕆px(n-p) is the (p x (n-p)) zero matrix. Pnp fulfills:

\[ \pmb{\operatorname{P}}_p^n \, [\pmb{\operatorname{P}}_p^n]^T \,=\, \pmb{\mathbb{I}}_{p\operatorname{x}p} \,. \tag{18} \]

But:

\[ [\pmb{\operatorname{P}}_p^n]^T \, \pmb{\operatorname{P}}_p^n \,\ne \, \pmb{\mathbb{I}}_{n\operatorname{x}n} \,. \tag{19} \]
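The asymmetry expressed by eqs. (18) and (19) can be checked directly, e.g. for n = 4, p = 2 (a small sketch, assuming numpy):

```python
import numpy as np

n, p = 4, 2
# The asymmetric projection operator P^n_p of eq. (17): (I_pxp | O_px(n-p))
P = np.hstack([np.eye(p), np.zeros((p, n - p))])

print(np.allclose(P @ P.T, np.eye(p)))   # eq. (18): True
print(np.allclose(P.T @ P, np.eye(n)))   # eq. (19): False
```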

In paper [1] the authors focus on the case p=2, but generalize their conclusions afterwards. P does, however, not fulfill P P = P; being non-square, it is not a projection operator in the strict sense. One has to be very careful with it, as it connects vectors in spaces with different dimensions. The application of Pnp to a vector xn ∈ ℝn creates a vector yp ∈ ℝp :

\[ \pmb{y}_p\,=\, \pmb{\operatorname{P}}^n_p \, \pmb{x}_n \,\in\, \mathbb{R}^p\,. \tag{20} \]

In particular

\[ \begin{align} & \pmb{y}_{e,p}^t \,=\, \pmb{\operatorname{P}}^n_p \, \pmb{x}^t_{e,n} \,=\, \pmb{\operatorname{P}}^n_p \, \pmb{\operatorname{A}}_n \, \pmb{u}_n^t \,, \tag{21} \\[10pt] & \left[\pmb{\operatorname{P}}^n_p \, \pmb{\operatorname{A}}_n\right] \,\in\, \mathbb{R}^{p\operatorname{x}n} \,. \end{align} \]

The authors of [1] claim

\[ \begin{align} & \pmb{y}_{e,p}^t \,=\, \pmb{\operatorname{A}}_p \, \pmb{u}_p^t \,, \\[10pt] & \text{and (wrongly!) : } \,\, \pmb{u}_p^t \,=\, \pmb{\operatorname{P}}^n_p \, \pmb{u}_n^t . \tag{A} \end{align} \]

Another (correct) assertion of the authors is

\[ \pmb{\operatorname{Q}}_p^{-1} \, = \, \pmb{A}_p \,\pmb{A}_p^T \,=\, \pmb{\operatorname{P}}_p^n \, \pmb{\operatorname{Q}}^{-1} \, \left[\pmb{\operatorname{P}}^n_p \right]^T \,. \tag{B} \]

Although the projection Pnp unt lies in SP ∼ ℝp, it is NOT equal to upt. Actually and instead,

\[ \pmb{u}_p^t \,=\, \pmb{\operatorname{A}}_p^{-1} \, \pmb{\operatorname{P}}^n_p \, \pmb{\operatorname{A}}_n \, \pmb{u}_n^t \,. \tag{22} \]

If assertion (A) were right, it would lead to a contradiction with the (correct) assertion (B), because due to (22) it would mean

\[ \begin{align} & \text{wrong : }\quad \pmb{\operatorname{A}}_p^{-1} \, \pmb{\operatorname{P}}^n_p \, \pmb{\operatorname{A}}_n \,\equiv\, \pmb{\operatorname{P}}^n_p \quad \Rightarrow \\[10pt] & \pmb{\operatorname{A}}_p^{-1} \,=\, \pmb{\operatorname{P}}^n_p \, \pmb{\operatorname{A}}_n^{-1} \, \left[\pmb{\operatorname{P}}^n_p\right]^T \quad \Rightarrow \\[10pt] & \pmb{\operatorname{Q}}_p \,=\, \pmb{\operatorname{P}}_p^n \, \left[\pmb{\operatorname{A}}_n^{-1}\right]^T \, \left[\pmb{\operatorname{P}}^n_p \right]^T \, \pmb{\operatorname{P}}_p^n \, \pmb{\operatorname{A}}_n^{-1} \, \left[\pmb{\operatorname{P}}^n_p\right]^T \tag{C} \end{align} \]

(C) contradicts (B) because of inequality (19)! So, since (B) is correct, assertion (A) must be wrong …

Let me in addition warn you of another mistake one is tempted to make: Combining (15) and (16), we might be tempted to conclude something regarding Q:

\[ \pmb{y}_{e,p}^T \, \pmb{\operatorname{Q}}_{P,red} \, \pmb{y}_{e,p} \,=\, 1 \,=\, (\pmb{x}_{e,n}^t)^T \, \pmb{\operatorname{Q}} \, \pmb{x}_{e,n}^t \quad \Rightarrow \]
\[ (\pmb{x}_{e,n}^t)^T \, \left[ \pmb{\operatorname{P}}^n_p \right]^T \, \pmb{\operatorname{Q}}_{P,red} \, \pmb{\operatorname{P}}^n_p \, \pmb{x}_{e,n}^t \,=\, (\pmb{x}_{e,n}^t)^T \, \pmb{\operatorname{Q}} \, \pmb{x}_{e,n}^t \]
\[ \require{cancel} {\large {WRONG: }} \,\, \cancel{ \pmb{\operatorname{Q}} \,=\, \left[ \pmb{\operatorname{P}}^n_p \right]^T \, \pmb{\operatorname{Q}}_{P,red} \, \pmb{\operatorname{P}}^n_p } \,. \]

This would be wrong! You cannot conclude anything about operators in the higher-dimensional space when special vectors (!) are selected. The condition does not hold for arbitrary vectors in the original space!

Matrix decomposition based on Schur complements

Let us consider an existing and invertible Schur complement “[M/D]” of a general invertible matrix M:

\[ \pmb{\operatorname{M}} \,=\, \begin{pmatrix} \pmb{\operatorname{A}} & \pmb{\operatorname{B}} \\ \pmb{\operatorname{C}} & \pmb{\operatorname{D}} \end{pmatrix} \,. \tag{23} \]
\[ \pmb{\operatorname{A}} \,\in \, \mathbb{R}^{p\operatorname{x}p} \,, \,\, \pmb{\operatorname{B}} \in \mathbb{R}^{p\operatorname{x}(n-p)} \,, \,\, \pmb{\operatorname{C}} \in \mathbb{R}^{(n-p)\operatorname{x}p} \,, \,\, \pmb{\operatorname{D}} \in \mathbb{R}^{(n-p)\operatorname{x}(n-p)} \,, \]
\[ [ \pmb{\operatorname{M}} / \pmb{\operatorname{D}} ] \,=\, \pmb{\operatorname{A}} \,-\, \pmb{\operatorname{B}} \, \pmb{\operatorname{D}}^{-1} \, \pmb{\operatorname{C}} \,. \tag{24} \]

Compare this with eq. (12). For our purposes below, we assume that D is invertible. C. Yeh proves in [2] that we then can decompose matrix M in the following way:

\[ \begin{align} \pmb{\operatorname{M}} \,&=\, \begin{pmatrix} \pmb{\mathbb{I}}_{p\operatorname{x}p} & \pmb{\operatorname{B}} \pmb{\operatorname{D}}^{-1} \\ \pmb{\mathbb{O}}_{(n-p)\operatorname{x}p} & \pmb{\mathbb{I}}_{(n-p)\operatorname{x}(n-p)} \end{pmatrix} \begin{pmatrix} [ \pmb{\operatorname{M}} / \pmb{\operatorname{D}} ] \, & \pmb{\mathbb{O}}_{p\operatorname{x}(n-p)} \\ \pmb{\mathbb{O}}_{(n-p)\operatorname{x}p} & \pmb{\operatorname{D}}\end{pmatrix} \begin{pmatrix} \pmb{\mathbb{I}}_{p\operatorname{x}p} & \pmb{\mathbb{O}}_{p\operatorname{x}(n-p)} \\ \pmb{\operatorname{D}}^{-1} \pmb{\operatorname{C}} & \pmb{\mathbb{I}}_{(n-p)\operatorname{x}(n-p)} \end{pmatrix} \\[10pt] & = \, \pmb{\operatorname{M}}_I \, \pmb{\operatorname{M}}_{II} \, \pmb{\operatorname{M}}_{III} \,. \end{align} \tag{25} \]

You can relatively easily verify this by multiplying out the block matrices. Yeh also showed

\[ \operatorname{det}(\pmb{\operatorname{M}} ) \, =\, \operatorname{det} (\pmb{\operatorname{D}} ) * \operatorname{det} ( [ \pmb{\operatorname{M}} / \pmb{\operatorname{D}} ] ) \,. \tag{26} \]

As M and D are invertible, this in turn guarantees the invertibility of the Schur complement [M/D].
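Both the decomposition (25) and the determinant relation (26) can be verified numerically for a random invertible matrix M (hypothetical example data; a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 2
M = rng.normal(size=(n, n)) + n * np.eye(n)   # generic invertible M with invertible block D
A, B = M[:p, :p], M[:p, p:]
C, D = M[p:, :p], M[p:, p:]

S = A - B @ np.linalg.inv(D) @ C              # Schur complement [M/D], eq. (24)

# Block decomposition M = M_I M_II M_III, eq. (25)
I_p, I_q = np.eye(p), np.eye(n - p)
O_pq, O_qp = np.zeros((p, n - p)), np.zeros((n - p, p))
M_I   = np.block([[I_p, B @ np.linalg.inv(D)], [O_qp, I_q]])
M_II  = np.block([[S, O_pq], [O_qp, D]])
M_III = np.block([[I_p, O_pq], [np.linalg.inv(D) @ C, I_q]])

print(np.allclose(M_I @ M_II @ M_III, M))                                  # eq. (25)
print(np.isclose(np.linalg.det(M), np.linalg.det(D) * np.linalg.det(S)))  # eq. (26)
```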

Inversion of a matrix M with invertible Schur complement [M/D]

The inversion can now easily be performed and yields

\[ \begin{align} \pmb{\operatorname{M}}^{-1} \,&=\, \pmb{\operatorname{M}}_{III}^{-1} \, \pmb{\operatorname{M}}_{II}^{-1} \, \pmb{\operatorname{M}}_{I}^{-1} \\[10pt] &= \, \begin{pmatrix} \pmb{\mathbb{I}}_{p\operatorname{x}p} & \pmb{\mathbb{O}}_{p\operatorname{x}(n-p)} \\ -\,\pmb{\operatorname{D}}^{-1} \pmb{\operatorname{C}} & \pmb{\mathbb{I}}_{(n-p)\operatorname{x}(n-p)} \end{pmatrix} \begin{pmatrix} [ \pmb{\operatorname{M}} / \pmb{\operatorname{D}} ]^{-1} \, & \pmb{\mathbb{O}}_{p\operatorname{x}(n-p)} \\ \pmb{\mathbb{O}}_{(n-p)\operatorname{x}p} & \pmb{\operatorname{D}}^{-1} \end{pmatrix} \begin{pmatrix} \pmb{\mathbb{I}}_{p\operatorname{x}p} & -\, \pmb{\operatorname{B}} \pmb{\operatorname{D}}^{-1} \\ \pmb{\mathbb{O}}_{(n-p)\operatorname{x}p} & \pmb{\mathbb{I}}_{(n-p)\operatorname{x}(n-p)} \end{pmatrix} \\[10pt] &=\, \begin{pmatrix} [\pmb{\operatorname{M}} / \pmb{\operatorname{D}}]^{-1} & -\, [\pmb{\operatorname{M}} / \pmb{\operatorname{D}} ]^{-1} \pmb{\operatorname{B}} \pmb{\operatorname{D}}^{-1} \\ -\,\pmb{\operatorname{D}}^{-1} \pmb{\operatorname{C}} \,[\pmb{\operatorname{M}} / \pmb{\operatorname{D}} ]^{-1} & \pmb{\operatorname{D}}^{-1} + \pmb{\operatorname{D}}^{-1} \pmb{\operatorname{C}} \,[\pmb{\operatorname{M}} / \pmb{\operatorname{D}} ]^{-1} \pmb{\operatorname{B}} \pmb{\operatorname{D}}^{-1} \end{pmatrix} \end{align} \tag{27} \]
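A quick numerical cross-check of the block formula for M-1 against a direct inversion (again a sketch with hypothetical example data, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 2
M = rng.normal(size=(n, n)) + n * np.eye(n)   # generic invertible M with invertible block D
B, C, D = M[:p, p:], M[p:, :p], M[p:, p:]
S = M[:p, :p] - B @ np.linalg.inv(D) @ C      # Schur complement [M/D]
Si, Di = np.linalg.inv(S), np.linalg.inv(D)

# The four blocks of M^{-1} according to eq. (27)
Minv_blocks = np.block([
    [Si,           -Si @ B @ Di],
    [-Di @ C @ Si,  Di + Di @ C @ Si @ B @ Di],
])

print(np.allclose(Minv_blocks, np.linalg.inv(M)))   # True
```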

Application to the matrices controlling the orthogonal projection of a (n-1)-dim ellipsoid

We now can use the special asymmetric projection operators Pnp suggested by the authors of [1] to pick the upper left block out of M-1:

\[ [ \pmb{\operatorname{M}} / \pmb{\operatorname{D}} ]^{-1} \,=\, \pmb{\operatorname{P}}^n_p \, \pmb{\operatorname{M}}^{-1} \, [\pmb{\operatorname{P}}^n_p]^T \,. \tag{28} \]

The operators just pick the upper left (pxp) block out of the matrix M-1; compare the first block in eq. (27). You can easily verify this; I omit the proof.

By replacing M with our original symmetric, invertible Q (= Σ-1) and using the Schur complement [Q/Qzz], we find

\[ \pmb{\operatorname{Q}}_p^{-1} \,=\, \pmb{A}_p\,\pmb{A}_p^T \,=\, [ \pmb{\operatorname{Q}} / \pmb{\operatorname{Q}}_{zz} ]^{-1} \,=\, \pmb{\operatorname{P}}^n_p \, \pmb{\operatorname{Q}}^{-1} \, [\pmb{\operatorname{P}}^n_p]^T \,. \tag{29} \]

This is nothing else than assertion (B) – but this time derived correctly via Schur complement algebra. And this in turn means that we can actually read the elements of the symmetric matrix [Qp]-1 directly off Q-1 (!)

It is easy to see that this would hold even if we reordered our base vectors. The rule is:

  • Pick those elements of Q-1 whose indices refer to the base vectors of the sub-space you want to project your ellipsoid onto!

Consequences for variance-covariance matrices of Multivariate Normal Distributions orthogonally projected down to sub-spaces

The results above have direct consequences for the projections of variance-covariance matrices of Multivariate Normal Distributions down to sub-spaces SP of the ℝn, e.g. (xk, xq) coordinate planes given by a pair of base vectors (ek, eq). To get the lower-dimensional covariance matrix Σp describing the projected distribution, you just pick those elements (i.e. variances and covariances) of the variance-covariance matrix Σ whose indices refer to the base vectors ej spanning the projection’s target space.

In the case of a coordinate plane given by a pair of base vectors (ek, eq) you just pick the elements at the crossings of the k-th and q-th columns with the k-th and q-th rows of the matrix.
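In numpy this selection rule amounts to fancy indexing with np.ix_. The sketch below (Σ is hypothetical example data) picks the sub-matrix for the target indices and cross-checks it against the Schur-complement route:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
G = rng.normal(size=(n, n))
Sigma = G @ G.T + n * np.eye(n)          # variance-covariance matrix of an MVN in R^4

# Projection onto the (x_k, x_q) coordinate plane, e.g. k = 1, q = 3 (0-based: 0 and 2)
idx = [0, 2]
Sigma_p = Sigma[np.ix_(idx, idx)]        # picked elements = covariance of the projected MVN

# Cross-check via the Schur complement of Q = Sigma^{-1} (eqs. (12), (29)):
Q = np.linalg.inv(Sigma)
order = idx + [i for i in range(n) if i not in idx]   # target indices first
Qr = Q[np.ix_(order, order)]                          # reordered Q, still symmetric
p = len(idx)
Qp = Qr[:p, :p] - Qr[:p, p:] @ np.linalg.inv(Qr[p:, p:]) @ Qr[:p, p:].T

print(np.allclose(np.linalg.inv(Qp), Sigma_p))   # True: [Q_p]^{-1} = picked Sigma block
```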

Conclusion

We have shown the following for the orthogonal projection of an (n-1)-dimensional ellipsoid in the ℝn to a p-dimensional sub-space SP (with an orthogonal complement (SP)⊥; ℝn = SP ⊕ (SP)⊥) and base vectors ek (1 ≤ k ≤ p) and ej (p+1 ≤ j ≤ n) which span SP and (SP)⊥, respectively :

  • The orthogonal projection of an (n-1)-dimensional ellipsoid in the ℝn to a p-dimensional sub-space SP has an ellipsoidal (p-1)-dimensional hull / surface.
  • The quadratic form matrix Qp which describes the hull of the projection’s image is a Schur complement of the matrix Q describing the original ellipsoid. The Schur complement refers to a block structure reflecting the sub-spaces SP and (SP).
  • The inverse matrix [Qp]-1 contains those elements of the inverse matrix Q-1 which have indices referring to the base vectors of the sub-space SP.
  • Let a Multivariate Normal Distribution MVNn in the ℝn be given by a symmetric, invertible and pos.-definite (nxn)-variance-covariance matrix Σ. The elements of the (pxp)-variance-covariance matrix Σp describing the lower dimensional image MVNp of an orthogonal projection of MVNn down to a p-dimensional sub-space SP can directly be picked from the matrix Σ. You just have to choose those elements which have indices referring to base vectors spanning SP.

In the next post we will check all results and claims of this post series for a 4-dimensional MVN and its ellipsoids, which we will project down to 3-dimensional sub-spaces and 2-dimensional coordinate planes.

Links and literature

[1] R. Anwar, M. Hamilton, P.M. Nadolsky, 2019, “Direct ellipsoidal fitting of discrete multi-dimensional data”, Department of Physics, Southern Methodist University, Dallas,
https://arxiv.org/abs/1901.05511, https://arxiv.org/pdf/1901.05511

[2] C. Yeh, 2021, “Schur Complements and the Matrix Inversion Lemma”, article published at github.io,
https://chrisyeh96.github.io/2021/05/19/schur-complement.html

[3] J. Gallier, 2011, “Schur Complements and Applications”, University of Pennsylvania, DOI:10.1007/978-1-4419-9961-0_16,
https://www.researchgate.net/publication/251414079_Schur_Complements_and_Applications