Here are some notes I've made about linear algebra, but you may also find this link to some good undergraduate lecture notes useful.

__Positive definite__

A square matrix M is positive definite if z^T M z is positive for every non-zero z.

"A matrix is positive definite if it's symmetric and all its eigenvalues are positive" from here. It has applications in finding minima and maxima.
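That quoted characterisation translates directly into a check. A minimal sketch, assuming NumPy (the function name `is_positive_definite` is mine, not from any library):

```python
import numpy as np

def is_positive_definite(M: np.ndarray) -> bool:
    """Check symmetry, then test whether all eigenvalues are positive."""
    if not np.allclose(M, M.T):
        return False
    # eigvalsh is the right routine for symmetric matrices; it returns real eigenvalues
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # eigenvalues 1 and 3: positive definite
B = np.array([[1.0, 2.0], [2.0, 1.0]])    # eigenvalues 3 and -1: not positive definite
```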

__Hermitian Matrices__

Where A_{i,j} = Ā_{j,i}, where Ā is the complex conjugate of A. Equivalently, A equals its own conjugate transpose.

This means it is necessarily a square matrix.

For real matrices, this means it's just a symmetric matrix.
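A quick NumPy sketch of the definition (the helper name `is_hermitian` is my own):

```python
import numpy as np

def is_hermitian(A: np.ndarray) -> bool:
    """A is Hermitian iff it equals its own conjugate transpose (so it must be square)."""
    return A.shape[0] == A.shape[1] and np.allclose(A, A.conj().T)

# Real diagonal, conjugate-pair off-diagonal entries
H = np.array([[2, 1 - 1j], [1 + 1j, 3]])
# A real symmetric matrix is the special case with no imaginary parts
S = np.array([[1.0, 2.0], [2.0, 5.0]])
```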

__Gramian/Gram Matrix__

The Hermitian matrix, G, where:

G_{i,j} = ⟨v_i, v_j⟩

That is, it "is the matrix of all possible inner products" (from MathWorld).
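For real vectors the "matrix of all possible inner products" is just a matrix product. A small sketch, assuming NumPy, with the vectors stored as rows of `V`:

```python
import numpy as np

# Rows of V are the vectors v_1, v_2; the Gram matrix holds all pairwise inner products
V = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
G = V @ V.T   # G[i, j] = <v_i, v_j>; symmetric (Hermitian) by construction
```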

__Degrees of Freedom of a matrix__

The number of degrees of freedom of an nxn rotation matrix is n(n-1)/2. The degrees of freedom equate to the number of linearly independent planes of rotation.

The best explanation of how we calculate this number that I found was here on StackExchange. Basically, we are rotating the system around planes, not axes (although in ℝ^3 you'll find you get the same degrees of freedom if you erroneously treat n as the number of axes). To define a plane you need two axes, and there are n(n-1)/2 such pairings.
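A quick way to convince yourself of the counting argument is to enumerate the pairs of axes directly (a small Python sketch):

```python
from itertools import combinations

# A rotation happens in a plane spanned by an unordered pair of axes; counting
# the pairs of n axes gives the n(n-1)/2 degrees of freedom.
n = 4
planes = list(combinations(range(n), 2))  # e.g. (0, 1) is the x-y plane
```

For n = 3 this gives 3 planes, which is why the "number of axes" shortcut happens to work there.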

__Cholesky decomposition__

Cholesky decomposition says a Hermitian positive definite (see above) matrix, A, can be decomposed such that A = LL^* where L is a **l**ower triangular matrix.

The beauty of the L matrices being triangular is that you can solve the linear equations Ax = b by substituting in LL^*, solving Ly = b for y by forward substitution and then solving L^*x = y for x by back substitution.
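The two substitution steps are simple enough to spell out. A sketch for the real (L^* = L^T) case, assuming NumPy; the helper names are mine:

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L, top row first."""
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U, bottom row first."""
    x = np.zeros_like(y, dtype=float)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 2.0], [2.0, 3.0]])  # symmetric positive definite
b = np.array([2.0, 5.0])

L = np.linalg.cholesky(A)               # A = L L^T
x = back_sub(L.T, forward_sub(L, b))    # L y = b, then L^T x = y
```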

__Diagonalization__

The recipe for diagonalizing a matrix M follows:

- Find the eigenvalues.
- For each eigenvalue, find the eigenvectors and normalize them.
- Re-express the original equation as M C = C D, where C is the matrix with the normalized eigenvectors as columns and D is a diagonal matrix whose diagonal elements are the eigenvalues. (Note: this is just a representation of the original problem of finding the eigenvectors/values.)
- Re-arrange for M, noting that det(C) cannot be 0.

Which leads us to what we want, that is the decomposition:

M = C D C^{-1}

“A matrix has real eigenvalues and can be diagonalized by a unitary similarity transformation if and only if it is Hermitian.” Boas, p154
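The recipe above can be sketched with NumPy, which handles the eigenvalue/eigenvector steps for us:

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])

eigvals, C = np.linalg.eig(M)  # columns of C are the (normalized) eigenvectors
D = np.diag(eigvals)           # eigenvalues on the diagonal

# Re-arranging M C = C D gives the decomposition M = C D C^{-1}
M_rebuilt = C @ D @ np.linalg.inv(C)
```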

__Similar Matrices__

Matrices A and B are similar if A = P^{-1}BP.

“Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix” (Wikipedia)

See "Diagonalization" above.
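Since similar matrices are the same operator in different bases, they share eigenvalues. A small NumPy sketch (the matrices here are arbitrary examples of mine):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
P = np.array([[1.0, 1.0], [0.0, 1.0]])  # any invertible change-of-basis matrix

B = P @ A @ np.linalg.inv(P)  # B is similar to A
```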

__Symmetric matrices__

A notable property of a symmetric matrix is that its eigenvalues are all real. This is a good proof from my alma mater, Imperial. Briefly, if A ∈ ℝ^{n×n} and one of its eigenvectors is u ∈ ℂ^n, then Au = λu. Take the complex conjugate of both sides, pre-multiply by u^* and re-arrange. You'll find that (because eigenvectors are not 0) λ - λ^* = 0. The only way for this to be true is that λ is real.

The eigenvectors of a symmetric matrix can be chosen (that is, we choose to scale them to unit vectors) to be orthonormal. We can prove this by noting for a symmetric matrix A that A = A^T. Plugging this into the equation for diagonalization, there is a *hint* [Strang, p330] that C^T = C^{-1}, which is exactly the property of orthogonal matrices.

The proof is: take Au_1 = λ_1 u_1 and Au_2 = λ_2 u_2, then pre-multiply the first equation with u_2^T, re-arrange and substitute in the second. You'll find that for λ_1 ≠ λ_2, u_2^T u_1 = 0.

They can always be diagonalized (StackExchange).
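Both properties (real eigenvalues, orthonormal eigenvectors) can be seen numerically. A sketch assuming NumPy, whose `eigh` routine is specifically for symmetric/Hermitian matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals, C = np.linalg.eigh(A)  # eigh exploits symmetry; eigenvalues come back real
# The eigenvector matrix is orthogonal: C^T = C^{-1}, i.e. C^T C = I
```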

__Covariant and Contravariant matrices__

This has nothing to do with type co- and contravariance that you find in programming languages (so I am told by a maths PhD).

Basically, “the differentials of the coordinates are the components of a contravariant vector. Similarly, … the partial derivatives of a function are the components of a covariant vector.” [Boas p530].

More concretely, “every quantity which under a transformation of coordinates, transforms like the coordinate differentials is called a contravariant tensor.… Every quantity which under a coordinate transformation, transforms like the derivatives of a scalar is called a covariant tensor.” (StackExchange).

I've seen various analogies to explain this on the Web but it's hard to do so because “in three-dimensional Euclidean space,… contravariant and covariant tensors are equivalent.... The two types of tensors do differ in higher dimensions, however.” (MathWorld); specifically, they "become identical for Cartesian vectors" [Boas p529].

In terms of their notation, “covariant vectors are representable as row vectors. Contravariant vectors are representable as column vectors.” (StackExchange)

As a mnemonic, remember “co-low-row”. That is, covariant matrices have a lower (subscript) index in notation and by convention are represented as row vectors.

__QR factorization__

I've discussed this before. It decomposes a matrix into two matrices, Q and R, where Q is orthogonal and R is upper triangular. It does not give you eigenvalues directly, though you can use the related QR Algorithm to compute them; note, however: "The QR algorithm only converges using Wilkinson-Shift strategie[sic]" (StackExchange).
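The factorization itself is one call in NumPy. A minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

Q, R = np.linalg.qr(A)  # Q has orthonormal columns, R is upper triangular
```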

__Singular Value Decomposition__

Note that despite similarities, singular values are not eigenvalues, although singular values can be used to compute eigenvalues and vectors. In fact, for a real, symmetric matrix the singular values are the absolute values of the eigenvalues (so the two coincide when the matrix is also positive semi-definite) - but then few matrices are real and symmetric.

Intuitively, the difference between SVD and eigendecomposition can be found here at StackExchange:

"SVD says for any linear map, there is an orthonormal frame in the domain such that it is first mapped to a different orthonormal frame in the image space, and then the values are scaled. Eigendecomposition says that there is a basis, it doesn't have to be orthonormal... Consider the difference for a rotation matrix in ℜ^2... Here, there are no real eigenvalues and this corresponds to there being no choice of basis which under the transformation is simply a scaling. On the other hand, SVD makes a lot of sense here because it says we can take the standard basis in the domain, map it to the rotated version of this basis (thought of as in the image space), and scale everything by 1."

The relationship between eigenvalues and singular values can be found here on StackExchange. Basically, if we are looking at matrix X, let C = X^T X. Diagonalise this (see above) so C = VDV^{-1}, but also note that X = USV^T from SVD. Substitute this into the equation for C and equate it to the diagonalized one (noting U^T U = I) and you can see that λ_i = s_i^2.

Interestingly, "the parallelization of dense eigenvalue problems is still an area of active research" (see here).
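That substitution argument can be checked numerically in a few lines. A sketch assuming NumPy, with an arbitrary random X:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
C = X.T @ X  # symmetric positive semi-definite

eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]  # eigenvalues of X^T X, descending
s = np.linalg.svd(X, compute_uv=False)          # singular values of X, descending
# The eigenvalues of X^T X are the squared singular values of X
```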