Here are some notes I've made about linear algebra, but you may also find this link to some good undergraduate lecture notes useful.
Positive definite
A square matrix M is positive definite if zᵀMz is positive for every non-zero vector z.
"A matrix is positive definite if it's symmetric and all its eigenvalues are positive" from here. It has applications in finding minima and maxima.
Hermitian Matrices
Where Aᵢⱼ = A̅ⱼᵢ
and A̅ is the complex conjugate of A.
This means it is necessarily a square matrix.
For real matrices, this means it's just a symmetric matrix.
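A minimal NumPy illustration (my own example):

```python
import numpy as np

# A small complex matrix equal to its own conjugate transpose.
A = np.array([[2.0 + 0j, 1 - 1j],
              [1 + 1j,   3.0 + 0j]])
assert np.allclose(A, A.conj().T)      # Hermitian: A[i, j] == conj(A[j, i])

# For a real matrix the conjugate does nothing, so Hermitian just means symmetric.
R = np.array([[1.0, 2.0],
              [2.0, 5.0]])
assert np.allclose(R, R.conj().T) and np.allclose(R, R.T)
```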
Gramian/Gram Matrix
The Hermitian matrix, G, of a set of vectors v₁, …, vₙ where:
Gᵢⱼ = ⟨vᵢ, vⱼ⟩
That is, it "is the matrix of all possible inner products" (from MathWorld).
Degrees of Freedom of a matrix
The number of degrees of freedom of an n×n rotation matrix is n(n-1)/2. The degrees of freedom equate to the number of linearly independent planes of rotation.
The best explanation of how we calculate this number I found was here on StackExchange. Basically, we are rotating the system around planes, not axes (although in ℝ³ you'll find you get the same degrees of freedom if you erroneously treat n as the number of axes). To define a plane, you need two axes, and there are n(n-1)/2 such pairings.
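The counting argument is small enough to spell out in code (my own sketch):

```python
from itertools import combinations

# One degree of freedom per unordered pair of axes, i.e. per plane of rotation.
def rotation_degrees_of_freedom(n):
    return len(list(combinations(range(n), 2)))   # = n(n-1)/2

assert rotation_degrees_of_freedom(3) == 3    # xy, yz, zx planes in R^3
assert rotation_degrees_of_freedom(4) == 6    # six planes in R^4
```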
Cholesky decomposition
Cholesky decomposition says that a Hermitian, positive definite (see above) matrix A can be decomposed as A = LL*, where L is a lower triangular matrix and L* is its conjugate transpose.
The beauty of L being triangular is that you can solve the linear equations Ax = b by substituting in LL*, solving Ly = b for y by forward substitution and then solving L*x = y for x by back substitution.
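A short SciPy sketch of that solve (my own example, using scipy.linalg rather than anything from the links above):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
A = B @ B.T + 4 * np.eye(4)       # symmetric positive definite by construction
b = rng.normal(size=4)

L = cholesky(A, lower=True)       # A = L L^T (L^* in the complex case)

y = solve_triangular(L, b, lower=True)      # forward substitution: L y = b
x = solve_triangular(L.T, y, lower=False)   # back substitution:    L^T x = y

assert np.allclose(A @ x, b)
```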
Diagonalization
The recipe for diagonalizing a matrix M is as follows:
- Find the eigenvalues.
- For each eigenvalue, find the eigenvectors and normalize them.
- Re-express the original equation as M C = C D, where C is the matrix with the normalized eigenvectors as its columns and D is a diagonal matrix whose diagonal elements are the eigenvalues. (Note: this is just a restatement of the original eigenvector/eigenvalue problem.)
- Re-arrange for M, noting that this requires det(C) ≠ 0, i.e. the eigenvectors must be linearly independent.
Which leads us to what we want, that is the decomposition:
M = C D C⁻¹
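The recipe, run numerically with NumPy on a small matrix of my own choosing:

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1 and 2: eigenvalues and normalized eigenvectors (np.linalg.eig already
# returns unit-length eigenvectors as the columns of C).
eigenvalues, C = np.linalg.eig(M)
D = np.diag(eigenvalues)

# Step 3: M C = C D.
assert np.allclose(M @ C, C @ D)

# Step 4: re-arranging is only legal because det(C) != 0 here.
assert not np.isclose(np.linalg.det(C), 0)
assert np.allclose(M, C @ D @ np.linalg.inv(C))
```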
“A matrix has real eigenvalues and can be diagonalized by a unitary similarity transformation if and only if it is Hermitian.” [Boas, p154]
Similar Matrices
Matrices A and B are similar if A = P⁻¹ B P for some invertible matrix P.
“Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix” (Wikipedia)
See "Diagonalization" above.
Symmetric matrices
A notable property of symmetric matrices is that their eigenvalues are all real. This is a good proof from my alma mater, Imperial. Briefly, if A ∈ ℝⁿˣⁿ is symmetric and one of its eigenvectors is u ∈ ℂⁿ, then Au = λu. Pre-multiply both sides by u*, take the complex conjugate of both sides (using A = Aᵀ) and re-arrange. You'll find that (because eigenvectors are not 0) λ − λ* = 0. The only way for this to be true is that λ is real.
The eigenvectors of a symmetric matrix can be chosen (that is, we choose to scale them to unit vectors) to be orthonormal. We can prove this by noting that for a symmetric matrix A, A = Aᵀ. Plugging this into the equation for diagonalization, there is a hint [Strang, p330] that Cᵀ = C⁻¹, which is exactly the defining property of orthogonal matrices.
The proof: given Au₁ = λ₁u₁ and Au₂ = λ₂u₂, pre-multiply the first equation by u₂ᵀ, re-arrange and substitute in the second. You'll find that for λ₁ ≠ λ₂, u₂ᵀu₁ = 0.
Symmetric matrices can always be diagonalized (StackExchange).
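All three claims are easy to see numerically with np.linalg.eigh (my example):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(3, 3))
A = B + B.T                               # symmetric by construction

# eigh is the routine for symmetric/Hermitian matrices.
eigenvalues, C = np.linalg.eigh(A)

assert np.isrealobj(eigenvalues)                        # real eigenvalues
assert np.allclose(C.T @ C, np.eye(3))                  # orthonormal eigenvectors: C^T = C^-1
assert np.allclose(A, C @ np.diag(eigenvalues) @ C.T)   # diagonalized
```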
Covariant and Contravariant matrices
This has nothing to do with type co- and contravariance that you find in programming languages (so I am told by a maths PhD).
Basically, “the differentials of the coordinates are the components of a contravariant vector. Similarly, … the partial derivatives of a function are the components of a covariant vector.” [Boas p530].
More concretely, “every quantity which under a transformation of coordinates, transforms like the coordinate differentials is called a contravariant tensor.… Every quantity which under a coordinate transformation, transforms like the derivatives of a scalar is called a covariant tensor.” (StackExchange).
I've seen various analogies to explain this on the Web but it's hard to do because “in three-dimensional Euclidean space,… contravariant and covariant tensors are equivalent.... The two types of tensors do differ in higher dimensions, however.” (MathWorld); specifically, they "become identical for Cartesian vectors" [Boas, p529].
In terms of their notation, “covariant vectors are representable as row vectors. Contravariant vectors are representable as column vectors.” (StackExchange)
As a mnemonic, remember “co-low-row”. That is, covariant vectors have a lower (subscript) index in notation and by convention are represented as row vectors.
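As a concrete (if toy) illustration of the transformation rules, and one I've made up myself rather than taken from Boas: under a linear change of coordinates x' = Ax, a coordinate differential dx transforms with A (contravariant) while the gradient of a scalar transforms with (A⁻¹)ᵀ (covariant), and their pairing is coordinate-independent:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(3, 3))     # linear change of coordinates: x' = A x
c = rng.normal(size=3)
dx = rng.normal(size=3)         # a coordinate differential (contravariant)

grad_f = c                      # gradient of f(x) = c . x in the old coordinates (covariant)

dx_new = A @ dx                          # contravariant: transforms with A
grad_new = np.linalg.inv(A).T @ grad_f   # covariant: transforms with (A^-1)^T

# The scalar grad . dx is the same in either coordinate system.
assert np.isclose(grad_new @ dx_new, grad_f @ dx)
```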
QR factorization
I've discussed this before. It decomposes a matrix into two matrices, Q and R, where Q is orthogonal and R is upper triangular. It does not give you eigenvalues, but you can use the related QR Algorithm to find them; note, though, that "The QR algorithm only converges using Wilkinson-Shift strategie[sic]" (StackExchange).
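Here's a NumPy sketch of both the factorization and a deliberately naive, unshifted QR iteration; I've rigged the matrix to have well-separated eigenvalues so that even the shift-free version converges (in general it needn't, per the quote above):

```python
import numpy as np

rng = np.random.default_rng(6)
Q0, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = Q0 @ np.diag([4.0, 2.0, 1.0]) @ Q0.T      # symmetric, eigenvalues 4, 2, 1

# QR factorization: A = QR with Q orthogonal and R upper triangular.
Q, R = np.linalg.qr(A)
assert np.allclose(A, Q @ R)
assert np.allclose(R, np.triu(R))

# Naive, unshifted QR iteration: A_{k+1} = R_k Q_k.
Ak = A.copy()
for _ in range(100):
    Qk, Rk = np.linalg.qr(Ak)
    Ak = Rk @ Qk

# The diagonal converges to the eigenvalues (only because they are well separated).
assert np.allclose(np.sort(np.diag(Ak)), [1.0, 2.0, 4.0], atol=1e-8)
```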
Singular Value Decomposition
Note that despite similarities, singular values are not eigenvalues, although singular values can be used to compute eigenvalues and eigenvectors. In fact, for a real, symmetric, positive semi-definite matrix the singular values and the eigenvalues are the same (for a general real, symmetric matrix they are the absolute values of the eigenvalues) - but then few matrices are real and symmetric.
An intuitive account of the difference between SVD and eigendecomposition can be found here at StackExchange:
"SVD says for any linear map, there is an orthonormal frame in the domain such that it is first mapped to a different orthonormal frame in the image space, and then the values are scaled. Eigendecomposition says that there is a basis, it doesn't have to be orthonormal... Consider the difference for a rotation matrix in ℜ2... Here, there are no real eigenvalues and this corresponds to there being no choice of basis which under the transformation is simply a scaling. On the other hand, SVD makes a lot of sense here because it says we can take the standard basis in the domain, map it to the rotated version of this basis (thought of as in the image space), and scale everything by 1."
The relationship between eigenvalues and singular values can be found here on StackExchange. Basically, if we are looking at a matrix X, let C = XᵀX. Diagonalize this (see above) so C = VDV⁻¹, but also note that X = USVᵀ from the SVD. Substitute this into the original equation for C and equate it to the diagonalized form (noting UᵀU = I) and you can see that λᵢ = sᵢ².
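Numerically (my example):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(5, 3))

s = np.linalg.svd(X, compute_uv=False)          # singular values of X
eigenvalues = np.linalg.eigvalsh(X.T @ X)       # eigenvalues of C = X^T X

# The eigenvalues of X^T X are the squares of the singular values of X.
assert np.allclose(np.sort(s**2), np.sort(eigenvalues))
```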
Interestingly, "the parallelization of dense eigenvalue problems is still an area of active research" (see here).