There is a lot of talk of gradients in Spark's machine learning library. Here are some notes to remind you what it's all about.
Approximation techniques
Newton's method uses derivatives; secant methods use finite differences.
"For problems with more than a few variables, it also quickly became clear that the cost of evaluating Jacobians or Hessians [see below] at every step could be exorbitant. Faster methods were needed that might make use of inexact Jacobians or Hessians and/or inexact solutions of the associated linear equations, while still achieving superlinear convergence. An early breakthrough of this kind was the discovery of quasi-Newton methods in the 1960s by Broyden, Davidon, Fletcher and Powell, in which partial information is used to generate steadily improving estimates of the true Jacobian or Hessian or its matrix factors." [1]
Under certain circumstances "the convergence is quadratic, meaning that the error at each stage is roughly the square of the error at the previous stage" [1]
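To make that quadratic convergence concrete, here is a minimal sketch (my own example, nothing Spark-specific) of Newton's method in one dimension, finding √2 as the root of f(x) = x² − 2:

```python
# A sketch of Newton's method: iterate x <- x - f(x)/f'(x).
# The function, derivative and starting point are illustrative choices.

def newton(f, f_prime, x, steps):
    """Run Newton's method, recording the absolute error after each step."""
    errors = []
    for _ in range(steps):
        x = x - f(x) / f_prime(x)
        errors.append(abs(x - 2 ** 0.5))
    return x, errors

root, errors = newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.5, steps=4)
# Each error is roughly the square of the previous one: the number of
# correct digits approximately doubles per iteration.
```

Printing `errors` shows the quadratic behaviour directly: each entry is roughly the square of the one before it.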
Taylor Expansion
A polynomial can be expressed as:
f(x) = f(a) + (x − a) f'(a) + (x − a)^{2} f''(a) / 2! + ... + (x − a)^{n} f^{n}(a) / n! + ...
where f' is the first derivative, f'' the second and f^{n} the nth (see Boas [2] p25 for a proof).
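A quick numerical sketch of the expansion (my own example, using e^x around a = 0, whose nth derivative is always e^a = 1):

```python
import math

# Approximate e^x by the first n terms of its Taylor series at a = 0
# and compare against math.exp.

def taylor_exp(x, n):
    """sum_{k=0}^{n-1} x^k / k! -- the truncated Taylor series of e^x at 0."""
    return sum(x ** k / math.factorial(k) for k in range(n))

approx = taylor_exp(0.5, 6)
exact = math.exp(0.5)
# With six terms the truncation error is already tiny for small x.
```

Increasing `n` shrinks the error further, which is exactly the "good enough" trade-off discussed below.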
The multivariate version (that is, when x and a are vectors) looks like this:
f(x) = f(a) + (x_{1} − a_{1}) ∂f(a)/∂x_{1} + (x_{2} − a_{2}) ∂f(a)/∂x_{2} + [(x_{1} − a_{1})^{2} ∂^{2}f(a)/∂x_{1}^{2} + 2 (x_{1} − a_{1})(x_{2} − a_{2}) ∂^{2}f(a)/∂x_{1}∂x_{2} + (x_{2} − a_{2})^{2} ∂^{2}f(a)/∂x_{2}^{2}] / 2! + ...
in the case of 2 dimensions (that is, when x = (x_{1}, x_{2}) and a = (a_{1}, a_{2})), although it could be any number of dimensions.
For a polynomial of degree 2, the first three terms are all you need as any further differentiation will always yield zero. Alternatively, we might choose that the first three terms are "good enough". Either way, let's now drop all the other terms. We also note that the above equation, when we drop the extraneous terms, can be more succinctly expressed as matrices, thus:
f(x) = f(a) + p^{T} ∇f(a) + p^{T} B p / 2
where p = x − a and B is the Hessian (see below). By putting some simple values into a square matrix, it's not hard to see that the last term of the matrix equation, when expanded, is the same as the more verbose equation above it.
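Here is that check done numerically (my own example quadratic, expanded around a = (1, 1)). For a degree-2 polynomial the expansion is exact, not merely an approximation:

```python
# Verify that f(a) + p^T grad f(a) + p^T B p / 2 reproduces a quadratic exactly.
# Example function: f(x1, x2) = x1^2 + 3*x1*x2 + 2*x2^2 (an illustrative choice).

def f(x1, x2):
    return x1 ** 2 + 3 * x1 * x2 + 2 * x2 ** 2

a = (1.0, 1.0)
grad = (2 * a[0] + 3 * a[1], 3 * a[0] + 4 * a[1])   # (df/dx1, df/dx2) at a
B = ((2.0, 3.0), (3.0, 4.0))                        # the Hessian; constant here

def taylor2(x1, x2):
    p = (x1 - a[0], x2 - a[1])
    linear = p[0] * grad[0] + p[1] * grad[1]        # p^T grad f(a)
    quad = sum(p[i] * B[i][j] * p[j]                # p^T B p, written out
               for i in range(2) for j in range(2))
    return f(*a) + linear + quad / 2
```

Evaluating `taylor2` anywhere gives the same value as `f`, which is the point of dropping the higher terms for a degree-2 polynomial.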
Gradients
∇ φ is a vector if φ is a scalar field (a field being "a collection of values with a plus operator and a times operator" [3]). In Cartesian 3 dimensions, it's:
∇ φ = grad φ = i ∂φ /∂x + j ∂φ /∂y + k ∂φ /∂z
where φ is a scalar and i, j and k are unit vectors although there is no reason to restrict ourselves to 3 dimensions. It's just commonly used in the mathematical sciences.
"The vector ∇ φ is perpendicular to the surface φ = const" (see Boas [2] p292, 293, 296)
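A small sketch of that perpendicularity claim (my own example): for φ = x² + y² + z², the level surfaces φ = const are spheres, so the gradient at any point should be radial, i.e. parallel to the position vector.

```python
# Analytic gradient of phi = x^2 + y^2 + z^2 (an illustrative scalar field).

def grad_phi(x, y, z):
    return (2 * x, 2 * y, 2 * z)

p = (1.0, 2.0, 2.0)          # a point on the sphere phi = 9
g = grad_phi(*p)
# g = (2, 4, 4) = 2 * p: radial, hence perpendicular to the sphere through p.
cross = (g[1] * p[2] - g[2] * p[1],
         g[2] * p[0] - g[0] * p[2],
         g[0] * p[1] - g[1] * p[0])
# A zero cross product confirms g is parallel to p.
```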
Divergence
If ∇ acts on a vector, V:
V = i V_{x}(x, y, z) + j V_{y}(x, y, z) + k V_{z}(x, y, z)
then
∇ . V = div V = ∂V_{x}/∂x + ∂V_{y}/∂y + ∂V_{z}/∂z
Again, this is the special case in 3D but we need not be restricted to just 3 dimensions.
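The definition translates directly into a central-finite-difference estimate (the vector field V = (x, y, z) is my own example; its divergence is 3 everywhere):

```python
# Numerical divergence: div V = dVx/dx + dVy/dy + dVz/dz,
# each term estimated by a central difference.

def divergence(V, x, y, z, h=1e-5):
    dVx = (V(x + h, y, z)[0] - V(x - h, y, z)[0]) / (2 * h)
    dVy = (V(x, y + h, z)[1] - V(x, y - h, z)[1]) / (2 * h)
    dVz = (V(x, y, z + h)[2] - V(x, y, z - h)[2]) / (2 * h)
    return dVx + dVy + dVz

div = divergence(lambda x, y, z: (x, y, z), 1.0, 2.0, 3.0)
# For V = (x, y, z) the divergence is 1 + 1 + 1 = 3 at every point.
```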
Laplacian
The Laplacian occurs in a variety of problems in physics (wave equations, diffusion [2] etc). Again, this is the 3D version but it can be generalized to any number of dimensions:
∇^{2} φ = div grad φ = ∂^{2}φ /∂x^{2} + ∂^{2}φ /∂y^{2} + ∂^{2}φ /∂z^{2}
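The same finite-difference idea gives a numerical Laplacian (the function is my own example; for φ = x² + y² + z² the Laplacian is 2 + 2 + 2 = 6 everywhere):

```python
# Numerical Laplacian: sum of second differences along each axis.

def laplacian(phi, x, y, z, h=1e-4):
    def second(f, *args, axis):
        """Second-difference estimate of d2f along one axis."""
        plus, minus = list(args), list(args)
        plus[axis] += h
        minus[axis] -= h
        return (f(*plus) - 2 * f(*args) + f(*minus)) / (h * h)
    return sum(second(phi, x, y, z, axis=i) for i in range(3))

lap = laplacian(lambda x, y, z: x * x + y * y + z * z, 1.0, 2.0, 3.0)
# lap should be close to 6 regardless of the point chosen.
```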
"Now, that is just the mathematical definition, depending on the field where the Laplacian appears it may have different interpretations. The one I like is the relation with the mean curvature of a surface (how much bent the surface is). The mean curvature is the divergence of the unit normal of a surface, if the normal is given by the gradient of a function (that means that your surface is a level curve, an equipotential, an isopotential, etc...) then the curvature is proportional to the Laplacian." (from Quora)
Jacobians
"The Jacobian of x, y with respect to s, t is the determinant" [2]
J = ∂(x, y)/∂(s, t) =
| ∂x/∂s  ∂x/∂t |
| ∂y/∂s  ∂y/∂t |
The Jacobian matrix is the matrix for which this is the determinant, namely:
J_{i,j} = ∂f_{i} / ∂x_{j}
From what I can tell, 'Jacobian' can refer to either the matrix or its determinant depending on the context.
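Both senses can be computed numerically. Here is a sketch (my own example, the polar-to-Cartesian map x = r cos θ, y = r sin θ, whose Jacobian determinant is famously r):

```python
import math

# Numerical Jacobian matrix: J[i][j] = df_i/dx_j by central differences.

def jacobian(f, x, h=1e-6):
    n, m = len(x), len(f(x))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        plus, minus = list(x), list(x)
        plus[j] += h
        minus[j] -= h
        fp, fm = f(plus), f(minus)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

polar = lambda v: (v[0] * math.cos(v[1]), v[0] * math.sin(v[1]))
J = jacobian(polar, [2.0, 0.5])               # at r = 2, theta = 0.5
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]   # the Jacobian *determinant*
# det should be close to r = 2.
```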
Hessians
A Hessian is a matrix that looks like:
H_{i,j} = ∂^{2}f / ∂x_{i} ∂x_{j}
Confusingly, the operator ∇^{2} is used for both the Laplacian and the Hessian. However, if ∇ is a column vector, the Hessian is the outer product ∇ ∇^{T} (a matrix) and the Laplacian is the inner product ∇^{T} ∇ (a scalar).
Note that the "trace of the Hessian (the sum of the diagonal elements) is the Laplace operator." (from Quora.com)
Note that the "Hessian is always symmetric" [1].
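Both claims, symmetry and trace-equals-Laplacian, can be checked numerically. A sketch with my own example function f = x²y + y³, whose Hessian is [[2y, 2x], [2x, 6y]]:

```python
# Numerical Hessian: H[i][j] = d2f / dx_i dx_j by central differences.

def hessian(f, x, h=1e-4):
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pp, pm, mp, mm = list(x), list(x), list(x), list(x)
            pp[i] += h; pp[j] += h
            pm[i] += h; pm[j] -= h
            mp[i] -= h; mp[j] += h
            mm[i] -= h; mm[j] -= h
            H[i][j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

f = lambda v: v[0] ** 2 * v[1] + v[1] ** 3     # f = x^2 y + y^3
H = hessian(f, [1.0, 2.0])
trace = H[0][0] + H[1][1]
# At (1, 2): H = [[4, 2], [2, 12]], so H is symmetric and its trace is
# 16 -- the same value the Laplacian of f gives at that point.
```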
[1] The Princeton Companion to Mathematics
[2] Mathematical Methods in the Physical Sciences, Boas
[3] Coding the Matrix, Klein