In mathematics, the Jacobi matrix is the matrix of first-order partial derivatives of the (vector-valued) function:
![{\displaystyle \mathbf {f} :\quad \mathbb {R} ^{n}\rightarrow \mathbb {R} ^{m}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/dd14b5324ca786c38323be13eb653fc24a1e51de)
(often f maps only from and to appropriate subsets of these spaces). The Jacobi matrix is m × n and consists of m rows of first-order partial derivatives of f with respect to x1, ..., xn. This matrix is also known as the functional matrix of Jacobi. For m = n, the determinant of the Jacobi matrix is known as the Jacobian. The Jacobi matrix and its determinant have several uses in mathematics:
- For m = 1, the Jacobi matrix appears in the second (linear) term of the Taylor series of f. Here the Jacobi matrix is 1 × n (the gradient of f, a row vector).
- The inverse function theorem states that if m = n and f is continuously differentiable, then f is invertible in a neighborhood of a point x0 whenever the Jacobian at x0 is non-zero.
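As a small illustration of the invertibility criterion (the example map is ours, not the article's), take f(x, y) = (x² − y², 2xy). Its Jacobi matrix is [[2x, −2y], [2y, 2x]], so the Jacobian is 4(x² + y²): non-zero everywhere except the origin, hence f is locally invertible away from (0, 0).

```python
# Illustrative sketch (example chosen here, not taken from the article):
# f(x, y) = (x^2 - y^2, 2xy) has Jacobi matrix [[2x, -2y], [2y, 2x]].
def jacobian_det(x, y):
    # Determinant of the Jacobi matrix; equals 4(x^2 + y^2).
    return 2 * x * 2 * x - (-2 * y) * 2 * y
```

By the inverse function theorem, f is locally invertible at every point where this value is non-zero, i.e. everywhere except the origin.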
The Jacobi matrix and its determinant are named after the German mathematician Carl Gustav Jacob Jacobi (1804–1851).
Definition
Let f be a map of an open subset T of ℝⁿ into ℝᵐ with continuous first partial derivatives,
![{\displaystyle \mathbf {f} :\quad T\rightarrow \mathbb {R} ^{m}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3a309c94b1f9d7dbe289645ec09f161886ec5d18)
That is, if
![{\displaystyle \mathbf {t} =(t_{1},\;t_{2},\;\ldots ,t_{n})\in T\subset \mathbb {R} ^{n},}](https://wikimedia.org/api/rest_v1/media/math/render/svg/2d4fa065ab68c05180ebdd5d12fd12e6a10d1350)
then
![{\displaystyle {\begin{aligned}x_{1}&=f_{1}(t_{1},t_{2},\ldots ,t_{n})\\x_{2}&=f_{2}(t_{1},t_{2},\ldots ,t_{n})\\\cdots &\cdots \\x_{m}&=f_{m}(t_{1},t_{2},\ldots ,t_{n}),\\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/c23d04a52a98cdef521977e7976221d3b82c3c2d)
with
![{\displaystyle \mathbf {x} =(x_{1},\;x_{2},\;\ldots ,x_{m})\in \mathbb {R} ^{m}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/283bc16289116643c56fc861653c82b6f487bd41)
The m × n functional matrix of Jacobi consists of partial derivatives
![{\displaystyle {\begin{pmatrix}{\dfrac {\partial f_{1}}{\partial t_{1}}}&{\dfrac {\partial f_{1}}{\partial t_{2}}}&\ldots &{\dfrac {\partial f_{1}}{\partial t_{n}}}\\\\{\dfrac {\partial f_{2}}{\partial t_{1}}}&{\dfrac {\partial f_{2}}{\partial t_{2}}}&\ldots &\dots \\\\&&\ddots \\\\{\dfrac {\partial f_{m}}{\partial t_{1}}}&\dots &\ldots &{\dfrac {\partial f_{m}}{\partial t_{n}}}\\\end{pmatrix}}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3ce745d0e93be67c86bfce27fac5b9457d49b020)
The determinant of this matrix, which is defined only when the matrix is square (m = n), is usually written as
![{\displaystyle \mathbf {J} _{\mathbf {f} }(\mathbf {t} )\quad {\hbox{or}}\quad {\frac {\partial {\big (}f_{1},f_{2},\ldots ,f_{n}{\Big )}}{\partial {\big (}t_{1},t_{2},\ldots ,t_{n}{\Big )}}}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/3300426180c059c2bcff1541f8699a85f401c977)
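The definition above can be checked numerically. The following sketch (function names are ours) approximates the m × n functional matrix of Jacobi by central differences, one column per variable t_j:

```python
# Minimal numerical sketch, not part of the article: approximate the
# m x n Jacobi matrix of a map f: R^n -> R^m by central differences.
def jacobi_matrix(f, t, h=1e-6):
    """Return the m x n matrix [df_i/dt_j] at the point t (a sequence)."""
    n = len(t)
    m = len(f(t))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        tp = list(t); tp[j] += h          # shift t_j forward
        tm = list(t); tm[j] -= h          # shift t_j backward
        fp, fm = f(tp), f(tm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J
```

For example, f(t1, t2) = (t1², t1 t2) has Jacobi matrix [[2t1, 0], [t2, t1]], which the finite-difference approximation reproduces to high accuracy.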
Example
Let T be the subset {(r, θ, φ) | r > 0, 0 < θ < π, 0 < φ < 2π} of ℝ³ and let f be defined by
![{\displaystyle {\begin{aligned}x_{1}\equiv x&=f_{1}(r,\theta ,\phi )=r\sin \theta \cos \phi \\x_{2}\equiv y&=f_{2}(r,\theta ,\phi )=r\sin \theta \sin \phi \\x_{3}\equiv z&=f_{3}(r,\theta ,\phi )=r\cos \theta \\\end{aligned}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/425d8e96cbcfb21e9e63caf7a1fd8909c4ed51cf)
The Jacobi matrix is
![{\displaystyle {\begin{pmatrix}\sin \theta \cos \phi &r\cos \theta \cos \phi &-r\sin \theta \sin \phi \\\sin \theta \sin \phi &r\cos \theta \sin \phi &r\sin \theta \cos \phi \\\cos \theta &-r\sin \theta &0\\\end{pmatrix}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/e14591808f6db15d9ab0a8dca51704e20499c847)
Its determinant can be obtained most conveniently by a Laplace expansion along the third row:
![{\displaystyle \cos \theta {\begin{vmatrix}r\cos \theta \cos \phi &-r\sin \theta \sin \phi \\r\cos \theta \sin \phi &r\sin \theta \cos \phi \end{vmatrix}}+r\sin \theta {\begin{vmatrix}\sin \theta \cos \phi &-r\sin \theta \sin \phi \\\sin \theta \sin \phi &r\sin \theta \cos \phi \end{vmatrix}}=r^{2}(\cos \theta )^{2}\sin \theta +r^{2}(\sin \theta )^{3}=r^{2}\sin \theta }](https://wikimedia.org/api/rest_v1/media/math/render/svg/8919b430feb0b1caf9151d1a5744f6c2b4fc5be7)
The quantities {r, θ, φ} are known as spherical polar coordinates, and the Jacobian of this transformation is r² sin θ.
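The expansion above can be verified numerically. The sketch below (helper names are ours) builds the analytic Jacobi matrix of the spherical-polar map and checks that its determinant equals r² sin θ:

```python
import math

# Sketch: the Jacobi matrix of (r, theta, phi) -> (x, y, z) for spherical
# polar coordinates, exactly as displayed in the article.
def spherical_jacobi_matrix(r, th, ph):
    return [
        [math.sin(th) * math.cos(ph), r * math.cos(th) * math.cos(ph), -r * math.sin(th) * math.sin(ph)],
        [math.sin(th) * math.sin(ph), r * math.cos(th) * math.sin(ph),  r * math.sin(th) * math.cos(ph)],
        [math.cos(th),               -r * math.sin(th),                 0.0],
    ]

def det3(A):
    # Laplace expansion of a 3 x 3 determinant along the first row.
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
```

At any point of T the determinant agrees with the closed form r² sin θ obtained above.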
Coordinate transformation
Let T be an open subset of ℝⁿ. The map f : T → ℝⁿ is a coordinate transformation if (i) f has continuous first partial derivatives on T, (ii) f is one-to-one on T, and (iii) the Jacobian of f is non-zero everywhere on T.
Multiple integration
It can be proved [1] that
![{\displaystyle \int _{\mathbf {f} (\mathbf {t} )}\phi (\mathbf {x} )\;\mathrm {d} \mathbf {x} =\int _{T}\phi {\big (}\mathbf {f} (\mathbf {t} ){\big )}\;\mathbf {J} _{\mathbf {f} }(\mathbf {t} )\;\mathrm {d} \mathbf {t} .}](https://wikimedia.org/api/rest_v1/media/math/render/svg/0bfeab509b9c8fd29166401478e101b4f16afbd6)
As an example we consider the spherical polar coordinates mentioned above. Here x = f(t) ≡ f(r, θ, φ) covers all of ℝ³, while T is the region {r > 0, 0 < θ < π, 0 < φ < 2π}. Hence the theorem states that
![{\displaystyle \iiint \limits _{\mathbb {R} ^{3}}\phi (\mathbf {x} )\;\mathrm {d} \mathbf {x} =\int \limits _{0}^{\infty }\int \limits _{0}^{\pi }\int \limits _{0}^{2\pi }\phi {\big (}\mathbf {x} (r,\theta ,\phi ){\big )}\;r^{2}\sin \theta \;\mathrm {d} r\mathrm {d} \theta \mathrm {d} \phi .}](https://wikimedia.org/api/rest_v1/media/math/render/svg/30ba041ae71083dfaf118ee6f57a8948765c5b34)
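The change-of-variables formula can be tested on a concrete integrand. For φ(x) = exp(−|x|²) the left-hand side is known exactly: it equals π^(3/2). The sketch below (a crude midpoint rule, with the r-integral truncated at r = 8 and the trivial φ-integral replaced by a factor 2π) approximates the right-hand side:

```python
import math

# Numerical check of the change-of-variables formula for phi(x) = exp(-|x|^2).
# Exact value of the integral over R^3 is pi**1.5; here we approximate the
# spherical-polar form with a composite midpoint rule.
def integral_spherical(nr=400, nth=200, rmax=8.0):
    dr = rmax / nr
    dth = math.pi / nth
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        f_r = math.exp(-r * r) * r * r        # integrand times Jacobian factor r^2
        for j in range(nth):
            th = (j + 0.5) * dth
            total += f_r * math.sin(th) * dr * dth
    # The integrand does not depend on phi, so the phi-integral contributes 2*pi.
    return total * 2.0 * math.pi
```

The result agrees with π^(3/2) ≈ 5.568 to within the quadrature error.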
Geometric interpretation of the Jacobian
The Jacobian has a geometric interpretation, which we illustrate for the case n = 3.
The following is a vector of infinitesimal length in the direction of increasing t1:
![{\displaystyle \mathrm {d} \mathbf {g} _{1}\equiv \lim _{\Delta t_{1}\rightarrow 0}{\frac {\mathbf {f} (t_{1}+\Delta t_{1},t_{2},t_{3})-\mathbf {f} (t_{1},t_{2},t_{3})}{\Delta t_{1}}}\Delta t_{1}={\frac {\partial \mathbf {f} }{\partial t_{1}}}\mathrm {d} t_{1}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/dab1607b77a9129dd9ba2e22c0ee96d4c3126230)
Similarly, we define
![{\displaystyle \mathrm {d} \mathbf {g} _{2}\equiv {\frac {\partial \mathbf {f} }{\partial t_{2}}}\mathrm {d} t_{2},\quad \mathrm {d} \mathbf {g} _{3}\equiv {\frac {\partial \mathbf {f} }{\partial t_{3}}}\mathrm {d} t_{3}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8352212455e0dda34e809b8e1f999ea68d272f0d)
The scalar triple product of these three vectors gives the volume of an infinitesimally small parallelepiped,
![{\displaystyle \mathrm {d} V=\mathrm {d} \mathbf {g} _{1}\cdot (\mathrm {d} \mathbf {g} _{2}\times \mathrm {d} \mathbf {g} _{3})={\frac {\partial \mathbf {f} }{\partial t_{1}}}\cdot \left({\frac {\partial \mathbf {f} }{\partial t_{2}}}\times {\frac {\partial \mathbf {f} }{\partial t_{3}}}\right)\;\mathrm {d} t_{1}\mathrm {d} t_{2}\mathrm {d} t_{3}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4a057037e670bee5263dbaecaba68d2d6c0e89ac)
The components of the first vector are given by
![{\displaystyle {\frac {\partial \mathbf {f} }{\partial t_{1}}}\equiv \left({\frac {\partial x}{\partial t_{1}}},{\frac {\partial y}{\partial t_{1}}},{\frac {\partial z}{\partial t_{1}}}\right)\equiv \left({\frac {\partial f_{1}}{\partial t_{1}}},{\frac {\partial f_{2}}{\partial t_{1}}},{\frac {\partial f_{3}}{\partial t_{1}}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/48a9230760a930747550b3f85ba80fb1bc9f59e1)
and similar expressions hold for the components of the other two derivatives.
It has been shown in the article on the scalar triple product that
![{\displaystyle {\frac {\partial \mathbf {f} }{\partial t_{1}}}\cdot \left({\frac {\partial \mathbf {f} }{\partial t_{2}}}\times {\frac {\partial \mathbf {f} }{\partial t_{3}}}\right)={\begin{vmatrix}{\dfrac {\partial f_{1}}{\partial t_{1}}}&{\dfrac {\partial f_{2}}{\partial t_{1}}}&{\dfrac {\partial f_{3}}{\partial t_{1}}}\\{\dfrac {\partial f_{1}}{\partial t_{2}}}&{\dfrac {\partial f_{2}}{\partial t_{2}}}&{\dfrac {\partial f_{3}}{\partial t_{2}}}\\{\dfrac {\partial f_{1}}{\partial t_{3}}}&{\dfrac {\partial f_{2}}{\partial t_{3}}}&{\dfrac {\partial f_{3}}{\partial t_{3}}}\\\end{vmatrix}}\equiv {\frac {\partial (f_{1},f_{2},f_{3})}{\partial (t_{1},t_{2},t_{3})}}\equiv \mathbf {J} _{\mathbf {f} }(\mathbf {t} ).}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b4470685960041c0b1f10a7d813045138517fc74)
Note that a determinant is invariant under transposition (interchange of rows and columns), so the fact that this determinant is the transpose of the matrix introduced above does not affect its value.
Finally,
![{\displaystyle \mathrm {d} V={\frac {\partial (f_{1},f_{2},f_{3})}{\partial (t_{1},t_{2},t_{3})}}\;\mathrm {d} t_{1}\mathrm {d} t_{2}\mathrm {d} t_{3}\equiv \mathbf {J} _{\mathbf {f} }(\mathbf {t} )\;\mathrm {d} \mathbf {t} .}](https://wikimedia.org/api/rest_v1/media/math/render/svg/bc9055b8433826e88a99144f8879ef9cf133a18c)
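The identity between the scalar triple product of the vectors ∂f/∂t_j and the Jacobian can also be confirmed numerically. In the sketch below (helper names are ours) the three derivative vectors are approximated by central differences for the spherical-polar map, and their triple product is compared with r² sin θ:

```python
import math

# Sketch: verify that df/dt1 . (df/dt2 x df/dt3) equals the Jacobian,
# using the spherical-polar map as the test case.
def f(t):
    r, th, ph = t
    return (r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th))

def partial(f, t, j, h=1e-6):
    # Central-difference approximation of the column vector df/dt_j.
    tp = list(t); tp[j] += h
    tm = list(t); tm[j] -= h
    fp, fm = f(tp), f(tm)
    return [(a - b) / (2 * h) for a, b in zip(fp, fm)]

def triple(a, b, c):
    # Scalar triple product a . (b x c).
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          + a[1] * (b[2] * c[0] - b[0] * c[2])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))
```

At, say, (r, θ, φ) = (2, 0.7, 1.3) the triple product reproduces the Jacobian r² sin θ up to finite-difference error.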
Reference
- ↑ T. M. Apostol, Mathematical Analysis, Addison-Wesley, 2nd ed. (1974), sec. 15.10