Vector calculus

Stan Zurek, Vector calculus, Encyclopedia Magnetica,

Vector calculus - a set of mathematical operations involving derivatives and integrals of vectors, which can represent functions or fields in a multidimensional space (2D, 3D, 4D, etc.). Vector calculus is an extension of the “ordinary” calculus, which operates on scalar derivatives and integrals.1)2)

Gradient $∇F$, divergence $∇·\mathbf{F}$, and curl $∇ × \mathbf{F}$ are the basic operations in vector calculus

The three main operators in vector calculus (gradient, divergence, and curl) quantify changes in fields.

Vector calculus is used widely in calculations of electromagnetic phenomena. The basic operations allow extracting information about the distribution of electromagnetic fields, energy associated with the field, electromagnetic radiation, and so on. The four Maxwell's equations are typically written in the vector calculus notation.3)

The topic of vector calculus (including derivatives and integrals) is very broad; this article covers only the fundamental concepts and is by no means exhaustive.

There are a number of excellent in-depth, but easy-to-follow books on the subject of using vector calculus in electrostatics, magnetostatics, and electromagnetic theory, for example by Purcell and Morin,4) Griffiths,5) Fleisch,6)7) to name just a few.


Summary of vector calculus operators

The nabla operator $∇$ (explained below) allows significant simplification of notation, especially for cylindrical and spherical systems of coordinates.8)

Vector calculus operators in a Cartesian system9)

gradient (input: scalar $F$, outcome: vector):
$$\text{grad} \, F ≡ ∇F ≡ \hat{\imath} \frac{∂F}{∂x} + \hat{\jmath} \frac{∂F}{∂y} + \hat{k} \frac{∂F}{∂z}$$

divergence (input: vector $\mathbf{F}$, outcome: scalar):
$$\text{div} \, \mathbf{F} ≡ ∇·\mathbf{F} ≡ \frac{∂ F_x}{∂x} + \frac{∂ F_y}{∂y} + \frac{∂ F_z}{∂z} $$

curl, also rot in some nomenclature (input: vector $\mathbf{F}$, outcome: vector):
$$\text{curl} \, \mathbf{F} ≡ ∇ × \mathbf{F} ≡ \left( \frac{∂ F_z}{∂y} - \frac{∂ F_y}{∂z} \right) \hat{\imath} + \left( \frac{∂ F_x}{∂z} - \frac{∂ F_z}{∂x} \right) \hat{\jmath} + \left( \frac{∂ F_y}{∂x} - \frac{∂ F_x}{∂y} \right) \hat{k} $$

Laplacian of a scalar (input: scalar $F$, outcome: scalar):
$$∇·(∇F) ≡ ∇^2F ≡ \frac{∂^2 F}{∂x^2} + \frac{∂^2 F}{∂y^2} + \frac{∂^2 F}{∂z^2}$$

Laplacian of a vector (input: vector $\mathbf{F}$, outcome: vector):
$$∇^2\mathbf{F} ≡ (∇^2 F_x) \hat{\mathbf{x}} + (∇^2 F_y) \hat{\mathbf{y}} + (∇^2 F_z) \hat{\mathbf{z}}$$
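These definitions can be cross-checked numerically. The sketch below (assuming SymPy is available; the sample fields are arbitrary choices for illustration) builds each operator from explicit partial derivatives:

```python
# Evaluate grad, div, curl and the scalar Laplacian from explicit
# partial derivatives, on arbitrary sample fields.
import sympy as sp

x, y, z = sp.symbols('x y z')

F = x**2 * y + sp.sin(z)          # sample scalar field
Fx, Fy, Fz = x*y, y*z, z*x        # sample vector field components

grad_F = [sp.diff(F, v) for v in (x, y, z)]                 # vector outcome
div_Fv = sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)   # scalar outcome
curl_Fv = [sp.diff(Fz, y) - sp.diff(Fy, z),
           sp.diff(Fx, z) - sp.diff(Fz, x),
           sp.diff(Fy, x) - sp.diff(Fx, y)]                 # vector outcome
lap_F = sum(sp.diff(F, v, 2) for v in (x, y, z))            # scalar outcome

print(grad_F)   # [2*x*y, x**2, cos(z)]
print(div_Fv)   # x + y + z
print(curl_Fv)  # [-y, -z, -x]
print(lap_F)    # 2*y - sin(z)
```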

Nabla ∇ operator

An operator such as square root $\sqrt{F}$ informs that an operation should be performed on the variable F that follows the operator. Similarly, a derivative operator $\frac{dF}{dx} = \frac{d}{dx}(F)$ informs that the derivative should be performed on the quantity F that follows.

In a 3D space, vectors can be split into orthogonal components, and partial derivatives can be calculated accordingly for each directional component. The special ∂ character (“curly d”) is used to denote that the derivatives are partial, so for one component it could be $\frac{∂F}{∂x} = \frac{∂}{∂x}(F)$, and more components can be used accordingly.

Thus, in a 3D space, the operator can be defined with the three directional unit vector components ($\hat{\imath}, \hat{\jmath}, \hat{k}$). For simplicity of notation, the whole operation can be represented by just one character called nabla or del, denoted with an upside-down triangle $\vec{∇} ≡ \mathbf{∇} ≡ ∇$ (with the arrow or bold notation typically omitted). In a Cartesian system:10)

$$ \vec{∇} ≡ \mathbf{∇} ≡ ∇ ≡ \hat{\imath} \frac{∂}{∂x} + \hat{\jmath} \frac{∂}{∂y} + \hat{k} \frac{∂}{∂z} $$

Nabla can be used in the “Laplacian” operator, referred to sometimes as “nabla squared” or “del squared”, denoting effectively double differentiation:

$$ \vec{∇}·\vec{∇} ≡ \mathbf{∇}·\mathbf{∇} ≡ ∇·∇ ≡ \vec{∇}^2 ≡ \mathbf{∇}^2 ≡ ∇^2 ≡ \frac{∂^2}{∂x^2} + \frac{∂^2}{∂y^2} + \frac{∂^2}{∂z^2} $$

It is also possible to define a Laplacian for tensors.11)

Vector operations

Vectors can be manipulated mathematically in some sense similar to scalars, but the complexity of operations is increased due to the number of components involved in such calculations.

Unit vector $\hat{f}$ multiplied by a scalar value is simply scaled by that value

Addition, subtraction, multiplication

Vector calculus allows analysis of electromagnetic phenomena in 3D space, as dictated by the Maxwell's equations.12)

Vectors can be manipulated algebraically (addition, subtraction, negation) in a manner similar to scalar values, but taking into account the angle between them.13) For instance, scalar addition is a simple arithmetic sum, but vector addition is a geometric sum of the components.

Vectors can be multiplied, but there are three operations:

  • In the scalar product the vector is multiplied by a scalar, which simply scales the vector by the magnitude of that scalar: $A · \mathbf{B} = A \mathbf{B} = \mathbf{B} A$ (the order is not important)14). To avoid confusion, in most publications the scalar multiplication dot is simply omitted. The dot is not strictly necessary, because the operation is dictated by the type of input values, and there is only one possibility when multiplying a scalar and a vector. Additionally, if the scalar value is negative, then the sense (direction) of the vector is reversed.
Vector dot product $c=\vec{a} · \vec{b}$ is a scalar, numerically equal to the area of a rectangle: $|\vec{a}| |\vec{b}| \, \cos{}(θ_{ab})$
  • In the vector dot product the result is proportional to the projection of one vector onto another, and the resulting value is a scalar, numerically equal to a scalar multiplication including the angle: $\mathbf{A}·\mathbf{B}=|\mathbf{A}| \, |\mathbf{B}| \cos{}(θ_{AB})$. The order of input vectors does not matter and thus $\mathbf{A}·\mathbf{B}=\mathbf{B}·\mathbf{A}$.
    A dot product of a vector with a normal vector of some surface gives the component perpendicular to that surface.
    A dot product of two perpendicular vectors is zero, so for example $\hat{\imath} · \hat{\jmath} = 0$. This simplifies outcomes of certain calculations, because some terms end up eliminated.
  • In the vector cross product the result is a vector pointing in a direction perpendicular to both input vectors, and is proportional to the area stretched between the two input vectors. In a general case this area has the shape of a parallelogram (but it can be rectangular or square). The order of input vectors matters, so that: $\mathbf{A} × \mathbf{B} = - \mathbf{B} × \mathbf{A} = \mathbf{C}$.
    Also, the chirality of the system of coordinates is important, because the result can change sign, if the calculations are performed in the right- or left-handed system.
    A cross product of two parallel vectors is zero, so for example $\hat{\imath} × \hat{\imath} = 0$. This simplifies outcomes of certain calculations, because some terms end up eliminated.
    The vector cross product can be written in a matrix notation, with calculations similar to the way a determinant of a matrix is computed (by multiplying the components along the diagonals)15), as shown in the equation below.
Vector cross product $\vec{c}=\vec{a} × \vec{b}$ is a vector, with length numerically equal to the area of the parallelogram: $|\vec{a}| |\vec{b}| \, \sin{}(θ_{ab})$; $\vec{c}$ is perpendicular to both $\vec{a}$ and $\vec{b}$
Definition of right-handed and left-handed systems of coordinates

$$\mathbf{A} × \mathbf{B} = \left| \begin{array}{ccc} \hat{\imath} & \hat{\jmath} & \hat{k} \\ A_x & A_y & A_z \\ B_x & B_y & B_z \end{array} \right| = ( A_y B_z - A_z B_y ) \hat{\imath} + ( A_z B_x - A_x B_z ) \hat{\jmath} + ( A_x B_y - A_y B_x ) \hat{k} $$
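The component formula above can be checked against a library implementation. A brief sketch (assuming NumPy is available; the input vectors are arbitrary):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# Components written out directly from the determinant-style formula
C = np.array([A[1]*B[2] - A[2]*B[1],
              A[2]*B[0] - A[0]*B[2],
              A[0]*B[1] - A[1]*B[0]])

assert np.allclose(C, np.cross(A, B))   # matches the library routine
assert np.allclose(np.cross(B, A), -C)  # order matters: B x A = -(A x B)
assert np.isclose(np.dot(A, C), 0.0)    # result is perpendicular to A...
assert np.isclose(np.dot(B, C), 0.0)    # ...and to B
print(C)  # [-3.  6. -3.]
```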

The dot and cross products can be combined, for example the triple scalar product returns a scalar value numerically equal to the volume of the parallelepiped stretched between all three input vectors:16) $$ \mathbf{A} · ( \mathbf{B} × \mathbf{C}) = \left| \begin{array}{ccc} A_x & A_y & A_z \\ B_x & B_y & B_z \\ C_x & C_y & C_z \end{array} \right| = A_x ( B_y C_z - B_z C_y ) + A_y ( B_z C_x - B_x C_z ) + A_z ( B_x C_y - B_y C_x ) $$

Also, a triple vector product can be calculated in a similar way, but it involves more tedious equations due to the number of components, and thus it is typically easier to use the equivalent identity which uses dot products instead: $$\mathbf{A} × ( \mathbf{B} × \mathbf{C}) = \left| \begin{array}{ccc} \hat{\imath} & \hat{\jmath} & \hat{k} \\ A_x & A_y & A_z \\ ( B_y C_z - B_z C_y ) & ( B_z C_x - B_x C_z ) & ( B_x C_y - B_y C_x ) \end{array} \right| = \mathbf{B} (\mathbf{A} ·\mathbf{C}) - \mathbf{C}( \mathbf{A} · \mathbf{B})$$
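Both triple products can be verified numerically. A short sketch (assuming NumPy; random vectors are used as a spot check):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))   # three random 3D vectors

# Triple scalar product equals the determinant of the component matrix
vol = np.dot(A, np.cross(B, C))
assert np.isclose(vol, np.linalg.det(np.array([A, B, C])))

# Triple vector product follows the "BAC-CAB" identity
lhs = np.cross(A, np.cross(B, C))
rhs = B * np.dot(A, C) - C * np.dot(A, B)
assert np.allclose(lhs, rhs)
```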

Interestingly, vector division cannot be easily defined and thus is not used in vector calculus.17) Instead, in some cases, such as in 2D analysis, it is much more advantageous to change the representation to complex numbers, which allow division and hence arriving at a correct result with more straightforward calculations.18)
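As a small illustration of the complex-number approach (plain Python; the example vectors are arbitrary), a 2D vector (x, y) can be stored as x + yj, and division then becomes well defined:

```python
# 2D vectors as complex numbers: division exists and inverts multiplication
a = 3 + 4j   # vector (3, 4)
b = 1 + 2j   # vector (1, 2)

c = a / b    # "divide" vectors: scales magnitudes and subtracts angles
assert abs(c * b - a) < 1e-12   # division is the inverse of multiplication
print(c)  # (2.2-0.4j)
```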

Gradient - grad

Gradient of scalar field points along the direction of the steepest ascent

If ∇ is made to operate on a scalar function F (such as scalar field), then the following notation for the gradient is used, with the result being a vector (even though the input is a scalar field):

Gradient = grad (scalar field) = vector
in a Cartesian system of coordinates
$$ \text{gradient}(F) ≡ \text{grad}(F) ≡ \vec{∇}F ≡ \mathbf{∇}F ≡ ∇F$$
$$ ∇F ≡ \left( \hat{\imath} \frac{∂}{∂x} + \hat{\jmath} \frac{∂}{∂y} + \hat{k} \frac{∂}{∂z} \right) F ≡ \hat{\imath} \frac{∂F}{∂x} + \hat{\jmath} \frac{∂F}{∂y} + \hat{k} \frac{∂F}{∂z} $$
The gradient operator $∇F$ quantifies the magnitude and direction of the steepest slope of a scalar field at the point of calculation19)

It should be noted that with the grad operation there is no multiplier dot between the operator and the variable: $∇F$

The simplified notation is the same for other systems of coordinates (cylindrical, spherical), but the full notation is somewhat more complex, due to trigonometric dependencies between the systems.20)

The gradient can be used to perform certain calculations of electromagnetic fields. For example, the value of electric field at a given point is equal to the negative of the gradient of electric potential at that point:21) $\mathbf{E} = -∇φ$. The negative sign is because of the convention that the electric field lines point from positive to negative charges, but the gradient points from lower to higher (namely from negative to positive).
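This relationship can be checked symbolically. The sketch below (assuming SymPy; constants such as $q/4πε_0$ are dropped for brevity) computes $\mathbf{E} = -∇φ$ for a point-charge potential:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
phi = 1 / r                  # point-charge potential, constants dropped

E = [-sp.diff(phi, v) for v in (x, y, z)]      # E = -grad(phi)
E_mag = sp.sqrt(sum(c**2 for c in E))

# Magnitude falls off as 1/r^2, as expected for a point charge
assert sp.simplify(E_mag - 1/r**2) == 0
```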

Divergence - div

Divergence is positive for outward flux of field, and negative for inward

Divergence quantifies the magnitude only (no direction) of the amount of a vector field which “flows” out of or into a specific region. In other words, the divergence measures the strength of the source (or sink) of a given field.22)

If ∇ is made to operate on a vector function F (such as vector field), then the following notation for the divergence is used, with the result being a scalar (even though the input is a vector field).

$$ \text{divergence}(\vec{F}) ≡ \left( \hat{\imath} \frac{∂}{∂x} + \hat{\jmath} \frac{∂}{∂y} + \hat{k} \frac{∂}{∂z} \right)·\left( \hat{\imath} F_x + \hat{\jmath} F_y + \hat{k} F_z \right) ≡ \text{scalar} $$

However, the dot product of any two parallel unit vectors is unity ($\hat{\imath} · \hat{\imath} = \hat{\jmath} · \hat{\jmath} = \hat{k} · \hat{k} = 1$), and the dot product of any two perpendicular unit vectors is zero ($\hat{\imath} · \hat{\jmath} = \hat{\jmath} · \hat{k} = \hat{k} · \hat{\imath} = 0$), and thus the equation simplifies.

Divergence = div (vector field) = scalar
in a Cartesian system of coordinates
$$ \text{divergence}(\vec{F}) ≡ \text{div}(\vec{F}) ≡ \vec{∇}·\vec{F} ≡ \mathbf{∇}·\mathbf{F} ≡ ∇·\mathbf{F} $$
$$ ∇·\mathbf{F} ≡ \left( \frac{∂ F_x}{∂x} + \frac{∂ F_y}{∂y} + \frac{∂ F_z}{∂z} \right) $$
Electrostatic field lines around an electric dipole

With the div operation there is the vector dot multiplier between the operator and the variable: $∇·\mathbf{F}$.

The simplified notation is the same for other systems of coordinates (cylindrical, spherical), but the full notation is significantly more complex, due to trigonometric dependencies between the systems.23)

The divergence calculation is useful for analysing fields produced by sources, such as the electrostatic field produced by electric charges. For such a field, the divergence is non-zero for a volume which surrounds a net charge, but it is zero if the charges cancel out or are not contained within the analysed closed volume.

Also, the Gauss's law for magnetism states that divergence of magnetic flux density is zero $∇·\mathbf{B} = 0$ which means that there are no point-like sources of magnetic field (i.e. there are no magnetic monopoles).24)
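The divergence-free character of the magnetic field can be confirmed symbolically. The sketch below (assuming SymPy; the prefactor $μ_0 I / 2π$ is dropped) uses the field circulating around a straight wire on the z axis:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Magnetic field circulating around a wire on the z axis (prefactor dropped)
Bx = -y / (x**2 + y**2)
By = x / (x**2 + y**2)
Bz = sp.Integer(0)

div_B = sp.simplify(sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z))
assert div_B == 0   # no sources or sinks of B anywhere off the wire
```

The check holds everywhere except on the wire itself, where this expression is singular.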


Curl - curl

The calculation of curl quantifies the amount and direction of rotation of a vector field, with the result being a vector perpendicular to the plane of rotation (in a similar sense as when a pseudovector is used to represent rotation in physics).

Curl quantifies rotation of a vector field

In a 3D Cartesian system, the curl of a vector field can be calculated from its orthogonal components, as follows:

$$ \text{curl} (\vec{F}) ≡ \left( \hat{\imath} \frac{∂}{∂x} + \hat{\jmath} \frac{∂}{∂y} + \hat{k} \frac{∂}{∂z} \right)×\left( \hat{\imath} F_x + \hat{\jmath} F_y + \hat{k} F_z \right) ≡ \left| \begin{array}{ccc} \hat{\imath} & \hat{\jmath} & \hat{k} \\ \frac{∂}{∂x} & \frac{∂}{∂y} & \frac{∂}{∂z} \\ F_x & F_y & F_z \end{array} \right| $$

which is equivalent to:

$$ \text{curl} (\vec{F}) ≡ \left( \frac{∂ F_z}{∂y} - \frac{∂ F_y}{∂z} \right) \hat{\imath} + \left( \frac{∂ F_x}{∂z} - \frac{∂ F_z}{∂x} \right) \hat{\jmath} + \left( \frac{∂ F_y}{∂x} - \frac{∂ F_x}{∂y} \right) \hat{k}$$

Curl = curl (vector field) = vector
in a Cartesian system of coordinates
$$ \text{curl} (\vec{F}) ≡ \vec{∇} × \vec{F} ≡ \mathbf{∇} × \mathbf{F} ≡ ∇ × \mathbf{F} $$
$$ ∇ × \mathbf{F} ≡ \left( \frac{∂ F_z}{∂y} - \frac{∂ F_y}{∂z} \right) \hat{\imath} + \left( \frac{∂ F_x}{∂z} - \frac{∂ F_z}{∂x} \right) \hat{\jmath} + \left( \frac{∂ F_y}{∂x} - \frac{∂ F_x}{∂y} \right) \hat{k} $$
Compass needles follow the direction of magnetic field (B or H) around the wire with current I, showing that magnetic field “circulates” around a wire with current

With the curl operation there is the vector cross multiplier between the operator and the variable: $∇ × \mathbf{F}$. The simplified notation is the same for other systems of coordinates (cylindrical, spherical), but the full notation is significantly more complex, due to trigonometric dependencies between the systems.25)

Calculation of curl is involved in the relationship between electric and magnetic field, in the Maxwell's equations.

Electric current is a source of magnetic field, which circulates around it according to the right-hand rule. Also, electric field circulates around a change of magnetic field, which is a basis for the Faraday's law of induction, which can be expressed in a differential form as: $$∇ × \mathbf{E} = - \frac{∂\mathbf{B}}{∂t}$$
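Faraday's law in this differential form can be spot-checked symbolically. The sketch below (assuming SymPy; the linearly ramping field and the assumed form of $\mathbf{E}$ are illustrative choices) verifies that a circulating electric field matches $-∂\mathbf{B}/∂t$:

```python
import sympy as sp

x, y, t, B0 = sp.symbols('x y t B_0')

Bz = B0 * t                         # uniform B field ramping linearly in time
Ex, Ey = B0 * y / 2, -B0 * x / 2    # assumed circulating E field

curl_E_z = sp.diff(Ey, x) - sp.diff(Ex, y)          # z component of curl E
assert sp.simplify(curl_E_z + sp.diff(Bz, t)) == 0  # curl E = -dB/dt
```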


Laplacian

A double application of the nabla operator is used very often in the analysis of electromagnetic fields (and in other branches of science).

For a scalar field, the Laplacian is based on the divergence operator, and the result is a scalar:26)

$$ \text{"Laplacian" of scalar} ≡ ∇·(∇F) ≡ ∇^2F ≡ \frac{∂^2 F}{∂x^2} + \frac{∂^2 F}{∂y^2} + \frac{∂^2 F}{∂z^2} ≡ \text{scalar} $$

For a vector field, the Laplacian is applied to each directional component, with the result being a vector. It is related to the curl and divergence operators through the identity $∇×(∇×\mathbf{F}) ≡ ∇(∇·\mathbf{F}) - ∇^2\mathbf{F}$, and in a Cartesian system it can be calculated from the scalar Laplacian of each component:27)

$$ \text{"Laplacian" of vector} ≡ ∇^2 \mathbf{F} ≡ (∇^2 F_x) \hat{\mathbf{x}} + (∇^2 F_y) \hat{\mathbf{y}} + (∇^2 F_z) \hat{\mathbf{z}} ≡ \text{vector} $$
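The component-wise vector Laplacian is tied to the curl and divergence by the identity $∇^2\mathbf{F} = ∇(∇·\mathbf{F}) - ∇×(∇×\mathbf{F})$, which can be verified symbolically (assuming SymPy; the sample field is arbitrary):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
F = [x**2 * y, y**2 * z, z**2 * x]   # arbitrary sample vector field

def grad(f): return [sp.diff(f, v) for v in X]
def div(v): return sum(sp.diff(c, u) for c, u in zip(v, X))
def curl(v):
    return [sp.diff(v[2], y) - sp.diff(v[1], z),
            sp.diff(v[0], z) - sp.diff(v[2], x),
            sp.diff(v[1], x) - sp.diff(v[0], y)]
def lap(f): return sum(sp.diff(f, u, 2) for u in X)

lhs = [lap(c) for c in F]                                    # component-wise Laplacian
rhs = [g - c for g, c in zip(grad(div(F)), curl(curl(F)))]   # grad(div) - curl(curl)
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```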

Calculation of the Laplacian is useful for finding extrema of a function, such as sources or sinks of a field.28)

Vector calculus theorems

Gauss's divergence theorem

Intuitive illustration of the Gauss's divergence theorem29): divergence is equal to net flux for a given volume

Gauss’s theorem is also called Ostrogradsky's theorem or simply the divergence theorem (its two-dimensional analogue is known as Green’s theorem).

The theorem states that the flux of a vector field through a closed surface is equal to the integral of the divergence of that field over the volume enclosed by that surface. Therefore, if the given volume does not contain a source (or sink) of the vector field, then the net flux through that surface must be zero (i.e. all flux entering the volume must also leave that volume).

It is possible to find a volume that encloses an electric charge, because each electric charge represents an electric monopole. But it is not possible to find a volume which encloses a magnetic charge, so the magnetic field is “divergenceless” and thus there are no magnetic monopoles.

Mathematically, the integral of a derivative (or divergence) over a given region such as volume $V$ is equal to the integral of the function itself over the whole boundary which delimits the region, in this case the closed surface $S$ which delimits the volume $V$.

The following notation can be used (using bold font to denote vector quantities):30)

$$ \iiint_V (\mathbf{∇}·\mathbf{F}) \, dV = \oiint_S ( \mathbf{F} · \mathbf{\hat{a}} ) \, dS $$
where: $V$ - volume, $\mathbf{F}$ - analysed vector field, $S$ - surface surrounding the volume $V$, $\hat{a}$ - unit vector normal to $S$
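The theorem can be confirmed on a simple case. The sketch below (assuming SymPy; the field and the unit cube are arbitrary choices) compares the volume integral of the divergence with the flux through the six faces:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = x**2, y**2, z**2     # arbitrary sample field

# Left side: integral of div F over the unit cube
div_F = sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)
vol = sp.integrate(div_F, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Right side: net outward flux through the six faces of the cube
flux = (sp.integrate(Fx.subs(x, 1) - Fx.subs(x, 0), (y, 0, 1), (z, 0, 1))
      + sp.integrate(Fy.subs(y, 1) - Fy.subs(y, 0), (x, 0, 1), (z, 0, 1))
      + sp.integrate(Fz.subs(z, 1) - Fz.subs(z, 0), (x, 0, 1), (y, 0, 1)))

assert vol == flux == 3
```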

Stokes' curl theorem

Intuitive illustration of the Stokes' curl theorem31)

In Stokes' theorem, also called the Kelvin–Stokes theorem, the fundamental theorem for curls, or simply the curl theorem, the integral of a derivative (or curl) over a given region such as surface $S$ is equal to the integral of the function itself along the boundary of this region, for example the closed curve $C$ denoting the perimeter which delimits the region.

In other words, the circulation of a vector around a given boundary is equal to net curl over the whole surface of the patch limited by that boundary.

Depending on the shape of a given patch, it might be easier to calculate one or the other, and thus this theorem is useful when simplifying more complex problems.

This theorem has practical implications, for example for current transducers such as the current transformer or Rogowski coil. The circulation of the magnetic field produced by a wire with current is the same, regardless of the path taken around that wire. This allows measurement of current through detection of magnetic field: it is sufficient that the transducer surrounds the wire, and it is not important where the wire sits inside the transducer.

Stokes' theorem can be written mathematically as:32)

$$ \iint_S (\mathbf{∇} \times \mathbf{F}) · d\mathbf{a} = \oint_C \mathbf{F} · d\mathbf{l} $$
where: $S$ - analysed surface, $\mathbf{F}$ - analysed vector field, $\mathbf{a}$ - vector normal to surface $S$, $C$ - closed curve enclosing the surface $S$, $\mathbf{l}$ - vector tangential to curve $C$
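The theorem can likewise be confirmed on a simple case. The sketch below (assuming SymPy; the rotating field and the unit disk are arbitrary choices) compares the surface integral of the curl with the line integral around the perimeter:

```python
import sympy as sp

t, r, th = sp.symbols('t r theta')

# For F = (-y, x, 0) the curl is (0, 0, 2); integrate its z component
# over the unit disk, with area element dA = r dr dtheta
surface = sp.integrate(2 * r, (r, 0, 1), (th, 0, 2*sp.pi))

# Line integral of F around the unit circle, parametrised by t
x_t, y_t = sp.cos(t), sp.sin(t)
Fx, Fy = -y_t, x_t
line = sp.integrate(Fx * sp.diff(x_t, t) + Fy * sp.diff(y_t, t), (t, 0, 2*sp.pi))

assert surface == line == 2 * sp.pi
```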

Maxwell's equations

See also the main article: Maxwell's equations.
Maxwell's equations in a differential form33)
Gauss's law for electrostatics $$ ∇ · \mathbf{E} = \frac {\rho_{charge}}{\epsilon_0}$$
Gauss's law for magnetism $$ ∇ · \mathbf{B} = 0$$
Faraday's law of electromagnetic induction $$ ∇ × \mathbf{E} = - \frac {\partial \mathbf{B}}{\partial t}$$
Ampère's circuital law $$ ∇ × \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac {\partial \mathbf{E}}{\partial t}$$

Maxwell's equations fully describe mathematically the interrelation between electric and magnetic fields. The early versions of these equations were first collated by the Scottish physicist James Clerk Maxwell. Subsequently they were unified and expressed in vector notation by Oliver Heaviside, so that today four fundamental equations are used, whose physical meaning can be summarised as follows:34)

  • Gauss's law for electrostatics: electric charges are sources (or sinks) of electric field.
  • Gauss's law for magnetism: there are no magnetic monopoles, so the net magnetic flux through any closed surface is zero.
  • Faraday's law of electromagnetic induction: a magnetic field changing in time produces a circulating electric field.
  • Ampère's circuital law: an electric current, or an electric field changing in time, produces a circulating magnetic field.

The equations can be mathematically written in many ways (e.g. differential or integral form) or different units (e.g. CGS or MKS). They can also be formulated on the basis of more fundamental theory of quantum electrodynamics.

In vacuum the equations simplify, because there are no charges, no currents and no material properties which have to be included in the constitutive equations.

System of coordinates

Vectors and tensors can be defined or expressed in different systems of coordinates, which are used for reducing the complexity of calculations of certain problems. This is because the mathematical equations can take slightly different form in each of such systems, and therefore might be easier to solve analytically (or numerically) in a given system.

The three most important systems are: Cartesian, cylindrical, and spherical. In 3D, each of them requires three values to uniquely identify a point or vector with respect to some reference point, e.g. (0, 0, 0).

Example of representation of the same vector in three different systems of coordinates: Cartesian (x, y, z), cylindrical (r, φ, z), and spherical (r, φ, θ)

In a Cartesian system, there are three orthogonal directions, for example (x, y, z), and therefore each vector can be combined from three components, each expressing a linear distance from a reference point (0, 0, 0). The distances can also be represented in a relative sense, defining the length and angle of the vector, but not its location as such.

In a cylindrical system, there are also three components. The location of a given point can be expressed by a radius r, angle φ, and height z of a cylindrically-shaped volume, so such a set of values uniquely identifies a given point or vector. Such a system is useful for performing calculations on a geometry which has one axis of rotational symmetry.

In a spherical system, the components denote a radius r and two angles φ and θ. Such system is useful for performing calculations in spherical or elliptical geometries.

The specific components differ between the systems, but they can be converted between them through strict mathematical relationships involving trigonometry.39)
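As an illustration of such conversions (plain Python; note that the polar angle θ is measured from the z axis here, and conventions vary between textbooks):

```python
import math

def to_cylindrical(x, y, z):
    # Cartesian (x, y, z) -> cylindrical (r, phi, z)
    return math.hypot(x, y), math.atan2(y, x), z

def to_spherical(x, y, z):
    # Cartesian (x, y, z) -> spherical (r, phi, theta), theta from the z axis
    r = math.sqrt(x*x + y*y + z*z)
    return r, math.atan2(y, x), math.acos(z / r)

# The point (1, 1, sqrt(2)) sits at radius 2, azimuth pi/4, polar angle pi/4
r, phi, theta = to_spherical(1.0, 1.0, math.sqrt(2.0))
assert abs(r - 2.0) < 1e-9
assert abs(phi - math.pi / 4) < 1e-9
assert abs(theta - math.pi / 4) < 1e-9
```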

Of course, the names of the component variables (for lengths and angles) can be chosen in an arbitrary way, depending on the preference in a given approach.

See also


vector_calculus.txt · Last modified: 2023/09/04 14:30 by stan_zurek

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Share Alike 4.0 International