In mathematical analysis, distributions (also known as generalized functions) are objects which generalize functions and probability distributions. They extend the concept of derivative to all locally integrable functions and beyond, and are used to formulate generalized solutions of partial differential equations. They are important in physics and engineering, where many non-continuous problems naturally lead to differential equations whose solutions are distributions, such as the Dirac delta distribution.

“Generalized functions” were introduced by Sergei Sobolev in 1935. They were independently introduced in the late 1940s by Laurent Schwartz, who developed a comprehensive theory of distributions.

The basic idea is to identify functions with abstract linear functionals on a space of unproblematic test functions (conventional and well-behaved functions). Operations on distributions can then be understood by moving them over to the test functions.

For example, if f : R → R is a locally integrable function, and φ : R → R is a smooth (that is, infinitely differentiable) function with compact support (so, identically zero outside of some bounded set), then we set

\left\langle f, \varphi \right\rangle = \int_{\mathbf{R}} f \varphi \,dx.

This is a real number which depends linearly and continuously on φ. One can therefore think of the function f as a continuous linear functional on the space which consists of all the “test functions” φ.
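
As a concrete numerical sketch of this pairing (a minimal illustration assuming NumPy; the particular choices of f and of the bump test function φ below are not from the text above, only illustrative), one can approximate ⟨f, φ⟩ by a Riemann sum:

    import numpy as np

    # Illustrative choices: f(x) = |x| is locally integrable, and phi is the
    # classic bump function, smooth and identically zero outside (-1, 1).
    def f(x):
        return np.abs(x)

    def phi(x):
        out = np.zeros_like(x)
        inside = np.abs(x) < 1
        out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
        return out

    # <f, phi> = integral of f * phi over R; phi vanishes outside (-1, 1),
    # so a Riemann sum over that interval suffices.
    x = np.linspace(-1.0, 1.0, 200001)
    dx = x[1] - x[0]
    print(np.sum(f(x) * phi(x)) * dx)   # a single real number, linear in phi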

Similarly, if P is a probability distribution on the reals and φ is a test function, then

\left\langle P, \varphi \right\rangle = \int_{\mathbf{R}} \varphi \, dP

is a real number that depends continuously and linearly on φ: probability distributions can thus also be viewed as continuous linear functionals on the space of test functions. This notion of “continuous linear functional on the space of test functions” is therefore used as the definition of a distribution.
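
For example, if P is the uniform distribution on the interval [0, 1], then

\left\langle P, \varphi \right\rangle = \int_{0}^{1} \varphi(x) \, dx,

the average value of φ over [0, 1].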

Such distributions may be multiplied by real numbers and added together, so they form a real vector space. In general it is not possible to define a multiplication for distributions, but a distribution may be multiplied by an infinitely differentiable function.

To define the derivative of a distribution, we first consider the case of a differentiable and integrable function f : R → R. If φ is a test function, then we have

\int_{\mathbf{R}} f'\varphi \,dx = - \int_{\mathbf{R}} f\varphi' \,dx

using integration by parts (note that φ is zero outside of a bounded set and that therefore no boundary values have to be taken into account). This suggests that if S is a distribution, we should define its derivative S' by

\left\langle S', \varphi \right\rangle = - \left\langle S, \varphi' \right\rangle.

It turns out that this is the proper definition; it extends the ordinary definition of the derivative, every distribution becomes infinitely differentiable, and the usual properties of derivatives hold.
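
To see how this definition handles a function that is not differentiable everywhere, the following numerical sketch (assuming NumPy; the choice f(x) = |x| and the particular bump function φ are purely illustrative) compares -⟨f, φ'⟩ with ⟨g, φ⟩ for g(x) = sign(x), the expected distributional derivative of |x|:

    import numpy as np

    def phi(x):
        # smooth test function with compact support in (-1, 1); illustrative choice
        out = np.zeros_like(x)
        inside = np.abs(x) < 1
        out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
        return out

    x = np.linspace(-1.0, 1.0, 200001)
    dx = x[1] - x[0]
    phi_vals = phi(x)
    dphi = np.gradient(phi_vals, dx)      # numerical approximation of phi'

    f = np.abs(x)                         # f(x) = |x|, not differentiable at 0
    g = np.sign(x)                        # candidate distributional derivative

    lhs = -np.sum(f * dphi) * dx          # -<f, phi'>, i.e. the definition of <f', phi>
    rhs = np.sum(g * phi_vals) * dx       # <sign, phi>
    print(lhs, rhs)                       # the two values agree up to discretization error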

Example: The Dirac delta (often called the Dirac delta function) is the distribution defined by

\left\langle \delta, \varphi \right\rangle = \varphi(0).

It is the derivative of the Heaviside step function: for any test function φ, we have

\left\langle H', \varphi \right\rangle = - \left\langle H, \varphi' \right\rangle = - \int_{-\infty}^{\infty} H(x)\varphi'(x)\,dx = - \int_{0}^{\infty} \varphi'(x)\,dx = \varphi(0) - \varphi(\infty) = \varphi(0) = \left\langle \delta, \varphi \right\rangle,

so H' = δ (here φ(∞) = 0 because φ has compact support). Similarly, the derivative of the Dirac delta is the distribution

\delta'(\varphi) = -\varphi'(0).

This latter distribution is our first example of a distribution which is neither a function nor a probability distribution.
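
The Heaviside computation above can also be checked numerically; the following sketch (assuming NumPy, with an illustrative bump function φ) approximates ⟨H', φ⟩ = -⟨H, φ'⟩ and compares it with φ(0):

    import numpy as np

    def phi(x):
        # smooth test function with compact support in (-1, 1); illustrative choice
        out = np.zeros_like(x)
        inside = np.abs(x) < 1
        out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
        return out

    x = np.linspace(-1.0, 1.0, 200001)
    dx = x[1] - x[0]
    dphi = np.gradient(phi(x), dx)        # numerical approximation of phi'
    H = (x >= 0).astype(float)            # Heaviside step function

    lhs = -np.sum(H * dphi) * dx          # <H', phi> = -<H, phi'>
    rhs = float(phi(np.array([0.0]))[0])  # <delta, phi> = phi(0) = e^{-1}
    print(lhs, rhs)                       # both approximately 0.3679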

Formal definition

In the sequel, real-valued distributions on an open subset U of Rn will be formally defined. (With minor modifications, one can also define complex-valued distributions, and one can replace Rn by any smooth manifold.)

First, the space D(U) of test functions on U needs to be explained. A function φ : U → R is said to have compact support if there exists a compact subset K of U such that φ(x) = 0 for all x in U \ K. The elements of D(U) are the infinitely differentiable functions φ : U → R with compact support (also known as bump functions). This is a real vector space. We turn it into a topological vector space by stipulating that a sequence (or net) (φk) converges to 0 if and only if there exists a compact subset K of U such that all φk are identically zero outside K, and for every ε > 0 and every natural number d ≥ 0 there exists a natural number k0 such that for all k ≥ k0 the absolute value of every d-th derivative of φk is smaller than ε. With this definition, D(U) becomes a complete topological vector space (in fact, a so-called LF-space).
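
A standard example of such a bump function (here on U = Rn, supported in the closed unit ball) is

\varphi(x) = \begin{cases} e^{-1/(1-|x|^{2})} & |x| < 1 \\ 0 & |x| \ge 1, \end{cases}

which is infinitely differentiable everywhere; at the boundary |x| = 1, all of its derivatives vanish.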

The dual space of the topological vector space D(U), consisting of all continuous linear functionals S : D(U) → R, is the space of all distributions on U; it is a vector space and is denoted by D'(U). The dual pairing between a distribution S in D′(U) and a test function φ in D(U) is denoted using angle brackets thus:

\mathrm{D}'(U) \times \mathrm{D}(U) \ni (S, \varphi) \mapsto \langle S, \varphi \rangle \in \mathbf{R}.

A function f : U → R is called locally integrable if it is Lebesgue integrable over every compact subset K of U. This is a large class of functions which includes all continuous functions. The topology on D(U) is defined in such a fashion that any locally integrable function f yields a continuous linear functional on D(U) whose value on the test function φ is given by the Lebesgue integral ∫U fφ dx. Two locally integrable functions f and g yield the same element of D'(U) if and only if they are equal almost everywhere. Similarly, every Radon measure μ on U (which includes the probability distributions) defines an element of D'(U) whose value on the test function φ is ∫φ dμ.
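
For example, f(x) = 1/√|x| is unbounded near 0 but still locally integrable on R, and therefore defines a distribution; by contrast, f(x) = 1/x is not locally integrable near 0, so the formula ∫ fφ dx does not define a distribution (for 1/x one instead uses the Cauchy principal value).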

As mentioned above, integration by parts suggests that the derivative ∂S/∂xk of the distribution S in the direction xk should be defined using the formula

\left\langle \frac{\partial S}{\partial x_{k}}, \varphi \right\rangle = - \left\langle S, \frac{\partial \varphi}{\partial x_{k}} \right\rangle

for all test functions φ. In this way, every distribution is infinitely differentiable, and the derivative in the direction xk is a linear operator on D′(U). In general, if α = (α1, …, αn) is an arbitrary multi-index and ∂α denotes the associated mixed partial derivative operator, the mixed partial derivative ∂αS of the distribution S ∈ D′(U) is defined by

\left\langle \partial^{\alpha} S, \varphi \right\rangle = (-1)^{|\alpha|} \left\langle S, \partial^{\alpha} \varphi \right\rangle \mbox{ for all } \varphi \in \mathrm{D}(U).
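
For example, for the Dirac delta δ at the origin in two variables and the multi-index α = (1, 1), this gives

\left\langle \partial^{\alpha} \delta, \varphi \right\rangle = (-1)^{2} \left\langle \delta, \partial^{\alpha} \varphi \right\rangle = \frac{\partial^{2} \varphi}{\partial x_{1} \partial x_{2}}(0, 0).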

The space D'(U) is turned into a locally convex topological vector space by defining that the sequence (Sk) converges towards 0 if and only if Sk(φ) → 0 for all test functions φ; this topology is called the weak-* topology. This is the case if and only if Sk converges uniformly to 0 on all bounded subsets of D(U). (A subset E of D(U) is bounded if there exists a compact subset K of U and numbers dn such that every φ in E has its support in K and has its n-th derivatives bounded by dn.) With respect to this topology, differentiation of distributions is a continuous operator; this is an important and desirable property that is not shared by most other notions of differentiation. Furthermore, the test functions (which can themselves be viewed as distributions) are dense in D'(U) with respect to this topology.
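
A concrete instance of this convergence: the functions fk equal to k/2 on [-1/k, 1/k] and 0 elsewhere define distributions Sk with Sk → δ in the weak-* topology, since ⟨Sk, φ⟩ is the average of φ over [-1/k, 1/k] and tends to φ(0). A minimal numerical sketch of this (assuming NumPy, with an illustrative bump function φ):

    import numpy as np

    def phi(x):
        # smooth test function with compact support in (-1, 1); illustrative choice
        out = np.zeros_like(x)
        inside = np.abs(x) < 1
        out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
        return out

    x = np.linspace(-1.0, 1.0, 2000001)
    dx = x[1] - x[0]
    phi_vals = phi(x)

    # S_k is the distribution given by the function f_k = k/2 on [-1/k, 1/k], 0 elsewhere.
    for k in (1, 10, 100, 1000):
        f_k = np.where(np.abs(x) <= 1.0 / k, k / 2.0, 0.0)
        print(k, np.sum(f_k * phi_vals) * dx)   # <S_k, phi> -> phi(0) = e^{-1}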

 

If ψ : U → R is an infinitely often differentiable function and S is a distribution on U, we define the product Sψ by (Sψ)(φ) = S(ψφ) for all test functions φ. The ordinary product rule of calculus remains valid. 
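
For example, if δ is the Dirac delta and ψ is infinitely differentiable, then (δψ)(φ) = δ(ψφ) = ψ(0)φ(0), so δψ = ψ(0)δ; in particular, multiplying δ by the function ψ(x) = x gives the zero distribution.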
