Integration
From Wikipedia, the free encyclopedia.

Integration is a tool of calculus. Isaac Newton and Gottfried Wilhelm Leibniz independently discovered its foundations in the seventeenth century.

Integration can be regarded in two different ways. The first is as an inverse to differentiation. The second is as the area under a curve. The Fundamental Theorem of Calculus states that these two ideas are equivalent.

The antiderivative approach is, a priori, perhaps the more limited of the two. From the elementary formulae of calculus we can observe that the only functions that can be differentiated to obtain 1 are those of the form x+c, where c is some number. Unfortunately this is not quite satisfactory, for there will always be functions that aren't listed in the table of derivatives. However, there is a theorem here, and it reads as follows.

Theorem: If f(x) and g(x) are differentiable functions defined on some interval whose derivatives f'(x) and g'(x) are identical, then f(x) and g(x) may only differ by a constant.

This can be shown using the mean value theorem: apply it to the difference h(x) = f(x) - g(x), whose derivative is zero everywhere on the interval, to conclude that h must be constant.

Thus, we can now take the table of derivatives and provide a table of antiderivatives, knowing that each antiderivative is determined only up to a constant c. Unfortunately, this table of antiderivatives is not quite as easy to use as the table of derivatives, and it does not allow us to integrate arbitrary formulae. For instance, it can be shown that the antiderivative of the function f(x) = exp(x²) cannot be written in closed form, that is, as a formula involving the usual polynomials, trigonometric functions, roots and transcendental functions.
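One can check this last claim with a computer algebra system. The sketch below is illustrative only; it assumes the Python library SymPy is available and uses its integrate function, which expresses the antiderivative through the non-elementary special function erfi rather than through an elementary closed form.

    # Illustrative check (assumes SymPy is installed): SymPy cannot express the
    # antiderivative of exp(x**2) with elementary functions; it falls back on
    # the special function erfi.
    import sympy

    x = sympy.symbols('x')
    print(sympy.integrate(sympy.exp(x**2), x))   # sqrt(pi)*erfi(x)/2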

The alternate approach, that of the area under the curve, is more geometric in nature. Let us begin with a simple example. Take f(x) = 1 over the interval (0,1). A not very interesting question that we can ask is: what is the area under that curve and above the x axis? There is a certain problem of axiomatization here, but most of us will agree that this area should be 1. More generally, we are hoping that the area under f(x) = A over the interval (p,q) will be A(q-p). If A<0, we have to be willing to accept a negative area, which we do.

0) if f(x) = A over (p,q) and 0 elsewhere, its integral should be A(q-p)

Here and in the sequel, we will denote the area under the curve f by ∫f. If there could be confusion as to what the variable of f is, the more verbose ∫f(x)dx will be used to specify that f is a function of x and that we are integrating with respect to that variable x.

In addition, we would like that

1) if f(x) = f1(x) + f2(x) + ... + fn(x), then ∫f is the sum of the areas under fk(x), k = 1..n.

2) if f(x) ≤ g(x) ≤ h(x), then the areas should satisfy ∫f ≤ ∫g ≤ ∫h.

Given the preceding constraints (and they are even more than is necessary), we can assign reasonable values to ∫f for all the usual formulae of calculus, such as polynomials and trigonometric functions. It is possible to show, for at least some of the formulae, that the integral thus obtained corresponds in some way to the antiderivative previously discussed. First, we define ∫(a,b)f to be the integral of the function f(x) when restricted to (a,b). Then, the following is true.

Theorem (Fundamental Theorem of Calculus): If F is an antiderivative of f, then ∫(a,b) f(x)dx = F(b) - F(a). Conversely, if F(x) is defined to be ∫(a,x)f(t)dt and if f is continuous near x, then F'(x) = f(x).

There are improved versions of the fundamental theorem of calculus; for instance, the continuity hypothesis can be relaxed somewhat.
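To see the first half of the theorem in action numerically, here is a small sketch in Python; the particular choice f(x) = 3x² with antiderivative F(x) = x³ on (0,1) is an illustrative assumption, and the sum below is simply a fine approximation of the area under the curve.

    # A rough numerical sketch of the Fundamental Theorem of Calculus:
    # a fine Riemann-style sum for f(x) = 3x**2 over (0, 1) should be close
    # to F(1) - F(0) = 1, where F(x) = x**3 is an antiderivative of f.

    def f(x):
        return 3 * x**2

    def F(x):
        return x**3

    a, b, n = 0.0, 1.0, 100000
    width = (b - a) / n
    area = sum(f(a + (i + 0.5) * width) * width for i in range(n))

    print(area)          # approximately 1.0
    print(F(b) - F(a))   # exactly 1.0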

To discuss the integral more precisely, we need a simple definition. The indicator function 1_(a,b)(x) of an interval is the function that takes the value 1 over the interval (a,b) and 0 elsewhere. Note that, from property 0, the integral of C·1_(a,b), where C is a constant, is C(b-a). Property 1 then allows us to compute

3) If f(x) = ∑ c_k 1_(a_k,b_k), then ∫f = ∑ c_k (b_k - a_k). Such a function f is called a step function.
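Formula 3) is simple enough to compute directly. In the sketch below (an illustrative representation, not a standard one), a step function is stored as a list of triples (c_k, a_k, b_k), meaning "value c_k on the interval (a_k, b_k)".

    # Integral of a step function: the sum of c_k * (b_k - a_k).
    def integrate_step_function(pieces):
        return sum(c * (b - a) for (c, a, b) in pieces)

    # value 2 on (0, 1) and value -1 on (1, 3): integral is 2*1 + (-1)*2 = 0
    print(integrate_step_function([(2, 0, 1), (-1, 1, 3)]))   # 0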

Armed with this, we can define upper sums and lower sums. Fix an arbitrary function g. If h is a step function with g(x) ≤ h(x), then because of property 2, we want ∫g ≤ ∫h. Likewise, if f is a step function with f(x) ≤ g(x), then we want ∫f ≤ ∫g.

If ∫h - ∫f can be made arbitrarily small, then we are left with a unique choice for ∫g. This choice is called the Riemann integral of g, and is sometimes indicated by writing the symbol ℛ before the integral sign. It may not be immediately clear that this entire process leads to consistent choices, so there is a certain amount of checking left to be done here.
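For a concrete picture of the squeezing, here is a sketch under the simplifying assumption that g is increasing on (a,b): on a uniform partition, the lower step function takes the left endpoint value of g on each piece, the upper one takes the right endpoint value, and refining the partition pushes the two sums together.

    # Lower and upper sums for an increasing function g on a uniform partition.
    def lower_and_upper_sums(g, a, b, n):
        width = (b - a) / n
        lower = sum(g(a + i * width) * width for i in range(n))
        upper = sum(g(a + (i + 1) * width) * width for i in range(n))
        return lower, upper

    def g(x):
        return x**2             # increasing on (0, 1)

    for n in (10, 100, 1000):
        print(n, lower_and_upper_sums(g, 0.0, 1.0, n))
    # both sums approach 1/3, the Riemann integral of x**2 over (0, 1)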

An interesting phenomenon is that certain functions are left without a Riemann integral. For instance, the function f(x) that takes the value 1 whenever x is a rational number in the interval (0,1) and 0 otherwise has no Riemann integral: all of its upper sums are 1 and all of its lower sums are 0, so we do not know what value to assign to ∫f.

Here and in the sequel, a function that has a Riemann integral is said to be Riemann integrable.

This may not sound like a big problem, but this function can be written as the limit of functions that are Riemann integrable and of integral 0: enumerate the rationals of (0,1) as q1, q2, q3, ... and let fk be the function that takes the value 1 at q1, ..., qk and 0 elsewhere; each fk is integrable with ∫fk = 0, yet the limit of the fk is exactly the function above. This means that it is possible to have a sequence of functions fk converging monotonically to a function f, with ∫fk perfectly well behaved, yet with no handle on ∫f, and this even for silly little positive functions fk and f!

This problem has a solution, but we need to relax our requirement number 2, and replace it by the following:

2') If 0 ≤ f(x) ≤ g(x), then we want 0 ≤ ∫f ≤ ∫g. If fk converges to f and, for every x, fk(x) is monotonically increasing in k, then we want ∫fk to converge to ∫f.

Unfortunately, we now lose all negative-valued functions (these can be recovered afterwards by writing a function as the difference of its positive and negative parts). It is also not clear that these requirements are consistent. They turn out to work, though a large amount of proving is required to show that everything is kosher.
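Requirement 2') can be seen in action numerically. The sketch below uses the illustrative unbounded positive function f(x) = 1/√x on (0,1) and its bounded truncations fk = min(f, k); the truncations increase pointwise to f, and their integrals, here approximated by fine sums, climb toward ∫f = 2.

    # Monotone truncations f_k = min(f, k) of f(x) = 1/sqrt(x) on (0, 1);
    # the exact integral of f_k is 2 - 1/k, which increases to 2, the integral of f.
    import math

    def truncated(x, k):
        return min(1.0 / math.sqrt(x), k)

    n = 200000
    width = 1.0 / n
    for k in (1, 2, 5, 10, 50):
        total = sum(truncated((i + 0.5) * width, k) * width for i in range(n))
        print(k, round(total, 3))    # roughly 2 - 1/k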

The integral we get from this process is called the Lebesgue integral. It can be shown that all Riemann integrable functions are also Lebesgue integrable. Furthermore, the Lebesgue integral plays better with limit processes than does the Riemann integral.

There are more general integration theories, but the basic ideas here are usually recycled. One way of generalizing is to replace the domain by some arbitrary set X. On R, we could easily measure the length of an interval; on an arbitrary set, we evaluate the size of subsets using a measure, which is the subject of measure theory.
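As a toy illustration of this idea (the point names and weights below are made up for the example), a measure on a finite set X can be given by assigning a mass to each point, and the integral of a function then becomes a weighted sum rather than an area.

    # A toy measure on a finite set: each point carries a mass, and the
    # integral of f is the mass-weighted sum of its values.
    mu = {'a': 0.5, 'b': 2.0, 'c': 0.25}        # measure of each point of X
    values = {'a': 1.0, 'b': -3.0, 'c': 4.0}    # the function f on X

    def integral(f, mu):
        return sum(f[x] * mu[x] for x in mu)

    print(integral(values, mu))   # 0.5*1.0 + 2.0*(-3.0) + 0.25*4.0 = -4.5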

Another possible direction is to let the image of f lie in some Banach space or some other topological vector space. Given a measure (or, if the domain is still R, proceeding in a way similar to the above), we can write down "step functions" and their integrals, and then attempt to generalize to a larger class of functions using either the Lebesgue approach or the Riemann approach. Some complications are introduced by the lack of an order in a vector space.
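To give a flavour of the vector-valued case, here is a sketch in which the values c_k of a step function live in the plane rather than on the real line; the step-function formula from 3) still makes sense, even though there is no longer an order with which to compare functions. The representation is again an illustrative assumption.

    # A step function with values in R^2 (pairs of numbers): its integral is
    # still the sum of c_k * (b_k - a_k), computed coordinate by coordinate.
    def integrate_vector_step_function(pieces):
        total = [0.0, 0.0]
        for (c, a, b) in pieces:
            total[0] += c[0] * (b - a)
            total[1] += c[1] * (b - a)
        return total

    # value (1, 2) on (0, 1) and value (0, -4) on (1, 2)
    print(integrate_vector_step_function([((1, 2), 0, 1), ((0, -4), 1, 2)]))
    # [1.0, -2.0]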

See also functional analysis, harmonic analysis, complex analysis, Dirac delta function.
Reference: Table of Integrals
