{{subpages}}
In mathematics, a '''series''' is the cumulative sum of a given [[sequence]] of terms. Typically, these terms are real or complex numbers, but much more generality is possible.


For example, given the sequence of the natural numbers 1, 2, 3, ..., the series is <br>
1, <br>
1 + 2, <br>
1 + 2 + 3, ...<br>
The above writing stresses the 'cumulative' nature of the series and is justified by the mathematical definition we introduce below, but a more direct notation is typically used: 1 + 2 + 3 + ... .


Depending on the number of terms, the series may be finite or infinite. The former is relatively easy to deal with. The finite series is identified with the sum of all its terms and &mdash; apart from elementary algebra &mdash; there is no particular theory that applies. It turns out, however, that much care is required when manipulating infinite series. For example, some simple operations borrowed from elementary algebra &mdash; such as a change of the order of the terms &mdash; often lead to unexpected results. So it is sometimes tacitly understood, especially in [[mathematical analysis|analysis]], that the term "series" refers to an infinite series. In what follows we adopt this convention and concentrate on the theory of the infinite case.


==Motivations and examples==
[[Image:Geometric_series.png|thumb|350px|Fig. 1. Graphical representation of a geometric series.]]
Given a series, an obvious question arises: does it make sense to talk about the sum of ''all'' terms? This is not always the case. Consider a simple example in which the general term is constant and equal to 1, say. That is, the series reads 1+1+1+1+... (without end). Clearly, the sum of all terms cannot be finite. In mathematical language, the series ''diverges'' to infinity.<ref>The adjective ''divergent'' is used as well: one speaks of a divergent series.</ref> There is not much to say about such an object. If we want to build an interesting theory, that is, to have some examples, operations and theorems, we need to deal with ''convergent'' series, that is, series for which the sum of all terms is well-defined.


Actually, are there any? Or does every series diverge like this?
Consider the following series of decreasing terms, where every term is half the previous term:
:<math>\frac{1}{2}+\frac{1}{4}+\frac{1}{8} +\frac{1}{16} +\frac{1}{32} + \ldots</math>
This is a special example of what is called a [[geometric series]]. The sum is finite!
Instead of a rigorous proof, we present a picture (see Fig. 1) which gives a geometric interpretation. Each term is represented by a rectangle of the corresponding area. At any moment (given any number of rectangular "chips on the table"), the next rectangle covers exactly one half of the remaining space. Thus, the chips will never cover more than the unit square. In other words, the sum increases when more terms are added, but it does not go to infinity: it never exceeds 1. The series ''converges''.<ref>A simple calculation shows that the sum is actually equal to one. More details are in the [[geometric series]] article.</ref>
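The picture can also be confirmed by a short computation: the ''n''-th partial sum is
:<math>S_n = \frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^n} = 1-\frac{1}{2^n},</math>
as one checks by multiplying both sides by 2, and <math>1-1/2^n \to 1</math> as <math>n\to\infty.</math>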
 
One might think that any decreasing sequence of terms would eventually lead to a convergent series. This, however, is not the case. Consider for example the series<ref>This series is often called the [[harmonic series]].</ref>
:<math> 1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\ldots</math>
To see that it diverges to infinity, let us form the following groups of terms (we may, and do, forget the 1 at the beginning):
:1/2,
:1/3+1/4,
:1/5+1/6+1/7+1/8,
:1/9+1/10+1/11+1/12+1/13+1/14+1/15+1/16,
etc. We will show that the sum of each group is at least 1/2. Notice that each group ends with a term of the form <math>1/2^n,</math> the smallest one in the group. Note also that the group that ends with <math>1/4</math> has two members, the one that ends with <math>1/8</math> has four members, and so on. Generally, a group that ends with <math>1/2^n</math> has <math>2^{n-1}</math> terms, each of which is at least as big as the smallest one at the end. So pick a group and replace each term by the smallest one. Then the new sum is easy to calculate:
:<math> 2^{n-1} \text{ members} \times \frac{1}{2^n} \text{ (the smallest term)} = \frac{1}{2}.</math>
Obviously, the original sum is at least as big as this, and our claim follows: the sum of ''each'' group is at least one half. And since there are infinitely many such groups, the sum of all terms is not finite.
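In terms of partial sums, the argument shows that
:<math>S_{2^n} \ge 1 + \frac{n}{2},</math>
since the first <math>2^n</math> terms consist of the leading 1 followed by <math>n</math> complete groups. The right-hand side grows without bound, so the partial sums cannot have a finite limit.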
 
These examples show that we need criteria to decide whether a given series is convergent. Before such tools can be developed, we need some mathematical notation.


==Formal definition==
Given a sequence <math>a_1, a_2, a_3,\ldots</math> of elements that can be added, let
:<math>S_n = a_1 + a_2 + \cdots + a_n,\qquad n=1,2,3,\ldots</math>
Then, the '''series''' is defined as the sequence <math>(S_n)</math> and denoted by <math>\textstyle\sum_{n=1}^\infty a_n.</math><ref>Another popular (equivalent) definition describes the series as a formal (ordered) list of terms combined by the addition operator.</ref> For a single ''n'', the sum <math>S_n</math> is called the ''n''-th ''partial sum'' of the series.

If the sequence <math>(S_n)</math> has a finite limit, the series is said to be ''convergent''. In this case we define the sum of the series as
:<math>\sum_{n=1}^\infty a_n = \lim_{n\to\infty} S_n.</math>
Note that the ''sum'' (i.e. the numeric value of the above limit) and the series (i.e. the sequence <math>S_n</math>) are usually denoted by the same symbol. If the above limit does not exist, or is infinite, the series is said to be ''divergent''.
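As a quick illustration of the definition, consider the series with general term <math>a_n = \tfrac{1}{n(n+1)}.</math> Since <math>\tfrac{1}{n(n+1)} = \tfrac{1}{n} - \tfrac{1}{n+1},</math> the partial sums ''telescope'':
:<math>S_n = \left(1-\frac{1}{2}\right) + \left(\frac{1}{2}-\frac{1}{3}\right) + \cdots + \left(\frac{1}{n}-\frac{1}{n+1}\right) = 1 - \frac{1}{n+1},</math>
so <math>S_n \to 1</math> and the series converges with sum 1.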


==Series with positive terms==
The simplest to investigate are series with positive terms <math>(a_n\ge 0).</math> In this case, the cumulative sum can only increase. The only question is whether the growth has a limit. This simplifies the analysis and results in a number of basic criteria. Notice that it is not positivity itself that really matters: if a given series has ''only'' negative terms, a "symmetric" argument can always be applied. So the only thing we really need is a constant sign. However, for the sake of clarity, in this section we assume that the terms are simply positive.
 
===Comparisons===
There is a "family" of relative tests, where the nature of a given series is inferred from what we know about another, possibly simpler, series.
 
If for two series <math>\sum a_n</math> and <math>\sum b_n</math> we have <math>a_n \le b_n</math> and if the series <math>\sum b_n</math> is known to be convergent, then the other one, <math>\sum a_n,</math> is convergent as well. In other words, a series "smaller" than a convergent one is always convergent. This is not difficult to justify. When we consider series of positive terms, the cumulative sum may only increase. For a convergent series the growth is ''not'' unlimited: there is an upper bound. Accordingly, the growth of the "smaller" series is limited by the same bound.
 
The above argument works the other way round too. If for the two series we have <math>a_n \ge b_n</math> and the series <math>\sum b_n</math> is known to be divergent, then the other one, <math>\sum a_n,</math> diverges as well. In other words, a series "bigger" than a divergent one is divergent too.
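For instance, since <math>\frac{1}{2^n+n} \le \frac{1}{2^n}</math> and the geometric series <math>\sum 1/2^n</math> converges, the series <math>\sum \frac{1}{2^n+n}</math> converges as well. In the other direction, <math>\frac{1}{\sqrt{n}} \ge \frac{1}{n}</math> for <math>n\ge 1,</math> so the divergence of the harmonic series implies that <math>\sum \frac{1}{\sqrt{n}}</math> diverges.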
 
A very useful version of the comparison test may be expressed as follows. If for two series <math>\sum a_n</math> and <math>\sum b_n</math> it can be established that
:<math> \lim_{n\to \infty}\frac{a_n}{b_n} = c</math>
with a finite strictly positive constant <math>c\in(0,\infty)</math>, then both series are of the same nature. In other words, if one series is known to converge, the other converges as well; if one diverges, so does the other.
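For example, for <math>a_n = \frac{1}{2^n-1}</math> and <math>b_n = \frac{1}{2^n}</math> we get
:<math>\lim_{n\to\infty}\frac{a_n}{b_n} = \lim_{n\to\infty}\frac{2^n}{2^n-1} = 1,</math>
so <math>\sum \frac{1}{2^n-1}</math> converges together with the geometric series <math>\sum \frac{1}{2^n}.</math>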
 
Comparison with an [[integral]]. This can be done if the ''sequence'' of general terms is decreasing, that is <math>a_n > a_{n+1}.</math> More precisely, suppose that we have <math>f(n)=a_n,\, n=1,2,\ldots,</math> for a certain decreasing function ''f'' defined on <math>(1,\infty)</math>. If the integral
:<math> \int_1^\infty f(x) \, dx </math>
converges, so does the series <math>\sum a_n;</math> if the integral diverges, the series diverges as well.
 
This equivalence is quite important since it allows us to establish the convergence of a given series using basic methods of calculus. Prominent examples include the ''Riemann series'', that is the series with general term <math>a_n=1/n^p</math> with a constant <math>p\in (0,\infty).</math> If we take the function <math>f(x)=1/x^p</math> then we get
:<math>\int_1^\infty f(x) \, dx  = [ \ln x ]_1^\infty</math> if <math>p=1</math> and
:<math>\int_1^\infty f(x) \, dx  = \left[ \frac{x^{-p+1}}{-p+1} \right]_1^\infty</math> otherwise.
The limit at infinity exists if and only if <math>p>1</math>. It follows that the series
:<math> \sum \frac{1}{n^p}</math>
converges if and only if <math>p\in (1,\infty).</math>
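In particular, for <math>p=2</math> the integral evaluates to
:<math>\int_1^\infty \frac{dx}{x^2} = \left[-\frac{1}{x}\right]_1^\infty = 1,</math>
so <math>\sum 1/n^2</math> converges, while for <math>p=1</math> we recover the divergence of the harmonic series established above.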
 
===Absolute tests===
There are also tests that allow one to determine the nature of a single given series. The two most popular are listed below.
 
* [[Jean d'Alembert|D'Alembert]] ratio test
In its simplest form it involves the computation of the following limit
:<math>L=\lim_{n\to \infty} \frac{a_{n+1}}{a_n}.</math>
If ''L'' is strictly greater than 1, then the series diverges. If ''L'' < 1, then the series is convergent. The case ''L'' = 1 gives no answer. Indeed, take for example <math>a_n=1/n</math>. A short computation gives ''L'' = 1, yet we showed above that this series (the harmonic series) is divergent. On the other hand, if <math>a_n=1/n^2</math>, the series was shown to converge. Still, if we compute ''L'' from the above formula, we get ''L'' = 1.
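A case where the test does decide: for <math>a_n = n/2^n</math> we compute
:<math>L = \lim_{n\to\infty} \frac{(n+1)/2^{n+1}}{n/2^n} = \lim_{n\to\infty} \frac{n+1}{2n} = \frac{1}{2} < 1,</math>
so <math>\sum n/2^n</math> converges.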
 
* [[Augustin Louis Cauchy|Cauchy]] root test
In its popular form it is based on the computation of the following number
:<math>L=\limsup_{n\to\infty} \sqrt[n]{a_n} </math>
(<math>\limsup</math> here refers to the [[upper limit]] of the sequence).
Similarly as before, ''L'' > 1 implies divergence and ''L'' < 1 means that the series is convergent. The case ''L'' = 1 does not allow one to decide.
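For instance, for <math>a_n = \left(\frac{n}{2n+1}\right)^n</math> we get
:<math>L = \lim_{n\to\infty} \sqrt[n]{a_n} = \lim_{n\to\infty} \frac{n}{2n+1} = \frac{1}{2} < 1,</math>
so the series converges.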
 
Which of the two above tests is more appropriate depends on the form of the general term. However, computational convenience is not the only issue: in some cases only the root test can be applied. Consider for example the series 1 + 1 + 1/2 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/16 + 1/16 + .... Formally, the limit of the ratio <math>a_{n+1}/a_n</math> does not exist (the upper limit is equal to 1, the lower limit is 1/2). Nonetheless the root test gives <math>L = 1/\sqrt{2}</math> and this determines that the series converges.
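To verify the value of ''L'' here, note that the terms come in pairs <math>a_{2k-1} = a_{2k} = 1/2^{k-1},</math> so for <math>n = 2k</math> we get
:<math>\sqrt[n]{a_n} = 2^{-(k-1)/(2k)} \to 2^{-1/2} = \frac{1}{\sqrt{2}},</math>
and the odd-indexed roots have the same limit.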
 
In general, the root test is more universal in the following sense: if the limit ''L'' of the ratios <math>a_{n+1}/a_n</math> exists, then the limit of the roots <math>\sqrt[n]{a_n}</math> exists as well and the two are equal. On the other hand, if the ratio test fails to give an answer due to ''L'' = 1, then there is no hope that the root test will decide.
 
==Series with arbitrary terms==
{{Image|Alternating_harmonic.png|right|450px|Fig. 2 Alternating series converges if the "steps" are shortening.}}
In general, a given series may contain terms of different signs. This may lead to very delicate situations. Consider a simple example, called the ''alternating harmonic'' series,
:<math> 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\ldots.</math>
Denote the general term by <math>a_n=(-1)^{n+1}/n.</math> Imagine that each term represents a small step along a line. Considering the sign, we swing back and forth. And it is easy to ''see'' that because the steps are ''shortening'' we will approach a certain point, just like a ''damped pendulum'': see Fig. 2. This informally shows that <math>\sum a_n</math> converges. Furthermore, we know the actual sum of this series: it equals <math>\ln 2</math> (this needs some actual computation, though).
Now we reorganise the terms and consider
:<math>1-\frac{1}{2}-\frac{1}{4}+\frac{1}{3}-\frac{1}{6}-\frac{1}{8}+\ldots</math>
The general term reads
:<math>b_n = \frac{1}{2n+1}-\frac{1}{2(2n+1)}-\frac{1}{2(2n+2)},\quad n=0,1,2,\ldots</math>
Clearly, this series contains the same terms as <math>\sum a_n,</math> just in a different order. Furthermore, simplifying the expression for <math>b_n</math> gives
:<math>b_n=\frac{1}{2(2n+1)} - \frac{1}{2(2n+2)} = \frac{1}{2}\left( \frac{1}{2n+1} - \frac{1}{2n+2}\right)</math>
so that
:<math>b_0+b_1+b_2+\ldots = \frac{1}{2} \left[ (1-\frac{1}{2}) + (\frac{1}{3}-\frac{1}{4}) + \ldots \right]. </math>
It follows that the total sum is <math>1/2\cdot \ln 2</math>. And we changed only the order...
 
An even more general result can be shown: for any given number <math>M\in\mathbb{R}</math> we can find a rearrangement of the terms in our series <math>\sum a_n</math> to get the total sum of ''M''.<ref>This is called the [[Riemann series theorem]].</ref>
 
To ensure more regular behaviour, the notion of ''absolute convergence'' is introduced. We say that a series <math>\sum a_n</math> is ''absolutely convergent'' if the series <math>\sum |a_n|</math> converges. Absolute convergence is ''stronger'' than the simple convergence introduced at the beginning: any absolutely convergent series is convergent in the former sense. Consequently, given a series with arbitrary terms <math>a_n,</math> one may apply any of the above-mentioned basic criteria to <math>|a_n|</math>. If the result is positive ("convergence" detected), it means that the series is absolutely convergent, so it converges in the sense of the first definition as well. Furthermore, absolutely convergent series do not change their sum when the terms are reordered.
 
However, as we showed, there are interesting series that are convergent but not absolutely convergent. Typical examples include ''alternating'' series, that is, series whose general term changes sign at every step: <math>a_n\cdot a_{n+1} < 0.</math> For these we have a criterion.
 
* If the <math>|a_n|</math> form a ''decreasing'' sequence with <math>|a_n|\to 0</math> and the sign of <math>a_n</math> is alternating, then the series <math>\sum a_n</math> is convergent.
 
With this useful tool we immediately see that our alternating harmonic series converges. This is not very surprising, since an actual proof of the criterion is just a mathematically rigorous transcription of what can be seen in Fig. 2.
 
Notice that the relative tests from the previous section do not apply to alternating or arbitrary series (unless one investigates the modulus of the general term, i.e. the question of absolute convergence). Consider the following examples:
:<math> a_n=\frac{(-1)^n}{\sqrt{n}}</math>
:<math> b_n=\frac{(-1)^n}{(-1)^n+\sqrt{n}}</math>
The general terms are obviously equivalent (the ratio <math>a_n/b_n</math> converges to 1). The first series converges by virtue of our last criterion, while the second one may be shown to diverge.<ref>To verify this one may consider the difference of the two general terms.</ref>
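To see why <math>\sum b_n</math> diverges, one may expand the general term. Multiplying the numerator and denominator by <math>\sqrt{n}-(-1)^n</math> gives, for <math>n\ge 2,</math>
:<math>b_n = \frac{(-1)^n\sqrt{n}-1}{n-1} = \frac{(-1)^n\sqrt{n}}{n-1} - \frac{1}{n-1}.</math>
The first part generates a convergent alternating series, while the second generates a divergent (harmonic-like) one, so the total diverges.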
 
We can invert the question we dealt with so far ("what can guarantee convergence?") and ask what follows from the convergence of a series. What do all convergent series have in common? Here is an answer.
*If <math>\sum a_n</math> is convergent then <math>\lim_{n\to\infty} a_n =0.</math>
That is, the general terms, taken as a ''sequence'', converge to 0. For example, the series with the general term
:<math>a_n=(-1)^n \frac{2n+1}{4n-5}</math> or <math>b_n=\frac{(n+1)^2}{2n^2+1}</math>
cannot be convergent.
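Indeed, in both cases the general term does not tend to 0:
:<math>\left|(-1)^n\frac{2n+1}{4n-5}\right| \to \frac{1}{2} \quad\text{and}\quad \frac{(n+1)^2}{2n^2+1} \to \frac{1}{2}.</math>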
 
==Series of functions==
 
On one hand, the sum of a series <math>f_1+f_2+\dots</math> of functions <math>f_n</math> is defined in the same way as above: as the limit of the sequence of functions <math>S_n=f_1+\dots+f_n</math>, the partial sums of the series. On the other hand, convergence of functions differs strikingly from convergence of numbers. There is only one widely used convergence mode for numbers; in contrast, there are several widely used convergence modes for functions.
 
It is convenient to reduce the general convergence, <math> S_n \to S </math>, to the special case, <math> R_n \to 0 </math>, letting <math> R_n = S_n - S </math>.
 
For example, the sequence of functions
: <math> R_n(x) = \frac{x^n}{n!} </math>
converges to 0 pointwise but not uniformly. This means that, given <math> \epsilon > 0 </math> and <math> x </math>, there exists <math> N </math> such that <math> |R_n(x)| < \epsilon </math> for all <math> n > N </math>. However, <math> N </math> depends on <math> x </math>, and no single <math> N </math> can serve all <math> x </math>. In other words: <math> R_n(x) </math> converges to 0 for every fixed ''x'' (that is, ''x'' not depending on ''n''), but <math> R_n(x_n) </math> need not converge to 0 for arbitrary sequences <math> (x_n) </math>. For instance, <math>x_n=n</math> leads to <math> R_n(x_n) = n^n/n! \to \infty </math>.
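The pointwise convergence here can be checked via the ratio of consecutive terms: for a fixed <math>x,</math>
:<math>\frac{|R_{n+1}(x)|}{|R_n(x)|} = \frac{|x|}{n+1} \to 0,</math>
so <math>|R_n(x)|</math> eventually decreases at least geometrically and tends to 0.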
 
Every example of the sequence of functions <math>R_n</math> is relevant to the general theory of functional series, since it corresponds to some example of the series of functions <math>f_1+f_2+\dots</math>, namely, <math>f_n=R_n-R_{n-1}</math>.
 
Even for a sequence of continuous functions on <math>[0,1]</math>, uniform convergence does not follow from pointwise convergence. Take, for example, the functions
: <math> R_n(x) = x^n - x^{2n} </math> for 0 &le; x &le; 1.
Choosing <math> x_n </math> such that <math> (x_n)^n = 0.5 </math> one gets <math> R_n(x_n) = 0.25 </math> in spite of pointwise convergence to 0. Here is a similar example for periodic functions on the whole line:
: <math> R_n(x) = \sin^n x - \sin^{3n} x </math>.
 
The uniform convergence of <math> R_n </math> to 0 can be written in the form
: <math> \sup_x |R_n(x)| \to 0, </math>
that is, the usual convergence of a numeric characteristic of these functions. Many other modes of convergence are also of this form. Some examples:
: <math> \int |R_n(x)| \, dx \to 0; </math>
: <math> \int |R_n(x)|^2 \, dx \to 0; </math>
: <math> \sup_x |R_n(x)| + \sup_x |R'_n(x)| \to 0. </math>
 
Most important are two classes of series of functions: [[power series|power]] (especially, [[Taylor series|Taylor]]) series, whose terms are power functions <math> c_n x^n </math>, and trigonometric (especially, [[Fourier series|Fourier]]) series, whose terms are trigonometric functions <math> a_n \cos nx + b_n \sin nx </math>. Since the latter is the real part of the complex number <math> c_n e^{inx} </math> for <math> c_n = a_n - i b_n </math>, the series with terms <math> c_n e^{inx} </math> is also treated as a Fourier series.
 
These two classes are unrelated when <math>x</math> is real, but related via complex numbers: if <math> x = e^{i\phi} </math> then <math> c_n x^n = c_n e^{in\phi} </math>.
 
Taylor and Fourier series behave quite differently. A Taylor series converges uniformly, together with all derivatives, on <math>[-a,a]</math> whenever ''a'' is less than the [[radius of convergence]], and its sum is a smooth (more exactly, analytic, therefore continuous and infinitely differentiable) function. The behavior of a Fourier series depends on the behavior of its sum. If the sum is smooth enough, the series converges uniformly, together with some derivatives. If the sum is only continuous, the series need not converge uniformly, nor even pointwise. If the sum is square integrable (maybe quite discontinuous), the series converges in the sense that <math> \int |R_n(x)|^2 \, dx \to 0 </math>. Some [[Distribution (mathematics)|distributions (generalized functions)]] may also be developed into Fourier series, in which case convergence is much weaker.
 
The different behavior of power and trigonometric series corresponds on the complex plane to the different behavior of a function, analytic on a disk, inside the disk and on its boundary. Inside the disk the function is always smooth; on the boundary it can be quite a bad function, and even something more general than a function.


==Notes and references==
{{reflist}}
