Find a Closed-form Solution to the Affine Recurrence Relation

Definition of each term of a sequence as a function of preceding terms

In mathematics, a recurrence relation is an equation that expresses the nth term of a sequence as a function of the k preceding terms, for some fixed k (independent of n), which is called the order of the relation. Once k initial terms of a sequence are given, the recurrence relation allows all remaining terms of the sequence to be computed recursively.

Most general results on recurrence relations are about linear recurrences, which are recurrence relations such that the nth term is linear with respect to its preceding terms. Among them, linear recurrences with constant coefficients and linear recurrences with polynomial coefficients are especially important. In the first case, this is because one can express the general term of the sequence as a closed-form expression of the index of the term. In the second case, this is because many common elementary and special functions have a Taylor series whose coefficients satisfy such a recurrence relation (see holonomic function).

The concept can be extended to multidimensional arrays, that is, indexed families that are indexed by tuples of natural numbers.

Definition

A recurrence relation is an equation that expresses each element of a sequence as a function of the preceding ones. More precisely, in the case where only the immediately preceding element is involved, a recurrence relation has the form

$$u_n = \varphi(n, u_{n-1}) \quad \text{for} \quad n > 0,$$

where

$$\varphi : \mathbb{N} \times X \to X$$

is a function, where $X$ is a set to which the elements of a sequence must belong. For any $u_0 \in X$, this defines a unique sequence with $u_0$ as its first element, called the initial value.[1]

The definition is easily modified to obtain sequences starting from an index of 1 or higher.

This defines a recurrence relation of first order. A recurrence relation of order k has the form

$$u_n = \varphi(n, u_{n-1}, u_{n-2}, \ldots, u_{n-k}) \quad \text{for} \quad n \geq k,$$

where $\varphi : \mathbb{N} \times X^k \to X$ is a function that involves $k$ consecutive elements of the sequence. In this case, $k$ initial values are needed for defining a sequence.

Examples

Factorial

The factorial is defined by the recurrence relation

$$n! = n(n-1)! \quad \text{for} \quad n > 0,$$

and the initial condition

$$0! = 1.$$

Logistic map

An example of a recurrence relation is the logistic map:

$$x_{n+1} = r x_n (1 - x_n),$$

with a given constant $r$; given the initial term $x_0$, each subsequent term is determined by this relation.
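For illustration, the orbit can be generated directly from the recurrence. A minimal Python sketch (the values r = 3.5 and x0 = 0.2 are arbitrary demonstration choices):

```python
def logistic_orbit(r, x0, num_terms):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    orbit = [x0]
    for _ in range(num_terms - 1):
        orbit.append(r * orbit[-1] * (1 - orbit[-1]))
    return orbit

print(logistic_orbit(3.5, 0.2, 8))
```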

Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of $n$.

Fibonacci numbers

The recurrence of order two satisfied by the Fibonacci numbers is the canonical example of a homogeneous linear recurrence relation with constant coefficients (see below). The Fibonacci sequence is defined using the recurrence

$$F_n = F_{n-1} + F_{n-2}$$

with initial conditions

$$F_0 = 0$$
$$F_1 = 1.$$

Explicitly, the recurrence yields the equations

$$F_2 = F_1 + F_0$$
$$F_3 = F_2 + F_1$$
$$F_4 = F_3 + F_2$$

etc.

We obtain the sequence of Fibonacci numbers, which begins

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

The recurrence can be solved by methods described below, yielding Binet's formula, which involves powers of the two roots of the characteristic equation $t^2 = t + 1$; the generating function of the sequence is the rational function

$$\frac{t}{1 - t - t^2}.$$
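As a quick check of the closed form, the following sketch compares Binet's formula, built from the two roots of $t^2 = t + 1$, against direct iteration of the recurrence:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2  # root of t^2 = t + 1
psi = (1 - sqrt(5)) / 2  # the other root

def fib_binet(n):
    """Binet's closed form F_n = (phi^n - psi^n) / sqrt(5)."""
    return round((phi**n - psi**n) / sqrt(5))

a, b = 0, 1  # direct iteration of F_n = F_{n-1} + F_{n-2}
for n in range(10):
    assert fib_binet(n) == a
    a, b = b, a + b
print("Binet's formula matches the recurrence for n = 0..9")
```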

Binomial coefficients

A simple example of a multidimensional recurrence relation is given by the binomial coefficients $\tbinom{n}{k}$, which count the number of ways of selecting $k$ elements out of a set of $n$ elements. They can be computed by the recurrence relation

$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k},$$

with the base cases $\tbinom{n}{0} = \tbinom{n}{n} = 1$. Using this formula to compute the values of all binomial coefficients generates an infinite array called Pascal's triangle. The same values can also be computed directly by a different formula that is not a recurrence, but that requires multiplication, and not just addition, to compute:

$$\binom{n}{k} = \frac{n!}{k!(n-k)!}.$$
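A short sketch contrasting the two computations, the additive recurrence and the multiplicative factorial formula:

```python
from math import factorial

def binom_recursive(n, k):
    """Pascal's rule with the base cases C(n, 0) = C(n, n) = 1."""
    if k == 0 or k == n:
        return 1
    return binom_recursive(n - 1, k - 1) + binom_recursive(n - 1, k)

def binom_factorial(n, k):
    """Direct formula n! / (k! (n - k)!), using multiplication."""
    return factorial(n) // (factorial(k) * factorial(n - k))

assert all(binom_recursive(n, k) == binom_factorial(n, k)
           for n in range(10) for k in range(n + 1))
```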

Difference operator and difference equations

The difference operator is an operator that maps sequences to sequences and, more generally, functions to functions. It is commonly denoted $\Delta$, and is defined, in functional notation, as

$$(\Delta f)(x) = f(x+1) - f(x).$$

It is thus a special case of finite difference.

When using the index notation for sequences, the definition becomes

$$(\Delta a)_n = a_{n+1} - a_n.$$

The parentheses around $\Delta f$ and $\Delta a$ are generally omitted, and $\Delta a_n$ must be understood as the term of index $n$ in the sequence $\Delta a$, and not $\Delta$ applied to the element $a_n$.

Given a sequence $a = (a_n)_{n \in \mathbb{N}}$, the first difference of $a$ is $\Delta a$.

The second difference is $\Delta^2 a = (\Delta \circ \Delta) a = \Delta(\Delta a)$. A simple computation shows that

$$\Delta^2 a_n = a_{n+2} - 2a_{n+1} + a_n.$$

More generally: the $k$-th difference is defined recursively as $\Delta^k = \Delta \circ \Delta^{k-1}$, and one has

$$\Delta^k a_n = \sum_{t=0}^{k} (-1)^t \binom{k}{t} a_{n+k-t}.$$

This relation can be inverted, giving

$$a_{n+k} = a_n + \binom{k}{1} \Delta a_n + \cdots + \binom{k}{k} \Delta^k(a_n).$$

A difference equation of order k is an equation that involves the k first differences of a sequence or a function, in the same way as a differential equation of order k relates the k first derivatives of a function.

The two above relations allow transforming a recurrence relation of order k into a difference equation of order k, and, conversely, a difference equation of order k into a recurrence relation of order k. Each transformation is the inverse of the other, and the sequences that are solutions of the difference equation are exactly those that satisfy the recurrence relation.

For example, the difference equation

$$3\Delta^2 a_n + 2\Delta a_n + 7a_n = 0$$

is equivalent to the recurrence relation

$$3a_{n+2} = 4a_{n+1} - 8a_n,$$

in the sense that the two equations are satisfied by the same sequences.
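The equivalence is easy to check numerically; a sketch that generates a solution of the recurrence (with arbitrary initial values) and confirms that the difference equation vanishes on it:

```python
# Generate a solution of 3*a_{n+2} = 4*a_{n+1} - 8*a_n ...
a = [1.0, 2.0]  # arbitrary initial values
for n in range(20):
    a.append((4 * a[-1] - 8 * a[-2]) / 3)

# ... and verify that 3*(second difference) + 2*(first difference) + 7*a_n = 0.
for n in range(20):
    d1 = a[n + 1] - a[n]                 # first difference
    d2 = a[n + 2] - 2 * a[n + 1] + a[n]  # second difference
    residual = 3 * d2 + 2 * d1 + 7 * a[n]
    assert abs(residual) < 1e-6 * max(1.0, abs(a[n]))
```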

As satisfying a recurrence relation is equivalent to being a solution of the corresponding difference equation, the two terms "recurrence relation" and "difference equation" are sometimes used interchangeably. See Rational difference equation and Matrix difference equation for examples of uses of "difference equation" instead of "recurrence relation".

Difference equations resemble differential equations, and this resemblance is often exploited to mimic methods for solving differential equations when solving difference equations, and therefore recurrence relations.

Summation equations relate to difference equations as integral equations relate to differential equations. See time scale calculus for a unification of the theory of difference equations with that of differential equations.

From sequences to grids

Single-variable or one-dimensional recurrence relations are about sequences (i.e. functions defined on one-dimensional grids). Multi-variable or n-dimensional recurrence relations are about $n$-dimensional grids. Functions defined on $n$-grids can also be studied with partial difference equations.[2]

Solving

Solving homogeneous linear recurrence relations with constant coefficients

Roots of the characteristic polynomial

An order-$d$ homogeneous linear recurrence with constant coefficients is an equation of the form

$$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d},$$

where the $d$ coefficients $c_i$ (for all $i$) are constants, and $c_d \neq 0$.

A constant-recursive sequence is a sequence satisfying a recurrence of this form. There are $d$ degrees of freedom for solutions to this recurrence, i.e., the initial values $a_0, \dots, a_{d-1}$ can be taken to be any values, but then the recurrence determines the sequence uniquely.

The same coefficients yield the characteristic polynomial (also "auxiliary polynomial" or "companion polynomial")

$$p(t) = t^d - c_1 t^{d-1} - c_2 t^{d-2} - \cdots - c_d$$

whose roots play a crucial role in finding and understanding the sequences satisfying the recurrence. If there are $d$ distinct roots $r_1, r_2, \ldots, r_d$, then each solution to the recurrence takes the form

$$a_n = k_1 r_1^n + k_2 r_2^n + \cdots + k_d r_d^n,$$

where the coefficients $k_i$ are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of $n$. For instance, if the characteristic polynomial can be factored as $(t - r)^3$, with the same root $r$ occurring three times, then the solution would take the form

$$a_n = k_1 r^n + k_2 n r^n + k_3 n^2 r^n.$$ [3]

As well as the Fibonacci numbers, other constant-recursive sequences include the Lucas numbers and Lucas sequences, the Jacobsthal numbers, the Pell numbers and more generally the solutions to Pell's equation.

For order 1, the recurrence

$$a_n = r a_{n-1}$$

has the solution $a_n = r^n$ with $a_0 = 1$, and the most general solution is $a_n = k r^n$ with $a_0 = k$. The characteristic polynomial equated to zero (the characteristic equation) is simply $t - r = 0$.

Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that $a_n = r^n$ is a solution for the recurrence exactly when $t = r$ is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices.

Consider, for example, a recurrence relation of the form

$$a_n = A a_{n-1} + B a_{n-2}.$$

When does it have a solution of the same general form as $a_n = r^n$? Substituting this guess (ansatz) in the recurrence relation, we find that

$$r^n = A r^{n-1} + B r^{n-2}$$

must be true for all $n > 1$.

Dividing through by $r^{n-2}$, we get that all these equations reduce to the same thing:

$$r^2 = Ar + B,$$
$$r^2 - Ar - B = 0,$$

which is the characteristic equation of the recurrence relation. Solve for $r$ to obtain the two roots $\lambda_1$, $\lambda_2$: these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: if these roots are distinct, we have the general solution

$$a_n = C\lambda_1^n + D\lambda_2^n$$

while if they are identical (when $A^2 + 4B = 0$), we have

$$a_n = C\lambda^n + Dn\lambda^n.$$

This is the most general solution; the two constants $C$ and $D$ can be chosen based on two given initial conditions $a_0$ and $a_1$ to produce a specific solution.
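A sketch of this recipe for distinct roots: solve the characteristic quadratic, fit $C$ and $D$ to the initial conditions, and read off any term (the Fibonacci recurrence serves as a familiar test case):

```python
import cmath

def solve_order2(A, B, a0, a1):
    """Closed form for a_n = A*a_{n-1} + B*a_{n-2},
    assuming the characteristic roots are distinct."""
    disc = cmath.sqrt(A * A + 4 * B)
    l1, l2 = (A + disc) / 2, (A - disc) / 2  # roots of r^2 = A*r + B
    # Fit C + D = a0 and C*l1 + D*l2 = a1.
    D = (a1 - l1 * a0) / (l2 - l1)
    C = a0 - D
    return lambda n: (C * l1**n + D * l2**n).real

fib = solve_order2(1, 1, 0, 1)  # Fibonacci: a_n = a_{n-1} + a_{n-2}
print([round(fib(n)) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```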

In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters $C$ and $D$), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as $\lambda_1, \lambda_2 = \alpha \pm \beta i$. Then it can be shown that

$$a_n = C\lambda_1^n + D\lambda_2^n$$

can be rewritten as[4]: 576–585

$$a_n = 2M^n \left(E\cos(\theta n) + F\sin(\theta n)\right) = 2GM^n \cos(\theta n - \delta),$$

where

$$\begin{array}{lcl} M = \sqrt{\alpha^2 + \beta^2} & \cos(\theta) = \tfrac{\alpha}{M} & \sin(\theta) = \tfrac{\beta}{M} \\ C, D = E \mp Fi & & \\ G = \sqrt{E^2 + F^2} & \cos(\delta) = \tfrac{E}{G} & \sin(\delta) = \tfrac{F}{G} \end{array}$$

Here $E$ and $F$ (or equivalently, $G$ and $\delta$) are real constants which depend on the initial conditions. Using

$$\lambda_1 + \lambda_2 = 2\alpha = A,$$
$$\lambda_1 \cdot \lambda_2 = \alpha^2 + \beta^2 = -B,$$

one may simplify the solution given above as

$$a_n = (-B)^{\frac{n}{2}} \left(E\cos(\theta n) + F\sin(\theta n)\right),$$

where $a_1$ and $a_2$ are the initial conditions and

$$\begin{aligned} E &= \frac{-A a_1 + a_2}{B} \\ F &= -i\,\frac{A^2 a_1 - A a_2 + 2 a_1 B}{B \sqrt{A^2 + 4B}} \\ \theta &= \arccos\left(\frac{A}{2\sqrt{-B}}\right) \end{aligned}$$

In this way there is no need to solve for $\lambda_1$ and $\lambda_2$.

In all cases (real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues) the equation is stable (that is, the variable $a$ converges to a fixed value, specifically zero) if and only if both eigenvalues are smaller than one in absolute value. In this second-order case, this condition on the eigenvalues can be shown[5] to be equivalent to $|A| < 1 - B < 2$, which is equivalent to $|B| < 1$ and $|A| < 1 - B$.

The equation in the above example was homogeneous, in that there was no constant term. If one starts with the non-homogeneous recurrence

$$b_n = A b_{n-1} + B b_{n-2} + K$$

with constant term $K$, this can be converted into homogeneous form as follows: the steady state is found by setting $b_n = b_{n-1} = b_{n-2} = b^*$ to obtain

$$b^* = \frac{K}{1 - A - B}.$$

Then the non-homogeneous recurrence can be rewritten in homogeneous form as

$$[b_n - b^*] = A[b_{n-1} - b^*] + B[b_{n-2} - b^*],$$

which can be solved as above.
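A numerical sketch of the conversion, with arbitrary demonstration values for $A$, $B$ and $K$: shifting every term by the steady state $b^*$ reproduces the homogeneous iteration exactly.

```python
A, B, K = 0.5, 0.3, 2.0   # arbitrary coefficients with A + B != 1
b_star = K / (1 - A - B)  # steady state b*

b = [1.0, 4.0]                      # arbitrary initial values
x = [b[0] - b_star, b[1] - b_star]  # shifted sequence x_n = b_n - b*
for n in range(30):
    b.append(A * b[-1] + B * b[-2] + K)  # non-homogeneous recurrence
    x.append(A * x[-1] + B * x[-2])      # homogeneous recurrence
    assert abs((b[-1] - b_star) - x[-1]) < 1e-9  # same sequence, shifted

print(b[-1], "->", b_star)  # with these values the iterates approach b*
```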

The stability condition stated above in terms of eigenvalues for the second-order case remains valid for the general order-$d$ case: the equation is stable if and only if all eigenvalues of the characteristic equation are less than one in absolute value.

Given a homogeneous linear recurrence relation with constant coefficients of order $d$, let $p(t)$ be the characteristic polynomial (also "auxiliary polynomial")

$$t^d - c_1 t^{d-1} - c_2 t^{d-2} - \cdots - c_d = 0$$

such that each $c_i$ corresponds to each $c_i$ in the original recurrence relation (see the general form above). Suppose $\lambda$ is a root of $p(t)$ having multiplicity $r$. This is to say that $(t - \lambda)^r$ divides $p(t)$. The following two properties hold:

  1. Each of the $r$ sequences $\lambda^n, n\lambda^n, n^2\lambda^n, \dots, n^{r-1}\lambda^n$ satisfies the recurrence relation.
  2. Any sequence satisfying the recurrence relation can be written uniquely as a linear combination of solutions constructed in part 1 as $\lambda$ varies over all distinct roots of $p(t)$.

As a result of this theorem a homogeneous linear recurrence relation with constant coefficients can be solved in the following manner:

  1. Find the characteristic polynomial $p(t)$.
  2. Find the roots of $p(t)$, counting multiplicity.
  3. Write $a_n$ as a linear combination of all the roots (counting multiplicity as shown in the theorem above) with unknown coefficients $b_i$:
    $$a_n = \left(b_1 \lambda_1^n + b_2 n \lambda_1^n + b_3 n^2 \lambda_1^n + \cdots + b_r n^{r-1} \lambda_1^n\right) + \cdots + \left(b_{d-q+1} \lambda_*^n + \cdots + b_d n^{q-1} \lambda_*^n\right)$$
    This is the general solution to the original recurrence relation. ($q$ is the multiplicity of $\lambda_*$.)
  4. Equate each $a_0, a_1, \dots, a_{d-1}$ from part 3 (plugging in $n = 0, \dots, d-1$ into the general solution of the recurrence relation) with the known values $a_0, a_1, \dots, a_{d-1}$ from the original recurrence relation. However, the values $a_n$ used do not have to be contiguous: excluding exceptional cases, just $d$ of them are needed (i.e., for an original homogeneous linear recurrence relation of order 3 one could use the values $a_0$, $a_1$, $a_4$). This process produces a linear system of $d$ equations with $d$ unknowns. Solving these equations for the unknown coefficients $b_1, b_2, \dots, b_d$ of the general solution and plugging these values back into the general solution produces the particular solution to the original recurrence relation that fits the original recurrence relation's initial conditions (as well as all subsequent values of the original recurrence relation). A sketch of this procedure in code follows the list.
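A sketch of the procedure for the common case of distinct roots, using numpy; repeated roots would require the $n^j \lambda^n$ terms from the theorem above, and the helper name solve_linear_recurrence is illustrative:

```python
import numpy as np

def solve_linear_recurrence(coeffs, initial):
    """Closed form for a_n = c_1*a_{n-1} + ... + c_d*a_{n-d},
    assuming distinct characteristic roots.
    coeffs = [c_1, ..., c_d], initial = [a_0, ..., a_{d-1}]."""
    d = len(coeffs)
    # Steps 1-2: roots of t^d - c_1 t^{d-1} - ... - c_d.
    roots = np.roots([1.0] + [-c for c in coeffs])
    # Steps 3-4: fit a_n = sum_i b_i * roots[i]**n to the initial values.
    V = np.array([[r**n for r in roots] for n in range(d)])
    b = np.linalg.solve(V, np.array(initial, dtype=complex))
    return lambda n: sum(bi * r**n for bi, r in zip(b, roots)).real

a = solve_linear_recurrence([1, 1], [0, 1])  # Fibonacci once more
print([round(a(n)) for n in range(10)])
```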

The method for solving linear differential equations is similar to the method above: the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is $e^{\lambda x}$, where $\lambda$ is a complex number that is determined by substituting the guess into the differential equation.

This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation:

$$\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n$$

it can be seen that the coefficients of the series are given by the $n$-th derivative of $f(x)$ evaluated at the point $a$. The differential equation provides a linear difference equation relating these coefficients.

This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation.

The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that:

$$y^{[k]} \to f[n+k]$$

and more generally

$$x^m y^{[k]} \to n(n-1) \cdots (n-m+1)\, f[n+k-m]$$

Example: The recurrence relationship for the Taylor series coefficients of the equation:

$$(x^2 + 3x - 4) y^{[3]} - (3x + 1) y^{[2]} + 2y = 0$$

is given by

$$n(n-1) f[n+1] + 3n f[n+2] - 4 f[n+3] - 3n f[n+1] - f[n+2] + 2 f[n] = 0$$

or

$$-4 f[n+3] + (3n - 1) f[n+2] + n(n-4) f[n+1] + 2 f[n] = 0.$$

This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way.

Example: The differential equation

$$a y'' + b y' + c y = 0$$

has solutions of the form

$$y = e^{rx},$$

where $r$ is a root of the characteristic equation $a r^2 + b r + c = 0$.

The conversion of the differential equation to a difference equation of the Taylor coefficients is

$$a f[n+2] + b f[n+1] + c f[n] = 0.$$

It is easy to see that the $n$-th derivative of $e^{rx}$ evaluated at $0$ is $r^n$, so $f[n] = r^n$ satisfies the difference equation precisely because $a r^2 + b r + c = 0$.

Solving via linear algebra

A linearly recursive sequence $y$ of order $n$

$$y_{n+k} - c_{n-1} y_{n-1+k} - c_{n-2} y_{n-2+k} - \cdots - c_0 y_k = 0$$

is identical to

$$y_n = c_{n-1} y_{n-1} + c_{n-2} y_{n-2} + \cdots + c_0 y_0.$$

Expanded with $n-1$ identities of the kind $y_{n-k} = y_{n-k}$, this $n$-th order equation is translated into a matrix difference equation system of $n$ first-order linear equations,

$$\mathbf{y}_n = \begin{bmatrix} y_n \\ y_{n-1} \\ \vdots \\ y_1 \end{bmatrix} = \begin{bmatrix} c_{n-1} & c_{n-2} & \cdots & c_0 \\ 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} y_{n-1} \\ y_{n-2} \\ \vdots \\ y_0 \end{bmatrix} = C \mathbf{y}_{n-1} = C^n \mathbf{y}_0.$$

Observe that the vector $\mathbf{y}_n$ can be computed by $n$ applications of the companion matrix, $C$, to the initial state vector, $\mathbf{y}_0$. Thereby, the $n$-th entry of the sought sequence $y$ is the top component of $\mathbf{y}_n$: $y_n = \mathbf{y}_n[n]$.

Eigendecomposition, $\mathbf{y}_n = C^n \mathbf{y}_0 = a_1 \lambda_1^n \mathbf{e}_1 + a_2 \lambda_2^n \mathbf{e}_2 + \cdots + a_n \lambda_n^n \mathbf{e}_n$, into eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and eigenvectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$, is used to compute $\mathbf{y}_n$. This works thanks to the crucial fact that the system $C$ time-shifts every eigenvector $\mathbf{e}$ by simply scaling its components $\lambda$ times,

$$C \mathbf{e}_i = \lambda_i \mathbf{e}_i = C \begin{bmatrix} e_{i,n} \\ e_{i,n-1} \\ \vdots \\ e_{i,1} \end{bmatrix} = \begin{bmatrix} \lambda_i e_{i,n} \\ \lambda_i e_{i,n-1} \\ \vdots \\ \lambda_i e_{i,1} \end{bmatrix}$$

that is, the time-shifted version of the eigenvector $\mathbf{e}$ has components $\lambda$ times larger. Hence the eigenvector components are powers of $\lambda$, $\mathbf{e}_i = \begin{bmatrix} \lambda_i^{n-1} & \cdots & \lambda_i^2 & \lambda_i & 1 \end{bmatrix}^{\mathrm{T}}$, and thus the solution of the recurrent homogeneous linear equation is a combination of exponential functions, $\mathbf{y}_n = \sum_{i=1}^n c_i \lambda_i^n \mathbf{e}_i$. The components $c_i$ can be determined from the initial conditions:

$$\mathbf{y}_0 = \begin{bmatrix} y_0 \\ y_{-1} \\ \vdots \\ y_{-n+1} \end{bmatrix} = \sum_{i=1}^n c_i \lambda_i^0 \mathbf{e}_i = \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \cdots & \mathbf{e}_n \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = E \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}$$

Solving for coefficients,

$$\begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = E^{-1} \mathbf{y}_0 = \begin{bmatrix} \lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n \\ 1 & 1 & \cdots & 1 \end{bmatrix}^{-1} \begin{bmatrix} y_0 \\ y_{-1} \\ \vdots \\ y_{-n+1} \end{bmatrix}.$$

This also works with arbitrary boundary conditions $\underbrace{y_a, y_b, \ldots}_{n}$, not necessarily the initial ones,

$$\begin{bmatrix} y_a \\ y_b \\ \vdots \end{bmatrix} = \begin{bmatrix} \mathbf{y}_a[n] \\ \mathbf{y}_b[n] \\ \vdots \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^n c_i \lambda_i^a \mathbf{e}_i[n] \\ \sum_{i=1}^n c_i \lambda_i^b \mathbf{e}_i[n] \\ \vdots \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^n c_i \lambda_i^a \lambda_i^{n-1} \\ \sum_{i=1}^n c_i \lambda_i^b \lambda_i^{n-1} \\ \vdots \end{bmatrix} = \begin{bmatrix} \sum c_i \lambda_i^{a+n-1} \\ \sum c_i \lambda_i^{b+n-1} \\ \vdots \end{bmatrix} = \begin{bmatrix} \lambda_1^{a+n-1} & \lambda_2^{a+n-1} & \cdots & \lambda_n^{a+n-1} \\ \lambda_1^{b+n-1} & \lambda_2^{b+n-1} & \cdots & \lambda_n^{b+n-1} \\ \vdots & \vdots & \ddots & \vdots \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}.$$

This description is really no different from the general method above; however, it is more succinct. It also works nicely for situations like

$$\begin{cases} a_n = a_{n-1} - b_{n-1} \\ b_n = 2a_{n-1} + b_{n-1}, \end{cases}$$

where there are several linked recurrences.[6]
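A sketch of the matrix approach applied to the linked pair above: the update is a single 2×2 matrix, and its eigendecomposition (via numpy) yields the closed form $C^n \mathbf{y}_0$.

```python
import numpy as np

C = np.array([[1.0, -1.0],   # a_n =   a_{n-1} - b_{n-1}
              [2.0,  1.0]])  # b_n = 2*a_{n-1} + b_{n-1}
y0 = np.array([1.0, 0.0])    # arbitrary initial state (a_0, b_0)

# Closed form via eigendecomposition: C^n y0 = E diag(lam^n) E^{-1} y0.
lam, E = np.linalg.eig(C)
c = np.linalg.solve(E, y0.astype(complex))

n = 10
closed = (E @ (c * lam**n)).real
direct = np.linalg.matrix_power(C, n) @ y0
print(closed, direct)  # the two should agree
```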

Solving with z-transforms

Certain difference equations, in particular linear constant-coefficient difference equations, can be solved using z-transforms. The z-transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward.

Solving non-homogeneous linear recurrence relations with constant coefficients

If the recurrence is non-homogeneous, a particular solution can be found by the method of undetermined coefficients, and the general solution is the sum of the general solution of the homogeneous recurrence and the particular solution. Another method to solve a non-homogeneous recurrence is the method of symbolic differencing. For example, consider the following recurrence:

$$a_{n+1} = a_n + 1$$

This is a non-homogeneous recurrence. If we substitute $n \mapsto n+1$, we obtain the recurrence

$$a_{n+2} = a_{n+1} + 1$$

Subtracting the original recurrence from this equation yields

$$a_{n+2} - a_{n+1} = a_{n+1} - a_n$$

or equivalently

$$a_{n+2} = 2a_{n+1} - a_n$$

This is a homogeneous recurrence, which can be solved by the methods explained above. In general, if a linear recurrence has the form

$$a_{n+k} = \lambda_{k-1} a_{n+k-1} + \lambda_{k-2} a_{n+k-2} + \cdots + \lambda_1 a_{n+1} + \lambda_0 a_n + p(n)$$

where $\lambda_0, \lambda_1, \dots, \lambda_{k-1}$ are constant coefficients and $p(n)$ is the inhomogeneity. If $p(n)$ is a polynomial of degree $r$, then this non-homogeneous recurrence can be reduced to a homogeneous recurrence by applying the method of symbolic differencing $r$ times.

If

$$P(x) = \sum_{n=0}^{\infty} p_n x^n$$

is the generating function of the inhomogeneity, the generating function

$$A(x) = \sum_{n=0}^{\infty} a_n x^n$$

of the non-homogeneous recurrence

$$a_n = \sum_{i=1}^{s} c_i a_{n-i} + p_n, \quad n \geq n_r,$$

with constant coefficients $c_i$ is derived from

$$\left(1 - \sum_{i=1}^{s} c_i x^i\right) A(x) = P(x) + \sum_{n=0}^{n_r - 1} \left[a_n - p_n\right] x^n - \sum_{i=1}^{s} c_i x^i \sum_{n=0}^{n_r - i - 1} a_n x^n.$$

If $P(x)$ is a rational generating function, $A(x)$ is also one. The case discussed above, where $p_n = K$ is a constant, emerges as one example of this formula, with $P(x) = K/(1-x)$. Another example, the recurrence $a_n = 10 a_{n-1} + n$ with linear inhomogeneity, arises in the definition of the schizophrenic numbers. The solution of homogeneous recurrences is incorporated as $p = P = 0$.

Solving first-order non-homogeneous recurrence relations with variable coefficients

For the general first-order non-homogeneous linear recurrence relation with variable coefficients:

$$a_{n+1} = f_n a_n + g_n, \qquad f_n \neq 0,$$

there is also a nice method to solve it:[7]

$$a_{n+1} - f_n a_n = g_n$$
$$\frac{a_{n+1}}{\prod_{k=0}^{n} f_k} - \frac{f_n a_n}{\prod_{k=0}^{n} f_k} = \frac{g_n}{\prod_{k=0}^{n} f_k}$$
$$\frac{a_{n+1}}{\prod_{k=0}^{n} f_k} - \frac{a_n}{\prod_{k=0}^{n-1} f_k} = \frac{g_n}{\prod_{k=0}^{n} f_k}$$

Let

$$A_n = \frac{a_n}{\prod_{k=0}^{n-1} f_k},$$

Then

$$A_{n+1} - A_n = \frac{g_n}{\prod_{k=0}^{n} f_k}$$
$$\sum_{m=0}^{n-1} (A_{m+1} - A_m) = A_n - A_0 = \sum_{m=0}^{n-1} \frac{g_m}{\prod_{k=0}^{m} f_k}$$
$$\frac{a_n}{\prod_{k=0}^{n-1} f_k} = A_0 + \sum_{m=0}^{n-1} \frac{g_m}{\prod_{k=0}^{m} f_k}$$
$$a_n = \left(\prod_{k=0}^{n-1} f_k\right) \left(A_0 + \sum_{m=0}^{n-1} \frac{g_m}{\prod_{k=0}^{m} f_k}\right)$$
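The final formula transcribes directly into code; a sketch that checks it against naive iteration (the coefficient sequences $f_n = n + 1$ and $g_n = 1$ are arbitrary test choices):

```python
from math import prod

def closed_form(f, g, a0, n):
    """a_n = (prod_{k<n} f_k) * (a_0 + sum_{m<n} g_m / prod_{k<=m} f_k)."""
    total = a0 + sum(g(m) / prod(f(k) for k in range(m + 1))
                     for m in range(n))
    return prod(f(k) for k in range(n)) * total

f = lambda n: n + 1  # arbitrary nonzero coefficients f_n
g = lambda n: 1.0    # arbitrary inhomogeneity g_n

n, direct = 8, 2.0   # naive iteration from a_0 = 2
for m in range(n):
    direct = f(m) * direct + g(m)
print(closed_form(f, g, 2.0, n), direct)  # the two should agree
```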

If we apply the formula to $a_{n+1} = (1 + h f_{nh}) a_n + h g_{nh}$ and take the limit $h \to 0$, we get the formula for first-order linear differential equations with variable coefficients; the sum becomes an integral, and the product becomes the exponential function of an integral.

Solving general homogeneous linear recurrence relations

Many homogeneous linear recurrence relations may be solved by means of the generalized hypergeometric series. Special cases of these lead to recurrence relations for the orthogonal polynomials, and many special functions. For example, the solution to

$$J_{n+1} = \frac{2n}{z} J_n - J_{n-1}$$

is given by

$$J_n = J_n(z),$$

the Bessel function, while

$$(b - n) M_{n-1} + (2n - b + z) M_n - n M_{n+1} = 0$$

is solved by

$$M_n = M(n, b; z),$$

the confluent hypergeometric series. Sequences which are the solutions of linear difference equations with polynomial coefficients are called P-recursive. For these specific recurrence equations algorithms are known which find polynomial, rational or hypergeometric solutions.

Solving first-order rational difference equations

A first-order rational difference equation has the form $w_{t+1} = \frac{a w_t + b}{c w_t + d}$. Such an equation can be solved by writing $w_t$ as a nonlinear transformation of another variable $x_t$ which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in $x_t$.
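One standard linearization, shown here as a sketch with arbitrary coefficients: writing $w_t = x_t / y_t$ turns the iteration into multiplication of the pair $(x_t, y_t)$ by the constant matrix with rows $(a, b)$ and $(c, d)$.

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 1.0, 3.0  # arbitrary coefficients
M = np.array([[a, b], [c, d]])

w = w0 = 0.5
v = np.array([w0, 1.0])  # represents w_t as x_t / y_t, starting at (w0, 1)

for t in range(10):
    w = (a * w + b) / (c * w + d)  # nonlinear iteration
    v = M @ v                      # linear iteration
print(w, v[0] / v[1])              # the two should agree
```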

Stability

Stability of linear higher-order recurrences

The linear recurrence of order $d$,

$$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d},$$

has the characteristic equation

$$\lambda^d - c_1 \lambda^{d-1} - c_2 \lambda^{d-2} - \cdots - c_d = 0.$$

The recurrence is stable, meaning that the iterates converge asymptotically to a fixed value, if and only if the eigenvalues (i.e., the roots of the characteristic equation), whether real or complex, are all less than unity in absolute value.

Stability of linear first-order matrix recurrences

In the first-order matrix difference equation

$$[x_t - x^*] = A[x_{t-1} - x^*]$$

with state vector $x$ and transition matrix $A$, $x$ converges asymptotically to the steady state vector $x^*$ if and only if all eigenvalues of the transition matrix $A$ (whether real or complex) have an absolute value which is less than 1.

Stability of nonlinear first-order recurrences

Consider the nonlinear first-order recurrence

$$x_n = f(x_{n-1}).$$

This recurrence is locally stable, meaning that it converges to a fixed point $x^*$ from points sufficiently close to $x^*$, if the slope of $f$ in the neighborhood of $x^*$ is smaller than unity in absolute value: that is,

$$|f'(x^*)| < 1.$$
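For example, the logistic map $f(x) = r x (1 - x)$ has the fixed point $x^* = 1 - 1/r$ with $f'(x^*) = 2 - r$, so the criterion predicts local stability exactly for $1 < r < 3$. A quick numerical sketch:

```python
r = 2.5  # in (1, 3), so the criterion predicts stability
f = lambda x: r * x * (1 - x)

x_star = 1 - 1 / r            # fixed point of the logistic map
slope = r * (1 - 2 * x_star)  # f'(x*) = 2 - r

x = x_star + 0.05  # start near the fixed point
for _ in range(100):
    x = f(x)
print(abs(slope) < 1, abs(x - x_star) < 1e-9)  # both True for r = 2.5
```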

A nonlinear recurrence could have multiple fixed points, in which case some fixed points may be locally stable and others locally unstable; for continuous f two adjacent fixed points cannot both be locally stable.

A nonlinear recurrence relation could also have a cycle of period $k$ for $k > 1$. Such a cycle is stable, meaning that it attracts a set of initial conditions of positive measure, if the composite function

$$g(x) := f \circ f \circ \cdots \circ f(x)$$

with $f$ appearing $k$ times is locally stable according to the same criterion:

$$|g'(x^*)| < 1,$$

where $x^*$ is any point on the cycle.

In a chaotic recurrence relation, the variable $x$ stays in a bounded region but never converges to a fixed point or an attracting cycle; any fixed points or cycles of the equation are unstable. See also logistic map, dyadic transformation, and tent map.

Relationship to differential equations

When solving an ordinary differential equation numerically, one typically encounters a recurrence relation. For example, when solving the initial value problem

$$y'(t) = f(t, y(t)), \quad y(t_0) = y_0,$$

with Euler's method and a step size $h$, one calculates the values

$$y_0 = y(t_0), \quad y_1 = y(t_0 + h), \quad y_2 = y(t_0 + 2h), \ \dots$$

by the recurrence

$$y_{n+1} = y_n + h f(t_n, y_n), \qquad t_n = t_0 + nh.$$
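As a sketch, applying this recurrence to $y' = -y$, $y(0) = 1$ (whose exact solution is $e^{-t}$), with an arbitrary step size:

```python
from math import exp

f = lambda t, y: -y  # y' = -y, exact solution y = e^{-t}
t, y, h = 0.0, 1.0, 0.01

for n in range(100):  # advance to t = 1 via y_{n+1} = y_n + h*f(t_n, y_n)
    y = y + h * f(t, y)
    t = t + h

print(y, exp(-1.0))  # Euler iterate vs. exact value at t = 1
```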

Systems of linear first-order differential equations can be discretized exactly and analytically using the methods shown in the discretization article.

Applications

Biology

Some of the best-known difference equations have their origins in the attempt to model population dynamics. For example, the Fibonacci numbers were once used as a model for the growth of a rabbit population.

The logistic map is used either directly to model population growth, or as a starting point for more detailed models of population dynamics. In this context, coupled difference equations are often used to model the interaction of two or more populations. For example, the Nicholson–Bailey model for a host-parasite interaction is given by

$$N_{t+1} = \lambda N_t e^{-a P_t}$$
$$P_{t+1} = N_t \left(1 - e^{-a P_t}\right),$$

with $N_t$ representing the hosts and $P_t$ the parasites at time $t$.

Integrodifference equations are a form of recurrence relation important to spatial ecology. These and other difference equations are particularly suited to modeling univoltine populations.

Computer science

Recurrence relations are also of fundamental importance in analysis of algorithms.[8] [9] If an algorithm is designed so that it will break a problem into smaller subproblems (divide and conquer), its running time is described by a recurrence relation.

A simple example is the time an algorithm takes to find an element in an ordered vector with $n$ elements, in the worst case.

A naive algorithm will search from left to right, one element at a time. The worst possible scenario is when the required element is the last, so the number of comparisons is $n$.

A better algorithm is called binary search. However, it requires a sorted vector. It will first check if the element is at the middle of the vector. If not, then it will check whether the middle element is greater or less than the sought element. At this point, half of the vector can be discarded, and the algorithm can be run again on the other half. The number of comparisons will be given by

$$c_1 = 1$$
$$c_n = 1 + c_{n/2},$$

the time complexity of which will be $O(\log_2(n))$.
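A sketch that unrolls this recurrence (with integer halving) and compares the count against $\lfloor \log_2 n \rfloor + 1$:

```python
from math import floor, log2

def comparisons(n):
    """Unroll c_1 = 1, c_n = 1 + c_{n//2}."""
    count = 1
    while n > 1:
        n //= 2
        count += 1
    return count

for n in [1, 2, 16, 1000, 10**6]:
    print(n, comparisons(n), floor(log2(n)) + 1)  # grows like log2(n)
```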

Digital signal processing

In digital signal processing, recurrence relations can model feedback in a system, where outputs at one time become inputs for future time. They thus arise in infinite impulse response (IIR) digital filters.

For example, the equation for a feedback IIR comb filter of delay $T$ is:

$$y_t = (1 - \alpha) x_t + \alpha y_{t-T},$$

where $x_t$ is the input at time $t$, $y_t$ is the output at time $t$, and $\alpha$ controls how much of the delayed signal is fed back into the output. From this we can see that

$$y_t = (1 - \alpha) x_t + \alpha \left((1 - \alpha) x_{t-T} + \alpha y_{t-2T}\right)$$
$$y_t = (1 - \alpha) x_t + (\alpha - \alpha^2) x_{t-T} + \alpha^2 y_{t-2T}$$

etc.
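A sketch simulating the filter on a unit impulse; the echoes at multiples of the delay $T$ carry the powers of $\alpha$ visible in the expansion above (the values of $T$ and $\alpha$ are arbitrary):

```python
T, alpha = 4, 0.5
N = 16
x = [1.0] + [0.0] * (N - 1)  # unit impulse input
y = [0.0] * N

for t in range(N):
    delayed = y[t - T] if t >= T else 0.0  # y_{t-T}, zero before the start
    y[t] = (1 - alpha) * x[t] + alpha * delayed

# Nonzero response: (1-alpha) at t=0, alpha*(1-alpha) at t=T,
# alpha^2*(1-alpha) at t=2T, ... matching the expansion.
print(y)
```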

Economics

Recurrence relations, especially linear recurrence relations, are used extensively in both theoretical and empirical economics.[10] [11] In particular, in macroeconomics one might develop a model of various broad sectors of the economy (the financial sector, the goods sector, the labor market, etc.) in which some agents' actions depend on lagged variables. The model would then be solved for current values of key variables (interest rate, real GDP, etc.) in terms of past and current values of other variables.

See also

  • Holonomic sequences
  • Iterated function
  • Orthogonal polynomials
  • Recursion
  • Recursion (computer science)
  • Lagged Fibonacci generator
  • Master theorem (analysis of algorithms)
  • Circle points segments proof
  • Continued fraction
  • Time scale calculus
  • Combinatorial principles
  • Infinite impulse response
  • Integration by reduction formulae
  • Mathematical induction

References

Footnotes

  1. Jacobson, Nathan, Basic Algebra 2 (2nd ed.), § 0.4, p. 16.
  2. Cheng, Sui Sun, Partial Difference Equations, CRC Press, 2003, ISBN 978-0-415-29884-1.
  3. Greene, Daniel H.; Knuth, Donald E. (1982), "2.1.1 Constant coefficients – A) Homogeneous equations", Mathematics for the Analysis of Algorithms (2nd ed.), Birkhäuser, p. 17.
  4. Chiang, Alpha C., Fundamental Methods of Mathematical Economics (3rd ed.), McGraw-Hill, 1984.
  5. Papanicolaou, Vassilis, "On the asymptotic stability of a class of linear difference equations", Mathematics Magazine 69(1), February 1996, pp. 34–43.
  6. Maurer, Stephen B.; Ralston, Anthony (1998), Discrete Algorithmic Mathematics (2nd ed.), A K Peters, p. 609, ISBN 9781568810911.
  7. "Archived copy" (PDF). Archived (PDF) from the original on 2010-07-05. Retrieved 2010-10-19.
  8. Cormen, T.; et al., Introduction to Algorithms, MIT Press, 2009.
  9. Sedgewick, R.; Flajolet, P., An Introduction to the Analysis of Algorithms, Addison-Wesley, 2013.
  10. Stokey, Nancy L.; Lucas, Robert E. Jr.; Prescott, Edward C. (1989). Recursive Methods in Economic Dynamics. Cambridge: Harvard University Press. ISBN 0-674-75096-9.
  11. Ljungqvist, Lars; Sargent, Thomas J. (2004). Recursive Macroeconomic Theory (2nd ed.). Cambridge: MIT Press. ISBN 0-262-12274-X.


External links

  • "Recurrence relation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
  • Weisstein, Eric W. "Recurrence Equation". MathWorld.
  • "OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients)


Source: https://en.wikipedia.org/wiki/Recurrence_relation