Mathematical Methods
Roger Waxler
4/14/2008
Introduction
This course is designed to introduce the basic techniques needed to study the differential equations which arise in acoustics and vibration. There are no generally applicable techniques for solving differential equations. There are specific techniques which work in special cases, but in general solving an arbitrary system of differential equations is considered a hopeless task. Luckily, some of the special cases are of physical interest.

There are several ways that differential equations are classified. The order of a differential equation is the order of the highest derivative appearing in the equation. A differential equation is said to be linear if the unknown functions appear linearly; otherwise the equation is said to be nonlinear. A differential equation in which only one independent variable is differentiated is said to be an ordinary differential equation. If several independent variables are differentiated then these derivatives are necessarily partial derivatives and the equation is said to be a partial differential equation. A differential equation is said to be homogeneous if multiplying the unknown function by a constant has the effect of multiplying the entire equation by a constant.

Example 1.1: The n-dimensional wave equation,
\[
\left( \Delta - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) p = 0,
\]
is a second order linear partial differential equation.
..................
Example 1.2: The n-dimensional plate equation,
\[
\left( \Delta^2 + K \frac{\partial^2}{\partial t^2} \right) u = 0,
\]
is a fourth order linear partial differential equation.
..................
Example 1.3: Assuming a sinusoidal time dependence $p(x, t) = f(x) \sin(\omega t)$ in the wave equation one obtains the n-dimensional Helmholtz equation,
\[
\left( \Delta + \frac{\omega^2}{c^2} \right) f = 0,
\]
a second order linear partial differential equation if $n > 1$. If $n = 1$ it is a second order linear ordinary differential equation.
..................
Example 1.4: Newton's law,
\[
m \frac{d^2 x}{dt^2} = F(x),
\]
is a second order nonlinear ordinary differential equation, except in the special case $F = -kx$ when it becomes a 1-dimensional Helmholtz equation.
..................
Example 1.5: The equations of lossless fluid mechanics, the continuity equation
\[
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,
\]
and the Euler equation,
\[
\rho \left( \frac{\partial}{\partial t} + \mathbf{v} \cdot \nabla \right) \mathbf{v} = -\nabla P,
\]
along with an isentropic equation of state, $\rho = f(P)$, are a system of nonlinear first order partial differential equations. Here $\rho$ is the density, $P$ the pressure and $\mathbf{v}$ the velocity of the fluid at a given point in time and space.
..................

In the above $\nabla$ is the n-dimensional gradient operator, so that
\[
\nabla \cdot \mathbf{v} = \sum_{j=1}^{n} \frac{\partial v_j}{\partial x_j},
\]
and $\Delta$ is the n-dimensional Laplace operator
\[
\Delta = \nabla \cdot \nabla = \sum_{j=1}^{n} \frac{\partial^2}{\partial x_j^2}.
\]

Most of our time will be spent on linear problems. The solutions to a homogeneous linear differential equation satisfy the principle of superposition. Let $L$ be a linear differential operator, so that $Lf = 0$ is a homogeneous linear differential equation. Then for any functions $f_1$ and $f_2$ and constants $a$ and $b$,
\[
L(af_1 + bf_2) = aLf_1 + bLf_2.
\]
In particular, if both $f_1$ and $f_2$ are solutions, $Lf_1 = 0$ and $Lf_2 = 0$, then $af_1 + bf_2$ is also a solution. This is never the case for nonlinear equations.

Inhomogeneous linear differential equations are of the form $Lf = g$ where $g$ is known. If $f_1$ and $f_2$ are solutions of this inhomogeneous problem then $L(f_1 - f_2) = g - g = 0$, so that $f_1$ and $f_2$ differ by a solution of the homogeneous problem $Lf = 0$.

If the solution to a differential equation is to represent a physical quantity it must be real valued. However, it is sometimes more convenient to find complex valued solutions. For a linear equation $Lf = 0$ with $L$ real, if $f$ is a complex valued solution then the real and imaginary parts of $f$ are also solutions, and are real valued.

A large class of solvable (in the sense that they can be reduced to algebraic equations) linear equations are those with constant coefficients. The solutions to such equations are superpositions of exponential functions. That is because
\[
\frac{d}{dx} e^{kx} = k e^{kx},
\]
so that differentiation can be replaced by multiplication by a number, $k$.
Example 1.6: An example of a linear system of ordinary differential equations with constant coefficients is
\[
\left( \frac{d^2}{dx^2} - \frac{d}{dx} + 1 \right) u + 3 \frac{d}{dx} v - \frac{d^2}{dx^2} w = 0
\]
\[
-2 \frac{d}{dx} u + \left( \frac{d^2}{dx^2} - 2 \right) w = 0
\]
\[
\left( -\frac{d^2}{dx^2} + 2 \frac{d}{dx} \right) v + \frac{d}{dx} w = 0.
\]
The general solution will be a superposition of solutions of the form
\[
\begin{pmatrix} u(x) \\ v(x) \\ w(x) \end{pmatrix} = \begin{pmatrix} A \\ B \\ C \end{pmatrix} e^{kx}.
\]
To see what restrictions are placed on the constants $A$, $B$, $C$ and $k$, substitute into the differential equations. One obtains the algebraic equation
\[
\begin{pmatrix} k^2 - k + 1 & 3k & -k^2 \\ -2k & 0 & k^2 - 2 \\ 0 & -k^2 + 2k & k \end{pmatrix} \begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
Thus $k$ must be chosen so that the above matrix has determinant 0, and then, for such $k$, the vector $(A, B, C)^t$ must be an eigenvector of eigenvalue 0.
..................
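As a numerical sketch (our own code, not part of the notes; the helper names are ours), one can locate a value of $k$ at which the determinant of the matrix in Example 1.6 vanishes by bisection, and then read off a null vector $(A, B, C)$ as the cross product of two rows of the singular matrix:

```python
# Sketch: find a root k of det M(k) = 0 for the matrix of Example 1.6,
# then extract the corresponding null vector (A, B, C).

def matrix(k):
    # Coefficient matrix obtained by substituting (A, B, C) e^{kx}
    return [[k*k - k + 1.0, 3.0*k,        -k*k],
            [-2.0*k,        0.0,          k*k - 2.0],
            [0.0,           -k*k + 2.0*k, k]]

def det3(m):
    # Cofactor expansion along the first row
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def bisect(f, a, b, tol=1e-12):
    # Simple bisection; assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        c = 0.5*(a + b)
        if fa * f(c) <= 0:
            b = c
        else:
            a, fa = c, f(c)
    return 0.5*(a + b)

k = bisect(lambda t: det3(matrix(t)), -2.0, -1.0)   # a root near k = -1.3
m = matrix(k)
# A vector orthogonal to the first two rows: cross product.  Since the
# matrix is (numerically) singular, it is also annihilated by the third row.
v = [m[0][1]*m[1][2] - m[0][2]*m[1][1],
     m[0][2]*m[1][0] - m[0][0]*m[1][2],
     m[0][0]*m[1][1] - m[0][1]*m[1][0]]
residual = [sum(m[i][j]*v[j] for j in range(3)) for i in range(3)]
print(abs(det3(m)), max(abs(r) for r in residual))
```

The bisection interval $[-2, -1]$ was found by inspecting the sign of the determinant; other roots (including $k = 0$) can be found the same way.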
Example 1.7: If $c$ is a constant then the n-dimensional wave equation has constant coefficients. The exponential solutions are the so-called plane wave solutions
\[
p(x, t) = A e^{i k \cdot x - i \omega t}.
\]
Substituting into the wave equation one finds that
\[
k \cdot k = \frac{\omega^2}{c^2}.
\]
..................
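A quick numerical check of the dispersion relation (our own code; the numbers are arbitrary): a plane wave should make the wave equation residual vanish when $k \cdot k = \omega^2 / c^2$, which we test with second-order finite differences.

```python
import cmath

# Sketch: p = exp(i(k.x - omega t)) satisfies the wave equation when
# k.k = omega^2/c^2; check the PDE residual by central differences.

c = 2.0
k = [3.0, 4.0]                                # a 2-dimensional wave vector
omega = c * sum(kj*kj for kj in k) ** 0.5     # dispersion relation

def p(x, t):
    phase = sum(kj*xj for kj, xj in zip(k, x)) - omega*t
    return cmath.exp(1j*phase)

def second_diff(f, h=1e-5):
    # central difference approximation to f''(0)
    return (f(h) - 2.0*f(0.0) + f(-h)) / h**2

x0, t0 = [0.3, -0.7], 0.2
lap = (second_diff(lambda s: p([x0[0] + s, x0[1]], t0))
     + second_diff(lambda s: p([x0[0], x0[1] + s], t0)))
ptt = second_diff(lambda s: p(x0, t0 + s))
residual = lap - ptt / c**2
print(abs(residual))
```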
Example 1.8: Acoustics is typically based on a linear approximation to fluid dynamics. Consider an undisturbed fluid at rest with constant density $\rho_0$ and pressure $P_0$ and no velocity. Let primed variables denote small disturbances from this quiescent state:
\[
\rho = \rho_0 + \rho', \qquad P = P_0 + P', \qquad \mathbf{v} = \mathbf{v}'.
\]
Expanding the equations of fluid dynamics in the primed variables and keeping only linear terms one obtains, identifying $\rho_0 = f(P_0)$ and $\rho' = f'(P_0) P'$,
\[
f'(P_0) \frac{\partial P'}{\partial t} + \rho_0 \nabla \cdot \mathbf{v}' = 0
\]
and
\[
\rho_0 \frac{\partial \mathbf{v}'}{\partial t} + \nabla P' = 0.
\]
One finds plane wave solutions
\[
\begin{pmatrix} P' \\ \mathbf{v}' \end{pmatrix} = \begin{pmatrix} A \\ \mathbf{B} \end{pmatrix} e^{i k \cdot x - i \omega t}
\]
with
\[
-f'(P_0)\, \omega A + \rho_0\, k \cdot \mathbf{B} = 0
\qquad \text{and} \qquad
-\rho_0\, \omega \mathbf{B} + A k = 0.
\]
Note first that if $k \cdot \mathbf{B} = 0$ then $A = 0$ and the solution is trivial. Assuming that $k \cdot \mathbf{B} \neq 0$, the second equation implies that
\[
k \cdot k \, A = \rho_0\, \omega\, k \cdot \mathbf{B}.
\]
Substituting into the first equation one finds
\[
\omega^2 = \frac{k \cdot k}{f'(P_0)}.
\]
Any $k$ and $\omega$ satisfying this relation and any $\mathbf{B}$ with $k \cdot \mathbf{B} \neq 0$ leads to a solution with
\[
A = \frac{\rho_0\, \omega\, k \cdot \mathbf{B}}{k \cdot k}.
\]
The factor $\frac{1}{f'(P_0)}$ is identified with the speed of sound squared.
..................
Review of Linear Algebra

In this chapter the basic concepts of linear algebra are reviewed. Most often our field of scalar quantities will be the field of complex numbers $\mathbb{C}$; however, we will sometimes have occasion to restrict our scalars to being real. Thus, when generality seems desirable, the symbol $\mathbb{K}$ will be used to denote either the set of real numbers $\mathbb{R}$ or the set of complex numbers $\mathbb{C}$.

Vector Spaces: Crudely put, a vector space is a set which is closed under linear superposition: for vectors $v$ and $w$, $av + bw$ is also a vector. Here $a$ and $b$ are scalars (ordinary numbers). The precise definition is: $V$ is a vector space over $\mathbb{K}$ if the elements of $V$ can be multiplied by elements of $\mathbb{K}$ and added to each other so that given any $a, b \in \mathbb{K}$ and $v, w \in V$,
\[
av + bw \in V.
\]

Example 2.1: $\mathbb{R}$ is a vector space over itself. $\mathbb{C}$ is both a vector space over itself and over $\mathbb{R}$.
..................

Example 2.2: The set of n-vectors
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
\]
where $x_j \in \mathbb{R}$ is a vector space over $\mathbb{R}$. This space is called n-dimensional Euclidean space and is denoted $\mathbb{R}^n$. Similarly, if $x_j \in \mathbb{C}$ the space is called n-dimensional complex space and is denoted $\mathbb{C}^n$. $\mathbb{C}^n$ is a vector space over both $\mathbb{R}$ and $\mathbb{C}$.
..................
Example 2.3: The set of all complex valued functions of a real variable is a vector space over both $\mathbb{R}$ and $\mathbb{C}$, since if $f$ and $g$ are any functions then $af + bg$ is also a function. Note that a function $f$ is said to be 0, $f = 0$, only if $f(x) = 0$ regardless of $x$.
..................
Given a vector space $V$, $N$ elements of $V$, $v_1, v_2, \ldots, v_N$, are said to be linearly independent if the only linear combination of the vectors that adds up to 0 is the trivial one in which all the coefficients are 0:
\[
a_1 v_1 + a_2 v_2 + \ldots + a_N v_N = 0 \quad \text{only for} \quad a_1 = a_2 = \ldots = a_N = 0.
\]
If $v_1, v_2, \ldots, v_N$ are not linearly independent then there is a linear dependence relation between them. That is, there are coefficients $a_1, a_2, \ldots, a_N$, not all of which are 0, for which
\[
a_1 v_1 + a_2 v_2 + \ldots + a_N v_N = 0.
\]
This means that these vectors are not independent in the sense that any one of them with a nonzero coefficient can be expressed as a linear combination of the rest.

(figure: two pairs of vectors $v$ and $w$, one pair parallel and one pair not)

For $N = 1$ the situation is trivial: one vector is always linearly dependent on itself. For $N = 2$ the vectors $v_1$ and $v_2$ are linearly dependent if they are proportional to each other:
$a_1 v_1 + a_2 v_2 = 0$ implies $v_2 = -\frac{a_1}{a_2} v_1$. Geometrically, representing vectors in the usual way as arrows pointing out from the origin in an n-dimensional Euclidean space, two vectors are linearly dependent if they are parallel. Conversely, while one vector determines a line, two linearly independent vectors determine a plane. Similarly, given two linearly independent vectors $v_1$ and $v_2$ and a third vector $v_3$, there is a linear dependence relation between $v_1$, $v_2$ and $v_3$ only if $v_3$ lies in the plane determined by $v_1$ and $v_2$. Otherwise $v_1$, $v_2$ and $v_3$ determine a 3-dimensional subspace of $V$.

In general, given $v_1, v_2, \ldots, v_N \in V$, the subspace of $V$ given by the set of all linear combinations $a_1 v_1 + a_2 v_2 + \ldots + a_N v_N$ is itself a vector space and is called the vector space spanned by $v_1, v_2, \ldots, v_N$. The largest number of linearly independent vectors among the $v_1, v_2, \ldots, v_N$ is called the dimension of this subspace. It can be shown that in an n-dimensional vector space $V$:

i) Any $n$ linearly independent vectors in $V$ span $V$.

ii) No set of $m$ vectors can span $V$ if $m < n$.

iii) No set of $m$ vectors is linearly independent if $m > n$.

It follows that, given any $n$ linearly independent vectors $b_1, b_2, \ldots, b_n$ in $V$, any vector $v$ can be written as a linear combination of the $b_j$,
\[
v = c_1 b_1 + c_2 b_2 + \ldots + c_n b_n.
\]
Here the $c_j$ are in $\mathbb{K}$. This follows since $V$ is n-dimensional and thus there must be a linear dependence relation between any vector $v$ and the set of the $b_j$'s. A set of $n$ linearly independent vectors in an n-dimensional vector space is called a basis for the vector space.

Example 2.4: $\mathbb{R}^n$ is an n-dimensional vector space over $\mathbb{R}$ since the vectors
\[
\begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad
\begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}
\]
are linearly independent. These $n$ vectors form a basis for $\mathbb{R}^n$ known as the standard basis. A non-standard basis for $\mathbb{R}^3$ is
\[
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad
\begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix}, \quad
\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}.
\]
Similarly $\mathbb{C}^n$ is an n-dimensional vector space over $\mathbb{C}$. However, $\mathbb{C}^n$ is also a 2n-dimensional vector space over $\mathbb{R}$ since the $2n$ vectors
\[
\begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad
\begin{pmatrix} i \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ i \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ i \end{pmatrix}
\]
are linearly independent.
..................
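As a small computational sketch (our own code; the non-standard basis of $\mathbb{R}^3$ from Example 2.4 is as reconstructed above), three vectors form a basis exactly when the matrix with those vectors as columns has nonzero determinant, and any $v$ then has unique coordinates in that basis:

```python
# Sketch: check linear independence via a 3x3 determinant and expand a
# sample vector v in the non-standard basis by Gaussian elimination.

B = [[0.0, -1.0,  1.0],    # the basis vectors are the COLUMNS of B
     [0.0,  2.0,  1.0],
     [1.0,  0.0, -1.0]]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def solve3(A, b):
    # Gaussian elimination with partial pivoting on a 3x3 system A x = b
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 4):
                M[r][j] -= f * M[col][j]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][j]*x[j] for j in range(r + 1, 3))) / M[r][r]
    return x

print(det3(B))                  # nonzero, so the columns are a basis
c = solve3(B, [1.0, 1.0, 1.0])  # coordinates of v = (1, 1, 1)
check = [sum(B[i][j]*c[j] for j in range(3)) for i in range(3)]
```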
Example 2.5: The set of polynomials of degree at most $n$ with complex coefficients,
\[
a_0 + a_1 x + a_2 x^2 + \ldots + a_n x^n,
\]
is an $(n+1)$-dimensional vector space over $\mathbb{C}$, since the monomials $1, x, x^2, \ldots, x^n$ are a basis for this set. Note that the set of all polynomials with complex coefficients, without restriction on degree, is infinite dimensional over $\mathbb{C}$.
..................
An inner product on a vector space $V$ is a function of two vectors, say $v$ and $w$, usually denoted by $\langle v, w \rangle$, which satisfies:

i) $\langle v, w \rangle \in \mathbb{K}$

ii) $\langle v, w \rangle = \overline{\langle w, v \rangle}$

iii) $\langle v, a w_1 + b w_2 \rangle = a \langle v, w_1 \rangle + b \langle v, w_2 \rangle$

iv) $\langle v, v \rangle > 0$ if $v \neq 0$.

Here $\bar{z}$ is the complex conjugate of $z$, $a$ and $b$ are in $\mathbb{K}$, and $w_1$ and $w_2$ are in $V$. Inner products are generalizations of the dot product in $\mathbb{R}^n$. In fact, for $x, y \in \mathbb{R}^n$, $\langle x, y \rangle = x \cdot y$ is an inner product. In $\mathbb{C}^n$ the standard inner product is $\langle x, y \rangle = \bar{x} \cdot y$. Once one has an inner product one can speak of orthogonality. Two vectors $v$, $w$ are orthogonal if $\langle v, w \rangle = 0$. Similarly, the norm of a vector is $\|v\| = \sqrt{\langle v, v \rangle}$. An orthonormal basis for a vector space is a basis whose elements are all mutually orthogonal and whose norms are all 1. Recall that in the case of the standard inner product in $\mathbb{R}^n$ orthogonality means geometrically perpendicular and norm means geometric length: $v \cdot w = \|v\| \|w\| \cos\theta$, where $\theta$ is the angle between $v$ and $w$.
If $b_1, \ldots, b_n$ is an orthonormal basis then $\langle b_j, b_k \rangle = \delta_{j,k}$. The nice thing about an orthonormal basis is that if one wishes to express any vector $v$ as a linear superposition of the $b_j$,
\[
v = c_1 b_1 + c_2 b_2 + \ldots + c_n b_n,
\]
then the coefficients $c_j$ are easy to find: $c_j = \langle b_j, v \rangle$.

Example 2.6: Let $f$ and $g$ be functions on $[-1, 1]$. Define
\[
\langle f, g \rangle = \int_{-1}^{1} \overline{f(x)}\, g(x)\, dx.
\]
It's easy to check that this is an inner product on the vector space of functions on $[-1, 1]$. Note that with respect to this inner product the functions
\[
s_n(x) = \sin(n \pi x), \qquad c_n(x) = \cos(n \pi x),
\]
for $n = 1, 2, 3, \ldots$, and $c_0 = \frac{1}{\sqrt{2}}$ are all mutually orthogonal. Further, they all have norm 1. The basic fact of Fourier analysis is that the span of these functions is a large enough space of functions to satisfy all of our needs. Thus, they can be used as if they were an ordinary orthonormal basis. The precise statement is that if $\|f\|$ is finite then, given the Fourier coefficients
\[
a_n = \int_{-1}^{1} c_n(x) f(x)\, dx, \qquad b_n = \int_{-1}^{1} s_n(x) f(x)\, dx,
\]
one has
\[
\lim_{N \to \infty} \left\| f - \sum_{n=0}^{N} \left( a_n c_n + b_n s_n \right) \right\| = 0.
\]
The way to read this expression is that any $f$ with finite norm can be well approximated, with respect to this norm, by superpositions of sine and cosine functions. The difference between equality with respect to this norm and actual equality is that the Fourier series can differ from the function, but only on a set which is so small that it doesn't affect the integral giving the norm (most often an isolated point). This is the origin of the Gibbs phenomenon.

It turns out to be true in a similar sense that any function with finite norm can be well approximated by polynomials. However, the monomials $f_n(x) = x^n$ are neither mutually orthogonal nor of norm 1. They can be orthogonalized by the Gram-Schmidt procedure and then normalized (by dividing an unnormalized function by its norm). The standard choice of orthogonal polynomials with respect to this norm are known as the Legendre polynomials. The first few are
\[
P_0(x) = 1
\]
\[
P_1(x) = x
\]
\[
P_2(x) = \frac{1}{2}(3x^2 - 1)
\]
\[
P_3(x) = \frac{1}{2}(5x^3 - 3x)
\]
\[
P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3)
\]
and so on. These functions are not normalized,
\[
\int_{-1}^{1} P_j(x)^2\, dx = \frac{2}{2j + 1},
\]
but can be used as if they formed an orthogonal basis.
..................
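The orthogonality and normalization claims above are easy to check numerically. A sketch in our own code, using a composite Simpson rule for the integrals:

```python
# Check: the first five Legendre polynomials are mutually orthogonal on
# [-1, 1] and satisfy <P_j, P_j> = 2/(2j + 1).

P = [lambda x: 1.0,
     lambda x: x,
     lambda x: 0.5*(3.0*x**2 - 1.0),
     lambda x: 0.5*(5.0*x**3 - 3.0*x),
     lambda x: (35.0*x**4 - 30.0*x**2 + 3.0)/8.0]

def integrate(f, a=-1.0, b=1.0, n=4000):
    # composite Simpson rule (n must be even)
    h = (b - a)/n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i*h)
    return s*h/3.0

max_err = 0.0
for j in range(5):
    for m in range(5):
        val = integrate(lambda x: P[j](x)*P[m](x))
        exact = 2.0/(2*j + 1) if j == m else 0.0
        max_err = max(max_err, abs(val - exact))
print(max_err)
```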
Matrices and Linear Operators: Matrices arise in several ways. The most straightforward way is from systems of linear equations: a system of $m$ linear equations for $n$ unknowns $x_1, x_2, \ldots, x_n$,
\[
\sum_{k=1}^{n} a_{jk} x_k = b_j
\]
for $j = 1, 2, \ldots, m$, can be written as
\[
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} =
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix},
\]
or $Ax = b$ where $A$ is the obvious $m \times n$ matrix, $x \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$. This looks very much like the 1-dimensional linear equation $ax = b$ and should be thought of (as much as possible) in the same way.
Given an $m \times n$ matrix $A$, the transpose of $A$, $A^t$, is the $n \times m$ matrix obtained by exchanging rows for columns,
\[
A^t = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix}.
\]
The adjoint of $A$, $A^*$, is the complex conjugate of the transpose,
\[
A^* = \begin{pmatrix} \bar{a}_{11} & \bar{a}_{21} & \cdots & \bar{a}_{m1} \\ \bar{a}_{12} & \bar{a}_{22} & \cdots & \bar{a}_{m2} \\ \vdots & \vdots & & \vdots \\ \bar{a}_{1n} & \bar{a}_{2n} & \cdots & \bar{a}_{mn} \end{pmatrix}.
\]
We will restrict our attention to the case $m = n$, in which there are as many equations as unknowns. By analogy with the one dimensional case we would like to express the solution of $Ax = b$ as $x = A^{-1} b$, if this can be done. Clearly this procedure works if $A$ is invertible in the sense that there is a matrix $A^{-1}$ with $A^{-1} A = I$ (here $I$ is the $n \times n$ identity matrix, with 1 on the diagonal and 0 elsewhere). If $A$ is invertible then the only solution to $Ax = 0$ is the trivial solution $x = 0$. The converse turns out to be true as well: if the only solution to $Ax = 0$ is the trivial solution $x = 0$ then $A$ is invertible. Also, it can be shown that $A^{-1} A = I$ implies that $A A^{-1} = I$, from which it follows that $(A^t)^{-1} = (A^{-1})^t$.

If one writes the columns of $A$ as vectors
\[
a_j = \begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{nj} \end{pmatrix}
\]
then
\[
Ax = x_1 a_1 + x_2 a_2 + \ldots + x_n a_n.
\]
Thus $Ax = 0$ for $x \neq 0$ is a linear dependence relation between the columns of $A$. Similarly, $A^t x = 0$ for $x \neq 0$ is a linear dependence relation between the rows of $A$ (note that if the rows of $A$ are not linearly independent then the $n$ equations in the linear system $Ax = b$ are not independent). It follows that $A$ is invertible if either its columns or its rows are linearly independent.
A final criterion for the invertibility of $A$ is based on the determinant. The determinant of an $n \times n$ matrix, $\det A$, may be defined inductively as follows. If $n = 1$, so that $A$ is just a number, let $\det A = A$. If $n > 1$, let $A^{(jk)}$ be the $(n-1) \times (n-1)$ matrix obtained by removing the $j$th row and $k$th column from $A$. Then define
\[
\det A = \sum_{k=1}^{n} (-1)^{1+k} a_{1k} \det A^{(1k)}.
\]
Properties of the determinant are
\[
\det(AB) = \det A \det B.
\]
If $A'$ is obtained from $A$ by exchanging neighboring rows then $\det A' = -\det A$. Similarly, if $A'$ is obtained from $A$ by exchanging neighboring columns then $\det A' = -\det A$.

The determinant is a bit abstract. Unfortunately it arises often. For example, it turns out that $A$ is invertible if and only if $\det A \neq 0$. In fact, if $\det A \neq 0$, there is an explicit formula for $A^{-1}$. If
\[
A^{-1} = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2n} \\ \vdots & \vdots & & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nn} \end{pmatrix}
\]
then
\[
\alpha_{jk} = \frac{(-1)^{j+k} \det A^{(kj)}}{\det A}.
\]
To summarize: the following statements are equivalent.

i) For any $b \in \mathbb{C}^n$ the equation $Ax = b$ has a solution.

ii) $A$ is invertible.

iii) The columns of $A$ are linearly independent.

iv) The rows of $A$ are linearly independent.

v) $\det A \neq 0$.

vi) The only solution of $Ax = 0$ is the trivial solution $x = 0$.

Example 2.7: The system of equations
\[
2x - y + z = 1
\]
\[
-x + 2y = -1
\]
\[
y - z = 2
\]
is equivalent to
\[
\begin{pmatrix} 2 & -1 & 1 \\ -1 & 2 & 0 \\ 0 & 1 & -1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}.
\]
Using the formula one finds that
\[
\det \begin{pmatrix} 2 & -1 & 1 \\ -1 & 2 & 0 \\ 0 & 1 & -1 \end{pmatrix} = -4 + 1 - 1 = -4
\]
and
\[
\begin{pmatrix} 2 & -1 & 1 \\ -1 & 2 & 0 \\ 0 & 1 & -1 \end{pmatrix}^{-1} = -\frac{1}{4} \begin{pmatrix} -2 & 0 & -2 \\ -1 & -2 & -1 \\ -1 & -2 & 3 \end{pmatrix},
\]
so that
\[
\begin{pmatrix} x \\ y \\ z \end{pmatrix} = -\frac{1}{4} \begin{pmatrix} -2 & 0 & -2 \\ -1 & -2 & -1 \\ -1 & -2 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} = \begin{pmatrix} \frac{3}{2} \\ \frac{1}{4} \\ -\frac{7}{4} \end{pmatrix}.
\]
..................
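Example 2.7 can be reproduced in exact arithmetic. A sketch in our own code, using the cofactor formula $\alpha_{jk} = (-1)^{j+k} \det A^{(kj)} / \det A$ with Python's `fractions` module:

```python
from fractions import Fraction as F

# Reproduce Example 2.7 exactly via the cofactor formula for the inverse.

A = [[F(2), F(-1), F(1)],
     [F(-1), F(2), F(0)],
     [F(0),  F(1), F(-1)]]
b = [F(1), F(-1), F(2)]

def minor(m, i, j):
    # remove row i and column j
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

# expansion along the first row (0-indexed, so the sign is (-1)^k)
detA = sum((-1)**k * A[0][k] * det2(minor(A, 0, k)) for k in range(3))
Ainv = [[(-1)**(j + k) * det2(minor(A, k, j)) / detA for k in range(3)]
        for j in range(3)]
x = [sum(Ainv[i][j]*b[j] for j in range(3)) for i in range(3)]
print(detA, x)
```

Running this gives the determinant $-4$ and the solution $(3/2,\ 1/4,\ -7/4)$, matching the example.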
Example 2.8: If $b_1, \ldots, b_n$ is a basis for a vector space $V$ with inner product $\langle \cdot, \cdot \rangle$, to find the coefficients $c_j$ in the expansion $v = c_1 b_1 + c_2 b_2 + \ldots + c_n b_n$ one must solve the $n$ equations
\[
\langle b_j, v \rangle = \sum_{k=1}^{n} \langle b_j, b_k \rangle\, c_k.
\]
It follows that
\[
\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} =
\begin{pmatrix} \langle b_1, b_1 \rangle & \cdots & \langle b_1, b_n \rangle \\ \vdots & & \vdots \\ \langle b_n, b_1 \rangle & \cdots & \langle b_n, b_n \rangle \end{pmatrix}^{-1}
\begin{pmatrix} \langle b_1, v \rangle \\ \vdots \\ \langle b_n, v \rangle \end{pmatrix}.
\]
Note that the condition that the matrix in this last expression be invertible is equivalent to the linear independence of the basis vectors.
..................
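A minimal numerical sketch of Example 2.8 (our own code; the basis and vector are arbitrary choices): build the matrix of inner products $\langle b_j, b_k \rangle$ (the Gram matrix), solve for the coefficients, and verify the expansion.

```python
# Expansion coefficients in a non-orthonormal basis of R^2 via the
# Gram matrix, as in Example 2.8.

b1, b2 = [1.0, 1.0], [1.0, 0.0]      # a basis of R^2, not orthonormal
v = [3.0, 1.0]

def dot(x, y):
    return sum(xi*yi for xi, yi in zip(x, y))

# Gram matrix G_{jk} = <b_j, b_k> and right-hand side <b_j, v>
G = [[dot(b1, b1), dot(b1, b2)],
     [dot(b2, b1), dot(b2, b2)]]
rhs = [dot(b1, v), dot(b2, v)]

# Solve the 2x2 system G c = rhs by the explicit inverse
detG = G[0][0]*G[1][1] - G[0][1]*G[1][0]
c = [( G[1][1]*rhs[0] - G[0][1]*rhs[1]) / detG,
     (-G[1][0]*rhs[0] + G[0][0]*rhs[1]) / detG]

# Verify: v = c1 b1 + c2 b2
recon = [c[0]*b1[i] + c[1]*b2[i] for i in range(2)]
print(c, recon)
```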
A linear transformation $T$ on a vector space $V$ is a function for which $Tv \in V$ for all $v \in V$ and which is linear,
\[
T(av + bw) = aTv + bTw.
\]
Given a basis $b_1, b_2, \ldots, b_n$ for $V$, any $v \in V$ can be written $v = c_1 b_1 + c_2 b_2 + \ldots + c_n b_n$. If $T$ is linear one has
\[
Tv = c_1 T b_1 + c_2 T b_2 + \ldots + c_n T b_n.
\]
But $T b_k \in V$ can be written as a superposition of the $b_j$,
\[
T b_k = a_{1k} b_1 + a_{2k} b_2 + \ldots + a_{nk} b_n,
\]
so that
\[
Tv = \sum_{jk} a_{jk} c_k\, b_j.
\]
It follows that
\[
Tv = \sum_{j} \alpha_j b_j
\]
with
\[
\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} =
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}.
\]
The matrix in this last expression is the matrix representing $T$ in the basis $b_1, b_2, \ldots, b_n$.

Example 2.9: Let $V$ be the vector space of polynomials of degree 3 or less. Then the derivative operator is a linear transformation from $V$ into itself. Explicitly, if
\[
p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3
\]
is an element of $V$ then the derivative of $p$ is
\[
\frac{d}{dx} p(x) = a_1 + 2 a_2 x + 3 a_3 x^2.
\]
With respect to the basis $\{1, x, x^2, x^3\}$ one has
\[
\frac{d}{dx} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
in the sense that if one specifies a third degree polynomial by listing the coefficients of the various powers in a 4-vector, arranged in increasing powers beginning with the constant term,
\[
p(x) = \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix},
\]
then one has
\[
\frac{d}{dx} p(x) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} a_1 \\ 2 a_2 \\ 3 a_3 \\ 0 \end{pmatrix}.
\]
..................
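The matrix representation in Example 2.9 can be exercised directly. A short sketch in our own code, with an arbitrary cubic:

```python
# The matrix representing d/dx on cubics in the basis {1, x, x^2, x^3},
# applied to the coefficient vector of p(x) = a0 + a1 x + a2 x^2 + a3 x^3.

D = [[0, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]

def apply(M, v):
    return [sum(M[i][j]*v[j] for j in range(4)) for i in range(4)]

a = [5, -1, 2, 4]          # p(x) = 5 - x + 2x^2 + 4x^3
print(apply(D, a))         # [-1, 4, 12, 0], i.e. p'(x) = -1 + 4x + 12x^2
```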
A vector $v$ spans a 1-dimensional subspace. Eigenvectors of a linear transformation $T$ are vectors which, under the action of $T$, are not moved out of the subspace they span. That means that there is some number $\lambda$ for which $Tv = \lambda v$; $\lambda$ is an eigenvalue of $T$ and $v$ is an eigenvector corresponding to $\lambda$.

Now consider matrices $A$ with complex matrix elements $a_{jk}$. Recall that the adjoint of $A$, $A^*$, is the matrix with matrix elements $\bar{a}_{kj}$, that is, the complex conjugate of the transpose of $A$. There is a class of matrices, called normal, which satisfy $A^* A = A A^*$. The importance of normal matrices is that their eigenvectors can be chosen to give an orthonormal basis. While the concept of a normal matrix is a bit abstract, for us it will be sufficient to note that self-adjoint matrices, those for which $A^* = A$, are normal. Thus any self-adjoint $n \times n$ matrix has $n$ orthonormal eigenvectors which can be used as a basis.
Note that given a self-adjoint matrix $A$ and eigenvectors $b_1, b_2, \ldots, b_n$ chosen to be orthonormal, one can form the matrix $U$ whose columns are the $b_j$. Then one finds that
\[
U^* A U = \begin{pmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 & \cdots & 0 \\ 0 & 0 & \lambda_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_n \end{pmatrix}
\]
where $\lambda_j$ is the eigenvalue corresponding to $b_j$. Further, $U^* U = I$, so that $\langle U v, U w \rangle = \langle v, w \rangle$. Given any vector $v$ one has
\[
v = \sum_{j=1}^{n} \langle b_j, v \rangle\, b_j.
\]
But
\[
U^* v = \begin{pmatrix} \langle b_1, v \rangle \\ \vdots \\ \langle b_n, v \rangle \end{pmatrix},
\]
so that $U^* v$ is the vector of the coefficients of the expansion of $v$ with respect to the basis of the $b_j$. Noting that $U^*(Av) = (U^* A U)(U^* v)$, one sees that $U^* A U$ is the matrix representation of $A$ with respect to the basis of eigenvectors $b_j$.

Note that finding eigenvalues is equivalent to asking if there is any $\lambda$ with $(A - \lambda I)v = 0$ for some $v \neq 0$. In turn, this is possible only if the secular equation,
\[
\det(A - \lambda I) = 0,
\]
is satisfied. In general this is an $n$th degree polynomial equation for $\lambda$ which has at most $n$ roots.

Example 2.10: Consider the $2 \times 2$ matrix
\[
\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}.
\]
The secular equation is
\[
(1 - \lambda)(2 - \lambda) - 1 = \lambda^2 - 3\lambda + 1 = 0,
\]
which has roots
\[
\lambda_\pm = \frac{3}{2} \pm \frac{\sqrt{5}}{2}.
\]
To find eigenvectors we need to solve
\[
\begin{pmatrix} 0 \\ 0 \end{pmatrix} = \left( \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} - \left( \frac{3}{2} \pm \frac{\sqrt{5}}{2} \right) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -\frac{1}{2} \mp \frac{\sqrt{5}}{2} & -1 \\ -1 & \frac{1}{2} \mp \frac{\sqrt{5}}{2} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.
\]
Although this is two equations, they are not independent (since the determinant of the matrix is zero). It follows that
\[
\left( -\frac{1}{2} \mp \frac{\sqrt{5}}{2} \right) x - y = 0.
\]
The eigenvectors can be chosen to be
\[
\begin{pmatrix} 1 \\ -\frac{1}{2} - \frac{\sqrt{5}}{2} \end{pmatrix}, \qquad \begin{pmatrix} 1 \\ -\frac{1}{2} + \frac{\sqrt{5}}{2} \end{pmatrix},
\]
the first corresponding to $\lambda_+$, the second to $\lambda_-$.
..................
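As a numerical sketch (our own code), the eigenpairs of the self-adjoint matrix in Example 2.10 can be checked directly, including the orthogonality of the two eigenvectors:

```python
import math

# Eigenvalues of [[1, -1], [-1, 2]] from the secular equation
# lambda^2 - 3 lambda + 1 = 0, with checks that A v = lambda v and that
# the two eigenvectors are orthogonal (the matrix is self-adjoint).

A = [[1.0, -1.0],
     [-1.0, 2.0]]

lam_plus = 1.5 + math.sqrt(5.0)/2.0
lam_minus = 1.5 - math.sqrt(5.0)/2.0

def eigvec(lam):
    # from the first row: (1 - lam) x - y = 0, so take x = 1, y = 1 - lam
    return [1.0, 1.0 - lam]

def residual(lam):
    v = eigvec(lam)
    Av = [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]
    return max(abs(Av[i] - lam*v[i]) for i in range(2))

vp, vm = eigvec(lam_plus), eigvec(lam_minus)
ortho = vp[0]*vm[0] + vp[1]*vm[1]
print(residual(lam_plus), residual(lam_minus), ortho)
```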
Example 2.11: The matrix
\[
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\]
is a simple example of a $2 \times 2$ matrix which has only one linearly independent eigenvector. The secular equation is $(1 - \lambda)^2 = 0$, so that there is only one eigenvalue, $\lambda = 1$. The eigenvectors must satisfy
\[
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\]
so that $b = 0$. Thus all eigenvectors are of the form
\[
a \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\]
which is, up to scalar multiples, one vector.
..................
Review of Calculus of Several Variables

Differentiation: Derivatives of functions of one variable are straightforward to define:
\[
\frac{df}{dx} = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.
\]
Other notation for the derivative is $f'(x)$. Note that the equation for the line tangent to the graph $y = f(x)$ at the point $(a, f(a))$ is
\[
y = f(a) + f'(a)(x - a).
\]
For functions of several variables this definition won't work, since $h$ would have to be a vector and one can't divide by vectors. What is straightforward to define are the partial derivatives
\[
\frac{\partial f}{\partial x_j} = \lim_{h \to 0} \frac{f(x_1, \ldots, x_{j-1}, x_j + h, x_{j+1}, \ldots, x_n) - f(x_1, \ldots, x_n)}{h}.
\]
Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be a vector valued function. The 1-dimensional definition is usually generalized by introducing an $m \times n$ matrix $Df(x)$, depending on $x$, for which
\[
\lim_{h \to 0} \frac{1}{\|h\|} \left\| f(x + h) - f(x) - Df(x)\, h \right\| = 0.
\]
If such a matrix exists, it turns out there can be only one, and $Df(x)$ is called the derivative of $f$. Note that the derivative matrix gives the best linear approximation to $f$ about $x$. Luckily, if the derivative of $f$ exists at a point $x$,
\[
Df(x) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix}.
\]
The partial derivative matrix might exist even if the derivative does not, but we will not have occasion to deal with such pathologies. Further, if they are continuous, then mixed partial derivatives are equal:
\[
\frac{\partial^2 f}{\partial x_j \partial x_k} = \frac{\partial^2 f}{\partial x_k \partial x_j}.
\]
Note that if $f : \mathbb{R}^n \to \mathbb{R}$ then
\[
Df(x)^t = \nabla f.
\]
Let $f : \mathbb{R}^n \to \mathbb{R}^m$ and $g : \mathbb{R}^k \to \mathbb{R}^n$ be functions. Recall the chain rule,
\[
\frac{\partial}{\partial x_j} f\big(g(x)\big) = \sum_{k=1}^{n} \frac{\partial f}{\partial x_k}\big(g(x)\big)\, \frac{\partial g_k}{\partial x_j}.
\]
In matrix notation the chain rule has an appealing form. Recall the notation $(f \circ g)(x) = f(g(x))$. The chain rule becomes matrix multiplication,
\[
D(f \circ g)(x) = Df\big(g(x)\big)\, Dg(x).
\]

Example 3.1: Consider a particle moving under the influence of a potential energy $V(x)$. Let $x(t)$ be the curve describing the particle's trajectory; $t$ is time. Recall that the force felt by the particle is minus the gradient of $V$,
\[
F = -\nabla V,
\]
while the velocity with which the particle moves is $v(t) = \frac{dx(t)}{dt}$. The rate of change of potential energy with time is
\[
\frac{d}{dt} V\big(x(t)\big) = \nabla V(x) \cdot \frac{dx(t)}{dt} = -F \cdot v.
\]
The quantity $F \cdot v$ is known as the mechanical power gained by the particle.
..................
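A quick numerical illustration of the chain rule in Example 3.1 (our own code; the potential and trajectory are arbitrary choices, and we use the sign convention $F = -\nabla V$): the time derivative of $V$ along a trajectory equals $\nabla V \cdot dx/dt$.

```python
import math

# Check: for V(x) = x1^2 + x2^2 and a smooth trajectory x(t), the chain
# rule gives dV/dt = grad V . dx/dt (i.e. minus the mechanical power F.v).

def V(x):
    return x[0]**2 + x[1]**2

def traj(t):
    # an arbitrary smooth trajectory, chosen only for illustration
    return [math.cos(t), math.sin(2.0*t)]

def d_dt(f, t, h=1e-6):
    # central difference approximation to f'(t)
    return (f(t + h) - f(t - h)) / (2.0*h)

t0 = 0.7
x0 = traj(t0)
gradV = [2.0*x0[0], 2.0*x0[1]]
v = [d_dt(lambda t: traj(t)[0], t0), d_dt(lambda t: traj(t)[1], t0)]
lhs = d_dt(lambda t: V(traj(t)), t0)          # dV/dt along the trajectory
rhs = sum(g*vi for g, vi in zip(gradV, v))    # grad V . v
print(abs(lhs - rhs))
```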
Example 3.2: Let $f : \mathbb{R}^n \to \mathbb{R}$ be a function of $n$ variables and let $x_0 \in \mathbb{R}^n$. The direction of $\nabla f(x_0)$ is the direction of greatest increase of $f$ at $x_0$. The level surface of $f$ through $x_0$ is orthogonal to $\nabla f(x_0)$. In particular, $\nabla f(x)$ is normal to the surface given by $f(x) = 0$. To see this one need only consider, for a unit vector $\hat{d}$, the curve $x(s) = x_0 + s \hat{d}$. Then
\[
\frac{d f(x_0 + s \hat{d})}{ds} \bigg|_{s=0} = \nabla f(x_0) \cdot \hat{d}.
\]
Clearly this is maximal when $\hat{d}$ is in the same direction as $\nabla f(x_0)$, and is zero when $\hat{d} \perp \nabla f(x_0)$.
..................
Example 3.3: The formalism of Thermodynamics consists of arcane notation for variants of the chain rule. In the simplest situations three macroscopic quantities are used to specify the state of a system: pressure $P$, temperature $T$ and density $\rho$. In general an equation connecting these three variables, called an equation of state, is either derived or inferred from the microscopic properties of the system. The equation of state can generally be written
\[
F(P, \rho, T) = 0
\]
for some function $F$. In the case of an ideal gas
\[
F(P, \rho, T) = P - \rho R T,
\]
where $R$ is the gas constant (Boltzmann's constant divided by particle mass), and one has the familiar
\[
P = \rho R T.
\]
In particular, only two of the variables $P$, $\rho$, $T$ are independent; however, one is free to decide which. For example, in an ideal gas, if one decides that $P$ and $T$ should be the independent variables then $\rho$ is given by $\frac{P}{RT}$.

It is common in thermodynamics to use a differential relation to express the equation of state. Imagine that the state of the system varies with some parameter $t$. Then through the equation of state the derivatives of $P$, $T$ and $\rho$ are related,
\[
\frac{d}{dt} F(P, \rho, T) = \frac{\partial F}{\partial P} \frac{dP}{dt} + \frac{\partial F}{\partial T} \frac{dT}{dt} + \frac{\partial F}{\partial \rho} \frac{d\rho}{dt} = 0.
\]
If, as an example, one decides that $P$ and $T$ should be the independent variables, then $\rho$ becomes a function of $P$ and $T$ for which
\[
\frac{d\rho}{dt} = -\frac{\partial F / \partial P}{\partial F / \partial \rho} \frac{dP}{dt} - \frac{\partial F / \partial T}{\partial F / \partial \rho} \frac{dT}{dt}.
\]
By the chain rule
\[
-\frac{\partial F / \partial P}{\partial F / \partial \rho} = \frac{\partial \rho}{\partial P}
\qquad \text{and} \qquad
-\frac{\partial F / \partial T}{\partial F / \partial \rho} = \frac{\partial \rho}{\partial T}.
\]
In thermodynamics these equations are given the shorthand notation
\[
d\rho = \left( \frac{\partial \rho}{\partial P} \right)_T dP + \left( \frac{\partial \rho}{\partial T} \right)_P dT
\]
with
\[
\left( \frac{\partial \rho}{\partial P} \right)_T = -\frac{\partial F / \partial P}{\partial F / \partial \rho}
\qquad \text{and} \qquad
\left( \frac{\partial \rho}{\partial T} \right)_P = -\frac{\partial F / \partial T}{\partial F / \partial \rho}.
\]
In general, the notation $\left( \frac{\partial f}{\partial x} \right)_y$ means: choose $x$ and $y$ as the independent variables, solve the equation of state to express the remaining variable as a function of $x$ and $y$, substitute into $f$, and then differentiate with respect to $x$. Of course, if one can explicitly solve the equation of state there is no need to use these implicit relations. For the case of the ideal gas law one finds
\[
\frac{d\rho}{\rho} = \frac{dP}{P} - \frac{dT}{T}.
\]

Now consider some quantity $G(P, \rho, T)$; $G$ might be the entropy or the total energy. Then
\[
\frac{d}{dt} G(P, \rho, T) = \frac{\partial G}{\partial P} \frac{dP}{dt} + \frac{\partial G}{\partial T} \frac{dT}{dt} + \frac{\partial G}{\partial \rho} \frac{d\rho}{dt}.
\]
But $\frac{d\rho}{dt}$ can be expressed in terms of $\frac{dP}{dt}$ and $\frac{dT}{dt}$ as above, so that one has
\[
\frac{d}{dt} G(P, \rho, T) = \left( \frac{\partial G}{\partial P} + \frac{\partial G}{\partial \rho} \frac{\partial \rho}{\partial P} \right) \frac{dP}{dt} + \left( \frac{\partial G}{\partial T} + \frac{\partial G}{\partial \rho} \frac{\partial \rho}{\partial T} \right) \frac{dT}{dt},
\]
generally written in the shorthand notation
\[
dG = \left( \frac{\partial G}{\partial P} \right)_T dP + \left( \frac{\partial G}{\partial T} \right)_P dT
\]
with
\[
\left( \frac{\partial G}{\partial P} \right)_T = \frac{\partial G}{\partial P} + \frac{\partial G}{\partial \rho} \frac{\partial \rho}{\partial P}
\qquad \text{and} \qquad
\left( \frac{\partial G}{\partial T} \right)_P = \frac{\partial G}{\partial T} + \frac{\partial G}{\partial \rho} \frac{\partial \rho}{\partial T}.
\]
In practice, one often solves for the dependent variable and substitutes into $G$, obtaining a new function of 2 variables, for example $G(P, \frac{P}{RT}, T)$, which can be differentiated without the chain rule.

A function which arises in Acoustics is the entropy $S = S(P, \rho, T)$. Choosing $P$ and $\rho$ for the independent variables one has
\[
S = S\Big(P, \rho, \frac{P}{R\rho}\Big).
\]
Making the approximation that acoustic processes are isentropic (that is, occur without a change in entropy) one has
\[
S\Big(P, \rho, \frac{P}{R\rho}\Big) = \text{const}.
\]
This is now a relation between $\rho$ and $P$. Solving, if one can, gives $\rho = f(P)$ for some function $f$, or, if one prefers, $P = g(\rho)$ for some function $g$. The quantity
\[
g'(\rho_0) = \left( \frac{\partial P_0}{\partial \rho_0} \right)_S
\]
arises in linear acoustics (Example 1.8) as the speed of sound squared. Note that
\[
f'(P_0) = \left( \frac{\partial \rho_0}{\partial P_0} \right)_S = \frac{1}{g'(\rho_0)}.
\]
This quantity can be obtained without producing explicit functions $g(\rho)$ or $f(P)$. Note that the ideal gas law implies that
\[
\frac{dP}{P} = \frac{d\rho}{\rho} + \frac{dT}{T}.
\]
The chain rule gives
\[
dS = \left( \frac{\partial S}{\partial P} \right)_\rho dP + \left( \frac{\partial S}{\partial \rho} \right)_P d\rho.
\]
If the entropy is constant then any derivative of $S$ is 0. In particular,
\[
\left( \frac{\partial S}{\partial \rho} \right)_S = 0 = \left( \frac{\partial S}{\partial P} \right)_\rho \left( \frac{\partial P}{\partial \rho} \right)_S + \left( \frac{\partial S}{\partial \rho} \right)_P,
\]
so that
\[
\left( \frac{\partial P}{\partial \rho} \right)_S = -\frac{\left( \partial S / \partial \rho \right)_P}{\left( \partial S / \partial P \right)_\rho}.
\]
At this point one introduces the specific heats
\[
c_V = T \left( \frac{\partial S}{\partial T} \right)_\rho = T \left( \frac{\partial S}{\partial P} \right)_\rho \left( \frac{\partial P}{\partial T} \right)_\rho
\]
and
\[
c_P = T \left( \frac{\partial S}{\partial T} \right)_P = T \left( \frac{\partial S}{\partial \rho} \right)_P \left( \frac{\partial \rho}{\partial T} \right)_P.
\]
Using the ideal gas law,
\[
c_V = P \left( \frac{\partial S}{\partial P} \right)_\rho
\qquad \text{and} \qquad
c_P = -\rho \left( \frac{\partial S}{\partial \rho} \right)_P,
\]
so that
\[
\left( \frac{\partial P}{\partial \rho} \right)_S = \frac{P}{\rho} \frac{c_P}{c_V}.
\]
..................
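A numerical sketch of the final identity (our own code; it uses the standard consequence, not stated explicitly above, that for an ideal gas an isentrope satisfies $P = A\rho^{\gamma}$ with $\gamma = c_P/c_V$ and $A$ a constant fixed by the entropy):

```python
# Check: along the isentrope P = A rho^gamma, the derivative (dP/drho)_S
# equals (P/rho)(cP/cV) with gamma = cP/cV.

gamma = 1.4                  # cP/cV for a diatomic ideal gas (assumed)
A = 2.3                      # arbitrary constant labeling the isentrope

def P_of_rho(rho):
    return A * rho**gamma    # the isentrope P = g(rho)

rho0 = 1.2
h = 1e-6
dP_drho = (P_of_rho(rho0 + h) - P_of_rho(rho0 - h)) / (2.0*h)
predicted = gamma * P_of_rho(rho0) / rho0
print(abs(dP_drho - predicted))
```

The derivative $g'(\rho_0)$ computed this way is the speed of sound squared of Example 1.8.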
Change of variables: An important class of functions are the changes of variables. These have $n = m$ and the determinant of their derivative matrix is non-zero. Let $g$ be a change of variables. Then it can be shown that $g$ is invertible. That is, for every $x$ there is one $y$ with $x = g(y)$; $y$ is called $g^{-1}(x)$ and
\[
D(g^{-1})(x) = \big( Dg(y) \big)^{-1}.
\]
Thus, if we are concerned with some function $f(x)$ for which $(f \circ g)(y)$ is somehow simpler, we can study $f \circ g$ and invert back to $f$ later.

If $m = 1$, that is $f : \mathbb{R}^n \to \mathbb{R}$, then the chain rule gives
\[
D(f \circ g)(y) = Df(x)\, Dg(y) = \begin{pmatrix} \frac{\partial f}{\partial x_1} & \cdots & \frac{\partial f}{\partial x_n} \end{pmatrix} \begin{pmatrix} \frac{\partial g_1}{\partial y_1} & \cdots & \frac{\partial g_1}{\partial y_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_n}{\partial y_1} & \cdots & \frac{\partial g_n}{\partial y_n} \end{pmatrix}
\]
or, taking transposes,
\[
\begin{pmatrix} \frac{\partial}{\partial y_1} \\ \vdots \\ \frac{\partial}{\partial y_n} \end{pmatrix} (f \circ g) = \begin{pmatrix} \frac{\partial g_1}{\partial y_1} & \cdots & \frac{\partial g_n}{\partial y_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_1}{\partial y_n} & \cdots & \frac{\partial g_n}{\partial y_n} \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial x_1} \\ \vdots \\ \frac{\partial}{\partial x_n} \end{pmatrix} f.
\]
Often one sees this formula written in shorthand, setting $x = g(y)$ and writing
\[
\begin{pmatrix} \frac{\partial}{\partial y_1} \\ \vdots \\ \frac{\partial}{\partial y_n} \end{pmatrix} = \begin{pmatrix} \frac{\partial x_1}{\partial y_1} & \cdots & \frac{\partial x_n}{\partial y_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial x_1}{\partial y_n} & \cdots & \frac{\partial x_n}{\partial y_n} \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial x_1} \\ \vdots \\ \frac{\partial}{\partial x_n} \end{pmatrix}.
\]
Further, since the matrix on the right is a function of $y$, one obtains a formula for the change of variables in the gradient operator,
\[
\begin{pmatrix} \frac{\partial}{\partial x_1} \\ \vdots \\ \frac{\partial}{\partial x_n} \end{pmatrix} = \begin{pmatrix} \frac{\partial x_1}{\partial y_1} & \cdots & \frac{\partial x_n}{\partial y_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial x_1}{\partial y_n} & \cdots & \frac{\partial x_n}{\partial y_n} \end{pmatrix}^{-1} \begin{pmatrix} \frac{\partial}{\partial y_1} \\ \vdots \\ \frac{\partial}{\partial y_n} \end{pmatrix}.
\]
Example 3.4: Spherical coordinates in 3 dimensions can be given by
\[
\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} r \sin\theta \cos\phi \\ r \sin\theta \sin\phi \\ r \cos\theta \end{pmatrix}.
\]

(figure: spherical coordinates, showing $r$, the polar angle $\theta$ measured from the $z$-axis, and the azimuthal angle $\phi$)

The matrix of partial derivatives for this transformation is
\[
\begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} & \frac{\partial x}{\partial \phi} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} & \frac{\partial y}{\partial \phi} \\ \frac{\partial z}{\partial r} & \frac{\partial z}{\partial \theta} & \frac{\partial z}{\partial \phi} \end{pmatrix} = \begin{pmatrix} \sin\theta \cos\phi & r \cos\theta \cos\phi & -r \sin\theta \sin\phi \\ \sin\theta \sin\phi & r \cos\theta \sin\phi & r \sin\theta \cos\phi \\ \cos\theta & -r \sin\theta & 0 \end{pmatrix}
\]
and has determinant $r^2 \sin\theta$. Note that this change of variables is singular at $r = 0$ and $\sin\theta = 0$. The singularity at $\sin\theta = 0$ is fairly benign; however, this change of variables does introduce complications at $r = 0$.

To express the Cartesian gradient in spherical coordinates use the formula above:
\[
\begin{pmatrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \\ \frac{\partial}{\partial z} \end{pmatrix} = \begin{pmatrix} \sin\theta \cos\phi & \sin\theta \sin\phi & \cos\theta \\ r \cos\theta \cos\phi & r \cos\theta \sin\phi & -r \sin\theta \\ -r \sin\theta \sin\phi & r \sin\theta \cos\phi & 0 \end{pmatrix}^{-1} \begin{pmatrix} \frac{\partial}{\partial r} \\ \frac{\partial}{\partial \theta} \\ \frac{\partial}{\partial \phi} \end{pmatrix} = \begin{pmatrix} \sin\theta \cos\phi & \frac{1}{r} \cos\theta \cos\phi & -\frac{\sin\phi}{r \sin\theta} \\ \sin\theta \sin\phi & \frac{1}{r} \cos\theta \sin\phi & \frac{\cos\phi}{r \sin\theta} \\ \cos\theta & -\frac{1}{r} \sin\theta & 0 \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial r} \\ \frac{\partial}{\partial \theta} \\ \frac{\partial}{\partial \phi} \end{pmatrix}
\]
or
\[
\frac{\partial}{\partial x} = \sin\theta \cos\phi\, \frac{\partial}{\partial r} + \frac{1}{r} \cos\theta \cos\phi\, \frac{\partial}{\partial \theta} - \frac{\sin\phi}{r \sin\theta}\, \frac{\partial}{\partial \phi},
\]
\[
\frac{\partial}{\partial y} = \sin\theta \sin\phi\, \frac{\partial}{\partial r} + \frac{1}{r} \cos\theta \sin\phi\, \frac{\partial}{\partial \theta} + \frac{\cos\phi}{r \sin\theta}\, \frac{\partial}{\partial \phi}
\]
and
\[
\frac{\partial}{\partial z} = \cos\theta\, \frac{\partial}{\partial r} - \frac{1}{r} \sin\theta\, \frac{\partial}{\partial \theta}.
\]
To compute the Laplace operator in spherical coordinates, substitute the above expressions into ∆ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z². After a somewhat long calculation, making sure to use the product rule for differentiation ($\frac{\partial}{\partial x} f(x)\frac{\partial}{\partial x} = \frac{\partial f}{\partial x}\frac{\partial}{\partial x} + f(x)\frac{\partial^2}{\partial x^2}$), one finds
$$\Delta = \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}. \tag{3.1}$$
Two other important changes of variables are polar coordinates in R²,
$$\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}r\cos\theta\\ r\sin\theta\end{pmatrix},$$
and cylindrical coordinates in R³,
$$\begin{pmatrix}x\\ y\\ z\end{pmatrix} = \begin{pmatrix}r\cos\theta\\ r\sin\theta\\ z\end{pmatrix}.$$
..................
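The spherical-coordinate expressions above are easy to sanity check numerically: the formula for ∂/∂x built from r, θ, φ derivatives should agree with a direct finite difference in Cartesian coordinates. A minimal sketch, with the test function f chosen arbitrarily:

```python
import math

def f(x, y, z):
    # arbitrary smooth test function
    return x*y + math.sin(z) + x*z**2

def spherical_to_cartesian(r, th, ph):
    return (r*math.sin(th)*math.cos(ph),
            r*math.sin(th)*math.sin(ph),
            r*math.cos(th))

def df_dx_cartesian(x, y, z, h=1e-6):
    # centered finite difference in x
    return (f(x + h, y, z) - f(x - h, y, z)) / (2*h)

def df_dx_spherical(r, th, ph, h=1e-6):
    # sin(th)cos(ph) d/dr + (1/r)cos(th)cos(ph) d/dth - sin(ph)/(r sin(th)) d/dph
    def F(r, th, ph):
        return f(*spherical_to_cartesian(r, th, ph))
    dr  = (F(r + h, th, ph) - F(r - h, th, ph)) / (2*h)
    dth = (F(r, th + h, ph) - F(r, th - h, ph)) / (2*h)
    dph = (F(r, th, ph + h) - F(r, th, ph - h)) / (2*h)
    return (math.sin(th)*math.cos(ph)*dr
            + math.cos(th)*math.cos(ph)*dth / r
            - math.sin(ph)*dph / (r*math.sin(th)))

r, th, ph = 1.3, 0.7, 2.1   # arbitrary point away from the singular set
x, y, z = spherical_to_cartesian(r, th, ph)
print(df_dx_cartesian(x, y, z), df_dx_spherical(r, th, ph))   # the two agree
```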
Taylor expansions: Taylor's formula is one of the more useful tools around. In one dimension it reads
$$f(x) = f(a) + f'(a)(x-a) + \frac{1}{2}f''(a)(x-a)^2 + \dots + \frac{1}{n!}f^{(n)}(a)(x-a)^n + \dots.$$
The question of how many terms to keep in this sum depends on the application, as does the choice of a. If the infinite series converges then f is said to be analytic at a. If f is not analytic at a the series might still be useful. It often happens that, even though the entire series diverges, the first few terms in the series are a reasonable approximation to f near a.

Example 3.5: Consider sin x. It turns out that sin is analytic for all x. Even though sin x is analytic about x = 0, so that the Taylor series for sin x about x = 0,
$$\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\,x^{2n+1},$$
converges, if x is not very close to 0 one needs to keep a prohibitively large number of terms to get a good approximation. In order to uniformly approximate sin x for some range of x using relatively low order polynomials one needs to piece together different expansions about different points. Consider x ∈ [0, π/2]. About x = π/2 it is
$$\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}\left(x-\frac{\pi}{2}\right)^{2n}.$$
To get a good uniform approximation over this interval it suffices to use these two expansions up to order 5: between 0 and π/4 use the expansion about 0, and between π/4 and π/2 use the expansion about π/2.
..................

A convenient formula for the error in Taylor series is available. Write
$$f(x) = \sum_{n=0}^{N} \frac{1}{n!} f^{(n)}(a)(x-a)^n + E_N(f,a,x).$$
The error E_N is given by
$$E_N(f,a,x) = \frac{1}{(N+1)!}\,f^{(N+1)}(c)\,(x-a)^{N+1}$$
where c is unknown, but is between a and x. It is often possible to bound the derivatives of a function over some interval, giving a bound on the error. For example, for all n,
$$\left|\frac{d^n \sin x}{dx^n}\right| \le 1$$
so that approximating sin x about x = 0 by an Nth order Taylor polynomial incurs an error no larger than
$$\frac{1}{(N+1)!}\,|x|^{N+1}.$$
To obtain a Taylor expansion for functions of several variables one needs only apply the one variable formula several times. In, say, 3 dimensions, first do a Taylor expansion in x. The coefficients depend on y and z. Expand them in y, and then again in z. One obtains
$$f(x,y,z) = \sum_{j+k+n=0}^{N} \frac{1}{j!\,k!\,n!}\,\frac{\partial^{\,j+k+n} f}{\partial x^j\,\partial y^k\,\partial z^n}\bigg|_{x=a,\,y=b,\,z=c}\,(x-a)^j (y-b)^k (z-c)^n + \dots.$$
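The piecewise approximation of sin described in Example 3.5 can be tested directly. The sketch below uses fifth order expansions about 0 and π/2, switching at π/4, and compares the worst error on [0, π/2] with the Lagrange bound |x − a|⁶/6! discussed above:

```python
import math

def sin_about_0(x, N=5):
    # Taylor polynomial of sin about 0, terms up to degree N
    return sum((-1)**n * x**(2*n+1) / math.factorial(2*n+1)
               for n in range(N) if 2*n + 1 <= N)

def sin_about_half_pi(x, N=5):
    # expansion about pi/2, terms up to degree N
    u = x - math.pi / 2
    return sum((-1)**n * u**(2*n) / math.factorial(2*n)
               for n in range(N) if 2*n <= N)

def approx_sin(x):
    # switch expansions at pi/4
    return sin_about_0(x) if x < math.pi / 4 else sin_about_half_pi(x)

grid = [k * (math.pi / 2) / 1000 for k in range(1001)]
worst = max(abs(approx_sin(x) - math.sin(x)) for x in grid)
# Lagrange bound at the switch point: (pi/4)^6 / 6!
bound = (math.pi / 4)**6 / math.factorial(6)
print(worst, bound)   # the worst error stays below the bound
```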
This formula gives us a way to look for local max and min for functions of several variables. Again, at a local max or min the graph of the function must flatten out (think of the top of a hill: at the very top you're neither climbing up nor down). Thus one must have ∇f = 0 (a max or min in any direction). To second order the Taylor expansion becomes (setting a = b = c = 0)
$$f(x,y,z) = f(0) + \nabla f(0)\cdot\begin{pmatrix}x\\ y\\ z\end{pmatrix} + \frac12\begin{pmatrix}x\\ y\\ z\end{pmatrix}\cdot\begin{pmatrix}\frac{\partial^2 f}{\partial x^2}\big|_0 & \frac{\partial^2 f}{\partial x\partial y}\big|_0 & \frac{\partial^2 f}{\partial x\partial z}\big|_0\\[2pt] \frac{\partial^2 f}{\partial x\partial y}\big|_0 & \frac{\partial^2 f}{\partial y^2}\big|_0 & \frac{\partial^2 f}{\partial y\partial z}\big|_0\\[2pt] \frac{\partial^2 f}{\partial x\partial z}\big|_0 & \frac{\partial^2 f}{\partial y\partial z}\big|_0 & \frac{\partial^2 f}{\partial z^2}\big|_0\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix} + \dots.$$
The second derivative matrix
$$\begin{pmatrix}\frac{\partial^2 f}{\partial x^2}\big|_0 & \frac{\partial^2 f}{\partial x\partial y}\big|_0 & \frac{\partial^2 f}{\partial x\partial z}\big|_0\\[2pt] \frac{\partial^2 f}{\partial x\partial y}\big|_0 & \frac{\partial^2 f}{\partial y^2}\big|_0 & \frac{\partial^2 f}{\partial y\partial z}\big|_0\\[2pt] \frac{\partial^2 f}{\partial x\partial z}\big|_0 & \frac{\partial^2 f}{\partial y\partial z}\big|_0 & \frac{\partial^2 f}{\partial z^2}\big|_0\end{pmatrix}$$
is symmetric and thus has three orthogonal eigenvectors with real eigenvalues. Let these eigenvalues be λ₁, λ₂ and λ₃ and let the corresponding eigenvectors be v⁽¹⁾, v⁽²⁾ and v⁽³⁾. Assume they are normalized. Then one can write
$$\begin{pmatrix}x\\ y\\ z\end{pmatrix} = t_1 v^{(1)} + t_2 v^{(2)} + t_3 v^{(3)}$$
where
$$t_j = v^{(j)}\cdot\begin{pmatrix}x\\ y\\ z\end{pmatrix}.$$
Thus, if ∇f(0) = 0, then
$$f(x,y,z) = f(0) + \frac12\left(\lambda_1 t_1^2 + \lambda_2 t_2^2 + \lambda_3 t_3^2\right) + \dots.$$
If all the λⱼ > 0 one has a min. If all the λⱼ < 0 one has a max. If some λⱼ < 0 while others are > 0 then one has a saddle, neither max nor min (like a mountain pass between two peaks). If any of the λⱼ = 0 then it is not clear what is happening and one must do more.

Integration: Integration of functions of one variable is relatively straightforward. The integral of a function of one variable is defined geometrically to be the area between the graph of the function and the x-axis, with a positive contribution from the part above the x-axis and a negative contribution from the part below. Analytically, one has the Fundamental Theorem of Calculus, which says that derivatives and integrals undo each other,
$$\int_a^x f'(t)\,dt = f(x) - f(a)$$
or
$$\frac{d}{dx}\int_a^x g(t)\,dt = g(x).$$
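Returning to the second-derivative test above, the eigenvalue criterion can be sketched numerically. Below, the Hessian of a hypothetical function with a critical point at the origin is estimated by central differences and its eigenvalues are computed in closed form (for a symmetric 2 × 2 matrix); one positive and one negative eigenvalue signal a saddle:

```python
import math

def f(x, y):
    # hypothetical function with a critical point at the origin
    return x**2 + 3*x*y + y**2

def second_partial(g, x, y, i, j, h=1e-4):
    # central-difference estimate of a second partial derivative at (x, y)
    def at(s, t):
        return g(x + s*h, y + t*h)
    if i == j == 0:
        return (at(1, 0) - 2*at(0, 0) + at(-1, 0)) / h**2
    if i == j == 1:
        return (at(0, 1) - 2*at(0, 0) + at(0, -1)) / h**2
    return (at(1, 1) - at(1, -1) - at(-1, 1) + at(-1, -1)) / (4*h**2)

a = second_partial(f, 0, 0, 0, 0)   # f_xx
b = second_partial(f, 0, 0, 0, 1)   # f_xy
c = second_partial(f, 0, 0, 1, 1)   # f_yy
# eigenvalues of the symmetric matrix [[a, b], [b, c]]
mean = (a + c) / 2
disc = math.sqrt(((a - c) / 2)**2 + b**2)
lam1, lam2 = mean + disc, mean - disc
print(lam1, lam2)   # opposite signs, so the origin is a saddle
```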
Thus, integration of functions of one variable is reduced to guessing at anti-derivatives. Recall that inverse to the chain rule one has the technique of substitution (one dimensional change of variables),
$$\int_a^b f(x)\,dx = \int_{g^{-1}(a)}^{g^{-1}(b)} f\big(g(y)\big)\,g'(y)\,dy$$
and inverse to the product rule one has integration by parts,
$$\int_a^b f(x)\,g'(x)\,dx = f(x)g(x)\Big|_a^b - \int_a^b f'(x)\,g(x)\,dx.$$
Numerically, efficient algorithms can be devised to estimate areas under curves.

Example 3.6: Integration by parts is sometimes useful in generating asymptotic expansions of integrals. A classic example is the Riemann-Lebesgue lemma, which is the J = 1 version of the following. Let f : R → R be J times differentiable with the derivatives |f⁽ⁿ⁾| integrable over R for n = 0, 1, 2, . . . , J. Then necessarily f⁽ⁿ⁾(±∞) = 0. Consider the large ω behavior of the Fourier transform of f,
$$\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{i\omega t}\,dt.$$
Noting that
$$e^{i\omega t} = \frac{1}{i\omega}\,\frac{d e^{i\omega t}}{dt}$$
and integrating by parts J times one finds that
$$\hat f(\omega) = \frac{(-1)^J}{(i\omega)^J}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f^{(J)}(t)\,e^{i\omega t}\,dt.$$
It follows that
$$|\hat f(\omega)| \le \frac{1}{\omega^J}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \big|f^{(J)}(t)\big|\,dt$$
so that the Fourier transform goes to zero as ω → ∞ at least as fast as 1/ω^J. If J = ∞ then the Fourier transform is said to go to zero "exponentially fast". If J is finite and f does not have an integrable (J + 1)st derivative then its Fourier transform decreases precisely as 1/ω^J as ω → ∞.

Another example is provided by the large x asymptotic expansion of the integral (related to the Error function)
$$\int_x^{\infty} e^{-t^2}\,dt.$$
Using
$$e^{-t^2} = -\frac{1}{2t}\,\frac{d e^{-t^2}}{dt}$$
and integrating by parts repeatedly one finds
$$\int_x^{\infty} e^{-t^2}\,dt = e^{-x^2}\left(\frac{1}{2x} - \frac{1}{4x^3} + \frac{3}{8x^5} + \dots\right).$$
..................
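The asymptotic series for the Error-function integral can be checked against an exact value, since ∫ₓ^∞ e^{−t²} dt = (√π/2) erfc(x) and math.erfc is in the Python standard library. A sketch:

```python
import math

def tail(x):
    # exact value of the integral of exp(-t^2) from x to infinity
    return math.sqrt(math.pi) / 2 * math.erfc(x)

def asymptotic(x, terms=3):
    # e^{-x^2} (1/(2x) - 1/(4x^3) + 3/(8x^5) - ...),
    # successive terms coming from repeated integration by parts
    total, term = 0.0, 1.0 / (2 * x)
    for n in range(terms):
        total += term
        term *= -(2 * n + 1) / (2 * x**2)
    return math.exp(-x**2) * total

for x in (2.0, 3.0, 4.0):
    print(x, tail(x), asymptotic(x))   # agreement improves as x grows
```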
Integration of functions of several variables is a problem. For one thing, the possible regions one can integrate over increase in complexity as the number of dimensions increases. Further, the only analytical tool available is to transform an integral over several variables into several integrals over one variable, and this can be done only for relatively simple functions integrated over extremely simple domains. In addition, while numerical methods exist to estimate integrals over several variables they are by no means as efficient as the one variable algorithms. To attempt to simplify integrals over several variables there are essentially only two tricks available. The first is to use a change of variables, if one can be found, which either simplifies the function which is being integrated or the domain over which one is integrating. If g is an n-dimensional change of variables then
$$\int_D f(x)\,d^n x = \int_{g^{-1}(D)} f\big(g(y)\big)\,\big|\det Dg(y)\big|\,d^n y.$$
Here g −1 (D) is the set of points that gets mapped to D by g. The factor det Dg(y) is
known as the Jacobian determinant for the change of variables. Note the absolute value. (Why is there no absolute value in the 1-dimensional case?) Example 3.7:
Let D be the 3-dimensional ball of radius R centered at the origin. Let q be a vector in R³ and consider
$$\int_D e^{q\cdot x}\,dx\,dy\,dz.$$
First, choose the coordinate system so that q is parallel to the z axis and let
$$q = \begin{pmatrix}0\\ 0\\ q\end{pmatrix}.$$
Thus q · x = qz. Then change variables to spherical coordinates so that D becomes simple. The points that get mapped to D in spherical coordinates are r ∈ [0, R], θ ∈ [0, π] and φ ∈ [0, 2π). Thus
$$\int_D e^{q\cdot x}\,dx\,dy\,dz = \int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{R} e^{qr\cos\theta}\,r^2\sin\theta\,dr\,d\theta\,d\phi.$$
The integral over φ can be done immediately. To do the integral over θ make a one dimensional change of variables η = cos θ. Then dη = −sin θ dθ and
$$\int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{R} e^{qr\cos\theta}\,r^2\sin\theta\,dr\,d\theta\,d\phi = 2\pi\int_{-1}^{1}\!\!\int_0^{R} e^{q\eta r}\,r^2\,dr\,d\eta$$
$$= \frac{2\pi}{q}\int_0^{R}\big(e^{qr} - e^{-qr}\big)\,r\,dr$$
$$= 2\pi\left[\Big(\frac{R}{q^2} - \frac{1}{q^3}\Big)e^{qR} + \Big(\frac{R}{q^2} + \frac{1}{q^3}\Big)e^{-qR}\right].$$
..................
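The closed form obtained in Example 3.7 can be verified numerically. After the φ and η integrals, the computation reduces to the radial integral (4π/q)∫₀^R r sinh(qr) dr, which a composite Simpson rule handles easily; the values of q and R below are arbitrary:

```python
import math

def closed_form(q, R):
    # 2*pi*[(R/q^2 - 1/q^3) e^{qR} + (R/q^2 + 1/q^3) e^{-qR}]
    return 2 * math.pi * ((R/q**2 - 1/q**3) * math.exp(q*R)
                          + (R/q**2 + 1/q**3) * math.exp(-q*R))

def simpson(g, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1))
    s += 2 * sum(g(a + 2*k*h) for k in range(1, n//2))
    return s * h / 3

q, R = 1.7, 2.0   # arbitrary test values
numeric = (4 * math.pi / q) * simpson(lambda r: r * math.sinh(q*r), 0.0, R)
print(numeric, closed_form(q, R))   # the two agree
```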
The second tool is the n-dimensional generalization of the fundamental theorem of calculus called Stokes' theorem. In its general form it is deceptively complex and requires a lot of mathematical machinery just to write down. This complexity arises both from the variety of surfaces over which one can integrate in n dimensions as well as from the variety of ways one can take a first derivative. For our purposes it is sufficient to restrict our attention to two simple low dimensional cases known as Stokes' and Gauss' theorems (classically, Stokes' theorem was the low dimensional case; when the generalization was found it was also called Stokes' theorem). For functions f : R² → R² there are two types of first derivatives of interest. The divergence of f is a scalar given by
$$\nabla\cdot f = \frac{\partial f_1}{\partial x} + \frac{\partial f_2}{\partial y}.$$
Similarly the curl is also a scalar, given by
$$\nabla\times f = \frac{\partial f_2}{\partial x} - \frac{\partial f_1}{\partial y}.$$
For functions f : R³ → R³ there are also two types of first derivatives of interest. The divergence of f is a scalar given by
$$\nabla\cdot f = \frac{\partial f_1}{\partial x} + \frac{\partial f_2}{\partial y} + \frac{\partial f_3}{\partial z}.$$
However the curl is a vector, given by
$$\nabla\times f = \begin{pmatrix}\dfrac{\partial f_3}{\partial y} - \dfrac{\partial f_2}{\partial z}\\[2pt] \dfrac{\partial f_1}{\partial z} - \dfrac{\partial f_3}{\partial x}\\[2pt] \dfrac{\partial f_2}{\partial x} - \dfrac{\partial f_1}{\partial y}\end{pmatrix}.$$
Both Stokes' and Gauss' theorems deal with functions of two or three variables. Gauss' theorem for functions of three variables is as follows: let V be a volume in R³, denote by ∂V the boundary of V and by n the outward pointing unit normal vector field to ∂V. Then if u : R³ → R³ is a vector field
$$\int_V \nabla\cdot u\,d^3x = \int_{\partial V} n\cdot u\,d\sigma.$$
Here dσ is the surface element on ∂V. A general discussion of surface elements is beyond the scope of this course. In general it is required to have a parameterization of the surface, x(v, w), and a change of variables in a neighborhood of the surface from x, y, z to v, w, t where t is the distance along the normal vector from the surface. Finding a surface element is simplified if a change of variables can be found so that the surface in question is obtained by holding one of the new variables constant. For example, consider a sphere of radius R. In spherical coordinates this is a surface of constant r, r = R. The volume element in spherical coordinates is r² sin θ dθ dφ dr. Thus, the surface element on a sphere of radius R can be taken to be R² sin θ dθ dφ. Normal vectors are easier. Consider a surface given by an equation f(x) = 0. Recall that ∇f is a vector pointing in the direction in which f changes most rapidly. This implies that ∇f is normal to the surface.
Example 3.8: Consider a fluid. As before let ρ(x) and v(x) be the density and velocity at point x. Then ρv is the mass flux in the system. It gives the mass per unit area and time transported in the direction of v. Given a volume V the change in mass in V is given by
$$\frac{\partial}{\partial t}\int_V \rho\,d^3x = -\int_{\partial V} \rho v\cdot n\,d\sigma.$$
That is, the rate of change of the mass in V is minus the rate at which mass flows out through the boundary of V. By Gauss' theorem
$$\int_{\partial V} \rho v\cdot n\,d\sigma = \int_V \nabla\cdot(\rho v)\,d^3x.$$
Since this must be true for any volume V one can conclude that the equation of continuity,
$$\frac{\partial\rho}{\partial t} = -\nabla\cdot(\rho v),$$
must be true.
..................
Stokes' theorem relates the integral of the curl of a function over a surface to the integral of the function over the boundary of the surface. Let S be a surface in R³. Then
$$\int_S (\nabla\times u)\cdot n\,d\sigma = \int_{\partial S} u\cdot dl.$$
Here dl is the line element over the boundary ∂S of S. Line elements are easier to handle than surface elements. If x(t) is a parameterization of a curve then the line element along this curve is dl = x′(t) dt. An identity which follows from Gauss' theorem and which is used often in acoustics is Green's theorem. It is a 3-dimensional version of integration by parts. Let f and g be functions on R³. Green's theorem states that
$$\int_V \big(f\Delta g - g\Delta f\big)\,d^3x = \int_{\partial V} \big(f\nabla g - g\nabla f\big)\cdot n\,d\sigma.$$
It follows from Gauss' theorem on noting that f∆g − g∆f = ∇·(f∇g − g∇f).

Dirac delta function: The Dirac delta function is not a function at all, but is useful notation for the linear transformation which takes a function into its value at a given point, say w. The rule is
$$\int_V f(x)\,\delta(x-w)\,d^n x = \begin{cases} f(w) & \text{if } w\in V\\ 0 & \text{if } w\notin V.\end{cases}$$
Note that this is a linear operation. An integral with a delta function in it is easy: it's not an integral at all.

Example 3.9: Let f be continuous at y and let g_ǫ be any family of functions satisfying
$$\lim_{\epsilon\downarrow 0} g_\epsilon(x) = 0 \quad\text{if } x\ne 0$$
and
$$\int_{-\infty}^{\infty} g_\epsilon(x)\,dx = 1$$
for all ǫ > 0. Then
$$\lim_{\epsilon\downarrow 0}\int_{-\infty}^{\infty} f(x)\,g_\epsilon(x-y)\,dx = f(y).$$
In this sense
$$\lim_{\epsilon\downarrow 0} g_\epsilon(x-y) = \delta(x-y).$$
Examples of such families are
$$g_\epsilon(x) = \begin{cases} 0 & \text{if } |x| > \epsilon\\[2pt] \dfrac{1}{2\epsilon} & \text{if } |x| < \epsilon\end{cases}$$
or
$$g_\epsilon(x) = \frac{1}{\sqrt{\pi\epsilon}}\,e^{-x^2/\epsilon}.$$
Example 3.10: If f is continuous at y note that, ignoring the singularity at x = y and using formal integration by parts,
$$\int_{-\infty}^{\infty} f(x)\,\frac{d^2}{dx^2}|x-y|\,dx = \lim_{\epsilon\downarrow 0}\int_{y-\epsilon}^{y+\epsilon} f(x)\,\frac{d^2}{dx^2}|x-y|\,dx = f(y)\,\lim_{\epsilon\downarrow 0}\frac{d}{dx}|x-y|\,\bigg|_{x=y-\epsilon}^{x=y+\epsilon} = 2f(y).$$
In this sense
$$\frac{d^2}{dx^2}|x-y| = 2\,\delta(x-y).$$
..................
There is a subtlety under change of variables. Let g be a change of variables. Then, taking the integral notation above seriously (even though δ(x) is really nonsense) one has, assuming that 0 ∈ V,
$$\int_V f(x)\,\delta(x)\,d^n x = \int_{g^{-1}(V)} f\big(g(y)\big)\,\delta\big(g(y)\big)\,\big|\det Dg(y)\big|\,d^n y = f(0).$$
Thus we find that in order for this notation to be self-consistent we must have
$$\delta\big(g(y)\big)\,\big|\det Dg(y)\big| = \delta\big(y - g^{-1}(0)\big).$$
One often sees this formula expressed by saying that under the change of variables g, δ(x) becomes
$$\frac{1}{\big|\det Dg(y)\big|}\,\delta\big(y - g^{-1}(0)\big)$$
or even
$$\frac{1}{\big|\det Dg\big(g^{-1}(0)\big)\big|}\,\delta\big(y - g^{-1}(0)\big).$$
Unfortunately this doesn't work if g is singular where it is 0, that is, if det Dg(g^{-1}(0)) = 0.
This is the case in spherical coordinates, where det Dg(r, θ, φ) = r² sin θ and g^{-1}(0) means r = 0. In spherical coordinates there are many ways to see what's going on, but the simplest is to guess. One wants
$$\int f(x)\,\delta(x)\,d^3 x = \int f(r\sin\theta\cos\phi,\, r\sin\theta\sin\phi,\, r\cos\theta)\,r^2\sin\theta\,\delta(x)\,dr\,d\theta\,d\phi = f(0).$$
In δ(x) one expects a factor of 1/r², by analogy with the regular case, and clearly a factor of δ(r) is needed, but what does one do with θ and φ? The problem is that, at r = 0, θ and φ make no difference. However, formally putting r = 0 in f and integrating over θ and φ gives the correct answer up to a factor of 4π. Thus, choosing
$$\delta(x) = \frac{1}{4\pi r^2}\,\delta(r)$$
seems to be a good guess. It turns out to be right.

Example 3.11: Here we show that in three dimensions
$$\Delta\,\frac{1}{\|x-y\|} = -4\pi\,\delta(x-y)$$
in the sense that if f is continuous at y then
$$\int f(x)\,\Delta\,\frac{1}{\|x-y\|}\,d^3x = -4\pi f(y).$$
First note that the shift of variables x ↦ x − y is a change of variables whose partial derivative matrix is the identity matrix
$$I = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}.$$
Thus, under a shift of variables, partial derivatives don't change. It follows that the formula is true in general if it's true for y = 0. The representation of ∆ in spherical coordinates developed in Example 3.4 is valid everywhere except at r = 0. For r ≠ 0 it shows that
$$\Delta\,\frac{1}{\|x\|} = 0.$$
Multi-Variable Calculus
Thus only r = 0 can contribute to this integral. To see what’s happening at r = 0 integrate over a small ball of radius ǫ centered at x = 0, B, and then let ǫ ↓ 0. Since f is continuous at 0 it approaches f (0) and can be replaced by f (0) in the integral. One has from Gauss’ theorem
Z
1 ∆ d3 x = kxk B
Z
∂B
=−
Z
n· ∇ 2π
0
Z
0
= −4π. Here, that n·∇= on the surface of a sphere, is used. ..................
41
∂ ∂r
π
1 dσ kxk sin θ dθ dφ
Complex Analysis
Review of Complex Analysis

The set of complex numbers, C, is a 2-dimensional vector space over R with, in addition to the vector space structure, a multiplication defined. The standard basis for C is the set {1, i} where 1 is the real number and i is anything whose square is −1, i² = −1. Any complex number can be written as a linear combination x + iy where x and y are real. Given z = x + iy, x is called the real part of z, x = Re z, and y the imaginary part, y = Im z. The complex conjugate of z is z̄ = x − iy.
[Figure: the complex number z = x + iy plotted in the plane, with modulus |z| and angle θ.]
Taking seriously the representation of complex numbers as 2-dimensional vectors over R and representing them graphically using the real part as the x component and the imaginary part as the y component, one sees that |z|, defined by
$$|z|^2 = \bar z\,z,$$
is the Euclidean norm √(x² + y²). In polar coordinates z = x + iy becomes
$$z = |z|(\cos\theta + i\sin\theta)$$
where θ is the angle between the x-axis and the vector (x, y). It is a fact, known as DeMoivre's Theorem, that
$$e^{i\theta} = \cos\theta + i\sin\theta. \tag{4.1}$$
This can be seen as a definition of the complex exponential once one has checked that it has all the desired properties. There are many ways to do this. The easiest is to note that
$$\frac{d}{d\theta}\big(\cos\theta + i\sin\theta\big) = i\big(\cos\theta + i\sin\theta\big),$$
as one expects from e^{iθ}, and that setting θ = 0 one obtains 1, also as expected.

Example 4.1: The set of 2 × 2 real matrices of the form
$$\begin{pmatrix}x & -y\\ y & x\end{pmatrix},$$
for real x and y, is algebraically identical to the set of complex numbers. This is a 2-dimensional vector space over R with a basis given by
$$\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}, \qquad \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}.$$
Note that
$$\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}^2 = -\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$$
so that
$$\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$$
acts like i. Many interesting conclusions can be drawn from this representation. For example, note that multiplying a complex number z by e^{iθ} rotates the vector representing z through an angle θ. It follows that
$$e^{i\theta} = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}$$
is the matrix which rotates 2-dimensional vectors through an angle θ.
..................
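The correspondence in Example 4.1 is easy to verify computationally: sending x + iy to the matrix with rows (x, −y) and (y, x) turns complex multiplication into matrix multiplication. A minimal sketch:

```python
def to_matrix(z):
    # represent x + iy as [[x, -y], [y, x]]
    return [[z.real, -z.imag], [z.imag, z.real]]

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, -1 + 0.5j
lhs = to_matrix(z * w)                       # multiply, then represent
rhs = matmul(to_matrix(z), to_matrix(w))     # represent, then multiply
print(lhs)
print(rhs)   # the two agree entry by entry
```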
Functions f : C → C, given by f(x + iy) = u(x, y) + iv(x, y), are really functions from R² → R² given by
$$F(x,y) = \begin{pmatrix}u(x,y)\\ v(x,y)\end{pmatrix}.$$
This leads to some subtleties when one tries to speak of differentiation. On the one hand, one would like the derivative of f to be a complex valued function f′. On the other, the derivative of F is a 2 × 2 matrix given by the partial derivative matrix. The condition that these two notions be identical leads to the notion of an analytic function as follows. Note that multiplication by a complex number a + ib is linear and induces a linear transformation of R². Explicitly, (a + ib)(x + iy) = ax − by + i(ay + bx) is the same as
$$\begin{pmatrix}ax - by\\ ay + bx\end{pmatrix} = \begin{pmatrix}a & -b\\ b & a\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}.$$
A function f = u + iv : C → C is said to be analytic if the derivative matrix of the associated function from R² → R² corresponds, as a linear transformation, to multiplication by a complex number. Explicitly, if
$$DF = \begin{pmatrix}\dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y}\\[2pt] \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y}\end{pmatrix} = \begin{pmatrix}a & -b\\ b & a\end{pmatrix}$$
for some numbers a and b then f = u + iv is analytic. The Cauchy-Riemann Equations for an analytic function follow immediately:
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} = a$$
and
$$-\frac{\partial u}{\partial y} = \frac{\partial v}{\partial x} = b.$$
The derivative of f is then defined to be f′ = a + ib. Analyticity (or complex differentiability) turns out to be a very strong condition, much more restrictive than ordinary differentiability. Analytic functions are extremely regular in their behavior. It turns out that if a function f is analytic at some point, say w ∈ C, then f is infinitely differentiable at w and the Taylor series for f about w converges (this is often taken as an alternative definition of analyticity). In fact the radius of convergence of the Taylor expansion is the distance from w to the closest point of non-analyticity. For example, the expansion
$$\frac{1}{1+z} = \sum_{n=0}^{\infty} (-1)^n z^n$$
converges for all z with |z| < 1.
Example 4.2: A function f is analytic if it is complex differentiable. Checking for complex differentiability amounts to checking that the Cauchy-Riemann equations hold. This is often straightforward, although sometimes checking that the Taylor series converges is easier. That
$$e^{x+iy} = e^x\big(\cos y + i\sin y\big)$$
is analytic everywhere follows easily from the Cauchy-Riemann equations. That any polynomial p(z) = a₀ + a₁z + a₂z² + . . . + aₙzⁿ is analytic everywhere follows since p(z) is its own Taylor series and, since it is a finite sum, converges everywhere. If j < 0 the function z^j is analytic everywhere except at z = 0, where it's not differentiable, and if j > 0 but is not an integer then z^j is also analytic everywhere except at z = 0, although if j > 1 the reason is subtle (see below). The Cauchy-Riemann equations show that the function f(z) = z̄, complex conjugation, is not analytic anywhere. Neither is any polynomial in z̄.
..................
Complex integration is also more involved than real integration since it is an essentially 2-dimensional concept and necessarily involves integration over curves. Let Γ be some curve in C. In order to integrate a function f over Γ one needs a parameterization of Γ, that is a function w : [a, b] → C whose image is Γ. Then
$$\int_\Gamma f(z)\,dz = \int_a^b f\big(w(t)\big)\,w'(t)\,dt.$$
Clearly, given a geometric curve Γ there are many ways to parameterize it. For example, the circle of radius r centered at 0 can be parameterized by w(t) = re^{it} for t ∈ [0, 2π], or even by w(t) = rt^i for t ∈ [1, e^{2π}], while its upper half can be parameterized by w(t) = t + i√(r² − t²) for t ∈ [−r, r]. However, one can show that the integral over Γ is independent of what function w(t) is used to parameterize it. Note as well that there are no subtleties involved with w′ since w is a function (or in fact two real valued functions) of one variable.
A great aid in integrating analytic functions is Cauchy's Theorem, which is a direct application of Gauss' theorem to the real and imaginary parts of f w′. Consider a region S. Then Cauchy's theorem can be stated as
$$\int_{\partial S} f(z)\,dz = 0 \tag{4.2}$$
if f is analytic in S. The reason is simple. Let f = u + iv and let w(t) = x(t) + iy(t) be a parameterization of ∂S. Then f w′ = ux′ − vy′ + i(uy′ + vx′) so that
$$\int_{\partial S} f(z)\,dz = \int \begin{pmatrix}u\big(x(t),y(t)\big)\\ -v\big(x(t),y(t)\big)\end{pmatrix}\cdot\begin{pmatrix}x'(t)\\ y'(t)\end{pmatrix} dt + i\int \begin{pmatrix}v\big(x(t),y(t)\big)\\ u\big(x(t),y(t)\big)\end{pmatrix}\cdot\begin{pmatrix}x'(t)\\ y'(t)\end{pmatrix} dt = \int_{\partial S}\begin{pmatrix}u\\ -v\end{pmatrix}\cdot dl + i\int_{\partial S}\begin{pmatrix}v\\ u\end{pmatrix}\cdot dl$$
where
$$dl = \begin{pmatrix}x'(t)\\ y'(t)\end{pmatrix} dt$$
is a line element along ∂S. Gauss' law implies that
$$\int_{\partial S}\begin{pmatrix}u\\ -v\end{pmatrix}\cdot dl = \int_S \nabla\cdot\begin{pmatrix}u\\ -v\end{pmatrix} d^2x = \int_S\left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right)d^2x$$
and that
$$\int_{\partial S}\begin{pmatrix}v\\ u\end{pmatrix}\cdot dl = \int_S \nabla\cdot\begin{pmatrix}v\\ u\end{pmatrix} d^2x = \int_S\left(\frac{\partial v}{\partial x} + \frac{\partial u}{\partial y}\right)d^2x.$$
However, if f is analytic then u and v satisfy the Cauchy-Riemann equations, which state that the integrands in the two expressions above are 0. An immediate consequence of Cauchy's theorem is the notion of path independence. Let f : C → C and let Γ₁ and Γ₂ be different curves in C, open or closed, for which either curve may be continuously deformed into the other without encountering any point at which f is not analytic. In particular, Γ₁ and Γ₂ enclose a region in which f is analytic. Then
$$\int_{\Gamma_1} f(z)\,dz = \int_{\Gamma_2} f(z)\,dz.$$
[Figure: two situations for path independence. Left: two distinct paths Γ₁ and Γ₂ joining points z₁ and z₂; the region S is the interior of the closed curve formed by their union. Right: two closed curves, one inside the other; S is the annular region between them. The normal n and tangent t are indicated on the curves.]
In the figures above two situations are depicted. In one the points z₁ and z₂ are connected by two distinct paths. The region S is the interior of the closed curve formed by the union of Γ₁ and Γ₂. In the other both curves Γⱼ are closed, but one is contained in the interior of the other. The region S is the annular region between the two curves. The orientation of the boundary ∂S is determined by the convention that the outward pointing normal n and the tangent t have the orientation of x̂ and ŷ respectively. The orientations of the curves Γⱼ are indicated with arrow heads. Note that the orientations of the Γⱼ and ∂S need not coincide.
R
Γ1
Example 4.3: Since e^{iz}/(z + i) is analytic in the upper half-plane one may, for any R > 0, replace the integral over [−R, R] (the contour Γ₁ in the figure above) by the integral over the semicircle of radius R (the contour Γ₂ in the figure above). Parameterizing the semicircle with Re^{iθ} and then letting R → ∞ one has
$$\int_{-\infty}^{\infty}\frac{e^{ix}}{x+i}\,dx = \lim_{R\to\infty}\left[\int_{-\infty}^{-R}\frac{e^{ix}}{x+i}\,dx + \int_0^{\pi}\frac{e^{iRe^{i\theta}}}{Re^{i\theta}+i}\,iRe^{i\theta}\,d\theta + \int_R^{\infty}\frac{e^{ix}}{x+i}\,dx\right] = 0.$$
Note that this argument requires that the integrand decrease more rapidly with increasing R than R^{-1}.
..................
[Figure: a region S with boundary ∂S and a small circle of radius ǫ around an interior point w.]
An immediate consequence of path independence is the Calculus of Residues: Let S be a region in which f is analytic everywhere except at an isolated point w in the interior of S. Let C_ǫ(w) be the circle of radius ǫ centered at w. Then
$$\int_{\partial S} f(z)\,dz = \lim_{\epsilon\downarrow 0}\int_{C_\epsilon(w)} f(z)\,dz.$$
The quantity
$$\frac{1}{2\pi i}\lim_{\epsilon\downarrow 0}\int_{C_\epsilon(w)} f(z)\,dz = \lim_{\epsilon\downarrow 0}\frac{\epsilon}{2\pi}\int_0^{2\pi} f\big(w+\epsilon e^{it}\big)\,e^{it}\,dt,$$
if it exists, is called the residue of f at w and will be denoted by Res(f, w).

Example 4.4: Using the calculus of residues the integral
$$\int_{-\infty}^{\infty}\frac{e^{ix}}{x^2+1}\,dx$$
can be calculated with very little effort. Let S_R be that part of the disk of radius R which is in the upper half plane. Since e^{iz}/(z² + 1) has isolated singularities at z = ±i,
$$\int_{\partial S_R}\frac{e^{iz}}{z^2+1}\,dz = 2\pi i\,\operatorname{Res}\!\Big(\frac{e^{iz}}{z^2+1},\,i\Big).$$
But
$$\lim_{R\to\infty}\int_{\partial S_R}\frac{e^{iz}}{z^2+1}\,dz = \int_{-\infty}^{\infty}\frac{e^{ix}}{x^2+1}\,dx$$
so that
$$\int_{-\infty}^{\infty}\frac{e^{ix}}{x^2+1}\,dx = 2\pi i\,\operatorname{Res}\!\Big(\frac{e^{iz}}{z^2+1},\,i\Big) = \lim_{\epsilon\downarrow 0}\int_0^{2\pi}\frac{e^{i(i+\epsilon e^{it})}}{\big(i+\epsilon e^{it}\big)^2+1}\,\epsilon i\,e^{it}\,dt = \frac{1}{2e}\int_0^{2\pi}dt = \frac{\pi}{e}.$$
To get from the second to the third lines in this last expression the integrand was expanded in series in ǫ using
$$e^{i(i+\epsilon e^{it})} = \frac{1}{e}\Big(1 + i\epsilon e^{it} + \dots\Big)$$
and
$$\frac{1}{\big(i+\epsilon e^{it}\big)^2+1} = \frac{1}{2i\epsilon e^{it}+\epsilon^2 e^{2it}} = \frac{1}{2i\epsilon e^{it}}\Big(1 + \frac{i}{2}\,\epsilon e^{it} + \dots\Big)$$
and noting that all but the first terms vanish in the limit ǫ ↓ 0.
..................
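The value π/e from Example 4.4 can be checked by direct numerical integration: by symmetry the imaginary part drops out, leaving ∫ cos x/(x² + 1) dx, and the tails beyond a large cutoff are small. A sketch:

```python
import math

def integrand(x):
    # real part of e^{ix}/(x^2 + 1)
    return math.cos(x) / (x**2 + 1)

def simpson(g, a, b, n):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1))
    s += 2 * sum(g(a + 2*k*h) for k in range(1, n//2))
    return s * h / 3

numeric = simpson(integrand, -200.0, 200.0, 400000)
print(numeric, math.pi / math.e)   # close agreement
```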
Consider a function f which has non-analytic (to be called singular) behavior at an isolated point w. There are three possibilities: w is either a pole, a branch point or an essential singularity. A function f has a pole at z = w if there is some m ∈ {1, 2, 3, . . .} for which (z − w)^m f(z) is analytic at w. The smallest such m is called the order of the pole. A pole is thus a singularity of the form
$$f(z)\sim\frac{\text{const}}{(z-w)^m}$$
with m the order of the pole. If m = 1 the pole is said to be simple. Functions which have poles in some region S but are otherwise analytic in S are said to be meromorphic in S. The function
$$\frac{1+z-3z^2}{(z-1)^2}$$
is meromorphic in C with a pole of order 2 at 1. The function
$$\frac{1}{\sin z}$$
is meromorphic in C with simple poles at mπ for any integer m. Similarly tan z has simple poles at (m + ½)π for any integer m. The function
$$\frac{e^{iz}}{z^2+1}$$
has simple poles at ±i. A branch point is a point around which a function is multivalued. These typically arise with inverse functions when there isn't a unique inverse. The simplest example is the square root. Any number w ≠ 0 has two square roots. Explicitly, given z with z² = w, the number −z also satisfies (−z)² = w. Thus, in defining a square root a choice must be made. In R one generally chooses √ to be the positive square root. In C things are more subtle and there are many more possible choices. Note that this structure collapses at 0: since −0 = 0, the square root of 0 is unique. If z = |z|e^{iφ} then the two possible square roots of z are
$$|z|^{1/2}\,e^{i\phi/2} \qquad\text{and}\qquad |z|^{1/2}\,e^{i(\phi/2+\pi)}.$$
Here |z|^{1/2} is the positive root of |z|. Choose the first and follow a path which surrounds 0 and comes back to z. The simplest is a circle of radius |z|. After going around once φ has increased by 2π, which means the first square root has changed to the second. The canonical example of a multivalued function is the natural logarithm, ln z. The defining relation for the natural logarithm is e^{ln z} = z. But this equation does not have a unique solution: any integer multiple of 2πi can be added to ln z since e^{2mπi+ln z} = e^{2mπi} e^{ln z} = z. Thus, there are an infinite number of natural logarithms, all differing by integer multiples of 2πi. Again, the point at which this degenerates is 0. Note that if ln(|z|e^{iφ}) = ln |z| + iφ then traversing a circle around 0 adds 2πi to the logarithm. Essential singularities are isolated singularities which are neither poles nor branch points. An example is e^{1/z} at z = 0. Essential singularities are as messy as they get. We won't encounter them often. For functions with poles there is an extension of the notion of a Taylor expansion called a Laurent expansion. Explicitly, given a function f with a pole of order m at w one
can write
$$f(z) = \sum_{n=-m}^{\infty} a_n (z-w)^n = \frac{a_{-m}}{(z-w)^m} + \frac{a_{-m+1}}{(z-w)^{m-1}} + \dots + \frac{a_{-1}}{z-w} + a_0 + a_1(z-w) + a_2(z-w)^2 + \dots \tag{4.3}$$
for z sufficiently close to w. A Laurent expansion gives complete information about the behavior of f near w. In particular, for z very close to w one can approximate f(z) by
$$f(z)\approx\frac{a_{-m}}{(z-w)^m}.$$
The coefficients a_n can be determined using the formula
$$\int_{C_\epsilon(w)} (z-w)^n\,dz = i\epsilon^{n+1}\int_0^{2\pi} e^{i(n+1)t}\,dt = 2\pi i\,\delta_{n,-1}$$
where δ_{j,k} is 1 if j = k and 0 if j ≠ k. Thus
$$a_n = \frac{1}{2\pi i}\int_{C_\epsilon(w)} (z-w)^{-n-1} f(z)\,dz. \tag{4.4}$$
Note that a₋₁ = Res(f, w). Although formula (4.4) for a_n works fine, in practice it is often easier to note that (z − w)^m f(z) is analytic at w and Taylor expand it. Then a_n is the (n + m)th Taylor coefficient of (z − w)^m f(z),
$$a_n = \frac{1}{(n+m)!}\,\frac{d^{n+m}}{dz^{n+m}}\Big[(z-w)^m f(z)\Big]\bigg|_{z=w}.$$

Example 4.5: Rather than using either of the above formulae to find the coefficients a_n it's often easier to just expand. The first few terms of the Laurent series about 0 for 1/(e^z − 1) are
$$\frac{1}{e^z-1} = \frac{1}{z+\frac12 z^2+\frac16 z^3+\dots} = \frac{1}{z}\,\frac{1}{1+\frac12 z+\frac16 z^2+\dots} = \frac{1}{z}\left(1-\frac12 z+\frac{1}{12}\,z^2+\dots\right) = \frac{1}{z} - \frac12 + \frac{1}{12}\,z + \dots.$$
If ln z is chosen so that ln 1 = 0, the first few terms of the Laurent series about 1 for 1/ln z are
$$\frac{1}{\ln z} = \frac{1}{\ln\big(1+(z-1)\big)} = \frac{1}{(z-1)-\frac12(z-1)^2+\frac13(z-1)^3-\dots} = \frac{1}{z-1}\,\frac{1}{1-\frac12(z-1)+\frac13(z-1)^2-\dots}$$
$$= \frac{1}{z-1}\left(1+\frac12(z-1)-\frac{1}{12}(z-1)^2+\dots\right) = \frac{1}{z-1} + \frac12 - \frac{1}{12}(z-1) + \dots.$$
..................
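The expansions of Example 4.5 can be sanity checked numerically: near the singularity, the function minus the truncated Laurent series should be small. A sketch for 1/(e^z − 1), whose z² Laurent coefficient vanishes, so the error is O(|z|³):

```python
import cmath

def f(z):
    return 1.0 / (cmath.exp(z) - 1.0)

def laurent(z):
    # first terms of the expansion about 0: 1/z - 1/2 + z/12
    return 1.0/z - 0.5 + z/12.0

for z in (0.1, 0.1j, 0.05 + 0.05j):
    print(z, abs(f(z) - laurent(z)))   # small, shrinking like |z|^3
```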
Linear O.D.E.
Linear Ordinary Differential Equations

The most general inhomogeneous nth order linear ordinary differential equation may be written
$$\Big(a_n(x)\frac{d^n}{dx^n} + a_{n-1}(x)\frac{d^{n-1}}{dx^{n-1}} + \dots + a_0(x)\Big) f(x) = g(x). \tag{5.1}$$
Here g is known. It turns out that to solve this equation it is necessary to have completely solved the equation with g = 0. We will thus begin with
$$\Big(a_n(x)\frac{d^n}{dx^n} + a_{n-1}(x)\frac{d^{n-1}}{dx^{n-1}} + \dots + a_0(x)\Big) f(x) = 0. \tag{5.2}$$
(5.2) is a homogeneous equation. We've already seen two things: there are in general many solutions to a linear ODE, and linear combinations of solutions are also solutions. The immediate problem is to write down a general form for the solutions of the equation.

Regular Points: A regular point for (5.2) is a point at which the aⱼ(x) are all continuous and aₙ ≠ 0. The basic fact about linear ODEs is that at a regular point a there is a unique solution f with prescribed values of f(a), f′(a), . . ., f^{(n−1)}(a). Thus, given any n numbers α₀, α₁, . . . , α_{n−1} and given any regular point a there is a solution whose first n − 1 derivatives at a are the αⱼ. Since any solution obviously has n − 1 derivatives at a there is a one to one correspondence between the set of n independent numbers, in other words Rⁿ or Cⁿ, and the set of solutions to the linear ODE. Imagine that we have somehow produced n linearly independent solutions of (5.2), f₁, f₂, . . . , fₙ. Then the linear combinations
$$f = c_1 f_1 + c_2 f_2 + \dots + c_n f_n \tag{5.3}$$
are all solutions. The set of all such combinations is an n-dimensional vector space. In other words, the general such linear combination has n free parameters. One can hope that by choosing these parameters correctly one can give f whatever n − 1 derivatives one wants at
a prescribed point a. (To see that this is possible choose the f_k so that f_k^{(j)}(a) = δ_{k,j+1}, so that c_k = α_{k−1}. Any other choice for the f_k amounts to a change of basis.) Thus, in order to completely solve an nth order linear ODE it is sufficient to produce n linearly independent solutions.

Example 5.1:
Fortunately (or perhaps intentionally) one of the most commonly used equations in applications is the simplest: the 1-dimensional Helmholtz equation
$$\Big(\frac{d^2}{dx^2} + k^2\Big) f(x) = 0.$$
This equation is a linear ODE which is regular on all of R. The general solution may be written
$$f(x) = c_1\cos kx + c_2\sin kx.$$
Note that c₁ = f(0) and c₂ = f′(0)/k. Linear superposition is not the only way to parameterize solutions to linear equations. A common parameterization of solutions to the 1-dimensional Helmholtz equation is
$$f(x) = A\cos(kx - \phi).$$
Here A and the phase φ are the parameters. One can easily show that
$$A^2 = c_1^2 + c_2^2 \qquad\text{and}\qquad \tan\phi = \frac{c_2}{c_1}.$$
..................
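The conversion between the two parameterizations above can be sketched numerically. This is a minimal illustration using only the standard library; the function names are ours, not the text's.

```python
import math

def helmholtz_params(f0, fp0, k):
    """Convert initial data (f(0), f'(0)) of f'' + k^2 f = 0 into the two
    parameterizations used above: (c1, c2) and amplitude/phase (A, phi)."""
    c1, c2 = f0, fp0 / k          # f = c1 cos(kx) + c2 sin(kx)
    A = math.hypot(c1, c2)        # A^2 = c1^2 + c2^2
    phi = math.atan2(c2, c1)      # tan(phi) = c2 / c1
    return c1, c2, A, phi

def f(x, c1, c2, k):
    return c1 * math.cos(k * x) + c2 * math.sin(k * x)

# Sanity check: both parameterizations give the same function.
c1, c2, A, phi = helmholtz_params(f0=1.0, fp0=2.0, k=3.0)
for x in (0.0, 0.4, 1.7):
    assert abs(f(x, c1, c2, 3.0) - A * math.cos(3.0 * x - phi)) < 1e-12
```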
Example 5.2: Consider time harmonic solutions of the 1-dimensional bending equation, u(x, t) = f(x) \sin(ωt). With the notation k = \sqrt{K}\,ω one has
\[
\Big( \frac{d^4}{dx^4} - k^2 \Big) f = 0.
\]
The four solutions \cos(\sqrt{k}\,x), \sin(\sqrt{k}\,x), \cosh(\sqrt{k}\,x) and \sinh(\sqrt{k}\,x) are clearly linearly independent since no combination of trigonometric and hyperbolic functions of a real variable can be zero everywhere. Thus
\[
f(x) = c_1 \cos(\sqrt{k}\,x) + c_2 \sin(\sqrt{k}\,x) + c_3 \cosh(\sqrt{k}\,x) + c_4 \sinh(\sqrt{k}\,x).
\]
Imagine that the rod is subjected to a periodic stress at x = 0 in such a way that the deflection at 0, α_0, is negative the amount of bending: α_1 = -α_0. Assume further that the second and third derivatives, α_2 and α_3, are 0 at x = 0. Then
\[
\begin{pmatrix}
1 & 0 & 1 & 0 \\
0 & \sqrt{k} & 0 & \sqrt{k} \\
-k & 0 & k & 0 \\
0 & -k^{3/2} & 0 & k^{3/2}
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{pmatrix}
=
\begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix} α_0.
\]
This system is most easily solved directly (without trying to invert the matrix). One finds c_3 = c_1, c_4 = c_2, c_1 = \frac{α_0}{2} and c_2 = -\frac{α_0}{2\sqrt{k}}. Thus
\[
f(x) = \frac{α_0}{2} \Big( \cos(\sqrt{k}\,x) + \cosh(\sqrt{k}\,x) - \frac{\sin(\sqrt{k}\,x) + \sinh(\sqrt{k}\,x)}{\sqrt{k}} \Big).
\]
..................
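The 4×4 system above can also be solved by brute force; a sketch assuming NumPy is available, with test values of k and α_0 chosen arbitrarily.

```python
import numpy as np

# Solve the 4x4 system from Example 5.2 numerically and compare with the
# closed-form coefficients c3 = c1, c4 = c2, c1 = a0/2, c2 = -a0/(2*sqrt(k)).
k, a0 = 2.0, 1.5
rk = np.sqrt(k)
M = np.array([
    [1.0,     0.0,     1.0,    0.0],    # f(0)    =  a0
    [0.0,     rk,      0.0,    rk],     # f'(0)   = -a0
    [-k,      0.0,     k,      0.0],    # f''(0)  =  0
    [0.0,    -k * rk,  0.0,    k * rk], # f'''(0) =  0
])
rhs = a0 * np.array([1.0, -1.0, 0.0, 0.0])
c1, c2, c3, c4 = np.linalg.solve(M, rhs)

assert abs(c1 - a0 / 2) < 1e-12
assert abs(c2 + a0 / (2 * rk)) < 1e-12
assert abs(c3 - c1) < 1e-12 and abs(c4 - c2) < 1e-12
```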
The fact that a solution is completely determined by its 0th through (n-1)st derivatives at one point allows one to study models in which different differential equations are solved in different regions. If the model is L_1 f = 0 for x < a and L_2 f = 0 for x > a, the way to proceed is to find the general solution to L_1 f_1 = 0 and the general solution to L_2 f_2 = 0 and then impose the matching conditions
\[
f_1^{(j)}(a) = f_2^{(j)}(a) \qquad \text{for } j = 0, 1, \ldots, n-1.
\]

Example 5.3: Consider a model in which f satisfies 1-dimensional Helmholtz equations with different values of k for x > 0 and x < 0:
\[
\frac{d^2 f}{dx^2} =
\begin{cases}
-k_-^2 f & \text{if } x < 0 \\
-k_+^2 f & \text{if } x > 0.
\end{cases}
\]
For x < 0 one has
\[
f(x) = a_1 \cos(k_- x) + a_2 \sin(k_- x)
\]
and for x > 0
\[
f(x) = b_1 \cos(k_+ x) + b_2 \sin(k_+ x).
\]
At x = 0 we must have a_1 = b_1 and k_- a_2 = k_+ b_2. Thus the general solution may be written
\[
f(x) =
\begin{cases}
a_1 \cos(k_- x) + a_2 \sin(k_- x) & \text{if } x < 0 \\
a_1 \cos(k_+ x) + \frac{k_-}{k_+} a_2 \sin(k_+ x) & \text{if } x > 0.
\end{cases}
\]
..................
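The matching at x = 0 can be sketched as follows (standard library only; the values of k_± and the coefficients are arbitrary test data).

```python
import math

# Given (a1, a2) for x < 0, continuity of f and f' at x = 0 fixes the
# coefficients for x > 0, as in Example 5.3.
km, kp = 1.0, 2.5          # k_minus, k_plus (arbitrary test values)
a1, a2 = 0.7, -1.3

b1 = a1                    # from f(0-) = f(0+)
b2 = (km / kp) * a2        # from f'(0-) = f'(0+)

def f_left(x):  return a1 * math.cos(km * x) + a2 * math.sin(km * x)
def f_right(x): return b1 * math.cos(kp * x) + b2 * math.sin(kp * x)

assert abs(f_left(0.0) - f_right(0.0)) < 1e-12
# derivative at 0: f' = -k a sin + k a cos, so f'(0) = k a2 on each side
assert abs(km * a2 - kp * b2) < 1e-12
```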
The set of equations f^{(j)}(a) = α_j is a set of n linear equations for the coefficients c_k. Explicitly,
\[
\begin{pmatrix}
f_1(a) & f_2(a) & \cdots & f_n(a) \\
f_1'(a) & f_2'(a) & \cdots & f_n'(a) \\
\vdots & \vdots & & \vdots \\
f_1^{(n-1)}(a) & f_2^{(n-1)}(a) & \cdots & f_n^{(n-1)}(a)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
=
\begin{pmatrix} α_0 \\ α_1 \\ \vdots \\ α_{n-1} \end{pmatrix}.
\]
This equation has a solution for any choice of the α_j only if the matrix on the left is invertible. One can conclude that the following statements are equivalent:

i) Any solution of (5.2) may be written in the form (5.3).

ii) The f_j are n linearly independent solutions of (5.2).

iii) The f_j are solutions of (5.2) for which the determinant
\[
W(f_1, \ldots, f_n) = \det
\begin{pmatrix}
f_1(a) & f_2(a) & \cdots & f_n(a) \\
f_1'(a) & f_2'(a) & \cdots & f_n'(a) \\
\vdots & \vdots & & \vdots \\
f_1^{(n-1)}(a) & f_2^{(n-1)}(a) & \cdots & f_n^{(n-1)}(a)
\end{pmatrix}
\neq 0 \tag{5.4}
\]
for any a. The determinant in (5.4) is known as the Wronskian of f_1, ..., f_n.

In general, producing n linearly independent solutions to an nth order ODE can be extremely difficult if n > 1. Often one needs to resort to numerical procedures. As a rule the difficulty increases with n. However, most of the problems we will encounter are second order. The most general second order linear ODE is
\[
\Big( a_2(x) \frac{d^2}{dx^2} + a_1(x) \frac{d}{dx} + a_0(x) \Big) f(x) = 0.
\]
As indicated, we restrict ourselves to x for which a_2 ≠ 0. Thus we can divide by a_2, writing the equation
\[
\Big( \frac{d^2}{dx^2} + p(x) \frac{d}{dx} + q(x) \Big) f(x) = 0 \tag{5.5}
\]
with the obvious definitions of p and q. The Wronskian of two functions has the simple form
\[
W(f_1, f_2) = f_1 f_2' - f_1' f_2.
\]
If f_1 and f_2 are solutions of (5.5) then one finds by direct calculation that
\[
\Big( \frac{d}{dx} + p \Big) W(f_1, f_2) = 0.
\]
This first order equation can be solved immediately, giving Abel's identity
\[
W(f_1, f_2) = K e^{-\int p \, dx}
\]
where K is a constant. In particular, to determine W(f_1, f_2) one need only find K, and this can be done at any point x. Further, since the exponential is never 0, f_1 and f_2 are linearly independent solutions if their Wronskian is non-zero anywhere.

Given one solution to a second order linear ODE a second solution can be constructed using the method called reduction of order. Let f_1 be a solution of (5.5). We will look for another solution of the form f_2 = u f_1 and will obtain an equation for u by substituting f_2 into (5.5). One obtains
\[
\Big( \frac{d^2}{dx^2} + \big( 2 \frac{f_1'}{f_1} + p \big) \frac{d}{dx} \Big) u = 0.
\]
This is a first order equation for u' which can be solved immediately,
\[
u' = \text{const} \, \frac{1}{(f_1)^2} \, e^{-\int p \, dx}.
\]
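Abel's identity is easy to check numerically. For the Helmholtz equation written in the form (5.5) one has p = 0, so the Wronskian should be constant; a quick sketch:

```python
import math

# For f'' + k^2 f = 0 (p = 0, q = k^2) Abel's identity predicts a constant
# Wronskian. With f1 = cos(kx), f2 = sin(kx) one has W = k identically.
k = 2.0
f1, df1 = lambda x: math.cos(k * x), lambda x: -k * math.sin(k * x)
f2, df2 = lambda x: math.sin(k * x), lambda x: k * math.cos(k * x)

def W(x):
    return f1(x) * df2(x) - df1(x) * f2(x)

values = [W(x) for x in (0.0, 0.3, 1.1, 2.7)]
assert all(abs(v - k) < 1e-12 for v in values)   # W = k everywhere
```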
The constant is uninteresting and can be set to 1.

A final comment about regular points. If the coefficients a_j(x) are analytic at x then it turns out that the solutions f are analytic at x as well. (Note that this holds for the solutions of the Euler equations of Example 5.5 except at x = 0.) Since the solutions are analytic at any regular point they have convergent Taylor expansions about regular points.

Example 5.4: Consider a linear second order ODE with analytic coefficients. Let a be a regular point. Then the Taylor expansions of the solutions can be obtained directly from the differential equation as follows. Write the equation in the form (5.5) and expand everything in sight in Taylor series about a:
\[
p(x) = \sum_{j=0}^{\infty} p_j (x-a)^j, \qquad
q(x) = \sum_{j=0}^{\infty} q_j (x-a)^j
\]
and
\[
f(x) = \sum_{j=0}^{\infty} c_j (x-a)^j.
\]
Then
\[
\Big( \frac{d^2}{dx^2} + \sum_{k=0}^{\infty} p_k (x-a)^k \, \frac{d}{dx} + \sum_{m=0}^{\infty} q_m (x-a)^m \Big) \sum_{j=0}^{\infty} c_j (x-a)^j = 0,
\]
from which it follows that
\[
\sum_{j=0}^{\infty} \Big[ (j+2)(j+1) c_{j+2} + \sum_{k+m=j} \big( (m+1) c_{m+1} p_k + c_m q_k \big) \Big] (x-a)^j = 0.
\]
In order for a power series to be 0 each coefficient must be 0. Thus
\[
(j+2)(j+1) c_{j+2} + \sum_{k+m=j} \big( (m+1) c_{m+1} p_k + c_m q_k \big) = 0.
\]
At j = 0 one has
\[
2 c_2 + c_1 p_0 + c_0 q_0 = 0.
\]
Note that c_2 can be expressed as a function of c_0 and c_1:
\[
c_2 = - \frac{p_0 c_1 + q_0 c_0}{2}.
\]
At j = 1,
\[
6 c_3 + 2 c_2 p_0 + c_1 (q_0 + p_1) + c_0 q_1 = 0,
\]
giving c_3 as a function of c_0 and c_1,
\[
c_3 = - \frac{(q_0 + p_1 - p_0^2) c_1 + (q_1 - p_0 q_0) c_0}{6}.
\]
Ultimately, all the Taylor coefficients c_j can be expressed as some combination of the p_j and q_j times c_0 plus some other combination of the p_j and q_j times c_1, determining the solutions up to two free parameters, c_0 and c_1. Note that the solution with f(a) = 1 and f'(a) = 0, call it f_1(x), is obtained by setting c_0 = 1 and c_1 = 0, while the solution with f(a) = 0 and f'(a) = 1, call it f_2(x), is obtained by setting c_0 = 0 and c_1 = 1. Then f(x) = c_0 f_1(x) + c_1 f_2(x). Of course the expressions for the c_j generally get more complicated as j increases. As a general technique, this approach is useful only for obtaining the first few terms in the Taylor expansions of f. There are, however, special cases. One classic example is Airy's equation,
\[
\Big( \frac{d^2}{dx^2} - x \Big) f(x) = 0. \tag{5.6}
\]
One finds that
\[
(j+2)(j+1) c_{j+2} - c_{j-1} = 0
\]
for j > 0 and 2 c_2 = 0 for j = 0. This is a fairly simple recursion relation. Note that it relates the coefficients c_{3j} to c_{3j-3}, c_{3j+1} to c_{3j-2} and c_{3j+2} to c_{3j-1}. One finds, working up from j = 1, that
\[
c_{3j} = \frac{1}{(3j)(3j-1)(3j-3)(3j-4) \cdots (3)(2)} \, c_0,
\]
\[
c_{3j+1} = \frac{1}{(3j+1)(3j)(3j-2)(3j-3) \cdots (4)(3)} \, c_1
\]
and c_{3j+2} = 0. Thus we have produced two linearly independent solutions to Airy's equation: one by choosing c_0 = 1 and c_1 = 0,
\[
f_1(x) = 1 + \frac{1}{3 \cdot 2} x^3 + \frac{1}{6 \cdot 5 \cdot 3 \cdot 2} x^6 + \frac{1}{9 \cdot 8 \cdot 6 \cdot 5 \cdot 3 \cdot 2} x^9 + \cdots,
\]
and another by choosing c_0 = 0 and c_1 = 1,
\[
f_2(x) = x + \frac{1}{4 \cdot 3} x^4 + \frac{1}{7 \cdot 6 \cdot 4 \cdot 3} x^7 + \frac{1}{10 \cdot 9 \cdot 7 \cdot 6 \cdot 4 \cdot 3} x^{10} + \cdots.
\]
..................
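The series solution can be cross-checked against a direct numerical integration. A sketch using only the standard library: the recursion above generates the coefficients of f_1, and a brute-force RK4 integration of f'' = x f from the same initial data should agree.

```python
# Build the Taylor coefficients of the Airy solution f1 (c0 = 1, c1 = 0)
# from the recursion (j+2)(j+1) c_{j+2} = c_{j-1}, then compare the
# truncated series at x = 1 with an RK4 integration of f'' = x f.
N = 60
c = [0.0] * (N + 3)
c[0] = 1.0                      # f1(0) = 1, f1'(0) = 0, and 2 c2 = 0
for j in range(1, N):
    c[j + 2] = c[j - 1] / ((j + 2) * (j + 1))

def f1_series(x):
    return sum(cj * x**n for n, cj in enumerate(c))

def rhs(x, f, fp):              # first order system for f'' = x f
    return fp, x * f

f, fp = 1.0, 0.0
steps = 10000
h = 1.0 / steps
for i in range(steps):          # RK4 from x = 0 to x = 1
    x = i * h
    k1 = rhs(x, f, fp)
    k2 = rhs(x + h/2, f + h/2 * k1[0], fp + h/2 * k1[1])
    k3 = rhs(x + h/2, f + h/2 * k2[0], fp + h/2 * k2[1])
    k4 = rhs(x + h, f + h * k3[0], fp + h * k3[1])
    f  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    fp += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

assert abs(f1_series(1.0) - f) < 1e-8
```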
Regular Singular Points: If a_n(a) = 0 in (5.2) then a is a singular point. We will restrict our attention to second order equations. In the form (5.5) a singular point x is a point at which p and q diverge. Loosely put, regular singular points are points at which the divergence of p is first order and the divergence of q is second order. We begin with an example which is characteristic of, and necessary for, the whole theory.

Example 5.5: A class of exactly solvable linear ODEs are the Euler equations,
\[
\Big( a x^2 \frac{d^2}{dx^2} + b x \frac{d}{dx} + c \Big) f(x) = 0.
\]
Note that at x = 0 the coefficient of \frac{d^2}{dx^2} is 0 and the basic facts quoted above don't apply. x = 0 is said to be a singular point for these equations. For x ≠ 0 we can expect that all solutions can be built from linear combinations of any two linearly independent solutions. Try the form f(x) = x^r. Then
\[
a r (r-1) + b r + c = 0,
\]
so that we find
\[
r = \frac{a - b \pm \sqrt{(a-b)^2 - 4ac}}{2a}.
\]
Thus, if (a-b)^2 - 4ac ≠ 0 two solutions have been produced,
\[
f_1(x) = x^{\frac{a-b+\sqrt{(a-b)^2-4ac}}{2a}}
\qquad \text{and} \qquad
f_2(x) = x^{\frac{a-b-\sqrt{(a-b)^2-4ac}}{2a}}.
\]
If (a-b)^2 - 4ac = 0 only one solution,
\[
f_1(x) = x^{\frac{a-b}{2a}},
\]
has been produced. The other is of the form f_2(x) = u(x) \, x^{\frac{a-b}{2a}} where, by reduction of order,
\[
u'(x) = x^{\frac{b-a}{a}} \, e^{-\frac{b}{a} \int \frac{1}{x} \, dx} = \frac{1}{x}.
\]
Thus we may choose
\[
f_2(x) = x^{\frac{a-b}{2a}} \ln x
\]
and the general solution can be written
\[
f(x) = x^{\frac{a-b}{2a}} \big( c_1 + c_2 \ln x \big).
\]
Note that even though the coefficient of \frac{d^2}{dx^2} is 0 at x = 0, one can obtain solutions for all x ≠ 0. Note also that solutions which are singular at x = 0 can occur, depending on the exponents r.
..................
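The indicial roots of an Euler equation are easy to compute and check; a sketch with arbitrary test coefficients (complex arithmetic covers the case of complex roots):

```python
import cmath

# For a x^2 f'' + b x f' + c f = 0, compute the exponents r from
# a r(r-1) + b r + c = 0 and verify that f = x^r solves the equation
# at a sample point x > 0.
a, b, c = 2.0, 3.0, -1.0
disc = cmath.sqrt((a - b) ** 2 - 4 * a * c)
roots = [(a - b + disc) / (2 * a), (a - b - disc) / (2 * a)]

x = 1.7
for r in roots:
    f   = x ** r
    fp  = r * x ** (r - 1)
    fpp = r * (r - 1) * x ** (r - 2)
    residual = a * x**2 * fpp + b * x * fp + c * f
    assert abs(residual) < 1e-10
```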
The general approach will be the same as taken for the Euler equations of Example 5.5: solve the equations near, but not at, a and see how they behave as x approaches a. We saw for the Euler equations that there is a variety of possible behavior: solutions can have poles or branch points at singular points or can be regular.
Example 5.6: Unfortunately singular ODEs arise often in applications. Consider the 3-dimensional Helmholtz equation
\[
\big( \Delta + k^2 \big) \Psi = 0.
\]
In a homework problem you are asked to transform the Laplacian into cylindrical coordinates, obtaining
\[
\Delta = \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial φ^2} + \frac{\partial^2}{\partial z^2}. \tag{5.7}
\]
In this coordinate system the Helmholtz equation is separable in the sense that solutions in the form of a product Ψ(r, φ, z) = R(r) Φ(φ) Z(z) can be found, reducing a PDE to three ODEs. In fact, setting Φ(φ) = e^{imφ} and Z(z) = e^{i k_z z} one finds that
\[
\Big( \frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} - \frac{m^2}{r^2} - k_z^2 + k^2 \Big) R(r) = 0. \tag{5.8}
\]
This equation is singular at r = 0. In Example 3.4 the Laplacian was expressed in spherical coordinates, (3.1). Again, the Helmholtz equation is separable in this coordinate system. Setting Ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ) one obtains solutions if Φ(φ) = e^{imφ}, and then
\[
\Big( \frac{1}{\sin θ} \frac{d}{dθ} \sin θ \frac{d}{dθ} - \frac{m^2}{\sin^2 θ} \Big) Θ(θ) = -l(l+1) Θ(θ) \tag{5.9}
\]
and
\[
\Big( \frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr} - \frac{l(l+1)}{r^2} + k^2 \Big) R(r) = 0. \tag{5.10}
\]
Equation (5.9) is singular at θ = 0 and θ = π. Equation (5.10) is singular at r = 0. The choice of l(l+1) as the separation parameter in (5.9) turns out to be convenient. Equations (5.8), (5.9) and (5.10) will have to be dealt with in detail in this course.
..................
A singular point a of a second order linear ODE (5.5) is said to be a regular singular point if p and q are both meromorphic with a pole at a of order no larger than 1 for p and 2 for q. It follows that, for x close enough to a,
\[
p(x) = \frac{p_{-1}}{x-a} + p_0 + p_1 (x-a) + p_2 (x-a)^2 + \ldots
\]
and
\[
q(x) = \frac{q_{-2}}{(x-a)^2} + \frac{q_{-1}}{x-a} + q_0 + q_1 (x-a) + q_2 (x-a)^2 + \ldots.
\]
To understand the behavior of solutions to (5.5) as x approaches a it seems reasonable to approximate p and q by their most divergent terms,
\[
p(x) \approx \frac{p_{-1}}{x-a} \qquad \text{and} \qquad q(x) \approx \frac{q_{-2}}{(x-a)^2}.
\]
Substituting these approximations into (5.5) and shifting variables so that a = 0 one obtains an Euler equation
\[
\Big( \frac{d^2}{dx^2} + \frac{p_{-1}}{x} \frac{d}{dx} + \frac{q_{-2}}{x^2} \Big) f(x) \approx 0.
\]
Thus, it seems reasonable to expect that the behavior of the correct solution f, as x approaches a regular singular point, is similar to the behavior of the solutions of the related Euler equation. Recalling Example 5.5, this behavior is (x-a)^r where r satisfies the indicial equation
\[
r(r-1) + p_{-1} r + q_{-2} = 0.
\]
The facts are as follows. Let r_1 and r_2 be the roots of the indicial equation. Assume that r_1 ≥ r_2 and set a = 0. Then

i) If r_1 - r_2 is not an integer then there are analytic functions g_1 and g_2 with g_1(0) = g_2(0) = 1 such that
\[
f_1(x) = x^{r_1} g_1(x)
\]
\[
f_2(x) = x^{r_2} g_2(x)
\]
are linearly independent solutions.

ii) If r_1 = r_2 then there are analytic functions g_1 and g_2 with g_1(0) = g_2(0) = 1 such that
\[
f_1(x) = x^{r_1} g_1(x), \qquad f_2(x) = f_1(x) \ln x + x^{r_1} g_2(x)
\]
are linearly independent solutions.

iii) If r_1 - r_2 is a positive integer then there are analytic functions g_1 and g_2 and a constant c with g_1(0) = g_2(0) = 1 such that
\[
f_1(x) = x^{r_1} g_1(x), \qquad f_2(x) = c \, f_1(x) \ln x + x^{r_2} g_2(x)
\]
are linearly independent solutions.

The Taylor coefficients of the g_j and the constant c can be determined by substituting into the differential equation as in Example 5.4. The procedure is tedious but straightforward. However, the Taylor expansion of the g_j is not very interesting. What is important about these results is that the nature of the solutions' singularities is made explicit:
\[
f_1(x) = x^{r_1} \big( 1 + a_1 x + \ldots \big)
\]
and
\[
f_2(x) = c \, f_1(x) \ln x + x^{r_2} \big( 1 + b_1 x + \ldots \big)
\]
for some coefficients a_j, b_j and c. In case (i) c = 0 and in case (ii) c = 1.

Example 5.7: Bessel's differential equation is
\[
\Big( \frac{d^2}{dx^2} + \frac{1}{x} \frac{d}{dx} - \frac{m^2}{x^2} + 1 \Big) f(x) = 0. \tag{5.11}
\]
Choose m ≥ 0. The solutions of this equation are known as Bessel functions. Note that if f is a solution to Bessel's equation then
\[
R(r) = f\big( \sqrt{k^2 - k_z^2} \; r \big)
\]
is a solution to the radial part of the Helmholtz equation in cylindrical coordinates, (5.8). Note that there is no a priori restriction on the sign of k^2 - k_z^2. Bessel's equation is singular at 0 with
\[
p(x) = \frac{1}{x} \qquad \text{and} \qquad q(x) = -\frac{m^2}{x^2} + 1.
\]
The indicial equation is
\[
0 = r(r-1) + r - m^2 = r^2 - m^2.
\]
Thus r_1 = m and r_2 = -m. The nature of the solutions depends on m. The case of interest for the Helmholtz equation is m = 0, 1, 2, .... There is always a solution which is regular at 0, the f_1 given above. The standard notation for this solution is J_m(x), called the Bessel function of order m of the first kind. It is generally normalized so that
\[
J_m(x) = \frac{1}{m!} \Big( \frac{x}{2} \Big)^m \big( 1 + c_1 x + c_2 x^2 + \ldots \big).
\]
The second solution which arises from the analysis above, the f_2, is called the mth order Bessel function of the second kind and is denoted by Y_m(x) (although notation varies: Morse and Ingard call it N_m). It is generally chosen so that, for m > 0,
\[
Y_m(x) = \frac{2}{π} J_m(x) \ln x - \frac{(m-1)!}{π} \Big( \frac{x}{2} \Big)^{-m} \big( 1 + d_1 x + d_2 x^2 + \ldots \big).
\]
The coefficients c_j are determined by substituting into the differential equation and setting the coefficients of the different powers of x equal to 0, beginning with the lowest power and working out. The d_j are more subtle since adding any multiple of J_m to Y_m doesn't affect the stated properties of Y_m: Y_m + c J_m is a solution to Bessel's equation which diverges as x → 0 in the same way that Y_m does. Note that this has no effect on the coefficients d_j until j = 2m (see problem 24). Note that, up to a multiplicative constant, J_m is the only solution which is finite at x = 0. Further, for m > 0 the solution Y_m diverges like \frac{1}{x^m} and for m = 0 like \ln x. J_m is analytic while Y_m, because of the logarithm, is multivalued with a branch point at x = 0.
..................
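For m = 0 the substitution procedure described above is short enough to sketch. Multiplying (5.11) by x^2 and inserting f = Σ c_j x^j gives the recursion j^2 c_j + c_{j-2} = 0, whose solution is the classical J_0 series.

```python
from math import factorial

# Generate the Taylor coefficients of J0 directly from Bessel's equation
# (m = 0): substituting f = sum c_j x^j into x^2 f'' + x f' + x^2 f = 0
# gives j^2 c_j + c_{j-2} = 0 with c_0 = 1, c_1 = 0.
N = 20
c = [0.0] * (N + 1)
c[0] = 1.0                      # J0(0) = 1
for j in range(2, N + 1):
    c[j] = -c[j - 2] / (j * j)

# Compare with the classical series J0(x) = sum (-1)^k (x/2)^{2k} / (k!)^2.
for k in range(N // 2 + 1):
    expected = (-1) ** k / (4 ** k * factorial(k) ** 2)
    assert abs(c[2 * k] - expected) < 1e-15 * abs(expected) + 1e-18
```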
Example 5.8: Now consider (5.10). Solutions to this equation can be given by R(r) = f(kr) where f satisfies
\[
\Big( \frac{d^2}{dx^2} + \frac{2}{x} \frac{d}{dx} - \frac{l(l+1)}{x^2} + 1 \Big) f(x) = 0. \tag{5.12}
\]
Solutions to (5.12) are sometimes called spherical Bessel functions. This equation has a regular singular point at x = 0. The indicial equation is
\[
0 = r(r-1) + 2r - l(l+1) = (r-l)(r+l+1).
\]
Thus, r_1 = l and r_2 = -l-1. It follows that there is one solution (up to an overall multiplicative constant) which behaves like x^l as x → 0. It is generally given as
\[
j_l(x) = \frac{x^l}{(2l+1)(2l-1) \cdots (3)(1)} \big( 1 + a_1 x + a_2 x^2 + \ldots \big).
\]
Any other solution which is linearly independent of j_l must diverge as x → 0, behaving like x^{-l-1}. Further, since r_1 - r_2 = 2l+1 is an integer, this second solution might also have a logarithmic divergence and thus be multi-valued about 0. In this case, however, we turn out to be lucky: there is no logarithmic term (the coefficient c in case (iii) turns out to be 0). A standard choice is
\[
y_l(x) = - \frac{(2l-1)(2l-3) \cdots (3)(1)}{x^{l+1}} \big( 1 + b_1 x + b_2 x^2 + \ldots \big).
\]
Note that replacing x by -x in (5.12) doesn't change the differential equation. Thus j_l(-x) and y_l(-x) are solutions as well. But j_l(-x) \sim x^l as x → 0, implying that j_l(x) and j_l(-x) are linearly dependent, which means that they are proportional to each other. Thus
\[
\frac{x^l}{(2l+1)(2l-1) \cdots (3)(1)} \big( 1 + a_1 x + a_2 x^2 + \ldots \big)
= \text{const} \, (-1)^l \, \frac{x^l}{(2l+1)(2l-1) \cdots (3)(1)} \big( 1 - a_1 x + a_2 x^2 - \ldots \big).
\]
This is only possible if const = (-1)^l and 0 = a_1 = a_3 = a_5 = \ldots. For y_l the situation is a bit more subtle and depends on the choice of y_l. One can add multiples of j_l to y_l without affecting the behavior of y_l at 0. Further, since y_l(-x) is also a solution which diverges like \frac{1}{x^{l+1}} at 0, the best one can say is that
\[
y_l(-x) = c_1 y_l(x) + c_2 j_l(x).
\]
However, one can insist that y_l(-x) = (-1)^{l+1} y_l(x), since if this were not the case for y_l one could choose y_l(x) + (-1)^{l+1} y_l(-x) as the second solution. Thus, with this choice for y_l, 0 = b_1 = b_3 = b_5 = \ldots. The coefficients a_2 and b_2 can now be determined by substituting into the differential equation.
..................
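A quick numerical illustration for l = 1, using the closed form j_1(x) = \sin x / x^2 - \cos x / x quoted in a later example: it satisfies (5.12) with l = 1 and has the leading behavior x/3 predicted by the series above. Derivatives are taken by central differences, so the tolerances are loose.

```python
import math

# Check that j1(x) = sin(x)/x^2 - cos(x)/x satisfies the spherical Bessel
# equation (5.12) for l = 1, and behaves like x/3 near 0.
def j1(x):
    return math.sin(x) / x**2 - math.cos(x) / x

def residual(x, h=1e-5):
    d1 = (j1(x + h) - j1(x - h)) / (2 * h)
    d2 = (j1(x + h) - 2 * j1(x) + j1(x - h)) / (h * h)
    return d2 + (2 / x) * d1 - (2 / x**2) * j1(x) + j1(x)

assert all(abs(residual(x)) < 1e-5 for x in (0.5, 1.0, 3.0))
assert abs(j1(1e-3) / 1e-3 - 1 / 3) < 1e-6   # j1(x) ~ x / 3 as x -> 0
```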
Large x Asymptotics: We've seen how to estimate the local behavior of solutions to second order linear ODEs at regular points and in the neighborhood of a regular singular point. We saw that near a regular point the solutions don't behave in any spectacular way, in the sense that nothing particular stands out. The solutions are analytic and, if we want, we can generate their Taylor expansions. Near a singular point, on the other hand, the solutions can be singular. We can find the precise form of the singular parts and, since they dominate near a singularity, from them we know what the solution has to look like as one closes in on the singularity. Thus, in a sense, singular points make things easier: there is some distinctive asymptotic behavior which can be extracted without fully solving the equation.

A similar thing often happens as x → ∞. There is often some explicit behavior which can be extracted without fully solving the equation. As a simple example consider
\[
\Big( \frac{d^2}{dx^2} + κ^2 + \frac{λ}{x^2} \Big) f(x) = 0. \tag{5.13}
\]
As x → ∞ one expects that the term \frac{λ}{x^2} might become negligible compared to κ^2. If so then one should have
\[
f(x) \approx c_1 e^{iκx} + c_2 e^{-iκx}
\]
for sufficiently large x. To see if this works let's try to generate a series
\[
f(x) = e^{\pm iκx} \Big( 1 + \frac{a_1}{x} + \frac{a_2}{x^2} + \ldots \Big).
\]
Substituting into the differential equation one finds
\[
0 = \Big( \frac{d^2}{dx^2} + κ^2 + \frac{λ}{x^2} \Big) e^{\pm iκx} \Big( 1 + \frac{a_1}{x} + \frac{a_2}{x^2} + \ldots \Big)
= e^{\pm iκx} \Big( \frac{d^2}{dx^2} \pm 2iκ \frac{d}{dx} + \frac{λ}{x^2} \Big) \Big( 1 + \frac{a_1}{x} + \frac{a_2}{x^2} + \ldots \Big)
\]
\[
= e^{\pm iκx} \Big( \big( \mp 2iκ a_1 + λ \big) \frac{1}{x^2} + \big( 2 a_1 \mp 4iκ a_2 + λ a_1 \big) \frac{1}{x^3} + \ldots \Big).
\]
It follows that
\[
a_1 = \pm \frac{λ}{2iκ}, \qquad a_2 = - \frac{(λ+2)λ}{8κ^2}
\]
and so on. Thus there are two linearly independent solutions, f_\pm, whose asymptotic forms as x → ∞ are
\[
f_\pm(x) = e^{\pm iκx} \Big( 1 \pm \frac{λ}{2iκ} \frac{1}{x} - \frac{(λ+2)λ}{8κ^2} \frac{1}{x^2} + \ldots \Big). \tag{5.14}
\]
The general term in this expansion is not hard to find. One has a one step recursion obtained from the coefficient of \frac{1}{x^{j+2}},
\[
\big( λ + j(j+1) \big) a_j \mp 2iκ (j+1) a_{j+1} = 0.
\]
By starting at j = 0 and working up one finds
\[
a_{j+1} = (\pm 1)^{j+1} \, \frac{\big( λ + j(j+1) \big) \big( λ + (j-1)j \big) \cdots (λ+2) λ}{(j+1)! \, (2iκ)^{j+1}}.
\]
Note that the even and odd coefficients have different behavior. For j even, j = 2k,
\[
a_{2k} = (-1)^k \, \frac{\big( λ + (2k-1)2k \big) \big( λ + (2k-2)(2k-1) \big) \cdots (λ+2) λ}{(2k)! \, (2κ)^{2k}}
\]
is real and independent of ±. For j odd, j = 2k+1,
\[
a_{2k+1} = \pm (-1)^{k+1} i \, \frac{\big( λ + 2k(2k+1) \big) \big( λ + (2k-1)2k \big) \cdots (λ+2) λ}{(2k+1)! \, (2κ)^{2k+1}}
\]
is pure imaginary and proportional to ±. In particular, f_+(x) = \overline{f_-(x)}, so that
\[
f_1 = \frac{f_+ + f_-}{2} \qquad \text{and} \qquad f_2 = \frac{f_+ - f_-}{2i}
\]
are two real linearly independent solutions with leading order large x asymptotic behavior f_1(x) ≈ \cos(κx) and f_2(x) ≈ \sin(κx). Note as well that f_1 is even in x while f_2 is odd.

Example 5.9:
Consider Bessel’s differential equation (5.11). First eliminate the
d dx
term
using the transformation f (x) = u(x)g(x) and choosing u so that the resulting equation for g has no first derivative term: d2 1 d m2 + − + 1 ug dx2 x dx x2 d2 u′ d u′′ 1 d 1 u′ m2 =u + 2 + + + − + 1 g dx2 u dx u x dx x u x2
0=
so that choosing 2 one has
Solving (5.15) one finds
1 u′ + =0 u x
d2 u′′ 1 u′ m2 + + − + 1 g = 0. dx2 u xu x2 1 u = const √ . x
An overall multiplicative constant is uninteresting so set const = 1. 69
(5.15)
Linear O.D.E.
Thus, solutions f to Bessel’s equation can be written 1 f (x) = √ g(x) x where
d2 m2 − − dx2 x2
1 4
+ 1 g = 0.
This equation is precisely of the form (5.13) with κ = 1 and λ = −m2 + 41 . Using (5.14) one finds that there are two linearly independent solutions to Bessel’s equation with asymptotic forms
±ix e√ . x
(±)
These are called Hankel functions, are denoted by Hm and are chosen to be r m2 − 14 1 (m2 − 94 )(m2 − 41 ) 1 2 ±i(x− mπ − π ) (±) 2 4 e − + ... . 1∓ Hm (x) = πx 2i x 8 x2 There are constants a± and b± for which
\[
H_m^{(\pm)} = a_\pm J_m + b_\pm Y_m.
\]
To find what these constants are requires knowledge about J_m, Y_m and H_m^{(\pm)} over the same range of x. Short distance asymptotics for J_m and Y_m and long distance asymptotics for H_m^{(\pm)} will not suffice. Note that both J_m and Y_m are real, so that one is proportional to \mathrm{Re}\, H_m^{(\pm)}, the other to \mathrm{Im}\, H_m^{(\pm)}. It turns out (in a homework problem) that
\[
H_m^{(\pm)} = J_m \pm i Y_m.
\]
In particular, for large x
\[
J_m(x) \approx \sqrt{\frac{2}{πx}} \cos\Big( x - \frac{mπ}{2} - \frac{π}{4} \Big)
\]
and
\[
Y_m(x) \approx \sqrt{\frac{2}{πx}} \sin\Big( x - \frac{mπ}{2} - \frac{π}{4} \Big).
\]
..................
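The large x form of J_0 can be checked numerically against the standard integral representation J_0(x) = \frac{1}{π} \int_0^π \cos(x \sin θ) \, dθ; a sketch using simple midpoint quadrature (standard library only):

```python
import math

# Compare J0, computed from its integral representation, with the leading
# asymptotic form sqrt(2/(pi x)) cos(x - pi/4) quoted above.
def J0(x, n=20000):
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

def J0_asym(x):
    return math.sqrt(2 / (math.pi * x)) * math.cos(x - math.pi / 4)

for x in (20.0, 50.0):
    # the first neglected term is O(x^{-3/2}), so the error is small but nonzero
    assert abs(J0(x) - J0_asym(x)) < 0.5 * x ** -1.5
```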
Example 5.10: The same style of large x analysis can be applied to the spherical Bessel equation (5.12). Again, setting f(x) = u(x) g(x) and choosing u so that the resulting equation for g has no first derivative term, one finds that
\[
\Big( \frac{d^2}{dx^2} + \big( 2 \frac{u'}{u} + \frac{2}{x} \big) \frac{d}{dx} + \frac{u''}{u} + \frac{2}{x} \frac{u'}{u} - \frac{l(l+1)}{x^2} + 1 \Big) g = 0,
\]
so that
\[
2 \frac{u'}{u} + \frac{2}{x} = 0.
\]
Solving for u one finds that one can choose
\[
u = \frac{1}{x}
\]
and then, with this choice, that
\[
\Big( \frac{d^2}{dx^2} - \frac{l(l+1)}{x^2} + 1 \Big) g = 0.
\]
It follows that two linearly independent solutions to the spherical Bessel equation can be found with asymptotic form \frac{e^{\pm ix}}{x}. These are called spherical Hankel functions and are given in standard form by
\[
h_l^{(\pm)}(x) = i^{\mp(l+1)} \, \frac{e^{\pm ix}}{x} \Big( 1 \mp \frac{l(l+1)}{2i} \frac{1}{x} + \ldots \Big).
\]
The spherical Bessel functions can actually be given in elementary form:
\[
j_l(x) = (-1)^l x^l \Big( \frac{1}{x} \frac{d}{dx} \Big)^l \frac{\sin x}{x} \tag{5.16}
\]
and
\[
y_l(x) = (-1)^{l+1} x^l \Big( \frac{1}{x} \frac{d}{dx} \Big)^l \frac{\cos x}{x}. \tag{5.17}
\]
It follows that
\[
j_0(x) = \frac{\sin x}{x}, \qquad
y_0(x) = -\frac{\cos x}{x},
\]
\[
j_1(x) = \frac{\sin x}{x^2} - \frac{\cos x}{x}, \qquad
y_1(x) = -\frac{\cos x}{x^2} - \frac{\sin x}{x}
\]
and so on. Note that the leading order large x asymptotics for the spherical Bessel functions can be obtained from (5.16) and (5.17) by letting the derivatives act on as few \frac{1}{x} terms as possible. It follows that for x large
\[
j_l(x) \approx \frac{(-1)^l}{x} \frac{d^l}{dx^l} \sin x
\qquad \text{and} \qquad
y_l(x) \approx \frac{(-1)^{l+1}}{x} \frac{d^l}{dx^l} \cos x,
\]
from which it follows that
\[
h_l^{(\pm)} = j_l \pm i y_l.
\]
..................
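For l = 0 the relation h_l^{(±)} = j_l ± i y_l is exact in closed form, since the 1/x series terminates; a quick sketch:

```python
import cmath

# For l = 0: h0^(+) = j0 + i*y0 = (sin x - i cos x)/x = -i e^{ix}/x,
# which matches i^{-(l+1)} e^{ix}/x with l = 0 exactly.
def h0_plus(x):
    return (cmath.sin(x) - 1j * cmath.cos(x)) / x

for x in (0.3, 2.0, 10.0):
    lead = (1j ** -1) * cmath.exp(1j * x) / x
    assert abs(h0_plus(x) - lead) < 1e-12
```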
Example 5.11: As a final example let's look at large positive x asymptotics for solutions to Airy's equation (5.6). Here, for large x, we can't expect behavior like e^{const \, x}. Instead try
\[
f(x) \approx x^p e^{κ x^α} \Big( 1 + \frac{a_1}{x^δ} + \frac{a_2}{x^{2δ}} + \ldots \Big). \tag{5.18}
\]
Substituting into the differential equation,
\[
\Big( \frac{d^2}{dx^2} - x \Big) x^p e^{κ x^α}
= x^{p-2} e^{κ x^α} \Big( p(p-1) + ακ(2p + α - 1) x^α + (ακ)^2 x^{2α} - x^3 \Big).
\]
The largest terms are the x^3 and x^{2α} terms. They must cancel each other. This implies that α = \frac{3}{2} and (ακ)^2 = 1, so that ακ = \pm 1 and then
\[
κ = \pm \frac{2}{3}.
\]
The next largest term is the x^α term, which must vanish. Thus 2p + α - 1 = 0, so that
\[
p = -\frac{1}{4}.
\]
The remaining term will be canceled by part of the a_1 term in the asymptotic expansion, and so on. Thus, there are two linearly independent solutions to Airy's equation which, as x → ∞, have asymptotic form x^{-1/4} e^{\pm \frac{2}{3} x^{3/2}}. The standard solutions are
\[
Ai(x) \approx \frac{1}{2} π^{-1/2} x^{-1/4} e^{-\frac{2}{3} x^{3/2}}
\]
and
\[
Bi(x) \approx π^{-1/2} x^{-1/4} e^{\frac{2}{3} x^{3/2}}.
\]
The large negative x behavior is somewhat different: as x → -∞ it can be shown that
\[
Ai(x) \approx π^{-1/2} |x|^{-1/4} \sin\Big( \frac{2}{3} |x|^{3/2} + \frac{π}{4} \Big)
\]
and
\[
Bi(x) \approx π^{-1/2} |x|^{-1/4} \cos\Big( \frac{2}{3} |x|^{3/2} + \frac{π}{4} \Big).
\]
..................
Example 5.12: Imagine a sphere of radius r = R + ε \sin(ωt) with ε ≪ R. Consider the pressure amplitude induced in free space. Recall linear lossless acoustics (Example 1.8),
\[
\frac{1}{c^2} \frac{\partial P'}{\partial t} + ρ_0 \nabla \cdot \mathbf{v}' = 0
\]
and
\[
ρ_0 \frac{\partial \mathbf{v}'}{\partial t} + \nabla P' = 0.
\]
Differentiating the first equation with respect to t and substituting in from the second one has
\[
\Big( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \Big) P' = 0.
\]
Assuming harmonic time dependence,
\[
P'(\mathbf{x}, t) = \mathrm{Re} \big( P_A(\mathbf{x}) e^{-iωt} \big)
\qquad \text{and} \qquad
\mathbf{v}'(\mathbf{x}, t) = \mathrm{Re} \big( \mathbf{v}_A(\mathbf{x}) e^{-iωt} \big),
\]
one has
\[
\Big( \nabla^2 + \frac{ω^2}{c^2} \Big) P_A = 0
\]
and
\[
-iωρ_0 \mathbf{v}_A = -\nabla P_A.
\]
Since ε ≪ R one can model the oscillating sphere as a velocity condition on the fixed surface r = R: the radial component of \mathbf{v}_A there is εω. This translates to a condition on P_A,
\[
- \frac{\partial P_A}{\partial r} \Big|_{r=R} = -iω^2 ρ_0 ε. \tag{5.19}
\]
P_A satisfies the Helmholtz equation and, since the boundary conditions are independent of θ and φ, can be chosen independent of θ and φ. It follows that
\[
P_A(r) = c_1 h_0^{(+)}(kr) + c_2 h_0^{(-)}(kr).
\]
Further, one expects an outward travelling wave at large r, so that c_2 = 0. Thus, absorbing constants into c_1,
\[
P_A(r) = c_1 \frac{e^{ikr}}{r}.
\]
To determine c_1 use (5.19), from which one finds
\[
c_1 \frac{ikR - 1}{R^2} e^{ikR} = iω^2 ρ_0 ε.
\]
Thus
\[
c_1 = iω^2 ρ_0 ε \, \frac{R^2 e^{-ikR}}{ikR - 1}
\]
so that
\[
P_A(r) = iω^2 ρ_0 ε \, \frac{R^2}{ikR - 1} \, \frac{e^{ik(r-R)}}{r}.
\]
..................
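A hedged numerical check of the closed form P_A(r) = iω^2 ρ_0 ε R^2 e^{ik(r-R)} / ((ikR - 1) r): its radial derivative at r = R should equal iω^2 ρ_0 ε, matching the boundary condition (5.19). The physical parameter values below are arbitrary.

```python
import cmath

# Verify the boundary condition dP_A/dr |_{r=R} = i w^2 rho0 eps
# by central differencing of the closed form.
w, c, rho0, eps, R = 2.0, 340.0, 1.2, 1e-3, 0.1
k = w / c

def PA(r):
    return (1j * w**2 * rho0 * eps * R**2 / (1j * k * R - 1)
            * cmath.exp(1j * k * (r - R)) / r)

h = 1e-6
dPA_dr = (PA(R + h) - PA(R - h)) / (2 * h)
assert abs(dPA_dr - 1j * w**2 * rho0 * eps) < 1e-6
```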
Example 5.13: Now consider an infinitely long cylinder whose radius is r = R + ε \sin(ωt), with ε ≪ R and kR ≪ 1. Then
\[
P_A(r, θ, z) = c_1 H_0^{(+)}(kr)
\]
with
\[
- \frac{\partial P_A}{\partial r} \Big|_{r=R} = -iω^2 ρ_0 ε.
\]
It follows that
\[
- c_1 k H_0^{(+)\prime}(kR) = -iω^2 ρ_0 ε.
\]
To evaluate H_0^{(+)\prime}(kR) we use the assumption that kR ≪ 1 and the small x asymptotics for Bessel's equation. One has
\[
H_0^{(+)\prime}(x) = \frac{d}{dx} \big( J_0(x) + i Y_0(x) \big)
\approx \frac{d}{dx} \Big( i \frac{2}{π} \ln x \Big)
= \frac{2i}{πx}.
\]
It follows that
\[
c_1 \approx \frac{π}{2} ω^2 ρ_0 ε R.
\]
..................
Boundary Value Problems for O.D.E.s

Consider the second order homogeneous ordinary differential equation (5.5),
\[
\Big( \frac{d^2}{dx^2} + p(x) \frac{d}{dx} + q(x) \Big) f(x) = 0,
\]
for x ∈ (a, b). A boundary value problem for this differential equation on (a, b) is the differential equation together with two conditions
\[
F\big( f(a), f'(a) \big) = 0 \qquad \text{and} \qquad G\big( f(b), f'(b) \big) = 0.
\]
The boundary conditions are said to be linear and/or homogeneous if F and G are. We will only be concerned with linear boundary conditions,
\[
α_a f(a) + β_a f'(a) = A, \qquad α_b f(b) + β_b f'(b) = B.
\]
The boundary conditions are homogeneous only if A = B = 0. To see how to solve such a boundary value problem let f_1 and f_2 be linearly independent solutions of (5.5). Then f = c_1 f_1 + c_2 f_2 for some constants c_1 and c_2. The boundary conditions become the following equation for c_1 and c_2:
\[
\begin{pmatrix}
α_a f_1(a) + β_a f_1'(a) & α_a f_2(a) + β_a f_2'(a) \\
α_b f_1(b) + β_b f_1'(b) & α_b f_2(b) + β_b f_2'(b)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
=
\begin{pmatrix} A \\ B \end{pmatrix}.
\]
If the boundary conditions are inhomogeneous, so that the vector \binom{A}{B} is non-zero, then this equation has a solution — that is, one can produce a unique pair of coefficients c_1 and c_2 — only if the matrix is invertible. If so, then one can explicitly produce a solution, f, satisfying the boundary conditions. Note that this solution is unique: multiples of f by a constant, cf, will not satisfy the boundary conditions unless the constant c = 1.
Example 6.1: For x ∈ (0, L) consider the one dimensional Helmholtz equation
\[
\Big( \frac{d^2}{dx^2} + k^2 \Big) f(x) = 0.
\]
Let f satisfy the boundary conditions f'(0) = A and f(L) = 0. Writing
\[
f(x) = c_1 \sin(kx) + c_2 \cos(kx)
\]
one has
\[
\begin{pmatrix}
k & 0 \\
\sin(kL) & \cos(kL)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
=
\begin{pmatrix} A \\ 0 \end{pmatrix}.
\]
It follows that if k \cos(kL) ≠ 0 then
\[
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \frac{1}{k \cos(kL)}
\begin{pmatrix}
\cos(kL) & 0 \\
-\sin(kL) & k
\end{pmatrix}
\begin{pmatrix} A \\ 0 \end{pmatrix},
\]
so that
\[
f(x) = \frac{A}{k} \sin(kx) - \frac{A \tan(kL)}{k} \cos(kx).
\]
If k = 0 then \frac{d^2 f}{dx^2} = 0, so that f(x) = Ax - AL. If \cos(kL) = 0 then \sin(kL) = \pm 1, so that the boundary conditions lead to k c_1 = A and \pm c_1 = 0. Thus, unless A = 0 there is no solution. If A = 0 then c_1 = 0 and c_2 is arbitrary.
that the boundary conditions lead to kc1 = A and ±c1 = 0. Thus, unless A = 0 there is no solution. If A = 0 then c1 = 0 and c2 is arbitrary. ..................
Example 6.2: Consider lossless acoustics in one spatial dimension. The steady state response at frequency ω reduces to the study of
\[
\Big( \frac{d^2}{dx^2} + \frac{ω^2}{c^2} \Big) P_A = 0
\]
and
\[
-iωρ_0 v_A = -\frac{dP_A}{dx}.
\]
Note that a relation between P_A and v_A is equivalent to a relation between P_A and P_A'. Often one models the action of a loudspeaker as a velocity specification at the position of the loudspeaker. Consider a cylindrical tube of length L with a loudspeaker mounted at x = 0. Let the end at x = L be closed and let the velocity of the face of the loudspeaker be u_0 \cos(ωt). Then v_A(L) = 0, since the end is closed, and v_A(0) = u_0. This translates to a boundary value problem for P_A:
\[
P_A'(0) = iωρ_0 u_0 \qquad \text{and} \qquad P_A'(L) = 0.
\]
Writing
\[
P_A(x) = c_1 \sin\Big( \frac{ω}{c} x \Big) + c_2 \cos\Big( \frac{ω}{c} x \Big)
\]
one has
\[
\begin{pmatrix}
\frac{ω}{c} & 0 \\
\frac{ω}{c} \cos\big( \frac{ω}{c} L \big) & -\frac{ω}{c} \sin\big( \frac{ω}{c} L \big)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
=
\begin{pmatrix} iωρ_0 u_0 \\ 0 \end{pmatrix}.
\]
Thus, if \sin\big( \frac{ω}{c} L \big) ≠ 0, then
\[
P_A(x) = iρ_0 c u_0 \Big( \sin\Big( \frac{ω}{c} x \Big) + \cot\Big( \frac{ω}{c} L \Big) \cos\Big( \frac{ω}{c} x \Big) \Big).
\]
For frequencies with \sin\big( \frac{ω}{c} L \big) = 0 there is no choice of c_1 and c_2 which satisfies the boundary conditions unless u_0 = 0, in which case c_1 = 0 and c_2 is arbitrary. At such a frequency the system is at resonance. Note that the form given for P_A above becomes infinite as ω approaches a resonant frequency. Physically this means that in this lossless model the response to a periodic excitation at a resonant frequency is infinite, in the sense that there is no steady state.
Thus, if sin( ωc L) 6= 0 then ω ω ω PA (x) = −iρ0 cu0 sin( x) + cot( L) cos( x) . c c c For frequencies with sin( ωc L) = 0 there is no choice of c1 and c2 which satisfy the boundary conditions unless u0 = 0 when c1 = 0 and c2 is arbitrary. At such a frequency the system is at resonance. Note that the form given for PA above becomes infinite as ω approaches a resonant frequency. Physically this means that in this lossless model the response to a periodic excitation at a resonant frequency is infinite in the sense that there is no steady state. ..................
Homogeneous boundary conditions are more subtle and cannot always be satisfied. For these one has
\[
α_a f(a) + β_a f'(a) = 0, \qquad α_b f(b) + β_b f'(b) = 0.
\]
Again let f_1 and f_2 be linearly independent solutions of (5.5), so that f = c_1 f_1 + c_2 f_2 for some constants c_1 and c_2. The boundary conditions become the following equation for c_1 and c_2:
\[
\begin{pmatrix}
α_a f_1(a) + β_a f_1'(a) & α_a f_2(a) + β_a f_2'(a) \\
α_b f_1(b) + β_b f_1'(b) & α_b f_2(b) + β_b f_2'(b)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
The only way for there to be non-zero solutions (that is, at least one of the c_j non-zero) is for
\[
\det
\begin{pmatrix}
α_a f_1(a) + β_a f_1'(a) & α_a f_2(a) + β_a f_2'(a) \\
α_b f_1(b) + β_b f_1'(b) & α_b f_2(b) + β_b f_2'(b)
\end{pmatrix}
= 0. \tag{6.1}
\]
As we saw in the last two examples this condition is rarely satisfied. If the differential equation (and thus the solutions) has a free parameter (such as k in the one-dimensional Helmholtz equation) then a given homogeneous boundary value problem can typically be satisfied only for a discrete set of values of the free parameter. Further, if f satisfies a homogeneous boundary value problem then cf does as well for any constant c. Finally, if (6.1) is satisfied for one pair of linearly independent solutions f_1 and f_2 then it is satisfied for all pairs of linearly independent solutions (changing the pair of solutions is just a change of basis). Consider the simple example given by the Helmholtz equation
\[
\Big( \frac{d^2}{dx^2} + k^2 \Big) f = 0
\]
with the boundary conditions f(0) = f(L) = 0. Writing f(x) = c_1 \sin(kx) + c_2 \cos(kx), the boundary conditions can be satisfied only if
\[
\det
\begin{pmatrix}
0 & 1 \\
\sin(kL) & \cos(kL)
\end{pmatrix}
= -\sin(kL) = 0.
\]
This is only possible for k = \frac{nπ}{L} where n is an integer. If so then c_2 = 0 and
\[
f(x) = c_1 \sin\Big( \frac{nπ}{L} x \Big)
\]
for any c_1.
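The eigenvalue condition -\sin(kL) = 0 can also be located numerically, which is how one proceeds when no closed form is available. A sketch (standard library only): scan for sign changes of the determinant and bisect, then compare with the exact eigenvalues k = nπ/L.

```python
import math

# Scan the eigenvalue condition det(k) = -sin(kL) for sign changes and
# refine each bracketed root by bisection.
L = 2.0
det = lambda k: -math.sin(k * L)

roots = []
ks = [i * 1e-3 for i in range(1, 10000)]     # scan k in (0, 10)
for k0, k1 in zip(ks, ks[1:]):
    if det(k0) * det(k1) < 0:                # bracketed a root; bisect it
        for _ in range(60):
            mid = 0.5 * (k0 + k1)
            if det(k0) * det(mid) <= 0:
                k1 = mid
            else:
                k0 = mid
        roots.append(0.5 * (k0 + k1))

expected = [n * math.pi / L for n in range(1, 7)]
assert all(abs(r - e) < 1e-9 for r, e in zip(roots, expected))
```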
Homogeneous boundary value problems lead to the notion of eigenfunctions for differential operators. Let
\[
L = \frac{d^2}{dx^2} + p(x) \frac{d}{dx} + q(x)
\]
be a second order differential operator and let
\[
α_a f(a) + β_a f'(a) = 0, \qquad α_b f(b) + β_b f'(b) = 0
\]
be homogeneous boundary conditions. The differential equation
\[
(L - λ) f(x) = 0, \tag{6.2}
\]
restricted to functions satisfying the boundary conditions, is called an eigenvalue problem for L corresponding to the stated boundary conditions. Note that there will be a solution f only for the certain values of λ for which (6.1) is satisfied. These values of λ are called eigenvalues of L and the corresponding solutions f are called eigenfunctions. The equation (6.1) appropriate to (6.2) is called an eigenvalue condition. An example is provided by the functions \sin\big( \frac{nπ}{L} x \big) for the one dimensional Helmholtz equation in (0, L) with f(0) = f(L) = 0. In that example λ = -k^2 and the eigenvalue condition is -\sin(kL) = 0.

Another type of homogeneous boundary value problem is given by insisting that the solution in (0, L) be a periodic function on all of R with period L. This is ensured if f(0) = f(L) and f'(0) = f'(L). These are called periodic boundary conditions. If f_1 and f_2 are linearly independent solutions then f = c_1 f_1 + c_2 f_2 and one has
\[
\begin{pmatrix}
f_1(0) - f_1(L) & f_2(0) - f_2(L) \\
f_1'(0) - f_1'(L) & f_2'(0) - f_2'(L)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
This system has a non-trivial solution only if
\[
\det
\begin{pmatrix}
f_1(0) - f_1(L) & f_2(0) - f_2(L) \\
f_1'(0) - f_1'(L) & f_2'(0) - f_2'(L)
\end{pmatrix}
= 0. \tag{6.3}
\]
Again, if the differential equation is put in the form (6.2) one has an eigenvalue problem for \frac{d^2}{dx^2} with eigenvalue condition (6.3).
For the Helmholtz equation one has f(x) = c_1 e^{ikx} + c_2 e^{-ikx}, so that periodic boundary conditions lead to the eigenvalue condition

\det \begin{pmatrix} 1 - e^{ikL} & 1 - e^{-ikL} \\ ik(1 - e^{ikL}) & -ik(1 - e^{-ikL}) \end{pmatrix} = -2ik(1 - e^{ikL})(1 - e^{-ikL}) = -4ik(1 - \cos kL) = 0.

Thus the eigenvalue condition becomes cos kL = 1, so that one has

k = \frac{2n\pi}{L}

for n ∈ {..., -2, -1, 0, 1, 2, ...}. The eigenfunctions are e^{2in\pi x/L}. Recall that the eigenvalues are λ = k^2. Thus, for this example there are two eigenfunctions, e^{\pm 2i|n|\pi x/L}, for each eigenvalue 4n^2\pi^2/L^2. Note that the general solution at a given eigenvalue is a superposition of the two eigenfunctions and thus still contains two free parameters.

Example 6.3: Consider the free vibration of a circular membrane of radius R. In the linear approximation the displacement of the membrane, u(x, t), satisfies the wave equation

\Big( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \Big) u(x, t) = 0.

If the membrane is clamped along its edge then there is the additional condition u(x, t) = 0 if ‖x‖ = R. Transforming to polar coordinates and assuming harmonic time dependence, u(x, t) = Re( u_A(r, θ) e^{-iωt} ), one has the Helmholtz equation

\Big( \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + \frac{\omega^2}{c^2} \Big) u_A(r, \theta) = 0
with the boundary condition u_A(R, θ) = 0. We will look for separable solutions u_A(r, θ) = X(r)Y(θ) with X(R) = 0, X(0) finite and Y(θ) periodic with period 2π. Substituting into the Helmholtz equation one has

X'' Y + \frac{1}{r} X' Y + \frac{1}{r^2} X Y'' + \frac{\omega^2}{c^2} X Y = 0.

We get a solution if

Y'' + \lambda_1 Y = 0,

X'' + \frac{1}{r} X' - \frac{\lambda_1}{r^2} X + \lambda_2 X = 0,

and

\frac{\omega^2}{c^2} = \lambda_2.

The equation for Y is a one dimensional Helmholtz equation with periodic boundary conditions. Thus λ_1 = m^2 for m = 0, 1, 2, ..., corresponding to solutions Y(θ) = a e^{imθ} + b e^{-imθ}. Given this choice for Y the equation for X leads to Bessel's equation and has solutions

X(r) = a J_m(\sqrt{\lambda_2}\, r) + b Y_m(\sqrt{\lambda_2}\, r).

The condition that X be finite at r = 0 implies that b = 0. The condition that X(R) = 0 implies that

J_m(\sqrt{\lambda_2}\, R) = 0.

It follows that \sqrt{\lambda_2}\, R must be a root of J_m. There are an infinite number of these roots. Let x_{m,j} be these roots, J_m(x_{m,j}) = 0, listed in increasing order. For x_{m,j} large enough they can be estimated using the large x asymptotic form for J_m. This leads to the approximation

\cos\Big( x_{m,j} - \frac{m\pi}{2} - \frac{\pi}{4} \Big) \approx 0
which gives x_{m,j} ≈ (n_m + \frac{1+m}{2} + \frac{1}{4})π for large j, where n_m is an integer. For small values of j the zeros must be approximated numerically. One finds

x_{0,1} = 2.405,  x_{0,2} = 5.520,  x_{0,3} = 8.654, ...
x_{1,1} = 3.832,  x_{1,2} = 7.016,  x_{1,3} = 10.173, ...
x_{2,1} = 5.136,  x_{2,2} = 8.417,  x_{2,3} = 11.620, ...

and so on. In general, setting k_{m,j} = \sqrt{\lambda_2}, one has

k_{m,j} = \frac{x_{m,j}}{R}

and solutions X(r) = J_m(k_{m,j} r). Thus we have solutions

u^A_{m,j}(r, \theta) = J_m(k_{m,j} r) \big( a e^{im\theta} + b e^{-im\theta} \big)

as long as

\frac{\omega^2}{c^2} = k_{m,j}^2.

The frequencies ω_{m,j} = c k_{m,j} are called the resonant frequencies of the system.

..................
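The tabulated roots x_{m,j}, and the quality of the large-j approximation, can be checked with scipy's Bessel-zero routine:

```python
import numpy as np
from scipy.special import jn_zeros

# First three positive roots of J_m for m = 0, 1, 2; compare with the table above.
for m in range(3):
    print(m, np.round(jn_zeros(m, 3), 3))

# McMahon's large-j form for m = 0 is x_{0,j} ~ (j - 1/4)*pi; the error shrinks with j.
j = np.arange(1, 4)
print(np.round(jn_zeros(0, 3) - (j - 0.25) * np.pi, 3))
```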
Example 6.4:
Now consider a related problem: the vibration of a circular membrane of radius R whose center is being vibrated harmonically by a small piston of radius r_0 attached to the center of the membrane. The piston moves through a distance ε cos(ωt). This leads to the condition u(x, t) = ε cos(ωt) for ‖x‖ = r_0 as well as u(x, t) = 0 if ‖x‖ = R. Proceeding as in the previous example one has

u(x, t) = \mathrm{Re}\big( u_A(r, \theta) e^{-i\omega t} \big)
with u_A(r_0, θ) = ε and u_A(R, θ) = 0. Since ε ≠ 0 the solution must be independent of θ. It follows that

u_A(r, \theta) = a J_0\Big(\frac{\omega}{c} r\Big) + b Y_0\Big(\frac{\omega}{c} r\Big).

The boundary conditions become

\begin{pmatrix} J_0(\frac{\omega}{c} r_0) & Y_0(\frac{\omega}{c} r_0) \\ J_0(\frac{\omega}{c} R) & Y_0(\frac{\omega}{c} R) \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \varepsilon \\ 0 \end{pmatrix}.

This system can be solved only if

J_0\Big(\frac{\omega}{c} r_0\Big) Y_0\Big(\frac{\omega}{c} R\Big) - J_0\Big(\frac{\omega}{c} R\Big) Y_0\Big(\frac{\omega}{c} r_0\Big) \neq 0.

Frequencies for which this determinant is 0 are precisely the resonant frequencies of the system. If the system is not at resonance one has

\begin{pmatrix} a \\ b \end{pmatrix} = \frac{1}{J_0(\frac{\omega}{c} r_0) Y_0(\frac{\omega}{c} R) - J_0(\frac{\omega}{c} R) Y_0(\frac{\omega}{c} r_0)} \begin{pmatrix} Y_0(\frac{\omega}{c} R) & -Y_0(\frac{\omega}{c} r_0) \\ -J_0(\frac{\omega}{c} R) & J_0(\frac{\omega}{c} r_0) \end{pmatrix} \begin{pmatrix} \varepsilon \\ 0 \end{pmatrix}

leading to

u(x, t) = \varepsilon \, \mathrm{Re}\Big( \frac{Y_0(\frac{\omega}{c} R) J_0(\frac{\omega}{c} r) - J_0(\frac{\omega}{c} R) Y_0(\frac{\omega}{c} r)}{J_0(\frac{\omega}{c} r_0) Y_0(\frac{\omega}{c} R) - J_0(\frac{\omega}{c} R) Y_0(\frac{\omega}{c} r_0)} \, e^{-i\omega t} \Big).

If ω is a resonant frequency then one of the assumptions is wrong: there are no solutions whose time dependence is given by e^{-iωt}. In fact, a lossless resonant system has no steady state: the amplitude of vibration in such a system grows with time.

..................
Eigenfunction Expansions I: finite domains

Recall that given an n × n matrix A the equation

(A - \lambda) v = 0

has solutions only for those values of λ for which det(A - λ) = 0. Recall further that if A is self-adjoint then it has n orthogonal eigenvectors which can be normalized and used as an orthonormal basis. Note that in general w^* A v = (A^* w)^* v. Thus, if ⟨w, v⟩ = w^* v is the standard inner product on C^n then A is self-adjoint if and only if ⟨w, Av⟩ = ⟨Aw, v⟩.

Now consider a second order differential operator

L(\lambda) = \frac{d}{dx} P(x) \frac{d}{dx} + Q(x) - \lambda R(x)

on an interval (a, b) with P, Q and R real and P and R positive on (a, b). Consider functions f, g : (a, b) → C and introduce the inner product

⟨f, g⟩ = \int_a^b \overline{f(x)}\, g(x) \, dx.

Consider

⟨f, \Big( \frac{d}{dx} P(x) \frac{d}{dx} + Q(x) - \lambda R(x) \Big) g⟩

for functions f and g which are sufficiently well behaved. Integrating by parts twice one has

\int_a^b \bar{f}(x) \frac{d}{dx}\Big( P(x) \frac{dg}{dx} \Big) dx = \big[ \bar{f} P g' \big]_a^b - \int_a^b P \frac{d\bar{f}}{dx} \frac{dg}{dx} \, dx = \big[ \bar{f} P g' \big]_a^b - \big[ \bar{f}' P g \big]_a^b + \int_a^b \frac{d}{dx}\Big( P \frac{d\bar{f}}{dx} \Big) g \, dx

so that

⟨f, \Big( \frac{d}{dx} P \frac{d}{dx} + Q - \lambda R \Big) g⟩ = \big[ P (\bar{f} g' - \bar{f}' g) \big]_a^b + ⟨\Big( \frac{d}{dx} P \frac{d}{dx} + Q - \lambda R \Big) f, g⟩.
Thus if

\big[ P (\bar{f} g' - \bar{f}' g) \big]_a^b = 0

then L is self-adjoint in the sense that

⟨f, Lg⟩ = ⟨Lf, g⟩.

This is where the boundary conditions come in. If one restricts attention to a set of functions which satisfy some conditions at a and b which make [P(\bar{f} g' - \bar{f}' g)]_a^b = 0 then L is self-adjoint on this restricted set. The most common examples of such conditions are

i) The Sturm-Liouville problem: any linear homogeneous two point boundary condition

\alpha_a f(a) + \beta_a f'(a) = 0
\alpha_b f(b) + \beta_b f'(b) = 0

with α_a, β_a, α_b and β_b all real.

ii) Periodic boundary conditions with P = 1: f(a) = f(b) and f'(a) = f'(b).

The essential facts are

i) The equation L(λ)f = 0 has non-zero solutions only for λ in some discrete, infinite set {λ_1, λ_2, ...}. The λ_j are called the eigenvalues of L. The set of all the eigenvalues is called the spectrum of L. The solutions f_j to L(λ_j)f_j = 0 which satisfy the boundary conditions are called the eigenfunctions of L. They are of finite norm. For the Sturm-Liouville problem there is only one eigenfunction for each eigenvalue λ_j. For periodic boundary conditions there can be two eigenfunctions per eigenvalue.
ii) The f_j form a basis for the set of all functions on (a, b) in the sense that given any g there is a sequence of coefficients c_j with

g(x) = \sum_j c_j f_j(x).

This is called an eigenfunction expansion of g.

iii) The f_j are an orthogonal basis in the sense that

\int_a^b \overline{f_j(x)}\, f_k(x) \, R(x) \, dx = \delta_{j,k} \int_a^b |f_j(x)|^2 R(x) \, dx.

This gives a simple formula for the c_j:

c_j = \frac{1}{\int_a^b |f_j(x)|^2 R(x) \, dx} \int_a^b \overline{f_j(x)}\, g(x) \, R(x) \, dx.

Example 7.1: Periodic boundary conditions on the one dimensional Helmholtz equation lead to the Fourier series. Consider

\Big( \frac{d^2}{dx^2} - \lambda \Big) f(x) = 0

on (a, b) with the boundary conditions f(a) = f(b) and f'(a) = f'(b). Then writing f(x) = c_1 e^{\sqrt{\lambda} x} + c_2 e^{-\sqrt{\lambda} x} one has

\begin{pmatrix} e^{\sqrt{\lambda} a} - e^{\sqrt{\lambda} b} & e^{-\sqrt{\lambda} a} - e^{-\sqrt{\lambda} b} \\ \sqrt{\lambda}\big( e^{\sqrt{\lambda} a} - e^{\sqrt{\lambda} b} \big) & -\sqrt{\lambda}\big( e^{-\sqrt{\lambda} a} - e^{-\sqrt{\lambda} b} \big) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

This leads to the eigenvalue condition

\sqrt{\lambda}\, \big( 1 - \cosh \sqrt{\lambda}\,(b - a) \big) = 0.

Thus the eigenvalues are

\lambda_j = -\Big( \frac{2j\pi}{b - a} \Big)^2.
For j = 0 there is one eigenfunction, chosen normalized,

f_0(x) = \frac{1}{\sqrt{b - a}}.

For each j > 0 there are two eigenfunctions

f_{\pm j}(x) = \frac{1}{\sqrt{b - a}} e^{\pm \frac{2\pi i j}{b - a} x}.

The eigenfunction expansion of an arbitrary function g on (a, b) with respect to these eigenfunctions leads to the Fourier series for g,

g(x) = \sum_{j = -\infty}^{\infty} \frac{⟨f_j, g⟩}{\sqrt{b - a}} \, e^{\frac{2\pi i j}{b - a} x}.

..................
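The expansion can be sketched numerically: take a smooth periodic test function on (a, b) (the choice below is an arbitrary assumption), compute the coefficients ⟨f_k, g⟩ by quadrature, and check that the partial sums converge to g:

```python
import numpy as np

# Arbitrary smooth periodic test function on (a, b).
a, b = 0.0, 2.0
x = np.linspace(a, b, 4000, endpoint=False)
dx = (b - a) / x.size
g = np.exp(np.cos(2 * np.pi * x / (b - a)))

def f(k):
    # Normalized eigenfunction exp(2*pi*i*k*x/(b-a)) / sqrt(b-a).
    return np.exp(2 * np.pi * 1j * k * x / (b - a)) / np.sqrt(b - a)

# Partial sum of the eigenfunction expansion sum_k <f_k, g> f_k.
partial = sum((np.conj(f(k)) * g).sum() * dx * f(k) for k in range(-10, 11))
err = np.max(np.abs(partial.real - g))
print(err)   # tiny: the coefficients decay rapidly for smooth g
```

Because g is smooth, only a few coefficients carry any weight, which is why |k| ≤ 10 already reproduces g to high accuracy.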
Example 7.2: Consider a clamped vibrating string described by the wave equation

\Big( \frac{\partial^2}{\partial x^2} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \Big) u(t, x) = 0

where u(t, 0) = 0 = u(t, L). Impose the initial conditions u(0, x) = g(x) and \frac{\partial u}{\partial t}\big|_{t=0} = h(x) for some functions g and h with g(0) = 0 = g(L) and h(0) = 0 = h(L).

First let's develop the eigenfunction expansion for d^2/dx^2 with the boundary conditions f(0) = 0 = f(L) (these are referred to as Dirichlet boundary conditions). Again, we need to consider

\Big( \frac{d^2}{dx^2} - \lambda \Big) f(x) = 0

on (0, L). We have already seen that the eigenvalues for this problem are

\lambda_j = -\Big( \frac{j\pi}{L} \Big)^2

corresponding to the (normalized) eigenfunctions

f_j(x) = \sqrt{\frac{2}{L}} \sin \frac{j\pi}{L} x.

Here j ∈ {1, 2, 3, ...}. Then one can write

u(t, x) = \sqrt{\frac{2}{L}} \sum_{j=1}^{\infty} c_j(t) \sin \frac{j\pi}{L} x
where

c_j(t) = \sqrt{\frac{2}{L}} \int_0^L \sin\Big( \frac{j\pi}{L} x \Big) u(t, x) \, dx.

Substituting this form for u into the wave equation one finds

\sum_{j=1}^{\infty} \Big( \big( \frac{j\pi}{L} \big)^2 + \frac{1}{c^2} \frac{d^2}{dt^2} \Big) c_j(t) \sin \frac{j\pi}{L} x = 0.

Taking the inner product of this equation with f_k(x) = \sqrt{2/L}\, \sin(k\pi x/L) yields the following equation for c_k,

\Big( \big( \frac{k\pi}{L} \big)^2 + \frac{1}{c^2} \frac{d^2}{dt^2} \Big) c_k(t) = 0.

It follows that

c_k(t) = a_k \sin\Big( \frac{kc\pi}{L} t \Big) + b_k \cos\Big( \frac{kc\pi}{L} t \Big).

To determine a_k and b_k one uses the initial conditions. One has

\sqrt{\frac{2}{L}} \sum_{j=1}^{\infty} c_j(0) \sin \frac{j\pi}{L} x = g(x)

and

\sqrt{\frac{2}{L}} \sum_{j=1}^{\infty} c_j'(0) \sin \frac{j\pi}{L} x = h(x).

Again, taking the inner product of these equations with f_k one obtains initial conditions for the c_k, c_k(0) = ⟨f_k, g⟩ and c_k'(0) = ⟨f_k, h⟩. It follows that

c_k(t) = \frac{L}{kc\pi} ⟨f_k, h⟩ \sin\Big( \frac{kc\pi}{L} t \Big) + ⟨f_k, g⟩ \cos\Big( \frac{kc\pi}{L} t \Big)

so that

u(t, x) = \sqrt{\frac{2}{L}} \sum_{j=1}^{\infty} \Big( \frac{L}{jc\pi} ⟨f_j, h⟩ \sin\big( \frac{jc\pi}{L} t \big) + ⟨f_j, g⟩ \cos\big( \frac{jc\pi}{L} t \big) \Big) \sin \frac{j\pi}{L} x.
Now consider the case in which the string is plucked at the center, x = L/2, by being pulled a distance ε from equilibrium and then released. One then has

g(x) = \frac{2\varepsilon}{L} \begin{cases} x & \text{if } 0 < x < L/2 \\ L - x & \text{if } L/2 < x < L \end{cases}

and h(x) = 0. One finds that

⟨f_j, g⟩ = \sqrt{\frac{2}{L}}\, \frac{4L\varepsilon}{j^2 \pi^2} \sin \frac{j\pi}{2}

(only odd j contribute), so that

u(t, x) = \frac{8\varepsilon}{\pi^2} \sum_{j=0}^{\infty} \frac{(-1)^j}{(2j+1)^2} \cos\Big( \frac{(2j+1)c\pi}{L} t \Big) \sin\Big( \frac{(2j+1)\pi}{L} x \Big).
..................
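The plucked-string series can be checked numerically: at t = 0 the partial sums should reproduce the triangular initial shape, with peak ε at x = L/2. A sketch, where L, c and ε are arbitrary illustrative values and the coefficient 8ε/π² with alternating signs is the standard closed form for the central pluck:

```python
import numpy as np

# Arbitrary illustrative parameters.
L, c, eps = 1.0, 1.0, 0.01
x = np.linspace(0.0, L, 501)

def u(t, nterms=2000):
    j = np.arange(nterms)
    n = 2 * j + 1                                 # only odd modes are excited
    amp = (8 * eps / np.pi**2) * (-1.0) ** j / n**2
    return (amp[:, None]
            * np.cos(n[:, None] * c * np.pi * t / L)
            * np.sin(n[:, None] * np.pi * x[None, :] / L)).sum(axis=0)

# Triangular initial displacement with peak eps at x = L/2.
g = np.where(x < L / 2, 2 * eps * x / L, 2 * eps * (L - x) / L)
print(np.max(np.abs(u(0.0) - g)))   # small truncation error
```

The 1/(2j+1)^2 decay of the coefficients is slow, reflecting the kink in g at the pluck point; smooth initial data would converge far faster.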
Example 7.3: Consider the time dependent Schrödinger equation for a particle of mass m confined to a one dimensional interval [a, b] with rigid walls

-i\hbar \frac{\partial}{\partial t} \psi(x, t) = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \psi(x, t)

with ψ(a, t) = 0 = ψ(b, t). One can solve this problem by expanding ψ(x, t) with respect to the eigenfunctions of \partial^2/\partial x^2 with Dirichlet boundary conditions at a and b. One has

\psi(x, t) = \sqrt{\frac{2}{b - a}} \sum_{j=1}^{\infty} c_j(t) \sin \frac{j\pi x}{b - a}

where the coefficients c_j(t) satisfy the first order equation

-i\hbar \frac{dc_j}{dt} = \frac{\hbar^2 j^2 \pi^2}{2m (b - a)^2} c_j

with the initial condition

c_j(0) = \sqrt{\frac{2}{b - a}} \int_a^b \psi(x, 0) \sin \frac{j\pi x}{b - a} \, dx.

It follows that

c_j(t) = c_j(0)\, e^{i \frac{j^2 \pi^2 \hbar}{2m(b - a)^2} t}

so that

\psi(x, t) = \sqrt{\frac{2}{b - a}} \sum_{j=1}^{\infty} c_j(0)\, e^{i \frac{j^2 \pi^2 \hbar}{2m(b - a)^2} t} \sin \frac{j\pi x}{b - a}.

If one denotes the eigenfunctions by

\psi_j(x) = \sqrt{\frac{2}{b - a}} \sin \frac{j\pi x}{b - a}

then one has

\psi(x, t) = \sum_{j=1}^{\infty} e^{i \frac{j^2 \pi^2 \hbar}{2m(b - a)^2} t} \, \psi_j(x)\, ⟨\psi_j, \psi(\cdot, 0)⟩ = e^{-\frac{i\hbar t}{2m} \frac{\partial^2}{\partial x^2}} \psi(x, 0)

with the operator e^{-\frac{i\hbar t}{2m} \frac{\partial^2}{\partial x^2}} defined by the eigenfunction expansion

e^{-\frac{i\hbar t}{2m} \frac{\partial^2}{\partial x^2}} \phi = \sum_{j=1}^{\infty} e^{i \frac{j^2 \pi^2 \hbar}{2m(b - a)^2} t} \, \psi_j \, ⟨\psi_j, \phi⟩.

..................
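Since each coefficient only picks up a phase, this expansion gives a manifestly unitary evolution. A quick numerical check, assuming units with ℏ = m = 1 and an arbitrary two-mode initial state:

```python
import numpy as np

hbar, m, a, b = 1.0, 1.0, 0.0, 1.0
x = np.linspace(a, b, 2001)
dx = x[1] - x[0]

def mode(j):
    return np.sqrt(2.0 / (b - a)) * np.sin(j * np.pi * (x - a) / (b - a))

coeffs = {1: 0.6, 2: 0.8}    # |c_1|^2 + |c_2|^2 = 1

def psi(t):
    # Each mode evolves by the phase exp(i j^2 pi^2 hbar t / (2 m (b-a)^2)).
    return sum(cj * np.exp(1j * j**2 * np.pi**2 * hbar * t / (2 * m * (b - a)**2)) * mode(j)
               for j, cj in coeffs.items())

norms = [(np.abs(psi(t))**2).sum() * dx for t in (0.0, 0.3, 1.7)]
print(norms)   # stays ~1 for all t: the evolution is unitary
```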
Now consider the differential equation

\Big( \frac{d^2}{dx^2} + p \frac{d}{dx} + q - \lambda \Big) f(x) = 0          (7.1)

corresponding to a second order O.D.E. in standard form. To relate this equation to an eigenvalue equation for a self-adjoint operator, functions P, Q and R must be found for which

\frac{d^2}{dx^2} + p \frac{d}{dx} + q - \lambda = \frac{1}{P} \Big( \frac{d}{dx} P \frac{d}{dx} + Q - \lambda R \Big).

This equation gives

p = \frac{P'}{P}, \qquad q = \frac{Q}{P}, \qquad 1 = \frac{R}{P}.

It follows that

P = e^{\int p \, dx}
so that (7.1) becomes

\Big( \frac{d}{dx} e^{\int p \, dx} \frac{d}{dx} + e^{\int p \, dx} q - \lambda e^{\int p \, dx} \Big) f = 0.          (7.2)

Given either Sturm-Liouville or P-periodic boundary conditions this now becomes a self-adjoint eigenvalue problem whose eigenfunctions satisfy the orthogonality condition

\int_a^b \overline{f_j(x)}\, f_k(x) \, e^{\int p \, dx} \, dx = \delta_{j,k} \int_a^b |f_j(x)|^2 \, e^{\int p \, dx} \, dx.          (7.3)
Sturm-Liouville problems can also be posed for differential equations which have a regular singular point at an endpoint. Consider the differential operator

L = \frac{d^2}{dx^2} + p \frac{d}{dx} + q

on (a, b). Let L have a regular singular point at a. One's chief concern is that L should be related to a (possibly) self-adjoint operator of the form given in (7.2),

\tilde{L} = \frac{d}{dx} e^{\int p \, dx} \frac{d}{dx} + e^{\int p \, dx} q.

One needs ⟨f, \tilde{L} g⟩ = ⟨\tilde{L} f, g⟩. There is a difficulty with this expression independent of boundary conditions: if L has a regular singular point at a then ⟨f, \tilde{L} g⟩ might well not exist unless f and g are chosen from some restricted set. More explicitly, p is allowed to have a simple pole at x = a,

p(x) \sim \frac{p_{-1}}{x - a}

as x ↓ a, and q is allowed to have a second order pole at a,

q(x) \sim \frac{q_{-2}}{(x - a)^2}

as x ↓ a. Thus, for x close to a

e^{\int p \, dx} \sim (x - a)^{p_{-1}}

and then

e^{\int p \, dx} q \sim q_{-2} (x - a)^{p_{-1} - 2}.

Thus, depending on p_{-1}, it is possible to have ⟨f, e^{\int p \, dx} q \, g⟩ = ∞ unless f and g go to 0 rapidly enough as x ↓ a.

There are several ways out of this dilemma. For our purposes the most straightforward is to restrict the set of functions we deal with to those whose behavior near x = a is
that given by the indicial equation. Let r_1 and r_2 be the roots of the indicial equation for L. Conditions of the form

\lim_{x \downarrow a} \big( f(x) - c_1 (x - a)^{r_1} - c_2 (x - a)^{r_2} \big) = 0

if r_1 ≠ r_2 and

\lim_{x \downarrow a} \big( f(x) - c_1 (x - a)^{r} - c_2 (x - a)^{r} \ln|x - a| \big) = 0

if r_1 = r_2 = r can be imposed on both f and g. If one insists that α_a c_1 + β_a c_2 = 0 for real values α_a and β_a, then both \tilde{L} f and \tilde{L} g go to 0 as x ↓ a and, if there is some real, homogeneous boundary condition at b,

\alpha_b f(b) + \beta_b f'(b) = 0,

then one can easily check that \big[ \bar{f} g' - \bar{f}' g \big]_a^b = 0. Thus \tilde{L} is self-adjoint and its eigenfunctions can be made into an orthonormal basis. (Note that if a is a regular point then r_1 = 0 and r_2 = 1 and the classical Sturm-Liouville theory is recovered.)

Example 7.4:
Recall Example 6.3, the free vibrations of a clamped circular membrane.
One must solve the two dimensional wave equation in polar coordinates,

\Big( \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \Big) u(r, \theta, t) = 0

with the boundary condition u(R, θ, t) = 0 and initial conditions u(r, θ, 0) = g(r, θ) and \frac{\partial u}{\partial t}\big|_{t=0} = h(r, θ). Rather than proceeding as in Example 6.3, where some solutions were found (although it was not made clear to what extent all solutions were found), let's try to find an eigenfunction expansion for

\frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}.

The normalized eigenfunctions of d^2/d\theta^2 with periodic boundary conditions are \frac{1}{\sqrt{2\pi}} e^{im\theta} for m ∈ {0, ±1, ±2, ...}, corresponding to the eigenvalue -m^2. It follows that any function of θ, in particular u(r, θ, t), can be expanded in an eigenfunction expansion in the functions \frac{1}{\sqrt{2\pi}} e^{im\theta}:

u(r, \theta, t) = \frac{1}{\sqrt{2\pi}} \sum_{m=-\infty}^{\infty} u_m(r, t)\, e^{im\theta}

with

u_m(r, t) = \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} u(r, \theta, t)\, e^{-im\theta} \, d\theta.
Substituting into the wave equation, multiplying by \frac{1}{\sqrt{2\pi}} e^{-im\theta} and integrating over θ one finds that u_m(r, t) satisfies the equation

\Big( \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} - \frac{m^2}{r^2} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \Big) u_m(r, t) = 0

with the conditions u_m(R, t) = 0 and u_m(0, t) finite. The operator

L_m = \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} - \frac{m^2}{r^2}

has a regular singular point at r = 0; however, the condition that u_m be finite at r = 0 means we can restrict to functions that behave like r^m as r ↓ 0 and are 0 at r = R. On this restricted set of functions L_m is self-adjoint. The eigenfunctions of L_m were found in Example 6.3. Using the notation developed there the eigenvalues are

-k_{m,j}^2 = -\frac{\omega_{m,j}^2}{c^2}

corresponding to normalized eigenfunctions

f_{m,j}(r) = N_{m,j} J_{|m|}(k_{|m|,j} r)

where the normalization factor, N_{m,j}, is given by (7.3),

N_{m,j} = \frac{1}{\sqrt{\int_0^R J_{|m|}(k_{|m|,j} r)^2 \, r \, dr}} = \frac{x_{|m|,j}}{R \sqrt{\int_0^{x_{|m|,j}} J_{|m|}(z)^2 \, z \, dz}}

and the orthogonality relation by

\int_0^R f_{m,j}(r) f_{m,k}(r) \, r \, dr = \delta_{jk}.
The integral in the normalization factor N_{m,j} can be computed either numerically, or using the identity

\int_0^a J_m(z)^2 \, z \, dz = \frac{a^2}{2} \begin{cases} J_m(a)^2 - J_{m-1}(a) J_{m+1}(a) & \text{if } m > 0 \\ J_0(a)^2 + J_1(a)^2 & \text{if } m = 0 \end{cases}          (7.4)
Note that the functions

u^A_{m,j}(r, \theta) = \frac{1}{\sqrt{2\pi}} N_{m,j} J_{|m|}(k_{|m|,j} r)\, e^{im\theta}

are the sought after eigenfunctions of \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}. They satisfy the orthogonality condition

\int_0^R \int_0^{2\pi} \overline{u^A_{m,j}(r, \theta)}\, u^A_{n,k}(r, \theta) \, r \, d\theta \, dr = \delta_{nm} \delta_{jk}.
Setting

c_{m,j}(t) = \int_0^R \int_0^{2\pi} \overline{u^A_{m,j}(r, \theta)}\, u(r, \theta, t) \, r \, d\theta \, dr

one has

u(r, \theta, t) = \sum_{m=-\infty}^{\infty} \sum_{j=1}^{\infty} c_{m,j}(t)\, u^A_{m,j}(r, \theta).

The coefficients c_{m,j}(t) satisfy the equation

\Big( \frac{d^2}{dt^2} + \omega_{m,j}^2 \Big) c_{m,j}(t) = 0

with the initial conditions

c_{m,j}(0) = \int_0^R \int_0^{2\pi} \overline{u^A_{m,j}(r, \theta)}\, g(r, \theta) \, r \, d\theta \, dr

and

c_{m,j}'(0) = \int_0^R \int_0^{2\pi} \overline{u^A_{m,j}(r, \theta)}\, h(r, \theta) \, r \, d\theta \, dr.

It follows that

c_{m,j}(t) = c_{m,j}(0) \cos(\omega_{m,j} t) + \frac{c_{m,j}'(0)}{\omega_{m,j}} \sin(\omega_{m,j} t)

so that

u(r, \theta, t) = \sum_{m=-\infty}^{\infty} \sum_{j=1}^{\infty} \Big( c_{m,j}(0) \cos(\omega_{m,j} t) + \frac{c_{m,j}'(0)}{\omega_{m,j}} \sin(\omega_{m,j} t) \Big) u^A_{m,j}(r, \theta).

..................
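The resonant frequencies ω_{m,j} = c x_{m,j}/R make the drum inharmonic: the overtones are not integer multiples of the fundamental. A sketch, with placeholder values assumed for R and c:

```python
import numpy as np
from scipy.special import jn_zeros

R, c = 0.1, 343.0          # placeholder radius (m) and wave speed (m/s)
x01 = jn_zeros(0, 1)[0]    # fundamental root, about 2.405

# Frequencies of the next few modes relative to the (0,1) fundamental.
ratios = {
    (1, 1): jn_zeros(1, 1)[0] / x01,
    (2, 1): jn_zeros(2, 1)[0] / x01,
    (0, 2): jn_zeros(0, 2)[1] / x01,
}
f_fund = c * x01 / (2 * np.pi * R)   # fundamental frequency in Hz
print(ratios, f_fund)
```

The irrational frequency ratios (about 1.59, 2.14, 2.30) are why a drum has no definite pitch in the way a string does.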
In the last example the eigenvalues and eigenfunctions of the Laplacian in a disk were found. The geometry, that of a disk, is special because a coordinate system can be found, polar coordinates, in which the Laplacian is separable so that the problem reduces to solving O.D.E.'s. In more complicated geometries, in which the Laplacian is not separable, the eigenvalue problem can still be posed. The discussion here will be restricted to two dimensions although it is readily extended to arbitrary dimension.

Let A be a region in R^2. There are some restrictions that need to be put on A, but they are difficult to state mathematically. Loosely put, attention will be restricted to regions A in which Gauss' law can be applied. Note that

\int_A \big( \bar{\psi} \nabla^2 \phi - \phi \nabla^2 \bar{\psi} \big) \, dA = \int_A \nabla \cdot \big( \bar{\psi} \nabla \phi - \phi \nabla \bar{\psi} \big) \, dA = \int_{\partial A} \big( \bar{\psi} \nabla \phi - \phi \nabla \bar{\psi} \big) \cdot n \, dl.
Thus, if there are boundary conditions imposed on ψ and φ on ∂A so that

\int_{\partial A} \big( \bar{\psi} \nabla \phi - \phi \nabla \bar{\psi} \big) \cdot n \, dl = 0

then ∇^2 is self-adjoint in A with respect to the inner product

⟨\psi, \phi⟩ = \int_A \bar{\psi}\, \phi \, dA

in the sense that ⟨ψ, ∇^2 φ⟩ = ⟨∇^2 ψ, φ⟩. A large class of self-adjoint boundary conditions for ∇^2 is given by

\big( \alpha \phi + \beta\, n \cdot \nabla \phi \big) \big|_{\partial A} = 0

for real constants α and β. The two most important special cases are Dirichlet boundary conditions, φ|_{∂A} = 0, and Neumann boundary conditions, n · ∇φ|_{∂A} = 0.

Given self-adjoint boundary conditions the eigenfunctions of ∇^2 are complete and can be chosen orthonormal. In the case of either Dirichlet or Neumann boundary conditions more can be said:

⟨\phi, \nabla^2 \phi⟩ = -\int_A \nabla \bar{\phi} \cdot \nabla \phi \, dA + \int_{\partial A} \bar{\phi}\, \nabla \phi \cdot n \, dl = -\|\nabla \phi\|^2 \leq 0.

It follows that if λ is an eigenvalue of ∇^2 with eigenvector φ then

\lambda = \frac{⟨\phi, \nabla^2 \phi⟩}{⟨\phi, \phi⟩} \leq 0.

Let λ_j be a listing of the eigenvalues of -∇^2. In more than one dimension it is possible for a single eigenvalue to have more than one eigenfunction, so that some of the λ_j might repeat. Such an eigenvalue is called degenerate and the span of its eigenfunctions is called its eigenspace. The dimension of the eigenspace is called the multiplicity of the eigenvalue. In general the λ_j form an unbounded increasing sequence, λ_j ≤ λ_{j+1} and λ_j → ∞ as j → ∞. Finally, note that for Neumann boundary conditions 0 is always an eigenvalue, corresponding to the constant eigenfunction \frac{1}{\sqrt{|A|}}, where |A| is the area of A. For Dirichlet boundary conditions 0 is never an eigenvalue of the Laplacian.
Example 7.5: Consider a free particle of mass m confined to a two dimensional region A. The quantum-mechanical Hamiltonian for this situation is

-\frac{\hbar^2}{2m} \nabla^2

with Dirichlet boundary conditions on ∂A. The ground state eigenvalue is greater than zero. Thus the particle is never at rest.

..................
Example 7.6:
Consider sound propagation in a long straight pipe with cross section A.
Let the axis of the pipe be the z axis and let

\nabla_\perp = \begin{pmatrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \end{pmatrix}.

Then the linear pressure satisfies the wave equation

\Big( \frac{\partial^2}{\partial z^2} + \nabla_\perp^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \Big) P(x, y, z, t) = 0

and the velocity is related to the pressure through Euler's equation

\rho_0 \frac{\partial v}{\partial t} = -\nabla P.

The condition that the normal component of the velocity at the wall of the pipe must be zero,

n \cdot v \big|_{\partial A} = 0,

translates to a boundary condition on P,

n \cdot \nabla P \big|_{\partial A} = 0.

This boundary condition can be satisfied, and the x, y part of the equation can be separated from the z, t part, by expanding in an eigenfunction expansion in the Neumann eigenfunctions
of ∇2⊥ . More explicitly, let −λj be a listing of the Neumann eigenvalues and let ψj be the corresponding eigenfunctions of ∇2⊥ . Then
P (x, y, z, t) =
X
fj (z, t) ψj (x, y)
j
with
so that
∂2 1 ∂2 fj (z, t) = 0. − λ − j ∂z 2 c2 ∂t2
Consider the steady state solutions of frequency ω. For these p 2 p 2 ω ω i −i 2 −λj z 2 −λj z c c fj (z, t) = aj e + bj e e−iωt P (x, y, z, t) =
X j
p 2 p 2 ω ω i −λj z −i −λj z 2 c2 ψj (x, y) aj e c + bj e e−iωt .
Note that only those terms with
ω2 c2
> λj propagate.
The coefficients aj and bj are determined by the boundary conditions. If the pipe is infinitely long then, for large z, the sound wave must be a traveling wave propagating to the right. For such a wave the coefficients bj are 0. The aj can be determined if, for example, the frequency ω component of the pressure amplitude PA (x, y, z), satisfying P (x, y, z, t) = PA (x, y, z) e−iωt , is known at z = 0. Then aj =
Z
ψj (x, y) PA (x, y, 0) dx dy.
A
As a concrete example consider a cylindrical pipe with radius R. Let there be a piston mounted at z = 0 with radius
R 2
and velocity (along the y-axis) u0 cos(ωt). The
pressure field of the resulting sound wave is calculated. ′ N A is a disk of radius R. Let xN m,j be the j solution of Jm (x) = 0, let km,j = N N and ωm,j = ckm,j . Then the Neumann eigenvalues of −∇2⊥ in A are N λm,j = (km,j )2
corresponding to the eigenfunctions 1 N N J|m| (k|m|,j r)eimθ . ψm,j (r, θ) = √ Nm,j 2π 98
xN m,j R
Eigenfunction Expansions I N N For m = 0 = j one has k0,0 = 0 and then N0,0 = N Nm,j = qR R 0
=
R
√
2 R .
Otherwise
1 N r) J|m| (k|m|,j
xN |m|,j
q N R x|m|,j
J|m| (z)
0
The pressure wave is given by P (r, θ, z, t) = Re
∞ ∞ X X
2
2
ψm,j (r, θ) am,j e
r dr . z dz
p
i c
N )2 z−iωt ω 2 −(ωm,j
m=−∞ j=0
.
The coefficients am,j are determined by the velocity boundary condition at z = 0: ∂P 1 if 0 ≤ r < R2 = ωρ u sin(ωt) · 0 0 0 if R2 < r < R ∂z z=0 ∞ ∞ X X iq 2 −iωt N )2 a ω − (ωm,j = Re ψm,j (r, θ) m,j e c m=−∞ j=0 which gives am,j = q
ρ0 cu0 ω N )2 ω 2 − (ωm,j
= δm,0
√
Z
R 2
0
Z
2π
ψm,j (r, θ) r dθ dr
0
N ρ0 cu0 q 2π N0,j
ω
N )2 ω 2 − (ω0,j
Z
0
R 2
N J0 (k0,j r) r dr.
Thus, P is independent of θ and is given by (noting that Re(ie−iωt ) = sin ωt) hr π ω ei c z−iωt P (r, z, t) = ρ0 cu0 R Re 2 +
∞ X j=1
R
xN 0,j 2
i J0 (x) x dx N q q N J0 (k0,j r)e c 2 R x0,j N )2 N ω 2 − (ω0,j J0 (x) x dx x0,j 0
ω
0
p
N )2 z−iωt ω 2 −(ω0,j
i
.
Note that for j > 0, for which ω^N_{0,j} ≠ 0, the speed of propagation is not c, but is frequency dependent, given by

c_{0,j} = \frac{c\, \omega}{\sqrt{\omega^2 - (\omega^N_{0,j})^2}}.

This phenomenon, the speed of propagation being frequency dependent, is known as dispersion.
Finally, the integrals over the Bessel functions can be done numerically, or by using the identities (7.4) and

\int_0^a J_0(z)\, z \, dz = a\, J_1(a).

To find the zeros x^N_{m,j} of J_m' one can use the identities

J_0'(x) = -J_1(x)

and, for m ≥ 0,

J_m'(x) = J_{m-1}(x) - \frac{m}{x} J_m(x).

Below, J_m' is plotted for m = 0 and m = 1. The zeros are labelled on the horizontal axis. Note that

x^N_{0,1} = 3.832, \quad x^N_{0,2} = 7.016, \quad x^N_{0,3} = 10.173

and

x^N_{1,1} = 5.331, \quad x^N_{1,2} = 8.536, \quad x^N_{1,3} = 11.706.
[Plot of J_0'(x) and J_1'(x), with the zeros 3.832, 5.331, 7.016, 8.536, 10.173, 11.706 labelled on the horizontal axis.]

Finding the zeros of J_0' and J_1'.
..................
The notion of self-adjointness is not restricted to second order operators. Consider

⟨f, \frac{d^4}{dx^4} g⟩ = \int_a^b \bar{f} \frac{d^4 g}{dx^4} \, dx = \Big[ \bar{f} \frac{d^3 g}{dx^3} - \frac{d\bar{f}}{dx} \frac{d^2 g}{dx^2} + \frac{d^2 \bar{f}}{dx^2} \frac{dg}{dx} - \frac{d^3 \bar{f}}{dx^3} g \Big]_a^b + \int_a^b \frac{d^4 \bar{f}}{dx^4}\, g \, dx

so that

⟨f, \frac{d^4}{dx^4} g⟩ = \Big[ \bar{f} \frac{d^3 g}{dx^3} - \frac{d\bar{f}}{dx} \frac{d^2 g}{dx^2} + \frac{d^2 \bar{f}}{dx^2} \frac{dg}{dx} - \frac{d^3 \bar{f}}{dx^3} g \Big]_a^b + ⟨\frac{d^4}{dx^4} f, g⟩.

Thus, \frac{d^4}{dx^4} is self adjoint on (a, b) if

\Big[ \bar{f} \frac{d^3 g}{dx^3} - \frac{d\bar{f}}{dx} \frac{d^2 g}{dx^2} + \frac{d^2 \bar{f}}{dx^2} \frac{dg}{dx} - \frac{d^3 \bar{f}}{dx^3} g \Big]_a^b = 0.
d4 dx4
is to model the vibration of a bar using the one
d4 ∂2 u(x, t) = 0. + K dx4 ∂t2
The most common self adjoint boundary conditions are i) clamped at a (or b): u(a, t) = 0 = ii) free at a (or b):
iii) periodic:
∂u , ∂x x=a
∂ 3 u ∂ 2 u = 0 = , ∂x2 x=a ∂x3 x=a u(a, t) = u(b, t) ∂u ∂u = ∂x x=a ∂x x=b ∂ 2 u ∂ 2 u = ∂x2 x=a ∂x2 x=b ∂ 3 u ∂ 3 u = . ∂x3 x=a ∂x3 x=b
In general, any self adjoint boundary conditions for a fourth order operator must consist of four equations, and the eigenvalue condition will be a determinant of a four by four matrix. Thus, the eigenvalue problem for a fourth order operator is considerably more complicated than that for a second order operator. There is no way to avoid some amount of numerical work, and in special cases tricks are very important. 101
Example 7.7: Consider the vibrations of a bar of length L clamped at both ends. In solving the one dimensional plate equation we begin by generating an eigenfunction expansion for \frac{d^4}{dx^4}. Consider

\Big( \frac{d^4}{dx^4} - \lambda \Big) \psi(x) = 0

with the boundary conditions ψ(0) = 0 = ψ(L), ψ'(0) = 0 = ψ'(L). The general solution may be written

\psi(x) = a \cos kx + b \sin kx + c \cosh kx + d \sinh kx

with k^4 = λ. At 0 one has

0 = a + c \quad \text{and} \quad 0 = kb + kd

so that

\psi(x) = a(\cos kx - \cosh kx) + b(\sin kx - \sinh kx).

At L one has

0 = a(\cos kL - \cosh kL) + b(\sin kL - \sinh kL)

and

0 = ka(-\sin kL - \sinh kL) + kb(\cos kL - \cosh kL).

Thus the eigenvalue condition may be written

0 = (\cos kL - \cosh kL)^2 + (\sin^2 kL - \sinh^2 kL) = 2 - 2 \cos kL \cosh kL,

or

\cos kL \cosh kL = 1.
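This transcendental eigenvalue condition is easy to solve numerically; since cosh grows rapidly, good starting brackets are the half-integer multiples of π where cos kL = 0:

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.cos(x) * np.cosh(x) - 1.0

# Bracket each root near the asymptotic guess x ~ (n + 1/2)*pi.
roots = [brentq(f, (n + 0.5) * np.pi - 1.0, (n + 0.5) * np.pi + 1.0)
         for n in (1, 2, 3)]
print(np.round(roots, 3))   # approximately 4.730, 7.853, 10.996
```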
[Plot of cos(x) and 1/cosh(x) against x = kL from 0 to 20; the eigenvalues occur at the intersections.]

The plot above shows that there are eigenvalues when kL = 4.730, 7.853, 10.996, .... Given one of these values for kL the corresponding (unnormalized) eigenfunction is

f(x) = -\frac{\sin kL - \sinh kL}{\cos kL - \cosh kL} (\cos kx - \cosh kx) + \sin kx - \sinh kx.
Below the first three (unnormalized) eigenfunctions are plotted with L = 1. Note that they are not sinusoidal.

[Plot of f_1(x), f_2(x) and f_3(x) on (0, 1).]

Let the values of kL which satisfy the eigenvalue condition be labeled ξ_j. The eigenvalues are λ_j = \frac{\xi_j^4}{L^4}. Let the normalized eigenfunction corresponding to λ_j be

\psi_j(x) = N_j \Big( -\frac{\sin \xi_j - \sinh \xi_j}{\cos \xi_j - \cosh \xi_j} \big( \cos \frac{\xi_j x}{L} - \cosh \frac{\xi_j x}{L} \big) + \sin \frac{\xi_j x}{L} - \sinh \frac{\xi_j x}{L} \Big)

where N_j = ⟨f_j, f_j⟩^{-1/2}. Then the displacement is

u(x, t) = \sum_{j=1}^{\infty} \psi_j(x) \Big( a_j \cos \frac{\xi_j^2 t}{L^2 \sqrt{K}} + b_j \sin \frac{\xi_j^2 t}{L^2 \sqrt{K}} \Big).
The coefficients aj and bj can be determined by initial conditions: aj = hψj , u(x, 0)i and
√ L2 K ∂u bj = hψ , i. j ξj2 ∂t t=0
Note that for kL large enough the eigenvalue condition becomes cos kL + . . . = 0, correct to order e−kL . Further sin kL − sinh kL = 1 − 2 sin kL e−kL + . . . cos kL − cosh kL correct to order e−2kL . Substituting this approximation into the eigenfunctions one has f (x) = sin kx − cos kx + e−kx − sin kL e−k(L−x) + . . . correct to order e−kL uniformly on [0, L]. Note that the exponential terms are negligable except when x is not much further than
1 k
from either 0 or L.
Solving the approximate eigenvalue condition, cos kL = 0, gives kL = (n + 21 )π. 1
This approximation is valid as long as e−(n+ 2 )π ≪ 1 (this is not a very stringent condition 3
1
since for n ≥ 2 e−(n+ 2 )π ≤ e− 2 π < 0.009). The eigenfunctions are given approximately by
L−x x 1 1 1 x 1 x fn (x) = sin (n + )π − cos (n + )π + e−(n+ 2 )π L − (−1)n−1 e−(n+ 2 )π L + . . . . 2 L 2 L
The exponential terms are important only if x is no more than
2L (n+ 21 )
from either 0 or L (since
e−4π is tiny). Thus fn is essentially sinusoidal except for the first and last cycles. Below is a plot of f10 with L = 1. 2 f_10(x) 1.5 1 0.5 0 -0.5 -1 -1.5 -2 0
0.2
0.4
104
0.6
0.8
1
Eigenfunction Expansions I
..................
The last topic of this chapter is something we’ve been avoiding: how to deal with the angular part of the Laplacian in spherical coordinates. Recall that, in spherical coordinates ((3.1)), ∆=
∂2 2 ∂ 1 1 ∂ ∂ 1 ∂2 . + + sin θ + ∂r2 r ∂r r2 sin θ ∂θ ∂θ sin2 θ ∂φ2
We need to study the operator
L=
1 ∂ ∂ 1 ∂2 sin θ + . sin θ ∂θ ∂θ sin2 θ ∂φ2
(7.5)
The range of θ and φ is chosen so that, for fixed r, the surface of a sphere is covered. The two basic conditions on functions f (θ, φ) that we will act on with L is that they be single valued and finite. In terms of θ and φ this dictates that φ ∈ (0, 2π) with periodic boundary conditions imposed, and that θ ∈ (0, π) with no condition other than that functions are finite. Subject to these conditions we look for eigenfunctions of L, Lf (θ, φ) = λf (θ, φ). The φ dependence can be dealt with by using the periodic eigenfunctions of
∂2 ∂φ2 ,
1 √ eimφ . 2π Then look for eigenfunctions of the form f (θ, φ) = fm (θ) Then
1 imφ e . 2π
1 ∂ ∂ m2 sin θ − − λ fm (θ) = 0. sin θ ∂θ ∂θ sin2 θ
This equation has regular singular points at 0 and π and is usually analysed by making the change of variables x = cos θ. 105
Eigenfunction Expansions I
Then we need to solve the equation d m2 2 d (1 − x ) − − λ g(x) = 0 dx dx 1 − x2
(7.6)
for x ∈ (−1, 1) and set fm (θ) = g(cos θ). The solutions of this equation are called Generalized Legendre functions. The complete study of these functions is quite involved. We won’t get into it. Here we’ll just notice that the indicial equation at both ±1 is r(r − 1) + r −
m2 =0 4
which gives r = ±m. In particular, for m = 0 f (±1) is non-zero while for m 6= 0 f (±1) = 0. The facts are that for each m the eigenvalues of (7.6) are given by λ = −l(l + 1) for l ∈ {|m|, |m| + 1, |m| + 2, . . .}. The corresponding eigenfunctions are the generalized Legendre polynomials Plm (x) =
(−1)m dl+m 2 2 m 2 (1 − x (x − 1)l . ) 2l l! dxl+m
None of the other solutions of (7.6) are finite at both x = ±1. They satisfy the orthogonality realtion
Z
1
−1
m Plm ′ (x)Pl (x) dx =
2 (l + m)! δl ′ l . 2l + 1 (l − m)!
The standard choice for normalized mutually orthogonal eigenfunctions of L, (7.5), are called spherical harmonics and are given by s 2l + 1 (l − m)! m Ylm (θ, φ) = Pl (cos θ)eimφ . 4π (l + m)! Here LYlm = −l(l + 1)Ylm for l ∈ {0, 1, 2, 3, . . .} and m ∈ {−l, −l + 1, . . . , l − 1, l}. They satisfy Z 2π Z π Yl′ m′ (θ, φ) Ylm (θ, φ) sin θ dθ dφ = δl′ l δm′ m . 0
0
The eigenfunction expansion in spherical harmonics is given by f (θ, φ) =
l ∞ X X
l=0 m=−l
106
clm Ylm (θ, φ)
Eigenfunction Expansions I
with clm =
Z
0
2π
Z
π
Ylm (θ, φ) f (θ, φ) sin θ dθ dφ.
0
A few spherical harmonics are given below. r 1 Y00 = 4π r 3 sin θ eiφ Y11 = − 8π r 3 cos θ Y10 = − 4π r 1 15 Y22 = sin2 θ e2iφ 4 2π r 15 sin θ cos θ eiφ Y21 = − 8π r 5 3 1 ( cos2 θ − ). Y20 = 4π 2 2 For the −m spherical harmonics use Yl −m = (−1)m Ylm . Example 7.8:
Consider the radiation from a compact source. Away from the source the
acoustic pressure satisfies the wave equation, written here in spherical coordinates ∂2 2 ∂ 1 1 ∂2 P (r, θ, φ, t) = 0. + + L − ∂r2 r ∂r r2 c2 ∂t2 Expanding in spherical harmonics one has P (r, θ, φ, t) =
l ∞ X X
flm (r, t)Ylm (θ, φ)
l=0 m=−l
where
∂2 2 ∂ l(l + 1) 1 ∂2 + − − 2 2 flm (r, t) = 0. ∂r2 r ∂r r2 c ∂t
The l = 0 equation has exact solutions. In general let flm (r, t) =
ulm (r, t) . r
107
Eigenfunction Expansions I
Then
∂2 l(l + 1) 1 ∂2 ulm (r, t) = 0. − − ∂r2 r2 c2 ∂t2
For l = 0 one has
u00 (r, t) = g(r − ct) + h(r − ct) for arbitrary differentiable functions g and h. In particular f00 (r, t) =
g(r − ct) + h(r + ct) . r
Note that for arbitrary l, m the radial dependence has the asymptotic form flm (r, t) ≈ for large r (large enough so that the
glm (r − ct) + hlm (r + ct) r
l(l+1) r2
term can be ignored).
Now consider the response at frequency \(\omega\). Setting \(P(r,\theta,\phi,t) = P_A(r,\theta,\phi)e^{-i\omega t}\), the pressure amplitude \(P_A\) satisfies the Helmholtz equation, here written in spherical coordinates with \(k = \omega/c\):
\[
\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}L + k^2\right) P_A(r,\theta,\phi) = 0. \tag{7.8}
\]
Expanding \(P_A\) in spherical harmonics one has
\[
P_A(r,\theta,\phi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} f_{A\,lm}(r)\,Y_{lm}(\theta,\phi)
\]
and
\[
\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{l(l+1)}{r^2} + k^2\right) f_{A\,lm}(r) = 0. \tag{7.9}
\]
This is the spherical Bessel equation. Thus
\[
f_{A\,lm}(r) = a_{lm}\, h_l^{(+)}(kr) + b_{lm}\, h_l^{(-)}(kr)
\]
so that
\[
P_A(r,\theta,\phi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} \left(a_{lm}\, h_l^{(+)}(kr) + b_{lm}\, h_l^{(-)}(kr)\right) Y_{lm}(\theta,\phi). \tag{7.10}
\]
Consider the case in which the source is radiating into free space. Recall the large-x asymptotic form for the spherical Hankel functions,
\[
h_l^{(\pm)}(x) \approx \frac{i^{\mp(l+1)}}{x}\,e^{\pm ix}.
\]
Note that the \(h_l^{(+)}\) represent outgoing spherical waves while the \(h_l^{(-)}\) represent incoming spherical waves. Thus, since there is nothing to reflect waves back at the source, in (7.10) the coefficients \(b_{lm} = 0\), so that
\[
P_A(r,\theta,\phi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} a_{lm}\, h_l^{(+)}(kr)\,Y_{lm}(\theta,\phi).
\]
It follows that for kr sufficiently large
\[
P_A(r,\theta,\phi) \approx \frac{e^{ikr}}{kr} \sum_{l=0}^{\infty}\sum_{m=-l}^{l} i^{-(l+1)}\, a_{lm}\, Y_{lm}(\theta,\phi)
= \frac{e^{ikr}}{kr}\,\mathcal{F}(\theta,\phi)
\]
with
\[
\mathcal{F}(\theta,\phi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} i^{-(l+1)}\, a_{lm}\, Y_{lm}(\theta,\phi).
\]
To estimate \(\mathcal{F}\) it is sufficient to estimate the \(a_{lm}\). To make things concrete let the source be some vibrating object, S, small enough to fit into a sphere of radius \(r_0\). In the case in which \(kr_0 \ll 1\) (physically, the case in which the wavelength is much greater than \(r_0\)) the \(a_{lm}\) can be estimated. Introduce a second sphere, of radius \(r_1\), with \(r_0 \ll r_1 \ll \frac{1}{k} = \frac{\lambda}{2\pi}\), and center both spheres at the origin of coordinates.

[Figure: the source sphere of radius \(r_0\) inside the sphere of radius \(r_1\), both much smaller than the wavelength \(\lambda\).]

If \(r_0 < r < r_1\) the \(k^2\) term in (7.8) can be ignored, so that \(\nabla^2 P_A \approx 0\) and (7.9) becomes
\[
\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{l(l+1)}{r^2}\right) f_{A\,lm}(r) = 0.
\]
This is an Euler equation. The solutions which decrease as r increases are
\[
f_{A\,lm}(r) = \frac{\alpha_{lm}}{r^{l+1}}
\]
(this also follows from the small-x asymptotics for \(h_l^{(+)}(x)\)), so that, for \(r_0 \le r \le r_1\),
\[
P_A(r,\theta,\phi) \approx \sum_{l=0}^{\infty} \frac{1}{r^{l+1}} \sum_{m=-l}^{l} \alpha_{lm}\, Y_{lm}(\theta,\phi).
\]
For \(r_0\) small enough the acoustic velocity amplitude at \(r = r_0\) should be close to the velocity amplitude of the surface of the source. Further, through Euler's equation, the acoustic velocity amplitude at \(r = r_0\) is proportional to \(\nabla P_A\big|_{r=r_0}\). Thus
\[
\alpha_{lm} = -\frac{r_0^{l+2}}{l+1} \int_0^{\pi}\!\!\int_0^{2\pi} \overline{Y_{lm}(\theta,\phi)}\,\frac{\partial P_A}{\partial r}\Big|_{r=r_0}\,\sin\theta\,d\phi\,d\theta
= -i\omega\rho_0\,\frac{r_0^{l+2}}{l+1} \int_0^{\pi}\!\!\int_0^{2\pi} \overline{Y_{lm}(\theta,\phi)}\;\hat{r}\cdot v(r_0,\theta,\phi)\,\sin\theta\,d\phi\,d\theta,
\]
which is of order \(\omega\rho_0 r_0^{l+2}\) times the velocity amplitude of the surface of the source. Since \(r_0 \ll r_1\) it follows that
\[
P_A(r_1,\theta,\phi) \approx \frac{1}{\sqrt{4\pi}}\,\frac{\alpha_{00}}{r_1} + \frac{1}{r_1^2}\left(\alpha_{1,1} Y_{1,1} + \alpha_{1,0} Y_{1,0} + \alpha_{1,-1} Y_{1,-1}\right) + \dots
\]
where the omitted terms are smaller by a factor of \(r_0/r_1\). Now using
\[
a_{lm} = \frac{1}{h_l^{(+)}(kr_1)} \int_0^{2\pi}\!\!\int_0^{\pi} \overline{Y_{lm}(\theta,\phi)}\,P_A(r_1,\theta,\phi)\,\sin\theta\,d\theta\,d\phi
\]
one has \(a_{0,0} = k\alpha_{00}\). Finally, to determine \(\alpha_{00}\) use \(\nabla^2 P_A \approx 0\) and Gauss's law, giving
\[
\int_{\partial S} n\cdot\nabla P_A\,d\sigma = r_0^2 \int_0^{2\pi}\!\!\int_0^{\pi} \frac{\partial P_A}{\partial r}\Big|_{r=r_0}\,\sin\theta\,d\theta\,d\phi,
\]
from which it follows, using Euler's equation, that
\[
a_{0,0} = \frac{-i\rho_0\,\omega^2}{\sqrt{4\pi}\,c} \int_{\partial S} n\cdot v_A\,d\sigma.
\]
As long as \(\alpha_{00} \ne 0\) the leading term in \(P_A\) is the first term, so that, for \(r \gg \lambda\),
\[
P_A(r,\theta,\phi) \approx \frac{-i\rho_0\,\omega^2}{4\pi c} \left(\int_{\partial S} n\cdot v_A\,d\sigma\right) \frac{e^{ikr}}{kr}.
\]
In this case note that since
\[
(\nabla^2 + k^2)\,\frac{e^{ikr}}{kr} = -4\pi\,\delta(x)
\]
one sees that \(P_A\) is approximately the solution of
\[
(\nabla^2 + k^2)\, P_A = \frac{-i\rho_0\,\omega^2}{c} \left(\int_{\partial S} n\cdot v_A\,d\sigma\right) \delta(x)
\]
which goes to zero as \(r \to \infty\).
..................
Example 7.9: Consider the non-relativistic quantum mechanics of a free particle of mass m confined to a rigid spherical cavity of radius R. Given an initial state \(\phi(r,\theta,\phi)\), the time-evolved state \(\psi(r,\theta,\phi,t)\) is the solution of the time-dependent Schrödinger equation
\[
-i\hbar\,\frac{\partial\psi}{\partial t} = H\psi
\]
where \(\psi(r,\theta,\phi,0) = \phi(r,\theta,\phi)\) and
\[
H = -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right)\right)
\]
in the sphere \(0 \le r \le R\), subject to the Dirichlet boundary condition \(\psi(R,\theta,\phi,t) = 0\) on the boundary. Given the eigenvalues and eigenfunctions \(\lambda, \psi_\lambda\) of H one has
\[
\psi(r,\theta,\phi,t) = \sum_{\lambda} \langle\psi_\lambda, \phi\rangle\, e^{\frac{i}{\hbar}\lambda t}\,\psi_\lambda(r,\theta,\phi).
\]
Thus, the problem reduces to the eigenvalue problem for H, \((H-\lambda)\psi = 0\). To solve by separation of variables write \(\psi(r,\theta,\phi) = f_{lm}(r)\,Y_{lm}(\theta,\phi)\). One finds that
\[
\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} - \frac{l(l+1)}{r^2} + \frac{2m\lambda}{\hbar^2}\right) f_{lm}(r) = 0.
\]
One has
\[
f_{lm}(r) = N_{lm}\, j_l\!\left(\frac{\sqrt{2m\lambda}}{\hbar}\, r\right)
\]
with the eigenvalues \(\lambda\) determined by
\[
j_l\!\left(\frac{\sqrt{2m\lambda}}{\hbar}\, R\right) = 0.
\]
If \(x_{l,j}\) is the j-th zero of \(j_l(x)\) then the eigenvalues are
\[
\lambda_{l,j} = \frac{(\hbar\, x_{l,j})^2}{2mR^2}.
\]
..................
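The eigenvalues above are easy to compute numerically: one only needs zeros of the spherical Bessel functions. A minimal stdlib-Python sketch (in units \(\hbar = m = R = 1\); the closed forms \(j_0(x) = \sin x / x\) and \(j_1(x) = \sin x / x^2 - \cos x / x\) are standard, and the helper names are ours):

```python
import math

# Bound-state energies come from zeros x_{l,j} of the spherical Bessel
# functions: lambda_{l,j} = (hbar x_{l,j})^2 / (2 m R^2).
def j0(x):
    return math.sin(x) / x

def j1(x):
    return math.sin(x) / x ** 2 - math.cos(x) / x

def bisect_zero(f, a, b, tol=1e-12):
    # assumes f(a) and f(b) have opposite signs
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

x01 = bisect_zero(j0, 2.0, 4.0)   # first zero of j0: exactly pi
x11 = bisect_zero(j1, 3.0, 6.0)   # first zero of j1: about 4.4934
lam_00 = x01 ** 2 / 2.0           # ground state (l = 0, j = 1)
print(x01, x11, lam_00)
```

The l = 0 levels are \(\lambda_{0,j} = (\hbar j\pi)^2/(2mR^2)\) exactly, since \(j_0\) vanishes at the multiples of \(\pi\).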
Example 7.10: We find the solution of
\[
(\nabla^2 + k^2)\, f(x) = 0
\]
which satisfies
\[
f\big|_{\|x\| = r_0} = 1 + \sin\theta\cos\phi
\]
and has the large-\(\|x\| = r\) asymptotic form
\[
f(x) \approx \frac{e^{ikr}}{r}.
\]
This example is rigged to be easy:
\[
1 + \sin\theta\cos\phi = \sqrt{4\pi}\,Y_{0,0} - \sqrt{\frac{2\pi}{3}}\left(Y_{1,1}(\theta,\phi) - Y_{1,-1}(\theta,\phi)\right).
\]
Thus
\[
f(x) = f_{0,0}(r)\,Y_{0,0} + f_{1,1}(r)\,Y_{1,1}(\theta,\phi) + f_{1,-1}(r)\,Y_{1,-1}(\theta,\phi)
\]
with
\[
\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} + k^2\right) f_{0,0}(r) = 0,
\]
\(f_{0,0}(r_0) = \sqrt{4\pi}\) and \(f_{0,0}(r) \to 0\) as \(r \to \infty\), and
\[
\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{2}{r^2} + k^2\right) f_{1,\pm 1}(r) = 0,
\]
\(f_{1,\pm 1}(r_0) = \mp\sqrt{\frac{2\pi}{3}}\) and \(f_{1,\pm 1}(r) \to 0\) as \(r \to \infty\). It follows that
\[
f_{0,0}(r) = \sqrt{4\pi}\,\frac{h_0^{(+)}(kr)}{h_0^{(+)}(kr_0)}
\qquad\text{and}\qquad
f_{1,\pm 1}(r) = \mp\sqrt{\frac{2\pi}{3}}\,\frac{h_1^{(+)}(kr)}{h_1^{(+)}(kr_0)}
\]
so that
\[
f(x) = \frac{h_0^{(+)}(kr)}{h_0^{(+)}(kr_0)} + \frac{h_1^{(+)}(kr)}{h_1^{(+)}(kr_0)}\,\sin\theta\cos\phi
= \frac{r_0}{r}\,e^{ik(r-r_0)} + \frac{\frac{1}{kr} + \frac{i}{k^2 r^2}}{\frac{1}{kr_0} + \frac{i}{k^2 r_0^2}}\;e^{ik(r-r_0)}\,\sin\theta\cos\phi.
\]
..................
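The solution above is easy to check numerically using the closed forms \(h_0^{(+)}(x) = -i\,e^{ix}/x\) and \(h_1^{(+)}(x) = -(e^{ix}/x)(1 + i/x)\); note that any overall normalization convention for the Hankel functions cancels in the ratios. A stdlib-Python sketch (function names ours, with illustrative values \(k = 2\), \(r_0 = 1\)):

```python
import cmath, math

# Outgoing spherical Hankel functions in closed form
def h0p(x):
    return -1j * cmath.exp(1j * x) / x

def h1p(x):
    return -(cmath.exp(1j * x) / x) * (1.0 + 1j / x)

def f(r, theta, phi, k=2.0, r0=1.0):
    # the solution of Example 7.10 built from ratios of Hankel functions
    return (h0p(k * r) / h0p(k * r0)
            + (h1p(k * r) / h1p(k * r0)) * math.sin(theta) * math.cos(phi))

theta, phi = 0.7, 1.2
print(f(1.0, theta, phi))         # boundary value: 1 + sin(theta) cos(phi)
print(abs(f(500.0, theta, phi)))  # small: the field decays like 1/r
```

At \(r = r_0\) the ratios are exactly 1, so the boundary condition is reproduced identically.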
Eigenfunction Expansions II: infinite domains

Consider a second order ordinary differential operator
\[
L = \frac{d}{dx}\, P\, \frac{d}{dx} + Q
\]
on an infinite domain, say \((a,\infty)\). Let
\[
\|f\| = \sqrt{\int_a^\infty |f(x)|^2\,dx}
\]
be a norm. Note that if f, g have finite norm then \(f, g \to 0\) as \(x \to \infty\). If f, g have finite norm and P is bounded for large enough x then
\[
\langle f, Lg\rangle = \int_a^\infty \overline{f(x)}\,Lg(x)\,dx = \langle Lf, g\rangle - P(a)\left(\overline{f(a)}\,g'(a) - \overline{f'(a)}\,g(a)\right)
\]
so that L is self-adjoint on \((a,\infty)\) if \(P(a)\left(\overline{f(a)}\,g'(a) - \overline{f'(a)}\,g(a)\right) = 0\). Note that a Sturm-Liouville boundary condition at a, \(\alpha f(a) + \beta f'(a) = 0\) with \(\alpha\) and \(\beta\) both real, is sufficient to make L self-adjoint.

Self-adjoint operators on infinite domains have generalized eigenfunction expansions. Consider the differential equation
\[
L\psi = \epsilon R\, \psi
\]
where \(R(x) > 0\) and \(\epsilon \in \mathbb{R}\). If, given \(\epsilon\), there is a solution \(\psi\) with finite norm, \(\|\psi\| < \infty\), then \(\epsilon\) is said to be an eigenvalue and \(\psi\) an eigenfunction. Typically, the set of eigenvalues is discrete. If, given \(\epsilon\), there are no solutions with finite norm, but there is at least one solution which is polynomially bounded,
\[
|\psi(x)| \le \text{const}\,(1 + |x|^n)
\]
for some n, then \(\epsilon\) is said to be a generalized (or continuum) eigenvalue and \(\psi\) a generalized (or continuum) eigenfunction. Typically, the set of generalized eigenvalues is a continuum.

Let the eigenvalues and corresponding eigenfunctions be given by \(\epsilon_j\) and \(\psi_j\) respectively. Let the continuum eigenfunctions be \(\psi_n(\epsilon,\cdot)\) and let \(I_n\) be the domain over which \(\epsilon\) varies. The facts are that one has the orthogonality relations
\[
\langle\psi_j, \psi_k\rangle_R = \delta_{jk}, \qquad
\langle\psi_n(\epsilon,\cdot), \psi_k\rangle_R = 0, \qquad
\langle\psi_n(\epsilon,\cdot), \psi_m(\eta,\cdot)\rangle_R = \delta_{nm}\,\delta(\epsilon - \eta),
\]
with
\[
\langle\psi, \phi\rangle_R = \int_a^\infty \overline{\psi(x)}\,\phi(x)\,R(x)\,dx,
\]
and the completeness relation
\[
\sum_j \psi_j(x)\,\overline{\psi_j(y)}\,R(y) + \sum_n \int_{I_n} \psi_n(\epsilon,x)\,\overline{\psi_n(\epsilon,y)}\,R(y)\,d\epsilon = \delta_{(a,\infty)}(x-y).
\]
An immediate consequence is the generalized eigenfunction expansion of a function f relative to L and R:
\[
f(x) = \sum_j c_j\, \psi_j(x) + \sum_n \int_{I_n} \tilde f_n(\epsilon)\,\psi_n(\epsilon,x)\,d\epsilon
\]
with \(c_j = \langle\psi_j, f\rangle_R\) and \(\tilde f_n(\epsilon) = \langle\psi_n(\epsilon,\cdot), f\rangle_R\). The operator L is diagonalized by the eigenfunction expansion in the sense that
\[
Lf(x) = \sum_j \epsilon_j\, c_j\, R(x)\,\psi_j(x) + \sum_n \int_{I_n} \epsilon\,\tilde f_n(\epsilon)\,R(x)\,\psi_n(\epsilon,x)\,d\epsilon.
\]
Note that solving the differential equation \((L - R\epsilon)\psi = 0\) can only determine \(\psi\) up to a multiplicative constant (constant in x, but possibly dependent on \(\epsilon\)). In order that the orthogonality and completeness relations return exactly the delta function, the multiplicative constant must be chosen appropriately. This is completely analogous to normalizing an eigenvector, and the constant will be called a normalization factor.
Note that the normalizations are determined only up to an arbitrary complex phase, since \(\langle e^{i\theta}\psi, e^{i\theta}\psi\rangle_R = \langle\psi, \psi\rangle_R\). Further, it is common in the integral over the continuous spectrum to integrate over some variable other than the eigenvalue parameter \(\epsilon\) (a frequent choice is the wavenumber k given by \(\epsilon = k^2\)). If \(\lambda\) is some parameter related differentiably to \(\epsilon\) then the completeness relation can be written
\[
\sum_j \psi_j(x)\,\overline{\psi_j(y)}\,R(y) + \sum_n \int_{\epsilon^{-1}(I_n)} \psi_n(\epsilon(\lambda),x)\,\overline{\psi_n(\epsilon(\lambda),y)}\,R(y)\,\frac{d\epsilon}{d\lambda}\,d\lambda = \delta_{(a,\infty)}(x-y)
\]
so that the continuum eigenfunctions normalized with respect to \(\lambda\), \(\phi_n(\lambda,x)\), are given by
\[
\phi_n(\lambda,x) = \sqrt{\frac{d\epsilon}{d\lambda}}\;\psi_n(\epsilon(\lambda),x).
\]

Example 8.1: For \(L = \frac{d^2}{dx^2}\) on \((-\infty,\infty)\), all of the solutions to \(\psi'' = \epsilon\psi\) are of the form
\[
\psi(x) = a\,e^{\sqrt{\epsilon}\,x} + b\,e^{-\sqrt{\epsilon}\,x}.
\]
The only way for these functions to be polynomially bounded for all x is if \(\epsilon \in (-\infty, 0)\). In particular, there are no eigenvalues, but the continuum eigenvalues are \((-\infty,0)\) and, given \(\epsilon \in (-\infty,0)\), the continuum eigenfunctions are
\[
N_\pm(\epsilon)\,e^{\pm i\sqrt{-\epsilon}\,x}.
\]
The normalization factor can be determined either by the orthogonality relation,
\[
\int_{-\infty}^{\infty} \overline{N_\pm(\epsilon)}\,N_\pm(\eta)\, e^{\pm i\left(\sqrt{-\eta}-\sqrt{-\epsilon}\right)x}\,dx
= 2\pi\,\overline{N_\pm(\epsilon)}\,N_\pm(\eta)\,\delta\!\left(\sqrt{-\epsilon} - \sqrt{-\eta}\right)
= 4\pi\sqrt{-\epsilon}\;\overline{N_\pm(\epsilon)}\,N_\pm(\eta)\,\delta(\epsilon - \eta)
= \delta(\epsilon - \eta),
\]
so that one may choose
\[
N_\pm(\epsilon) = \sqrt{\frac{1}{4\pi\sqrt{-\epsilon}}},
\]
or by the completeness relation,
\[
\sum_\pm \int_{-\infty}^{0} |N_\pm(\epsilon)|^2\, e^{\pm i\sqrt{-\epsilon}\,(x-y)}\,d\epsilon
= \sum_\pm \int_0^{\infty} |N_\pm(-k^2)|^2\, e^{\pm ik(x-y)}\,2k\,dk
= \int_{-\infty}^{\infty} |N_\pm(-k^2)|^2\, e^{ik(x-y)}\,2|k|\,dk
= \delta(x-y),
\]
so that, again, one may choose
\[
N_\pm(\epsilon) = \sqrt{\frac{1}{4\pi\sqrt{-\epsilon}}}.
\]
Setting \(\epsilon = -k^2\), the generalized eigenfunction expansion is just the Fourier transform. The completeness relation becomes
\[
\delta(x-y) = \sum_\pm \int_{-\infty}^{0} \frac{1}{4\pi\sqrt{-\epsilon}}\, e^{\pm i\sqrt{-\epsilon}\,(x-y)}\,d\epsilon
= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ik(x-y)}\,dk.
\]
..................
Example 8.2: For \(L = \frac{d^2}{dx^2}\) on \((0,\infty)\) with Dirichlet boundary condition \(\psi(0) = 0\), all of the solutions of the differential equation \(\psi'' = \epsilon\psi\) which satisfy the boundary condition are of the form
\[
\psi(x) = a\,\sin\!\left(\sqrt{-\epsilon}\,x\right).
\]
They are polynomially bounded for all \(x > 0\) only if \(\epsilon \in (-\infty, 0)\). Setting \(\epsilon = -k^2\) and noting that changing k to −k gives nothing new, one obtains the generalized eigenfunction expansion, often called the Fourier sine transform,
\[
f(x) = \int_0^{\infty} \hat f(k)\,a\sin(kx)\,dk
\]
with
\[
\hat f(k) = \int_0^{\infty} f(x)\,a\sin(kx)\,dx.
\]
The normalization constant, a, can be determined by the completeness condition
\[
\delta_{(0,\infty)}(x-y) = \int_0^{\infty} |a|^2 \sin(kx)\sin(ky)\,dk
= -\frac{|a|^2}{4} \int_0^{\infty} \left(e^{ik(x+y)} - e^{ik(x-y)} - e^{-ik(x-y)} + e^{-ik(x+y)}\right) dk
= \frac{\pi}{2}\,|a|^2\left(\delta(x-y) - \delta(x+y)\right).
\]
Since x and y are both positive, \(\delta(x+y) = 0\), so it is sufficient to choose
\[
a = \sqrt{\frac{2}{\pi}}.
\]
Similarly there is the Fourier cosine transform
\[
f(x) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} \hat f(k)\cos(kx)\,dk
\]
with
\[
\hat f(k) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f(x)\cos(kx)\,dx,
\]
corresponding to Neumann boundary conditions at x = 0.
..................
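The sine-transform pair can be spot-checked numerically against a standard table entry: for \(f(x) = e^{-x}\) one has \(\hat f(k) = \sqrt{2/\pi}\;k/(1+k^2)\). A stdlib-Python sketch (midpoint rule; the helper name `sine_transform` is ours):

```python
import math

# Numerical Fourier sine transform of f(x) = exp(-x); the exponential
# decay makes truncation of the integral at `upper` harmless.
def sine_transform(f, k, upper=60.0, n=120000):
    h = upper / n
    return math.sqrt(2.0 / math.pi) * h * sum(
        f((j + 0.5) * h) * math.sin(k * (j + 0.5) * h) for j in range(n))

f = lambda x: math.exp(-x)
for k in (0.5, 1.0, 3.0):
    exact = math.sqrt(2.0 / math.pi) * k / (1.0 + k * k)
    print(k, sine_transform(f, k), exact)
```

The same routine with `math.cos` in place of `math.sin` checks cosine-transform pairs.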
Example 8.3: As an application consider sound propagation near a wall. Let the wall be at x = 0, let the pressure be constant in the y direction, and suppose the sound field at z = 0 is known for all time and that the sound is propagating in the z direction. Then
\[
P(x,z,t) = \frac{1}{\pi} \int_{-\infty}^{\infty}\!\!\int_0^{\infty} a(q,z,\omega)\cos(qx)\,e^{-i\omega t}\,dq\,d\omega.
\]
Applying the wave equation to P one finds that
\[
\left(\frac{\partial^2}{\partial z^2} + \frac{\omega^2}{c^2} - q^2\right) a(q,z,\omega) = 0
\]
so that
\[
a(q,z,\omega) = \alpha(q,\omega)\,e^{i\sqrt{\frac{\omega^2}{c^2} - q^2}\,z} + \beta(q,\omega)\,e^{-i\sqrt{\frac{\omega^2}{c^2} - q^2}\,z}
\]
and thus
\[
P(x,z,t) = \frac{1}{\pi} \int_{-\infty}^{\infty}\!\!\int_0^{\infty} \left(\alpha(q,\omega)\,e^{i\sqrt{\frac{\omega^2}{c^2} - q^2}\,z} + \beta(q,\omega)\,e^{-i\sqrt{\frac{\omega^2}{c^2} - q^2}\,z}\right) \cos(qx)\,e^{-i\omega t}\,dq\,d\omega.
\]
For the sound to propagate and remain finite in the direction of increasing z one must have \(\alpha(q,\omega) = 0\) for \(\omega < 0\) and \(\beta(q,\omega) = 0\) for \(\omega > 0\), and the square roots must be defined so that, for \(q > |\omega/c|\), the exponentials decrease with increasing z. Then, taking into account the reality of P, one may write
\[
P(x,z,t) = \operatorname{Re}\,\frac{2}{\pi} \int_0^{\infty}\!\!\int_0^{\infty} \alpha(q,\omega)\,e^{i\sqrt{\frac{\omega^2}{c^2} - q^2}\,z - i\omega t}\,\cos(qx)\,dq\,d\omega,
\]
choosing the square root here to have non-negative imaginary part. Since \(P(x,0,t) = f(x,t)\) is known, one has
\[
\alpha(q,\omega) = \frac{1}{\pi} \int_{-\infty}^{\infty}\!\!\int_0^{\infty} f(x,t)\cos(qx)\,e^{i\omega t}\,dx\,dt.
\]
Substituting back into the integral for \(P(x,z,t)\) one can compute \(P(x,z,t)\) for all t and \(z > 0\). In particular, one can study diffraction near a wall: choose
\[
f(x,t) = \begin{cases} \cos(\omega_0 t) & \text{if } 0 < x < x_0 \\ 0 & \text{if } x > x_0. \end{cases}
\]
Then
\[
\alpha(q,\omega) = \frac{\sin(q x_0)}{q}\left(\delta(\omega - \omega_0) + \delta(\omega + \omega_0)\right)
\]
and
\[
P(x,z,t) = \operatorname{Re}\,\frac{2}{\pi} \int_0^{\infty} \frac{\sin(q x_0)}{q}\, e^{i\sqrt{\frac{\omega_0^2}{c^2} - q^2}\,z - i\omega_0 t}\,\cos(qx)\,dq.
\]
..................
Example 8.4: Consider
\[
L = -\frac{d^2}{dx^2}
\]
on \((0,\infty)\) with the boundary condition \(\psi'(0) = -a\psi(0)\) for some positive constant a. The eigenvalue problem
\[
\left(-\frac{d^2}{dx^2} - \epsilon\right)\psi = 0
\]
has the general solution
\[
\psi(x) = N\left(\sqrt{\epsilon}\,\cos(\sqrt{\epsilon}\,x) - a\sin(\sqrt{\epsilon}\,x)\right).
\]
These solutions are bounded for \(\epsilon \ge 0\), so the continuous spectrum is \((0,\infty)\). Normalizing with respect to \(k = \sqrt{\epsilon}\) one has
\[
\psi(x) = N\left(k\cos(kx) - a\sin(kx)\right)
= N\sqrt{k^2 + a^2}\,\cos\!\left(kx + \arctan\frac{a}{k}\right)
\]
so that (by comparison with the cosine transform)
\[
N = \sqrt{\frac{2}{\pi(k^2 + a^2)}}.
\]
In addition to the continuous spectrum there is one eigenvalue, corresponding to the square integrable solution obtained when \(\epsilon = -a^2\), i.e. \(\sqrt{\epsilon} = ia\). The corresponding normalized eigenfunction is
\[
\phi(x) = \sqrt{2a}\,e^{-ax}.
\]
The resulting eigenfunction expansion is
\[
f(x) = c_0\,\sqrt{2a}\,e^{-ax} + \sqrt{\frac{2}{\pi}} \int_0^{\infty} c(k)\,\cos\!\left(kx + \arctan\frac{a}{k}\right) dk
\]
with
\[
c_0 = \int_0^{\infty} \sqrt{2a}\,e^{-ax}\, f(x)\,dx
\]
and
\[
c(k) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} \cos\!\left(kx + \arctan\frac{a}{k}\right) f(x)\,dx.
\]
Note that when a = 0 one obtains the cosine transform, and in the limit \(a \to \infty\) one obtains the sine transform.
..................
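The bound state can be checked directly: \(\phi(x) = \sqrt{2a}\,e^{-ax}\) should have unit norm on \((0,\infty)\) and satisfy the boundary condition \(\phi'(0) = -a\,\phi(0)\). A minimal stdlib-Python sketch (the choice a = 1.7 is just illustrative):

```python
import math

a = 1.7

def phi(x):
    # the single bound state of Example 8.4
    return math.sqrt(2.0 * a) * math.exp(-a * x)

# normalization on (0, infinity): midpoint rule with a far cutoff
n, upper = 100000, 40.0 / a
h = upper / n
norm = h * sum(phi((j + 0.5) * h) ** 2 for j in range(n))
print(norm)  # ≈ 1

# boundary condition phi'(0) = -a phi(0), by a one-sided difference
dx = 1e-7
dphi0 = (phi(dx) - phi(0.0)) / dx
print(dphi0, -a * phi(0.0))  # the two agree to O(dx)
```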
Example 8.5: Now consider
\[
L = -\frac{d^2}{dx^2} + u(x)
\]
where
\[
u(x) = \begin{cases} 0 & \text{if } |x| > 1 \\ -D & \text{if } |x| \le 1. \end{cases}
\]
Then the solutions of
\[
(L - \epsilon)\,\psi = 0
\]
can be constructed as follows. Let \(k = \sqrt{\epsilon}\). One has
\[
\psi(x) = \begin{cases}
A_-\, e^{ikx} + B_-\, e^{-ikx} & \text{if } x < -1 \\
a\,e^{i\sqrt{k^2+D}\,x} + b\,e^{-i\sqrt{k^2+D}\,x} & \text{if } -1 \le x \le 1 \\
A_+\, e^{ikx} + B_+\, e^{-ikx} & \text{if } 1 < x
\end{cases}
\]
with the conditions that \(\psi\) and its derivative be continuous at \(x = \pm 1\). It follows that
\[
\begin{pmatrix} e^{-ik} & e^{ik} \\ ik\, e^{-ik} & -ik\, e^{ik} \end{pmatrix}
\begin{pmatrix} A_- \\ B_- \end{pmatrix}
=
\begin{pmatrix} e^{-i\sqrt{k^2+D}} & e^{i\sqrt{k^2+D}} \\ i\sqrt{k^2+D}\,e^{-i\sqrt{k^2+D}} & -i\sqrt{k^2+D}\,e^{i\sqrt{k^2+D}} \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix}
\]
and
\[
\begin{pmatrix} e^{ik} & e^{-ik} \\ ik\, e^{ik} & -ik\, e^{-ik} \end{pmatrix}
\begin{pmatrix} A_+ \\ B_+ \end{pmatrix}
=
\begin{pmatrix} e^{i\sqrt{k^2+D}} & e^{-i\sqrt{k^2+D}} \\ i\sqrt{k^2+D}\,e^{i\sqrt{k^2+D}} & -i\sqrt{k^2+D}\,e^{-i\sqrt{k^2+D}} \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix}
\]
so that
\[
\begin{pmatrix} A_- \\ B_- \end{pmatrix} = T \begin{pmatrix} A_+ \\ B_+ \end{pmatrix}
\]
where
\[
T = \begin{pmatrix} e^{-ik} & e^{ik} \\ ik\, e^{-ik} & -ik\, e^{ik} \end{pmatrix}^{-1}
\begin{pmatrix} e^{-i\sqrt{k^2+D}} & e^{i\sqrt{k^2+D}} \\ i\sqrt{k^2+D}\,e^{-i\sqrt{k^2+D}} & -i\sqrt{k^2+D}\,e^{i\sqrt{k^2+D}} \end{pmatrix}
\begin{pmatrix} e^{i\sqrt{k^2+D}} & e^{-i\sqrt{k^2+D}} \\ i\sqrt{k^2+D}\,e^{i\sqrt{k^2+D}} & -i\sqrt{k^2+D}\,e^{-i\sqrt{k^2+D}} \end{pmatrix}^{-1}
\begin{pmatrix} e^{ik} & e^{-ik} \\ ik\, e^{ik} & -ik\, e^{-ik} \end{pmatrix}
= \begin{pmatrix} T_{11}(k) & T_{12}(k) \\ T_{21}(k) & T_{22}(k) \end{pmatrix}
\]
with
\[
T_{11}(k) = \frac{1}{4k\sqrt{k^2+D}}\left(\left(k+\sqrt{k^2+D}\right)^2 e^{2i\left(k-\sqrt{k^2+D}\right)} - \left(k-\sqrt{k^2+D}\right)^2 e^{2i\left(k+\sqrt{k^2+D}\right)}\right),
\]
\[
T_{12}(k) = -\frac{iD}{2k\sqrt{k^2+D}}\,\sin\!\left(2\sqrt{k^2+D}\right),
\qquad
T_{21}(k) = \frac{iD}{2k\sqrt{k^2+D}}\,\sin\!\left(2\sqrt{k^2+D}\right),
\]
and
\[
T_{22}(k) = \frac{1}{4k\sqrt{k^2+D}}\left(\left(k+\sqrt{k^2+D}\right)^2 e^{2i\left(-k+\sqrt{k^2+D}\right)} - \left(k-\sqrt{k^2+D}\right)^2 e^{-2i\left(k+\sqrt{k^2+D}\right)}\right).
\]
If \(\epsilon > 0\) then k is real and \(\psi\) is polynomially bounded, but does not have finite norm. Thus all of \((0,\infty)\) consists of continuum eigenvalues and, given \(\epsilon \in (0,\infty)\), there are two corresponding continuum eigenfunctions. (They can be taken to be what one obtains with \(A_- \ne 0, B_- = 0\) and with \(A_- = 0, B_- \ne 0\), or whatever linear combinations are convenient.)

If \(\epsilon < 0\) then k is pure imaginary and can be chosen to have positive imaginary part. Then \(\psi\) grows exponentially unless both \(A_- = 0\) and \(B_+ = 0\). But this is possible only if \(T_{11}(k) = 0\). Given such a k, \(\psi\) decreases exponentially and is of finite norm, so that \(\epsilon = k^2\) is an eigenvalue. Setting \(k = i\sqrt{D}\,\kappa\), the condition \(T_{11}(k) = 0\) becomes
\[
e^{-i\sqrt{D}\sqrt{1-\kappa^2}} = -\kappa - i\sqrt{1-\kappa^2}
\]
with \(\kappa \ne 1\). For \(\kappa > 1\) one would have
\[
e^{\sqrt{D}\sqrt{\kappa^2-1}} = -\kappa + \sqrt{\kappa^2-1},
\]
but the left side is positive while the right is negative, so there are no solutions with \(\kappa > 1\). For \(0 < \kappa < 1\) one can break into real and imaginary parts. One finds
\[
\cos\!\left(\sqrt{D}\sqrt{1-\kappa^2}\right) = -\kappa
\qquad\text{and}\qquad
\sin\!\left(\sqrt{D}\sqrt{1-\kappa^2}\right) = \sqrt{1-\kappa^2}.
\]
In principle, since this is two equations, one cannot expect a real solution, but these two equations are actually equivalent, since \(\cos\theta = -\kappa\) implies that \(\sin\theta = \sqrt{1-\cos^2\theta} = \sqrt{1-\kappa^2}\). The most convenient one to solve is the second. Solving amounts to finding the values of \(x \in (0,1)\) for which
\[
\sin\!\left(\sqrt{D}\,x\right) = x.
\]
For \(D \le 1\) there are no solutions. For \(D > 1\) there is at least one solution, with the number of solutions increasing as \(\sqrt{D}\) increases. For each solution x there is an eigenvalue \(\epsilon = -(1-x^2)\,D\).

Now let \(D > 1\) and let \(\psi_1, \dots, \psi_N\) be the normalized eigenfunctions corresponding to eigenvalues \(\epsilon_1, \dots, \epsilon_N\). For the continuum eigenfunctions let \(\psi_-(k,\cdot)\) be the normalized solution with \(A_- = 0\) and \(B_- = B\), and let \(\psi_+(k,\cdot)\) be the normalized solution with \(B_- = 0\) and \(A_- = A\). To determine the normalization constant for the eigenfunctions it is sufficient to compute \(\|\psi_j\|\) and check that it is 1. For the continuum eigenfunctions we will use the completeness relation
\[
\delta(x-y) = \sum_{j=1}^{N} \psi_j(x)\,\overline{\psi_j(y)} + \sum_{\pm} \int_0^{\infty} \psi_\pm(k,x)\,\overline{\psi_\pm(k,y)}\,dk
= \sum_{j=1}^{N} \psi_j(x)\,\overline{\psi_j(y)} + \int_{-\infty}^{\infty} \psi_+(k,x)\,\overline{\psi_-(k,y)}\,dk.
\]
Since the \(\psi_j\) go to zero exponentially as \(x \to -\infty\), if we let both x and y go to \(-\infty\) we get
\[
\delta(x-y) = \int_{-\infty}^{\infty} B\bar A\, e^{-ik(x-y)}\,dk
\]
so that it is sufficient to choose
\[
A = B = \frac{1}{\sqrt{2\pi}}.
\]
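The eigenvalue condition \(\sin(\sqrt{D}\,x) = x\) on \((0,1)\) is easy to solve numerically. The following stdlib-Python sketch (the helper name `bound_states` is ours) scans for sign changes and then bisects; each root x gives an eigenvalue \(\epsilon = -(1-x^2)D\):

```python
import math

def bound_states(D, n_scan=20000):
    # roots of sin(sqrt(D) x) = x on (0, 1), excluding the trivial root x = 0
    g = lambda x: math.sin(math.sqrt(D) * x) - x
    eigenvalues = []
    prev_x, prev_g = 1e-9, g(1e-9)
    for j in range(1, n_scan + 1):
        x = j / n_scan
        gx = g(x)
        if (prev_g > 0.0 >= gx) or (prev_g < 0.0 <= gx):
            a, b = prev_x, x
            for _ in range(60):          # refine by bisection
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            root = 0.5 * (a + b)
            eigenvalues.append(-(1.0 - root * root) * D)
        prev_x, prev_g = x, gx
    return eigenvalues

print(bound_states(0.5))  # D <= 1: no bound states
print(bound_states(4.0))  # one bound state with epsilon < 0
```

For D = 4 the single root is near x ≈ 0.948, giving \(\epsilon \approx -0.407\), consistent with a shallow well supporting exactly one bound state.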
Applications: This operator arises both in the quantum mechanics of a particle of mass m in a square well, in which the Hamiltonian is \(H = \frac{\hbar^2}{2m}L\), and in the one dimensional Helmholtz equation with slower wave speed in a finite region, in which
\[
c(x) = \begin{cases} c_0 & \text{if } |x| > 1 \\ c_0 - \delta & \text{if } |x| < 1 \end{cases}
\qquad\text{and}\qquad
u(x) = \frac{\omega^2}{c_0^2} - \frac{\omega^2}{c(x)^2}.
\]
In the former case a representation of the time evolution operator may be obtained by eigenfunction expansion:
\[
\langle x|\, e^{\frac{i}{\hbar}\frac{\hbar^2}{2m}Lt}\,|y\rangle = \sum_{j=1}^{N} e^{\frac{i}{\hbar}\frac{\hbar^2}{2m}\epsilon_j t}\,\psi_j(x)\,\overline{\psi_j(y)} + \int_{-\infty}^{\infty} e^{\frac{i}{\hbar}\frac{\hbar^2}{2m}k^2 t}\,\psi_+(k,x)\,\overline{\psi_-(k,y)}\,dk.
\]
In the latter case the possible free vibration fields \(y(x,\omega)\) at (angular) frequency \(\omega\) of the string may be obtained by eigenfunction expansion:
\[
y(x,\omega) = \sum_{j=1}^{N} c_j\, \psi_j(x) + \int_{-\infty}^{\infty} c(k)\,\psi_+(k,x)\,dk.
\]
..................
Now let's apply this machinery to Bessel's equation,
\[
\left(\frac{d^2}{dx^2} + \frac{1}{x}\frac{d}{dx} - \frac{m^2}{x^2} - \epsilon\right)\psi(x) = 0.
\]
Thus, we are considering the operator
\[
L = \frac{d^2}{dx^2} + \frac{1}{x}\frac{d}{dx} - \frac{m^2}{x^2}.
\]
Note that L can be related to the standard form
\[
L + k^2 = \frac{1}{P}\left(\frac{d}{dx}\, P\, \frac{d}{dx} + Q + k^2 R\right)
\]
by choosing \(\frac{P'}{P} = \frac{1}{x}\), so that
\[
P = x, \qquad Q = -\frac{m^2}{x}, \qquad R = x.
\]
The most common example is to take \(x \in (0,\infty)\) and, as a boundary condition, insist that \(\psi\) be finite. Then
\[
\psi\!\left(\sqrt{-\epsilon}, x\right) = a\,J_m\!\left(\sqrt{-\epsilon}\,x\right).
\]
This function is polynomially bounded only if \(\sqrt{-\epsilon}\) is real. Thus L has only continuum eigenvalues; in terms of \(k = \sqrt{-\epsilon}\) these are \((0,\infty)\) for m > 0 and \([0,\infty)\) for m = 0. The completeness relation gives, setting \(k = \sqrt{-\epsilon}\),
\[
\int_0^{\infty} |a|^2\, J_m(kx)\, J_m(ky)\, y\, 2k\,dk = \delta(x-y).
\]
To determine a let \(x, y \to \infty\). Then, recalling the large-x asymptotic form for \(J_m\) from Example 5.9,
\[
\delta(x-y) = \frac{4}{\pi}\sqrt{\frac{y}{x}} \int_0^{\infty} |a|^2 \cos\!\left(kx - \frac{m\pi}{2} - \frac{\pi}{4}\right)\cos\!\left(ky - \frac{m\pi}{2} - \frac{\pi}{4}\right) dk,
\]
from which it follows, writing the cosines as sums of exponentials and using the Fourier transform completeness relation, that
\[
|a|^2 = \frac{1}{2}.
\]
Thus one can write the completeness relation as
\[
\int_0^{\infty} J_m(kx)\, J_m(ky)\, y\, k\,dk = \delta(x-y).
\]
The transform obtained in this way is known as the Hankel transform. Given a function f the Hankel transform is usually written
\[
\tilde f(k) = \int_0^{\infty} x\, J_m(kx)\, f(x)\,dx
\]
so that the inverse transform is given by
\[
f(x) = \int_0^{\infty} k\, J_m(kx)\, \tilde f(k)\,dk.
\]
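A classic check of the Hankel transform pair (order m = 0) is that the Gaussian \(e^{-x^2/2}\) is its own transform: \(\int_0^\infty x\,J_0(kx)\,e^{-x^2/2}\,dx = e^{-k^2/2}\). A stdlib-Python sketch, building \(J_0\) from its integral representation \(J_0(z) = \frac{1}{\pi}\int_0^\pi \cos(z\sin t)\,dt\) (helper names ours):

```python
import math

def J0(z, n=200):
    # integral representation of the Bessel function J0
    h = math.pi / n
    return (h / math.pi) * sum(math.cos(z * math.sin((j + 0.5) * h))
                               for j in range(n))

def hankel0(f, k, upper=10.0, n=2000):
    # order-0 Hankel transform by the midpoint rule; the Gaussian decay
    # of f makes the truncation at `upper` harmless
    h = upper / n
    total = 0.0
    for j in range(n):
        x = (j + 0.5) * h
        total += x * J0(k * x) * f(x)
    return total * h

for k in (0.5, 1.5):
    print(k, hankel0(lambda x: math.exp(-0.5 * x * x), k), math.exp(-0.5 * k * k))
```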
Other transforms can be obtained by choosing different boundary conditions. For example, consider \(x \in (a,\infty)\) for a > 0 with the boundary condition \(\psi(a) = 0\). Then
\[
\psi\!\left(\sqrt{-\epsilon}, x\right) = A\,J_m\!\left(\sqrt{-\epsilon}\,x\right) + B\,Y_m\!\left(\sqrt{-\epsilon}\,x\right)
\]
with
\[
A\,J_m\!\left(\sqrt{-\epsilon}\,a\right) + B\,Y_m\!\left(\sqrt{-\epsilon}\,a\right) = 0,
\]
so that
\[
\psi\!\left(\sqrt{-\epsilon}, x\right) = A\left(J_m\!\left(\sqrt{-\epsilon}\,x\right) - \frac{J_m\!\left(\sqrt{-\epsilon}\,a\right)}{Y_m\!\left(\sqrt{-\epsilon}\,a\right)}\,Y_m\!\left(\sqrt{-\epsilon}\,x\right)\right).
\]
Again this function is polynomially bounded only if \(\sqrt{-\epsilon}\) is real, so L has only continuum eigenvalues. The completeness relation gives, setting \(k = \sqrt{-\epsilon}\),
\[
\int_0^{\infty} |A|^2 \left(J_m(kx) - \frac{J_m(ka)}{Y_m(ka)}\, Y_m(kx)\right)\left(J_m(ky) - \frac{J_m(ka)}{Y_m(ka)}\, Y_m(ky)\right) y\, 2k\,dk = \delta(x-y).
\]
Again, to determine \(|A|^2\) let \(x, y \to \infty\). Recalling the large-x asymptotic forms for \(J_m\) and \(Y_m\),
\[
\delta(x-y) = \frac{4}{\pi}\sqrt{\frac{y}{x}} \int_0^{\infty} |A|^2
\left(\cos\!\left(kx - \frac{m\pi}{2} - \frac{\pi}{4}\right) - \frac{J_m(ka)}{Y_m(ka)}\sin\!\left(kx - \frac{m\pi}{2} - \frac{\pi}{4}\right)\right)
\cdot \left(\cos\!\left(ky - \frac{m\pi}{2} - \frac{\pi}{4}\right) - \frac{J_m(ka)}{Y_m(ka)}\sin\!\left(ky - \frac{m\pi}{2} - \frac{\pi}{4}\right)\right) dk.
\]
Using the identity
\[
\alpha\cos\theta + \beta\sin\theta = \sqrt{\alpha^2+\beta^2}\,\cos\!\left(\theta - \tan^{-1}\frac{\beta}{\alpha}\right)
\]
one finds
\[
\delta(x-y) = \frac{4}{\pi}\sqrt{\frac{y}{x}} \int_0^{\infty} |A|^2 \left(1 + \left(\frac{J_m(ka)}{Y_m(ka)}\right)^2\right) \cos(kx - \phi(k))\cos(ky - \phi(k))\,dk
\]
with
\[
\phi(k) = \frac{m\pi}{2} + \frac{\pi}{4} - \tan^{-1}\frac{J_m(ka)}{Y_m(ka)}.
\]
Choosing
\[
|A|^2 = \frac{1}{2\left(1 + \left(\frac{J_m(ka)}{Y_m(ka)}\right)^2\right)}
\]
and noting that
\[
\lim_{x+y\to\infty} \int_0^{\infty} \left(e^{ik(x+y) - 2i\phi(k)} + e^{-ik(x+y) + 2i\phi(k)}\right) dk
= -\lim_{x+y\to\infty} \frac{1}{i(x+y)} \int_0^{\infty} \left(e^{ik(x+y)}\frac{d}{dk}e^{-2i\phi(k)} - e^{-ik(x+y)}\frac{d}{dk}e^{2i\phi(k)}\right) dk = 0,
\]
the completeness relation becomes
\[
\int_0^{\infty} \frac{\left(J_m(kx) - \frac{J_m(ka)}{Y_m(ka)}\, Y_m(kx)\right)\left(J_m(ky) - \frac{J_m(ka)}{Y_m(ka)}\, Y_m(ky)\right)}{1 + \left(\frac{J_m(ka)}{Y_m(ka)}\right)^2}\; y\,k\,dk = \delta(x-y).
\]
If the transform associated with this completeness relation is chosen to be
\[
\tilde f(k) = \int_a^{\infty} f(x)\left(J_m(kx) - \frac{J_m(ka)}{Y_m(ka)}\, Y_m(kx)\right) x\,dx
\]
then the inverse transform is
\[
f(x) = \int_0^{\infty} \frac{J_m(kx) - \frac{J_m(ka)}{Y_m(ka)}\, Y_m(kx)}{1 + \left(\frac{J_m(ka)}{Y_m(ka)}\right)^2}\; \tilde f(k)\, k\,dk.
\]

Example 8.6:
Consider sound propagating from the ground. Let the normal component of the acoustic velocity field be known at z = 0, \(v_z(r,\theta,0,t) = f(r,\theta,t)\). Let
\[
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) P = 0.
\]
Writing
\[
P(r,\theta,z,t) = \frac{1}{2\pi} \sum_{m=-\infty}^{\infty} \int_{-\infty}^{\infty}\!\!\int_0^{\infty} \tilde P_m(k,z,\omega)\, e^{im\theta - i\omega t}\, J_{|m|}(kr)\, k\,dk\,d\omega
\]
one has
\[
\left(\frac{d^2}{dz^2} - k^2 + \frac{\omega^2}{c^2}\right) \tilde P_m(k,z,\omega) = 0
\]
so that
\[
\tilde P_m(k,z,\omega) = A_m(k,\omega)\, e^{i\sqrt{\frac{\omega^2}{c^2} - k^2}\,z}.
\]
To determine \(A_m(k,\omega)\) use the boundary condition at z = 0,
\[
-\rho_0\,\frac{\partial f(r,\theta,t)}{\partial t} = \frac{1}{2\pi} \sum_{m=-\infty}^{\infty} \int_{-\infty}^{\infty}\!\!\int_0^{\infty} A_m(k,\omega)\, i\sqrt{\frac{\omega^2}{c^2} - k^2}\; e^{im\theta - i\omega t}\, J_{|m|}(kr)\, k\,dk\,d\omega,
\]
so that
\[
A_m(k,\omega) = \frac{\rho_0\,\omega}{\sqrt{\frac{\omega^2}{c^2} - k^2}} \int_{-\infty}^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{2\pi} f(r,\theta,t)\, e^{-im\theta + i\omega t}\, J_{|m|}(kr)\, r\,d\theta\,dr\,dt.
\]
As a concrete example let
\[
f(r,\theta,t) = \frac{1}{r}\,\delta(r)\,\delta(t).
\]
Then
\[
A_m(k,\omega) = 2\pi\rho_0 c\,\delta_{m,0}\,\frac{\omega}{\sqrt{\omega^2 - k^2 c^2}}
\]
so that
\[
P(r,\theta,z,t) = 2\pi\rho_0 c \int_{-\infty}^{\infty}\!\!\int_0^{\infty} \frac{\omega}{\sqrt{\omega^2 - k^2 c^2}}\; e^{i\sqrt{\frac{\omega^2}{c^2} - k^2}\,z - i\omega t}\, J_0(kr)\, k\,dk\,d\omega.
\]
..................
Integral Transforms

An integral transform of a function f(x) relative to a family of functions h(k,x) is the linear transformation
\[
(Tf)(k) = \int h(k,x)\,f(x)\,dx.
\]
In most cases there is also an inverse transform from which f can be recovered:
\[
f(x) = \int \tilde h(k,x)\,(Tf)(k)\,dk.
\]
The class of functions for which the transform exists depends on h. The most common integral transform is the Fourier transform:
\[
\hat f(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ikx}\, f(x)\,dx. \tag{9.1}
\]
Given the Fourier transform, \(\hat f\), the original function, f, can be reconstructed using the Fourier inversion formula,
\[
f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ikx}\, \hat f(k)\,dk. \tag{9.2}
\]
Note that substituting (9.1) into (9.2) gives the completeness relation
\[
\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ik(x-x')}\,dk = \delta(x-x')
\]
and substituting (9.2) into (9.1) gives the orthogonality relation
\[
\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i(k-k')x}\,dx = \delta(k-k').
\]
Note that the norm
\[
\|f\| = \sqrt{\int_{-\infty}^{\infty} |f(x)|^2\,dx} \tag{9.3}
\]
is preserved by the Fourier transform in the sense that
\[
\|f\|^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left(\int_{-\infty}^{\infty} e^{ikx}\,\overline{\hat f(k)}\,dk\right) \left(\int_{-\infty}^{\infty} e^{-iqx}\,\hat f(q)\,dq\right) dx
= \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \overline{\hat f(k)}\,\hat f(q)\,\delta(k-q)\,dq\,dk
= \int_{-\infty}^{\infty} |\hat f(k)|^2\,dk
\]
so that we end up with the relation \(\|f\| = \|\hat f\|\).

Another identity arising for Fourier integrals is the relation between convolution and multiplication. The convolution of f with g is given by
\[
(f * g)(x) = \int_{-\infty}^{\infty} g(x-y)\,f(y)\,dy.
\]
Substituting the Fourier decompositions of g and f into the convolution gives
\[
(f*g)(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left(\int_{-\infty}^{\infty} e^{-ik(x-y)}\,\hat g(k)\,dk\right)\left(\int_{-\infty}^{\infty} e^{-iqy}\,\hat f(q)\,dq\right) dy
= \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} e^{-ikx}\,\hat g(k)\,\hat f(q)\,\delta(k-q)\,dq\,dk
= \int_{-\infty}^{\infty} e^{-ikx}\,\hat g(k)\,\hat f(k)\,dk
\]
so that
\[
\widehat{(f*g)}(k) = \sqrt{2\pi}\,\hat g(k)\,\hat f(k).
\]

Example 9.1: Some examples of Fourier transforms. First, a Gaussian,
\[
f(x) = e^{-\lambda x^2},
\]
gives
\[
\hat f(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ikx - \lambda x^2}\,dx
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\lambda\left(x - \frac{ik}{2\lambda}\right)^2 - \frac{k^2}{4\lambda}}\,dx
= \frac{1}{\sqrt{2\lambda}}\, e^{-\frac{k^2}{4\lambda}}.
\]
Note that the Fourier transform of a Gaussian is a Gaussian. Further, the broader f is, the more sharply peaked \(\hat f\) is, and conversely.

Now consider a Lorentzian,
\[
f(x) = \frac{1}{x^2 + \lambda^2}.
\]
To compute the Fourier transform it's best to use the calculus of residues. One finds
\[
\hat f(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ikx}\,\frac{1}{x^2+\lambda^2}\,dx
= \sqrt{\frac{\pi}{2}}\;\frac{1}{\lambda}\, e^{-\lambda|k|}.
\]
Again, the more sharply peaked the Lorentzian, the broader its Fourier transform. Note that the Fourier inversion formula shows that the Fourier transform of \(e^{-\lambda|x|}\) is
\[
\widehat{e^{-\lambda|x|}}(k) = \sqrt{\frac{2}{\pi}}\;\frac{\lambda}{k^2 + \lambda^2}.
\]
Finally, consider a square pulse,
\[
f(x) = \begin{cases} 1 & \text{if } a < x < b \\ 0 & \text{otherwise.} \end{cases}
\]
Then
\[
\hat f(k) = \frac{1}{\sqrt{2\pi}} \int_a^b e^{ikx}\,dx
= \frac{1}{ik\sqrt{2\pi}}\left(e^{ikb} - e^{ika}\right).
\]
..................
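The Gaussian pair above can be verified numerically in exactly this convention, \(\hat f(k) = \frac{1}{\sqrt{2\pi}}\int e^{ikx} e^{-\lambda x^2} dx = e^{-k^2/4\lambda}/\sqrt{2\lambda}\). A stdlib-Python sketch (the helper name `fourier_gauss` and the value \(\lambda = 0.8\) are ours):

```python
import cmath, math

def fourier_gauss(k, lam, upper=12.0, n=60000):
    # midpoint-rule approximation of (1/sqrt(2 pi)) int e^{ikx - lam x^2} dx;
    # the Gaussian decay makes truncation at +/- upper harmless
    h = 2.0 * upper / n
    s = 0.0 + 0.0j
    for j in range(n):
        x = -upper + (j + 0.5) * h
        s += cmath.exp(1j * k * x - lam * x * x)
    return s * h / math.sqrt(2.0 * math.pi)

lam = 0.8
for k in (0.0, 1.0, 2.5):
    print(k, fourier_gauss(k, lam),
          math.exp(-k * k / (4.0 * lam)) / math.sqrt(2.0 * lam))
```

The imaginary part comes out at roundoff level, as it must for an even real f.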
Not all functions can be Fourier transformed. For example, \(f(x) = e^x\) has no Fourier transform. In general any integrable function has a bounded Fourier transform, and any function for which the norm (9.3) is finite has a Fourier transform with equal norm. The general question of which functions can be Fourier transformed, and what the Fourier transform means (recall that the Fourier transform of 1 is \(\sqrt{2\pi}\,\delta(k)\), so that it doesn't exist as a function even though we know how to make sense of it), is involved. Further, given a function which can be Fourier transformed, there is an extensive theory which indicates how regular a function the Fourier transform is.

Let's see what can be said of the examples above. Note that for the square pulse \(\hat f\) goes to 0 as \(k \to \infty\) much more slowly than in the previous two examples. This is typical of Fourier transforms of discontinuous functions. In fact there is a relation between the differentiability of f and the rate at which \(\hat f(k) \to 0\) for large k. Explicitly, if f is n times differentiable on \(\mathbb{R}\) but not n+1 times differentiable, with \(f^{(j)}\) bounded and
\[
\lim_{|x|\to\infty} f^{(j)}(x) = 0
\]
for \(j \in \{0,1,2,\dots,n\}\), then one can integrate by parts n times, giving
\[
\hat f(k) = \frac{1}{\sqrt{2\pi}\,(ik)^n} \int_{-\infty}^{\infty} e^{ikx}\, f^{(n)}(x)\,dx.
\]
Assume further that the non-differentiability of \(f^{(n)}\) arises from a jump discontinuity at x = a. Then
\[
\hat f(k) = \frac{1}{\sqrt{2\pi}\,(ik)^n} \left(\int_{-\infty}^{a} e^{ikx}\, f^{(n)}(x)\,dx + \int_{a}^{\infty} e^{ikx}\, f^{(n)}(x)\,dx\right)
= \frac{1}{\sqrt{2\pi}\,(ik)^{n+1}} \left(e^{ika}\left(f^{(n)}(a-0^+) - f^{(n)}(a+0^+)\right) + \int_{-\infty}^{a} e^{ikx}\, f^{(n+1)}(x)\,dx + \int_{a}^{\infty} e^{ikx}\, f^{(n+1)}(x)\,dx\right).
\]
Note that additional integrations by parts won't increase the power of \(\frac{1}{k}\) multiplying the boundary term, so that the rate at which \(\hat f(k) \to 0\) as \(k \to \infty\) is \(\frac{1}{k^{n+1}}\). If there are additional isolated singular points, the integration region can be broken up further and the singularities dealt with individually.

The small-k behavior of \(\hat f(k)\) is determined by the large-x behavior of f(x) in the sense that
\[
\hat f(0) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\,dx.
\]
Further,
\[
\hat f'(0) = \frac{i}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x\, f(x)\,dx,
\]
so that the differentiability of \(\hat f(k)\) at k = 0 is determined by the rate at which \(f(x) \to 0\) as \(|x| \to \infty\). It is a fact that \(\hat f(k)\) is analytic if \(f(x) \to 0\) exponentially fast as \(|x| \to \infty\). Conversely, if f is analytic and integrable in a strip, the rate of exponential decrease of \(\hat f(k)\) is the distance from the real line to the closest singularity of f. This is precisely what happened for the Lorentzian.

Example 9.2: Consider a problem in one dimensional acoustics: a sound source at x = 0 given by a velocity condition \(v(0,t) = f(t)\). Then the acoustic pressure satisfies the wave equation
\[
\left(\frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) P(x,t) = 0
\]
with the boundary condition
∂P = −ρ0 f ′ (t). ∂x x=0
Since this is one dimensional we can solve the problem: P (x, t) = ρ0 cf (t −
x ). c
However, this approach cannot be generalized to higher dimensions, so we will see how to use Fourier transforms to solve this problem. Write 1 f (t) = √ 2π
Z
where 1 fˆ(ω) = √ 2π
∞
e−iωt fˆ(ω) dω
−∞
Z
∞
eiωt f (t) dt
−∞
is the Fourier transform of f . Similarly write Z ∞ 1 e−iωt PA (x, ω) dω. P (x, t) = √ 2π −∞ Here PA (x, ω) is the Fourier transform of P (x, t) with respect to t. Then Z ∞ ∂2 1 ∂2 1 − 2 2 √ e−iωt PA (x, ω) dω 0= 2 ∂x c ∂t 2π −∞ Z ∞ 1 ω2 ∂2 =√ PA (x, ω) dω + e−iωt ∂x2 c2 2π −∞ with the boundary condition Z ∞ Z ∞ ∂ 1 ∂ 1 −iωt √ e PA (x, ω) dω x=0 = −ρ0 √ e−iωt fˆ(ω) dω. ∂x 2π −∞ ∂t 2π −∞
It follows that
with the boundary condition
Solving one finds
∂2 ω2 PA (x, ω) = 0 + ∂x2 c2 ∂PA (x, ω) = iρ0 ω fˆ(ω). x=0 ∂x ω PA (x, ω) = ρ0 cfˆ(ω) ei c x
132
(9.4)
Integral Transforms
so that
ρ0 c P (x, t) = √ 2π
Z
∞
x fˆ(ω) e−iω(t− c ) dω
−∞
= ρ0 cf (t −
x ). c
Now, rather than radiating into free space, imagine the source is mounted at one end of a closed resonator of length L. Then in addition to (9.4) one has ∂PA (x, ω) =0 x=L ∂x
so that
PA (x, ω) =
ω iρ0 cfˆ(ω) cos (L − x) . ω sin( c L) c
..................
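The free-space result has a simple discrete analogue worth checking: multiplying \(\hat f(\omega)\) by \(e^{i\omega x/c}\) delays the signal by \(x/c\). On a periodic grid a phase ramp on the DFT coefficients produces an exact circular shift. A stdlib-Python sketch (direct \(O(N^2)\) DFT for transparency; the forward transform uses the \(e^{+i\omega t}\) sign convention of the text):

```python
import cmath, math

def dft(a, sign):
    # generic DFT: output[k] = sum_n a[n] exp(sign * 2 pi i k n / N)
    N = len(a)
    return [sum(a[n] * cmath.exp(sign * 2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 128
f = [math.exp(-((j - 40) ** 2) / 50.0) for j in range(N)]   # a smooth pulse
F = dft([complex(v) for v in f], +1)                        # e^{+i omega t} convention
d = 17                                                      # delay in samples (x/c)
Fd = [F[k] * cmath.exp(2j * math.pi * k * d / N) for k in range(N)]
g = [z.real / N for z in dft(Fd, -1)]                       # inverse transform

err = max(abs(g[n] - f[(n - d) % N]) for n in range(N))
print(err)  # tiny: the phase ramp shifts the pulse exactly
```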
Example 9.3: Now consider a radiator in 3 dimensions. Let S be the surface of the radiator. For a point \(s \in S\) let the normal component of the velocity of the radiator be \(u(s,t)\). Then the acoustic pressure satisfies
\[
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) P(x,t) = 0
\]
with the boundary condition
\[
-n\cdot\nabla P\big|_{x\in S} = \rho_0\,\frac{\partial u}{\partial t}.
\]
Fourier transforming with respect to time,
\[
u(s,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat u(s,\omega)\,e^{-i\omega t}\,d\omega
\qquad\text{and}\qquad
P(x,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} P_A(x,\omega)\,e^{-i\omega t}\,d\omega,
\]
one has
\[
\left(\nabla^2 + \frac{\omega^2}{c^2}\right) P_A(x,\omega) = 0
\]
with the boundary condition
\[
n\cdot\nabla P_A\big|_{x\in S} = i\rho_0\,\omega\,\hat u.
\]
If the radiator is in free space then there is an additional boundary condition at spatial infinity: the pressure wave must be outgoing, so that it satisfies the Sommerfeld radiation condition
\[
P_A(x,\omega) \sim \text{const}\;\frac{e^{i\frac{\omega}{c}\|x\|}}{\|x\|}
\]
for large \(\|x\|\). This problem was studied in Example 7.8. Change to spherical coordinates and recall that for \(\frac{\omega}{c}r \gg 1\) one has (if the integral is not 0)
\[
P_A(x,\omega) \approx \frac{-i\rho_0\,\omega}{4\pi} \left(\int_S \hat u(s,\omega)\,d\sigma(s)\right) \frac{e^{i\frac{\omega}{c}r}}{r}.
\]
It follows that
\[
P(x,t) \approx \frac{-i\rho_0}{2^{5/2}\pi^{3/2}} \int_S \int_{-\infty}^{\infty} \omega\,\hat u(s,\omega)\,\frac{e^{i\frac{\omega}{c}(r-ct)}}{r}\,d\omega\,d\sigma(s)
\]
so that
\[
P(x,t) \approx \frac{\rho_0}{2^{5/2}\pi^{3/2}}\,\frac{1}{r}\,\frac{\partial}{\partial t} \int_S \int_{-\infty}^{\infty} \hat u(s,\omega)\,e^{i\frac{\omega}{c}(r-ct)}\,d\omega\,d\sigma(s)
= \frac{\rho_0}{4\pi}\,\frac{\int_S \frac{\partial}{\partial t}\,u\!\left(s, t-\frac{r}{c}\right)d\sigma(s)}{r}.
\]
As a concrete example consider the case in which the source begins vibrating suddenly at t = 0. Assume the vibration is at frequency \(\omega_0\), but with exponentially decreasing amplitude, so that
\[
\int_S u(s,t)\,d\sigma(s) = \begin{cases} 0 & \text{if } t < 0 \\ A\,e^{-\gamma t}\cos(\omega_0 t) & \text{if } t \ge 0 \end{cases}
\]
for some \(\omega_0 \in \mathbb{R}\) and \(A, \gamma \in (0,\infty)\). It follows that
\[
P(x,t) \approx \frac{\rho_0 A}{4\pi}\,\begin{cases} 0 & \text{if } t < \frac{r}{c} \\[4pt] \dfrac{\delta\!\left(t - \frac{r}{c}\right) - \operatorname{Re}\left((i\omega_0 + \gamma)\,e^{-(i\omega_0+\gamma)\left(t-\frac{r}{c}\right)}\right)}{r} & \text{if } t \ge \frac{r}{c} \end{cases} \tag{9.5}
\]
if r is large enough. The \(\delta\!\left(t - \frac{r}{c}\right)\) is the spike in P which arises from turning the velocity on suddenly at t = 0.
..................
Example 9.4: The previous example solved a boundary value problem by Fourier transforming in time. Now consider an initial value problem. Imagine that the acoustic pressure and its rate of change are known at t = 0,
\[
P(x,0) = f(x)
\qquad\text{and}\qquad
\frac{\partial P}{\partial t}\Big|_{t=0} = g(x).
\]
Then
\[
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) P(x,t) = 0
\]
with the initial conditions given above. Fourier transforming with respect to x one has
\[
P(x,t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbb{R}^3} p(k,t)\,e^{ik\cdot x}\,d^3k
\]
with
\[
\left(k\cdot k + \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) p(k,t) = 0.
\]
Setting \(k = \sqrt{k\cdot k}\), there are coefficients a(k) and b(k) with
\[
P(x,t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbb{R}^3} \left(a(k)\cos(ckt) + b(k)\sin(ckt)\right) e^{ik\cdot x}\,d^3k.
\]
Applying the initial conditions gives
\[
a(k) = \hat f(k)
\qquad\text{and}\qquad
b(k) = \frac{1}{ck}\,\hat g(k).
\]
..................
Example 9.5: A linear integral operator acting on a function f is a transformation of the form
\[
(\mathcal{F}f)(t) = \int_{-\infty}^{\infty} G(t,s)\,f(s)\,ds.
\]
Sometimes G is referred to as the integral kernel of this linear operator. Now consider a linear, time translation invariant operator. Translation invariance means that
\[
\int_{-\infty}^{\infty} G(t,s)\,f(s)\,ds = \int_{-\infty}^{\infty} G(t+\tau, s+\tau)\,f(s)\,ds
\]
for any \(\tau\). Choosing \(\tau = -s\) one sees that the only way for this transformation to be time translation invariant is if \(G(t,s) = g(t-s)\) for \(g(\tau) = G(\tau, 0)\). But then
\[
(\mathcal{F}f)(t) = \int_{-\infty}^{\infty} g(t-s)\,f(s)\,ds
\]
is just a convolution, so that Fourier transforming yields
\[
\widehat{(\mathcal{F}f)}(\omega) = \sqrt{2\pi}\,\hat g(\omega)\,\hat f(\omega).
\]
Linear time translation invariant filters abound in practice. The impedance in a one-dimensional acoustics problem is an example. If a linear relation exists between \(P(x,t)\) and \(v(x,t)\) at some point \(x_0\) then, assuming that the mean state of the fluid in question is constant in time, the relation between P and v must be time translation invariant, so that on Fourier transforming one finds that there is a function \(Z(\omega)\) with
\[
\hat P(x_0,\omega) = Z(\omega)\,\hat v(x_0,\omega).
\]
Similarly, if some electro-acoustic transducer is sufficiently linear then there is a time translation invariant linear relation between the pressure on the face of the transducer, \(P_f(t)\), and the voltage, \(V(t)\) (or current, \(I(t)\)), produced by the transducer (assuming the characteristics of the transducer don't change with time, which in practice is never exactly true, but is reasonable over short enough spans of time). Fourier transforming yields the relation
\[
\hat P_f(\omega) = M(\omega)\,\hat V(\omega).
\]
Perhaps the best known examples are provided by A.C. circuit theory. Given a voltage source \(V(t)\), the current \(I(t)\) at some point in the circuit is related to V through a frequency dependent impedance \(Z(\omega)\) through Ohm's law
\[
\hat V(\omega) = Z(\omega)\,\hat I(\omega).
\]
Circuit diagrams provide an algorithm for producing Z from the impedances of individual circuit elements.
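The statement that a time-translation-invariant filter is diagonalized by the Fourier transform has a direct discrete check: circular convolution in time equals pointwise multiplication of DFT coefficients (the \(\sqrt{2\pi}\) of the continuum formula is absorbed by the DFT normalization). A stdlib-Python sketch with an illustrative signal and kernel:

```python
import cmath, math

def dft(a):
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(A):
    N = len(A)
    return [sum(A[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 64
f = [math.sin(2.0 * math.pi * 3.0 * n / N) for n in range(N)]   # input signal
g = [math.exp(-n / 4.0) for n in range(N)]                      # decaying filter kernel

# circular convolution done directly ...
direct = [sum(g[(n - m) % N] * f[m] for m in range(N)) for n in range(N)]
# ... and via multiplication of DFT coefficients
via_dft = [z.real for z in idft([Gk * Fk for Gk, Fk in zip(dft(g), dft(f))])]

err = max(abs(u - v) for u, v in zip(direct, via_dft))
print(err)  # tiny: the two computations agree
```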
As a concrete example consider a resonator of length $L$ with
$$\Big( \frac{d^2}{dx^2} + \big(\tfrac{\omega}{c} + i\alpha\big)^2 \Big) P_A(x) = 0,$$
$P_A'(L) = 0$ and $P_A'(0) = i\omega\rho_0\, \hat u_0(\omega)$. Here $\hat u_0(\omega)$ is the Fourier transform of the piston motion and $\alpha$ is small. Then
$$P_A(x) = \frac{i\omega\rho_0\, \hat u_0(\omega)\, \cos\big( (\tfrac{\omega}{c} + i\alpha)(L-x) \big)}{(\tfrac{\omega}{c} + i\alpha)\, \sin\big( (\tfrac{\omega}{c} + i\alpha)L \big)}.$$
It follows that the impedance at $x = 0$ is
$$\frac{P_A(0)}{\hat u_0(\omega)} = \frac{i\omega\rho_0}{(\tfrac{\omega}{c} + i\alpha)}\, \cot\big( (\tfrac{\omega}{c} + i\alpha)L \big).$$
In particular the acoustic pressure at $x = 0$ is
$$P(0,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{i\omega\rho_0}{(\tfrac{\omega}{c} + i\alpha)}\, \cot\big( (\tfrac{\omega}{c} + i\alpha)L \big)\, \hat u_0(\omega)\, e^{-i\omega t}\, d\omega.$$
Some words about doing the integral above. If $\hat u_0(\omega)$ is analytic in the lower half-plane then the integrand has a second-order pole at $\omega = -ic\alpha$ and simple poles at $\omega = j\frac{c\pi}{L} - ic\alpha$ for $j \in \{\pm 1, \pm 2, \ldots\}$. Further, if $\hat u_0(\omega)$ is polynomially bounded then (since $t > 0$) one can close the $\omega$ integration contour in the lower half-plane. One obtains
$$P(0,t) = \sqrt{2\pi}\, \frac{\rho_0 c^2}{L} \sum_{j \neq 0} \frac{j - i\frac{\alpha L}{\pi}}{j}\, \hat u_0\big( j\tfrac{c\pi}{L} - ic\alpha \big)\, e^{-(ij\frac{c\pi}{L} + c\alpha)t} + \sqrt{2\pi}\, \frac{\rho_0 c^2}{L} \Big[ (1 - \alpha c t)\, \hat u_0(-ic\alpha) - i\alpha c\, \hat u_0'(-ic\alpha) \Big] e^{-c\alpha t}.$$
Note that
$$\hat u(0) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} u(t)\, dt.$$
In practice $\hat u(\omega)$ is $0$ near $\omega = 0$, so that the piston, on the average, stays at $x = 0$. Thus the $j = 0$ term is usually negligible. Further, $\hat u(\omega)$ is often not analytic everywhere, but just in a neighborhood of some of the poles $j\frac{c\pi}{L} - ic\alpha$. There are two extreme cases to consider.
In the first, the piston vibrates at essentially one frequency, say $\omega_0$, so that $\hat u(\omega)$ is essentially a delta function, $\hat u(\omega) = A\,\delta(\omega - \omega_0)$, and the integral above is straightforward. In the other case $\hat u(\omega)$ is non-zero and slowly varying relative to the cotangent. In this case the integral is dominated by the regions close to the poles where $\hat u(\omega)$ is non-zero. Then the cotangent can be approximated by its leading term. Assuming that $\hat u(\omega)$ is non-zero only in a neighborhood of $\omega_j = j\frac{c\pi}{L}$ one can use the asymptotic formula
$$P(0,t) \approx \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{i\omega_j \rho_0}{(\tfrac{\omega_j}{c} + i\alpha)}\, \frac{c/L}{\omega - \omega_j + i\alpha c}\, \hat u_0(\omega)\, e^{-i\omega t}\, d\omega = \sqrt{2\pi}\, \frac{\rho_0 c^2}{L}\, \frac{j - i\frac{\alpha L}{\pi}}{j}\, \hat u_0\big( j\tfrac{c\pi}{L} - ic\alpha \big)\, e^{-(ij\frac{c\pi}{L} + c\alpha)t}$$
..................
The last transform we'll look at is the Laplace transform. The Laplace transform is not directly related to a self-adjoint operator. Instead it is a variation of the Fourier transform. The Laplace transform of a function $f$ defined on $(0,\infty)$ is
$$\tilde f(s) = \int_0^{\infty} f(x)\, e^{-sx}\, dx.$$
The transform can be inverted. The inversion formula is
$$f(x) = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} \tilde f(s)\, e^{sx}\, ds.$$
Any $\gamma$ which is to the right of all the poles of $\tilde f(s)$ will work. To see that this inversion formula works note that
$$\frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} \tilde f(s)\, e^{sx}\, ds = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} \int_0^{\infty} f(y)\, e^{s(x-y)}\, dy\, ds = \frac{e^{\gamma x}}{2\pi} \int_{-\infty}^{\infty} \int_0^{\infty} e^{-\gamma y} f(y)\, e^{i\omega(x-y)}\, dy\, d\omega = \begin{cases} 0 & \text{if } x < 0 \\ f(x) & \text{if } x > 0. \end{cases}$$
Note that another form of the restriction on $\gamma$ is that the function
$$e^{-\gamma x} f(x) \cdot \begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{if } x > 0 \end{cases}$$
have a Fourier transform.
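The defining integral can be checked numerically on a standard pair: $f(x) = e^{-ax}$ has $\tilde f(s) = 1/(s+a)$. A quick quadrature sketch (the values of $a$ and $s$ are arbitrary choices for the check):

```python
import numpy as np

# Check the Laplace-transform pair f(x) = e^{-a x} <-> 1/(s + a)
# by direct quadrature of the defining integral on [0, 50]; the
# integrand decays like e^{-(s+a)x}, so the truncated tail is tiny.
a, s = 1.5, 2.0
x = np.linspace(0.0, 50.0, 200001)
f = np.exp(-a * x)
laplace = np.trapz(f * np.exp(-s * x), x)
exact = 1.0 / (s + a)
```

The same quadrature idea fails for functions that grow faster than exponentially, which is exactly the restriction on $\gamma$ stated above.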
Green’s Functions
Let $L$ be a partial differential operator in variables $x_1, x_2, \ldots, x_n$. In general, a Green's function for $L$ is a solution of the equation
$$L\, G(x_1, y_1, \ldots, x_n, y_n) = \delta(x_1 - y_1)\, \delta(x_2 - y_2) \cdots \delta(x_n - y_n).$$
Note that a Green's function is not unique. Given a Green's function $G(x_1, y_1, \ldots, x_n, y_n)$, the function $G(x_1, y_1, \ldots, x_n, y_n) + f(x_1, x_2, \ldots, x_n)$ is also a Green's function whenever $Lf = 0$.

Example 10.1: Let
$$L = \frac{d^2}{dx^2}.$$
If
$$\frac{d^2}{dx^2}\, G(x,y) = \delta(x - y)$$
then
$$G(x,y) = \frac{1}{2}\,|x - y| + ax + b$$
for any constants $a$ and $b$. If $L = \nabla^2$ then in $\mathbb{R}^2$
$$G(x,y) = \frac{-1}{2\pi}\, \ln(\|x - y\|)$$
is a Green's function and in $\mathbb{R}^3$
$$G(x,y) = \frac{-1}{4\pi\|x - y\|}$$
is a Green's function. Now let $L = \nabla^2 + k^2$ in $\mathbb{R}^3$. Then
$$(\nabla^2 + k^2)\, G(x,y) = \delta(x - y)$$
implies that
$$G(x,y) = \frac{-e^{ik\|x - y\|}}{4\pi\|x - y\|}$$
is a Green's function for $L$. These last three examples are all called free space Green's functions. They can all be obtained using Fourier transforms. The functions
$$G(x,y) = \frac{-1}{(2\pi)^n} \int_{\mathbb{R}^n} \frac{e^{iq\cdot(x-y)}}{\|q\|^2 \pm i0^+}\, d^n q$$
are free space Green's functions for $\nabla^2$ on $\mathbb{R}^n$ and
$$G(x,y) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} \frac{e^{iq\cdot(x-y)}}{k^2 - \|q\|^2 \pm i0^+}\, d^n q$$
are free space Green's functions for $\nabla^2 + k^2$ on $\mathbb{R}^n$.
..................
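The defining property $LG = \delta$ can be seen concretely on a grid: applying the discrete second difference to $G(x,y) = \frac{1}{2}|x-y|$ produces a discrete delta at $x = y$ whose total weight is $1$. A minimal sketch:

```python
import numpy as np

# Discretize G(x, y) = |x - y|/2 and apply the second difference:
# the result vanishes away from the kink at x = y and carries unit
# "area" there, i.e. a discrete delta function.
x = np.linspace(-1.0, 1.0, 2001)
h = x[1] - x[0]
y = x[1250]               # place the source on a grid point (y = 0.25)
G = np.abs(x - y) / 2.0
lap = (G[2:] - 2.0 * G[1:-1] + G[:-2]) / h**2
area = lap.sum() * h      # should be 1, the integral of delta(x - y)
```

The sum telescopes to the jump in $G'$ across $x = y$, which is exactly the mechanism used below to normalize one-dimensional Green's functions.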
Now let $G_1$ and $G_2$ be different Green's functions for the same operator $L$. Then $L(G_1 - G_2) = 0$, so that $G_1$ and $G_2$ differ by a solution to $Lf = 0$. This allows one to choose a Green's function satisfying some boundary conditions. Given any Green's function $G$ one must choose an $f$ with $Lf = 0$ so that $G + f$ satisfies the desired boundary conditions.

Example 10.2: Let's construct a Green's function for $L = \nabla^2 + k^2$ in $\{(x,y,z) \in \mathbb{R}^3 \mid x > 0\}$ which satisfies Dirichlet boundary conditions at $x = 0$. Let
$$G(x, x') = \frac{-e^{ik\|x - x'\|}}{4\pi\|x - x'\|}$$
be the free space Green's function for $L$. Given
$$x' = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}$$
define
$$\tilde x' = \begin{pmatrix} -x' \\ y' \\ z' \end{pmatrix}.$$
Note that if $x'$ is in the domain then $\tilde x'$ is not. Thus
$$L\, G(x, \tilde x') = 0$$
for $x$ in the domain, so that
$$G_D(x, x') = G(x, x') - G(x, \tilde x')$$
is a Green's function. A moment's reflection shows that when $x = 0$ one has $G_D = 0$, so that $G_D$ is the desired Green's function. The procedure used is known as the method of images. The Green's function for the Helmholtz equation which satisfies Neumann boundary conditions at $x = 0$ can be constructed similarly:
$$G_N(x, x') = G(x, x') + G(x, \tilde x').$$
The Dirichlet and Neumann Green's functions can also be constructed using eigenfunction expansions, a sine transform in the Dirichlet case:
$$G_D(x, x') = \frac{2}{\pi^3} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_0^{\infty} \frac{\sin(q_x x)\, \sin(q_x x')\, e^{iq_y(y-y') + iq_z(z-z')}}{k^2 - \|q\|^2 + i0^+}\, dq_x\, dq_y\, dq_z,$$
and a cosine transform in the Neumann case:
$$G_N(x, x') = \frac{2}{\pi^3} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_0^{\infty} \frac{\cos(q_x x)\, \cos(q_x x')\, e^{iq_y(y-y') + iq_z(z-z')}}{k^2 - \|q\|^2 + i0^+}\, dq_x\, dq_y\, dq_z.$$
..................
There is a general technique for constructing Green's functions for ordinary differential operators of the form
$$L = \frac{d}{dx} P \frac{d}{dx} + Q, \qquad (10.1)$$
for $x \in (a,b)$, satisfying any specified boundary conditions at $a$ and $b$. To construct such a Green's function let $\psi_1$ and $\psi_2$ be any solutions of
$$\Big( \frac{d}{dx} P \frac{d}{dx} + Q \Big)\, \psi = 0$$
with $\psi_1$ satisfying the given boundary condition at $a$ and $\psi_2$ satisfying the given boundary condition at $b$. Then
$$G(x,y) = c\, \psi_1(x_<)\, \psi_2(x_>)$$
with $x_< = \min(x,y)$ and $x_> = \max(x,y)$. The constant $c$ is determined so that
$$1 = \lim_{\epsilon \downarrow 0} \int_{y-\epsilon}^{y+\epsilon} \delta(x-y)\, dx = \lim_{\epsilon \downarrow 0} \int_{y-\epsilon}^{y+\epsilon} \Big( \frac{d}{dx} P \frac{d}{dx} + Q \Big) G(x,y)\, dx = P(y) \lim_{\epsilon \downarrow 0} \Big[ \frac{d}{dx} G(x,y) \Big]_{y-\epsilon}^{y+\epsilon} = c\, P(y)\, \big( \psi_1(y)\psi_2'(y) - \psi_1'(y)\psi_2(y) \big).$$
Recalling the Wronskian (with the convention $\mathcal{W}(\psi_1,\psi_2) = \psi_1'\psi_2 - \psi_1\psi_2'$ used throughout),
$$c = \frac{-1}{P(y)\, \mathcal{W}(\psi_1, \psi_2; y)}$$
so that
$$G(x,y) = -\frac{\psi_1(x_<)\, \psi_2(x_>)}{P(y)\, \mathcal{W}(\psi_1, \psi_2; y)}. \qquad (10.2)$$
Abel's formula shows that $c$ is indeed a constant. Often one is given an ordinary differential operator in standard form rather than the form (10.1),
$$\tilde L = \frac{d^2}{dx^2} + \frac{P'(x)}{P(x)}\frac{d}{dx} + \frac{Q(x)}{P(x)} = \frac{1}{P(x)}\, L.$$
Note that one has
$$\tilde L\, G(x,y) = \frac{1}{P(x)}\, \delta(x-y)$$
where $G(x,y)$ is given in (10.2), so that
$$\tilde L\, \tilde G(x,y) = \delta(x-y)$$
where
$$\tilde G(x,y) = P(y)\, G(x,y).$$
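The construction can be exercised numerically: build $G$ from two homogeneous solutions via the Wronskian and check that $G'$ jumps by $1/P(y)$ across $x = y$. The sketch below uses $L = d^2/dx^2 + k^2$ with the outgoing solutions $\psi_1 = e^{-ikx}$, $\psi_2 = e^{ikx}$, and assumes the Wronskian convention $\mathcal{W}(\psi_1,\psi_2) = \psi_1'\psi_2 - \psi_1\psi_2'$:

```python
import numpy as np

# Formula (10.2) for L = d^2/dx^2 + k^2 with psi1 = e^{-ikx},
# psi2 = e^{+ikx}: here W = psi1' psi2 - psi1 psi2' = -2ik and P = 1,
# so G(x, y) = -psi1(x_<) psi2(x_>)/(P W) = e^{ik|x-y|}/(2ik).
k = 2.0
psi1 = lambda x: np.exp(-1j * k * x)
psi2 = lambda x: np.exp(1j * k * x)
W = -2j * k
P = 1.0

def G(x, y):
    lo, hi = min(x, y), max(x, y)
    return -psi1(lo) * psi2(hi) / (P * W)

# The derivative of G must jump by 1/P(y) = 1 across x = y.
y, h = 0.3, 1e-6
right = (G(y + 2 * h, y) - G(y, y)) / (2 * h)
left = (G(y, y) - G(y - 2 * h, y)) / (2 * h)
jump = right - left
```

The same function, with the homogeneous solutions swapped for ones satisfying given boundary conditions, reproduces the Green's functions of the examples that follow.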
Example 10.3: The simplest example is
$$L = \frac{d^2}{dx^2}$$
on the interval $[a,b]$ with boundary conditions
$$\alpha_a f(a) + \beta_a f'(a) = 0, \qquad \alpha_b f(b) + \beta_b f'(b) = 0.$$
For this example the general solution of $L\psi = 0$ is $\psi(x) = A + Bx$. It follows that one may choose
$$\psi_1(x) = -\frac{\alpha_a a + \beta_a}{\alpha_a} + x, \qquad \psi_2(x) = -\frac{\alpha_b b + \beta_b}{\alpha_b} + x.$$
Thus
$$G(x,y) = \frac{-1}{\frac{\alpha_a a + \beta_a}{\alpha_a} - \frac{\alpha_b b + \beta_b}{\alpha_b}} \Big( x_< - \frac{\alpha_a a + \beta_a}{\alpha_a} \Big)\Big( x_> - \frac{\alpha_b b + \beta_b}{\alpha_b} \Big).$$
..................
Example 10.4: Now consider the 1-d Helmholtz equation on the line,
$$L = \frac{d^2}{dx^2} + k^2.$$
The homogeneous solutions are all of the form $\psi(x) = Ae^{ikx} + Be^{-ikx}$. Note that if
$$\psi_1(x) = A_1 e^{ikx} + B_1 e^{-ikx}, \qquad \psi_2(x) = A_2 e^{ikx} + B_2 e^{-ikx}$$
then $\mathcal{W}(\psi_1, \psi_2) = 2ik(A_1 B_2 - B_1 A_2)$. In this general case one has
$$G(x,y) = -\frac{1}{2ik(A_1 B_2 - B_1 A_2)} \big( A_1 e^{ikx_<} + B_1 e^{-ikx_<} \big)\big( A_2 e^{ikx_>} + B_2 e^{-ikx_>} \big).$$
The various Green's functions are determined by the homogeneous solutions they are asymptotic to as $x \to \pm\infty$. The "outgoing" Green's function is given by choosing $A_1 = 0 = B_2$. One finds in this case that
$$G(x,y) = \frac{1}{2ik}\, e^{ik|x-y|}.$$
If $k$ is pure imaginary, $k = i\kappa$, then the bounded Green's function is given by
$$G(x,y) = -\frac{1}{2\kappa}\, e^{-\kappa|x-y|}.$$
..................
Example 10.5: Consider the operators given by the differential equations of Euler type
$$L = \frac{d^2}{dx^2} + \frac{\alpha}{x}\frac{d}{dx} + \frac{\beta}{x^2} = \frac{1}{x^\alpha}\frac{d}{dx}\, x^\alpha \frac{d}{dx} + \frac{\beta}{x^2}.$$
Solutions of the related ordinary differential equation may be obtained using the Ansatz $\psi(x) = x^r$. The exponent $r$ satisfies the indicial equation
$$r(r-1) + \alpha r + \beta = 0$$
so that one obtains two solutions from the two roots
$$r_\pm = \frac{1-\alpha}{2} \pm \sqrt{\Big( \frac{\alpha-1}{2} \Big)^2 - \beta}.$$
Assuming that $r_+ \neq r_-$ one has
$$\mathcal{W}(x^{r_+}, x^{r_-}) = \frac{\sqrt{(\alpha-1)^2 - 4\beta}}{x^\alpha}.$$
It follows that the possible Green's functions of $L$ are of the form
$$G(x,y) = -\frac{y^\alpha\, \big( x_<^{r_+} + A\, x_<^{r_-} \big)\big( x_>^{r_-} + B\, x_>^{r_+} \big)}{(1 - AB)\, \sqrt{(\alpha-1)^2 - 4\beta}}$$
for constants $A$ and $B$.
..................
Another method for constructing Green's functions is by eigenfunction expansion, if one is available. Assume one has an eigenfunction expansion for $L$ given by discrete spectrum $\epsilon_j$, $\psi_j(x)$ and continuous spectrum $\Gamma_n$, $\psi_n(\epsilon, x)$. Let $\eta \in \mathbb{C}$ not be in the spectrum of $L$. Then the Green's function for $L - \eta$ which satisfies the boundary conditions used in defining $L$ may be given by
$$G_\eta(x,y) = \sum_j \frac{\psi_j(x)\, \psi_j(y)}{\epsilon_j - \eta} + \sum_n \int_{\Gamma_n} \frac{\psi_n(\epsilon, x)\, \psi_n(\epsilon, y)}{\epsilon - \eta}\, d\epsilon. \qquad (10.3)$$
Examples are provided by the Fourier, sine and cosine transform representations of the Green's functions for the Helmholtz equation given in Example 10.1 and Example 10.2. In these cases the desired value for $\eta$, say $-k^2$, is in the continuous spectrum, which is a branch cut for $G_\eta(x,y)$. Green's functions are obtained by allowing $\eta = -k^2 \pm i0^+$ to approach $-k^2$ from above or below in the complex plane. Note that different Green's functions are obtained depending on whether $-k^2$ is approached from above or below. Note that, as functions of $\eta$, these Green's functions are analytic in $\eta$ as long as $\eta$ is not in the spectrum of $L$. The discrete eigenvalues of $L$ are simple poles of $G_\eta(x,y)$; the residues at the eigenvalues are the projection operators given as integral operators by $\psi_j(x)\psi_j(y)$. The components of the continuous spectrum can be shown to be branch cuts of $G_\eta(x,y)$. Using the formula
$$\operatorname{Im} \frac{1}{x - i0^+} = \pi\,\delta(x)$$
one finds that
$$G_{\eta+i0^+}(x,y) - G_{\eta-i0^+}(x,y) = 2\pi i \sum_{m \,:\, \eta \in \Gamma_m} \psi_m(\eta, x)\, \psi_m(\eta, y). \qquad (10.4)$$
Note that for second order ordinary differential operators (10.4), in conjunction with (10.2), provides a general method for finding the normalization of the continuous spectrum eigenfunctions. If $L$ is given by (10.1) on $(-\infty, \infty)$ and the Green's function $G_z(x,y)$ for $L - z$ by (10.2), then one has
$$\begin{aligned} G_{\eta+i0^+}(x,y) - G_{\eta-i0^+}(x,y) &= \int_{-\infty}^{\infty} \Big[ G_{\eta+i0^+}(x,w)\, \Big( \frac{d}{dw} P(w) \frac{d}{dw} + Q(w) - \eta \Big) G_{\eta-i0^+}(w,y) \\ &\qquad\qquad - \Big( \Big( \frac{d}{dw} P(w) \frac{d}{dw} + Q(w) - \eta \Big) G_{\eta+i0^+}(x,w) \Big)\, G_{\eta-i0^+}(w,y) \Big]\, dw \\ &= \Big[ P(w)\, \Big( G_{\eta+i0^+}(x,w)\, \frac{d}{dw} G_{\eta-i0^+}(w,y) - G_{\eta-i0^+}(w,y)\, \frac{d}{dw} G_{\eta+i0^+}(x,w) \Big) \Big]_{w=-\infty}^{\infty}. \qquad (10.5) \end{aligned}$$
Example 10.6: Consider
$$L = \frac{d^2}{dx^2}$$
on $(-\infty, \infty)$. Note that the spectrum of $L$ is continuous, is given by $(-\infty, 0)$, and is two-fold degenerate. Let $\eta \in \mathbb{C}$ with $\eta \notin (-\infty, 0)$ and consider the bounded Green's function for $L - \eta$ given by
$$G_\eta(x,y) = -\frac{1}{2\sqrt{\eta}}\, e^{-\sqrt{\eta}\,|x-y|} = -\frac{1}{2\sqrt{\eta}}\, e^{\sqrt{\eta}\, x_<}\, e^{-\sqrt{\eta}\, x_>}$$
with the square root above chosen to have positive real part, in other words with branch cut $(-\infty, 0)$. Then (10.4) and (10.5) give, for $\epsilon \in (-\infty, 0)$,
$$2\pi i \sum_{n=\pm} \psi_n(\epsilon, x)\, \psi_n(\epsilon, y) = -\frac{1}{2i\sqrt{-\epsilon}} \Big( e^{i\sqrt{-\epsilon}\,(x-y)} + e^{-i\sqrt{-\epsilon}\,(x-y)} \Big)$$
from which one obtains the normalization
$$N_\pm(-\epsilon) = \sqrt{\frac{1}{4\pi\sqrt{-\epsilon}}}$$
of the continuum eigenfunctions, obtained previously from comparison to the Fourier transform.
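The branch-cut jump in this example can be checked numerically: approach $\epsilon < 0$ from just above and just below the cut and compare the difference of the two Green's functions with $2\pi i\, N^2 \big( e^{ik(x-y)} + e^{-ik(x-y)} \big)$, $k = \sqrt{-\epsilon}$, using the normalization found above. A quick sketch:

```python
import cmath, math

# G_eta(x, y) = -e^{-sqrt(eta)|x-y|}/(2 sqrt(eta)); cmath.sqrt uses the
# principal branch (cut on the negative real axis, positive real part),
# matching the choice in the text.
def G(eta, x, y):
    root = cmath.sqrt(eta)
    return -cmath.exp(-root * abs(x - y)) / (2 * root)

eps, x, y = -1.0, 0.3, 0.1
delta = 1e-10
jump = G(eps + 1j * delta, x, y) - G(eps - 1j * delta, x, y)

# 2 pi i * N^2 * (e^{ik(x-y)} + e^{-ik(x-y)}) with N^2 = 1/(4 pi k):
k = math.sqrt(-eps)
expected = 2j * math.pi / (4 * math.pi * k) * 2 * math.cos(k * (x - y))
```

The small offset `delta` plays the role of $i0^+$; the agreement degrades linearly as `delta` grows, as expected for a boundary value of an analytic function.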
..................
Example 10.7: Consider $L = \nabla^2$ in free space in cylindrical coordinates. In cylindrical coordinates the Green's function for $L - \eta$ satisfies
$$\Big( \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} + \frac{\partial^2}{\partial z^2} - \eta \Big) G_\eta(r,\theta,z,r',\theta',z') = \frac{\delta(r-r')}{r}\, \delta(\theta-\theta')\, \delta(z-z').$$
It may be obtained by eigenfunction expansion, using a Fourier transform in $z$, a Fourier series in $\theta$ and a Hankel transform in $r$. One obtains
$$G_\eta(r,\theta,z,r',\theta',z') = \frac{1}{4\pi^2} \sum_{m=-\infty}^{\infty} \int_{-\infty}^{\infty} \int_0^{\infty} \frac{e^{im(\theta-\theta') + ik(z-z')}\, J_m(qr)\, J_m(qr')}{k^2 + q^2 - \eta}\, q\, dq\, dk.$$
Alternatively, one may use a mixed approach: eigenfunction expansion (a Fourier transform in $z$ and a Fourier series in $\theta$) for $z$ and $\theta$, and (10.2) for $r$. The two eigenfunction expansions give
$$G_\eta(r,\theta,z,r',\theta',z') = \frac{1}{4\pi^2} \sum_{m=-\infty}^{\infty} \int_{-\infty}^{\infty} e^{im(\theta-\theta') + ik(z-z')}\, g_m(k,r,r')\, dk$$
with
$$\Big( \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} - \frac{m^2}{r^2} + k^2 - \eta \Big)\, g_m(k,r,r') = \frac{\delta(r-r')}{r}.$$
Using (10.2) one finds
$$g_m(k,r,r') = -\frac{J_m(\sqrt{k^2-\eta}\, r_<)\, H_m^{(1)}(\sqrt{k^2-\eta}\, r_>)}{\mathcal{W}\big( J_m(\sqrt{k^2-\eta}\, r),\, H_m^{(1)}(\sqrt{k^2-\eta}\, r) \big)} = \frac{\pi}{2}\, J_m(\sqrt{k^2-\eta}\, r_<)\, H_m^{(1)}(\sqrt{k^2-\eta}\, r_>)$$
so that
$$G_\eta(r,\theta,z,r',\theta',z') = \frac{1}{8\pi} \sum_{m=-\infty}^{\infty} \int_{-\infty}^{\infty} e^{im(\theta-\theta') + ik(z-z')}\, J_m(\sqrt{k^2-\eta}\, r_<)\, H_m^{(1)}(\sqrt{k^2-\eta}\, r_>)\, dk.$$
..................
Example 10.8: Consider $L = \nabla^2$ in free space in spherical coordinates. Assume an outgoing radiation condition at $r = \infty$:
$$\psi(x) \sim \frac{e^{ikr}}{r} \quad \text{as } r \to \infty.$$
The Green's function for $L - \eta$ satisfies
$$\Big( \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2} - \eta \Big) G_\eta(r,\theta,\phi,r',\theta',\phi') = \frac{\delta(r-r')}{r^2}\, \frac{\delta(\theta-\theta')}{\sin\theta}\, \delta(\phi-\phi').$$
Expanding in spherical harmonics one has
$$G_\eta(r,\theta,\phi,r',\theta',\phi') = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} Y_{lm}(\theta,\phi)\, Y_{lm}(\theta',\phi')\, g_{\eta;l,m}(r,r')$$
with
$$\Big( \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} - \frac{l(l+1)}{r^2} - \eta \Big)\, g_{\eta;l,m}(r,r') = \frac{\delta(r-r')}{r^2}.$$
One has
$$g_{\eta;l,m}(r,r') = -\frac{j_l(\sqrt{-\eta}\, r_<)\, h_l^{(+)}(\sqrt{-\eta}\, r_>)}{r'^2\, \mathcal{W}\big( j_l(\sqrt{-\eta}\, r),\, h_l^{(+)}(\sqrt{-\eta}\, r) \big)}.$$
Using
$$\mathcal{W}\big( j_l(\sqrt{-\eta}\, r),\, h_l^{(+)}(\sqrt{-\eta}\, r) \big) = i\, \mathcal{W}\big( j_l(\sqrt{-\eta}\, r),\, y_l(\sqrt{-\eta}\, r) \big) = -\frac{i(2l+1)}{\sqrt{-\eta}\; r^2}$$
one finds
$$g_{\eta;l,m}(r,r') = -i\, \frac{\sqrt{-\eta}}{2l+1}\, j_l(\sqrt{-\eta}\, r_<)\, h_l^{(+)}(\sqrt{-\eta}\, r_>).$$
..................
We end by mentioning some applications. The basic application of a Green's function is to solving inhomogeneous problems of the form $Lf = g$. If a Green's function $G$ for $L$ is known then
$$f(x) = f_0(x) + \int G(x,y)\, g(y)\, d^n y$$
where $Lf_0 = 0$.
Example 10.9: A classic example is Poisson's equation,
$$\nabla^2 \phi(x) = -4\pi\rho(x).$$
By Example 10.1 the solution is given in free space by Coulomb's law,
$$\phi(x) = \int \frac{\rho(y)}{\|x-y\|}\, d^3 y.$$
..................
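Coulomb's law can be checked on a smooth charge distribution: for a unit Gaussian charge $\rho(r) = (2\pi s^2)^{-3/2} e^{-r^2/2s^2}$ the integral evaluates in closed form to $\phi(r) = \operatorname{erf}\!\big(r/(s\sqrt{2})\big)/r$ (a standard result, quoted here without derivation), and one can verify $\nabla^2\phi = -4\pi\rho$ numerically via the radial identity $\nabla^2\phi = (r\phi)''/r$:

```python
import math

# Unit Gaussian charge and its Coulomb potential times r.
s = 1.0

def r_phi(r):                    # r * phi(r) = erf(r / (s sqrt(2)))
    return math.erf(r / (s * math.sqrt(2.0)))

def rho(r):
    return (2 * math.pi * s**2) ** -1.5 * math.exp(-r**2 / (2 * s**2))

# Radial Laplacian by second difference of r*phi.
r, h = 0.7, 1e-4
second = (r_phi(r + h) - 2 * r_phi(r) + r_phi(r - h)) / h**2
lap_phi = second / r
```

Since $\phi$ here depends only on $r$, the full three-dimensional Laplacian reduces to the one-dimensional second difference used above.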
Green's functions can also be used in a subtle way to solve Helmholtz equations with inhomogeneous boundary conditions in general domains, using the Helmholtz-Kirchhoff integral theorem, which is derived as follows. Let
$$\big( \nabla^2 + k^2 \big)\, \psi = 0$$
in some domain $V$ with some inhomogeneous boundary conditions on $\partial V$. Let $G(x,y)$ be any Green's function of $\nabla^2 + k^2$ which is symmetric with respect to interchange of $x$ and $y$ (for example, the free space Green's function). Then
$$\begin{aligned} \psi(x) &= \int_V \big( \nabla_y^2 + k^2 \big) G(x,y)\, \psi(y)\, d^n y \\ &= \int_V \Big[ \big( (\nabla_y^2 + k^2)\, G(x,y) \big)\, \psi(y) - G(x,y)\, \big( (\nabla_y^2 + k^2)\, \psi(y) \big) \Big]\, d^n y \\ &= \int_V \nabla_y \cdot \Big[ \big( \nabla_y G(x,y) \big)\, \psi(y) - G(x,y)\, \nabla_y \psi(y) \Big]\, d^n y \\ &= \int_{\partial V} \Big[ \big( \nabla_y G(x,y) \big)\, \psi(y) - G(x,y)\, \nabla_y \psi(y) \Big] \cdot \hat n\, d\sigma(y). \end{aligned}$$
The resulting formula,
$$\psi(x) = \int_{\partial V} \Big[ \hat n \cdot \nabla_y G(x,y)\, \psi(y) - G(x,y)\, \hat n \cdot \nabla_y \psi(y) \Big]\, d\sigma(y), \qquad (10.6)$$
is known as the Helmholtz-Kirchhoff integral theorem (or the Kirchhoff-Helmholtz integral theorem).
One use of the Helmholtz-Kirchhoff integral theorem is in problems in which an inhomogeneous boundary condition of the form
$$\alpha(x)\, \psi + \beta(x)\, \hat n \cdot \nabla\psi(x) = \gamma(x) \qquad (10.7)$$
is specified on $\partial V$. Assume (without loss of generality) that $\alpha^2 + \beta^2 = 1$. Eq. (10.6) can be written
$$\begin{aligned} \psi(x) &= \int_{\partial V} \Big[ \big( \alpha\, G(x,y) + \beta\, \hat n \cdot \nabla_y G(x,y) \big)\big( \beta\, \psi(y) - \alpha\, \hat n \cdot \nabla_y \psi(y) \big) \\ &\qquad\qquad - \big( \beta\, G(x,y) - \alpha\, \hat n \cdot \nabla_y G(x,y) \big)\big( \alpha\, \psi(y) + \beta\, \hat n \cdot \nabla_y \psi(y) \big) \Big]\, d\sigma(y) \\ &= \int_{\partial V} \Big[ \big( \alpha\, G(x,y) + \beta\, \hat n \cdot \nabla_y G(x,y) \big)\big( \beta\, \psi(y) - \alpha\, \hat n \cdot \nabla_y \psi(y) \big) - \big( \beta\, G(x,y) - \alpha\, \hat n \cdot \nabla_y G(x,y) \big)\, \gamma(y) \Big]\, d\sigma(y). \end{aligned}$$
Thus if $G(x,y)$ is chosen to satisfy the homogeneous version of (10.7),
$$\alpha\, G(x,y) + \beta\, \hat n \cdot \nabla_y G(x,y) = 0,$$
then one has the explicit result
$$\psi(x) = \int_{\partial V} \big( \alpha(y)\, \hat n \cdot \nabla_y G(x,y) - \beta(y)\, G(x,y) \big)\, \gamma(y)\, d\sigma(y).$$
Note the special cases: the Neumann problem is given by $\alpha = 0$ and $\beta = 1$, the Dirichlet problem by $\beta = 0$ and $\alpha = 1$.

Example 10.10: Let
$$\big( \nabla^2 + k^2 \big)\, \psi(x) = 0$$
for $r = |x| > R$ (that is, outside of the sphere of radius $R$). Let the normal derivative of $\psi$ be specified on the surface of the sphere (the so-called Neumann problem):
$$\frac{\partial \psi}{\partial r}\Big|_{r=R} = \gamma(\theta, \phi)$$
with $\gamma$ known. Assume an outgoing radiation condition at $r = \infty$:
$$\psi(x) \sim \frac{e^{ikr}}{r} \quad \text{as } r \to \infty.$$
Let $G(r,\theta,\phi,r',\theta',\phi')$ be the Green's function in spherical coordinates satisfying Neumann boundary conditions at $r = R$,
$$\frac{\partial G}{\partial r}\Big|_{r=R} = 0,$$
and the radiation condition
$$G(r,\theta,\phi,r',\theta',\phi') \sim \frac{e^{ikr}}{r} \quad \text{as } r \to \infty.$$
Then
$$\psi(r,\theta,\phi) = R^2 \int_0^{2\pi} \int_0^{\pi} G(r,\theta,\phi,R,\theta',\phi')\, \gamma(\theta',\phi')\, \sin\theta'\, d\theta'\, d\phi'.$$
One may use
$$G_\eta(r,\theta,\phi,r',\theta',\phi') = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} Y_{lm}(\theta,\phi)\, Y_{lm}(\theta',\phi')\, g_{\eta;l,m}(r,r')$$
with
$$g_{\eta;l,m}(r,r') = -\frac{\Big( j_l(\sqrt{-\eta}\, r_<) - \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)}\, y_l(\sqrt{-\eta}\, r_<) \Big)\, h_l^{(+)}(\sqrt{-\eta}\, r_>)}{r'^2\; \mathcal{W}\Big( j_l(\sqrt{-\eta}\, r) - \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)}\, y_l(\sqrt{-\eta}\, r),\; h_l^{(+)}(\sqrt{-\eta}\, r) \Big)}.$$
Note that $g_{\eta;l,m}(r,r')$ is independent of $m$ (we will drop the index $m$). Using
$$\begin{aligned} \mathcal{W}\Big( j_l(\sqrt{-\eta}\, r) - \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)}\, y_l(\sqrt{-\eta}\, r),\; h_l^{(+)}(\sqrt{-\eta}\, r) \Big) &= \mathcal{W}\Big( j_l(\sqrt{-\eta}\, r) - \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)}\, y_l(\sqrt{-\eta}\, r),\; j_l(\sqrt{-\eta}\, r) + i\, y_l(\sqrt{-\eta}\, r) \Big) \\ &= \Big( i + \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)} \Big)\, \mathcal{W}\big( j_l(\sqrt{-\eta}\, r),\; y_l(\sqrt{-\eta}\, r) \big) \\ &= -\Big( i + \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)} \Big)\, \frac{2l+1}{\sqrt{-\eta}\; r^2} \end{aligned}$$
one has
$$g_{\eta;l}(r,r') = \frac{\sqrt{-\eta}}{(2l+1)\Big( i + \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)} \Big)} \Big( j_l(\sqrt{-\eta}\, r_<) - \frac{j_l'(\sqrt{-\eta}\, R)}{y_l'(\sqrt{-\eta}\, R)}\, y_l(\sqrt{-\eta}\, r_<) \Big)\, h_l^{(+)}(\sqrt{-\eta}\, r_>).$$
If
$$\gamma_{lm} = R^2 \int_0^{2\pi} \int_0^{\pi} Y_{lm}(\theta',\phi')\, \gamma(\theta',\phi')\, \sin\theta'\, d\theta'\, d\phi'$$
then
$$\psi(r,\theta,\phi) = \sum_{l=0}^{\infty} g_{\eta;l}(r,R) \sum_{m=-l}^{l} \gamma_{lm}\, Y_{lm}(\theta,\phi).$$
..................
Problems
1) Consider the 1-dimensional wave equation
$$\Big( \frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} \Big)\, p = 0.$$
Here $c$ is a constant. Show that
$$\frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} = \Big( \frac{\partial}{\partial x} - \frac{1}{c}\frac{\partial}{\partial t} \Big)\Big( \frac{\partial}{\partial x} + \frac{1}{c}\frac{\partial}{\partial t} \Big) = \Big( \frac{\partial}{\partial x} + \frac{1}{c}\frac{\partial}{\partial t} \Big)\Big( \frac{\partial}{\partial x} - \frac{1}{c}\frac{\partial}{\partial t} \Big).$$
Use this to show that for any differentiable function $f$,
$$p(x,t) = f(x \pm ct)$$
are two solutions. Show that the plane wave solutions are of this form and determine the appropriate function $f$.
2) Find the plane wave solutions to Maxwell’s equations.
3) Consider the vector space $V$ equal to the set of all complex linear combinations $a\cos(kx) + b\sin(kx)$. Here $a, b \in \mathbb{C}$. What is the dimension of $V$ over $\mathbb{C}$? Show that $V$ is precisely the set of solutions of the 1-dimensional Helmholtz equation
$$\Big( \frac{d^2}{dx^2} + k^2 \Big)\, p = 0.$$
Express $\frac{d}{dx}$ and $\frac{d^2}{dx^2}$ as matrices with respect to the basis $\sin(kx)$, $\cos(kx)$. Repeat for the basis $e^{ikx}$, $e^{-ikx}$.

4) Find the exponential solutions to the plate equation. Assume a time dependence of the form $e^{i\omega t}$.
5) Show that in the linear approximation to fluid dynamics ρ′ and p′ both satisfy the wave equation. (Showing that this is true for the traveling wave solutions is not enough.)
6) Let $H$ be an $n$ by $n$ self-adjoint matrix. Assume the eigenvalues of $H$ are $\epsilon_j$ and the eigenvectors, chosen to be orthonormal, are $v_j$.
a) Find the solution of the associated Schrödinger equation
$$i\frac{d\psi}{dt} = H\psi$$
with initial condition $\psi(0) = \phi_0$.
b) Find the solution of
$$\frac{d^2 y}{dt^2} = Hy$$
with initial conditions $y(0) = p$ and $y'(0) = v$.
c) Carry out (a) and (b) for the matrix
$$H = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}.$$
7) Consider a linear array of 5 microphones. Let $P_j(\theta)$ be the response of the $j$-th microphone to a plane wave incident on the array at an angle of $\theta$ to the normal to the array. Let
$$P_I(\theta) = e^{-10(\theta - \theta_0)^2}$$
be an "ideal" beam pattern. Given coefficients $c_j$, let
$$P(\theta) = \sum_{j=-2}^{2} c_j\, P_j(\theta).$$
Determine the coefficients $c_j$ making $P$ as close as possible to $P_I(\theta)$ in the least squares sense: by minimizing
$$\|P - P_I\|^2 = \int_{-\pi/2}^{\pi/2} |P(\theta) - P_I(\theta)|^2\, d\theta.$$
Use any convenient computer software to do the computations and plot the resulting beam pattern $P$.
8) Let $f, g : \mathbb{R}^2 \to \mathbb{R}^2$ be given by
$$f(x,y) = \begin{pmatrix} e^{xy} \\ \sin x + \cos y \end{pmatrix} \quad \text{and} \quad g(x,y) = \begin{pmatrix} x^2 y \\ y - x \end{pmatrix}.$$
Find $Df(x,y)$, $Dg(x,y)$, $(Df \circ g)(x,y)$,
$$\frac{\partial}{\partial x} f(g(x,y)) \quad \text{and} \quad \frac{\partial}{\partial y} f(g(x,y)).$$
Find the linear approximation to $f \circ g$ near $(x,y) = (0, \frac{\pi}{4})$.
9) Using
$$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x}$$
show that $a\, dx + b\, dy$ is exact only if
$$\frac{\partial a}{\partial y} = \frac{\partial b}{\partial x}.$$
Use this to show that $-y\, dx + x\, dy$ is not exact. Find $d\arctan\frac{y}{x}$ and show that $\frac{1}{x^2+y^2}$ is an integrating factor for $-y\, dx + x\, dy$.
10) Transform the Cartesian gradient in $\mathbb{R}^2$ into polar coordinates and find the Laplace operator in these coordinates. Repeat the exercise for cylindrical coordinates in $\mathbb{R}^3$.
11) Consider the parabolic coordinates $w$, $\tau$:
$$x = \tau, \qquad y = w\tau^2.$$
Transform the Cartesian gradient and find the Laplace operator in these coordinates.
12) Let $f(x,y) = e^{xy}$. Find all critical points for $f$ (points where $\nabla f = 0$) and determine if they are maxima, minima or saddles. If any are saddles, determine the directions through which the saddles are minima and through which they are maxima.
13) Consider an ideal gas in which $P = \rho R T$. Let $P = P_0 + P'$, $\rho = \rho_0 + \rho'$ and $T = T_0 + T'$ where the primed variables are small disturbances about some quiescent state given by $P_0$, $\rho_0$ and $T_0$. It can be shown that if the specific heat $c_P$ is constant then the difference between the entropy of the disturbed state $S$ and the quiescent state $S_0$ is
$$S - S_0 = c_P \ln\frac{T}{T_0} - R \ln\frac{P}{P_0}.$$
Assuming that the processes are isentropic ($S = S_0$), express $\rho$ as a function of $P$ and find the speed of sound
$$c = \bigg[ \frac{\partial\rho}{\partial P}\bigg|_{S}\, \bigg|_{S=S_0,\, P=P_0} \bigg]^{-\frac{1}{2}}.$$
14) What is the surface element on the surface of a cylinder of radius R whose axis is the z axis?
15) Use the fact that a sphere of radius $r$ is the set of solutions to $x^2 + y^2 + z^2 = r^2$ to find its unit normal $n$ in spherical coordinates. Use this result to show that
$$n \cdot \nabla = \frac{\partial}{\partial r}.$$
16) Let $D = \{x \mid \|x\| \leq R \text{ and } x > 0,\ y > 0,\ z > 0\}$ be the restriction of the ball of radius $R$ to the first octant. Compute
$$\int_D xz\, dx\, dy\, dz.$$
17) Show that, in 3 dimensions,
$$\big( \Delta + k^2 \big)\, \frac{e^{ik\|x-y\|}}{\|x-y\|} = -4\pi\,\delta(x-y).$$

18) Show that given a function $F(x)$ any solution $\phi$ to the equation
$$\big( \Delta + k^2 \big)\, \phi = F$$
can be written
$$\phi(x) = \phi_0(x) - \frac{1}{4\pi} \int \frac{e^{ik\|x-y\|}}{\|x-y\|}\, F(y)\, d^3 y$$
where $\phi_0$ is a solution of the homogeneous Helmholtz equation $\big( \Delta + k^2 \big)\phi_0 = 0$.

19) Let $\cos z = \frac{1}{2}(e^{iz} + e^{-iz})$ and $\sin z = \frac{1}{2i}(e^{iz} - e^{-iz})$. Find the poles of $\tan z$ and find the first three terms in the Laurent expansions about each of these poles.
20) The function $z^p$ for arbitrary $p$ is usually defined by $z^p = e^{p\ln z}$. Show that $z^p$ is single valued only when $p$ is an integer. What phase does $z^p$ pick up when $z$ circles $0$ counterclockwise once? Make a choice for $z^p$ and compute
$$\int_{C_\epsilon(0)} z^p\, dz.$$
For which $p$ is this integral $0$?
21) Compute
$$\int_{-\infty}^{\infty} \frac{1}{x^2 + 1}\, dx$$
and
$$\int_{-\infty}^{\infty} \frac{e^{ix}}{(x^2 + 1)^2}\, dx.$$
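Both integrals close by residues in the upper half-plane, giving $\pi$ and $\pi/e$ respectively, and the answers are easy to confirm by brute-force quadrature on a large finite interval (the integrands decay like $x^{-2}$ and $x^{-4}$, so truncation costs about $2/400$ and a negligible amount respectively):

```python
import numpy as np

# Quadrature check of the two contour integrals.
x = np.linspace(-400.0, 400.0, 2_000_001)
I1 = np.trapz(1.0 / (x**2 + 1.0), x)              # residue answer: pi
I2 = np.trapz(np.cos(x) / (x**2 + 1.0) ** 2, x)   # residue answer: pi/e
# (the sine part of e^{ix} integrates to zero by oddness)
```

The slower $x^{-2}$ decay of the first integrand is visible in the looser agreement with $\pi$; the second, with its $x^{-4}$ tail, matches $\pi/e$ to quadrature accuracy.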
22) Find the general solution of
$$\Big( \frac{d^2}{dx^2} + 4\frac{d}{dx} + 4 \Big)\, f = 0.$$
23) Consider the model introduced in Example 5.3. Express the solutions as linear combinations of exponentials $e^{\pm ikx}$ rather than sines and cosines. Find the general solution which obeys the condition that for $x > 0$, $f(x) = \text{const}\; e^{ik_+ x}$.
24) Find the solution of
$$\Big( \frac{d^4}{dx^4} - k^2 \Big)\, f = 0$$
which satisfies $f(0) = -f''(0) = \alpha_0$ and $f'(0) = f^{(3)}(0) = 0$. Instead, impose the condition that as $x \to \infty$
$$f(x) \sim \text{const}\; e^{-i\sqrt{k}\, x}.$$
What constraint does this impose on the allowed values of $f(0)$, $f'(0)$, $f''(0)$ and $f^{(3)}(0)$?
25) Find all the solutions of
$$\Big( x^2\frac{d^2}{dx^2} + 2x\frac{d}{dx} - 1 \Big)\, f(x) = 0$$
which are finite at $x = 0$.
26) Recall that
$$J_m(x) = \frac{1}{m!}\Big( \frac{x}{2} \Big)^m \big( 1 + c_1 x + c_2 x^2 + \ldots \big).$$
Find $c_1$ and $c_2$. Recall further that
$$Y_m(x) = \frac{2}{\pi}\, J_m(x)\, \ln x - \frac{(m-1)!}{\pi}\Big( \frac{x}{2} \Big)^{-m} \big( 1 + d_1 x + d_2 x^2 + \ldots \big).$$
What choice for $Y_m$ has to be made so that all the odd coefficients $d_{2j+1} = 0$? Make this choice and then determine $d_2$ for $m \neq 1$. What can be said about $d_2$ for $m = 1$?
27) Using Abel's formula and the small $x$ asymptotics calculate $\mathcal{W}(J_m, Y_m)$. Similarly, using the large $x$ asymptotics compute $\mathcal{W}(H_m^{(+)}, H_m^{(-)})$. Finally, assuming that
$$H_m^{(\pm)} = c\, (J_m \pm iY_m),$$
use these two Wronskians to determine $c$.
28) Consider (5.5). Find functions $u$ and $Q$ so that, setting $f(x) = u(x)g(x)$, the new unknown $g$ satisfies
$$\Big( \frac{d^2}{dx^2} + Q \Big)\, g = 0.$$
Thus, to study second order linear equations it is sufficient to study equations with no first derivative term.
29) Using the technique developed in the previous problem find the general solution to the spherical Bessel equation with l = 0.
30) Find the first correction to the large x asymptotic forms for Jm and Ym .
31) In (5.18) find $\delta$ and the two possible values for $a_1$.
32) Find the large $x$ asymptotic form for solutions to
$$\Big( \frac{d^2}{dx^2} + \frac{\lambda}{x} + 1 \Big)\, f(x) = 0.$$
Choose a form
$$f(x) = x^p\, e^{\kappa x^\alpha} \big( 1 + \ldots \big)$$
and determine $p$, $\kappa$ and $\alpha$.
33) Show that the eigenvalues of a self adjoint operator are real.
34) Let $L = \frac{d^2}{dx^2} + q$ have self-adjoint boundary conditions on $(a,b)$. Let $\psi_j(x)$ be the normalized eigenfunctions for $L$ corresponding to the eigenvalues $\lambda_j$. Let $\delta_{(a,b)}$ be the delta function restricted to $(a,b)$: for any $y \in (a,b)$
$$f(y) = \int_a^b \delta_{(a,b)}(y-x)\, f(x)\, dx.$$
Show that for any $y \in (a,b)$
$$\sum_j \psi_j(y)\, \psi_j(x) = \delta_{(a,b)}(y-x).$$
Now consider the equation $(L - \epsilon)f = g$ for some $\epsilon$ not in the spectrum of $L$. Show that the general solution may be written in the form
$$f(x) = f_0(x) + \sum_j \int_a^b \frac{\psi_j(x)\, \psi_j(y)}{\lambda_j - \epsilon}\, g(y)\, dy$$
for some $f_0$ satisfying $(L - \epsilon)f_0 = 0$. Show that if $f$ satisfies the imposed boundary conditions then $f_0 = 0$. If $\epsilon$ is in the spectrum of $L$, what condition on $g$ must be satisfied if $(L - \epsilon)f = g$ is to have a solution?
35) Consider one dimensional lossless acoustics,
$$\Big( \frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} \Big) P(x,t) = 0, \qquad \rho_0 \frac{\partial v(x,t)}{\partial t} = -\frac{\partial P(x,t)}{\partial x},$$
in the interval $(0,L)$. Assume that in the steady state,
$$\begin{pmatrix} P(x,t) \\ v(x,t) \end{pmatrix} = \begin{pmatrix} P_A(x) \\ v_A(x) \end{pmatrix} e^{-i\omega t},$$
impedances $Z_0(\omega)$ and $Z_L(\omega)$ are specified at $x = 0$ and $x = L$ respectively,
$$P_A(0) = Z_0\, v_A(0) \quad \text{and} \quad P_A(L) = Z_L\, v_A(L).$$
Find an equation whose solutions are the resonant frequencies of the system. Let $\omega_j$ and $\omega_k$ be two distinct resonant frequencies. Let $P_{Aj}$ and $P_{Ak}$ be the corresponding resonant pressure amplitudes. Find $\langle P_{Aj}, P_{Ak} \rangle$. Under what conditions on $Z_0$ and $Z_L$ is the problem of finding the resonant frequencies and amplitudes self-adjoint?
36) Consider a vibrating string of length $L$, clamped at both ends. Let $u(x,t)$ be the displacement of the string at $x$ and $t$. Imagine striking the string at $x = \frac{L}{3}$ and model this striking by the initial condition
$$\frac{\partial u}{\partial t}\Big|_{t=0} = A\, \delta\Big( x - \frac{L}{3} \Big).$$
Assume as well that the initial displacement of the string is $0$. Find the subsequent motion of the string. Which modes don't get excited?
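A modal sketch of the standard separation-of-variables answer (stated here as an assumption, not worked out in the text): with $u(x,0) = 0$ and $u_t(x,0) = A\,\delta(x - L/3)$ one gets $u(x,t) = \sum_n b_n \sin\frac{n\pi x}{L}\sin\frac{n\pi c t}{L}$ with $b_n = \frac{2A}{n\pi c}\sin\frac{n\pi}{3}$, and the coefficients can be inspected directly:

```python
import numpy as np

# Modal amplitudes of the struck string; a mode is silent exactly when
# the striking point x = L/3 sits on one of its nodes.
L, c, A = 1.0, 343.0, 1.0
n = np.arange(1, 13)
b = 2 * A * np.sin(n * np.pi / 3.0) / (n * np.pi * c)
```

Every third amplitude vanishes: the modes with $n$ a multiple of $3$ have a node at $x = L/3$ and are never excited, which answers the last question of the problem.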
37) Solve the time dependent Schrödinger equation for a particle of mass $m$ confined to a rigid cylindrical container of radius $R$ and height $H$.
38) Consider a rigid rectangular box of dimensions L, W and H. Find the resonant frequencies and modes for the Laplacian in this box with Neumann boundary conditions. Under what conditions are there no degenerate resonant frequencies?
39) Consider a cube $A$ with sides of length $1$. Find the lowest eigenvalue of $-\nabla^2$ in this cube with boundary conditions
$$\hat n \cdot \nabla\psi\,\big|_{\partial A} = -\alpha\,\psi\,\big|_{\partial A}$$
for $0 \leq \alpha \leq \infty$.
40) Consider a semi-infinite duct whose cross section $A$ is constant and has area $|A|$. Let the lowest non-zero Neumann eigenvalue of $-\nabla^2$ in $A$ be $\lambda_1$. Imagine that a piston is mounted in a baffle at $z = 0$ somewhere in $A$. Let the area of the piston be $|\tilde A|$. If the piston has velocity $u_0\cos(\omega t)$ with $\omega < c\sqrt{\lambda_1}$, find the large $z$ asymptotic form for the pressure $P(x,y,z,t)$. What length scale must $z$ be much larger than in order for this asymptotic form to be valid?
41) Consider a duct in air (c = 343 m/sec) with square cross section 5 cm by 5 cm. For what range of frequencies are there precisely 4 propagating modes? Write out the velocity dispersion for these modes.
42) Estimate the eigenvalues of a vibrating bar of length $L$ free at both ends. Plot the first 5 eigenmodes.
43) Consider an annular membrane of inner radius $r_1$ and outer radius $r_2$. Find the Dirichlet eigenfunctions and eigenvalue condition. For $r_2 = 1$ and $r_1 = 0.1$ find the first three resonant frequencies.
44) Show that, given $a_{1m}$ for $m \in \{-1, 0, 1\}$, there is a constant vector $A$ with
$$\frac{1}{r^2} \sum_{m=-1}^{1} a_{1m}\, Y_{1m}(\theta,\phi) = A \cdot \nabla\frac{1}{r}.$$
Give an explicit expression for $A$ in terms of the $a_{1m}$.
45) Consider the acoustic resonant frequencies and modes of a spherical resonator of radius R filled with a gas in which the speed of sound is c. Find equations whose solutions are the resonant frequencies. Given the resonant frequencies find the resonant modes. For the first three spherically symmetric modes estimate the resonant frequencies and find the normalization constants for the modes.
46) Consider a sphere of radius $R$ whose surface is vibrating with velocity
$$u(\theta,\phi) = \operatorname{Re} \sum_{l=0}^{\infty} \sum_{m=-l}^{l} a_{lm}\, Y_{lm}(\theta,\phi).$$
Find the acoustic pressure of the sound radiating from the sphere into empty space.
47) Find the generalized eigenfunction expansion for
$$L = \frac{d^2}{dx^2}$$
on $(1,\infty)$ with boundary condition $\psi(1) = 0$.
48) Find the generalized eigenfunction expansion for
$$L = -\frac{d^2}{dx^2} + v(x)$$
for
$$v(x) = \begin{cases} -10 & \text{if } x < 1 \\ 0 & \text{otherwise} \end{cases}$$
on $(0,\infty)$ with boundary condition $\psi'(0) = 0$.
49) Find the Fourier transforms of the following functions:
$$f(t) = \begin{cases} 0 & \text{if } t < t_0 \\ e^{-(a+bi)t} & \text{if } t \geq t_0, \end{cases}$$
$$f(x) = \frac{1}{x - x_0 + i\alpha},$$
$$f(x) = \frac{\cos(\kappa x)}{(x - x_0)^2 + \alpha^2}.$$

50) Show that a necessary and sufficient condition for a function $f(x)$ to be real is that its Fourier transform $\hat f(k)$ satisfy $\hat f(-k) = \overline{\hat f(k)}$.
51) Consider a simple closed circuit with a voltage source $V(t)$ and one element connected in series.
a) Show that if the element is a capacitor, $Z(\omega) = \frac{1}{i\omega C}$, the current $I(t)$ is $C$ times the derivative of $V$.
b) Show that if the element is an inductor, $Z(\omega) = i\omega L$, the current $I(t)$ is $\frac{1}{L}$ times the integral of $V$.
c) If the element is a resistor and a capacitor in series, $Z(\omega) = R + \frac{1}{i\omega C}$, and if the voltage source provides an impulse at $t = t_0$, $V(t) = v_0\,\delta(t - t_0)$, find the current $I(t)$.
52) Consider the three dimensional heat equation
$$\Big( \frac{\partial}{\partial t} - \nabla^2 \Big)\, T(x,t) = 0.$$
If one has the initial condition $T(x,0) = \delta(x)$, find $T(x,t)$ for $t > 0$.
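The answer is the well-known heat kernel $T(x,t) = (4\pi t)^{-3/2}\, e^{-|x|^2/(4t)}$ (a standard fact, stated here without derivation). One simple check is that it conserves the unit initial "heat" at every later time, $4\pi\int_0^\infty r^2\, T\, dr = 1$:

```python
import numpy as np

# Radial quadrature of the 3-d heat kernel at an arbitrary time t > 0;
# the total integrated heat should stay equal to 1.
t = 0.37
r = np.linspace(0.0, 50.0, 500_001)
T = (4 * np.pi * t) ** -1.5 * np.exp(-(r**2) / (4 * t))
total = 4 * np.pi * np.trapz(r**2 * T, r)
```

As $t \downarrow 0$ the kernel sharpens into the delta-function initial condition while the integral above remains fixed at $1$.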
53) Let
$$f_\pm(x) = \lim_{\epsilon \downarrow 0} \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{e^{iq(x-y)}\, g(y)}{k^2 - q^2 \pm i\epsilon}\, dy\, dq.$$
Show that both $f_\pm$ satisfy
$$\Big( \frac{d^2}{dx^2} + k^2 \Big)\, f_\pm(x) = g(x).$$
Assuming that
$$\hat g(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ikx}\, g(x)\, dx$$
is analytic with $|\hat g(k)| \leq \text{const}\, (1 + k^2)^m$ for some $m$, calculate $f_\pm$.
54) Consider a cube with sides of length $L$. Two opposing sides of the cube are vibrating, one with velocity $u(t)$, the other with velocity $-u(t)$. Find the acoustic pressure $P(r,\theta,\phi,t)$ for large $r$ if
a) the Fourier transform of $u$ is
$$\hat u(\omega) = A\, e^{-\lambda(\omega - \omega_0)^2},$$
b) the Fourier transform of $u$ is
$$\hat u(\omega) = A\Big( \frac{1}{\omega - \omega_0 + \alpha i} - \frac{2}{\omega - 2\omega_0 + \alpha i} \Big).$$
Assume that $L \ll \frac{c}{\omega_0}$, $r \gg \frac{c}{\omega_0}$, $\lambda \gg \frac{1}{\omega_0^2}$ and $\alpha \ll \omega_0$.
55) Find the generalized eigenfunction expansion for $\frac{d^2}{dx^2}$ on $(0,\infty)$ with the boundary condition $f(0) + f'(0) = 0$.
56) Consider a semi-infinite pipe with square cross section with sides of length $L$. A piston that is mounted at the open end of the pipe has area $A$ and velocity $u(t)$.
a) Find, but don't attempt to evaluate, an expression for the pressure on the face of the piston.
b) If $A = L^2$ evaluate the pressure on the face of the piston.
57) Let
$$L = \frac{d^2}{dx^2} + \frac{1}{x}\frac{d}{dx} - \frac{m^2}{x^2}$$
on $(1,\infty)$ with the boundary condition $\psi'(1) = 0$. Find the outgoing Green's function for $L - \eta$ and the generalized eigenfunction expansion for $L$.
58) Consider the wedge given in cylindrical coordinates by
$$\{r, \theta, z \mid 0 < r < \infty,\ 0 < \theta < \theta_0,\ 0 < z < \infty\}.$$
Let
$$\Big( \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} \Big)\, P(r,\theta,z,t) = 0.$$
Let $P(r,\theta,z,t)$ satisfy a radiation condition at spatial infinity and Neumann conditions on the surfaces of the wedge,
$$\frac{\partial P}{\partial\theta}\Big|_{\theta=0} = 0 = \frac{\partial P}{\partial\theta}\Big|_{\theta=\theta_0}.$$
Assume that the value of $P(r,\theta,0,t) = f(r,\theta,t)$ is known on the bottom of the wedge. Use the Helmholtz-Kirchhoff integral theorem (10.6) to find $P(r,\theta,z,t)$ in terms of sums and integrals over known quantities.
59) Let $L = \nabla^2 + k^2$ in three dimensions where
$$k = \begin{cases} k_- & \text{if } 0 \leq |x| < R \\ k_+ & \text{if } |x| \geq R. \end{cases}$$
Assuming that $k_- > k_+$ find the eigenfunction expansion for $L$.
Solutions
Problem Solutions

1) We have
$$\Big( \frac{\partial}{\partial x} - \frac{1}{c}\frac{\partial}{\partial t} \Big)\Big( \frac{\partial}{\partial x} + \frac{1}{c}\frac{\partial}{\partial t} \Big) = \frac{\partial^2}{\partial x^2} + \frac{1}{c}\frac{\partial^2}{\partial x\,\partial t} - \frac{1}{c}\frac{\partial^2}{\partial t\,\partial x} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} = \frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}.$$
Similarly
$$\Big( \frac{\partial}{\partial x} + \frac{1}{c}\frac{\partial}{\partial t} \Big)\Big( \frac{\partial}{\partial x} - \frac{1}{c}\frac{\partial}{\partial t} \Big) = \frac{\partial^2}{\partial x^2} - \frac{1}{c}\frac{\partial^2}{\partial x\,\partial t} + \frac{1}{c}\frac{\partial^2}{\partial t\,\partial x} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} = \frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}.$$
Thus
$$\Big( \frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} \Big)\, f(x \pm ct) = \Big( \frac{\partial}{\partial x} \pm \frac{1}{c}\frac{\partial}{\partial t} \Big)\Big( \frac{\partial}{\partial x} \mp \frac{1}{c}\frac{\partial}{\partial t} \Big)\, f(x \pm ct) = \Big( \frac{\partial}{\partial x} \pm \frac{1}{c}\frac{\partial}{\partial t} \Big) \big( f'(x \pm ct) - f'(x \pm ct) \big) = 0.$$
Finally, the plane wave solutions are
$$e^{ikx - i\omega t} = e^{ik(x - \frac{\omega}{k}t)} = f(x - ct)$$
for $f(x) = e^{ikx}$ and $c = \frac{\omega}{k}$.
2)
3) Note that for any k ≠ 0 the functions sin(kx) and cos(kx) are linearly independent since

a cos(kx) + b sin(kx) = √(a² + b²) cos( kx − arctan(b/a) )

and this is 0 only if a² + b² = 0, which is only possible if a = b = 0. Further, any function w(x) = a cos(kx) + b sin(kx) is linearly dependent on sin(kx) and cos(kx) with linear dependence relation w(x) − a cos(kx) − b sin(kx) = 0. Thus, the vector space V is 2-dimensional. Since

d²/dx² cos(kx) = −k² cos(kx)

and

d²/dx² sin(kx) = −k² sin(kx),

it follows that

( d²/dx² + k² )( a cos(kx) + b sin(kx) ) = 0.

Thus, any element of V solves the one dimensional Helmholtz equation. Finally, the one dimensional Helmholtz equation has constant coefficients so that the solutions are linear combinations of exponentials αe^{ikx} + βe^{−ikx}. But e^{±ikx} = cos(kx) ± i sin(kx) are in V. It follows that any linear combination of them is also in V.

Now consider

d/dx ( a sin kx + b cos kx ) = ka cos kx − kb sin kx.

Thus, under this transformation,

( a ; b ) ↦ ( −kb ; ka )

so that with respect to the basis sin kx, cos kx

d/dx = ( 0  −k )
       ( k   0 )

and

d²/dx² = ( 0  −k )( 0  −k ) = ( −k²   0  )
         ( k   0 )( k   0 )   (  0   −k² ).

Similarly,

d/dx ( a e^{ikx} + b e^{−ikx} ) = ika e^{ikx} − ikb e^{−ikx}

so that with respect to the basis e^{ikx}, e^{−ikx}

d/dx = ( ik    0  )
       (  0  −ik )

and

d²/dx² = ( ik    0  )( ik    0  ) = ( −k²   0  )
         (  0  −ik )(  0  −ik )   (  0   −k² ).
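As a quick check of the matrices just computed (a sketch added here, not part of the original notes), squaring the matrix of d/dx in either basis must give the matrix of d²/dx², namely −k² times the identity:

```python
# Multiply 2x2 matrices represented as nested lists.
def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

k = 3.0
# d/dx in the basis (sin kx, cos kx) and in the basis (e^{ikx}, e^{-ikx}).
D_trig = [[0, -k], [k, 0]]
D_exp = [[1j * k, 0], [0, -1j * k]]

expected = [[-k * k, 0], [0, -k * k]]   # d^2/dx^2 = -k^2 * identity
ok = all(abs(matmul(D_trig, D_trig)[i][j] - expected[i][j]) < 1e-12 and
         abs(matmul(D_exp, D_exp)[i][j] - expected[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```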
4) We want solutions of the plate equation of the form A e^{κ·x + iωt}. Then

( ∆² + K ∂²/∂t² ) A e^{κ·x + iωt} = ( (κ·κ)² − Kω² ) A e^{κ·x + iωt} = 0.

Thus, noting that (κ·κ)² = ∥κ∥⁴ for κ real or purely imaginary,

∥κ∥⁴ = Kω².

Thus κ = ±k, ±ik with

∥k∥ = K^{1/4} √ω.
5) Using P′ = c²ρ′ we have

∂ρ′/∂t + ρ₀ ∇·v′ = 0

and

ρ₀ ∂v′/∂t + c² ∇ρ′ = 0.

Acting with ∂/∂t on the first equation gives

∂²ρ′/∂t² + ρ₀ ∇· ∂v′/∂t = 0

and substituting for ∂v′/∂t from the second equation gives

∂²ρ′/∂t² − c² ∇·∇ρ′ = 0.

Thus ρ′ satisfies the wave equation.

For v′ things are a bit more complicated. Taking the curl, ∇×, of the second equation and noting that ∇ × ∇P′ = 0 we find

∂(∇ × v′)/∂t = 0

so that ∇ × v′ is constant. Taking ∂/∂t and the divergence, ∇·, of the second equation we find

ρ₀ ∂²(∇·v′)/∂t² + c² ∇·∇ ∂ρ′/∂t = 0,

and substituting for ∂ρ′/∂t from the first equation,

( ∂²/∂t² − c²∆ ) ∇·v′ = 0,

so that ∇·v′ satisfies the wave equation.
6) a) Let U be the matrix whose columns are the eigenvectors vⱼ. Let c = U*ψ and note that c is the vector of coefficients of the eigenvector expansion of ψ:

ψ = Σⱼ cⱼ vⱼ.

Then, since UU* = I,

i d(U*ψ)/dt = U* Hψ = U* H U U*ψ

so that, since U*HU is diagonal with diagonal matrix elements equal to the eigenvalues εⱼ,

i dcⱼ/dt = εⱼ cⱼ.

Thus cⱼ = αⱼ e^{−iεⱼt} where the αⱼ are determined at t = 0 to be the expansion coefficients U*φ₀, αⱼ = ⟨vⱼ, φ₀⟩. Note finally that

ψ = Σⱼ ⟨vⱼ, φ₀⟩ e^{−itεⱼ} vⱼ.

b) Similarly, let c = U*y. Then

d²c/dt² = U*HU c

so that

cⱼ = aⱼ cos( √(−εⱼ) t ) + bⱼ sin( √(−εⱼ) t ).

At t = 0 one finds

aⱼ = ⟨vⱼ, p⟩

or, equivalently, a = U*p, and

bⱼ = ⟨vⱼ, v⟩ / √(−εⱼ)

or, equivalently, b = U* (−H)^{−1/2} v. Thus

y = Σⱼ [ ⟨vⱼ, p⟩ cos( √(−εⱼ) t ) + ( ⟨vⱼ, v⟩ / √(−εⱼ) ) sin( √(−εⱼ) t ) ] vⱼ.

c) Consider

d²/dx² ( y₁ ; y₂ ) = (  1  −1 ) ( y₁ ; y₂ ).
                     ( −1   2 )
We saw in Example 2.10 that the eigenvectors are

( 1 ; −1/2 − √5/2 )  and  ( 1 ; −1/2 + √5/2 ),

the first corresponding to λ₊ = 3/2 + √5/2, the second to λ₋ = 3/2 − √5/2. These eigenvectors are not normalized. To normalize we multiply by

1 / √( 1 + (1/2 + √5/2)² ) = 1 / √( 5/2 + √5/2 ).

Thus, if

U = ( 1 / √(5/2 + √5/2) ) (      1             1      )
                          ( −1/2 − √5/2   −1/2 + √5/2 )

then UU* = I and

U* (  1  −1 ) U = ( λ₊   0 )
   ( −1   2 )     (  0  λ₋ )

or

(  1  −1 ) = U ( λ₊   0 ) U*.
( −1   2 )     (  0  λ₋ )

If η = U*ψ then

i d/dt ( η₁ ; η₂ ) = ( λ₊   0 ) ( η₁ ; η₂ ) = ( λ₊η₁ ; λ₋η₂ )
                     (  0  λ₋ )

so that

η = ( a₊ e^{−iλ₊t} ; a₋ e^{−iλ₋t} )

and then

ψ = ( 1 / √(5/2 + √5/2) ) (      1             1      ) ( a₊ e^{−iλ₊t} ; a₋ e^{−iλ₋t} ).
                          ( −1/2 − √5/2   −1/2 + √5/2 )

If w = U*y then

d²/dx² ( w₁ ; w₂ ) = ( λ₊   0 ) ( w₁ ; w₂ ) = ( λ₊w₁ ; λ₋w₂ )
                     (  0  λ₋ )

so that

( w₁ ; w₂ ) = ( a₁ e^{√(−λ₊) x} + b₁ e^{−√(−λ₊) x} ; a₂ e^{√(−λ₋) x} + b₂ e^{−√(−λ₋) x} )

and then

( y₁ ; y₂ ) = ( 1 / √(5/2 + √5/2) ) (      1             1      ) ( a₁ e^{√(−λ₊) x} + b₁ e^{−√(−λ₊) x} ; a₂ e^{√(−λ₋) x} + b₂ e^{−√(−λ₋) x} ).
                                    ( −1/2 − √5/2   −1/2 + √5/2 )
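The eigenfunction-expansion solution of part a) can be tested against direct time integration. Below is a pure-Python sketch (not the author's method) using the matrix H = [[1, −1], [−1, 2]] from part c); the initial state, evolution time, and step count are arbitrary choices.

```python
import cmath, math

H = [[1.0, -1.0], [-1.0, 2.0]]   # Hermitian test matrix from part (c)

# Eigenvalues/eigenvectors of a symmetric 2x2 matrix, by hand.
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lams = [(tr + disc) / 2, (tr - disc) / 2]
vecs = []
for lam in lams:
    v = [H[0][1], lam - H[0][0]]          # solves (H - lam*I) v = 0
    n = math.sqrt(v[0] ** 2 + v[1] ** 2)
    vecs.append([v[0] / n, v[1] / n])

psi0 = [1.0 + 0j, 0.0 + 0j]
t = 0.7

# Eigenfunction expansion: psi(t) = sum_j <v_j, psi0> e^{-i lam_j t} v_j.
psi = [0j, 0j]
for lam, v in zip(lams, vecs):
    c = v[0] * psi0[0] + v[1] * psi0[1]   # real eigenvectors, no conjugation needed
    phase = cmath.exp(-1j * lam * t)
    psi[0] += c * phase * v[0]
    psi[1] += c * phase * v[1]

# Direct RK4 integration of i dpsi/dt = H psi for comparison.
def deriv(p):
    return [-1j * (H[0][0] * p[0] + H[0][1] * p[1]),
            -1j * (H[1][0] * p[0] + H[1][1] * p[1])]

p = psi0[:]
steps = 7000
dt = t / steps
for _ in range(steps):
    k1 = deriv(p)
    k2 = deriv([p[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = deriv([p[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = deriv([p[i] + dt * k3[i] for i in range(2)])
    p = [p[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

err = abs(p[0] - psi[0]) + abs(p[1] - psi[1])
print(err < 1e-6)  # True: both routes agree
```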
7) The pressure field at the j-th sensor is (j ranges from −2 to 2)

Pⱼ(θ) = e^{ikjd sin(θ)}

with k = ω/c and d the separation between the sensors. The appropriate coefficients cⱼ are given by

( c₋₂ ; … ; c₂ ) = ( ⟨P₋₂, P₋₂⟩ ⋯ ⟨P₋₂, P₂⟩ ; ⋮ ⋱ ⋮ ; ⟨P₂, P₋₂⟩ ⋯ ⟨P₂, P₂⟩ )^{−1} ( ⟨P₋₂, P_I⟩ ; ⋮ ; ⟨P₂, P_I⟩ ).

8) With

f(x, y) = ( e^{xy} ; sin x + cos y )

one has

Df(x, y) = ( y e^{xy}   x e^{xy} )
           (  cos x     −sin y  ).

With

g(x, y) = ( x²y ; y − x )

one has

Dg(x, y) = ( 2xy   x² )
           ( −1     1 ).

Then

(Df)∘g (x, y) = Df( x²y, y − x ) = ( (y − x) e^{x²y(y−x)}   x²y e^{x²y(y−x)} )
                                   (     cos(x²y)             −sin(y − x)   )

so that

D(f∘g)(x, y) = ( (y − x) e^{x²y(y−x)}   x²y e^{x²y(y−x)} ) ( 2xy   x² )
               (     cos(x²y)             −sin(y − x)   ) ( −1     1 )

             = ( (2xy² − 3x²y) e^{x²y(y−x)}        (2x²y − x³) e^{x²y(y−x)}      )
               ( 2xy cos(x²y) + sin(y − x)     x² cos(x²y) − sin(y − x) ).

Finally,

f∘g(0, π/2) = f(0, π/2) = ( 1 ; 0 )

and

D(f∘g)(0, π/2) = ( 0   0 )
                 ( 1  −1 )

so that the linear approximation to f∘g near (0, π/2) is given by

f∘g(x, y) ≈ ( 1 ; 0 ) + ( 0  0 ; 1  −1 )( x ; y − π/2 ) = ( 1 ; x − y + π/2 ).
9) If a dx + b dy is exact then there is a function f(x, y) with

df = a dx + b dy = (∂f/∂x) dx + (∂f/∂y) dy.

Then

∂a/∂y = ∂²f/∂x∂y = ∂b/∂x.

Thus −y dx + x dy cannot be exact since

∂(−y)/∂y = −1

and

∂x/∂x = 1.

Finally, a computation shows that

d arctan(y/x) = ( 1/(x² + y²) )( −y dx + x dy ).

Thus 1/(x² + y²) is an integrating factor for −y dx + x dy.
10) For x = r cos θ and y = r sin θ we have

( ∂x/∂r  ∂x/∂θ ) = ( cos θ   −r sin θ )
( ∂y/∂r  ∂y/∂θ )   ( sin θ    r cos θ )

so that

( ∂/∂x ; ∂/∂y ) = ( cos θ   −r sin θ )^{−1, transposed} ( ∂/∂r ; ∂/∂θ ) = ( cos θ ∂/∂r − (1/r) sin θ ∂/∂θ ; sin θ ∂/∂r + (1/r) cos θ ∂/∂θ ).
                  ( sin θ    r cos θ )

Then

∂²/∂x² = ( cos θ ∂/∂r − (1/r) sin θ ∂/∂θ )( cos θ ∂/∂r − (1/r) sin θ ∂/∂θ )
       = cos²θ ∂²/∂r² + 2 (cos θ sin θ / r²) ∂/∂θ − 2 (cos θ sin θ / r) ∂²/∂r∂θ + (sin²θ / r) ∂/∂r + (sin²θ / r²) ∂²/∂θ²

and

∂²/∂y² = ( sin θ ∂/∂r + (1/r) cos θ ∂/∂θ )( sin θ ∂/∂r + (1/r) cos θ ∂/∂θ )
       = sin²θ ∂²/∂r² − 2 (cos θ sin θ / r²) ∂/∂θ + 2 (cos θ sin θ / r) ∂²/∂r∂θ + (cos²θ / r) ∂/∂r + (cos²θ / r²) ∂²/∂θ²

so that using sin²θ + cos²θ = 1 one has

∂²/∂x² + ∂²/∂y² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ².

For cylindrical coordinates

( x ; y ; z ) = ( r cos θ ; r sin θ ; z )

there is no need to go through the transformation procedure again since z is unchanged while x and y are transformed to polar coordinates. Thus

( ∂/∂x ; ∂/∂y ; ∂/∂z ) = ( cos θ ∂/∂r − (1/r) sin θ ∂/∂θ ; sin θ ∂/∂r + (1/r) cos θ ∂/∂θ ; ∂/∂z )

and

∂²/∂x² + ∂²/∂y² + ∂²/∂z² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² + ∂²/∂z².
11) For x = τ and y = ωτ² one has

( ∂x/∂τ  ∂x/∂ω ) = ( 1      0  )
( ∂y/∂τ  ∂y/∂ω )   ( 2ωτ   τ² )

so that

( ∂/∂x ; ∂/∂y ) = ( 1   2ωτ )^{−1} ( ∂/∂τ ; ∂/∂ω )
                  ( 0    τ² )
                = ( 1   −2ω/τ ) ( ∂/∂τ ; ∂/∂ω )
                  ( 0    1/τ² )
                = ( ∂/∂τ − (2ω/τ) ∂/∂ω ; (1/τ²) ∂/∂ω ).

Then

∂²/∂x² + ∂²/∂y² = ∂²/∂τ² + (6ω/τ²) ∂/∂ω − (4ω/τ) ∂²/∂τ∂ω + ( 4ω²/τ² + 1/τ⁴ ) ∂²/∂ω².
12) One has

∇e^{xy} = ( y e^{xy} ; x e^{xy} ) = ( 0 ; 0 )

only when x = 0 and y = 0. Then

( ∂²/∂x²     ∂²/∂x∂y ) e^{xy} = (   y²      1 + xy ) e^{xy}
( ∂²/∂x∂y    ∂²/∂y²  )          ( 1 + xy      x²   )

which at the critical point (0, 0) is

( 0  1 )
( 1  0 ).

To find the eigenvalues set

det ( −λ   1 ) = λ² − 1 = 0.
    (  1  −λ )

Thus the eigenvalues are −1 and 1. The eigenvector for −1 is

v₋ = ( 1 ; −1 )

and for 1,

v₊ = ( 1 ; 1 ).

Thus, 0 is a saddle point: a max along the line containing v₋ and a min along the line containing v₊.
13) With S − S₀ = 0 one has

c_p ln(T/T₀) = R ln(P/P₀)

so that

T/T₀ = (P/P₀)^{R/c_p}.

Substituting into P = ρRT one has

P/P₀ = (ρ/ρ₀)(T/T₀) = (ρ/ρ₀)(P/P₀)^{R/c_p}

so that

ρ/ρ₀ = (P/P₀)^{1 − R/c_p}.

Then

1/c² = ∂ρ/∂P |_{P=P₀} = ( 1 − R/c_p ) ρ₀/P₀.

14) Using dx dy dz = r dr dθ dz in cylindrical coordinates, and noting that the cylinder of radius R is obtained by setting √(x² + y²) = r = R, we find that the surface element on this cylinder is

δ( √(x² + y²) − R ) dx dy dz = δ(r − R) r dr dθ dz = R dθ dz.
15) A sphere of radius r is the solution set of x² + y² + z² − r² = 0. Here r is treated as a constant. A normal vector is given by

∇( x² + y² + z² − r² ) = ( 2x ; 2y ; 2z ).

Normalizing one obtains a unit normal

n = ( 1/√(x² + y² + z²) ) ( x ; y ; z ).

Clearly this points away from 0 so it is outward pointing. In spherical coordinates one has

n = ( sin θ cos φ ; sin θ sin φ ; cos θ ).

Using

( ∂/∂x ; ∂/∂y ; ∂/∂z ) = ( sin θ cos φ ∂/∂r + (1/r) cos θ cos φ ∂/∂θ − (sin φ/(r sin θ)) ∂/∂φ ;
                           sin θ sin φ ∂/∂r + (1/r) cos θ sin φ ∂/∂θ + (cos φ/(r sin θ)) ∂/∂φ ;
                           cos θ ∂/∂r − (1/r) sin θ ∂/∂θ )

one has

n·∇ = sin²θ cos²φ ∂/∂r + (1/r) sin θ cos θ cos²φ ∂/∂θ − (1/r) sin φ cos φ ∂/∂φ
    + sin²θ sin²φ ∂/∂r + (1/r) sin θ cos θ sin²φ ∂/∂θ + (1/r) sin φ cos φ ∂/∂φ
    + cos²θ ∂/∂r − (1/r) sin θ cos θ ∂/∂θ
    = ∂/∂r.
16) With D = { x : ∥x∥ ≤ R and x > 0, y > 0, z > 0 } one has, on transforming to spherical coordinates,

∫_D xz dx dy dz = ∫₀^R ∫₀^{π/2} ∫₀^{π/2} (r sin θ cos φ)(r cos θ) r² sin θ dθ dφ dr
               = ∫₀^R r⁴ dr ∫₀^{π/2} sin²θ cos θ dθ ∫₀^{π/2} cos φ dφ
               = (R⁵/5) · (1/3) sin³θ |₀^{π/2} · sin φ |₀^{π/2}
               = R⁵/15.
17) As in the notes it suffices to consider y = 0. Note that if x ≠ 0

( ∆ + k² ) e^{ik∥x∥}/∥x∥ = ( ∂²/∂r² + (2/r) ∂/∂r + k² ) e^{ikr}/r = 0.

Further, if f is continuous at 0 then

∫ f(x) ( ∆ + k² ) ( e^{ik∥x∥}/∥x∥ ) d³x = lim_{ǫ↓0} ∫_{Bǫ} f(x) ( ∆ + k² )( e^{ik∥x∥}/∥x∥ ) d³x
  = f(0) lim_{ǫ↓0} ∫_{Bǫ} ( ∆ + k² )( e^{ik∥x∥}/∥x∥ ) d³x
  = f(0) lim_{ǫ↓0} [ ∫_{∂Bǫ} n·∇ ( e^{ik∥x∥}/∥x∥ ) dσ + k² ∫_{Bǫ} ( e^{ik∥x∥}/∥x∥ ) d³x ]
  = f(0) lim_{ǫ↓0} ∫₀^{2π} ∫₀^π [ ∂/∂r ( e^{ikr}/r ) ]_{r=ǫ} ǫ² sin θ dθ dφ
  = −4π f(0).

18) Using the solution to the previous problem one has

( ∆ + k² ) ∫ ( e^{ik∥x−y∥}/∥x − y∥ ) F(y) d³y = ∫ ( ∆ + k² )( e^{ik∥x−y∥}/∥x − y∥ ) F(y) d³y
  = ∫ ( −4π δ(x − y) ) F(y) d³y
  = −4π F(x).

The claim follows.
19) Since tan z = sin z / cos z the poles of tan z occur where cos z = 0. This means that e^{iz} = −e^{−iz} = e^{(2n+1)iπ − iz}, or z = (2n + 1)π − z. Thus the poles of tan z are

( n + 1/2 ) π.

About (n + 1/2)π one has

sin z = (−1)ⁿ Σ_{j=0}^∞ ( (−1)ʲ/(2j)! ) ( z − (n + 1/2)π )^{2j}

and

cos z = (−1)^{n+1} Σ_{j=0}^∞ ( (−1)ʲ/(2j+1)! ) ( z − (n + 1/2)π )^{2j+1}.

Thus,

tan z = − [ Σ_{j} ((−1)ʲ/(2j)!) ( z − (n + 1/2)π )^{2j} ] / [ Σ_{j} ((−1)ʲ/(2j+1)!) ( z − (n + 1/2)π )^{2j+1} ]
      = − [ 1 − (1/2)( z − (n + 1/2)π )² + (1/24)( z − (n + 1/2)π )⁴ + … ] / [ ( z − (n + 1/2)π ) − (1/6)( z − (n + 1/2)π )³ + (1/120)( z − (n + 1/2)π )⁵ + … ]
      = − 1/( z − (n + 1/2)π ) + (1/3)( z − (n + 1/2)π ) + (1/45)( z − (n + 1/2)π )³ + … .
20) Choose for z^p the function which is real when z ∈ (0, ∞). Then, writing z = re^{iθ}, z^p = r^p e^{ipθ}. Increasing θ by 2π changes the phase of z^p by e^{2ipπ}. For z^p to be single valued we must have e^{2ipπ} = 1. This happens only when p is an integer. One has, for p ≠ −1,

∫_{Cǫ(0)} z^p dz = i ǫ^{p+1} ∫₀^{2π} e^{i(p+1)θ} dθ = ( ǫ^{p+1}/(p + 1) ) ( e^{2πi(p+1)} − 1 ).

For p = −1,

∫_{Cǫ(0)} z^{−1} dz = i ∫₀^{2π} dθ = 2πi.

The integral is 0 for all integers p except −1.
21) x² + 1 = (x − i)(x + i) so that 1/(x² + 1) has simple poles at ±i. If Γ = { z : |z| = R and Im z > 0 } then

lim_{R→∞} ∫_Γ 1/(z² + 1) dz = lim_{R→∞} ∫₀^π ( 1/(R²e^{2iθ} + 1) ) iRe^{iθ} dθ = 0

so that

∫_{−∞}^∞ 1/(x² + 1) dx = lim_{R→∞} ∫_{Γ ∪ [−R,R]} 1/(z² + 1) dz = 2πi Res( 1/(z² + 1), i ).

Expanding about i, writing z = i + (z − i), one has

1/(z² + 1) = 1/( (z − i)( 2i + (z − i) ) ) = ( 1/(2i(z − i)) )( 1 − (z − i)/(2i) + … ).

Thus Res( 1/(z² + 1), i ) = 1/(2i) so that

∫_{−∞}^∞ 1/(x² + 1) dx = π.

In the second integral the integrand e^{iz}/(z² + 1)² has second order poles at z = ±i and goes exponentially to 0 as Im z → ∞. Thus

∫_{−∞}^∞ e^{ix}/(x² + 1)² dx = lim_{R→∞} ∫_{Γ ∪ [−R,R]} e^{iz}/(z² + 1)² dz = 2πi Res( e^{iz}/(z² + 1)², i ).

Expanding about i one has

e^{iz}/(z² + 1)² = e^{i( i + (z−i) )} / ( (z − i)² ( 2i + (z − i) )² )
  = −(1/(4e)) ( e^{i(z−i)}/(z − i)² ) ( 1 − (z − i)/(2i) )^{−2}·(−1)^{0}
  = −(1/(4e)) ( 1/(z − i)² )( 1 + i(z − i) + … )( 1 + i(z − i) + … )
  = −(1/(4e)) ( 1/(z − i)² ) − (i/(2e)) ( 1/(z − i) ) + … .

Thus

∫_{−∞}^∞ e^{ix}/(x² + 1)² dx = 2πi ( −i/(2e) ) = π/e.
22) This equation has constant coefficients so that at least one solution is of the form e^{kx}. Substituting into the equation one has

( k² + 4k + 4 ) e^{kx} = 0.

Thus k² + 4k + 4 = (k + 2)² = 0. This produces only one solution, f₁(x) = e^{−2x}. To find a second linearly independent solution, f₂, use the method of reduction of order, setting f₂(x) = u(x)e^{−2x}. Substituting into the differential equation one has

0 = ( d²/dx² + 4 d/dx + 4 ) u e^{−2x} = u″(x) e^{−2x}

so that u″(x) = 0. The most general form for u is u(x) = ax + b for some constants a and b. It is sufficient to choose u(x) = x so that f₂(x) = xe^{−2x}. Thus, the most general solution is

f(x) = ( c₁ + c₂x ) e^{−2x}.
23) For x < 0 one has f(x) = a e^{ik₋x} + b e^{−ik₋x} while for x > 0 one has f(x) = c e^{ik₊x}. At x = 0 we have

a + b = c

and

ik₋a − ik₋b = ik₊c.

Thus

a = (1/2)( 1 + k₊/k₋ ) c

and

b = (1/2)( 1 − k₊/k₋ ) c.

The most general solution can be written

f(x) = a · { e^{ik₋x} + ( (k₋ − k₊)/(k₋ + k₊) ) e^{−ik₋x}   if x < 0
           { ( 2k₋/(k₋ + k₊) ) e^{ik₊x}                     if x > 0.
24) Using the form

f(x) = c₁ cos(√k x) + c₂ sin(√k x) + c₃ cosh(√k x) + c₄ sinh(√k x),

the conditions f(0) = −f″(0) = α₀ and f′(0) = f⁽³⁾(0) = 0 lead to

c₁ + c₃ = α₀

and

c₁ − c₃ = α₀/k

so that

c₁ = ( (k + 1)/(2k) ) α₀

and

c₃ = ( (k − 1)/(2k) ) α₀.

Similarly

√k c₂ + √k c₄ = 0,  −k^{3/2} c₂ + k^{3/2} c₄ = 0

so that c₂ = c₄ = 0. Thus

f(x) = α₀ [ ( (k + 1)/(2k) ) cos(√k x) + ( (k − 1)/(2k) ) cosh(√k x) ].

To implement the condition that f(x) ∼ e^{−i√k x} for large x it's more convenient to use exponentials, writing

f(x) = a e^{i√k x} + b e^{−i√k x} + c e^{√k x} + d e^{−√k x}.

As x → ∞ the growing and incoming parts must vanish. Thus we must have a = 0 and c = 0, so that

f(x) = b e^{−i√k x} + d e^{−√k x}

and, for large x, f(x) ≈ b e^{−i√k x}. At x = 0 this gives

f(0) = b + d,
f′(0) = −i√k b − √k d,
f″(0) = −k b + k d,

and

f⁽³⁾(0) = i k^{3/2} b − k^{3/2} d.

It follows that

f(0) + (1/k) f″(0) = 2d = −k^{−1/2} f′(0) − k^{−3/2} f⁽³⁾(0),

and

f(0) − (1/k) f″(0) = 2b = i k^{−1/2} f′(0) − i k^{−3/2} f⁽³⁾(0).
25) This is an Euler equation. Choosing f(x) = x^r one finds

r(r − 1) + 2r − 1 = r² + r − 1 = 0

so that

r = −1/2 ± √5/2.

Thus the most general solution is

f(x) = c₁ x^{−1/2 + √5/2} + c₂ x^{−1/2 − √5/2}.

But −1/2 − √5/2 < 0, so the only solutions which are finite at x = 0 are

f(x) = c₁ x^{−1/2 + √5/2}.
26) Note that Jm(−x) is a solution of Bessel's equation, and is finite at x = 0. Thus Jm(−x) = cJm(x) for some constant c. Thus

(1/m!) (−x/2)^m ( 1 − c₁x + c₂x² + … ) = c (1/m!) (x/2)^m ( 1 + c₁x + c₂x² + … ).

It follows that c = (−1)^m and 0 = c₁ = c₃ = c₅ = …. To find c₂ one substitutes into the differential equation,

0 = ( d²/dx² + (1/x) d/dx − m²/x² + 1 ) (1/m!) (x/2)^m ( 1 + c₂x² + … )
  = ( 1/(2^m m!) ) [ ( m(m − 1) + m − m² ) x^{m−2} + x^m + ( (m + 2)(m + 1) + (m + 2) − m² ) c₂ x^m + c₂ x^{m+2} + … ]
  = ( 1/(2^m m!) ) [ ( 1 + 4(m + 1)c₂ ) x^m + … ]

so that c₂ = −1/(4(m + 1)).

The function Ym(−x) is also a solution of Bessel's equation so that Ym(−x) = aJm(x) + bYm(x). Using Jm(−x) = (−1)^m Jm(x) and ln(−x) = ln(x) + iπ we have

aJm(x) + bYm(x) = (2/π)(−1)^m Jm(x)( ln x + iπ ) − (−1)^m ( (m − 1)!/π ) (x/2)^{−m} ( 1 − d₁x + d₂x² + … ).

All that is needed of Ym is that it be a solution of Bessel's equation which is linearly independent of Jm. We know that this means that Ym(x) ∼ x^{−m} for small x. But (−1)^m Ym(x) + Ym(−x) is also a solution which diverges like x^{−m} for small x. Thus, Ym may be chosen so that

Ym(−x) = (−1)^m Ym(x) + aJm(x),

in other words, so that b = (−1)^m. Substituting in the expansion for Ym one finds

aJm(x) = (−1)^m [ 2i Jm(x) + ( (m − 1)!/π ) (x/2)^{−m} ( 2d₁x + 2d₃x³ + … ) ].

This necessitates a = (−1)^m 2i and 0 = d₁ = d₃ = d₅ = ….

Note that Ym has still not been determined uniquely. The function Ym + cJm still satisfies the conditions: it is a solution to Bessel's equation, it diverges like x^{−m} for small x, and Ym(−x) + cJm(−x) = (−1)^m ( Ym(x) + cJm(x) ). Thus, Ym is determined only up to a multiple of Jm. To completely specify the standard choice the large x behavior will be used. Note that this ambiguity can affect d₂ only for m = 1. In particular, for m = 1, d₂ will not be determined. In general, d_{2m} will not be determined. To find d₂ substitute into the differential equation,

0 = ( d²/dx² + (1/x) d/dx − m²/x² + 1 ) [ (2/π) Jm(x) ln x − ( (m − 1)!/π ) (x/2)^{−m} ( 1 + d₂x² + … ) ].

Note that

( d²/dx² + (1/x) d/dx − m²/x² + 1 ) ( Jm(x) ln x ) = (ln x) ( d²/dx² + (1/x) d/dx − m²/x² + 1 ) Jm(x) + (2/x) Jm′(x) = (2/x) Jm′(x).

It follows that

(4/x) Jm′(x) = (m − 1)! ( d²/dx² + (1/x) d/dx − m²/x² + 1 ) (x/2)^{−m} ( 1 + d₂x² + … ).

But

( d²/dx² + (1/x) d/dx − m²/x² + 1 ) (x/2)^{−m} ( 1 + d₂x² + … ) = 2^m ( 1 − 4(m − 1)d₂ ) x^{2−m} + …

while

(4/x) Jm′(x) = ( 1/(2^{m−2} m!) ) ( m x^{m−2} − ( (m + 2)/(4(m + 1)) ) x^m + … ).

Thus, for m > 1 the (4/x) Jm′(x) term doesn't contribute to the determination of d₂ and one finds

d₂ = 1/(4(m − 1)).

For m = 0 one has d₂ = −3/4. For m = 1 the coefficient d₂ is undetermined, for the reasons discussed above.
27) Abel's formula implies that for any two solutions f₁ and f₂ to Bessel's equation

W(f₁, f₂) = K e^{−∫(1/x)dx} = K/x.

Here K is independent of x and depends only on the choice of f₁ and f₂. Thus

W(Hm⁽⁺⁾, Hm⁽⁻⁾) = K₁/x

and

W(Jm, Ym) = K₂/x.

Since the Hm⁽±⁾ are well known for large x it is easiest to compute K₁ at x = ∞,

K₁ = lim_{x→∞} x W(Hm⁽⁺⁾, Hm⁽⁻⁾) = lim_{x→∞} x ( Hm⁽⁺⁾(x) Hm⁽⁻⁾′(x) − Hm⁽⁺⁾′(x) Hm⁽⁻⁾(x) ).

Using the large x asymptotic form,

Hm⁽±⁾(x) ≈ √(2/(πx)) e^{±i( x − mπ/2 − π/4 )},

one has

K₁ = (2/π)(−2i).

Similarly, since the Jm and Ym are well known for small x it is easiest to compute

K₂ = lim_{x→0} x W(Jm, Ym) = lim_{x→0} x ( Jm(x) Ym′(x) − Jm′(x) Ym(x) ).

Using the small x forms,

Jm(x) ≈ ( 1/(2^m m!) ) x^m,  Y₀(x) ≈ (2/π) ln x,

and

Ym(x) ≈ −( 2^m (m − 1)!/π ) (1/x^m)

for m ≠ 0 (note the correction from the notes), one has

K₂ = 2/π.

Finally, making the assumption H⁽±⁾ = c( Jm ± iYm ) and noting that W(f, f) = 0 and W(f, g) = −W(g, f), one has

W(Hm⁽⁺⁾, Hm⁽⁻⁾) = −4i/(πx) = c² W( Jm + iYm, Jm − iYm ) = c² ( −2i W(Jm, Ym) ) = c² ( −4i/(πx) )

so that c = ±1.
28) One has

0 = ( d²/dx² + p d/dx + q ) u g = u [ d²/dx² + ( 2u′/u + p ) d/dx + ( u″/u + p u′/u + q ) ] g

so that choosing

2u′/u + p = 0

one has

( d²/dx² + u″/u + p u′/u + q ) g = 0.

Thus

u = e^{−(1/2)∫p(x)dx}

and

Q = q + u″/u + p u′/u = q − (1/2)p′ − (1/4)p².

29) For l = 0 one has

( d²/dx² + (2/x) d/dx + 1 ) f(x) = 0.

Setting f(x) = (1/x) g(x) one finds

( d²/dx² + 1 ) g(x) = 0

so that

f(x) = (1/x)( c₁ cos x + c₂ sin x ).
30) Using H⁽±⁾ = Jm ± iYm one has

Jm = (1/2)( Hm⁽⁺⁾ + Hm⁽⁻⁾ )

and

Ym = ( 1/(2i) )( Hm⁽⁺⁾ − Hm⁽⁻⁾ ).

Using the large x asymptotic forms for Hm⁽±⁾ one finds

Jm(x) ≈ √(2/(πx)) [ cos( x − mπ/2 − π/4 ) − ( (m² − 1/4)/(2x) ) sin( x − mπ/2 − π/4 ) + … ]

and

Ym(x) ≈ √(2/(πx)) [ sin( x − mπ/2 − π/4 ) + ( (m² − 1/4)/(2x) ) cos( x − mπ/2 − π/4 ) + … ].
(ακ)2 x2α − x3 = 0 so that α =
3 2
and κ = ± 23 . Then 2p +
consider d2 α − x xp−δ eκx 2 dx
1 2
= 0 so that p = − 14 . To obtain the next corrections
(p − δ)(p − δ − 1) + ακ(2(p − δ) + α − 1)xα + (ακ)2 x2α − x3 1 1 1 1 3 p−δ−2 κxα =x e (− − δ)(− − δ − 1) ± (2(− − δ) + )x 2 . 4 4 4 2 3 The largest term here, the one proportional to x 2 , must cancel the remaining term from α
=xp−δ−2 eκx
above, the p(p − 1) term. It follows that
1 3 1 5 = 0. ±c1 (2(− − δ) + )x 2 −δ + 4 2 16
Thus, δ =
3 2
15 and c1 = ± 16 .
32) One has d2 λ p κxα p−2 κxα α 2 2α 2 + +1 x e =x e p(p − 1) + ακ(2p + α − 1)x + (ακ) x + x + λx . dx2 x
It follows that
(ακ)2 x2α + x2 = 0 so that α = 1 and κ = ±i and then that ακ(2p + α − 1)xα + λx = 0 so that ±2ip + λ = 0. Thus p = ∓ 2i λ. 189
33) Let L be a self-adjoint operator and let λ, ψ be an eigenvalue and corresponding eigenvector. Then ⟨ψ, Lψ⟩ = ⟨Lψ, ψ⟩ since L is self-adjoint. Using Lψ = λψ,

⟨ψ, λψ⟩ = ⟨λψ, ψ⟩

so that λ⟨ψ, ψ⟩ = λ̄⟨ψ, ψ⟩. It follows that λ̄ = λ.
34) One has

∫ₐᵇ Σⱼ ψⱼ(y) ψ̄ⱼ(x) f(x) dx = Σⱼ ψⱼ(y) ⟨ψⱼ, f⟩ = f(y)

since this is precisely the eigenfunction expansion for f. Further, since (L − ǫ)f₀ = 0 and (L − ǫ)ψⱼ = (λⱼ − ǫ)ψⱼ,

(L − ǫ) [ f₀(x) + Σⱼ ∫ₐᵇ ( ψⱼ(x) ψ̄ⱼ(y)/(λⱼ − ǫ) ) g(y) dy ] = Σⱼ ∫ₐᵇ ψⱼ(x) ψ̄ⱼ(y) g(y) dy = g(x).

Since ǫ is not an eigenvalue of L the only solution of (L − ǫ)f₀ = 0 which satisfies the boundary conditions is f₀ = 0. Finally, if ǫ = ǫₖ is an eigenvalue of L then

(L − ǫₖ) f = Σⱼ ⟨ψⱼ, f⟩ (ǫⱼ − ǫₖ) ψⱼ = g = Σⱼ ⟨ψⱼ, g⟩ ψⱼ.

It follows that ⟨ψⱼ, f⟩(ǫⱼ − ǫₖ) = ⟨ψⱼ, g⟩. Thus, one must have ⟨ψₖ, g⟩ = 0 so that g is orthogonal to ψₖ.
35) PA satisfies the Helmholtz equation

( d²/dx² + ω²/c² ) PA(x) = 0

so that

PA(x) = a cos(ωx/c) + b sin(ωx/c).

The relation between PA and vA is given by iωρ₀vA = PA′ so that the impedance conditions lead to the homogeneous boundary conditions

PA(0) = ( Z₀/(iωρ₀) ) PA′(0),  PA(L) = ( Z_L/(iωρ₀) ) PA′(L).

Substituting the general form for PA into these boundary conditions leads to

(                 1                                        −Z₀/(icρ₀)                      ) ( a ) = ( 0 )
( cos(ωL/c) + (Z_L/(icρ₀)) sin(ωL/c)      sin(ωL/c) − (Z_L/(icρ₀)) cos(ωL/c) ) ( b )   ( 0 ).

The condition that ω be a resonant frequency is that this equation have a non-trivial solution, or that the determinant of the matrix above is zero,

sin(ωL/c) − (Z_L/(icρ₀)) cos(ωL/c) + (Z₀/(icρ₀)) [ cos(ωL/c) + (Z_L/(icρ₀)) sin(ωL/c) ] = 0.

Now let PAj be the resonant pressure amplitude corresponding to the resonant frequency ωⱼ. Then

d²PAj/dx² = −(ωⱼ²/c²) PAj.

Thus

⟨PAj, d²PAk/dx²⟩ = −(ωₖ²/c²) ⟨PAj, PAk⟩

and

⟨d²PAj/dx², PAk⟩ = −(ω̄ⱼ²/c²) ⟨PAj, PAk⟩.

But, integrating by parts twice,

⟨PAj, d²PAk/dx²⟩ = ⟨d²PAj/dx², PAk⟩ + [ P̄Aj PAk′ − P̄Aj′ PAk ]₀ᴸ

and using the impedance conditions

⟨PAj, d²PAk/dx²⟩ = ⟨d²PAj/dx², PAk⟩ + iρ₀ [ P̄Aj(0) PAk(0) ( ωₖ/Z₀ + ω̄ⱼ/Z̄₀ ) + P̄Aj(L) PAk(L) ( ωₖ/Z_L + ω̄ⱼ/Z̄_L ) ]

so that the problem is self-adjoint if Z₀ and Z_L are both pure imaginary. Finally

⟨PAj, PAk⟩ = iρ₀ [ P̄Aj(0) PAk(0) ( ωₖ/Z₀ + ω̄ⱼ/Z̄₀ ) + P̄Aj(L) PAk(L) ( ωₖ/Z_L + ω̄ⱼ/Z̄_L ) ] / ( ωₖ²/c² − ω̄ⱼ²/c² ).

36) Recall the formula

u(t, x) = Σ_{j=1}^∞ √(2/L) [ ( L/(jcπ) ) ⟨fⱼ, h⟩ sin( jcπt/L ) + ⟨fⱼ, g⟩ cos( jcπt/L ) ] sin( jπx/L ).

In the problem at hand we have g(x) = 0 and h(x) = Aδ(x − L/3). Thus, using

fⱼ(x) = √(2/L) sin( jπx/L )

one has

u(t, x) = ( 2A/(cπ) ) Σ_{j=1}^∞ (1/j) sin( jπ/3 ) sin( jcπt/L ) sin( jπx/L ).

If j is divisible by 3 then sin(jπ/3) = 0. Thus the modes with j = 3n for some natural number n are not excited. More physically, these are the modes with nodes at L/3.
37) In cylindrical coordinates we must solve

−iℏ ∂ψ/∂t = −(ℏ²/2m) ( ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² + ∂²/∂z² ) ψ(r, θ, z, t)

subject to the boundary conditions

ψ(R, θ, z, t) = 0 = ψ(r, θ, 0, t) = ψ(r, θ, H, t),

assuming one knows the initial condition φ(r, θ, z) = ψ(r, θ, z, 0). A representation of the solution can be obtained by using an eigenfunction expansion of the operator

L = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² + ∂²/∂z².

The eigenfunctions of L are

ψ_{n,k,j}(r, θ, z) = N_{n,j} J_{|n|}( k_{|n|,j} r ) ( 1/√(2π) ) e^{inθ} √(2/H) sin( kπz/H )

where N_{n,j} are the normalization factors and k_{|n|,j} the wave numbers given in Example 7.4. Then

ψ(r, θ, z, t) = Σ_{n=−∞}^∞ Σ_{k=1}^∞ Σ_{j=1}^∞ e^{ i (ℏ/2m) ( k²_{|n|,j} + (kπ/H)² ) t } ⟨ψ_{n,k,j}, φ⟩ ψ_{n,k,j}(r, θ, z).
38) We need to solve

( ∂²/∂x² + ∂²/∂y² + ∂²/∂z² − λ ) ψ(x, y, z) = 0

with

∂ψ/∂x |_{x=0} = 0 = ∂ψ/∂x |_{x=L},  ∂ψ/∂y |_{y=0} = 0 = ∂ψ/∂y |_{y=W},  ∂ψ/∂z |_{z=0} = 0 = ∂ψ/∂z |_{z=H}.

We will construct ψ out of Neumann solutions to

( d²/dx² − λₓ ) φ = 0

on (0, B). One has

φⱼ(x) = √(2/B) cos( jπx/B )

corresponding to

λⱼ = −j²π²/B²

for j ∈ {0, 1, 2, …}. Thus one has eigenfunctions

ψ(x, y, z) = √( 8/(LWH) ) cos( jπx/L ) cos( kπy/W ) cos( mπz/H )

and eigenvalues

λ_{jkm} = −j²π²/L² − k²π²/W² − m²π²/H²

for j, k, m ∈ {0, 1, 2, …}. These eigenvalues are non-degenerate provided the ratios of L², W² and H² are irrational, so that distinct triples (j, k, m) cannot produce the same sum.

39) ..............................
40) Let ψⱼ and λⱼ be the Neumann eigenfunctions and eigenvalues of −∇²⊥ in A. One has

P(x, y, z, t) = a₀ ( 1/√|A| ) e^{i(ω/c)z − iωt} + Σ_{j=1}^∞ aⱼ ψⱼ(x, y) e^{ i √(ω²/c² − λⱼ) z − iωt }

with

aⱼ = ( ωρ₀u₀ / √(ω²/c² − λⱼ) ) ∫_Ã ψⱼ(x, y) dx dy.

Since ω/c < √λ₁, once √( λ₁ − ω²/c² ) z is large enough only the j = 0 term will remain. Using

a₀ = ρ₀ c u₀ |Ã| / √|A|

one has

P(x, y, z, t) = ρ₀ c u₀ ( |Ã|/|A| ) e^{i(ω/c)z − iωt}

for

z ≫ 1 / √( λ₁ − ω²/c² ).
41) First we need the Neumann eigenfunctions and eigenvalues of −∇² in a square with sides 0.05 meters. These are

ψⱼₖ(x, y) = 40 cos(20jπx) cos(20kπy)

and

λⱼₖ = 400π² ( j² + k² ).

The first few eigenvalues are 0, which is simple, 400π², which is two-fold degenerate, 800π², which is also simple, and 2000π², which is two-fold degenerate. It follows that if

800π² < ω²/c² < 2000π²

then the (0,0), (1,0), (0,1) and (1,1) modes can all propagate. This gives

(20√2 π)(343) = 30478 < ω < (√2000 π)(343) = 48190.

The velocity dispersion for these modes is

cⱼₖ(ω) = ω / √( ω²/c² − λⱼₖ )

which gives

c₀,₀ = c = 343 m/sec,
c₀,₁ = c₁,₀ = ω / √( ω²/c² − 400π² ),

and

c₁,₁ = ω / √( ω²/c² − 800π² ).

42) We need to solve
( d⁴/dx⁴ − k⁴ ) ψ(x) = 0

with ψ″(0) = 0 = ψ″(L) and ψ‴(0) = 0 = ψ‴(L). The general solution to the differential equation is

ψ(x) = a cos kx + b sin kx + c cosh kx + d sinh kx.

Using ψ″(0) = k²(−a + c) and ψ‴(0) = k³(−b + d) one has

ψ(x) = a ( cos kx + cosh kx ) + b ( sin kx + sinh kx ).

Then

ψ″(L) = 0 = k² [ a ( −cos kL + cosh kL ) + b ( −sin kL + sinh kL ) ]

and

ψ‴(L) = 0 = k³ [ a ( sin kL + sinh kL ) + b ( −cos kL + cosh kL ) ]

so that either k = 0 or

0 = ( −cos kL + cosh kL )² − ( −sin kL + sinh kL )( sin kL + sinh kL ) = 2 − 2 cos kL cosh kL.

This is the same eigenvalue condition as for the clamped bar. The solutions are kL = 4.730, 7.853, 10.996, 14.137, 17.279, …. k = 0 is also an eigenvalue, but corresponds to uniform motion of the bar and will not be considered here.

[Figure: graphical solution of the eigenvalue condition, plotting cos(x) against 1/cosh(x) for 0 < x = kL < 20.]

Finally, given an eigenvalue, one has

a/b = −( sin kL − sinh kL ) / ( cos kL − cosh kL )

so that the (unnormalized) eigenfunctions are

ψ(x) = −( ( sin kL − sinh kL ) / ( cos kL − cosh kL ) ) ( cos kx + cosh kx ) + sin kx + sinh kx.

[Figure: the first five modes f₁, …, f₅ plotted for L = 1.]
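The free-bar eigenvalues can be computed directly. This is an added pure-Python sketch: it root-finds cos(x) = 1/cosh(x) (equivalent to cos(kL)cosh(kL) = 1, but bounded), with brackets chosen near odd multiples of π/2 where the roots are known to lie.

```python
import math

# Free-bar eigenvalue condition: cos(kL)*cosh(kL) = 1, recast as a bounded function.
def g(x):
    return math.cos(x) - 1.0 / math.cosh(x)

def bisect(f, a, b, tol=1e-10):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Bracket the first five nonzero roots near odd multiples of pi/2.
roots = [bisect(g, (2 * n + 1) * math.pi / 2 - 1.0, (2 * n + 1) * math.pi / 2 + 1.0)
         for n in range(1, 6)]
print([round(r, 3) for r in roots])  # [4.73, 7.853, 10.996, 14.137, 17.279]
```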
43) We solve

( ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² − (1/c²) ∂²/∂t² ) u(r, θ, t) = 0

with the boundary conditions

u(r₁, θ, t) = 0 = u(r₂, θ, t).

The resonant frequencies are the values of ω for which the equation

( ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² + ω²/c² ) ψ(r, θ) = 0

has non-zero solutions satisfying Dirichlet boundary conditions at r = r₁ and r = r₂. It follows that

ψ(r, θ) = [ a Jm(ωr/c) + b Ym(ωr/c) ] e^{imθ}

with

a Jm(ωr₁/c) + b Ym(ωr₁/c) = 0

and

a Jm(ωr₂/c) + b Ym(ωr₂/c) = 0.

This system of equations has a non-zero solution only if

Jm(ωr₁/c) Ym(ωr₂/c) − Jm(ωr₂/c) Ym(ωr₁/c) = 0.

These are the eigenvalue conditions whose solutions are the resonant frequencies. Given a resonant frequency the unnormalized eigenmode may be given by

ψ(r, θ) = [ −Ym(ωr₁/c) Jm(ωr/c) + Jm(ωr₁/c) Ym(ωr/c) ] e^{imθ}.

For r₁ = 0.1 and r₂ = 1 the resonant frequencies are given by kc where

Jm(0.1k) Ym(k) − Jm(k) Ym(0.1k) = 0.

These values of m = 0, 1, 2 are sufficient to produce the first three resonant frequencies.

[Figure: the eigenvalue conditions plotted for m = 0, 1, 2, with the first roots near k = 3.314, 3.941, 5.142.]

[Figure: the first three (unnormalized) radial modes plotted on 0.1 ≤ r ≤ 1.]
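The m = 0 root quoted above can be reproduced without a Bessel-function library. This added sketch builds J₀ and Y₀ from their integral representations (adequate at these moderate arguments; the quadrature resolutions and the bracket [3.0, 3.6] are assumptions) and bisects the annulus eigenvalue condition.

```python
import math

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Integral representations of the order-zero Bessel functions.
def J0(x):
    return simpson(lambda t: math.cos(x * math.sin(t)), 0.0, math.pi, 400) / math.pi

def Y0(x):
    a = simpson(lambda t: math.sin(x * math.sin(t)), 0.0, math.pi, 400) / math.pi
    b = simpson(lambda t: math.exp(-x * math.sinh(t)), 0.0, 8.0, 800) * 2 / math.pi
    return a - b

# m = 0 eigenvalue condition for the annulus r1 = 0.1, r2 = 1.
def cond(k):
    return J0(0.1 * k) * Y0(k) - J0(k) * Y0(0.1 * k)

a, b = 3.0, 3.6          # bracket containing the first m = 0 root
for _ in range(60):      # bisection
    m = 0.5 * (a + b)
    if cond(a) * cond(m) <= 0:
        b = m
    else:
        a = m
root = 0.5 * (a + b)
print(round(root, 3))    # close to 3.314, the first resonance quoted in the text
```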
44) One has

(1/r²) Σ_{m=−1}^{1} a_{1m} Y_{1m}(θ, φ) = (1/r²) [ −√(3/8π) a_{1,1} sin θ e^{iφ} + √(3/8π) a_{1,−1} sin θ e^{−iφ} + √(3/4π) a_{1,0} cos θ ]
  = (1/r³) [ −√(3/8π) ( (a_{1,1} − a_{1,−1}) x + i (a_{1,1} + a_{1,−1}) y ) + √(3/4π) a_{1,0} z ]

and

A · ∇ (1/r) = −(1/r³) ( Aₓ x + A_y y + A_z z )

so that

A = ( √(3/8π) (a_{1,1} − a_{1,−1}) ; √(3/8π) i (a_{1,1} + a_{1,−1}) ; −√(3/4π) a_{1,0} ).

45) Since one has Neumann boundary conditions at r = R and since the pressure amplitude must be finite one has

PA(r, θ, φ) = Σ_{l=0}^∞ Σ_{m=−l}^{l} a_{lm} j_l(ωr/c) Y_{lm}(θ, φ)

with the resonance conditions

j_l′(ωR/c) = 0.

The spherically symmetric modes have l = 0. The resonance condition for these modes is

j₀′(ωR/c) = 0.

Using j₀(x) = sin x / x one has

j₀′(x) = cos x / x − sin x / x².

The zeros of this last equation are the solutions of

x cos x − sin x = 0.

[Figure: graphical solution of x cos x − sin x = 0; the first three positive roots are x = ωR/c = 4.493, 7.725, 10.904.]
The resonant frequencies are ω₁ = 4.493 c/R, ω₂ = 7.725 c/R, ω₃ = 10.904 c/R, and so on. Since the spherical harmonic functions are normalized, the normalization factor for these modes is

1 / √( ∫₀^R sin²( ωⱼr/c ) dr ) = 1 / √( R/2 − ( c/(2ωⱼ) ) sin( ωⱼR/c ) cos( ωⱼR/c ) ).
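The roots of x cos x − sin x = 0 used above (equivalently tan x = x) are easy to confirm numerically; this added sketch bisects one root per interval (nπ, (n + 1/2)π), where the roots are known to lie.

```python
import math

# Spherically symmetric Neumann resonances: j0'(x) = 0, i.e. x*cos(x) = sin(x).
def h(x):
    return x * math.cos(x) - math.sin(x)

def bisect(f, a, b, tol=1e-12):
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# One root of tan(x) = x lies in each interval (n*pi, (n + 1/2)*pi).
roots = [bisect(h, n * math.pi + 1e-9, (n + 0.5) * math.pi) for n in (1, 2, 3)]
print([round(r, 3) for r in roots])  # [4.493, 7.725, 10.904]
```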
46) The pressure amplitude may be written

PA(r, θ, φ) = Σ_{l=0}^∞ Σ_{m=−l}^{l} α_{lm} h_l⁽⁺⁾(ωr/c) Y_{lm}(θ, φ)

with the boundary condition

∂PA/∂r |_{r=R} = Σ_{l=0}^∞ Σ_{m=−l}^{l} a_{lm} Y_{lm}(θ, φ).

It follows that

α_{lm} = c a_{lm} / ( ω h_l⁽⁺⁾′(ωR/c) ).

47) The general solution of

( d²/dx² + k² ) ψ = 0

satisfying the boundary condition is

ψ(x) = a sin( k(x − 1) ).

There are no square integrable solutions and these solutions are bounded for any k ∈ ℝ. It follows that the continuous spectrum is (0, ∞) and the continuum eigenfunctions are (normalized by comparison to a sine transform)

ψₖ(x) = √(2/π) sin( k(x − 1) ).

The resulting eigenfunction expansion is given by

f(x) = √(2/π) ∫₀^∞ f̂(k) sin( k(x − 1) ) dk

with

f̂(k) = √(2/π) ∫₁^∞ f(x) sin( k(x − 1) ) dx.
48) The general solution of

( −d²/dx² + v(x) − k² ) ψ(x) = 0

satisfying the boundary condition is

ψ(x) = { a cos(qx)                                  if 0 < x < 1
       { a ( α e^{ik(x−1)} + β e^{−ik(x−1)} )       if x > 1

with

q = √(100 + k²)

and

( α ; β ) = ( 1    1 )^{−1} (  cos(q)   ) = ( (1/2) cos(q) + (iq/(2k)) sin(q) ; (1/2) cos(q) − (iq/(2k)) sin(q) ).
            ( ik  −ik )     ( −q sin(q) )

ψ is square integrable if k is pure imaginary (so k² < 0) and, letting k = iκ with κ > 0, β = 0. It follows that −κ² is an eigenvalue if

κ = √(100 − κ²) tan( √(100 − κ²) ).

The graphical solution of this equation shows that there are four eigenvalues.

[Figure: graphical solution of the eigenvalue condition for 0 < κ < 10.]

ψ is bounded if k is real so that the continuous spectrum is (0, ∞). For such k the continuum eigenfunction is normalized by choosing

a = 1 / √( π(α² + β²) ).
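The count of four bound states can be verified by solving the eigenvalue condition numerically. This added sketch rewrites it in q = √(100 − κ²) and bisects one intersection per branch of tan(q) below q = 10 (the bracket offsets are numerical safeguards, not part of the original solution).

```python
import math

# Bound-state condition rewritten in q = sqrt(100 - kappa^2), 0 < q < 10:
#   sqrt(100 - q^2) = q * tan(q)
def g(q):
    return math.sqrt(100.0 - q * q) - q * math.tan(q)

def bisect(f, a, b, tol=1e-10):
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# One intersection per branch of tan(q) with q < 10.
eps = 1e-6
brackets = [(eps, math.pi / 2 - eps),
            (math.pi + eps, 3 * math.pi / 2 - eps),
            (2 * math.pi + eps, 5 * math.pi / 2 - eps),
            (3 * math.pi + eps, 10.0 - eps)]
qs = [bisect(g, a, b) for a, b in brackets]
kappas = [math.sqrt(100.0 - q * q) for q in qs]
print(len(kappas))  # 4 bound states, matching the graphical solution
```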
49) With

f(t) = { 0              if t < t₀
       { e^{−(a+bi)t}   if t ≥ t₀

one has

f̂(ω) = ( 1/√(2π) ) ∫_{t₀}^∞ e^{−(a+bi)t + iωt} dt = ( 1/√(2π) ) e^{−(a+ib−iω)t₀} / ( i(−ω + b) + a ).

To compute the Fourier transform of

f(x) = 1 / ( x − x₀ + iα )

we can note that this is the same as computing the inverse transform of the first example. One finds

f̂(k) = −i√(2π) { 0                  if k < 0
               { e^{−(α+ix₀)k}      if k ≥ 0.

Finally, for

f(x) = cos(κx) / ( (x − x₀)² + α² )

one has

f̂(k) = ( 1/(2√(2π)) ) ∫_{−∞}^∞ ( e^{i(κ+k)x} + e^{i(−κ+k)x} ) / ( (x − x₀)² + α² ) dx.

Thus we need to compute the integrals

∫_{−∞}^∞ e^{i(±κ+k)x} / ( (x − x₀ + iα)(x − x₀ − iα) ) dx
  = 2πi × { −Res( e^{i(±κ+k)x} / ((x − x₀ + iα)(x − x₀ − iα)), x₀ − iα )   if ±κ + k < 0
          {  Res( e^{i(±κ+k)x} / ((x − x₀ + iα)(x − x₀ − iα)), x₀ + iα )   if ±κ + k > 0
  = 2πi × { e^{i(±κ+k)(x₀−iα)} / (2iα)   if ±κ + k < 0
          { e^{i(±κ+k)(x₀+iα)} / (2iα)   if ±κ + k > 0
  = (π/α) e^{i(±κ+k)x₀ − |±κ+k|α}

so that

f̂(k) = √(π/2) ( 1/(2α) ) ( e^{i(κ+k)x₀ − |κ+k|α} + e^{i(−κ+k)x₀ − |−κ+k|α} ).
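The closed form for the Lorentzian-cosine transform can be checked against brute-force quadrature. This added sketch uses the same e^{+ikx} convention as the derivation above; the parameter values, truncation window, and step count are arbitrary assumptions.

```python
import cmath, math

# Arbitrary test parameters.
x0, alpha, kappa, k = 0.4, 0.9, 2.0, 0.7

def f(x):
    return math.cos(kappa * x) / ((x - x0) ** 2 + alpha ** 2)

def simpson(fn, a, b, n):
    h = (b - a) / n
    s = fn(a) + fn(b) + sum((4 if i % 2 else 2) * fn(a + i * h) for i in range(1, n))
    return s * h / 3

# Brute-force transform  (1/sqrt(2 pi)) * Integral f(x) e^{ikx} dx.
num = simpson(lambda x: f(x) * cmath.exp(1j * k * x),
              -300.0, 300.0, 200000) / math.sqrt(2 * math.pi)

# Closed form from the residue computation above.
closed = (math.sqrt(math.pi / 2) / (2 * alpha)
          * (cmath.exp(1j * (kappa + k) * x0 - abs(kappa + k) * alpha)
             + cmath.exp(1j * (-kappa + k) * x0 - abs(-kappa + k) * alpha)))
print(abs(num - closed) < 1e-3)  # True
```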
50) If f is real then

conj( f̂(k) ) = ( 1/√(2π) ) ∫_{−∞}^∞ f(x) e^{−ikx} dx = f̂(−k).

Conversely, if conj( f̂(k) ) = f̂(−k) then

conj( f(x) ) = ( 1/√(2π) ) ∫_{−∞}^∞ conj( f̂(k) ) e^{−ikx} dk = ( 1/√(2π) ) ∫_{−∞}^∞ f̂(−k) e^{−ikx} dk = ( 1/√(2π) ) ∫_{−∞}^∞ f̂(k) e^{ikx} dk = f(x)

so that f is real.
51) a) Using

∫ e^{−iωt} dt = ( −1/(iω) ) e^{−iωt}

one has

I(t) = ( 1/√(2π) ) ∫_{−∞}^∞ ( V̂(ω)/(iωC) ) e^{−iωt} dω = ( −1/(√(2π) C) ) ∫ [ ∫_{−∞}^∞ V̂(ω) e^{−iωt} dω ] dt = −(1/C) ∫ V(t) dt.

b) Similarly, using

d/dt e^{−iωt} = −iω e^{−iωt}

one has

I(t) = ( 1/√(2π) ) ∫_{−∞}^∞ iωL V̂(ω) e^{−iωt} dω = −L (d/dt) ( 1/√(2π) ) ∫_{−∞}^∞ V̂(ω) e^{−iωt} dω = −L dV/dt.

c) One has

V̂(ω) = ( 1/√(2π) ) ∫_{−∞}^∞ v₀ δ(t − t₀) e^{iωt} dt = ( 1/√(2π) ) v₀ e^{iωt₀}

and then

I(t) = ( v₀/(2π) ) ∫_{−∞}^∞ e^{−iω(t−t₀)} / ( R + 1/(iωC) ) dω = { 0                                   if t > t₀
                                                                 { −( v₀/(R²C) ) e^{(t−t₀)/(RC)}       if t < t₀.
52) Fourier transforming with respect to x one has

T(x, t) = ( 1/(2π)^{3/2} ) ∫_{ℝ³} T̂(k, t) e^{ik·x} d³k.

Substituting into the heat equation one finds that

( d/dt + k² ) T̂(k, t) = 0

with k² = k·k = kₓ² + k_y² + k_z². It follows that

T̂(k, t) = a(k) e^{−k²t}

so that

T(x, t) = ( 1/(2π)^{3/2} ) ∫_{ℝ³} a(k) e^{ik·x − k²t} d³k.

The coefficient a(k) is determined by the initial condition

T(x, 0) = δ(x) = ( 1/(2π)^{3/2} ) ∫_{ℝ³} a(k) e^{ik·x} d³k

so that

a(k) = 1/(2π)^{3/2}.

Thus

T(x, t) = ( 1/(2π)³ ) ∫_{ℝ³} e^{ik·x − k²t} d³k = ( 1/(2π)³ ) (π/t)^{3/2} e^{−x·x/(4t)} = ( 1/(4πt) )^{3/2} e^{−x·x/(4t)}.
53) Note that

f±(x) = ( 1/√(2π) ) lim_{ǫ↓0} ∫_{−∞}^∞ ( e^{iqx} / (k² − q² ± iǫ) ) ĝ(q) dq.

a) One has

( d²/dx² + k² ) f±(x) = ( 1/√(2π) ) lim_{ǫ↓0} ∫_{−∞}^∞ ( (k² − q²)/(k² − q² ± iǫ) ) e^{iqx} ĝ(q) dq
  = ( 1/√(2π) ) lim_{ǫ↓0} [ ∫_{−∞}^∞ e^{iqx} ĝ(q) dq ∓ iǫ ∫_{−∞}^∞ ( e^{iqx} / (k² − q² ± iǫ) ) ĝ(q) dq ]
  = g(x).

b) One has

f±(x) = ( 1/√(2π) ) lim_{ǫ↓0} ∫_{−∞}^∞ e^{iqx} ĝ(q) / ( ( q − √(k² ± iǫ) )( q + √(k² ± iǫ) ) ) dq.

The given conditions on ĝ show that ĝ grows no faster than a power of q as |q| → ∞. Thus, for x > 0 one can compute this integral by closing the contour in the upper half-plane and for x < 0 by closing the contour in the lower half-plane. Choosing square roots so that

sign( Im √(k² ± iǫ) ) = ±1

one finds

f±(x) = ∓ i √(π/2) ( 1/|k| ) ĝ( |k| sign(x) ) e^{±i|kx|}.
54) Recall, Example 9.3, that for large $r$
$$P(r,\theta,\phi,t) \approx \frac{\rho_0}{4\pi}\,\frac{\partial}{\partial t}\int_S\frac{u\big(s,\,t-\frac rc\big)}{r}\,d\sigma(s).$$
In the case at hand this implies that
$$P(r,\theta,\phi,t) \approx \frac{L^2\rho_0}{2\pi r}\,u'\!\Big(t-\frac rc\Big).$$
a) One has
$$u(t) = \frac{A}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\lambda(\omega-\omega_0)^2-i\omega t}\,d\omega
= \frac{A}{\sqrt{2\pi}}\,e^{-i\omega_0t-\frac{t^2}{4\lambda}}\int_{-\infty}^{\infty}e^{-\lambda\big(\omega-\omega_0+i\frac{t}{2\lambda}\big)^2}\,d\omega
= \frac{A}{\sqrt{2\lambda}}\,e^{-i\omega_0t-\frac{t^2}{4\lambda}}.$$
Thus
$$u'(t) = \frac{A}{\sqrt{2\lambda}}\Big(-i\omega_0-\frac{t}{2\lambda}\Big)\,e^{-i\omega_0t-\frac{t^2}{4\lambda}}$$
so that
$$P(r,\theta,\phi,t) \approx \frac{AL^2\rho_0}{2\pi r\,\sqrt{2\lambda}}\left(-i\omega_0-\frac{t-\frac rc}{2\lambda}\right)e^{-i\omega_0(t-\frac rc)-\frac{(t-\frac rc)^2}{4\lambda}}.$$
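The completed-square Gaussian integral in part (a) is easy to confirm numerically; a sketch ($A$, $\lambda$, $\omega_0$ and $t$ are arbitrary values):

```python
import numpy as np

def trap(y, x):
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x))

A, lam, w0, t = 2.0, 0.8, 3.0, 1.4

w = np.linspace(w0 - 30.0, w0 + 30.0, 400001)
# (A/sqrt(2 pi)) int exp(-lam (w - w0)^2 - i w t) dw
numeric = A / np.sqrt(2.0 * np.pi) * trap(np.exp(-lam * (w - w0) ** 2 - 1j * w * t), w)

# claimed closed form: (A/sqrt(2 lam)) exp(-i w0 t - t^2/(4 lam))
exact = A / np.sqrt(2.0 * lam) * np.exp(-1j * w0 * t - t**2 / (4.0 * lam))
assert abs(numeric - exact) < 1e-6
```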
b) One has
$$u(t) = \frac{A}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\left(\frac{1}{\omega-\omega_0+i\alpha}-\frac{2}{\omega-2\omega_0+i\alpha}\right)e^{-i\omega t}\,d\omega
= \begin{cases}0 & \text{if } t<0\\[4pt]
-i\sqrt{2\pi}\,A\,\Big(e^{-(i\omega_0+\alpha)t}-2\,e^{-(2i\omega_0+\alpha)t}\Big) & \text{if } t>0.\end{cases}$$
Thus
$$u'(t) = -i\sqrt{2\pi}\,A\left[-\delta(t) + \begin{cases}0 & \text{if } t<0\\
-(i\omega_0+\alpha)\,e^{-(i\omega_0+\alpha)t}+2\,(2i\omega_0+\alpha)\,e^{-(2i\omega_0+\alpha)t} & \text{if } t>0\end{cases}\right]$$
so that
$$P(r,\theta,\phi,t) \approx -\frac{i\sqrt{2\pi}\,A\rho_0}{4\pi r}\left[-\delta\!\Big(t-\frac rc\Big) + \begin{cases}0 & \text{if } t<\frac rc\\
-(i\omega_0+\alpha)\,e^{-(i\omega_0+\alpha)(t-\frac rc)}+2\,(2i\omega_0+\alpha)\,e^{-(2i\omega_0+\alpha)(t-\frac rc)} & \text{if } t>\frac rc\end{cases}\right].$$
55) The general solution of $\psi'' = \epsilon\psi$ is
$$\psi(x) = a\cos(\sqrt{-\epsilon}\,x) + b\sin(\sqrt{-\epsilon}\,x).$$
Clearly, $\psi$ is polynomially bounded only if $\epsilon\le0$. Let $k = \sqrt{-\epsilon}$. The boundary condition at $0$, $\psi(0)+\psi'(0) = 0$, implies that
$$a+kb = 0$$
so that
$$\psi(x) = a\left(\cos(kx)-\frac1k\,\sin(kx)\right).$$
To determine $a$ we use the completeness relation
$$\delta(x-y) = \int_0^\infty|a|^2\left(\cos(kx)-\frac1k\sin(kx)\right)\left(\cos(ky)-\frac1k\sin(ky)\right)2k\,dk
= \int_0^\infty|a|^2\left(1+\frac{1}{k^2}\right)\cos\big(kx-\phi(k)\big)\cos\big(ky-\phi(k)\big)\,2k\,dk$$
with
$$\phi(k) = \arctan\Big(\frac1k\Big).$$
Split the cosines into sums of exponentials, let
$$|a|^2 = \frac{1}{\pi}\,\frac{1}{k+\frac1k}$$
and note that
$$\int_0^\infty\Big(e^{-2i\arctan(\frac1k)}\,e^{ik(x+y)}+e^{2i\arctan(\frac1k)}\,e^{-ik(x+y)}\Big)\frac{1}{1+\frac{1}{k^2}}\,dk
= \int_{-\infty}^{\infty}\frac{k^2\,e^{-2i\arctan(\frac1k)}}{1+k^2}\,e^{ik(x+y)}\,dk
= -\frac{1}{i(x+y)}\int_{-\infty}^{\infty}\frac{d}{dk}\!\left(\frac{k^2\,e^{-2i\arctan(\frac1k)}}{1+k^2}\right)e^{ik(x+y)}\,dk$$
goes to $0$ as $x+y\to\infty$ (integrate by parts once, integrating the $e^{ik(x+y)}$). The transform that results is
$$\tilde f(k) = \int_0^\infty f(x)\left(\cos(kx)-\frac1k\sin(kx)\right)dx$$
with inverse transform
$$f(x) = \int_0^\infty\tilde f(k)\left(\cos(kx)-\frac1k\sin(kx)\right)\frac{2k^2}{\pi(k^2+1)}\,dk.$$
56) a) It follows from the results of Example 7.6 and problem 37 that
$$P(x,y,z,t) = \frac{1}{\sqrt{2\pi}}\sum_{j,k=0}^{\infty}\cos\Big(\frac{j\pi x}{L}\Big)\cos\Big(\frac{k\pi y}{L}\Big)\int_{-\infty}^{\infty}a_{jk}(\omega)\,e^{i\sqrt{\frac{\omega^2}{c^2}-\frac{\pi^2}{L^2}(j^2+k^2)}\,z-i\omega t}\,d\omega.$$
To determine the coefficients $a_{jk}$ use the boundary condition
$$\frac{\partial P}{\partial z}\bigg|_{z=0} = -\rho_0\,u'(t)$$
from which it follows that
$$a_{jk}(\omega) = \frac{\rho_0\,\omega\,\hat u(\omega)}{\sqrt{\frac{\omega^2}{c^2}-\frac{\pi^2}{L^2}(j^2+k^2)}}\;\frac{4}{L^2}\int_A\cos\Big(\frac{j\pi x}{L}\Big)\cos\Big(\frac{k\pi y}{L}\Big)\,dx\,dy.$$
Here $\hat u$ is the Fourier transform of $u$. Thus
$$P(x,y,0,t) = \frac{4\rho_0}{L^2\sqrt{2\pi}}\sum_{j,k=0}^{\infty}\cos\Big(\frac{j\pi x}{L}\Big)\cos\Big(\frac{k\pi y}{L}\Big)\int_A\cos\Big(\frac{j\pi x'}{L}\Big)\cos\Big(\frac{k\pi y'}{L}\Big)\,dx'\,dy'
\cdot\int_{-\infty}^{\infty}\frac{\omega\,\hat u(\omega)}{\sqrt{\frac{\omega^2}{c^2}-\frac{\pi^2}{L^2}(j^2+k^2)}}\,e^{-i\omega t}\,d\omega.$$
b) If $A = L^2$ then
$$\int_A\cos\Big(\frac{j\pi x}{L}\Big)\cos\Big(\frac{k\pi y}{L}\Big)\,dx\,dy = L^2\,\delta_{0j}\,\delta_{0k}$$
so that
$$P(x,y,z,t) = \frac{\rho_0\,c}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat u(\omega)\,e^{i\frac{\omega}{c}z-i\omega t}\,d\omega = \rho_0\,c\,u\Big(t-\frac zc\Big)$$
so that $P(x,y,0,t) = \rho_0\,c\,u(t)$.
57) The solutions to
$$\left(\frac{d^2}{dx^2}+\frac1x\,\frac{d}{dx}-\frac{m^2}{x^2}+\eta\right)\psi(x) = 0$$
are
$$\psi(x) = A\,J_m(\sqrt\eta\,x)+B\,Y_m(\sqrt\eta\,x).$$
Let
$$\psi_-(x) = Y_m'(\sqrt\eta\,)\,J_m(\sqrt\eta\,x) - J_m'(\sqrt\eta\,)\,Y_m(\sqrt\eta\,x)$$
and
$$\psi_+(x) = H_m^{(+)}(\sqrt\eta\,x).$$
Note that $\psi_-'(1) = 0$ and that
$$W\big(\psi_-(x),\psi_+(x)\big) = -\frac{2i}{\pi x}\,\sqrt\eta\,H_m^{(+)}{}'(\sqrt\eta\,).$$
The outgoing Green's function for $L-\eta$ is
$$G_\eta(x,y) = \frac{i\pi x\,\psi_-(x_<)\,\psi_+(x_>)}{2\sqrt\eta\,H_m^{(+)}{}'(\sqrt\eta\,)}.$$
Since $H_m^{(+)}{}'(\sqrt\eta\,)\ne0$ for any $\eta$ there is no discrete spectrum (the Green's function has no poles). The continuous spectrum is given by $(0,\infty)$ with continuum eigenfunctions $\psi_\eta(x) = N(\eta)\,\psi_-(x)$. To determine the normalization $N(\eta)$ we may use the asymptotic form
$$\psi_-(x) \approx \sqrt{\frac{2}{\pi\sqrt\eta\,x}}\left(Y_m'(\sqrt\eta\,)\cos\Big(\sqrt\eta\,x-\frac{m\pi}{2}-\frac\pi4\Big) - J_m'(\sqrt\eta\,)\sin\Big(\sqrt\eta\,x-\frac{m\pi}{2}-\frac\pi4\Big)\right).$$
Using
$$a\cos\theta+b\sin\theta = \sqrt{a^2+b^2}\,\cos\Big(\theta-\arctan\frac ba\Big)$$
and comparing with the sine and cosine transforms one finds
$$N(\eta)^2 = \frac{1}{2\sqrt\eta\,\Big(J_m'(\sqrt\eta\,)^2+Y_m'(\sqrt\eta\,)^2\Big)}.$$
Given $f:(1,\infty)\to\mathbb{C}$ and changing the continuous spectral integration variable from $\eta$ to $k=\sqrt\eta$ one obtains the transform
$$\tilde f(k) = \int_1^\infty\frac{Y_m'(k)\,J_m(kx)-J_m'(k)\,Y_m(kx)}{\sqrt{J_m'(k)^2+Y_m'(k)^2}}\,f(x)\,x\,dx$$
and the inverse transform
$$f(x) = \int_0^\infty\frac{Y_m'(k)\,J_m(kx)-J_m'(k)\,Y_m(kx)}{\sqrt{J_m'(k)^2+Y_m'(k)^2}}\,\tilde f(k)\,k\,dk.$$
58) Let $G(k,r,\theta,z,r',\theta',z')$ be the Green's function for the Helmholtz operator $\nabla^2+k^2$ satisfying Neumann conditions on the surfaces of the wedge $\theta = 0,\theta_0$, Dirichlet conditions at the bottom of the wedge $z=0$ and the radiation condition at spatial infinity. Let $\hat P(r,\theta,z,\omega)$ and $\hat f(r,\theta,\omega)$ be the Fourier transforms in $t$ of $P$ and $f$ respectively. Then
$$\hat P(r,\theta,z,\omega) = \int_0^\infty\!\!\int_0^{\theta_0}\frac{\partial G\big(\frac\omega c,r,\theta,z,r',\theta',z'\big)}{\partial z'}\bigg|_{z'=0}\,\hat f(r',\theta',\omega)\,r'\,d\theta'\,dr'.$$
To find $G(k,r,\theta,z,r',\theta',z')$ eigenfunction expansions in $\theta$ and $z$ will be used, combined with the Wronskian representation in $r$. One has
$$G(k,r,\theta,z,r',\theta',z') = \frac{4}{\pi\theta_0}\sum_{j=0}^{\infty}\int_0^\infty\sin(qz)\sin(qz')\cos\Big(\frac{j\pi\theta}{\theta_0}\Big)\cos\Big(\frac{j\pi\theta'}{\theta_0}\Big)\,g_j(q,r,r')\,dq$$
with
$$\left(\frac{\partial^2}{\partial r^2}+\frac1r\,\frac{\partial}{\partial r}-\frac{j^2\pi^2}{\theta_0^2\,r^2}+k^2-q^2\right)g_j(q,r,r') = \frac{\delta(r-r')}{r}.$$
One finds
$$g_j(q,r,r') = \frac{\pi}{2}\,J_{\frac{j\pi}{\theta_0}}\!\Big(\sqrt{k^2-q^2}\;r_<\Big)\,H^{(1)}_{\frac{j\pi}{\theta_0}}\!\Big(\sqrt{k^2-q^2}\;r_>\Big).$$
Thus
$$P(r,\theta,z,t) = \frac{2}{\theta_0\sqrt{2\pi}}\sum_{j=0}^{\infty}\int_{-\infty}^{\infty}\!\int_0^{\infty}\!\int_0^{\theta_0}\!\int_0^{\infty}
\sin(qz)\cos\Big(\frac{j\pi\theta}{\theta_0}\Big)\cos\Big(\frac{j\pi\theta'}{\theta_0}\Big)\,
J_{\frac{j\pi}{\theta_0}}\!\Big(\sqrt{\tfrac{\omega^2}{c^2}-q^2}\;r_<\Big)\,H^{(1)}_{\frac{j\pi}{\theta_0}}\!\Big(\sqrt{\tfrac{\omega^2}{c^2}-q^2}\;r_>\Big)\,e^{-i\omega t}
\cdot\hat f(r',\theta',\omega)\,q\,dq\,r'\,d\theta'\,dr'\,d\omega.$$
59) Consider the solutions of
$$\big(\nabla^2+k(r)^2-\eta\big)\,\psi(r,\theta,\phi) = 0$$
where $k(r) = k_-$ for $r<R$ and $k(r) = k_+$ for $r>R$. The arbitrary solution is a superposition of solutions of the form $\psi(r,\theta,\phi) = Y_{lm}(\theta,\phi)\,R_l(r)$ where
$$\left(\frac{\partial^2}{\partial r^2}+\frac2r\,\frac{\partial}{\partial r}-\frac{l(l+1)}{r^2}+k(r)^2-\eta\right)R_l(r) = 0.$$
Note that
$$R_l(r) = N_l\begin{cases}j_l\big(\sqrt{k_-^2-\eta}\;r\big) & \text{if } 0<r<R\\[4pt]
a_l\,h_l^{(1)}\big(\sqrt{k_+^2-\eta}\;r\big)+b_l\,h_l^{(2)}\big(\sqrt{k_+^2-\eta}\;r\big) & \text{if } r>R.\end{cases}$$
Matching $R_l$ and $R_l'$ at $r=R$ and writing $z_\pm = \sqrt{k_\pm^2-\eta}\,R$ for brevity,
$$\begin{pmatrix}a_l\\ b_l\end{pmatrix}
= \begin{pmatrix}h_l^{(1)}(z_+) & h_l^{(2)}(z_+)\\ h_l^{(1)}{}'(z_+) & h_l^{(2)}{}'(z_+)\end{pmatrix}^{-1}
\begin{pmatrix}j_l(z_-)\\ \frac{z_-}{z_+}\,j_l'(z_-)\end{pmatrix}
= \frac{1}{W\big(h_l^{(1)}(z_+),\,h_l^{(2)}(z_+)\big)}
\begin{pmatrix}h_l^{(2)}{}'(z_+) & -h_l^{(2)}(z_+)\\ -h_l^{(1)}{}'(z_+) & h_l^{(1)}(z_+)\end{pmatrix}
\begin{pmatrix}j_l(z_-)\\ \frac{z_-}{z_+}\,j_l'(z_-)\end{pmatrix}.$$
Since $W\big(h_l^{(1)}(z),h_l^{(2)}(z)\big) = -\frac{2i}{z^2}$ this gives
$$\begin{pmatrix}a_l\\ b_l\end{pmatrix} = \frac{i\,z_+^2}{2}
\begin{pmatrix}h_l^{(2)}{}'(z_+)\,j_l(z_-) - \frac{z_-}{z_+}\,h_l^{(2)}(z_+)\,j_l'(z_-)\\[4pt]
-h_l^{(1)}{}'(z_+)\,j_l(z_-) + \frac{z_-}{z_+}\,h_l^{(1)}(z_+)\,j_l'(z_-)\end{pmatrix}.$$
For $k_-^2>\eta>k_+^2$ one has $\sqrt{k_+^2-\eta} = i\sqrt{\eta-k_+^2}$. There is discrete spectrum given by those values of $\eta$ for which $b_l = 0$,
$$\sqrt{k_+^2-\eta}\;h_l^{(1)}{}'\big(\sqrt{k_+^2-\eta}\,R\big)\,j_l\big(\sqrt{k_-^2-\eta}\,R\big)
- \sqrt{k_-^2-\eta}\;h_l^{(1)}\big(\sqrt{k_+^2-\eta}\,R\big)\,j_l'\big(\sqrt{k_-^2-\eta}\,R\big) = 0.$$
The continuous spectrum is $\eta<k_+^2$. For large $r$ one has
$$R_l(r) \approx N_l\left(a_l\,\frac{i^{-l-1}}{\sqrt{k_+^2-\eta}\;r}\,e^{i\sqrt{k_+^2-\eta}\,r}
+ b_l\,\frac{i^{l+1}}{\sqrt{k_+^2-\eta}\;r}\,e^{-i\sqrt{k_+^2-\eta}\,r}\right)
= \frac{N_l}{\sqrt{k_+^2-\eta}\;r}\Big[\big(i^{-l-1}a_l+i^{l+1}b_l\big)\cos\big(\sqrt{k_+^2-\eta}\,r\big)
+ \big(i^{-l}a_l-i^{l+2}b_l\big)\sin\big(\sqrt{k_+^2-\eta}\,r\big)\Big]$$
so that, by comparison with the cosine transform,
$$N_l^2 = \frac1\pi\,\frac{1}{\big(i^{-l-1}a_l+i^{l+1}b_l\big)^2+\big(i^{-l}a_l-i^{l+2}b_l\big)^2}.$$
Midterm, Fall 1999
1) Find a general form for the solutions to the system of equations
$$\frac{du}{dx}+\frac{dv}{dx}+v = 0$$
$$\frac{du}{dx}+u+2\,\frac{dv}{dx} = 0.$$
2) Let $S = \{(x,y,z)\mid x^2+y^2+z^2 = R^2\}$ be the sphere of radius $R$. Let $d\sigma$ be the surface element on $S$. Find
$$\int_S z^2\,d\sigma.$$
3) Compute
$$\int_{-\infty}^{\infty}\frac{e^{ikx}}{x^2+1}\,dx.$$
Here $k\in(-\infty,\infty)$ is to be regarded as a parameter. Take care to consider both positive and negative $k$.
4) Let
$$\left(\frac{\partial^2}{\partial x^2}-4\,\frac{\partial^2}{\partial t^2}\right)u(x,t) = 0$$
with $u(x,0) = \cos x$ and $\frac{\partial u}{\partial t}(x,0) = \sin x$. Find $u$.

Solutions
1) Setting
$$\begin{pmatrix}u\\ v\end{pmatrix} = \begin{pmatrix}A\\ B\end{pmatrix}e^{kx}$$
and substituting into the differential equations one obtains
$$\begin{pmatrix}k & k+1\\ k+1 & 2k\end{pmatrix}\begin{pmatrix}A\\ B\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix}.$$
This equation has non-zero solutions only when
$$\det\begin{pmatrix}k & k+1\\ k+1 & 2k\end{pmatrix} = 2k^2-(k+1)^2 = 0.$$
Thus $k^2-2k-1 = 0$ so that $k = 1\pm\sqrt2$. Given $k$ one may choose
$$\begin{pmatrix}A\\ B\end{pmatrix} = \begin{pmatrix}k+1\\ -k\end{pmatrix}$$
so that the most general solution is
$$\begin{pmatrix}u\\ v\end{pmatrix} = c_1\begin{pmatrix}2+\sqrt2\\ -1-\sqrt2\end{pmatrix}e^{(1+\sqrt2)x} + c_2\begin{pmatrix}2-\sqrt2\\ -1+\sqrt2\end{pmatrix}e^{(1-\sqrt2)x}.$$
2) One has
$$\int_S z^2\,d\sigma = \int_0^{2\pi}\!\!\int_0^\pi(R\cos\theta)^2\,R^2\sin\theta\,d\theta\,d\phi
= 2\pi R^4\int_0^\pi\cos^2\theta\,\sin\theta\,d\theta
= 2\pi R^4\left[-\frac13\cos^3\theta\right]_0^\pi = \frac{4\pi R^4}{3}.$$
3) The integrand has simple poles at $\pm i$. If $k>0$ one must close the contour in the upper half plane, if $k<0$ one must close in the lower half plane. One has
$$\int_{-\infty}^{\infty}\frac{e^{ikx}}{x^2+1}\,dx
= \begin{cases}2\pi i\,\operatorname{Res}\big(\frac{e^{ikz}}{z^2+1},\,i\big) & \text{if } k>0\\[4pt]
-2\pi i\,\operatorname{Res}\big(\frac{e^{ikz}}{z^2+1},\,-i\big) & \text{if } k<0\end{cases}
= \begin{cases}2\pi i\,\frac{e^{-k}}{2i} & \text{if } k>0\\[4pt]
-2\pi i\,\frac{e^{k}}{-2i} & \text{if } k<0\end{cases}
= \pi\,e^{-|k|}.$$
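The value $\pi e^{-|k|}$ is easy to confirm numerically; a sketch (the truncation of the integration range and the grid are arbitrary numerical choices):

```python
import numpy as np

def trap(y, x):
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x))

x = np.linspace(-1000.0, 1000.0, 2000001)
for k in (-2.0, -0.5, 0.5, 2.0):
    numeric = trap(np.exp(1j * k * x) / (x**2 + 1.0), x)
    assert abs(numeric - np.pi * np.exp(-abs(k))) < 1e-3, k
```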
4) Note first that the speed of sound $c = \frac12$. Then the general solution is
$$u(x,t) = f\!\Big(x-\frac t2\Big) + g\!\Big(x+\frac t2\Big).$$
One then has at $t=0$
$$f(x)+g(x) = \cos x$$
and
$$-\frac12\,f'(x)+\frac12\,g'(x) = \sin x.$$
Thus
$$-f(x)+g(x) = -2\cos x + c$$
so that
$$f(x) = \frac32\cos x - \frac c2$$
and
$$g(x) = -\frac12\cos x + \frac c2$$
giving
$$u(x,t) = \frac32\cos\!\Big(x-\frac t2\Big) - \frac12\cos\!\Big(x+\frac t2\Big).$$
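The solution can be checked against both initial conditions and the equation $\big(\frac{\partial^2}{\partial x^2}-4\frac{\partial^2}{\partial t^2}\big)u = 0$ by finite differences; a sketch (the grid and step size are arbitrary numerical choices):

```python
import numpy as np

def u(x, t):
    return 1.5 * np.cos(x - 0.5 * t) - 0.5 * np.cos(x + 0.5 * t)

x = np.linspace(-3.0, 3.0, 61)
h = 1e-4

# initial conditions: u(x,0) = cos x, u_t(x,0) = sin x
assert np.allclose(u(x, 0.0), np.cos(x), atol=1e-12)
assert np.allclose((u(x, h) - u(x, -h)) / (2 * h), np.sin(x), atol=1e-8)

# the wave equation u_xx - 4 u_tt = 0 at several times
for t in (0.0, 0.7, 2.1):
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    assert np.allclose(u_xx - 4 * u_tt, 0.0, atol=1e-5)
```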
Final, Fall 1999
1) Compute the isentropic speed of sound
$$c = \sqrt{\left(\frac{\partial P}{\partial\rho}\right)_{\!S}}$$
given the following equation of state: $P = \rho RT\,(1+A\rho P)$. Here $A$ is a constant.

2) Let $\kappa$ be a real, positive constant and consider the one dimensional heat equation
$$\left(\frac{\partial}{\partial t}-\kappa\,\frac{\partial^2}{\partial x^2}\right)T(x,t) = 0$$
on the interval $(0,L)$ with the boundary conditions $T(0,t) = A\cos\omega t$ and $T(L,t) = 0$. Find the steady state solution. Are there any resonant frequencies?

3) Consider the differential equation
$$\left(\frac{d^2}{dr^2}+\frac1r\,\frac{d}{dr}-\frac{1}{9r^2}+k^2\right)f(r) = 0.$$
Find the leading order asymptotic forms for both small and large $r$. Express the exact solution in terms of Bessel functions of order $\frac13$.
Final Exam Solutions, Fall 1999
1) As in the notes
$$\left(\frac{\partial P}{\partial\rho}\right)_{\!S} = -\frac{c_P}{c_V}\;\frac{\big(\frac{\partial P}{\partial T}\big)_\rho}{\big(\frac{\partial\rho}{\partial T}\big)_P}.$$
From the equation of state one finds
$$\left(\frac{\partial P}{\partial T}\right)_{\!\rho} = \rho R\,(1+A\rho P) + A\rho^2RT\left(\frac{\partial P}{\partial T}\right)_{\!\rho}$$
so that
$$\left(\frac{\partial P}{\partial T}\right)_{\!\rho} = \frac{\rho R\,(1+A\rho P)}{1-A\rho^2RT}$$
and
$$0 = \rho R\,(1+A\rho P) + RT\,(1+2A\rho P)\left(\frac{\partial\rho}{\partial T}\right)_{\!P}$$
so that
$$\left(\frac{\partial\rho}{\partial T}\right)_{\!P} = -\frac\rho T\;\frac{1+A\rho P}{1+2A\rho P}.$$
Thus
$$\left(\frac{\partial P}{\partial\rho}\right)_{\!S} = \frac{c_P}{c_V}\;\frac{RT\,(1+2A\rho P)}{1-A\rho^2RT}.$$
2) The steady state solutions will be of the form $T(x,t) = \operatorname{Re}\big(T_A(x)\,e^{-i\omega t}\big)$ with
$$\left(-i\omega-\kappa\,\frac{d^2}{dx^2}\right)T_A(x) = 0,$$
$$T_A(0) = A \quad\text{and}\quad T_A(L) = 0.$$
Then writing
$$k = \sqrt{\frac{i\omega}{\kappa}} = (1+i)\sqrt{\frac{\omega}{2\kappa}}$$
one has
$$\left(\frac{d^2}{dx^2}+k^2\right)T_A(x) = 0$$
so that
$$T_A(x) = c_1\,e^{ikx}+c_2\,e^{-ikx}.$$
The boundary conditions then give
$$\begin{pmatrix}1 & 1\\ e^{ikL} & e^{-ikL}\end{pmatrix}\begin{pmatrix}c_1\\ c_2\end{pmatrix} = \begin{pmatrix}A\\ 0\end{pmatrix}.$$
Thus
$$\begin{pmatrix}c_1\\ c_2\end{pmatrix} = -\frac{1}{2i\sin kL}\begin{pmatrix}e^{-ikL} & -1\\ -e^{ikL} & 1\end{pmatrix}\begin{pmatrix}A\\ 0\end{pmatrix}$$
so that
$$T_A(x) = \frac{A}{2i\sin kL}\Big(-e^{ik(x-L)}+e^{-ik(x-L)}\Big) = A\,\frac{\sin\big(k(L-x)\big)}{\sin kL}.$$
Note that $\operatorname{Im}k>0$ so that $\sin kL = \sin\big((1+i)\sqrt{\frac{\omega}{2\kappa}}\,L\big)$ is never $0$ when $\omega$ is real. Thus there are no resonant frequencies.
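The steady-state amplitude can be verified directly with complex arithmetic; a sketch ($A$, $L$, $\omega$ and $\kappa$ are arbitrary values): $T_A$ should satisfy both boundary conditions and $T_A'' = -k^2T_A$.

```python
import cmath

A, L, omega, kappa = 2.0, 1.3, 5.0, 0.7
k = (1 + 1j) * cmath.sqrt(omega / (2 * kappa))

def TA(x):
    # T_A(x) = A (-e^{ik(x-L)} + e^{-ik(x-L)}) / (2i sin kL)
    return A * (-cmath.exp(1j * k * (x - L)) + cmath.exp(-1j * k * (x - L))) / (2j * cmath.sin(k * L))

assert abs(TA(0.0) - A) < 1e-12      # T_A(0) = A
assert abs(TA(L)) < 1e-12            # T_A(L) = 0

h = 1e-4
for x in (0.2, 0.6, 1.1):
    second = (TA(x + h) - 2 * TA(x) + TA(x - h)) / h**2
    assert abs(second + k**2 * TA(x)) < 1e-5   # T_A'' + k^2 T_A = 0
```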
3) The indicial equation for this differential equation is
$$r(r-1)+r-\frac19 = r^2-\frac19 = 0$$
so that $r = \pm\frac13$. Thus, as $r\to0$, there are linearly independent solutions, $f_1$ and $f_2$, with
$$f_1(r) \approx r^{\frac13}$$
and
$$f_2(r) \approx r^{-\frac13}.$$
To obtain the large $r$ asymptotics first note that if $f(r) = \frac{1}{\sqrt r}\,g(r)$ then
$$\left(\frac{d^2}{dr^2}-\frac{\frac19-\frac14}{r^2}+k^2\right)g(r) = 0.$$
It now follows that the large $r$ asymptotic form for $g$ is
$$g(r) \approx c_1\,e^{ikr}+c_2\,e^{-ikr}$$
so that, for large $r$,
$$f(r) \approx \frac{1}{\sqrt r}\Big(c_1\,e^{ikr}+c_2\,e^{-ikr}\Big).$$
Finally, the equation is related to Bessel's differential equation of order $\frac13$ through
$$f(r) = a\,J_{\frac13}(kr)+b\,Y_{\frac13}(kr).$$
Midterm, Spring 2000
1) Consider
$$L = i\,\frac{d^3}{dx^3}$$
on $(0,1)$. Formulate periodic boundary conditions for this operator and show that they make $L$ self adjoint. Find the eigenvalues and eigenfunctions.

2) Consider a rectangular membrane as pictured below. Three sides are clamped.

[Figure: a rectangular membrane occupying $0<x<L$, $0<y<W$.]

On the other side a driver of some sort is mounted so that the membrane's displacement at $x=0$ is $u(0,y,t) = \mathcal F(y)\,e^{-i\omega t}$.
a) Find the displacement $u(x,y,t)$.
b) Now assume that $L\gg W$ and that
$$\mathcal F(y) = \begin{cases}1 & \text{if } 0<y<\frac W2\\ -1 & \text{if } \frac W2<y<W.\end{cases}$$
For what range of frequencies is there no appreciable displacement at $x = \frac L2$?

3) Consider a spherical (of radius $r_0\ll\frac c\omega$) sound source in free space whose surface is vibrating with velocity $u(\theta,\phi,t) = A\cos\theta\,e^{-i\omega t}$. Find the pressure amplitude, $P_A(r,\theta,\phi)$, for $\frac{\omega r}{c}\gg1$.
Solutions
1) Periodic boundary conditions for a third order operator are $f(0) = f(1)$, $f'(0) = f'(1)$, $f''(0) = f''(1)$. For self-adjointness we need
$$\int_0^1\overline{f}\;i\,\frac{d^3g}{dx^3}\,dx = \int_0^1\overline{i\,\frac{d^3f}{dx^3}}\;g\,dx.$$
But
$$\int_0^1\overline{f}\;i\,\frac{d^3g}{dx^3}\,dx = i\Big[\overline{f}\,g''-\overline{f}'\,g'+\overline{f}''\,g\Big]_0^1 + \int_0^1\overline{i\,\frac{d^3f}{dx^3}}\;g\,dx$$
so that we need
$$\Big[\overline{f}\,g''-\overline{f}'\,g'+\overline{f}''\,g\Big]_0^1 = 0.$$
Periodic boundary conditions clearly satisfy this. Finally, we need to solve
$$i\,\frac{d^3f}{dx^3} = \lambda f.$$
This equation has constant coefficients so that $f(x) = e^{kx}$ must work. Substituting in one finds $ik^3 = \lambda$. But periodicity implies that $1 = e^k$ so that $k = 2j\pi i$ for any $j\in\{0,\pm1,\pm2,\dots\}$. Thus the eigenvalues are $\lambda_j = (2j\pi)^3$ corresponding to the (unnormalized) eigenfunctions $f_j(x) = e^{2j\pi ix}$.
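A quick check of the eigenvalue computation (a sketch): $f_j(x) = e^{2\pi ijx}$ is periodic together with all its derivatives, and $i f_j''' = i(2\pi ij)^3 f_j = (2\pi j)^3 f_j$.

```python
import cmath

for j in (-2, -1, 0, 1, 3):
    k = 2 * cmath.pi * 1j * j            # f_j(x) = e^{k x} with k = 2 pi i j
    # periodicity of f_j and its derivatives: e^{k * 0} = e^{k * 1}
    assert abs(cmath.exp(k * 0) - cmath.exp(k * 1)) < 1e-12
    # eigenvalue: i k^3 = (2 pi j)^3
    eig = 1j * k**3
    assert abs(eig - (2 * cmath.pi * j) ** 3) < 1e-6
```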
2a) Set $u(x,y,t) = u_A(x,y)\,e^{-i\omega t}$ and expand $u_A$ in an eigenfunction expansion with respect to $\frac{d^2}{dy^2}$ in $(0,W)$ with Dirichlet boundary conditions. One has
$$u_A(x,y) = \sum_{j=1}^{\infty}f_j(x)\,\sqrt{\frac2W}\,\sin\frac{j\pi y}{W}$$
where
$$\left(\frac{d^2}{dx^2}-\frac{j^2\pi^2}{W^2}+\frac{\omega^2}{c^2}\right)f_j(x) = 0.$$
The boundary conditions $u_A(0,y) = \mathcal F(y)$ and $u_A(L,y) = 0$ imply that
$$f_j(0) = \sqrt{\frac2W}\int_0^W\sin\Big(\frac{j\pi y}{W}\Big)\,\mathcal F(y)\,dy$$
and $f_j(L) = 0$. It follows that
$$f_j(x) = \sqrt{\frac2W}\;\frac{\int_0^W\sin\big(\frac{j\pi\tilde y}{W}\big)\,\mathcal F(\tilde y)\,d\tilde y}{\sin\Big(\sqrt{\frac{\omega^2}{c^2}-\frac{j^2\pi^2}{W^2}}\,L\Big)}\;\sin\Big(\sqrt{\frac{\omega^2}{c^2}-\frac{j^2\pi^2}{W^2}}\,(L-x)\Big)$$
so that
$$u_A(x,y) = \frac2W\sum_{j=1}^{\infty}\frac{\int_0^W\sin\big(\frac{j\pi\tilde y}{W}\big)\,\mathcal F(\tilde y)\,d\tilde y}{\sin\Big(\sqrt{\frac{\omega^2}{c^2}-\frac{j^2\pi^2}{W^2}}\,L\Big)}\,\sin\Big(\sqrt{\frac{\omega^2}{c^2}-\frac{j^2\pi^2}{W^2}}\,(L-x)\Big)\,\sin\frac{j\pi y}{W}.$$
b) With
$$\mathcal F(y) = \begin{cases}1 & \text{if } 0<y<\frac W2\\ -1 & \text{if } \frac W2<y<W\end{cases}$$
one has
$$\int_0^W\sin\Big(\frac{j\pi\tilde y}{W}\Big)\,\mathcal F(\tilde y)\,d\tilde y = \int_0^{\frac W2}\sin\Big(\frac{j\pi\tilde y}{W}\Big)\,d\tilde y - \int_{\frac W2}^{W}\sin\Big(\frac{j\pi\tilde y}{W}\Big)\,d\tilde y.$$
Note that for $j=1$
$$\int_0^{\frac W2}\sin\Big(\frac{\pi\tilde y}{W}\Big)\,d\tilde y - \int_{\frac W2}^{W}\sin\Big(\frac{\pi\tilde y}{W}\Big)\,d\tilde y = 0$$
while for $j=2$
$$\int_0^{\frac W2}\sin\Big(\frac{2\pi\tilde y}{W}\Big)\,d\tilde y - \int_{\frac W2}^{W}\sin\Big(\frac{2\pi\tilde y}{W}\Big)\,d\tilde y \ne 0.$$
Thus
$$u_A(x,y) = \frac2W\sum_{j=2}^{\infty}\frac{\int_0^W\sin\big(\frac{j\pi\tilde y}{W}\big)\,\mathcal F(\tilde y)\,d\tilde y}{\sin\Big(\sqrt{\frac{\omega^2}{c^2}-\frac{j^2\pi^2}{W^2}}\,L\Big)}\,\sin\Big(\sqrt{\frac{\omega^2}{c^2}-\frac{j^2\pi^2}{W^2}}\,(L-x)\Big)\,\sin\frac{j\pi y}{W}.$$
There is thus appreciable displacement at $x = \frac L2$ only if $\frac{\omega^2}{c^2}\ge\frac{4\pi^2}{W^2}$.
3) One has
$$P_A(r,\theta,\phi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l}a_{lm}\,h_l^{(+)}\Big(\frac\omega c\,r\Big)\,Y_{lm}(\theta,\phi)$$
with
$$\frac{\partial P_A}{\partial r}\bigg|_{r=r_0} = i\omega\rho_0\,A\cos\theta.$$
First note that
$$\cos\theta = -\sqrt{\frac{4\pi}{3}}\,Y_{1,0}(\theta,\phi)$$
so that $a_{lm} = 0$ unless $l=1$ and $m=0$. Thus
$$P_A(r,\theta,\phi) = a_{1,0}\,h_1^{(+)}\Big(\frac\omega c\,r\Big)\,Y_{1,0}(\theta,\phi)$$
so that
$$a_{1,0}\,\frac\omega c\,h_1^{(+)}{}'\Big(\frac\omega c\,r_0\Big) = -i\omega\rho_0\,A\,\sqrt{\frac{4\pi}{3}}.$$
It follows that
$$P_A(r,\theta,\phi) = \frac{i\rho_0\,cA}{h_1^{(+)}{}'\big(\frac\omega c\,r_0\big)}\,h_1^{(+)}\Big(\frac\omega c\,r\Big)\cos\theta.$$
At large $r$ one has
$$P_A(r,\theta,\phi) \approx \frac{-i\rho_0\,cA}{h_1^{(+)}{}'\big(\frac\omega c\,r_0\big)}\;\frac{e^{i\frac\omega c r}}{\frac\omega c\,r}\,\cos\theta.$$
Final, Spring 2000
1) Consider sound propagation down a long pipe. Let the cross section of the pipe be rectangular, with height L and width W : 0 < x < L and 0 < y < W . Let there be a baffled piston occupying an area A mounted at z = 0 and let the velocity of the piston be u0 (t). a) Solve the wave equation to find the acoustic pressure, P (x, y, z, t). Don’t attempt to evaluate either sums or integrals. b) Assume that the piston fills the entire cross section. Find P (x, y, z, t) explicitly in terms of u0 .
2) Consider a resonator built of two concentric rigid spheres, one of radius r1 , the other of radius r2 . Find functions whose zeros are the resonant frequencies (recall that the resonant frequencies are c times the square root of the Neumann eigenvalues of −∇2 ). Given the resonant frequencies, find the resonant modes. Don’t compute the normalization.
3) Imagine sound radiating into free space out of an open manhole of radius $R$. Assume that the ground surrounding the manhole is solid and impenetrable. If the acoustic pressure field down in the manhole below the ground is a plane wave
$$P(r,\theta,z,t) = a\,e^{i\frac{\omega_0}{c}z-i\omega_0 t}$$
find the resulting acoustic pressure above ground. The result should involve a single integral which you should not attempt to evaluate.
Solutions
1) a) Fourier transforming in $t$, expanding in the Neumann eigenfunctions of $\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}$ in $(0,L)\times(0,W)$ and solving the resulting equation in $z$ one has
$$P(x,y,z,t) = \frac{1}{\sqrt{2\pi}}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\psi_j(x)\,\psi_k(y)\int_{-\infty}^{\infty}A_{jk}(\omega)\,e^{i\sqrt{\frac{\omega^2}{c^2}-\pi^2\big(\frac{j^2}{L^2}+\frac{k^2}{W^2}\big)}\,z-i\omega t}\,d\omega$$
with
$$\psi_j(x) = \frac{1}{\sqrt L}\begin{cases}1 & \text{if } j=0\\ \sqrt2\,\cos\big(\frac{j\pi x}{L}\big) & \text{if } j>0\end{cases}$$
and
$$\psi_k(y) = \frac{1}{\sqrt W}\begin{cases}1 & \text{if } k=0\\ \sqrt2\,\cos\big(\frac{k\pi y}{W}\big) & \text{if } k>0.\end{cases}$$
To find the coefficients $A_{jk}$ use
$$\frac{\partial P}{\partial z}\bigg|_{z=0} = i\rho_0\,\frac{1}{\sqrt{2\pi}}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\psi_j(x)\,\psi_k(y)\int_{-\infty}^{\infty}\omega\,\hat u_{jk}(\omega)\,e^{-i\omega t}\,d\omega$$
with
$$\hat u_{jk}(\omega) = \int_A\psi_j(x)\,\psi_k(y)\,dx\,dy\;\cdot\;\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}u_0(t)\,e^{i\omega t}\,dt.$$
It follows that
$$A_{jk}(\omega) = \frac{\rho_0\,\omega\,\hat u_{jk}(\omega)}{\sqrt{\frac{\omega^2}{c^2}-\pi^2\big(\frac{j^2}{L^2}+\frac{k^2}{W^2}\big)}}.$$
b) If $A$ is $(0,L)\times(0,W)$ then $\hat u_{jk} = 0$ unless $j=0=k$. Thus
$$P(x,y,z,t) = \frac{\rho_0\,c}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}u_0(t')\,e^{i\omega(t'-t+\frac zc)}\,dt'\,d\omega = \rho_0\,c\,u_0\Big(t-\frac zc\Big).$$

2) In spherical coordinates eigenfunctions of $\nabla^2$ with eigenvalue $-k^2$ are of the form
$$\psi(r,\theta,\phi) = N\big(j_l(kr)+\alpha\,y_l(kr)\big)\,Y_{lm}(\theta,\phi)$$
with either $k=\alpha=0$ and $l=0$ or $l\ge0$ and
$$j_l'(kr_1)+\alpha\,y_l'(kr_1) = 0 = j_l'(kr_2)+\alpha\,y_l'(kr_2).$$
It follows that
$$\det\begin{pmatrix}j_l'(kr_1) & y_l'(kr_1)\\ j_l'(kr_2) & y_l'(kr_2)\end{pmatrix} = j_l'(kr_1)\,y_l'(kr_2)-y_l'(kr_1)\,j_l'(kr_2) = 0.$$
Let $k_{0,0} = 0$ and let $k_{l,j}$ for $j=1,2,3,\dots$ be the solutions of the above equation. Then $c\,k_{l,j}$ are the resonant frequencies. As for the resonant eigenfunctions, $\psi_{0,0,0} = N_{0,0,0}$ and, for $j>0$,
$$\psi_{l,m,j}(r,\theta,\phi) = N_{l,m,j}\left(j_l(k_{l,j}r)-\frac{j_l'(k_{l,j}r_2)}{y_l'(k_{l,j}r_2)}\,y_l(k_{l,j}r)\right)Y_{lm}(\theta,\phi).$$
3) Fourier transforming in $t$, Hankel transforming in $r$, expanding in a Fourier series in $\theta$ and solving the resulting equation in $z$ one has, for $z>0$,
$$P(r,\theta,z,t) = \frac{1}{2\pi}\sum_{m=-\infty}^{\infty}\int_{-\infty}^{\infty}\!\!\int_0^{\infty}A_m(k,\omega)\,J_m(kr)\,e^{i\sqrt{\frac{\omega^2}{c^2}-k^2}\,z+im\theta-i\omega t}\,k\,dk\,d\omega.$$
To find $A_m(k,\omega)$ note that at $z=0$ the normal derivative, $\frac{\partial P}{\partial z}$, is known,
$$\frac{\partial P}{\partial z}\bigg|_{z=0} = \frac{ia\omega_0}{c}\,e^{-i\omega_0 t}\cdot\begin{cases}1 & \text{if } r<R\\ 0 & \text{if } r>R\end{cases} = \frac{ia\omega_0}{c}\,e^{-i\omega_0 t}\int_0^{\infty}J_0(kr)\,p(k)\,k\,dk$$
with
$$p(k) = \int_0^R J_0(kr)\,r\,dr.$$
It follows that
$$A_m(k,\omega) = \delta_{m,0}\,\delta(\omega-\omega_0)\,\frac{2\pi a\,\omega\,p(k)}{c\,\sqrt{\frac{\omega^2}{c^2}-k^2}}.$$
Thus
$$P(r,\theta,z,t) = \frac{a\,\omega_0}{c}\int_0^{\infty}\frac{p(k)\,J_0(kr)}{\sqrt{\frac{\omega_0^2}{c^2}-k^2}}\,e^{i\sqrt{\frac{\omega_0^2}{c^2}-k^2}\,z-i\omega_0 t}\,k\,dk.$$