














CONTENTS
I. Units and Conversions
   A. Atomic Units
   B. Energy Conversions

II. Approximation Methods
   A. Semiclassical quantization
   B. Time-independent perturbation theory
   C. Linear variational method
   D. Orthogonalization
   E. MacDonald's Theorem
   F. DVR method for bound state energies

References
I. UNITS AND CONVERSIONS
A. Atomic Units
Throughout these notes we shall use the so-called Hartree atomic units, in which mass is reckoned in units of the electron mass (m_e = 9.1093826 × 10^-31 kg) and distance in terms of the Bohr radius (a_0 = 5.2917721 × 10^-2 nm). In this system of units the numerical values of the following four fundamental physical constants are unity by definition: the electron mass m_e, the elementary charge e, the reduced Planck constant ℏ, and the Coulomb force constant 1/(4πε_0).
The replacement of these constants by unity will greatly simplify the notation.
It is easiest to work problems entirely in atomic units, and then convert at the end to SI units, using, for example, 1 hartree (E_h) = 4.359744 × 10^-18 J and 1 bohr (a_0) = 5.291772 × 10^-11 m.
B. Energy Conversions
Atomic (Hartree) units of energy are commonly used by theoreticians to quantify electronic energy levels in atoms and molecules. From an experimental viewpoint, energy levels are often given in terms of electron volts (eV), wavenumber units (cm^-1), or kilocalories/mole (kcal/mol). From Planck's relation
E = hν = hc/λ
The relation between the Joule and the kilocalorie is
1 kcal = 4.184 kJ
Thus, 1 kcal/mol is one kilocalorie per mole of atoms (or molecules), or 4.184 × 10^3 J divided by Avogadro's number (6.022 × 10^23), which equals 6.9479 × 10^-21 J/molecule. The conversions between these (and other) energy units are given in numerous places on the web, for example web.utk.edu/˜rcompton/constants.
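As a quick check of this arithmetic, here is a minimal Matlab sketch (the constants used are standard values):

% Minimal sketch (standard constants): 1 kcal/mol expressed per molecule and in eV.
kcal = 4.184e3;             % J per kcal
NA   = 6.022e23;            % Avogadro's number, mol^-1
eV   = 1.602177e-19;        % J per eV
perMolecule = kcal/NA;      % J per molecule
fprintf('1 kcal/mol = %.4e J/molecule = %.4f eV\n', perMolecule, perMolecule/eV);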
II. APPROXIMATION METHODS
A. Semiclassical quantization
The Bohr-Sommerfeld quantization condition is
S = ∮ p · dq = (n + 1/2) h   (1)
You may have seen this as

S = ∮ p · dq = nh,

without the additional factor of 1/2. For motion in one dimension the phase integral can be evaluated as twice the integral between the classical turning points x_< and x_>,

S(E) = 2 ∫_{x_<}^{x_>} [2m(E − V(x))]^{1/2} dx.   (2)

For the harmonic oscillator, V(x) = kx^2/2, this integral can be done analytically, and the condition of Eq. (1) gives E = (n + 1/2)(k/m)^{1/2}(h/2π) = (n + 1/2)(k/m)^{1/2} ℏ. Since, for the harmonic oscillator, ω = (k/m)^{1/2}, we recover the quantization condition

E = (n + 1/2) ℏ ω

FIG. 1. Dependence on distance of a typical phase integral [Eq. (2)]: the integrand [2m(E − V(x))]^{1/2} plotted as a function of x between the turning points x_< and x_>.
As stated above, without the additional factor of 1/2, we would not have any zero-point energy, even though the level spacing would be exact. For a general potential, an analytic integration of p dq may not be possible. However, it is always possible to evaluate the integral of Eq. (2) numerically, as the area under the curve in Fig. 1. This is easier than numerical integration of the Schroedinger equation. Unfortunately, there is no guarantee that the Bohr-Sommerfeld quantization condition is exact.
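As an illustration, here is a minimal Matlab sketch (a harmonic potential with assumed k = m = 1, in atomic units where h = 2π) that evaluates the phase integral of Eq. (2) numerically and solves Eq. (1) for the energies by root finding:

% Minimal sketch (assumed V(x) = k x^2/2 with k = m = 1, atomic units, h = 2*pi):
% evaluate the phase integral of Eq. (2) numerically and solve Eq. (1) for E.
m = 1; k = 1;
V  = @(x) 0.5*k*x.^2;
xt = @(E) sqrt(max(2*E/k, 0));                        % classical turning points
S  = @(E) 2*integral(@(x) sqrt(max(2*m*(E - V(x)), 0)), -xt(E), xt(E));
for n = 0:3
    En = fzero(@(E) S(E) - (n + 0.5)*2*pi, n + 1);    % solve S(E) = (n + 1/2) h
    fprintf('n = %d   E(Bohr-Sommerfeld) = %.6f   exact = %.6f\n', ...
            n, En, (n + 0.5)*sqrt(k/m));
end

For the harmonic oscillator the semiclassical energies agree with the exact ones; for a different potential only the function V(x) and the turning points change.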
B. Time-independent perturbation theory
Suppose the full Hamiltonian can be expanded as
H = Ho + λH′
where the solutions to the zeroth-order Hamiltonian are known
H_o φ_n^(0) = E_n^(0) φ_n^(0).
Here the subscript n designates the particular value of the energy. We will then expand the solution to the full Hamiltonian ψn as
ψ_n = φ_n^(0) + λ φ_n^(1) + λ^2 φ_n^(2) + ...
If we substitute this expansion into the Schroedinger equation Hψn = Enψn, we obtain
Hψ_n = H_o φ_n^(0) + λ (H_o φ_n^(1) + H′ φ_n^(0)) + λ^2 (H_o φ_n^(2) + H′ φ_n^(1)) + ...   (3)
We similarly expand E_n = E_n^(0) + λ E_n^(1) + λ^2 E_n^(2) + ...
so that
E_n ψ_n = E_n^(0) φ_n^(0) + λ (E_n^(0) φ_n^(1) + E_n^(1) φ_n^(0)) + λ^2 (E_n^(0) φ_n^(2) + E_n^(1) φ_n^(1) + E_n^(2) φ_n^(0)) + ...   (4)
We assume that the Schroedinger equation is satisfied for all values of the perturbation parameter λ. This means that the terms multiplied by each power of λ in Eq. ( 3 ) must equal the terms multiplied by the same power of λ in Eq. ( 4 ). In other words
H_o φ_n^(0) = E_n^(0) φ_n^(0),
which is the unperturbed Schroedinger equation, and
H′ φ_n^(0) + H_o φ_n^(1) = E_n^(1) φ_n^(0) + E_n^(0) φ_n^(1).   (5)
Now, in the last equation, we can expand φ_n^(1) in terms of the solutions to the unperturbed equation, namely

φ_n^(1) = Σ_{k≠n} C_nk^(1) φ_k^(0).   (6)

Substituting Eq. (6) into Eq. (5), premultiplying by φ_n^(0), and integrating (using the orthonormality of the φ_k^(0)) gives the first-order energy E_n^(1) = 〈φ_n^(0)|H′|φ_n^(0)〉. Premultiplying instead by φ_k^(0) with k ≠ n gives the expansion coefficients C_nk^(1) = 〈φ_k^(0)|H′|φ_n^(0)〉 / (E_n^(0) − E_k^(0)), so that

φ_n^(1) = Σ_{k≠n} [〈φ_k^(0)|H′|φ_n^(0)〉 / (E_n^(0) − E_k^(0))] φ_k^(0).   (10)
Now, let’s consider the terms of order λ^2 in Eqs. ( 3 ) and ( 4 ). We have
H_o φ_n^(2) + H′ φ_n^(1) = E_n^(0) φ_n^(2) + E_n^(1) φ_n^(1) + E_n^(2) φ_n^(0)   (11)
Following Eq. (6) we expand φ_n^(2) as

φ_n^(2) = Σ_{k≠n} C_nk^(2) φ_k^(0)   (12)
We substitute this equation as well as Eq. (10) into Eq. (11), premultiply by φ_n^(0), and integrate to get (remembering that φ_n^(2) is orthogonal to φ_n^(0))
E_n^(2) = 〈φ_n^(0)|H′|φ_n^(1)〉   (13)
We can then substitute Eq. (10) for φ_n^(1) to get
E_n^(2) = Σ_{k≠n} 〈φ_n^(0)|H′|φ_k^(0)〉 〈φ_k^(0)|H′|φ_n^(0)〉 / (E_n^(0) − E_k^(0))
        = Σ_{k≠n} |〈φ_k^(0)|H′|φ_n^(0)〉|^2 / (E_n^(0) − E_k^(0))   (14)
Consider the lowest energy level (n = 1, say). Then E_n^(0) − E_k^(0) will always be a negative number. Since the matrix element in the numerator on the right-hand side of Eq. (14) is squared, and thus always positive (or zero), the contribution of each term in the summation will be negative. Thus we conclude that for the lowest energy level, the second-order contribution to the energy will always be negative.
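As a numerical check, here is a minimal Matlab sketch; the zeroth-order energies and the perturbation matrix below are assumed purely for illustration. It evaluates the sum of Eq. (14) for the lowest level and compares E^(0) + E^(2) (the first-order correction vanishes here because the chosen H′ has a zero diagonal) with the exact lowest eigenvalue of H_o + H′:

% Minimal sketch (assumed model): second-order correction of Eq. (14) for the
% lowest level of H = H_o + H', with H_o diagonal.
E0 = [0; 1; 2; 4];                          % assumed zeroth-order energies E_n^(0)
Hp = 0.1*[0   1   0.5 0.2;                  % assumed real, symmetric perturbation H'
          1   0   0.3 0.1;
          0.5 0.3 0   0.4;
          0.2 0.1 0.4 0  ];
n  = 1;                                     % lowest level
k  = setdiff(1:numel(E0), n);
E2 = sum(Hp(k,n).^2 ./ (E0(n) - E0(k)));    % Eq. (14); negative, as argued above
fprintf('E2 = %.6f   E0 + E2 = %.6f   exact = %.6f\n', ...
        E2, E0(n) + E2, min(eig(diag(E0) + Hp)));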
C. Linear variational method
Suppose you have two states |1〉 and |2〉, which we assume to be normalized. Let the matrix of the full Hamiltonian be

( H_11  H_12 )
( H_21  H_22 )
We shall designate this matrix H, which, in general, is Hermitian. For simplicity, we will assume here that the matrix is real, so that H_12 = H_21. The corresponding overlap matrix, S, is

( 1     S_12 )
( S_21  1    )
Now, define a linear combination of states |1〉 and |2〉,

|φ〉 = C_1 |1〉 + C_2 |2〉   (17)
The expectation value of the Hamiltonian is then
E_var = 〈φ|H|φ〉 / 〈φ|φ〉,   (18)

which can be written as

〈φ|H|φ〉 = E_var 〈φ|φ〉   (19)
Problem 1
Obtain an expression for the variational energy in terms of C_1, C_2, H_11, H_12, H_22, S_11, S_12, and S_22. Suppose we use a three-state expansion of the wave function
|φ〉 = C_1 |1〉 + C_2 |2〉 + C_3 |3〉   (20)
If we take the derivative of Eq. (19) with respect to the ith coefficient C_i we obtain

∂〈φ|H|φ〉/∂C_i = E_var ∂〈φ|φ〉/∂C_i + 〈φ|φ〉 ∂E_var/∂C_i   (21)
or, explicitly, after imposing the variational condition ∂E_var/∂C_i = 0,
2 C_i H_ii + Σ_{j≠i} C_j (H_ij + H_ji) = E_var [ 2 C_i S_ii + Σ_{j≠i} C_j (S_ij + S_ji) ]
In matrix notation, these are the homogeneous secular equations

[H − E_var S] C = 0,

where C is the column vector of expansion coefficients. When the overlap matrix is the unit matrix, the solutions are obtained by diagonalizing H,

C^T H C = ε,

where the superscript T denotes the matrix transpose and ε is the diagonal matrix of the variational energies (the eigenvalues).
The C matrix is orthogonal (or, if the elements are complex, unitary), so that
C C^T = C^T C = 1   (28)
where 1 is the unit matrix.
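As a quick numerical illustration of Eq. (28), the eigenvector matrix returned by Matlab's eig for a real symmetric matrix is orthogonal (the 2 × 2 matrix below is assumed purely for illustration):

% Minimal check (assumed symmetric 2x2 matrix): the eigenvector matrix C
% returned by eig satisfies C C^T = C^T C = 1, as in Eq. (28).
hmat = [1.0 0.3; 0.3 2.0];
[C, epsmat] = eig(hmat);
disp(C.' * C)        % unit matrix
disp(C * C.')        % unit matrix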
Problem 2
For a two-state problem with a unit overlap matrix, show that the diagonalizing transform can be written in terms of a single angle
(  cos ϑ   sin ϑ )
( −sin ϑ   cos ϑ )
Obtain the value of the angle ϑ in terms of the matrix elements of H. Hint: Use Matlab's symbolic capabilities to carry out the matrix multiplication C^T H C, namely
syms h11 h12 h22 cs sn
hmat = [h11 h12; h12 h22];
cmat = [cs sn; -sn cs];
res = cmat.' * hmat * cmat;
simplify(res)

Then determine the values of the two energies E_1 and E_2 in terms of the angle ϑ. To check your result, consider the matrix

H = ( 0.3    0.05 )
    ( 0.05  −0.1  )   (30)
The two values of the energy (called the eigenvalues) and the corresponding coefficient column vectors (called the eigenvectors) can be obtained from Matlab as follows:
ham_mat = [0.3 0.05; 0.05 -0.1];
[evec, eval] = eig(ham_mat)
Problem 3
Check that your answer to problem 2 gives the same eigenvalues and eigenvectors as the solution obtained using the Matlab eig command.
D. Orthogonalization
In general, when the overlap matrix is not diagonal, eigenvalues and eigenvectors can be obtained by solution of the generalized eigenvalue problem, invoked by the Matlab command eig(hmat,smat), which gives the eigenvalues, or [evec, eval] = eig(hmat,smat), which yields both the eigenvalues and the eigenvectors. Note that in this case (S not diagonal), the eigenvectors are normalized as follows:
C^T S C = 1   (31)
In other words, the Matlab command eig(hmat) solves the simultaneous homogeneous equations [H − ES]C = 0
under the assumption that the overlap matrix S is the unit matrix I. The Matlab command eig(hmat,smat) solves the same set of homogeneous equations but with a full (non-diagonal) overlap matrix given by smat.
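For example, the following minimal sketch (the overlap matrix is assumed purely for illustration; the Hamiltonian is that of Eq. (30)) solves the generalized problem and verifies the normalization of Eq. (31):

% Minimal sketch (assumed overlap matrix): generalized eigenvalue problem
% [H - E S] C = 0 and the normalization of Eq. (31).
hmat = [0.3 0.05; 0.05 -0.1];        % Hamiltonian matrix of Eq. (30)
smat = [1 0.2; 0.2 1];               % assumed non-diagonal overlap matrix
[evec, eval] = eig(hmat, smat);
disp(diag(eval).')                   % the two variational energies
disp(evec.' * smat * evec)           % unit matrix, as in Eq. (31)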
Rather than working with non-orthogonal expansion functions, we can construct an orthonormal set by Gram-Schmidt orthogonalization, which proceeds as follows: Suppose we have two functions φ_i with i = 1, 2. The functions are normalized but not orthogonal, in other words 〈φ_i|φ_i〉 = 1 and 〈φ_i|φ_j〉 = S_ij. Let us start with function φ_1 and take ψ_1 = φ_1. We then obtain a second function, orthogonal to the first, by subtracting from φ_2 its projection on ψ_1 and renormalizing,

ψ_2 = (φ_2 − S_12 φ_1) / (1 − S_12^2)^{1/2},

so that 〈ψ_1|ψ_2〉 = 0 and 〈ψ_2|ψ_2〉 = 1. In matrix notation the new functions are obtained from the original ones by an upper-triangular transformation matrix D.
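For the two-function case just described, a minimal Matlab sketch (an overlap S_12 = 0.2 is assumed for illustration) of the Gram-Schmidt construction, written in terms of the transformation matrix D, is:

% Minimal sketch (assumed S_12 = 0.2): two-function Gram-Schmidt
% orthogonalization expressed through the overlap matrix.
s12 = 0.2;
S = [1 s12; s12 1];
D = [1, -s12/sqrt(1 - s12^2);        % columns give psi_1, psi_2 in terms of phi_1, phi_2
     0,  1/sqrt(1 - s12^2)];
disp(D.' * S * D)                    % overlap of the new functions: the unit matrix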
Another way to construct an orthonormal set of expansion functions is to diagonalize the overlap matrix,

F^T S F = λ,

where λ is a diagonal matrix. The columns of the matrix F are the linear combinations of the expansion functions φ_i in which the overlap matrix is diagonal. These combinations are not yet normalized, since the diagonal elements of λ are not equal to 1. To impose normalization, we then divide the columns of F by the square root of the corresponding diagonal element of the λ matrix. In other words, we define a new matrix G with

G_ij = F_ij / λ_j^{1/2}
Then, in the basis defined by the columns of G, the overlap matrix, defined by G^T S G, is equal to the unit matrix, as in Eq. (31). Thus, when expanding in a non-orthogonal basis set, one has three alternatives:
(1). Determining the eigenvalues and eigenvectors directly using a generalized eigenvalue call (e.g. eig(hmat,smat) in Matlab).
(2). Using Gram-Schmidt orthogonalization to construct the transformation matrix D of Eqs. (32) or (33), then diagonalizing, by means of a standard eigenvalue call, the transformed Hamiltonian matrix H̃_2 = D^T H D. Designate by F_2 the matrix that diagonalizes H̃_2, so that
ε_2 = F_2^T H̃_2 F_2 = F_2^T D^T H D F_2   (34)
where ε_2 is the diagonal matrix of eigenvalues. Thus, the overall matrix of eigenvectors is

C_2 = D F_2   (35)
(3). Diagonalizing the overlap matrix, then renormalizing each column (the matrix G above), then diagonalizing, by means of a standard eigenvalue call, the transformed Hamiltonian matrix H̃_3 = G^T H G (see the Matlab sketch following Eq. (37)), namely

ε_3 = F_3^T H̃_3 F_3 = F_3^T G^T H G F_3   (36)
Thus, the overall matrix of eigenvectors is

C_3 = G F_3   (37)
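As an illustration of the orthogonalization step in the third alternative, the following minimal sketch (with an assumed 2 × 2 overlap matrix) diagonalizes S, renormalizes the columns to form G, and verifies that G^T S G is the unit matrix:

% Minimal sketch (assumed overlap matrix): construct G by diagonalizing S
% and renormalizing the columns, then check G^T S G = 1.
smat = [1 0.2; 0.2 1];
[F, lambda] = eig(smat);                  % F^T S F = lambda (diagonal)
G = F * diag(1 ./ sqrt(diag(lambda)));    % G_ij = F_ij / lambda_j^(1/2)
disp(G.' * smat * G)                      % unit matrix, as in Eq. (31)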
Problem 5
Suppose that you have a Hamiltonian matrix given by Eq. ( 30 ) and an overlap matrix given by
S =
Write a Matlab script that demonstrates that the three alternatives described immediately above result in the same energies.
E. MacDonald’s Theorem
In general, non-trivial solutions to Eq. (25) exist only for values of the energy for which the determinant of the matrix [H − E_var 1] vanishes. The determinant will, of course, exist for any arbitrary value of E_var. For simplicity, let's use the letter E to stand for E_var. The determinant, f(E), will be a polynomial in E of order N. In the case of a set of 3 basis functions, the vanishing of the corresponding 3 × 3 secular determinant can be written as (where we explicitly use the symmetry of the matrix of the Hamiltonian and assume real matrix elements)

| H_11 − E    H_12        H_13      |
| H_12        H_22 − E    H_23      |  = 0
| H_13        H_23        H_33 − E  |
With the rules for evaluating a 3 × 3 determinant, we can express this as
(H_11 − E)(H_22 − E)(H_33 − E) + 2 H_12 H_23 H_13 − H_13^2 (H_22 − E) + · · · = 0   (40)
This is a cubic equation, which we can represent schematically in Fig. ( 2 ). There will be, in general, three roots of the cubic – values of E for which f(E)=0.
Suppose we have already solved the N × N linear variational problem, obtaining energies E_i^(N), and we then add one more basis function to the set, which we may take to be orthogonal to the N variational solutions already found. To obtain the new energies, in the (N + 1) × (N + 1) basis, we need to find the roots of the secular determinant

| E_1^(N) − E    · · ·    0              h_1          |
|    ...          ...      ...            ...          |
| 0              · · ·    E_N^(N) − E    h_N          |  = 0   (43)
| h_1            · · ·    h_N            h_{N+1} − E  |

where h_i denotes the Hamiltonian matrix element coupling the new function to the ith variational solution.
Applying the rules for expansion of a determinant, you can show that Eq. ( 43 ) is equivalent to
(h_{N+1} − E) Π_{i=1}^{N} (E_i^(N) − E) − Σ_{i=1}^{N} h_i^2 Π_{j=1, j≠i}^{N} (E_j^(N) − E) = 0   (44)
Consider the simplest case (N = 2). We will assume that E_1^(2) is less than (lower than) E_2^(2). The N + 1 = 3 secular determinant is

| E_1^(2) − E    0              h_1      |
| 0              E_2^(2) − E    h_2      |
| h_1            h_2            h_3 − E  |

= (E_1^(2) − E)(E_2^(2) − E)(h_3 − E) − h_2^2 (E_1^(2) − E) − h_1^2 (E_2^(2) − E) = f(E)   (45)

Now, if E = E_1^(2), then f(E = E_1^(2)) = −h_1^2 (E_2^(2) − E_1^(2)) (all the other terms vanish). This has to be negative, since E_1^(2) ≤ E_2^(2). If, however, E = E_2^(2), then f(E = E_2^(2)) = −h_2^2 (E_1^(2) − E_2^(2)), which has to be positive (by the same reasoning). Thus, f(E) changes sign between E = E_1^(2) and E = E_2^(2), so that there will be one root between E_1^(2) and E_2^(2). Now, if E goes to negative infinity, then Eq. (45) shows that
lim_{E→−∞} f(E) = −E^3,   (46)
which is positive (E is large and negative). Thus, since f(E) is negative at E = E_1^(2), one more root will occur at an energy less than E_1^(2).
Problem 6
Show that f(E) also changes sign between E = E_2^(2) and E = +∞. Thus the two roots for N = 2 are interleaved between the three roots for N = 3, and so on as N increases, as shown schematically in Fig. 3. Consequently, we see that
FIG. 3. Illustration of the placement of the linear variational roots as N, the size of the basis set, increases.
the nth eigenvalue obtained from a linear variational treatment is an upper bound to the nth true energy. This is known as the Hylleraas-Undheim-MacDonald theorem, discovered independently by Hylleraas and Undheim [1] and MacDonald [2].
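The interleaving can be seen numerically with a minimal Matlab sketch; the 6 × 6 random symmetric matrix below stands in (purely for illustration) for the Hamiltonian matrix in an orthonormal basis, and its leading N × N blocks play the role of successively larger variational problems:

% Minimal sketch (assumed random symmetric model Hamiltonian): the roots for
% N basis functions interleave those for N + 1 basis functions.
rng(1);                                    % assumed seed, for reproducibility
A = randn(6);  H = (A + A.')/2;
for N = 2:4
    EN  = sort(eig(H(1:N,   1:N)));        % variational roots with N functions
    EN1 = sort(eig(H(1:N+1, 1:N+1)));      % roots with N + 1 functions
    fprintf('N = %d  :  %s\n', N,   sprintf(' %8.4f', EN));
    fprintf('N = %d  :  %s\n', N+1, sprintf(' %8.4f', EN1));
end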
F. DVR method for bound state energies
Many phenomena are interpreted by one-dimensional models. The Discrete Variable Representation (DVR) method is a straightforward, accurate way to determine the energies and wavefunctions of bound states for any arbitrary one-dimensional potential. Consider a one-dimensional Hamiltonian in Cartesian coordinates
H(x) = V(x) − (1/(2m)) d^2/dx^2   (47)
We will designate the true wavefunctions for this Hamiltonian as φi(x), where i denotes the cardinal number of the energy (i=1 is the lowest energy, i = 2 is the energy of the first excited state, etc). These wavefunctions are assumed to be orthonormal, so that
〈φ_i|φ_j〉 = δ_ij   (48)
Suppose we evaluate all integrals on a grid of n equally spaced points x_k, with spacing h, and let c_ki = φ_i(x_k) denote the value of the ith wavefunction at the kth grid point. Brackets such as the one in Eq. (48) can then be approximated by trapezoidal integration over the grid, and the second derivative in Eq. (47) can be approximated by the three-point finite-difference formula

d^2φ/dx^2 |_{x=x_k} ≅ [φ(x_{k+1}) − 2φ(x_k) + φ(x_{k−1})] / h^2.

With these approximations the bracket 〈φ_i|H|φ_j〉 becomes a sum over grid points, which can be written, formally, as a matrix equation
〈φ_i|H|φ_j〉 ≅ h c_i^T [V + T] c_j   (55)
where c_i is a column vector (c_i = [c_1i c_2i · · · ]^T), V is a diagonal matrix with elements V_kl = δ_kl V(x = x_k), and T is a tri-diagonal matrix with elements T_kk = 1/(m h^2) and T_{k,k±1} = −1/(2 m h^2). (Remember that h here is the spacing of the numerical integration grid, not Planck's constant.) The matrix of the Hamiltonian, with matrix elements 〈φ_i|H|φ_j〉, can then be written in matrix notation as
H = h C^T [V + T] C   (56)
where each column of the matrix C is given by c_i. But we know that 〈φ_i|H|φ_j〉 = δ_ij E_j. This is equivalent to saying, in matrix notation, H = E, where E is a diagonal matrix with elements E_i. Thus H = E = h C^T [V + T] C. Consequently, since h C^T [V + T] C is a diagonal matrix, and the matrices V and T are symmetric, the matrix C is none other than the matrix of eigenvectors that diagonalizes the matrix h [V + T]. The eigenvectors are proportional to the values of the true wavefunctions at the points x = x_k, and thus are the discrete variable representations (hence the name, DVR) of these wavefunctions. The eigenvalues are proportional to the true energies. We discuss this proportionality next. Most computer diagonalization routines give orthogonal eigenvectors, so that C^T C = 1, or, in terms of the individual eigenvectors
Σ_{k=1}^{n} c_ki c_ki = c_i^T c_i = 1   (57)
However, we want the wavefunctions to be normalized, so that 〈φi|φj〉 = δij. If we were to evaluate this overlap matrix element by a trapezoidal integration equivalent to Eq. ( 50 ), using the ci eigenvectors, we would obtain
〈φ_i|φ_j〉 ≅ Σ_{k=1}^{n} h c_kj c_ki = h c_j^T c_i = h δ_ij   (58)
which, for i = j, is equal to h, not unity. Consequently, we have to renormalize the eigenvector matrix C by dividing every element by h^{1/2}. Let us define these renormalized eigenvectors as d_i = h^{−1/2} c_i. In other words, the value of the normalized wavefunction of the ith state at x = x_k is d_ki = c_ki h^{−1/2}. With this renormalization, Eq. (58) becomes
〈φ_i|φ_i〉 ≅ Σ_{k=1}^{n} h d_ki d_ki = h c_i^T c_i / (h^{1/2})^2 = c_i^T c_i = 1   (59)
which is now correctly normalized. Since the value of the ith normalized eigenvector at x = x_k is d_ki, the energy of the ith state is given by [see Eq. (55)]
E_i = 〈φ_i|H|φ_i〉 = h d_i^T [V + T] d_i   (60)
which is also equal to

E_i = 〈φ_i|H|φ_i〉 = c_i^T [V + T] c_i   (61)
Thus, the simplest DVR approach is diagonalization of the matrix [V + T]. The eigenvalues are then equal to (no longer proportional to) the true energies of the system. The discrete approximation to the wavefunction is still given by d_ki = c_ki h^{−1/2}, because, regardless of whether we diagonalize [V + T] or h [V + T], any computer program will automatically give eigenvectors which satisfy 1 = C^T C. The DVR method is only as accurate as the underlying numerical integration. Increasing the number of points increases the size of the V and T matrices but (presumably) improves the accuracy. In actual practice, a slightly better approximation is obtained by a 5-point approximation [3] to the second derivative, namely
d^2 f/dx^2 |_{x=x_k} = [ −f_{k+2} + 16 f_{k+1} − 30 f_k + 16 f_{k−1} − f_{k−2} ] / (12 h^2)   (62)
which implies that the matrix T has five non-zero bands.
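Putting the pieces together, here is a minimal Matlab DVR sketch; the harmonic potential, mass, and grid below are assumed purely for illustration (three-point form of T, atomic units):

% Minimal DVR sketch (assumed V(x) = x^2/2, m = 1, atomic units, 3-point T).
m = 1;
x = linspace(-10, 10, 401).';                 % grid points x_k
h = x(2) - x(1);                              % grid spacing (not Planck's constant)
V = diag(0.5*x.^2);                           % V_kl = delta_kl V(x_k)
n = numel(x);
T = (1/(m*h^2))*eye(n) ...                    % T_kk = 1/(m h^2)
    - (1/(2*m*h^2))*(diag(ones(n-1,1), 1) + diag(ones(n-1,1), -1));   % T_{k,k+-1}
[C, E] = eig(V + T);                          % eigenvalues approximate the energies
E = diag(E);
disp(E(1:4).')                                % compare with exact 0.5, 1.5, 2.5, 3.5
d1 = C(:,1)/sqrt(h);                          % DVR ground-state wavefunction, d_k1 = c_k1 h^(-1/2)

Increasing the number of grid points, or switching T to the five-banded form implied by Eq. (62), improves the agreement with the exact energies.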
Problem 7
The three-parameter Morse potential is a good approximation to many potential curves for