Phy5646


Welcome to the Quantum Mechanics B PHY5646 Spring 2009

The Schrodinger equation is the most fundamental equation of quantum mechanics; it describes the rule according to which a state |\Psi\rangle evolves in time.

This is the second semester of a two-semester graduate level sequence, the first being PHY5645 Quantum A. Its goal is to explain the concepts and mathematical methods of Quantum Mechanics, and to prepare a student to solve quantum mechanics problems arising in different physical applications. The emphasis of the courses is equally on conceptual grasp of the subject as well as on problem solving. This sequence of courses builds the foundation for more advanced courses and graduate research in experimental or theoretical physics.

The key component of the course is the collaborative student contribution to the course Wiki-textbook. Each team of students (see Phy5646 wiki-groups) is responsible for BOTH writing the assigned chapter AND editing chapters of others.

This course's website can be found here.

Team assignments: Spring 2010 student teams



Stationary State Perturbation Theory in Quantum Mechanics

General Remarks on Perturbation Theory

Most quantum problems cannot be solved exactly with the present resources of mathematics, as they lead to equations whose solutions cannot be expressed in finite terms with the help of the ordinary functions of analysis. For such problems one can often use a perturbation method. This consists in splitting up the Hamiltonian into two parts, one of which must be simple and the other small. The first part may then be considered as the Hamiltonian of a simplified or unperturbed system, which can be dealt with exactly, and the addition of the second will then require small corrections, of the nature of a perturbation, in the solution for the unperturbed system. The requirement that the first part shall be simple requires in practice that it shall not involve the time explicitly. If the second part contains a small numerical factor, we can obtain the solution of our equations for the perturbed system in the form of a power series in that numerical factor, which, provided it converges, will give the answer to our problem with any desired accuracy. Even when the series does not converge, the first approximation obtained by means of it is usually fairly accurate.

There are two distinct methods in perturbation theory. In one of these methods, the perturbation is considered as causing a modification of the state of motion of the unperturbed system. In the other we do not consider any modification to be made in the states of the unperturbed system, but we suppose that the perturbed system, instead of remaining permanently in one of these states, is continually changing from one to another, or making transitions, under the influence of the perturbation. Which method is to be used in any particular case depends on the nature of the problem to be solved. The first method is useful usually only when the perturbing energy (the correction in the Hamiltonian for the undisturbed system) does not involve the time explicitly, and is then applied to the stationary states. It can be used for calculating things that do not refer to any definite time, such as the energy levels of the stationary states of the perturbed system, or, in the case of collision problems, the probability of scattering through a given angle. The second method must, on the other hand, be used for solving all problems involving a consideration of time, such as those about the transient phenomena that occur when the perturbation is suddenly applied, or more generally problems in which the perturbation varies with time in any way (i.e. in which the perturbing energy involves the time explicitly). Again, this second method must be used in collision problems, even though the perturbing energy does not here involve the time explicitly, if one wishes to calculate the absorption and emission probabilities, since these probabilities, unlike a scattering probability, can not be defined without reference to a state of affairs that varies with the time.

One can summarize the distinctive features of the two methods by saying that, with the first method, one compares the stationary states of the perturbed system with those of the unperturbed system; with the second method one takes a stationary state of the unperturbed system and sees how it varies with time under the influence of the perturbation.

Very often, quantum mechanical problems cannot be solved exactly. An approximate technique can be very useful since it gives us quantitative insight into a larger class of problems which do not admit exact solutions. One technique is the WKB approximation, which holds in the asymptotic limit  \hbar\rightarrow 0 .

Perturbation theory is another very useful approximate technique, which attempts to find corrections to exact solutions in powers of the terms in the Hamiltonian which render the problem insolvable. The basic idea of perturbation theory rests on continuity: one must be able to write the given Hamiltonian as a solvable part plus very small additional terms that represent the insolvable parts. In the case of non-degenerate perturbation theory the following assumption must hold: both the energy and the wavefunctions of the full Hamiltonian have analytic expansions in powers of the real parameter \lambda\! -- ensuring no jump discontinuities as \lambda\! goes to zero -- where the perturbing term is taken to be \lambda\mathcal{H}'. The quantity \lambda\!, which is taken to satisfy 0 < \lambda < 1 \!, has no physical significance, and is merely used as a way to keep track of order.

The Hamiltonian is taken to have the following structure:

\mathcal{H}=\mathcal{H}_0+\lambda\mathcal{H}'

where \mathcal{H}_0 is exactly solvable and \mathcal{H}' makes it insolvable by analytical methods. Therefore the eigenvalue problem becomes:

(\mathcal{H}_0+\lambda\mathcal{H}')|\psi_n\rangle = E_n|\psi_n\rangle

At the end of the calculation we set \lambda=1\!.

It is important to note that perturbation theory tends to yield fairly accurate energies, but usually yields very poor wavefunctions.

Rayleigh-Schrödinger Perturbation Theory

We begin with an unperturbed problem, whose solution is known exactly. That is, for the unperturbed Hamiltonian \mathcal{H}_0, we have eigenstates  |n\rangle , and eigenenergies  \epsilon_n \!, that are known solutions to the Schrodinger equation:

\mathcal{H}_0 |n\rangle = \epsilon_n |n\rangle \qquad (1.1.1)


To find the solution to the perturbed Hamiltonian \mathcal{H}, we first consider an auxiliary problem, parameterized by \lambda:

\mathcal{H} = \mathcal{H}_0 + \lambda \mathcal{H}' \qquad (1.1.2)

The only reason for doing this is that we can now, via the parameter \lambda, expand the solution in powers of the component \mathcal{H}' of the Hamiltonian, which is presumed to be relatively small in comparison with \mathcal{H}_0. In nature we do not know a priori that this will work, and choosing the correct perturbation for a particular problem will likely require some insight into the problem or a numerical solution.

We attempt to find eigenstates |N(\lambda)\rangle and eigenvalues E_n(\lambda)\! of the Hermitian operator \mathcal{H}, and assume that they can be expanded in a power series in \lambda:

E_n(\lambda) = E_n^{(0)} + \lambda E_n^{(1)} + \dots + \lambda^{j}E_n^{(j)} + \dots

|N(\lambda)\rangle = |\Psi_n^{(0)}\rangle + \lambda|\Psi_n^{(1)}\rangle + \lambda^2 |\Psi_n^{(2)}\rangle + \dots + \lambda^j |\Psi_n^{(j)}\rangle + \dots \qquad (1.1.3)

where |\Psi_n^{(j)}\rangle is the j-th order correction to the unperturbed eigenstate |n\rangle, upon perturbation. Then we must have,

\mathcal{H} |N(\lambda)\rangle = E_n(\lambda) |N(\lambda)\rangle \qquad (1.1.4)

which upon expansion, becomes:

(\mathcal{H}_0 + \lambda \mathcal{H}')\left(\sum_{j=0}^{\infty}\lambda^{j} |\Psi_n^{(j)}\rangle \right) = \left(\sum_{l=0}^{\infty} \lambda^l E_n^{(l)}\right)\left(\sum_{j=0}^{\infty}\lambda^j |\Psi_n^{(j)}\rangle \right) \qquad (1.1.5)

In order for this method to be useful, the perturbed energies must vary continuously with \lambda. Knowing this, we can see several things about our as yet undetermined perturbed energies and eigenstates. For one, as \lambda \rightarrow 0, |N(\lambda)\rangle \rightarrow |\Psi_n^{(0)}\rangle = |n\rangle and E_n(\lambda) \rightarrow E_n^{(0)} = \epsilon_n for some unperturbed state |n\rangle.

For convenience, assume that the unperturbed states are already normalized:  \langle n | n \rangle = 1, and choose normalization such that the exact states satisfy \langle n|N(\lambda)\rangle=1. Then in general |N\rangle will not be normalized, and we must normalize it after we have found the states (see Phy5646#Renormalization).

Thus, we have:

\langle n|N(\lambda)\rangle = 1 = \langle n |\Psi_n^{(0)}\rangle + \lambda \langle n |\Psi_n^{(1)}\rangle + \lambda^2 \langle n |\Psi_n^{(2)}\rangle + \dots \qquad (1.1.6)

Coefficients of the powers of \lambda\! must match, so,

\langle n | \Psi_n^{(i)} \rangle = 0, \quad i = 1, 2, 3, \dots \qquad (1.1.7)


This shows that if we start with the unperturbed state |n\rangle (=|\Psi_n^{(0)}\rangle), then upon perturbation we add to this initial state a set of perturbation states, |\Psi_n^{(1)}\rangle, |\Psi_n^{(2)}\rangle, \dots which are all orthogonal to the original state -- so the unperturbed states become mixed together.


Equating coefficients in the expanded form of the perturbed eigenvalue problem (eq. #1.1.5) provides the corrected eigenvalues for whichever order of λ we want. The first few are as follows.


0th Order Energy

Considering the \lambda^0\! term in eq #1.1.5, we get

(\mathcal{H}_0 - E_n^{(0)})|\Psi_n^{(0)}\rangle = 0 \qquad (1.1.8)

so we obtain

E_n^{(0)} = \epsilon_n \qquad (1.1.9)

which we already had before.


1st Order Energy Corrections

Consider terms in eq #1.1.5 that are first order in \lambda\!:

(\mathcal{H}_0 - E_n^{(0)})|\Psi_n^{(1)}\rangle = (E_n^{(1)} - \mathcal{H}')|\Psi_n^{(0)}\rangle \qquad (1.1.10)

Taking the scalar product of this with \langle n|, we get:

E_n^{(1)} = \langle n|\mathcal{H}'|n\rangle \qquad (1.1.11)

1st order Eigenkets

Instead, if we take the scalar product of eq #1.1.10 with \langle m|, where  m \not= n , we obtain

\langle m|\Psi_n^{(1)}\rangle = \frac{\langle m|\mathcal{H}'|n\rangle}{\epsilon_n - \epsilon_m} \qquad (1.1.12)


The first order contribution is then the sum of this equation over all m \neq n, and adding the zeroth order piece we get the eigenstates of the perturbed Hamiltonian to 1st order in \lambda\!:

|N\rangle = |n\rangle + \lambda\sum_{m \neq n} |m\rangle \frac{\langle m |\mathcal{H}'| n\rangle}{\epsilon_n - \epsilon_m} + O(\lambda^2) \qquad (1.1.13)



2nd Order Energy Corrections

Taking the terms in eq #1.1.5 that are second order in \lambda\!:

(\mathcal{H}_0 - E_n^{(0)})|\Psi_n^{(2)}\rangle = (E_n^{(1)} - \mathcal{H}')|\Psi_n^{(1)}\rangle + E_n^{(2)}|\Psi_n^{(0)}\rangle \qquad (1.1.14)

and operating on them with \langle n|, we get:

E_n^{(2)} = \sum_{m \neq n} \frac{|\langle n|\mathcal{H}'|m\rangle|^2}{\epsilon_n - \epsilon_m} \qquad (1.1.15)

so E_n\! up to the second order is:

E_n = \epsilon_n + \lambda\langle n|\mathcal{H}'|n\rangle + \lambda^2 \sum_{m \neq n} \frac{|\langle n|\mathcal{H}'|m\rangle|^2}{\epsilon_n - \epsilon_m} + O(\lambda^3) \qquad (1.1.16)

There are a few interesting things to note from this equation. First, |V_{mn}|^2 = |\langle m |V| n\rangle|^2 is positive definite. Therefore, since \epsilon_0 - \epsilon_m < 0\!, the second order energy correction always lowers the ground state energy. Second, note that any two energy levels, say the ith and the jth, connected by the perturbation matrix element \langle i|\mathcal{H}'|j\rangle tend to be pushed further apart. The lower one, say i, is depressed by \frac{|\langle i|\mathcal{H}'|j\rangle|^2}{\epsilon_j - \epsilon_i}, while the higher state j goes up by the same amount. This is a special case of the no-level-crossing theorem, which states that a pair of energy levels connected by a perturbation do not cross as the strength of the perturbation is increased.
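These first and second order formulas, eqs. #1.1.11 and #1.1.15, can be checked against exact diagonalization of a small matrix Hamiltonian. The sketch below does this with NumPy; the 3x3 matrices are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative check of eqs. (1.1.11) and (1.1.15): H0 is diagonal
# (its eigenbasis is known), Hp is a small Hermitian perturbation H'.
eps = np.array([0.0, 1.0, 2.5])            # unperturbed energies epsilon_n
H0 = np.diag(eps)
Hp = 0.05 * np.array([[0.0, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.0]])    # H', chosen with zero diagonal

lam = 1.0
exact = np.linalg.eigvalsh(H0 + lam * Hp)  # exact eigenvalues, ascending

def E_second_order(n):
    """eps_n + lam*<n|H'|n> + lam^2 * sum_{m != n} |<m|H'|n>|^2/(eps_n - eps_m)."""
    E1 = Hp[n, n]
    E2 = sum(abs(Hp[m, n])**2 / (eps[n] - eps[m])
             for m in range(len(eps)) if m != n)
    return eps[n] + lam * E1 + lam**2 * E2

approx = np.array([E_second_order(n) for n in range(len(eps))])
print(exact)
print(approx)
```

The two printed arrays agree up to O(\lambda^3), and the second order term indeed lowers the ground state energy, as argued above.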

2nd order Eigenkets

Instead, if we take the scalar product of eq #1.1.14 with \langle k|, where  k \not= n , we obtain

\langle k|\Psi_n^{(2)}\rangle = \sum_{m \neq n} \frac{\langle k|\mathcal{H}'|m\rangle \langle m|\mathcal{H}'|n\rangle}{(\epsilon_n - \epsilon_k)(\epsilon_n - \epsilon_m)} - \frac{\langle k|\mathcal{H}'|n\rangle \langle n|\mathcal{H}'|n\rangle}{(\epsilon_n - \epsilon_k)^2} \qquad (1.1.17)


So the eigenstates of the perturbed Hamiltonian up to 2nd order in \lambda\! are:

|N\rangle = |n\rangle + \lambda\sum_{m \neq n} |m\rangle \frac{\langle m |\mathcal{H}'| n\rangle}{\epsilon_n - \epsilon_m} + \lambda^2 \left(\sum_{k \neq n}\sum_{m \neq n} |k\rangle \frac{\langle k|\mathcal{H}'|m\rangle \langle m|\mathcal{H}'|n\rangle}{(\epsilon_n - \epsilon_k)(\epsilon_n - \epsilon_m)} - \sum_{k \neq n} |k\rangle \frac{\langle k|\mathcal{H}'|n\rangle \langle n|\mathcal{H}'|n\rangle}{(\epsilon_n - \epsilon_k)^2}\right) + O(\lambda^3) \qquad (1.1.18)




kth order Energy Corrections and kth order Eigenkets

In general, E_n^{(k)} = \langle n | \mathcal{H}' | \Psi_n^{(k-1)} \rangle \qquad (1.1.19)

This result provides us with a recursive relation for the eigenenergies of the perturbed state, so that we have access to the eigenenergies for a state of arbitrary order in \lambda\!.

What about the eigenstates? The perturbed states can be expressed in terms of the unperturbed states:

|\Psi_n^{(k)}\rangle = \sum_{m \neq n}|m\rangle\langle m|\Psi_n^{(k)}\rangle \qquad (1.1.20)

Going beyond 2nd order in λ gets increasingly messy, but can be done by the same procedure as above.

Renormalization

Earlier we assumed that \langle n|N(\lambda)\rangle=1, which means that the states |N(\lambda)\rangle are not themselves normalized. To reconcile this we introduce the normalized perturbed eigenstates, denoted |\bar{N}\rangle. These are related to the |N(\lambda)\rangle by:

|\bar{N}\rangle = \frac{|N\rangle}{\sqrt{\langle N|N\rangle}} = z^{1/2}|N\rangle \qquad (1.1.21)

Thus z gives us a measure of how close the perturbed state is to the original state.

z(\lambda) = \frac{1}{\langle N(\lambda)|N(\lambda)\rangle} \qquad (1.1.22)

To second order in λ

\frac{1}{z(\lambda)} = \langle N(\lambda)|N(\lambda)\rangle = (\langle n| + \lambda \langle\Psi_n^{(1)}| + \dots)(|n\rangle + \lambda|\Psi_n^{(1)}\rangle + \dots)

z(\lambda) = \frac{1}{1 + \lambda^2\sum_{m \neq n}\frac{|\langle m|V|n\rangle|^2}{(\epsilon_n - \epsilon_m)^2} + \dots} = 1 - \lambda^2\sum_{m \neq n}\frac{|\langle m|V|n\rangle|^2}{(\epsilon_n - \epsilon_m)^2} + \dots \qquad (1.1.23)


where we use a Taylor expansion to arrive at the final result (valid when the second-order sum is small compared to 1).
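This expansion can also be checked numerically. Since \langle n|N\rangle = 1 implies \langle n|\bar{N}\rangle = z^{1/2}, z is just the squared overlap of the exact normalized eigenvector with the unperturbed state. A minimal sketch, with illustrative 3x3 matrices:

```python
import numpy as np

# Illustrative check of eq. (1.1.23): z equals |<n|N_bar>|^2, the squared
# overlap of the exact normalized eigenvector with the unperturbed state.
eps = np.array([0.0, 1.0, 2.5])
H0 = np.diag(eps)
Hp = 0.05 * np.array([[0.0, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.0]])   # small Hermitian perturbation V
lam, n = 1.0, 0                           # look at the ground state

vals, vecs = np.linalg.eigh(H0 + lam * Hp)
z_exact = abs(vecs[n, n])**2              # |<n|N_bar>|^2 from the exact eigenvector

z_pt = 1.0 - lam**2 * sum(abs(Hp[m, n])**2 / (eps[n] - eps[m])**2
                          for m in range(len(eps)) if m != n)
print(z_exact, z_pt)
```

The perturbative z agrees with the exact overlap to the order kept, and both are slightly below 1, as expected for a weakly mixed state.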

Then, interestingly, we can show that z(\lambda)\! is related to the energies by employing equation #1.1.10:

z(\lambda) = \frac{\partial E_n}{\partial \epsilon_n}\Big|_{\langle m|V|n\rangle} \qquad (1.1.24)

where the derivative is taken with respect to \epsilon_n\!, while holding \langle m|V|n\rangle constant. Using Brillouin-Wigner perturbation theory (see next section) it can be shown that this relation holds exactly, without approximation.

Problem examples of non-degenerate perturbation theory :

-Problem 1: demonstrates how linear algebra can be used to solve for the exact eigenstates, exact eigenvalues, first- and second-order corrections to the eigenvalues, and first-order corrections to the eigenstates of a given Hamiltonian

-Problem 2

-Problem 3

-Problem 4

Brillouin-Wigner Perturbation Theory

Brillouin-Wigner perturbation theory is an alternative perturbation method based on treating the right hand side of

 (E_n - H_0)|N\rangle = H'|N\rangle

as a known quantity. This method is not strictly an expansion in \lambda\!.

Using a basic formula derived from the Schrodinger equation, one can find an approximation to any desired power of \lambda \! using an iterative process. This theory is less widely used than the Rayleigh-Schrodinger theory. At first order the two theories are equivalent. However, the BW theory extends more easily to higher order and avoids the need for separate treatment of non-degenerate and degenerate levels. In addition, if we have a good approximation for the value of E_n \!, the BW series should converge more rapidly than the RS series.

Starting with the Schrodinger equation:


\begin{align}
({\mathcal H}_o+\lambda {\mathcal H}')|N\rangle &= E_n|N\rangle \\
\lambda {\mathcal H}'|N\rangle &= (E_n-{\mathcal H}_o)|N\rangle \\
\langle n|(\lambda {\mathcal H}'|N\rangle) &= \langle n|(E_n-{\mathcal H}_o)|N\rangle \\
\lambda \langle n|{\mathcal H}'|N\rangle &= (E_n-\epsilon_n)\langle n|N\rangle \\
\end{align}

If we choose to normalize \langle n|N \rangle = 1, then so far we have:

(E_n-\epsilon_n) = \lambda\langle n|{\mathcal H}'|N\rangle \qquad (1.2.1)

which is still an exact expression (no approximations have been made yet). The wavefunction we are interested in, |N\rangle, can be rewritten as a sum over the eigenstates of the unperturbed Hamiltonian {\mathcal H}_o: 
\begin{align}
|N\rangle &= \sum_m|m\rangle\langle m|N\rangle\\
&= |n\rangle\langle n|N\rangle + \sum_{m\neq n}|m\rangle\langle m|N\rangle\\
&= |n\rangle + \sum_{m\neq n}|m\rangle\frac{\lambda\langle m|{\mathcal H}'|N\rangle}{(E_n-\epsilon_m)}\\
\end{align} \qquad (1.2.2)

The last step has been obtained by using eq #1.2.1. So now we have a recursive relationship for both  E_n \! and  |N\rangle

 E_n = \epsilon_n+\lambda\langle n|{\mathcal H}'|N\rangle where  |N\rangle can be written recursively to any order of  \lambda \! desired

 |N\rangle = |n\rangle+\sum_{m\neq n}|m\rangle\frac{\lambda\langle m|{\mathcal H}'|N\rangle}{(E_n-\epsilon_m)} where  E_n \! can be written recursively to any order of  \lambda \! desired

For example, the expression for  |N\rangle to a third order in  \lambda \! would be:


\begin{align}
|N\rangle &= |n\rangle + \lambda\sum_{m\neq n}|m\rangle\frac{\langle m|{\mathcal H}'}{(E_n-\epsilon_m)}\left(|n\rangle + \lambda\sum_{j\neq n}|j\rangle\frac{\langle j|{\mathcal H}'}{(E_n-\epsilon_j)}\left(|n\rangle + \lambda\sum_{k\neq n}|k\rangle\frac{\langle k|{\mathcal H}'|n\rangle}{(E_n-\epsilon_k)}\right)\right)\\
&= |n\rangle + \lambda\sum_{m\neq n}|m\rangle\frac{\langle m|{\mathcal H}'|n\rangle}{(E_n-\epsilon_m)} + \lambda^2\sum_{m,j\neq n}|m\rangle\frac{\langle m|{\mathcal H}'|j\rangle\langle j|{\mathcal H}'|n\rangle}{(E_n-\epsilon_m)(E_n-\epsilon_j)} + \lambda^3\sum_{m,j,k\neq n}|m\rangle\frac{\langle m|{\mathcal H}'|j\rangle\langle j|{\mathcal H}'|k\rangle\langle k|{\mathcal H}'|n\rangle}{(E_n-\epsilon_m)(E_n-\epsilon_j)(E_n-\epsilon_k)}\\
\end{align}

where \sum_{j} |j \rangle \langle j | is the identity operator.

Note that we have chosen \langle n|N \rangle = 1, i.e. the correction is perpendicular to the unperturbed state. That is why at this point |N \rangle is not normalized. The normalized wave function can be written as

 |\bar{N}(\lambda)\rangle = \frac{|N(\lambda)\rangle}{\sqrt{\langle N(\lambda)|N(\lambda)\rangle}} \equiv \sqrt{Z(\lambda)}|N(\lambda)\rangle

Interestingly, the normalization constant Z turns out to be exactly equal to the derivative of the exact energy with respect to the unperturbed energy, i.e.

 \frac{\partial E_{n}(\lambda)}{\partial \epsilon_{n}}  = Z


Expanding #1.2.1 with the help of #1.2.2 gives:

E_n = \epsilon_n + \lambda\langle n|H'|n\rangle + \lambda^2\sum_{m\neq n} \frac{|\langle m|H'|n\rangle|^2}{E_n - \epsilon_m} \qquad (1.2.3)

Notice that if we replaced E_n\! with \epsilon_n\! we would recover the Rayleigh-Schrodinger perturbation theory. By itself #1.2.3 provides a transcendental equation for E_n\!, since E_n\! appears in the denominator of the right hand side. If we have some idea of the value of a particular E_n\!, then we can use this as a numerical method to iteratively obtain better and better values for E_n\!.
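A minimal numerical sketch of that iteration, for an illustrative 2x2 Hamiltonian (with a purely off-diagonal H', the second-order Brillouin-Wigner equation happens to be exact for a 2x2 problem, so the iteration converges to the exact eigenvalue):

```python
import numpy as np

# Solve the Brillouin-Wigner equation (1.2.3) for the ground state by
# fixed-point iteration: E_n appears on both sides, so start from the
# unperturbed energy and iterate. Numbers are illustrative.
eps = np.array([0.0, 1.0])
H0 = np.diag(eps)
Hp = np.array([[0.0, 0.3],
               [0.3, 0.0]])               # purely off-diagonal H'

E = eps[0]                                # initial guess: unperturbed energy
for _ in range(50):
    E = eps[0] + Hp[0, 0] + abs(Hp[1, 0])**2 / (E - eps[1])

exact = np.linalg.eigvalsh(H0 + Hp)[0]
print(E, exact)
```

Each pass replaces E_n\! in the denominator by the previous estimate; for this example the iteration converges to the exact ground state energy in a handful of steps.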

Degenerate Perturbation Theory

Degenerate perturbation theory is an extension of standard perturbation theory which allows us to handle systems where one or more states of the system have the same energy. Normal perturbation theory fails in these cases because the denominators in the expressions for the first-order corrected wave function and for the second-order corrected energy become zero, producing unphysical divergences. If more than one eigenstate of the Hamiltonian {\mathcal H}_o has the same energy value, the problem is said to be degenerate, and Rayleigh-Schrodinger PT, which includes terms like \frac{1}{\epsilon_n-\epsilon_m}\!, breaks down.

Instead of trying to use these degenerate eigenstates with perturbation theory, we start with the non-degenerate linear combinations of the original eigenstates so that regular perturbation theory may be applied. In other words, the first, and only, extra step of degenerate perturbation theory is to find linear combinations by diagonalizing the perturbation within the set of degenerate states and then proceeding as usual in non-degenerate perturbation.

 \{|n_a\rangle,|n_b\rangle,|n_c\rangle,\dots\}  \longrightarrow  \{|n_{\alpha}\rangle,|n_{\beta}\rangle,|n_{\gamma}\rangle,\dots\}  where  |n_{\alpha}\rangle = \sum_iC_{\alpha,i}|n_i\rangle etc

The general procedure for doing this type of problem is to create the matrix with elements  \langle n_a|{\mathcal H}'|n_b\rangle formed from the degenerate eigenstates of  {\mathcal H}_o . This matrix can then be diagonalized, and the eigenstates of this matrix are the correct linear combinations to be used in non-degenerate perturbation theory. In other words, we choose to manipulate the expression for the Hamiltonian so that \langle n_\alpha|H'|n_\beta\rangle goes to zero for all cases \alpha \ne \beta. One can then apply the standard equation for the first-order energy correction, noting that the change in energy will apply to the energy states described by the new basis set. (In general, the new basis will consist of some linear superposition of the existing state vectors of the original system.)

One of the well-known examples of an application of degenerate perturbation theory is the Stark effect. Consider a hydrogen atom with  n=2 \! in the presence of an external electric field  \vec{\mathcal E}={\mathcal E}\hat{z} . The Hamiltonian for this system is  {\mathcal H}={\mathcal H}_o-e{\mathcal E}z . The degenerate eigenstates of the system are  \{|2S\rangle,|2P_{-1}\rangle,|2P_0\rangle,|2P_{+1}\rangle\} . The matrix of the perturbation in this degenerate subspace is:


\begin{align}
\langle n_i|{\mathcal H}'|n_j\rangle &\longrightarrow \left(\begin{array}{cccc}\langle2S|-e{\mathcal E}z|2S\rangle&\langle2S|-e{\mathcal E}z|2P_{-1}\rangle&\langle2S|-e{\mathcal E}z|2P_0\rangle&\langle2S|-e{\mathcal E}z|2P_{+1}\rangle\\\langle2P_{-1}|-e{\mathcal E}z|2S\rangle&\langle2P_{-1}|-e{\mathcal E}z|2P_{-1}\rangle&\langle2P_{-1}|-e{\mathcal E}z|2P_0\rangle&\langle2P_{-1}|-e{\mathcal E}z|2P_{+1}\rangle\\\langle2P_0|-e{\mathcal E}z|2S\rangle&\langle2P_0|-e{\mathcal E}z|2P_{-1}\rangle&\langle2P_0|-e{\mathcal E}z|2P_0\rangle&\langle2P_0|-e{\mathcal E}z|2P_{+1}\rangle\\\langle2P_{+1}|-e{\mathcal E}z|2S\rangle&\langle2P_{+1}|-e{\mathcal E}z|2P_{-1}\rangle&\langle2P_{+1}|-e{\mathcal E}z|2P_0\rangle&\langle2P_{+1}|-e{\mathcal E}z|2P_{+1}\rangle\\\end{array}\right)\\
&\longrightarrow \left(\begin{array}{cccc}0&0&\langle2S|-e{\mathcal E}z|2P_0\rangle&0\\0&0&0&0\\\langle2P_0|-e{\mathcal E}z|2S\rangle&0&0&0\\0&0&0&0\\\end{array}\right)\\
&\longrightarrow \left(\begin{array}{cccc}0&0&-3e{\mathcal E}a_B&0\\0&0&0&0\\-3e{\mathcal E}a_B&0&0&0\\0&0&0&0\\\end{array}\right)\\
\end{align}


To briefly summarize why most of the terms in this matrix vanish (the full argument is worked out in G. Baym's "Lectures on Quantum Mechanics," in the section on Degenerate Perturbation Theory): first note that the hydrogen eigenstates have definite parity while  z \! is odd under parity, so all the elements on the diagonal vanish. The other elements vanish because of angular momentum: matrix elements of the perturbation between states with different eigenvalues of  L_{z}\! vanish, since  e{\mathcal E}z \! commutes with  L_{z} \!. For example,

 0 = \langle 2P_{-1}| [e{\mathcal E}z, L_{z}] | 2P_{+1} \rangle = \langle 2P_{-1}| e{\mathcal E}z\, L_{z} |2P_{+1}\rangle - \langle 2P_{-1}| L_{z}\, e{\mathcal E}z |2P_{+1}\rangle = 2\hbar \langle 2P_{-1}| e{\mathcal E}z |2P_{+1}\rangle, which means that  \langle 2P_{-1}|e {\mathcal E}z| 2P_{+1} \rangle = 0

The correct linear combination of the degenerate eigenstates ends up being

 \{|2P_{-1}\rangle,|2P_{+1}\rangle,\frac{1}{\sqrt{2}}\left(|2S\rangle+|2P_0\rangle\right),\frac{1}{\sqrt{2}}\left(|2S\rangle-|2P_0\rangle\right)\}

Because of the perturbation due to the electric field, the  |2P_{-1}\rangle and  |2P_{+1}\rangle states will be unaffected. However, the energy of the  |2S\rangle and  |2P_0\rangle states will have a shift due to the electric field.
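The diagonalization step above can be sketched numerically. Writing d = 3e{\mathcal E}a_B and setting d = 1 in arbitrary units (an illustrative choice), the 4x4 matrix reduces to:

```python
import numpy as np

# Degenerate n=2 Stark block in the basis |2S>, |2P_-1>, |2P_0>, |2P_+1>.
# The only nonzero element is <2S|-eEz|2P_0> = -3 e E a_B = -d, with d = 1 here.
d = 1.0
W = np.zeros((4, 4))
W[0, 2] = W[2, 0] = -d

vals, vecs = np.linalg.eigh(W)
print(np.sort(vals))                      # two shifted levels and two unshifted levels
```

The eigenvectors with eigenvalues \mp d are (|2S\rangle \pm |2P_0\rangle)/\sqrt{2}, while |2P_{-1}\rangle and |2P_{+1}\rangle remain at zero shift, matching the linear combinations quoted above.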

Example: 1D harmonic oscillator

Consider a 1D harmonic oscillator perturbed by a constant force:

V = -F\mathbf{x}

The energy up to second order is given by

E_{n}=\epsilon_{n}+\langle n|V|n\rangle +\sum_{m\neq n} \frac{|\langle m|V|n\rangle|^{2}}{\epsilon_{n}-\epsilon_{m}} \qquad (1.3.1)

Let us evaluate the matrix elements:

\begin{align}
\langle m|V|n\rangle &=-F\langle m|\mathbf{x}|n\rangle\\

&=-F\langle m|\sqrt{\frac{\hbar}{2m\omega}}\left( \mathbf{a}+\mathbf{a}^{\dagger}\right)|n\rangle\\

&=-F\sqrt{\frac{\hbar}{2m\omega}}\left( \langle m|\mathbf{a}|n\rangle+\langle m|\mathbf{a}^{\dagger}|n\rangle\right)\\

&=-F\sqrt{\frac{\hbar}{2m\omega}}\left( \sqrt{n}\langle m|n-1\rangle+\sqrt{n+1}\langle m|n+1\rangle\right)\\

&=-F\sqrt{\frac{\hbar}{2m\omega}}\left( \sqrt{n}\delta_{m,n-1}+\sqrt{n+1}\delta_{m,n+1}\right)\\

\end{align}

We see that:

  • The first order term in eq. #1.3.1 is:
\langle n|V|n\rangle=0
  • The Second order term is:
\begin{align}
\sum_{m\neq n}  \frac{|\langle m|V|n\rangle |^{2}}{\epsilon_{n}-\epsilon_{m}}&=

\frac{|\langle n-1|V|n\rangle |^{2}}{\epsilon_{n}-\epsilon_{n-1}}+\frac{|\langle n+1|V|n\rangle |^{2}}{\epsilon_{n}-\epsilon_{n+1}}\\

&=\frac{|\langle n-1|V|n\rangle |^{2}}{\hbar \omega (n+\frac{1}{2})-\hbar \omega (n-1+\frac{1}{2})}+\frac{|\langle n+1|V|n\rangle |^{2}}{\hbar \omega (n+\frac{1}{2})-\hbar \omega (n+1+\frac{1}{2})}\\

&=\frac{|\langle n-1|V|n\rangle |^{2}}{\hbar \omega}+\frac{|\langle n+1|V|n\rangle |^{2}}{-\hbar \omega}\\

&=\frac{1}{\hbar \omega}\left(\frac{\hbar}{2m\omega}F^{2}n-\frac{\hbar}{2m\omega}F^{2}(n+1)\right)\\ 

&=\frac{-F^{2}}{2m\omega^{2}}

\end{align}


Finally the energy is given by

E_{n}=\epsilon_{n}-\frac{F^{2}}{2m\omega^{2}}

This result is exactly the same as that obtained by solving the problem without perturbation theory.
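This exact result can be confirmed by diagonalizing a truncated oscillator Hamiltonian numerically. A sketch in units \hbar = m = \omega = 1 (the basis size N and force F are illustrative choices):

```python
import numpy as np

# Truncated 1D harmonic oscillator with the perturbation V = -F x.
# In the number basis, x = (a + a^dagger)/sqrt(2) for hbar = m = omega = 1.
N, F = 60, 0.1
n = np.arange(N)
H0 = np.diag(n + 0.5)                     # epsilon_n = n + 1/2
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator: <n-1|a|n> = sqrt(n)
x = (a + a.T) / np.sqrt(2.0)
H = H0 - F * x

vals = np.linalg.eigvalsh(H)
predicted = n[:5] + 0.5 - F**2 / 2.0      # epsilon_n - F^2/(2 m omega^2)
print(vals[:5])
print(predicted)
```

Apart from truncation error in the highest states, the low-lying eigenvalues reproduce the uniform shift -F^2/2m\omega^2.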

Time Dependent Perturbation Theory in Quantum Mechanics

Formalism

Previously, we learned time-independent perturbation theory, in which a small change in the Hamiltonian generates corrections in the form of series expansions for the energies and wave functions. The problem for a time-independent \mathcal{H} can be solved by finding a solution to the equation \mathcal{H}|n\rangle = E_n|n\rangle. Changes in time can then be modeled by constructing the states  |\psi(t)\rangle = \sum_nc_n(t)|n\rangle where c_n(t) = e^{-\frac{i}{\hbar}E_n t}c_n(0) . In principle this describes any closed system, and there would never be a reason for time-dependent problems if it were practical to consider all systems as closed. However, there are many examples in nature of systems that are more easily described as not being closed. For example, while the stationary approach can be used to describe the interaction of electromagnetic fields with atoms (i.e. a photon with a hydrogen atom), it is more practical to describe it as an open system with an explicitly time-dependent term (due to the EM radiation). Therefore we explore time-dependent perturbation theory.


One of the main tasks of this theory is the calculation of transition probabilities from one state |\psi_n \rangle to another state |\psi_m \rangle that occur under the influence of a time-dependent potential. Generally, the transition of a system from one state to another only makes sense if the potential acts within a finite time period from \!t = 0 to \!t = T. Outside this time period, the total energy is a constant of motion which can be measured. We start with the time-dependent Schrodinger equation,

i\hbar\frac{\partial}{\partial t}|\psi_t^0 \rangle = H_0 |\psi_t^0\rangle, \qquad t<t_0 \qquad (2.1.1)

Then, to answer any question about the behavior of the system at a later time, we must find its state  | \psi_{t} \rangle . Assuming that the perturbation acts after time \!t_0, we get

i\hbar\frac{\partial}{\partial t}|\psi_t \rangle = (H_0 + V_t)|\psi_t\rangle,  \qquad t>t_0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\; (2.1.2)

The problem therefore consists of finding the solution |\psi_t\rangle with boundary condition |\psi_t\rangle = |\psi_t^0\rangle for t \leq t_0. However, such a problem is usually impossible to solve completely in closed form.
Therefore, we limit ourselves to problems in which \!V_t is small. In that case we can treat \!V_t as a perturbation and seek its effect on the wavefunction in powers of \!V_t.

Since \!V_t is small, the time dependence of the solution will largely come from \!H_0. So we use

|\psi_t\rangle = e^{-i H_0 t/\hbar}|\psi(t)\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\; (2.1.3),

which we substitute into the Schrodinger Equation to get

i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle=V(t)|\psi(t)\rangle \quad \text{where}\quad V(t) = e^{i H_0 t/\hbar}V_te^{-i H_0 t/\hbar}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.1.4).

In this equation \psi(t)\! and the operator V(t)\! are in the interaction representation. Now, we integrate equation #(2.1.4) to get

\int_{t_0}^{t}dt' \frac{\partial}{\partial t'}|\psi(t')\rangle = |\psi(t)\rangle - |\psi(t_0)\rangle = \frac{1}{i\hbar}\int_{t_0}^{t}dt' V(t')|\psi(t')\rangle

or

|\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt' V(t')|\psi(t')\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\;\ (2.1.5)

Equation #(2.1.5) can be iterated by inserting this equation itself as the integrand in the r.h.s. We can then write equation #(2.1.5) as

|\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt' V(t')\left(|\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t'}dt'' V(t'')|\psi(t'')\rangle\right), \qquad t''<t'\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\ (2.1.6)

which can be written compactly as

|\psi(t)\rangle = T e^{-\frac{i}{\hbar}\int_{t_0}^{t}V(t')dt'} |\psi(t_0) \rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\ (2.1.7)

This is the general solution. T\! is the time-ordering operator, which arranges the operators in the expansion of the exponential in chronological order, with later times to the left. For now, we consider only the correction to first order in \!V(t).
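The content of the time-ordered exponential can be illustrated numerically: the propagator is approximated by a product of short-time exponentials applied in chronological order, later times acting last. This is a minimal sketch with an assumed 2×2 Hermitian interaction-picture perturbation V(t); the numbers are illustrative only:

```python
import numpy as np

# Sketch: approximate T exp(-(i/hbar) Int V(t')dt') as a chronological product
# of short-time propagators. V(t) below is an assumed 2x2 Hermitian example.
hbar = 1.0

def V(t):
    """Assumed interaction-picture perturbation (Hermitian 2x2 matrix)."""
    return np.array([[0.0, 0.2 * np.exp(1j * t)],
                     [0.2 * np.exp(-1j * t), 0.0]])

def expm_herm(H):
    """exp(-i H / hbar) for a Hermitian H, via eigendecomposition."""
    w, U = np.linalg.eigh(H)
    return U @ np.diag(np.exp(-1j * w / hbar)) @ U.conj().T

def propagate(psi0, t0, t1, steps=2000):
    """Apply the short-time factors in time order (later times leftmost)."""
    dt = (t1 - t0) / steps
    psi = psi0.astype(complex)
    for k in range(steps):
        tm = t0 + (k + 0.5) * dt          # midpoint of each slice
        psi = expm_herm(V(tm) * dt) @ psi
    return psi

psi = propagate(np.array([1.0, 0.0]), 0.0, 10.0)
assert np.isclose(np.linalg.norm(psi), 1.0)   # each factor is unitary
```

Because each short-time factor is unitary, the norm of the state is preserved exactly, which is the numerical counterpart of the formal solution being unitary.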

First Order Transitions

If we limit ourselves to the first order, we use

|\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt'V(t')|\psi(t_0)\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\ (2.1.8)

We want to see whether the system undergoes a transition to another state, say |n\rangle, so we project the wave function |\psi(t)\rangle onto |n\rangle. From now on, let |\psi(t_0)\rangle = |0\rangle
for brevity. In other words, what is the probability of the state  |0\rangle making a transition into a state  |n \rangle at a given time t\!?

Projecting  |\psi(t)\rangle onto the state |n\rangle and using  \langle n|0\rangle =0 for  n \neq 0 , we get \begin{align}\langle n|\psi(t)\rangle & = \langle n|0\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt'\langle n|V(t')|0\rangle\\ & = \frac{1}{i\hbar}\int_{t_0}^{t}dt'\langle n|e^{\frac{i}{\hbar}H_0 t'}V_{t'}e^{-\frac{i}{\hbar}H_0 t'}|0\rangle\\ & = \frac{1}{i\hbar}\int_{t_0}^{t}dt'e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}\langle n|V_{t'}|0\rangle \end{align}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\,\ (2.1.9)

Expression #(2.1.9) is the transition probability amplitude. Squaring it gives the probability of finding the system in state |n\rangle at time t\!:

P_{0 \rightarrow n}(t) = |\langle n|\psi(t)\rangle|^2 = \left|\frac{1}{i\hbar}\int_{t_0}^{t}dt' e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}\langle n|V_{t'}|0\rangle\right|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\, (2.1.10)

For example, let us consider a potential \!V_t which is turned on sharply at time \!t_0 but is independent of  t \! thereafter. Furthermore, we let \!t_0 = 0 for convenience. Therefore:

V_t = 
\begin{cases}
0 &\mbox{if} \qquad t<0\\
V &\mbox{if} \qquad t>0
\end{cases}

\begin{align}
P_{0 \rightarrow n}(t) & = \left|\frac{1}{i\hbar}\int_{0}^{t}dt' e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}\langle n|V|0\rangle\right|^2\\
& = \left|\frac{1}{i\hbar}\frac{e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t}-1}{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)}\langle n|V|0\rangle\right|^2\\
& = \frac{4 \sin^2\left(\frac{\epsilon_n - \epsilon_0}{2\hbar}t\right)}{\left(\epsilon_n - \epsilon_0\right)^2}|\langle n|V|0 \rangle|^2
\end{align}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;  (2.1.11)

The plot of the probability vs. \! \epsilon_n is given in the following plot:

Image:Amplitude.JPG

, where \Delta\epsilon\,\Delta t \geq 2\pi\hbar. We conclude that as time grows the probability develops a very narrow peak, so that approximate energy conservation is required for a transition with appreciable probability. However, this "uncertainty relation" is not the same as the fundamental  x - p \! uncertainty relation: while  x \! and  p \! are both observables, time in non-relativistic quantum mechanics is just a parameter, not an observable.
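The narrowing of the peak with growing t can be checked directly from expression #(2.1.11). A short numerical sketch (matrix element set to 1 and \hbar = 1, both illustrative choices):

```python
import numpy as np

# Sketch: the envelope P(delta, t) = 4 sin^2(delta t / 2 hbar) / delta^2 from
# (2.1.11), with |<n|V|0>|^2 = 1 and hbar = 1 (illustrative choices).
hbar = 1.0

def P(delta, t):
    """Transition probability vs. energy difference delta = eps_n - eps_0."""
    x = np.atleast_1d(np.asarray(delta, dtype=float))
    out = np.full_like(x, (t / hbar)**2)        # limiting value at delta -> 0
    nz = np.abs(x) > 1e-12
    out[nz] = 4 * np.sin(x[nz] * t / (2 * hbar))**2 / x[nz]**2
    return out

de = np.linspace(-1.0, 1.0, 200001)
widths = []
for t in (10.0, 100.0):
    p = P(de, t)
    assert np.isclose(p.max(), t**2)            # peak height grows like t^2
    widths.append(np.ptp(de[p > p.max() / 2]))  # full width at half maximum
assert widths[1] < widths[0]                    # ...while the peak narrows
```

The peak height grows like t^2 while its width shrinks like 1/t, which is exactly the behavior that turns the envelope into a delta function below.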

Now, imagine shining light of a certain frequency on a hydrogen atom. The atom will probably end up in some bound state; however, it might be ionized as well. The problem with ionization is that the final state lies in a continuum, so we cannot simply pick a single state to end in, i.e. a plane wave with a specific  k \!.

Furthermore, if the wave function is normalized, the plane-wave states will contain a factor of \frac{1}{\sqrt{V}}\! which goes to zero as  V \! becomes very large. But we know that ionization exists, so something must be missing. Instead of measuring the probability of a transition to a single wavenumber  k \!, we want to measure the amplitude of transition to a group of states around a particular  k \!, i.e., the transition amplitude from  k \! to  k+dk \!.

Let's suppose that the state |n\rangle is one of the continuum states; then what we can ask for is the probability that the system makes a transition to a small group of states about |n\rangle, not to a specific |n\rangle. For example, for a free particle, what we can find is the transition probability from the initial state to a small group of states around |\vec k\rangle, or in other words, the transition probability to an element of phase space \! d^3k / (2\pi)^3.

The next step is a mathematical trick. We use

\delta(x) = \lim_{\eta \to 0}\frac{1}{\pi x}\sin\left(\frac{x}{\eta}\right)

Applying this to our result from above, we see that as  t \rightarrow \infty ,

\frac{\pi \sin\left(\frac{\epsilon_n - \epsilon_0}{2\hbar}t\right)}{\pi \hbar \frac{\epsilon_n - \epsilon_0}{2\hbar}} \quad\underset{t \rightarrow \infty}{\longrightarrow}\quad \frac{\pi}{\hbar}\delta\left(\frac{\epsilon_n - \epsilon_0}{2\hbar}\right) = 2 \pi \delta\left( \epsilon_n - \epsilon_0 \right) \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\;\;\; (2.1.12)
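The delta-function representation used here can be verified numerically by integrating the smeared kernel against a smooth test function: as η shrinks, the integral approaches f(0). The test function and grid below are illustrative choices:

```python
import numpy as np

# Sketch: check that sin(x/eta)/(pi x) -> delta(x) as eta -> 0 by integrating
# it against a smooth test function f with f(0) = 1.
def smeared_delta(x, eta):
    # sin(x/eta)/(pi x), written via np.sinc(z) = sin(pi z)/(pi z)
    return np.sinc(x / (np.pi * eta)) / (np.pi * eta)

f = lambda x: np.exp(-x**2)                 # test function, f(0) = 1
x = np.linspace(-40.0, 40.0, 1_600_001)
for eta in (0.1, 0.02):
    val = np.trapz(f(x) * smeared_delta(x, eta), x)
    assert abs(val - 1.0) < 1e-2            # integral -> f(0) = 1
```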

Using this result in equation #2.1.11 gives


{P_{0 \rightarrow n}(t)}\quad\underset{t \rightarrow \infty}{\longrightarrow}\quad \frac{t}{\hbar}2\pi \delta(\epsilon_n - \epsilon_0)|\langle n|V|0\rangle|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;(2.1.13)

or as a rate of transition, \Gamma_{0\rightarrow n} :

\Gamma_{0 \rightarrow n} = \frac{d}{dt}P_{0 \rightarrow n}(t)\quad\underset{t \rightarrow \infty}{\longrightarrow}\quad\frac{2\pi}{\hbar} \delta(\epsilon_n - \epsilon_0)|\langle n|V|0\rangle|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.1.14)

which is called Fermi's Golden Rule. When using this formula, we must keep in mind to sum over the entire continuum of final states.

To make things clear, let's try to calculate the transition probability for a system from a state |\vec{k}\rangle to a final state |\vec{k'}\rangle due to a potential \! V(r).


\langle \vec{k}'|V|\vec{k}\rangle = \int d^3 r \frac{e^{-i\vec{k}'\cdot\vec{r}}}{\sqrt{L^3}}V(r)\frac{e^{i\vec{k}\cdot\vec{r}}}{\sqrt{L^3}} = \frac{V_{k'k}}{L^3} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\ (2.1.15)

\Gamma_{\vec{k} \rightarrow \vec{k}'} = \frac{2\pi}{\hbar} \delta(\epsilon_k - \epsilon_{k'})\frac{|V_{k'k}|^2}{L^6}

What we want is the rate of transition (actually scattering, in this case) \!d\Gamma into a small solid angle \!d\Omega. So we must sum over the momentum states in this solid angle:

\sum_{\vec{k}'\in d\Omega}\Gamma_{\vec{k}\rightarrow \vec{k}'}

The sum over continuum states can be converted into an integral,

\sum_{\vec{k}'\in d\Omega} \quad \longrightarrow \quad d\Omega\int d\epsilon_{\vec{k}'}\frac{L^3 m k'}{(2\pi)^3 \hbar^2}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\ (2.1.16)

Therefore,

d\Gamma_{\vec{k}\rightarrow{\vec{k}'\in d\Omega}} = \frac{d\Omega}{L^3}\frac{mk}{4\pi^2\hbar^3}|V_{k'k}|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\;\ (2.1.17)
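The density-of-states factor used in #(2.1.16) can be checked by counting box modes \vec k = 2\pi \vec n / L directly. This sketch (with \hbar = m = 1 and an assumed box size and energy window) counts lattice modes in a spherical energy shell and compares against the integral of the stated density of states:

```python
import numpy as np

# Sketch: check the density of states L^3 m k / ((2 pi)^3 hbar^2) per unit
# energy per unit solid angle, used in (2.1.16), by counting box modes
# k = 2 pi n / L in a spherical energy shell. Units hbar = m = 1; box size
# and energy window are illustrative assumptions.
hbar = m = 1.0
L = 30.0
n = np.arange(-35, 36)
kx, ky, kz = np.meshgrid(2*np.pi*n/L, 2*np.pi*n/L, 2*np.pi*n/L, indexing="ij")
eps = hbar**2 * (kx**2 + ky**2 + kz**2) / (2*m)

e0, de = 20.0, 6.0
count = np.count_nonzero((eps >= e0) & (eps < e0 + de))

# Integral of the DOS over the full solid angle and the energy window:
k0, k1 = np.sqrt(2*m*e0)/hbar, np.sqrt(2*m*(e0 + de))/hbar
expected = 4*np.pi * (L/(2*np.pi))**3 * (k1**3 - k0**3) / 3
assert abs(count / expected - 1) < 0.03
```

The direct mode count agrees with the continuum formula to within the lattice fluctuations of the shell, which is the content of replacing the sum by an integral.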

The flux per incident particle of momentum \hbar \vec{k} in a volume \!L^3 is \hbar k / m L^3, so

\frac{d\Gamma}{d\Omega \left(\frac{\hbar k}{m L^3}\right)} = \frac{m^2}{4\pi^2\hbar^4}\left|V_{k'k}\right|^2 = \frac{d\sigma}{d\Omega}, in the Born Approximation.

This result makes sense: since our potential does not depend on time, what happened here is that we sent in a particle with wave vector \vec{k} through a potential and later detected a particle coming out with wave vector \vec{k'}. It is a scattering problem solved by a different method.
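As a concrete (assumed) example, take a Yukawa potential V(r) = V_0 e^{-\mu r}/r, whose matrix element #(2.1.15) is known in closed form: V_{k'k} = 4\pi V_0/(q^2+\mu^2) with q = |\vec{k}'-\vec{k}|. The sketch below checks this against direct numerical integration and forms the Born cross section (with \hbar = m = 1):

```python
import numpy as np

# Sketch (assumed example): Born matrix element for a Yukawa potential
# V(r) = V0 exp(-mu r)/r. Doing the angular integral of exp(-i q.r) leaves
# a 1D radial integral: V_{k'k} = (4 pi / q) Int_0^inf r sin(q r) V(r) dr.
V0, mu = 1.0, 0.5

def V_kk_numeric(q):
    r = np.linspace(1e-6, 60.0, 400_001)
    integrand = np.sin(q * r) * V0 * np.exp(-mu * r)   # = r sin(qr) V(r)
    return 4 * np.pi / q * np.trapz(integrand, r)

def V_kk_exact(q):
    return 4 * np.pi * V0 / (q**2 + mu**2)

q = 1.3
assert np.isclose(V_kk_numeric(q), V_kk_exact(q), rtol=1e-3)

# Born differential cross section (hbar = m = 1):
dsigma_dOmega = V_kk_exact(q)**2 / (4 * np.pi**2)
```

In the limit μ → 0 this matrix element reduces to the Coulomb (Rutherford) form, a standard check on the Born approximation.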

Another simple example of a transition probability calculation in time-dependent perturbation theory, with a different potential.

Here is another example.


An example involving the first excited state of the hydrogen atom.

Harmonic Perturbation Theory

Harmonic perturbations are one of the main interests of perturbation theory. In experiment, we usually perturb the system with a known signal to extract information about it, for example the differences between its energy levels. We could send a photon of a certain frequency at a hydrogen atom to excite the electron, and then let it decay, observing the difference between two energy levels by measuring the frequency of the emitted photon. The photon acts as an electromagnetic signal, and it is harmonic (considered as an electromagnetic wave).

In general, we write down the harmonic perturbation as


\!V_t = V \cos(\omega t)\, e^{\eta t}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(2.2.1)

where \!e^{\eta t} specifies the rate at which the perturbation is turned on. Since we assume the perturbation is turned on very slowly, η is a very small positive number, which is set to zero at the end of the calculation (with  t_0 = -\infty ).

We start from \!t_0 = - \infty, since there is no perturbation at that time. We want to find the probability that there will be a transition from the initial state to some other state, | n \rangle. The transition amplitude is

\!\langle n|\psi_t\rangle = \langle n|e^{\frac{-i}{\hbar}H_0 t}|\psi(t)\rangle = e^{\frac{-i}{\hbar}\epsilon_n t}\langle n|\psi(t)\rangle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad(2.2.2)

To first order in \!V we write


\begin{align}
\langle n|\psi(t)\rangle & = \frac{1}{i\hbar}\int_{-\infty}^{t}dt' \langle n|V(t')|0\rangle\\
& = \frac{1}{i\hbar}\int_{-\infty}^{t}dt' \langle n|e^{\frac{i}{\hbar}H_0 t'}V_{t'} e^{\frac{-i}{\hbar}H_0 t'}|0\rangle\\
& = \frac{1}{i\hbar}\int_{-\infty}^{t}dt'\, e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}e^{\eta t'}\cos(\omega t')\langle n|V|0\rangle\\
& = \frac{\langle n|V|0\rangle}{2i\hbar}\sum_{s=\pm}\int_{-\infty}^{t}dt' e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}e^{\eta t'}e^{is\omega t'}\\
& = \frac{\langle n|V|0\rangle}{2i\hbar}\sum_{s=\pm}\frac{e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t}e^{\eta t}e^{is\omega t}}{i(\frac{\epsilon_n - \epsilon_0}{\hbar}+s\omega-i\eta)}\\
& = \frac{\langle n|V|0\rangle}{2} e^{\eta t}\sum_{s = \pm}\frac{e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0 - s\hbar \omega)t}}{\epsilon_0 - \epsilon_n - s\hbar \omega + i\eta \hbar}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad (2.2.3)
\end{align}

Now we calculate the probability as usual:


\begin{align}
|\langle n|\psi_t\rangle|^2 & = \frac{1}{4} |\langle n|V|0\rangle|^2 e^{2\eta t}\sum_{ss'}\frac{e^{-i(s-s')\omega t}}{(\epsilon_0 - \epsilon_n - s\hbar \omega + i\eta \hbar)(\epsilon_0 - \epsilon_n - s'\hbar \omega - i\eta \hbar)}\\
\underset{0 \rightarrow n}{P(t)} & = \frac{1}{4}|\langle n|V|0\rangle|^2 e^{2\eta t}\left[\frac{1}{(\epsilon_0 - \epsilon_ n -\hbar\omega)^2 +  \eta^2 \hbar^2}+\frac{1}{(\epsilon_0 - \epsilon_n + \hbar \omega)^2+  \eta^2 \hbar^2}\right]
\end{align}

where all oscillatory terms have been averaged to zero. The transition rate is given by:


\underset{0 \rightarrow n}{\Gamma(t)}=\frac{d{P(t)_{0 \rightarrow n}}}{d t} = \frac{1}{4}|\langle n|V|0\rangle|^2 e^{2\eta t}\left[\frac{2\eta}{(\epsilon_0 - \epsilon_n - \hbar \omega)^2+  \eta^2 \hbar^2}+\frac{2\eta}{(\epsilon_0 - \epsilon_n + \hbar \omega)^2+  \eta^2 \hbar^2}\right]

Now we take the limit η → 0, which corresponds to a perturbation switched on infinitely slowly. Using \lim_{\eta \rightarrow 0}\frac{2\eta}{x^2+\eta^2\hbar^2} = \frac{2\pi}{\hbar}\delta(x) and e^{2\eta t} \rightarrow 1, we obtain:


\underset{0 \rightarrow n}{\Gamma(t)} = \frac{1}{4}|\langle n|V|0\rangle|^2 \frac{2\pi}{\hbar}\left[\delta(\epsilon_n - \epsilon_0 + \hbar \omega)+\delta(\epsilon_n - \epsilon_0 - \hbar \omega)\right]


which is the Fermi Golden Rule. This result shows that there is a non-zero transition rate only for \epsilon_n - \epsilon_0 = \mp \hbar \omega; roughly speaking, there are significant transitions only when ω is a "resonant frequency" for a particular transition. The Fermi Golden Rule also shows that it doesn't matter how the potential is turned on -- fast or slow -- the transition rate is not really affected.
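The limit taken above rests on each bracketed factor in the rate being (π times) a normalized Lorentzian, whose area is independent of its width. This is easy to verify numerically (the grid and widths are illustrative):

```python
import numpy as np

# Sketch: a normalized Lorentzian (1/pi) g / (x^2 + g^2) has unit area for
# every width g, so as eta -> 0 each factor in the rate tends to a delta
# function centered on the resonance epsilon_n - epsilon_0 = -/+ hbar omega.
def lorentzian(x, g):
    return (g / np.pi) / (x**2 + g**2)

x = np.linspace(-200.0, 200.0, 4_000_001)
areas = [np.trapz(lorentzian(x, g), x) for g in (1.0, 0.1, 0.01)]
assert np.allclose(areas, 1.0, atol=1e-2)             # area independent of width
assert lorentzian(0.0, 0.01) > lorentzian(0.0, 0.1)   # peak sharpens as g -> 0
```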

Second Order Transitions

Sometimes the first-order matrix element  \langle f|V|i \rangle is identically zero (parity, the Wigner-Eckart theorem, etc.) but other matrix elements are nonzero, and the transition can be accomplished by an indirect route.

 c^{(2)}_{f}(t)=\left(\frac{1}{i \hbar}\right)^2 \sum_{n}\int_{0}^{t} \int_{0}^{t'} dt' dt'' 
e^{-i \omega_{f}\left(t-t'\right)}\langle f|V_{S}(t')|n\rangle e^{-i \omega_{n}\left(t'-t''\right)}	
\langle n|V_{S}(t'')|i\rangle e^{-i \omega_{i} t''}

where  c^{(2)}_{f}(t) is the probability amplitude for the second-order process.

Taking the gradually switched-on harmonic perturbation \ V_{S}(t)=e^{\epsilon t} V e^{-i \omega t} and the initial time  -\infty , as above,

 c^{(2)}_{f}(t)=\left(\frac{1}{i \hbar}\right)^2 \sum_{n}\langle f|V|n\rangle \langle n|V|i\rangle
e^{-i \omega_{f} t} \int_{-\infty}^{t} dt' \int_{-\infty}^{t'} dt'' e^{i \left(\omega_{f} -\omega_{n} 
-\omega-i \epsilon\right)t'} e^{i \left(\omega_{n} -\omega_{i} -\omega-i \epsilon\right)t''}

The integrals are straightforward, and yield

c^{(2)}_{f}(t)=\left(\frac{1}{i \hbar}\right)^2 e^{-i \left(\omega_{i} -\omega_{f}\right)t}
\frac{e^{2 \epsilon t}}{\omega_{f} -\omega_{i} -2 \omega-2 i \epsilon}
\sum_{n} \frac{\langle f|V|n\rangle \langle n|V|i\rangle}{\omega_{n} -\omega_{i} -\omega-i \epsilon}

Exactly as in the section above on the first-order Golden Rule, we can find the transition rate:

 \frac{d}{dt}\left|{c^{(2)}_{f}(t)}\right|^2 = \frac{2 \pi}{\hbar^4}
\left|{\sum_{n}\frac{\langle f|V|n\rangle \langle n|V|i\rangle}{\omega_{n} -\omega_{i} -\omega
-i \epsilon}}\right|^2 \delta \left(\omega_{f} -\omega_{i} -2 \omega \right)


The  \hbar^4 in the denominator becomes  \hbar on replacing the frequencies ω with energies E, both in the denominator of the sum and in the delta function (recall that  E= \hbar \omega ).


This is a transition in which the system gains energy  2 \hbar \omega from the beam; in other words, two photons are absorbed, the first taking the system to a short-lived intermediate state that is therefore not well defined in energy. There is no energy conservation requirement for this intermediate state, only between the initial and final states.


Of course, if an atom in an arbitrary state is exposed to monochromatic light, other second order processes in which two photons are emitted, or one is absorbed and one emitted (in either order) are also possible.

Example of Two Level System : Ammonia Maser

Image:Ammonia.JPG

This is a very complicated quantum system and there is no way to solve it exactly, but we can make some assumptions to render the problem tractable. In this model, we assume that the nitrogen atom, which is significantly heavier than a hydrogen atom, is motionless. The hydrogen atoms form a rigid equilateral triangle whose axis always passes through the nitrogen atom.

Since there are two significant, distinct states (the two positions of the hydrogen triangle relative to the nitrogen atom), we write the wave function as a superposition of both states; its coefficients are of course functions of time.

|\Psi_t\rangle = C_1(t)|1\rangle + C_2(t)|2\rangle

Substituting this state into the time-dependent Schrodinger equation, we obtain

i\hbar
\begin{pmatrix}
  \dot{C}_1(t)\\
  \dot{C}_2(t) 
\end{pmatrix}
=
\begin{pmatrix}
  E_0 & A\\
  A & E_0 
\end{pmatrix}
\begin{pmatrix}
  C_1(t)\\
  C_2(t) 
\end{pmatrix}

In the presence of an electric field, the additional energy enters only in the diagonal part of the Hamiltonian matrix:

i\hbar
\begin{pmatrix}
  \dot{C}_1(t)\\
  \dot{C}_2(t) 
\end{pmatrix}
=
\begin{pmatrix}
  E_0 + \mu \varepsilon(t) & A\\
  A & E_0 - \mu \varepsilon(t)
\end{pmatrix}
\begin{pmatrix}
  C_1(t)\\
  C_2(t) 
\end{pmatrix}

Typically, 2 A \sim 10^{-4} \;\mbox{eV}, which gives the frequency of the motion of the hydrogen triangle \nu \sim 2.4 \times 10^{10} \;\mbox{Hz} and the wavelength \lambda \approx 1.25 \;\mbox{cm} (microwave region).

Solving the Schrodinger equation above, we find the energies of the two states

E_\pm = E_0 \pm \sqrt{(\mu \varepsilon)^2 + A^2}
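This eigenvalue formula can be checked by diagonalizing the 2×2 Hamiltonian directly. The numbers below are illustrative, with 2A ~ 10^{-4} eV as quoted above:

```python
import numpy as np

# Sketch: check E_pm = E0 +/- sqrt((mu eps)^2 + A^2) by diagonalizing the
# 2x2 Hamiltonian directly (illustrative values, 2A ~ 1e-4 eV as in the text).
E0, A = 0.0, 0.5e-4          # eV
for mu_eps in (0.0, 0.3e-4, 2e-4):
    H = np.array([[E0 + mu_eps, A],
                  [A, E0 - mu_eps]])
    w = np.linalg.eigvalsh(H)                     # ascending eigenvalues
    gap = np.hypot(mu_eps, A)                     # sqrt((mu eps)^2 + A^2)
    assert np.allclose(w, [E0 - gap, E0 + gap])
```

At zero field the splitting is 2A; at large field it grows linearly as 2με, which is the level-repulsion behavior shown in the graph below.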

The following graphs show the eigenenergies as functions of the applied electric field  \varepsilon :

Image:Crossing.JPG

Because of these two different states, ammonia molecules can be separated by an electric field. This can be used to select molecules in a particular energy state.

Image:Ammonia_maser.JPG

It should be clear that if \! \varepsilon(t) = 0, our eigenstates are \underbrace{\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \pm 1 \end{pmatrix}}_{basis \;for \;expansion}= \; \begin{pmatrix} C_1(0) \\ C_2(0) \end{pmatrix} with energies \! E_0 \pm A

Let \begin{pmatrix}C_1(t)\\C_2(t)\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix} \gamma_1(t)+\frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix} \gamma_2(t); then we find


\begin{align}
i\hbar \dot{\gamma_1} & = & (E_0 +A)\gamma_1 + \mu \varepsilon(t)\gamma_2\\
i\hbar \dot{\gamma_2} & = & (E_0 -A)\gamma_2 + \mu \varepsilon(t)\gamma_1
\end{align}

Now, let


\begin{align}
\gamma_1(t) & = e^{-\frac{i}{\hbar}(E_0 + A)t}\alpha(t)\\
\gamma_2(t) & = e^{-\frac{i}{\hbar}(E_0 - A)t}\beta(t)
\end{align}

Also, we define the electric field as a function of time, \varepsilon(t) = 2\varepsilon_0 \cos\omega t = \varepsilon_0(e^{i\omega t}+e^{-i \omega t}), so the above equations can be written as


\begin{align}
i\hbar \dot{\alpha}(t) & = \mu \varepsilon_0 (e^{i(\omega + \frac{2A}{\hbar})t}+e^{-i(\omega - \frac{2A}{\hbar})t})\beta(t)\\
i\hbar \dot{\beta}(t) & = \mu \varepsilon_0 (e^{i(\omega - \frac{2A}{\hbar})t}+e^{-i(\omega + \frac{2A}{\hbar})t})\alpha(t)
\end{align}

Now, we observe that for \! \omega near \frac{2A}{\hbar} = \omega_0, the first term on the right-hand side of the first equation oscillates very rapidly compared to the second term of the same equation. The average of this rapidly oscillating term is zero, so we may drop such terms in what follows (the rotating-wave approximation). We then obtain


\begin{align}
i\hbar\dot{\alpha}(t) & = \mu \varepsilon_0 e^{-i(\omega - \omega_0)t}\beta(t)\\
i\hbar\dot{\beta}(t) & = \mu \varepsilon_0 e^{i(\omega - \omega_0)t}\alpha(t)
\end{align}

At resonance, \! \omega = \omega_0, these equations are simplified to


\begin{align}
i\hbar\dot{\alpha}(t) & = \mu \varepsilon_0 \beta(t)\\
i\hbar\dot{\beta}(t) & = \mu \varepsilon_0 \alpha(t)
\end{align}

We can then differentiate the first equation with respect to time and substitute the second equation into it to get

i\hbar \ddot{\alpha} =\mu \varepsilon_0 \left(\frac{\mu \varepsilon_0}{i\hbar}\alpha\right) \Rightarrow \ddot{\alpha}=-\left(\frac{\mu\varepsilon_0}{\hbar} \right)^2 \alpha

The solution is (with β obtained by substitution)


\begin{align}
\alpha(t) & = a \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right) + b \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right)\\
\beta(t) & = ib \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right) - ia \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\end{align}

Let's assume that at time \! t = 0 the molecule is in the upper state  |+\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} (experimentally, we can prepare the molecule in this state), so that  a = 1 \! and b = 0\!. This assumption gives


\begin{matrix}
\alpha(t) \!\!\! &=& \!\!\! \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right) 
\\
\beta(t) \!\!\! &=& \!\!\! -i \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right) 
\end{matrix} 
\; \Rightarrow \;
\begin{matrix}
\gamma_1(t) \!\!\! &=& \!\!\! e^{-\frac{i}{\hbar}(E_0 + A)t} \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right) \\
\gamma_2(t) \!\!\! &=& \!\!\! -ie^{-\frac{i}{\hbar}(E_0 - A)t} \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\end{matrix}

Therefore the probabilities of finding the molecule in the states  |+\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} and  |-\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix} are:


\begin{align}
P_+(t) & = |\gamma_1(t)|^2 = \cos^2\left(\frac{\mu \varepsilon_0}{\hbar}t\right)\\
P_-(t) & = |\gamma_2(t)|^2 = \sin^2\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\end{align}
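The cos²/sin² result relies on dropping the rapidly oscillating terms. The sketch below (with illustrative parameters chosen so that με₀ ≪ ħω₀) integrates the full pre-approximation equations for α and β numerically at resonance and checks the complete population transfer after a half Rabi period:

```python
import numpy as np

# Sketch: integrate the full equations for alpha(t), beta(t) (before the
# oscillating terms are dropped) at resonance omega = omega0 = 2A/hbar, and
# compare with P_-(t) = sin^2(mu eps0 t / hbar). Parameters are illustrative.
hbar = 1.0
omega0 = 50.0          # 2A / hbar
mu_eps0 = 0.5          # mu * epsilon_0 (weak drive, mu_eps0 << hbar*omega0)
omega = omega0         # drive on resonance

def rhs(t, y):
    a, b = y
    da = -1j*mu_eps0/hbar * (np.exp(1j*(omega+omega0)*t) + np.exp(-1j*(omega-omega0)*t)) * b
    db = -1j*mu_eps0/hbar * (np.exp(1j*(omega-omega0)*t) + np.exp(-1j*(omega+omega0)*t)) * a
    return np.array([da, db])

def rk4_step(y, t, dt):
    k1 = rhs(t, y); k2 = rhs(t + dt/2, y + dt/2*k1)
    k3 = rhs(t + dt/2, y + dt/2*k2); k4 = rhs(t + dt, y + dt*k3)
    return y + dt/6*(k1 + 2*k2 + 2*k3 + k4)

y = np.array([1.0 + 0j, 0.0 + 0j])      # start in |+>: alpha(0) = 1
T = np.pi*hbar/(2*mu_eps0)              # half Rabi period: P_- should reach 1
steps = 20000
dt = T/steps
for k in range(steps):
    y = rk4_step(y, k*dt, dt)

P_plus, P_minus = abs(y[0])**2, abs(y[1])**2
assert np.isclose(P_plus + P_minus, 1.0, atol=1e-6)  # probability conserved
assert P_minus > 0.99                                # full transfer, as predicted
```

The small residual deviation from P_- = 1 comes from the dropped counter-rotating terms, and shrinks as με₀/ħω₀ decreases.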

Note that the probabilities depend on time. The molecules enter the cavity in the upper energy state. If the length of the cavity is chosen appropriately, the molecules will exit in the lower energy state with certainty (P = 1). In that case, each molecule loses energy and, in turn, the cavity gains the same amount of energy. The cavity is thereby excited and produces stimulated emission. This mechanism is known as a MASER, which stands for Microwave Amplification by Stimulated Emission of Radiation.

This is a problem of time-dependent perturbation theory.

Interaction of Matter and Radiation

The conventional treatment of quantum mechanics uses time-independent wavefunctions with the Schrödinger equation to determine the energy levels (eigenvalues) of a system. To understand the interaction of radiation (electromagnetic radiation) and matter, we need to consider the time-dependent Schrödinger equation.

Quantization of Electromagnetic Radiation

Classical Viewpoint

Let's use the transverse gauge (sometimes called the Coulomb gauge), which gives us:

\varphi (\mathbf{r},t)=0

\nabla \cdot \mathbf{A}=0

In this gauge the electromagnetic fields are given by:

\mathbf{E}(\mathbf{r},t)=-\frac{1}{c}\frac{\partial \mathbf{A} }{\partial t}

\mathbf{B}(\mathbf{r},t)=\nabla \times \mathbf{A}

The energy is

\mathcal{E} = \frac{1}{8\pi} \int d^{3} r (\mathbf{E}^{2}+\mathbf{B}^{2})

The rate and direction of energy transfer are given by the Poynting vector

\mathbf{P} = \frac{c}{4\pi} \mathbf{E} \times \mathbf{B}

The radiation generated by a classical current satisfies

\Box \mathbf{A} = -\frac{4\pi}{c} \mathbf{j}

where \Box is the d'Alembert operator. Solutions in the region where \mathbf{j}=0 are given by

\mathbf{A}(\mathbf{r},t) = \alpha \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\alpha^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}

where \omega=c|\mathbf{k}| and \boldsymbol{\lambda}\cdot \mathbf{k}=0 , as we are considering EM waves in vacuum. The \boldsymbol{\lambda} and \boldsymbol{\lambda^*} are the two general polarization vectors, perpendicular to \mathbf{k}. Note that, in general,

 \hat{\mathbf{k}}\times\hat{\boldsymbol{\lambda}} = \hat{\boldsymbol{\lambda^*}}; \hat{\boldsymbol{\lambda}}\times\hat{\boldsymbol{\lambda^*}} = \hat{\mathbf{k}}; \hat{\boldsymbol{\lambda^*}}\times\hat{\mathbf{k}} = \hat{\boldsymbol{\lambda}}

Here the plane waves are normalized with respect to some volume V \!. This is just for convenience, and the physics does not change. Note that  \left| \boldsymbol{\lambda} \right|^2 =1 , as the polarization vectors are unit vectors. Notice also that, as written, \mathbf{A} is a real vector field.

Let's compute \mathcal{E}. For this


\begin{align}
\mathbf{E}(\mathbf{r},t) & =-\frac{1}{c}\frac{\partial \mathbf{A} }{\partial t} \\

& =-\frac{1}{c\sqrt{V}}\frac{\partial}{\partial t}\left[\alpha \boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}+\alpha^{*} \boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\

& =-\frac{i\omega}{c\sqrt{V}}\left[-\alpha \boldsymbol{\lambda} e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}+\alpha^{*} \boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\

\mathbf{E}^{2}(\mathbf{r},t) & = \frac{\omega^{2}}{c^{2}V}\left[\alpha\alpha^{*} \boldsymbol{\lambda}\cdot\boldsymbol{\lambda}^{*} -  \alpha\alpha \boldsymbol{\lambda}\cdot\boldsymbol{\lambda} e^{2i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*}\alpha^{*}\boldsymbol{\lambda}^{*}\cdot\boldsymbol{\lambda}^{*} e^{-2i(\mathbf{k}\cdot\mathbf{r}-\omega t)} + \alpha^{*}\alpha\boldsymbol{\lambda}\cdot\boldsymbol{\lambda}^{*}\right] \\
\end{align}

Taking the average, the oscillating terms will disappear. Then we have


\begin{align}
\mathbf{E}^{2}(\mathbf{r}) & = \frac{\omega^{2}}{c^{2}V}\left[\alpha\alpha^{*}+\alpha^{*}\alpha\right] \\

&=2\frac{\omega^{2}}{c^{2}V}|\alpha|^2 \\
\end{align}

It is well known that for plane waves \mathbf{B}=\mathbf{n}\times \mathbf{E} , where \mathbf{n} is the direction of \mathbf{k}. This shows that \mathbf{B}^{2}=\mathbf{E}^{2}. However, let's verify this explicitly:


\begin{align}
\mathbf{B}(\mathbf{r},t) & =\nabla \times\mathbf{A}\\
& =\nabla \times \left[\alpha \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\alpha^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right] \\
\end{align}

Each component is given by


\begin{align}
\mathbf{B}_{i}(\mathbf{r},t)& =\frac{1}{{\sqrt{V}}}\left[\alpha \varepsilon _{ijk}\partial_{j} \left(\boldsymbol{\lambda}_{k}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right)+\alpha^{*} \varepsilon _{ijk}\partial_{j} \left(\boldsymbol{\lambda}^{*}_{k}e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right)\right] \\

& =\frac{i}{{\sqrt{V}}}\left[\alpha \varepsilon _{ijk}\mathbf{k}_{j} \boldsymbol{\lambda}_{k}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \varepsilon _{ijk}\mathbf{k}_{j} \boldsymbol{\lambda}^{*}_{k}e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\
\end{align}

Then


\begin{align}
\mathbf{B}(\mathbf{r},t) & =\frac{i}{{\sqrt{V}}}\left[\alpha \mathbf{k}\times\boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \mathbf{k}\times\boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\

\mathbf{B}^{2}(\mathbf{r},t) & =\frac{1}{{V}}\left[\alpha \mathbf{k}\times\boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \mathbf{k}\times\boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \left[\alpha \mathbf{k}\times\boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \mathbf{k}\times\boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right]^{*} \\

& =\frac{1}{{V}}\left[\alpha\alpha^{*} \left(\mathbf{k}\times\boldsymbol{\lambda}\right)\cdot\left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right) -\alpha \alpha\left(\mathbf{k}\times\boldsymbol{\lambda}\right)\cdot\left(\mathbf{k}\times\boldsymbol{\lambda}\right) e^{2i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \alpha^{*} \left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right)\cdot\left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right) e^{-2i(\mathbf{k}\cdot\mathbf{r}-\omega t)} + \alpha^{*} \alpha \left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right)\cdot \left(\mathbf{k}\times\boldsymbol{\lambda}\right) \right] \\

\end{align}

Again, taking the average the oscillating terms vanish. Then we have


\begin{align}
\mathbf{B}^{2}(\mathbf{r}) & =\frac{1}{{V}}\left[\alpha \alpha^{*}+\alpha^{*} \alpha\right](\mathbf{k}\times\boldsymbol{\lambda})\cdot(\mathbf{k}\times\boldsymbol{\lambda}^{*}) \\

& =\frac{1}{{V}}\left[\alpha \alpha^{*}+\alpha^{*} \alpha\right][\mathbf{k}^{2}(\boldsymbol{\lambda}\cdot\boldsymbol{\lambda^{*}})-(\mathbf{k}\cdot\boldsymbol{\lambda^{*}})(\mathbf{k}\cdot\boldsymbol{\lambda})] \\

& =\frac{2}{{V}}|\alpha|^{2}\mathbf{k}^{2}\\

&=2\frac{\omega^{2}}{c^{2}V}|\alpha|^2 \\


&= \mathbf{E}^{2}(\mathbf{r},t)\\
\end{align}

Finally the energy of this radiation is given by

\begin{align}
\mathcal{E} &= \frac{1}{8\pi} \int d^{3}r (\mathbf{E}^{2}+\mathbf{B}^{2}) \\

&=\frac{1}{4\pi} \int d^{3}r\; \mathbf{E}^{2}\\

&=\frac{1}{4\pi} \int d^{3}r \left(2\frac{\omega^{2}}{c^{2}V}|\alpha|^2\right)\\

&=\frac{\omega^{2}}{2\pi c^{2}}|\alpha|^2\\

\end{align}
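The time-averaged field energy can be cross-checked numerically: averaging the real single-mode field over one period at a fixed point should reproduce \langle\mathbf{E}^2\rangle = 2\omega^2|\alpha|^2/c^2 V. All numerical values below (k, polarization, α, r) are illustrative assumptions:

```python
import numpy as np

# Sketch: verify <E^2> = 2 omega^2 |alpha|^2 / (c^2 V) by averaging the real
# single-mode field over one period at a fixed point (illustrative values).
c, V = 1.0, 1.0
k = np.array([0.0, 0.0, 2.0])
omega = c * np.linalg.norm(k)
lam = np.array([1.0, 0.0, 0.0])                 # linear polarization, |lam| = 1
alpha = 0.7 + 0.3j
r = np.array([0.1, 0.2, 0.3])

t = np.linspace(0.0, 2*np.pi/omega, 200001)     # one full period
z = alpha * np.exp(1j*(k @ r - omega*t))        # alpha e^{i(k.r - w t)}
# E = -(1/c) dA/dt = (i w / (c sqrt(V))) (z - z*) lam, which is real:
E = ((1j*omega/(c*np.sqrt(V))) * (z - np.conj(z)))[:, None] * lam
E = E.real

E2_avg = np.trapz((E**2).sum(axis=1), t) / t[-1]
expected = 2 * omega**2 * abs(alpha)**2 / (c**2 * V)
assert np.isclose(E2_avg, expected, rtol=1e-6)
```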

So far, we have treated the potential \mathbf{A}(\mathbf{r},t) as a combination of two waves with the same frequency. Now let's extend the discussion to any form of \mathbf{A}(\mathbf{r},t). To do this, we can sum \mathbf{A}(\mathbf{r},t) over all values of \mathbf{k} and \boldsymbol{\lambda}:

\begin{align}
\mathbf{A}(\mathbf{r},t)=\sum_{\mathbf{k}\boldsymbol{\lambda}} \left[A_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}      \right]\\
\end{align}

To calculate the energy, we use the fact that any oscillating time-dependent term averages to zero. Therefore, in the previous sum, all cross terms with different \mathbf{k} vanish. Then, it is clear that


\begin{align}
\mathbf{E}^{2}(\mathbf{r}) & = \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\omega^{2}}{c^{2}V}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\

\mathbf{B}^{2}(\mathbf{r}) & = \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\mathbf{k}^2}{V}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\
\end{align}

Then, the energy is given by

\begin{align}
\mathcal{E} &= \frac{1}{8\pi} \int d^{3}r (\mathbf{E}^{2}+\mathbf{B}^{2}) \\
&=\frac{1}{4\pi} \int d^{3}r\; \mathbf{E}^{2}\\
&=\frac{1}{4\pi} \int d^{3}r \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\omega^{2}}{c^{2}V}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\
&=\frac{1}{4\pi} \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\omega^{2}}{c^{2}}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\omega^{2}}{2 \pi c^{2}} \left|A_{\mathbf{k}\boldsymbol{\lambda}}\right|^2.
\end{align}

Let's define the following quantities:

\begin{align}

Q_{\mathbf{k}\boldsymbol{\lambda}}&=\frac{1}{\sqrt{4\pi}c}(A_{\mathbf{k}\boldsymbol{\lambda}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*})\\

P_{\mathbf{k}\boldsymbol{\lambda}}&=\frac{-i\omega}{\sqrt{4\pi}c}(A_{\mathbf{k}\boldsymbol{\lambda}}-A_{\mathbf{k}\boldsymbol{\lambda}}^{*})\\

\end{align}

Notice that

\begin{align}

\omega^{2} Q_{\mathbf{k}\boldsymbol{\lambda}}^{2}&=\frac{\omega^{2}}{4\pi c^{2}}(A_{\mathbf{k}\boldsymbol{\lambda}}^{2}+A_{\mathbf{k}\boldsymbol{\lambda}}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*2})\\


P_{\mathbf{k}\boldsymbol{\lambda}}^{2}&=\frac{-\omega^{2}}{4\pi c^{2}}(A_{\mathbf{k}\boldsymbol{\lambda}}^{2}-A_{\mathbf{k}\boldsymbol{\lambda}}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}^{*}-A_{\mathbf{k}\boldsymbol{\lambda}}^{*}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*2})\\

\end{align}

Adding these two expressions,

\begin{align}

P_{\mathbf{k}\boldsymbol{\lambda}}^{2}+\omega^{2} Q_{\mathbf{k}\boldsymbol{\lambda}}^{2}&=\frac{\omega^{2}}{2\pi c^{2}}(A_{\mathbf{k}\boldsymbol{\lambda}}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}\cdot A_{\mathbf{k}\boldsymbol{\lambda}})\\
&=\frac{\omega^{2}}{\pi c^{2}}\left| A_{\mathbf{k}\boldsymbol{\lambda}}\right|^2.
\end{align}
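The algebra above can be verified symbolically. Writing A_{\mathbf{k}\boldsymbol{\lambda}} = x + iy with real x and y, the claim is P^{2}+\omega^{2}Q^{2}=(\omega^{2}/\pi c^{2})|A|^{2}. A minimal sketch using sympy (the variable names are illustrative):

```python
import sympy as sp

# Real and imaginary parts of the mode amplitude A, plus frequency and c
x, y = sp.symbols('x y', real=True)
omega, c = sp.symbols('omega c', positive=True)
A = x + sp.I * y            # the complex amplitude A_{k,lambda}
Ac = sp.conjugate(A)

# Canonical variables as defined in the text
Q = (A + Ac) / (sp.sqrt(4 * sp.pi) * c)
P = -sp.I * omega * (A - Ac) / (sp.sqrt(4 * sp.pi) * c)

# P^2 + omega^2 Q^2 should equal (omega^2 / (pi c^2)) |A|^2
lhs = sp.simplify(P**2 + omega**2 * Q**2)
rhs = omega**2 * (x**2 + y**2) / (sp.pi * c**2)
print(sp.simplify(lhs - rhs))  # -> 0
```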

Then the energy (in this case the Hamiltonian) can be written as

\begin{align}

H=\frac{1}{2}\sum_{\mathbf{k}\boldsymbol{\lambda}} [P_{\mathbf{k}\boldsymbol{\lambda}}^{2}+\omega^{2} Q_{\mathbf{k}\boldsymbol{\lambda}}^{2}]
\end{align}

This has the same form as the familiar Hamiltonian for a harmonic oscillator.

Note that,

\begin{align}
\frac{\partial H_{cl}}{\partial Q_{k, \lambda}} &= - \dot{P}_{k, \lambda} \\
\frac{\partial H_{cl}}{\partial P_{k, \lambda}} &= \dot{Q}_{k, \lambda}
\end{align}

Thus the newly defined variables P_{k, \lambda} \! and Q_{k, \lambda}\! are canonically conjugate.

We see that the classical radiation field behaves as a collection of harmonic oscillators, indexed by \mathbf{k} and \boldsymbol{\lambda}, whose frequencies depend on |\mathbf{k}|.

From Classical Mechanics to Quantum mechanics for Radiation

As usual, we proceed with canonical quantization:


\begin{align}
P_{\mathbf{k}\boldsymbol{\lambda}} & \to \mathbf{P}_{\mathbf{k}\boldsymbol{\lambda}}\\
Q_{\mathbf{k}\boldsymbol{\lambda}} & \to \mathbf{Q}_{\mathbf{k}\boldsymbol{\lambda}}\\
\end{align}

\begin{align}
A_{\mathbf{k}\boldsymbol{\lambda}} & \to \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\;\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\; , \; \left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}},\mathbf{a}^{\dagger}_{\mathbf{k'}\boldsymbol{\lambda'}}\right]=\delta_{\mathbf{kk'}}\delta_{\boldsymbol{\lambda \lambda'}}\\
\end{align}


where the latter are quantum operators. The Hamiltonian can then be written as



\mathcal{H}_{rad} =\sum_{\mathbf{k}\boldsymbol{\lambda}}\hbar \omega_{\mathbf{k}} \left(\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} \mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}+\frac{1}{2}\right)
=\frac{1}{2}\sum_{\mathbf{k}\boldsymbol{\lambda}}\hbar \omega_{\mathbf{k}}
\left(\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} \mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}+\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}} \mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\right)
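This oscillator structure is easy to check numerically. Below is a sketch with truncated ladder-operator matrices (a finite N-dimensional Fock space, so the commutator [\mathbf{a},\mathbf{a}^{\dagger}]=1 holds everywhere except at the artificial top level):

```python
import numpy as np

N = 8                                   # Fock-space truncation
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)        # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.T                                # creation operator a^dagger

# [a, a^dagger] = identity (except the last diagonal entry, a truncation artifact)
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # -> True

# H = hbar*omega*(a^dagger a + 1/2) has eigenvalues hbar*omega*(n + 1/2)
hbar_omega = 1.0
H = hbar_omega * (ad @ a + 0.5 * np.eye(N))
print(np.diag(H))                       # -> [0.5 1.5 2.5 ... 7.5]
```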


The classical potential can be written as



\underbrace{\mathbf{A}(\mathbf{r},t)=\sum_{\mathbf{k}\boldsymbol{\lambda}} \left[A_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]}_\textrm{Classical Vector potential}\;\;\;\longrightarrow\;\;\; \underbrace{\mathbf{A}_{\mbox{int}}(\mathbf{r},t)=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]}_\textrm{Quantum Operator}


Notice that the quantum operator is time dependent. Therefore we can identify it as the field operator in the interaction representation (hence the label int). Let's find the Schrödinger representation of the field operator:


\begin{align}

\mathbf{A}(\mathbf{r})&=e^{-\frac{i}{\hbar}\mathcal{H}_{rad}t}\mathbf{A}_{int}(\mathbf{r},t)e^{\frac{i}{\hbar}\mathcal{H}_{rad}t}\\

&=e^{-\frac{i}{\hbar}\mathcal{H}_{rad}t}\left[\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]\right]e^{\frac{i}{\hbar}\mathcal{H}_{rad}t}\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\left[e^{-\frac{i}{\hbar}\mathcal{H}_{rad}t} \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}e^{\frac{i}{\hbar}\mathcal{H}_{rad}t}\right] \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\left[ e^{-\frac{i}{\hbar}\mathcal{H}_{rad}t}\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} e^{\frac{i}{\hbar}\mathcal{H}_{rad}t}\right] \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}e^{i\omega t}\right] \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\left[ \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} e^{-i\omega t}\right] \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{V}}\right]\\

\end{align}


COMMENTS

  • The meaning of \mathcal{H}_{rad} is as follows: the classical electromagnetic field is quantized. This quantum field exists even if there is no source. This means that the vacuum is a physical object which can interact with matter. In classical mechanics this doesn't occur, because fields are created by sources.
  • Due to this, the vacuum has to be treated as a quantum dynamical object. Therefore we can assign a quantum state to this object.
  • An elementary excitation of this quantum field is called a photon (the quantum of the electromagnetic field).


ANALYSIS OF THE VACUUM AT GROUND STATE

Let's call |0\rangle the ground state of the vacuum. The following can be stated:

  • The energy of the ground state is infinite. To see this, notice that for the ground state we have \begin{align}
\mathcal{H}_{rad}&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \frac{1}{2} \hbar \omega_{\mathbf{k}}=\infin .
\end{align}
  • The state \;\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle represents an excited state of the vacuum with energy \hbar \omega_{\mathbf{k}}(1+1/2). This means that the extra energy \hbar \omega_{\mathbf{k}}\! is carried by a single photon. Therefore \mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} represents the creation operator of a single photon with energy \hbar \omega_{\mathbf{k}}. By the same reasoning, \mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}} represents the annihilation operator of a single photon.
  • Consider the following normalized state of the vacuum: \frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle. At first glance we may think that \mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} creates a single photon with energy 2\hbar \omega_{\mathbf{k}}. However, this interpretation is forbidden in our model. Instead, this operator creates two photons, each carrying the energy \hbar \omega_{\mathbf{k}}.

    Proof

    Suppose that \mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} creates a single photon with energy 2\hbar \omega_{\mathbf{k}}. Then there must exist an operator \mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}} that creates a photon with the same energy 2\hbar \omega_{\mathbf{k}}. This means that

    
\frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle\overset{\underset{\mathrm{?}}{}}{=} \mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}|0\rangle \;\;\;\longrightarrow\;\;\;\frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} \overset{\underset{\mathrm{?}}{}}{=} \mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\;\;\;\longrightarrow\;\;\;\frac{1}{\sqrt{2}}\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}} \overset{\underset{\mathrm{?}}{}}{=} \mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}}

    Let's see if this works. Using the commutation relations, we have

    
\left[	\underbrace{\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}},\mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\right]=0

    Replacing the underbraced part by \mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}} gives


    
\left[\mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}},\mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\right]=0


    Since \left[\mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}},\mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\right]=1, the initial assumption is wrong, namely:

    
\frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle \ne \mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}|0\rangle

    This means that \mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} cannot create a single photon with energy 2\hbar \omega_{\mathbf{k}}. Instead it creates two photons, each with energy \hbar \omega_{\mathbf{k}}\!.
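The two-photon interpretation can also be confirmed numerically: in a truncated Fock space, the normalized state \frac{1}{\sqrt{2}}(\mathbf{a}^{\dagger})^{2}|0\rangle is an eigenstate of the number operator \mathbf{a}^{\dagger}\mathbf{a} with eigenvalue 2, i.e. a two-photon state. A sketch:

```python
import numpy as np

N = 6
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)    # annihilation operator on a truncated Fock space
ad = a.T                            # creation operator
num = ad @ a                        # number operator a^dagger a

vac = np.zeros(N)
vac[0] = 1.0                        # vacuum |0>
psi = (ad @ ad @ vac) / np.sqrt(2)  # (1/sqrt(2)) a^dagger a^dagger |0>

print(psi @ psi)                    # -> 1.0  (properly normalized)
print(num @ psi - 2 * psi)          # -> zero vector: eigenstate with eigenvalue 2
```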


    ALGEBRA OF VACUUM STATES

    A general vacuum state can be written as

    
|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}},n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}},...,n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}},...\rangle

    where n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}} is the number of photons in the state \mathbf{k_{i}} \boldsymbol{\lambda_{i}} which exist in the vacuum. Using our knowledge of harmonic oscillator we conclude that this state can be written as

    
|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}},n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}},...,n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}},...\rangle=\prod_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}}\frac{(\mathbf{a}^{\dagger}_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}})^{n_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}}}}{\sqrt{n_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}}!}}|0\rangle

    Also it is clear that

    
\mathbf{a}^{\dagger}_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}},n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}},...,n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}},...\rangle=\sqrt{n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}+1}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}},n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}},...,n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}+1,...\rangle

    
\mathbf{a}_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}},n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}},...,n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}},...\rangle=\sqrt{n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}},n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}},...,n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}-1,...\rangle .
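These matrix elements can be verified with the same truncated ladder matrices: applying \mathbf{a}^{\dagger} to the basis vector |n\rangle yields \sqrt{n+1}\,|n+1\rangle, and applying \mathbf{a} yields \sqrt{n}\,|n-1\rangle. A sketch:

```python
import numpy as np

N = 10
idx = np.arange(N)
a = np.diag(np.sqrt(idx[1:]), k=1)   # a|n> = sqrt(n) |n-1>
ad = a.T                             # a^dagger|n> = sqrt(n+1) |n+1>

def basis(n, dim=N):
    """Number state |n> as a unit vector."""
    v = np.zeros(dim)
    v[n] = 1.0
    return v

n = 4
print(np.allclose(ad @ basis(n), np.sqrt(n + 1) * basis(n + 1)))  # -> True
print(np.allclose(a @ basis(n), np.sqrt(n) * basis(n - 1)))       # -> True
```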

    Matter + Radiation

    Hamiltonian of Single Particle in Presence of Radiation (Gauge Invariance)

    The Hamiltonian of a single charged particle in the presence of E&M potentials is given by

    
\mathcal{H}=\frac{\left[\vec{p}-\frac{e}{c}\vec{A}(\vec{r},t)\right]^{2}}{2m}+e\phi (\vec{r},t) + V(\vec{r},t),

    where the vector potential in the first term and the scalar potential in the second term describe the external E&M interaction, while the third term contains all other potentials.

    The time dependent Schrödinger equation is

    
i\hbar \frac{\partial\psi (\vec{r},t)}{\partial t}=\left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}(\vec{r},t)\right]^{2}}{2m}+e\phi (\vec{r},t) + V(\vec{r},t)
\right]\psi(\vec{r},t)

    Since a gauge transformation,

     
A'_{\mu}=A_{\mu}-\partial_{\mu} \chi ,

    leaves the E&M fields invariant, we expect that |\psi|^{2} \!, which is an observable, is also gauge independent. Since |\psi|^{2} \! is independent of the phase choice, we can relate this phase to the E&M gauge transformation. In other words, the phase transformation together with the E&M gauge transformation must leave the Schrödinger equation invariant. This phase transformation is given by:

    
\psi'(\vec{r},t)=e^{i\frac{e}{\hbar c}\chi(\vec{r},t)}\psi(\vec{r},t)

    Let's see this in detail. We want to see if:

     
\begin{align}
i\hbar \frac{\partial\psi' (\vec{r},t)}{\partial t}
& =\left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}'(\vec{r},t)\right]^{2}}{2m}+e\phi '(\vec{r},t) + V(\vec{r},t)
\right]\psi'(\vec{r},t)  \\
& = \left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}(\vec{r},t)\right]^{2}}{2m}+e\phi (\vec{r},t) + V(\vec{r},t)
\right]\psi(\vec{r},t)  
= i\hbar \frac{\partial\psi(\vec{r},t)}{\partial t}
\end{align}

    Let's put the transformations:

    \begin{align} 
\psi'(\vec{r},t)&=e^{i\frac{e}{\hbar c}\chi(\vec{r},t)}\psi(\vec{r},t) \\
\vec{A}'(\vec{r},t)&=\vec{A}(\vec{r},t)+\vec{\nabla} \chi(\vec{r},t) \\
\phi'(\vec{r},t)&=\phi(\vec{r},t)-\frac{1}{c}\frac{\partial\chi(\vec{r},t)  }{\partial t}
\end{align}
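It is worth recalling why this is the right set of transformations: they leave the physical fields \vec{E}=-\vec{\nabla}\phi-\frac{1}{c}\frac{\partial \vec{A}}{\partial t} and \vec{B}=\vec{\nabla}\times\vec{A} unchanged. A symbolic check of the x-component of \vec{E} (a sketch; the function names are illustrative):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
A = sp.Function('A')(x, t)       # x-component of the vector potential
phi = sp.Function('phi')(x, t)   # scalar potential
chi = sp.Function('chi')(x, t)   # gauge function

# Gauge-transformed potentials, as in the text
A_p = A + sp.diff(chi, x)               # A' = A + grad(chi)
phi_p = phi - sp.diff(chi, t) / c       # phi' = phi - (1/c) d(chi)/dt

# x-component of the electric field before and after the transformation
E = -sp.diff(phi, x) - sp.diff(A, t) / c
E_p = -sp.diff(phi_p, x) - sp.diff(A_p, t) / c

print(sp.simplify(E_p - E))  # -> 0: E is gauge invariant
```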

    Replacing

    \begin{align} 
i\hbar \left[\frac{ie}{\hbar c} \frac{\partial \chi}{\partial t} e^{i\frac{e}{\hbar c}\chi}\psi + e^{i\frac{e}{\hbar c}\chi} \frac{\partial \psi}{\partial t} \right] &=
\left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}'\right]^{2}}{2m}+e\phi -\frac{e}{c} \frac{\partial \chi}{\partial t} + V \right]e^{i\frac{e}{\hbar c}\chi}\psi\\ 

i\hbar e^{i\frac{e}{\hbar c}\chi} \frac{\partial \psi}{\partial t} &=
\left[\frac{\left[\vec{p}-\frac{e}{c} \vec{A}'\right]^{2}}{2m}+e\phi + V \right]e^{i\frac{e}{\hbar c}\chi}\psi\\ 

i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]^{2}e^{i\frac{e}{\hbar c}\chi} +e\phi + V \right]\psi\\ 


i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]e^{i\frac{e}{\hbar c}\chi}e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]e^{i\frac{e}{\hbar c}\chi} +e\phi + V \right]\psi\\ 

i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]e^{i\frac{e}{\hbar c}\chi}\right) ^{2} +e\phi + V \right]\psi\\ 

i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(e^{-i\frac{e}{\hbar c}\chi}\left[\frac{\hbar}{i}\vec{\nabla}-\frac{e}{c}\vec{A}-\frac{e}{c}\vec{\nabla} \chi\right]e^{i\frac{e}{\hbar c}\chi}\right) ^{2} +e\phi + V \right]\psi\\ 

i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(e^{-i\frac{e}{\hbar c}\chi}e^{i\frac{e}{\hbar c}\chi}\left[\frac{\hbar}{i} \frac{ie}{\hbar c}\nabla \chi + \frac{\hbar}{i}\vec{\nabla}-\frac{e}{c}\vec{A}-\frac{e}{c}\vec{\nabla} \chi\right]\right) ^{2} +e\phi + V \right]\psi\\ 

i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(\frac{\hbar}{i}\vec{\nabla}-\frac{e}{c}\vec{A} \right) ^{2} +e\phi + V \right]\psi\\ 

\end{align}

    Let's write the Hamiltonian in the following way

     
\mathcal{H}=\underbrace{\frac{\vec{p}^2}{2m}+V}_{\mathcal{H}_{0}} \underbrace{-\frac{e}{2mc}\left(\vec{p}\cdot\vec{A}+ \vec{A}\cdot\vec{p} \right)+\frac{e^{2}}{2mc^{2}}A^{2}+e\phi}_{\mathcal{H}_{int}}

    where \mathcal{H}_{0} is the Hamiltonian without external fields (say, the hydrogen atom) and \mathcal{H}_{int} is the interaction part with the radiation. In general  \vec{p} \cdot \vec{A} \neq \vec{A}\cdot \vec{p}, because  \vec{p}\cdot \vec{A} - \vec{A} \cdot \vec{p} = - i \hbar \vec{\nabla} \cdot \vec{A}. However,  \vec{p} \cdot \vec{A} = \vec{A} \cdot \vec{p} in the transverse gauge,  \vec{\nabla} \cdot \vec{A} = 0.
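The operator identity \vec{p}\cdot \vec{A} - \vec{A} \cdot \vec{p} = - i \hbar \vec{\nabla} \cdot \vec{A} is easy to verify on a test function; for a divergence-free \vec{A} both orderings agree. A sketch using a hypothetical transverse field \vec{A}=(-y,x,0):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
psi = sp.Function('psi')(x, y, z)        # arbitrary test wavefunction

A = sp.Matrix([-y, x, 0])                # sample field with div A = 0 (transverse gauge)
coords = [x, y, z]

# p = -i*hbar*grad; compute (p.A - A.p) acting on psi
p_dot_A = sum(-sp.I * hbar * sp.diff(A[i] * psi, coords[i]) for i in range(3))
A_dot_p = sum(A[i] * (-sp.I * hbar * sp.diff(psi, coords[i])) for i in range(3))

div_A = sum(sp.diff(A[i], coords[i]) for i in range(3))
print(div_A)                              # -> 0
print(sp.simplify(p_dot_A - A_dot_p))     # -> 0, so p.A = A.p for this gauge
```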

    Example: electron on helium surface

    Hamiltonian of Multiple Particles in Presence of Radiation

    If we have a system of  N \! particles, we have the following Hamiltonian

     
\mathcal{H}=\sum_{i=1}^N \frac{\left[\vec{p}_{i}-\frac{e_{i}}{c}\vec{A}(\vec{r}_{i},t)\right]^{2}}{2m_{i}} +\sum_{i=1}^N e_{i}\phi(\vec{r}_{i},t) + V(\vec{r}_{1}...\vec{r}_{N}),

    where e_{i} \! and m_{i} \! are the charge and the mass of the i-th particle, respectively,  \vec{r}_i and  \vec{p}_i are its coordinate and momentum operators, and  V \! contains all the other interaction terms.

    Let's assume all particles have the same mass and charge. Then we have

    \begin{align} 
\mathcal{H}&=\sum_{i=1}^N \left[\frac{\vec{p}_{i}^{2}}{2m}-\frac{e}{2mc}\left(\vec{p}_{i} \cdot \vec{A}(\vec{r}_{i},t)+\vec{A}(\vec{r}_{i},t) \cdot \vec{p}_{i} \right) + \frac{e^{2}}{2mc^{2}} \vec{A}^{2}(\vec{r}_{i},t)\right] 
+e\sum_{i=1}^N \phi(\vec{r}_{i},t) + V(\vec{r}_{1}...\vec{r}_{N})\\
&=\underbrace{\sum_{i=1}^N \frac{\vec{p}_{i}^{2}}{2m} + V(\vec{r}_{1}...\vec{r}_{N})}_{\mathcal{H}_{0}} \\
&{\;\;\;\;}\underbrace{+\sum_{i=1}^N -\frac{e}{2mc}\left(\vec{p}_{i} \cdot \vec{A}(\vec{r}_{i},t)+\vec{A}(\vec{r}_{i},t)\cdot \vec{p}_{i}  \right)
+\sum_{i=1}^N \frac{e^{2}}{2mc^{2}} \vec{A}^{2}(\vec{r}_{i},t)
+e\sum_{i=1}^N \phi(\vec{r}_{i},t)}_{\mathcal{H}_{int}}
\end{align}

    Using delta function operator \delta (\vec{r}-\vec{r}_{i}) we can write

    \begin{align} 
\vec{A}(\vec{r}_{i},t)&=\int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \vec{A}(\vec{r},t)\\
\phi(\vec{r}_{i},t)&=\int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \phi(\vec{r},t)\\
\end{align}

    Then

    \begin{align} 
\mathcal{H}&=\mathcal{H}_{0}
+\sum_{i=1}^N -\frac{e}{2mc}\left(\vec{p}_{i} \cdot \int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \vec{A}(\vec{r},t)+\int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \vec{A}(\vec{r},t) \cdot \vec{p}_{i} \right)\\
&\;\;\;\;\;\;\;\;\;+\sum_{i=1}^N \frac{e^{2}}{2mc^{2}} \int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) A(\vec{r},t)^{2}
+e\sum_{i=1}^N \int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \phi(\vec{r},t)\\

&=\mathcal{H}_{0}
-\int d^{3}{r}\;\frac{e}{c}\underbrace{\left[ \frac{1}{2}\sum_{i=1}^N \left[\frac{\vec{p}_{i}}{m} \delta (\vec{r}-\vec{r}_{i})+\delta (\vec{r}-\vec{r}_{i}) \frac{\vec{p}_{i}}{m} \right]\right]}_{\vec{j}(\vec{r})} \vec{A}(\vec{r},t)\\

&\;\;\;\;\;\;\;\;\;+\int d^{3}{r}\; \frac{e^{2}}{2mc^{2}} \underbrace{\left[ \sum_{i=1}^N \  \delta (\vec{r}-\vec{r}_{i}) \right]}_{\rho (\vec{r})} A(\vec{r},t)^{2}
+e\int d^{3}{r}\; \underbrace{\left[\sum_{i=1}^N  \delta (\vec{r}-\vec{r}_{i}) \right]}_{\rho (\vec{r})} \phi(\vec{r},t)\\

&=\mathcal{H}_{0}
+\underbrace{\int d^{3}{r}\; \left[-\frac{e}{c} \vec{j}(\vec{r})\cdot \vec{A}(\vec{r},t)+\frac{e^{2}}{2mc^{2}} \rho (\vec{r}) \vec{A}^{2}(\vec{r},t)
+e\rho (\vec{r})\phi(\vec{r},t)\right]}_{\mathcal{H}_{int}}\\

&=\mathcal{H}_{0}+\mathcal{H}_{int}

\end{align}

    In above equations,

    • \rho (\vec{r})=\sum_{i=1}^N \delta (\vec{r}-\vec{r}_{i}) can be interpreted as the particle density operator.
    •  \vec{A}\left(\vec{r},t\right) and  \phi\left(\vec{r},t\right) are no longer operators because all the position operators,  \vec{r}_i \!, are in  \rho\left(\vec{r}\right) \!.
    • \vec{j}(\vec{r}) is called the paramagnetic current. It is just one piece of the total current operator  \vec{J}(\vec{r}). Explicitly, we have
      \begin{align}
\vec{J}(\vec{r})&=\sum_{i=1}^N \frac{1}{2}\left[\vec{v}_{i}(\vec{p}_{i},\vec{r}_{i})\delta (\vec{r}-\vec{r}_{i}) + \delta (\vec{r}-\vec{r}_{i})\vec{v}_{i}(\vec{p}_{i},\vec{r}_{i}) \right]\;\;\;\leftarrow\;\;\;\vec{v}_{i}(\vec{p}_{i},\vec{r}_{i})=\frac{\vec{p}_{i}}{m}-\frac{e}{mc}\vec{A}(\vec{r}_{i},t)\\

&=\sum_{i=1}^N \frac{1}{2}\left[\frac{\vec{p}_{i}}{m}\delta (\vec{r}-\vec{r}_{i}) + \delta (\vec{r}-\vec{r}_{i})\frac{\vec{p}_{i}}{m}-\frac{2e}{mc}  \vec{A}(\vec{r}_{i},t)\delta (\vec{r}-\vec{r}_{i})\right]\\

&=\vec{j}(\vec{r})-\frac{e}{mc}\sum_{i=1}^N  \vec{A}(\vec{r}_{i},t)\delta (\vec{r}-\vec{r}_{i})\;\;\;\leftarrow\;\;\;\vec{A}(\vec{r}_{i},t)\delta (\vec{r}-\vec{r}_{i})=\vec{A}(\vec{r},t)\delta (\vec{r}-\vec{r}_{i})\\

&=\underbrace{\vec{j}(\vec{r})}_{paramagnetic}\underbrace{-\frac{e}{mc}  \vec{A}(\vec{r},t) \rho (\vec{r})}_{diamagnetic}

\end{align}

    Light Absorption and Induced Emission

    Generally, if the electric fields described by  \mathbf{A} \! are small compared with atomic fields, then \mathbf{j}(\mathbf{r})\cdot \mathbf{A}(\mathbf{r},t)\gg\rho \mathbf{A}^{2} and we can neglect the  \rho \mathbf{A}^2 \! term. Therefore, using the transverse gauge we can approximate the interaction Hamiltonian as

    
\mathcal{H}_{int}=
\int d^{3}{r}\; \left[-\frac{e}{c} \mathbf{j}(\mathbf{r})\cdot \mathbf{A}(\mathbf{r},t)\right]

    Let's write \mathbf{A}(\mathbf{r},t) using the Fourier expansion as described above:

    \begin{align} 
\mathcal{H}_{int}&=-
\int d^{3}{r}\; \left[\frac{e}{c} \mathbf{j}(\mathbf{r}) \cdot \sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left\{\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right\}\right]\\

&=-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}}\int d^{3}{r}\;  \mathbf{j}(\mathbf{r})\cdot \left\{  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right\}\\

&=-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}} \left[  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\underbrace{\left\{\int d^{3}{r}\;  \mathbf{j}(\mathbf{r})e^{i\mathbf{k}\cdot\mathbf{r}} \right\}}_{\mathbf{j}_{-\mathbf{k}}}\cdot \boldsymbol{\lambda}e^{-i\omega t}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\underbrace{\left\{\int d^{3}{r}\;  \mathbf{j}(\mathbf{r})e^{-i\mathbf{k}\cdot\mathbf{r}} \right\}}_{\mathbf{j}_{\mathbf{k}}}\cdot \boldsymbol{\lambda}^{*} e^{i\omega t}\right]\\

&=-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}} \left[  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}e^{-i\omega t}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*} e^{i\omega t}\right],\\

\end{align}

    where

    \begin{align} 
\mathbf{j}_{\mp\mathbf{k}}
&=\int d^{3}{r}\;  \mathbf{j}(\mathbf{r})e^{\pm i\mathbf{k}\cdot\mathbf{r}}\\

&=\int d^{3}{r}\;  \frac{1}{2}\sum_{i}
\left\{\frac{\boldsymbol{p}_{i}}{m}\delta(\boldsymbol{r}-\boldsymbol{r}_{i})+\delta(\boldsymbol{r}-\boldsymbol{r}_{i})\frac{\boldsymbol{p}_{i}}{m}\right\}
e^{\pm i\mathbf{k}\cdot\mathbf{r}} \\

&=\frac{1}{2m} \sum_{i}
\left\{\boldsymbol{p}_{i}\left(\int d^{3}{r}\;\delta(\boldsymbol{r}-\boldsymbol{r}_{i})e^{\pm i\mathbf{k}\cdot\mathbf{r}}\right)+\left(\int d^{3}{r}\;\delta(\boldsymbol{r}-\boldsymbol{r}_{i})e^{\pm i\mathbf{k}\cdot\mathbf{r}} \right) \boldsymbol{p}_{i}\right\} \\

&=\frac{1}{2m} \sum_{i}
\left\{\boldsymbol{p}_{i}\,e^{\pm i\mathbf{k}\cdot\mathbf{r}_{i}}+e^{\pm i\mathbf{k}\cdot\mathbf{r}_{i}}\,\boldsymbol{p}_{i}\right\}.

\end{align}

    Let's use the golden rule to calculate transition rates for this time-dependent interaction. The evolution of the state in the first approximation is

    \begin{align}
 |\psi(t)\rangle = |I\rangle+\frac{1}{i\hbar}\int^{t}_{t_{0}}dt'\;e^{\frac{i}{\hbar}\mathcal{H}_{0}t'}\mathcal{H}_{int}e^{\eta t'}e^{-\frac{i}{\hbar}\mathcal{H}_{0}t'}|I\rangle
\end{align}

    where |I\rangle is the initial state and e^{\eta t'} \! is the usual adiabatic switching factor. The transition amplitude to a state |F\rangle is

    \begin{align}
\langle F|\psi(t)\rangle = 
\langle F|I\rangle+\frac{1}{i\hbar}\int^{t}_{t_{0}}dt'\;\langle F|e^{\frac{i}{\hbar}\mathcal{H}_{0}t'}\mathcal{H}_{int}e^{\eta t'}e^{-\frac{i}{\hbar}\mathcal{H}_{0}t'}|I\rangle
\end{align}

    Since |F\rangle and |I\rangle are orthogonal eigenstates of \mathcal{H}_{0} (so that \langle F|I\rangle=0), we have

    \begin{align}
\langle F|\psi(t)\rangle &=\frac{1}{i\hbar}\int^{t}_{t_{0}}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{0})+\eta ]t'}\langle F|\mathcal{H}_{int}|I\rangle\\


&=\frac{1}{i\hbar}\int^{t}_{t_{0}}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{0})+\eta ]t'}\langle F|
-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}} \left[  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}e^{-i\omega t'}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*} e^{i\omega t'}\right]|I\rangle\\


&=-\frac{1}{i\hbar}\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega V}}
\left[             
\left\{\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle            
\int^{t}_{t_{0}=-\infin}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{0}-\hbar \omega )+\eta ]t'}
\right\}+
\left\{\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle                  
\int^{t}_{t_{0}=-\infin}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{0}+\hbar \omega )+\eta ]t'}
\right\}
\right]\\

&=-\frac{1}{i\hbar}\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega V}}
\left[             
\left\{\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle            
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{0}-\hbar \omega )+\eta ]t}}{\frac{i}{\hbar}(E_{n}-E_{0}-\hbar \omega )+\eta }
\right\}+
\left\{\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle                  
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{0}+\hbar \omega )+\eta ]t}}{\frac{i}{\hbar}(E_{n}-E_{0}+\hbar \omega )+\eta }
\right\}
\right]\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega V}}
\left[             
\left\{\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle            
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{0}-\hbar \omega )+\eta ]t}}{(E_{n}-E_{0}-\hbar \omega )-i\eta \hbar }
\right\}+
\left\{\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle                  
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{0}+\hbar \omega )+\eta ]t}}{(E_{n}-E_{0}+\hbar \omega )-i\eta\hbar }
\right\}
\right]

\end{align}

    The transition probability is given by

    \begin{align}
P_{0 \rightarrow n}&=|\langle F|\psi(t)\rangle|^{2}\\ 

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} e^{2}\frac{2\pi \hbar }{\omega V}
\left[             
\left\{\left|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle\right|^{2}            
\frac{e^{2 \eta t}}{(E_{n}-E_{0}-\hbar \omega )^{2}+\eta^{2} \hbar^{2} }
\right\}+
\left\{\left|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle \right|^2 
\frac{e^{2 \eta t}}{(E_{n}-E_{0}+\hbar \omega )^{2}+\eta^{2} \hbar^{2}}
\right\}
\right],

\end{align}

    where all oscillatory terms have been averaged to zero. Taking a time derivative, we obtain the transition rate

    \begin{align}

\Gamma_{0 \rightarrow n}&=\frac{dP_{0 \rightarrow n}}{dt}\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} e^{2}\frac{2\pi \hbar }{\omega V}
\left[             
\left\{|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}            
\frac{2 \eta e^{2 \eta t}}{(E_{n}-E_{o}-\hbar \omega )^{2}+\eta^{2} \hbar^{2} }
\right\}+
\left\{|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle |^{2}                 
\frac{2 \eta e^{2 \eta t}}{(E_{n}-E_{o}+\hbar \omega )^{2}+\eta^{2} \hbar^{2}}
\right\}
\right]\\

&\overset{\underset{\mathrm{\eta \rightarrow 0 }}{}}{=}\sum_{\mathbf{k}\boldsymbol{\lambda}} e^{2}\frac{2\pi \hbar }{\omega V}
\left[             
\left\{|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}            
\frac{2\pi}{\hbar}\delta (E_{n}-E_{o}-\hbar \omega)
\right\}+
\left\{|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle |^{2}                    
\frac{2\pi}{\hbar}\delta (E_{n}-E_{o}+\hbar \omega)
\right\}
\right]\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \frac{4\pi^{2} e^{2} }{\omega V}
\left[             
\left\{|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}            
\delta (E_{n}-E_{o}-\hbar \omega)
\right\}+
\left\{|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle|^{2}                   
\delta (E_{n}-E_{o}+\hbar \omega)
\right\}
\right]\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}} 
\left[             

\underbrace{
\left\{\frac{4\pi^{2} e^{2} }{\omega V}|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}            
\delta (E_{n}-E_{o}-\hbar \omega)
\right\}
}_{\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}} }

+

\underbrace{
\left\{\frac{4\pi^{2} e^{2} }{\omega V}|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle|^{2}                   
\delta (E_{n}-E_{o}+\hbar \omega)
\right\}
}_{\Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}} }
\right]\\

&=\sum_{\mathbf{k}\boldsymbol{\lambda}}  \left(\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}}+ \Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\right)

\end{align}
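The \eta \rightarrow 0 step above uses the Lorentzian representation of the delta function, \frac{2\eta}{(\Delta E)^{2}+\eta^{2}\hbar^{2}} \rightarrow \frac{2\pi}{\hbar}\delta (\Delta E). A quick numerical sketch (with \hbar = 1) showing that the Lorentzian's area is already 2\pi/\hbar for small but finite \eta:

```python
import numpy as np

hbar = 1.0
eta = 0.01                                # slow switching rate, small but finite
x = np.linspace(-20.0, 20.0, 400_001)     # plays the role of E_n - E_0 - hbar*omega

# Lorentzian factor appearing in the transition rate before the eta -> 0 limit
lorentzian = 2 * eta / (x**2 + eta**2 * hbar**2)

# Its area approaches 2*pi/hbar, the weight of (2*pi/hbar) * delta(x)
integral = np.sum(lorentzian) * (x[1] - x[0])
print(integral)   # close to 2*pi/hbar ~ 6.28
```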

    The above equation says that the transition rate between two states is composed of two contributions: absorption \Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}} and induced emission \Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}. Let's analyze the matrix elements between states.

    Absorption

    Let's suppose that initial and final states are:

    \begin{align}
|I\rangle&=|0 \rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,
N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle \equiv |0 ; N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,
N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle \\
|F\rangle&=|n \rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,
M_{\mathbf{k}\boldsymbol{\lambda}},...\rangle \equiv |n ; N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,
M_{\mathbf{k}\boldsymbol{\lambda}},...\rangle ,
\end{align}

    where |0\rangle \! and  |n\rangle \! are the initial and final states of \mathcal{H}_{0} (say a hydrogen atom) with energies E_{0}<E_{n} \! and  |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle and  |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,M_{\mathbf{k}\boldsymbol{\lambda}},...\rangle are the initial and final states of the electromagnetic field.

    The matrix element of \Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}} is given by:

    \begin{align}
\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle

&=\langle n|\otimes \langle N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,M_{\mathbf{k}\boldsymbol{\lambda}},...|\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}\right]|0\rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle\\

&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\langle N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,M_{\mathbf{k}\boldsymbol{\lambda}},...|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}|N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle\\

&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\langle M_{\mathbf{k}\boldsymbol{\lambda}}|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}|N_{\mathbf{k}\boldsymbol{\lambda}}\rangle\\

&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\sqrt{N_{\mathbf{k}\boldsymbol{\lambda}}}\langle M_{\mathbf{k}\boldsymbol{\lambda}}|N_{\mathbf{k}\boldsymbol{\lambda}}-1\rangle\\

&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\sqrt{N_{\mathbf{k}\boldsymbol{\lambda}}}\delta_{M_{\mathbf{k}\boldsymbol{\lambda}},N_{\mathbf{k}\boldsymbol{\lambda}}-1}\\      
\end{align}

    The last line shows how the absorption process works: the field loses a single photon from mode \mathbf{k}\boldsymbol{\lambda}. Namely, the final state is given by:

    \begin{align}
|F\rangle 
&=|n\rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}}-1,...\rangle \\ 
&=|n ; N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}}-1,...\rangle
\end{align}

    Finally, we can write the transition rate for absorption as follows:

    \begin{align}
\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}}

&=\frac{4\pi^{2} e^{2} }{\omega V}
\left|\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\sqrt{N_{\mathbf{k}\boldsymbol{\lambda}}}\right|^{2}            
\delta (E_{n}-E_{0}-\hbar \omega)\\

&=\frac{4\pi^{2} e^{2} }{\omega V}
\left|\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\right|^{2}N_{\mathbf{k}\boldsymbol{\lambda}}            
\delta (E_{n}-E_{0}-\hbar \omega)

\end{align}


    Induced Emission

    Let's suppose that initial and final states are:

    \begin{align}
|I\rangle&=|n\rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle \\
|F\rangle&=|0\rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,M_{\mathbf{k}\boldsymbol{\lambda}},...\rangle ,
\end{align}

    where |n\rangle \! and  |0\rangle \! are the initial and final states of \mathcal{H}_{0} (say a hydrogen atom) with energies E_{0}<E_{n} \! and  |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle and  |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,M_{\mathbf{k}\boldsymbol{\lambda}},...\rangle are the initial and final states of the electromagnetic field.

    The matrix element of \Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}} is given by:

    \begin{align}
\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle

&=\langle 0|\otimes \langle N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,M_{\mathbf{k}\boldsymbol{\lambda}},...|[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}]|n\rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle\\

&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\langle N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,M_{\mathbf{k}\boldsymbol{\lambda}},...|\mathbf{a}^{\dagger}_{\mathbf{k}\boldsymbol{\lambda}}|N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}},...\rangle\\

&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\langle M_{\mathbf{k}\boldsymbol{\lambda}}|\mathbf{a}^{\dagger}_{\mathbf{k}\boldsymbol{\lambda}}|N_{\mathbf{k}\boldsymbol{\lambda}}\rangle\\

&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\sqrt{N_{\mathbf{k}\boldsymbol{\lambda}}+1}\langle M_{\mathbf{k}\boldsymbol{\lambda}}|N_{\mathbf{k}\boldsymbol{\lambda}}+1\rangle\\

&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\sqrt{N_{\mathbf{k}\boldsymbol{\lambda}}+1}\delta_{M_{\mathbf{k}\boldsymbol{\lambda}},N_{\mathbf{k}\boldsymbol{\lambda}}+1}\\      
\end{align}

    The last line shows how the emission process works: the system releases a single photon into mode \mathbf{k}\boldsymbol{\lambda} of the radiation field. Namely, the final state is given by:

    \begin{align}
|F\rangle&=|0\rangle \otimes |N_{\mathbf{k}_1\boldsymbol{\lambda}_1},...,N_{\mathbf{k}\boldsymbol{\lambda}}+1,...\rangle \\ 
\end{align}

    Finally, we can write the transition rate for induced emission as follows:

    \begin{align}
\Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}

&=\frac{4\pi^{2} e^{2} }{\omega V}
\left|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\sqrt{N_{\mathbf{k}\boldsymbol{\lambda}}+1}\right|^{2}            
\delta (E_{0}-E_{n}+\hbar \omega)\\

&=\frac{4\pi^{2} e^{2} }{\omega V}
\left|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\right|^{2} (N_{\mathbf{k}\boldsymbol{\lambda}}+1)           
\delta (E_{n}-E_{0}-\hbar \omega)

\end{align}


    Important Phenomenon: Spontaneous Emission

    Let's suppose that the initial state is a single Hydrogen atom in the  2P \! state in vacuum (no photons present). The state can be written as

    \begin{align}
|I\rangle&=|2P\rangle \otimes |0,...,0,...\rangle .
\end{align}

    In contrast to induced emission, there can be a process in which the final state is:

    \begin{align}
|F\rangle&=|1S\rangle \otimes |0,...,1,...\rangle ,
\end{align}

    where a single photon has been emitted without any external perturbation. This emission process is called Spontaneous emission. For an experimental observation of a Lamb-like shift in a solid state setup see here.

    Einstein's Model of Absorption and Induced Emission

    Let's use statistical mechanics to study a cavity filled with radiation. For this we need the Planck distribution:

    \begin{align}
\langle N_{\boldsymbol{k}\boldsymbol{\lambda}}\rangle=\frac{1}{e^{\frac{\hbar c k}{K_{B}T}}-1} 
\end{align}
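    To get a feel for these occupation numbers, the sketch below evaluates \langle N_{\boldsymbol{k}\boldsymbol{\lambda}}\rangle for two illustrative modes (Python; the wavelengths and temperature are arbitrary choices, not taken from the text):

```python
import math

def mean_photon_number(wavelength_m, T):
    """Planck occupation <N> = 1 / (exp(hbar c k / kB T) - 1)."""
    hbar = 1.054571817e-34   # J s
    c = 2.99792458e8         # m/s
    kB = 1.380649e-23        # J/K
    k = 2 * math.pi / wavelength_m
    return 1.0 / math.expm1(hbar * c * k / (kB * T))

# A visible-light mode at room temperature is essentially empty,
# while a centimeter microwave mode at the same temperature holds many photons.
print(mean_photon_number(500e-9, 300))   # optical: vanishingly small
print(mean_photon_number(0.01, 300))     # 1 cm microwave: much greater than 1
```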

    This is just the occupation number of the state \boldsymbol{k}\boldsymbol{\lambda}. Let's suppose the following situation:

    • Our cavity is made up with atoms with two quantum levels with energies E_{n}\! and E_{0}\! such that E_{n}>E_{0}\!.
    • The walls are emitting and absorbing radiation (thermal radiation) such that the system is at equilibrium. Since there are just two levels, the photons emitted by the atoms must have energy equal to E_{n}-E_{0}\!.

    The Boltzmann distribution tells us that the probabilities to find atoms at energies E_{n}\! and E_{0}\! are respectively

    \begin{align}
P_{n}=\frac{1}{Q}e^{-\frac{E_{n}}{K_{B}T}}\\
P_{0}=\frac{1}{Q}e^{-\frac{E_{0}}{K_{B}T}}\\
\end{align}

    Let's call  \langle N \rangle the number of photons at equilibrium. At equilibrium we have

    \begin{align}
0&=\frac{dN}{dt}\\
0&=\left(\frac{dN}{dt}\right)_{abs}+\left(\frac{dN}{dt}\right)_{ind.em}
\end{align}

    It is natural to express the absorption and emission rate as proportional to the probability of finding excited or ground state atoms:

    \begin{align}
\left(\frac{dN}{dt}\right)_{abs}&=-BNP_{0}\\
\left(\frac{dN}{dt}\right)_{ind.em}&=BNP_{n}
\end{align}

    where B\! is some constant. Since P_{n}<P_{0}\! we have

    \left|\left(\frac{dN}{dt}\right)\right|_{abs}>\left|\left(\frac{dN}{dt}\right)\right|_{ind.em}

    This means that eventually all photons will be absorbed and then  \langle N \rangle =0. This is of course not a physical situation. Einstein realized that there is another kind of emission process that balances the rates in such a way that  \langle N \rangle \ne 0. This emission is precisely the spontaneous emission, and its rate can be written as

    \begin{align}
\left(\frac{dN}{dt}\right)_{spon.em}&=AP_{n}
\end{align}

    Then we have

    \begin{align}
0&=\left(\frac{dN}{dt}\right)_{abs}+\left(\frac{dN}{dt}\right)_{ind.em}+\left(\frac{dN}{dt}\right)_{spon.em}\\
0&=-BNP_{0}+BNP_{n}+AP_{n}\\
\end{align}

    And solving for A we have

    \begin{align}
A&=B \langle N \rangle \left(e^{\frac{E_{n}-E_{0}}{K_{B}T}}-1\right)\\
&=B \langle N \rangle \frac{1}{ \langle N \rangle }\\
&=B
\end{align}

    In conclusion, we obtain the following for the total emission rate:

    \begin{align}
\left(\frac{dN}{dt}\right)_{emission}&=\left(\frac{dN}{dt}\right)_{ind.em}+\left(\frac{dN}{dt}\right)_{spon.em}\\
&=BNP_{n}+AP_{n}\\
&=BP_{n}(N+1)\\
\end{align}

    Notice that the factor (N+1)\! matches the factor found in our previous quantum mechanical calculation.
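    The fixed-point argument above can be checked directly: with A=B\!, the stationary photon number obtained from the rate equation reproduces the Planck distribution. A minimal sketch (Python; the constant B\! cancels in the steady-state condition):

```python
import math

def steady_state_photons(dE_over_kT):
    """Solve 0 = -B N P0 + B N Pn + A Pn for N, with A = B.
    Only the Boltzmann ratio P_n / P_0 = exp(-dE/kT) enters; B drops out."""
    x = math.exp(-dE_over_kT)       # P_n / P_0
    # 0 = -N + N x + x   =>   N = x / (1 - x)
    return x / (1.0 - x)

def planck(dE_over_kT):
    """Planck occupation 1 / (exp(dE/kT) - 1)."""
    return 1.0 / math.expm1(dE_over_kT)

# the steady state of Einstein's rate equations is exactly the Planck number
for r in (0.1, 1.0, 5.0):
    assert abs(steady_state_photons(r) - planck(r)) < 1e-12
```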

    Details of Spontaneous Emission

    Power of the emitted light

    Using our previous result for \Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}, we can calculate the power dP\! of the light with polarization \boldsymbol{\lambda} per unit solid angle that the spontaneous emission produces:

    \begin{align}
dP&=\sum_{k}\hbar \omega \;\Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\\
&=d\Omega V \int \frac{dk\;k^{2}}{(2\pi)^{3}}\;\hbar \omega \;\Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\\
&=d\Omega V \int \frac{d\omega\;\omega^{2}}{(2\pi c)^{3}}\;\hbar \omega \;\Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\\
\end{align}

    Then

    \begin{align}
\frac{dP}{d\Omega}

&=V \int\frac{d\omega\;\omega^{2}}{(2\pi c)^{3}}\;\hbar \omega \left[ \frac{4\pi^{2} e^{2} }{\omega V}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2} \delta (E_{n}-E_{0}-\hbar \omega) \right]\\

&=\frac{e^{2}\hbar}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\int d\omega\;\omega^{2} \delta (E_{n}-E_{0}-\hbar \omega)\\

&=\frac{e^{2}\hbar}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\frac{(E_{n}-E_{0})^{2}}{\hbar^{3}}\;\;\;\leftarrow\;\;\;\hbar\omega_{n,0}=E_{n}-E_{0}\\

&=\frac{e^{2}\omega^{2}_{n,0}}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\\

\end{align}
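    The frequency integral above was collapsed using \delta(E_{n}-E_{0}-\hbar\omega)=\frac{1}{\hbar}\delta(\omega-\omega_{n,0}). As a numerical sanity check, one can model the delta function by a narrow normalized Gaussian and verify the 1/\hbar scaling (Python; the width eps is an arbitrary regularization):

```python
import numpy as np

def delta_integral(w0, hbar=1.0, eps=1e-4):
    """Check  int dw w^2 delta(hbar*w0 - hbar*w) = w0^2 / hbar
    by modeling the delta function as a narrow normalized Gaussian."""
    w = np.linspace(w0 - 50*eps, w0 + 50*eps, 200001)
    dw = w[1] - w[0]
    x = hbar*w0 - hbar*w
    delta = np.exp(-x**2 / (2*(hbar*eps)**2)) / (hbar*eps*np.sqrt(2*np.pi))
    return np.sum(w**2 * delta) * dw

assert abs(delta_integral(3.0) - 9.0) < 1e-3              # hbar = 1: gives w0^2
assert abs(delta_integral(3.0, hbar=2.0) - 4.5) < 1e-3    # scaling by 1/hbar
```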


    Conservation of Momentum

    Consider matter in an eigenstate of momentum with eigenvalue \hbar q_{n}. Suppose that it makes a transition to the eigenstate with momentum \hbar q_{0} via spontaneous emission. Momentum must be conserved, so we have a process where:

    Initial Momenta\;\;\;\rightarrow\;\;\;\begin{align}matter& \rightarrow \hbar q_{n}\\vacuum& \rightarrow 0\end{align}


    Final Momenta\;\;\;\rightarrow\;\;\;\begin{align}matter& \rightarrow \hbar q_{0}\\vacuum& \rightarrow \hbar q_{n}-\hbar q_{0}\end{align}


    Let's calculate the matrix element \langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle for two cases.


    Case 1: Single free charged particle

    \begin{align}
\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle

&=\boldsymbol{\lambda}^{*}\cdot\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}|\mathbf{q_{n}}\rangle\\

&=\boldsymbol{\lambda}^{*}\cdot\left\langle \mathbf{q_{0}}\left|\frac{1}{2} 
\left[\frac{\boldsymbol{p_{i}}}{m}e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}+e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}\frac{\boldsymbol{p_{i}}}{m}\right]\right|\mathbf{q_{n}}\right\rangle\\

&=\boldsymbol{\lambda}^{*}\cdot\frac{1}{2}\left\langle \mathbf{q_{0}}\left|
\left[\frac{\hbar \mathbf{q_{0}}}{m}e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}+e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}\frac{\hbar \mathbf{q_{n}}}{m}\right]\right|\mathbf{q_{n}}\right\rangle\\

&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}\langle \mathbf{q_{0}}|
e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}|\mathbf{q_{n}}\rangle\\

&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}
\int d^{3}r_{i} \langle \mathbf{q_{0}}|\mathbf{r}_{i}\rangle \langle \mathbf{r}_{i}| e^{-i\mathbf{k}\cdot\mathbf{r}_{i}}|\mathbf{q_{n}}\rangle\\

&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}
\int d^{3}r_{i} e^{-i\mathbf{q_{0}}\cdot\mathbf{r}_{i}} e^{-i\mathbf{k}\cdot\mathbf{r}_{i}} e^{i\mathbf{q_{n}}\cdot\mathbf{r}_{i}}\\

&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}
\delta(\mathbf{q_{n}}-\mathbf{q_{0}}-\mathbf{k}) \\

\end{align}

    This result is very interesting! It says that the momentum of the emitted photon must be

    \begin{align}
\hbar \mathbf{k} =\hbar \mathbf{q_{n}} -\hbar \mathbf{q_{0}}   
\end{align}

    However this is impossible from the point of view of conservation of energy:

    \begin{align}
\hbar c k =\frac{\hbar^{2} q^{2}_{n}}{2m}-\frac{\hbar^{2} q^{2}_{0}}{2m}
\end{align}

    This means that a single free charged particle cannot make such a transition: momentum and energy conservation cannot be satisfied simultaneously. So a single free charged particle does not see the vacuum fluctuations.
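    The incompatibility can be made explicit symbolically. For collinear emission in one dimension, imposing momentum conservation k=q_{n}-q_{0} in the energy balance forces the mean particle velocity \hbar(q_{n}+q_{0})/2m to equal c\!, which a nonrelativistic particle cannot satisfy. A sketch (Python with SymPy):

```python
import sympy as sp

qn, q0, m, c, hbar = sp.symbols('q_n q_0 m c hbar', positive=True)

# momentum conservation for collinear emission: k = q_n - q_0
k = qn - q0
# energy conservation: photon energy minus change in kinetic energy
balance = hbar*c*k - hbar**2*(qn**2 - q0**2)/(2*m)

# the balance factors as (q_n - q_0) * [hbar*c - hbar^2 (q_n + q_0)/(2m)]
print(sp.factor(balance))

# it vanishes only for q_0 = q_n (no photon emitted) or for
# hbar*(q_n + q_0)/(2m) = c, i.e. a mean velocity equal to c
assert sp.simplify(balance.subs(q0, qn)) == 0
assert sp.simplify(balance.subs(q0, 2*m*c/hbar - qn)) == 0
```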


    Case 2: General Case (System of particles)


    \begin{align}
\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle

&=\boldsymbol{\lambda}^{*}\cdot\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}|\mathbf{q_{n}}\rangle\\

&=\boldsymbol{\lambda}^{*}\cdot\left\langle \mathbf{q_{0}}\left|\int d^{3}r j(\mathbf{r}) e^{-i\mathbf{k}\cdot\mathbf{r}}\right|\mathbf{q_{n}}\right\rangle\\

&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|j(\mathbf{r})|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\

\end{align}

    We can use the total momentum of the system, \mathbf{P}=\sum_{i}\mathbf{p}_{i}, as the generator of translations in \mathbf{r}, so that we can write

    \begin{align}
j(\mathbf{r})=e^{-\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}j(\mathbf{r}=0)e^{\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}
\end{align}


    Then

    \begin{align}
\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle

&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|j(\mathbf{r})|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\

&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|e^{-\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}j(0)e^{\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\

&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|e^{-i\mathbf{q_{0}}\cdot\mathbf{r}}j(0)e^{i\mathbf{q_{n}}\cdot\mathbf{r}}|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\

&=\boldsymbol{\lambda}^{*}\cdot \langle \mathbf{q_{0}}|j(0)|\mathbf{q_{n}}\rangle\int d^{3}r e^{i\mathbf{q_{n}}\cdot\mathbf{r}} e^{-i\mathbf{q_{0}}\cdot\mathbf{r}} e^{-i\mathbf{k}\cdot\mathbf{r}}\\

&=\boldsymbol{\lambda}^{*}\cdot \langle \mathbf{q_{0}}|j(0)|\mathbf{q_{n}}\rangle\delta(\mathbf{q_{n}}-\mathbf{q_{0}}-\mathbf{k})\\

\end{align}

    The last line shows again that momentum is conserved:

    \begin{align}
\hbar \mathbf{k} =\hbar \mathbf{q_{n}} -\hbar \mathbf{q_{0}}   
\end{align}

    Electric Dipole Transitions

    Let's consider a nucleus (say of a hydrogen atom) well localized in space. Typically the wavelength of the emitted light is much bigger than the size of the electron's orbit around the nucleus (say, the Bohr radius a_{B}\!). For example, the wavelength of blue light is roughly 450 nm (4500 Angstrom), while the size of the electron's orbit in the hydrogen atom is of the order of 1 Angstrom. This means that:

    \;\;\;\;\lambda \gg a_{B}\;\;\;\;\leftrightarrow\;\;\;\;\;k a_{B}\ll 1
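    A quick numerical check of this separation of scales (Python; the blue-light wavelength is an illustrative choice):

```python
import math

# k * a_B for an optical transition: justifies e^{-ik.r} ~ 1
wavelength = 450e-9          # blue light, m
a_B = 0.529e-10              # Bohr radius, m
k = 2 * math.pi / wavelength
print(k * a_B)               # of order 1e-3, far smaller than 1
```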

    The matrix element is then

    \begin{align}
\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle

&=\mathbf{\lambda}^{*}\cdot\langle 0|\mathbf{j}_{\mathbf{k}}|n\rangle\\

&=\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;e^{-i\mathbf{k}\cdot\mathbf{r}} \langle 0|\mathbf{j}(\mathbf{r})|n\rangle\\

&=\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;\left[1-i\mathbf{k}\cdot\mathbf{r}+...\right] \langle 0|\mathbf{j}(\mathbf{r})|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;\langle 0|\mathbf{j}(\mathbf{r})|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;\langle 0|\frac{1}{2}\left[\sum_{i} \frac{\mathbf{p}_{i}}{m}  \delta(\mathbf{r}-\mathbf{r}_{i})+\delta(\mathbf{r}-\mathbf{r}_{i})\frac{\mathbf{p}_{i}}{m}\right]|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\langle 0|\sum_{i} \frac{\mathbf{p}_{i}}{m}|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\langle 0|\frac{\mathbf{P}}{m}|n\rangle\;\;\;\;\;\;\;
\leftarrow\;\;\;\;\;\frac{\mathbf{P}}{m}=\frac{[\mathbf{R},\mathbf{H}_{0}]}{i\hbar}\\

&\cong\mathbf{\lambda}^{*}\cdot\frac{1}{i\hbar}\langle 0|[\mathbf{R} \mathbf{H}_{0}-\mathbf{H}_{0}\mathbf{R}]|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\frac{1}{i\hbar}\langle 0|[\mathbf{R}E_{n}-E_{0}\mathbf{R}]|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\frac{E_{n}-E_{0}}{i\hbar}\langle 0|\mathbf{R}|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\frac{\hbar\omega_{n,0}}{i\hbar}\langle 0|\mathbf{R}|n\rangle\\

&\cong\mathbf{\lambda}^{*}\cdot\frac{\omega_{n,0}}{i}\underbrace{\langle 0|\mathbf{R}|n\rangle}_{\mathbf{d}_{0,n}}\\

&\cong\frac{\omega_{n,0}}{i}\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\\ 
\end{align}

    Notice that \mathbf{d}_{0,n} is the off-diagonal matrix element of the dipole moment operator. The power per unit solid angle for a given polarization \boldsymbol{\lambda} is given by
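    The replacement \frac{\mathbf{P}}{m}=\frac{[\mathbf{R},\mathbf{H}_{0}]}{i\hbar} used above holds for any Hamiltonian of the form \mathbf{H}_{0}=\frac{\mathbf{P}^{2}}{2m}+V(\mathbf{R}). As a sanity check, one can verify it numerically in a truncated harmonic oscillator basis (Python with NumPy; units with \hbar=m=\omega=1; truncation only corrupts the highest basis states, so only the low block is compared):

```python
import numpy as np

N = 40
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator, a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)                # position, hbar = m = omega = 1
p = 1j * (a.T - a) / np.sqrt(2)           # momentum
H = p @ p / 2 + x @ x / 2                 # oscillator Hamiltonian H_0

comm = (x @ H - H @ x) / 1j               # [x, H_0] / (i hbar) should equal p/m = p
# the truncation spoils only entries near the highest states
assert np.allclose(comm[:N-2, :N-2], p[:N-2, :N-2])
```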

    \begin{align}
\frac{dP}{d\Omega}
&=\frac{e^{2}\omega^{2}_{n,0}}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\\

&\cong\frac{e^{2}\omega^{2}_{n,0}}{2\pi c^{3}}\left|\frac{\omega_{n,0}}{i}\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\right|^{2}\\

&\cong\frac{e^{2}\omega^{4}_{n,0}}{2\pi c^{3}}\left|\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\right|^{2}\\

\end{align}

    Selection Rules

    Let's assume that the initial and final states are eigenstates of \mathbf{L}^{2} and \mathbf{L}_{z}. Using commutation relations we can obtain the following selection rules for the vector \mathbf{d}_{0,n}:

    1. Selection Rules for m

    1.1[\mathbf{L}_{z},\mathbf{R}_{z}]=0. From this we have

    \begin{align}
0&=\langle l' m' |[\mathbf{L}_{z},\mathbf{R}_{z}]| l m \rangle\\
&=\langle l' m' |\mathbf{L}_{z} \mathbf{R}_{z} - \mathbf{R}_{z}\mathbf{L}_{z}| l m \rangle\\
&=\hbar(m'-m)\langle l' m' |\mathbf{R}_{z}| l m \rangle\\
\end{align}

    This means that \langle l' m' |\mathbf{R}_{z}| l m \rangle=0 if m'-m\neq 0.

    1.2
    • [\mathbf{L}_{z},\mathbf{R}_{x}]=i\hbar\mathbf{R}_{y}. From this we have
      \begin{align}
\langle l' m' |[\mathbf{L}_{z},\mathbf{R}_{x}] |l m \rangle&=i\hbar \langle l' m' |\mathbf{R}_{y}| l m \rangle\\
(m'-m)\langle l' m' |\mathbf{R}_{x}| l m \rangle&=i\langle l' m' |\mathbf{R}_{y}| l m \rangle\\
\end{align}
    • [\mathbf{L}_{z},\mathbf{R}_{y}]=-i\hbar\mathbf{R}_{x}. From this we have
      \begin{align}
\langle l' m' |[\mathbf{L}_{z},\mathbf{R}_{y}] |l m \rangle&=-i\hbar \langle l' m' |\mathbf{R}_{x}| l m \rangle\\
(m'-m)\langle l' m' |\mathbf{R}_{y}| l m \rangle&=-i\langle l' m' |\mathbf{R}_{x}| l m \rangle\\
\end{align}

    Combining

    \begin{align}
(m'-m)^{2}\langle l' m'|\mathbf{R}_{x}|l m \rangle=\langle l' m'|\mathbf{R}_{x}|l m \rangle\\
(m'-m)^{2}\langle l' m'|\mathbf{R}_{y}|l m \rangle=\langle l' m'|\mathbf{R}_{y}|l m \rangle\\
\end{align}

    From here we see that

    \begin{align}
(m'-m)^{2}\langle l' m'|\mathbf{R}_{x,y}|l m \rangle&=\langle l' m'|\mathbf{R}_{x,y}|l m \rangle\\
((m'-m)^{2}-1)\langle l' m'|\mathbf{R}_{x,y}|l m \rangle &=0\\

\end{align}

    This means that \langle l' m'|\mathbf{R}_{x,y}|l m \rangle=0 if [(m'-m)^{2}-1]\neq0 \;\;\;\;\rightarrow\;\;\;\;m'\neq m\pm 1
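    These m\! selection rules can be checked by direct integration over the sphere, since the angular part of \mathbf{R}_{z} between |l m\rangle states is an integral of spherical harmonics against \cos\theta. A numerical sketch with a few hand-tabulated harmonics (Python with NumPy):

```python
import numpy as np

# a few explicit spherical harmonics (th = polar angle, ph = azimuth)
def Y(l, m, th, ph):
    if (l, m) == (0, 0): return 1/np.sqrt(4*np.pi) * np.ones_like(th + ph)
    if (l, m) == (1, 0): return np.sqrt(3/(4*np.pi)) * np.cos(th) + 0*ph
    if (l, m) == (1, 1): return -np.sqrt(3/(8*np.pi)) * np.sin(th) * np.exp(1j*ph)
    if (l, m) == (2, 0): return np.sqrt(5/(16*np.pi)) * (3*np.cos(th)**2 - 1) + 0*ph
    raise ValueError("harmonic not tabulated")

th = np.linspace(0, np.pi, 400)[:, None]                    # polar grid
ph = np.linspace(0, 2*np.pi, 400, endpoint=False)[None, :]  # azimuth grid
dth, dph = np.pi/399, 2*np.pi/400

def Rz_element(lp, mp, l, m):
    """Angular part of <l' m'|R_z|l m>: integral of Y*_{l'm'} cos(th) Y_{lm}."""
    f = np.conj(Y(lp, mp, th, ph)) * np.cos(th) * Y(l, m, th, ph) * np.sin(th)
    return np.sum(f) * dth * dph

assert abs(Rz_element(1, 0, 0, 0) - 1/np.sqrt(3)) < 1e-3  # m'=m, l'=l+1: allowed
assert abs(Rz_element(1, 1, 0, 0)) < 1e-8                 # m'=m+1 under R_z: forbidden
assert abs(Rz_element(2, 0, 0, 0)) < 1e-6                 # l'=l+2: forbidden
```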

    2. Selection Rule for l

    Consider the following commutator proposed by Dirac

    [\mathbf{L}^{2},[\mathbf{L}^{2},\mathbf{R}]]=2\hbar ^{2}(\mathbf{R}\mathbf{L}^{2}+\mathbf{L}^{2}\mathbf{R})

    After some algebra we can see that

    (l'+l)(l'+l+2)((l'-l)^{2}-1)\langle l' m'|\mathbf{R}|l m \rangle=0

    Since l\! is non-negative, (l'+l+2)\neq0\;\;\;\;\forall\;\;\;\;l',l. There are two possibilities:

    • (l'+l)=0\!, which happens only for l'=l=0\!. In this case the matrix element \langle 0 0|\mathbf{R}|0 0 \rangle vanishes anyway by parity, so this possibility is trivial and doesn't say anything new.
    • \langle l' m'|\mathbf{R}|l m \rangle=0 if ((l'-l)^{2}-1)\neq 0\;\;\;\;\rightarrow\;\;\;\;l'\neq l\pm 1


    Summary

    If the initial and final states are eigenstates for \mathbf{L}^{2} and \mathbf{L}_{z} then the possible transitions that can occur in the dipole approximation are

    \begin{align}
l'&= l\pm 1\\
m'&= m\\
m'&= m\pm 1\\
\end{align}
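    These rules are straightforward to encode. The helper below enumerates the dipole-allowed transitions from the n=2,3 states of hydrogen down to 1S (Python; the (n,l,m) labels are bookkeeping only, since the rules involve just l\! and m\!):

```python
def dipole_allowed(l1, m1, l2, m2):
    """Electric dipole selection rules: l' = l +/- 1 and m' - m in {-1, 0, 1}."""
    return abs(l1 - l2) == 1 and abs(m1 - m2) <= 1

# hydrogen states (n, l, m) for n = 1, 2, 3
states = [(n, l, m) for n in (1, 2, 3) for l in range(n) for m in range(-l, l + 1)]

# transitions down to 1S = (1, 0, 0): only the p states survive
to_1s = [(n, l, m) for (n, l, m) in states if n > 1 and dipole_allowed(l, m, 0, 0)]
print(to_1s)
```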


    Example: Transitions Among Levels n=1,2,3 of Hydrogen Atom

    Let's consider the levels n=1,2,3 of the hydrogen atom. The possible transitions to the state 1S according to the selection rules are the following



    The possible transitions to the state 2p0 are the following



    Power & Polarization of Emitted Light

    Case m' = m: In this case the selection rules tell us that:

    \begin{align}
\mathbf{d}_{0,n}= \langle 0|\mathbf{R}|n\rangle=

\begin{pmatrix}
  \langle 0|\mathbf{R}_{x}|n\rangle  \\
  \langle 0|\mathbf{R}_{y}|n\rangle  \\
  \langle 0|\mathbf{R}_{z}|n\rangle  \\ 
\end{pmatrix}

=\begin{pmatrix}
  0  \\
  0  \\
  \langle 0|\mathbf{R}_{z}|n\rangle  \\ 
\end{pmatrix}
\end{align}

    Then we can say

    • The light is linearly polarized in the plane defined by \mathbf{k} and the z axis.

    Image:Planepolarization.png

    • The power is given by
      \begin{align}
\frac{dP}{d\Omega}
&\cong\frac{e^{2}\omega^{4}_{n,0}}{2\pi c^{3}}\left|\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\right|^{2}\\
&\cong\frac{e^{2}\omega^{4}_{n,0}}{2\pi c^{3}}\left|\langle 0|\mathbf{R}_{z}|n\rangle\right|^{2}\;\sin^{2}\theta\\
\end{align}
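    Integrating this \sin^{2}\theta distribution over all directions gives \int \sin^{2}\theta \,d\Omega = 8\pi/3, so the total radiated power is \frac{4e^{2}\omega^{4}_{n,0}}{3c^{3}}\left|\langle 0|\mathbf{R}_{z}|n\rangle\right|^{2}. The angular factor can be verified symbolically (Python with SymPy):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
# integral of sin^2(theta) over the sphere, with measure sin(theta) dtheta dphi
solid_angle_factor = sp.integrate(sp.sin(theta)**2 * sp.sin(theta),
                                  (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(solid_angle_factor)   # 8*pi/3
```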

    Case m'=m\pm 1: In this case the selection rules tell us that:

    \begin{align}
\mathbf{d}_{0,n}= \langle 0|\mathbf{R}|n\rangle=

\begin{pmatrix}
  \langle 0|\mathbf{R}_{x}|n\rangle  \\
  \langle 0|\mathbf{R}_{y}|n\rangle  \\
  \langle 0|\mathbf{R}_{z}|n\rangle  \\ 
\end{pmatrix}

=\begin{pmatrix}
\langle 0|\mathbf{R}_{x}|n\rangle  \\
  \langle 0|\mathbf{R}_{y}|n\rangle  \\
  0 \\ 
\end{pmatrix}
\end{align}

    From the previous result we have

    \begin{align}
\mp \langle l' m' |\mathbf{R}_{y}| l m \rangle&=-i\langle l' m' |\mathbf{R}_{x}] l m \rangle\\
\end{align}

    Then

    \begin{align}
\mathbf{d}_{0,n}= \langle 0|\mathbf{R}_{x}|n\rangle
\begin{pmatrix}
  1  \\
  \pm i  \\
  0  \\ 
\end{pmatrix}
\end{align}

    Then we can say

    • \mathbf{d}_{0,n} lies in the XY plane. The polarization of the emitted light is circular.
    • Let's put a detector to observe the light emitted along the positive z axis. Since right circularly polarized light carries angular momentum \hbar\! while left circularly polarized light carries angular momentum -\hbar\!, we can state the following:
      • If we see right circularly polarized light, then by conservation of angular momentum we know that
        \begin{align}  
\hbar m=\hbar m' + \hbar\;\;\;\;\;\rightarrow \;\;\;\;\; m'-m=-1,
\end{align}
        i.e., the transition was m'-m=-1\!
      • If we see left circularly polarized light, then by conservation of angular momentum we know that
        \begin{align}  
\hbar m=\hbar m' - \hbar\;\;\;\;\;\rightarrow \;\;\;\;\; m'-m=1,
\end{align}
        i.e., the transition was m'-m=1\!


      Below are some examples of Einstein's coefficient and spontaneous emission calculations

      - An Example of Einstein's coefficient calculation

      - An Example of spontaneous emission calculation

      Scattering of Light

      ( Notes and LaTex code, courtesy of Dr. Oskar Vafek)



      We can analyze how a charged system interacts with photons and scatters them. The problem of light scattering can be considered as a transition from an initial state |\chi_0\rangle=|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle to a final state |n;N_{k,\lambda}-1,N_{k',\lambda'}=1\rangle, for which we can calculate the transition amplitude. Let us deal with some basics first. We can write the Schrodinger equation for an electron in a potential \ V(r) interacting with quantized EM radiation as:

      i\hbar\frac{\partial}{\partial t}|\psi\rangle
=\mathcal{H}|\psi\rangle

      where

      \mathcal{H}=\frac{1}{2m}\left(p-\frac{e}{c}A(r)\right)^2+V(r)+\sum_{k,\hat{\lambda}}\hbar\omega_{k}\left(\hat{a}_{k\hat{\lambda}}^{\dagger}\hat{a}_{k\hat{\lambda}}+\frac{1}{2}\right)

      We are considering the transverse gauge, in which the vector potential operator can be defined as: \mathbf{\hat{A}(r)}=\frac{1}{\sqrt{V}}\sum_{k,\lambda}\left[\sqrt{\frac{2\pi\hbar}{\omega_{k}}}c\;\left(\hat{a}_{k,\hat{\lambda}}\hat{\lambda}e^{ik\cdot r}+\hat{a}^{\dagger}_{k,\hat{\lambda}}\hat{\lambda^*}e^{-ik\cdot r}\right)\right]

      where

      [\hat{a}_{k\hat{\lambda}},\hat{a}_{k'\hat{\lambda'}}^{\dagger}]=\delta_{kk'}\delta_{\hat{\lambda}\hat{\lambda'}};\;\;\;\;
[\hat{a}_{k\hat{\lambda}},\hat{a}_{k'\hat{\lambda'}}]=0

      Let us define,

      \mathcal{H}=\mathcal{H}_0+\mathcal{H}'

      where

      \mathcal{H}_0=\mathcal{H}^{(at)}_0+\mathcal{H}^{(rad)}_0=\left(\frac{p^2}{2m}+V(r)\right)+\sum_{k,\hat{\lambda}}\hbar\omega_{k}\left(\hat{a}_{k\hat{\lambda}}^{\dagger}\hat{a}_{k\hat{\lambda}}+\frac{1}{2}\right)


      and

      \mathcal{H}'=-\frac{e}{mc}\mathbf{A(r)}\cdot p+\frac{e^2}{2mc^2}\mathbf{A(r)}\cdot \mathbf{A(r)}


      We can use the Dirac picture to represent the wavefunction as: |\psi(t)\rangle=e^{-\frac{i}{\hbar}\mathcal{H}_0t}|\chi(t)\rangle

      Therefore,

      i\hbar\frac{\partial}{\partial t}|\chi\rangle =\mathcal{H}'_I(t)|\chi\rangle
=e^{\frac{i}{\hbar}\mathcal{H}_0t}\mathcal{H}'e^{-\frac{i}{\hbar}\mathcal{H}_0t}|\chi\rangle
=e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}\left(e^{\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\mathcal{H}'
e^{-\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}|\chi\rangle


      More precisely,

      \mathcal{H}'_I(t)= e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}\left(e^{\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\mathcal{H}'
e^{-\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}
=e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}\left(
-\frac{e}{mc}A(r,t)\cdot p+\frac{e^2}{2mc^2}A(r,t)\cdot A(r,t)\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}

      where the vector potential operator which is now time dependent can be defined as,

      \mathbf{A(r,t)}=\frac{1}{\sqrt{V}}\sum_{k,\lambda}\left[\sqrt{\frac{2\pi\hbar}{\omega_{k}}}c\;\left(\hat{a}_{k,\hat{\lambda}}\hat{\lambda}e^{ik\cdot r-i\omega_{k} t}+\hat{a}^{\dagger}_{k,\hat{\lambda}}\hat{\lambda^*}e^{-ik\cdot r+i\omega_{k}t}\right)\right]

      Using time dependent perturbation theory up to second order, we can write the wavefunction in the Dirac picture as

      |\chi(t)\rangle\approx|\chi_0\rangle+\frac{1}{i\hbar}\int_{-\infty}^{t}dt'\mathcal{H}'_I(t')|\chi_0\rangle+
\frac{1}{(i\hbar)^2}\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt''\mathcal{H}'_I(t')\mathcal{H}'_I(t'')|\chi_0\rangle

      where the perturbation is slowly switched on at t=-\infty.
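      The structure of this expansion can be tested on a toy model. The sketch below (Python with NumPy; the two-level Hamiltonian, frequencies, and coupling are arbitrary choices, not taken from the text) integrates the full Schrodinger equation and compares the transition amplitude with the first-order term of the series:

```python
import numpy as np

# two-level system, hbar = 1: H0 = diag(0, w0), perturbation H'(t) = V cos(w t) sigma_x
w0, w, V, T = 1.0, 0.7, 1e-3, 20.0
n_steps = 20000
dt = T / n_steps

def rhs(t, c):
    H = np.array([[0.0, V*np.cos(w*t)],
                  [V*np.cos(w*t), w0]], dtype=complex)
    return -1j * (H @ c)

# "exact" evolution: RK4 on i dc/dt = H(t) c, starting in the ground state
c = np.array([1.0, 0.0], dtype=complex)
for i in range(n_steps):
    t = i * dt
    k1 = rhs(t, c)
    k2 = rhs(t + dt/2, c + dt*k1/2)
    k3 = rhs(t + dt/2, c + dt*k2/2)
    k4 = rhs(t + dt, c + dt*k3)
    c = c + dt*(k1 + 2*k2 + 2*k3 + k4)/6

# first-order Dyson term: c1(T) = (1/i) int_0^T dt' V cos(w t') e^{i w0 t'}
ts = (np.arange(n_steps) + 0.5) * dt
c1_first_order = -1j * np.sum(V*np.cos(w*ts) * np.exp(1j*w0*ts)) * dt

# the interaction-picture amplitude is e^{i w0 T} times the Schrodinger-picture one
assert abs(c[1]*np.exp(1j*w0*T) - c1_first_order) < 1e-5
```

For a weak coupling like this, the first-order term reproduces the full amplitude to many digits; higher-order terms enter only at order V^3.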

      As mentioned before, we need to calculate the transition amplitude from

      |\chi_0\rangle=|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle to the final state

      |n;N_{k,\lambda}-1,N_{k',\lambda'}=1\rangle

      Therefore we need to calculate the following transition amplitude,

       C(t)=\langle n;N_{k,\lambda}-1,N_{k',\lambda'}=1|\chi(t)\rangle

      Using second order time dependent perturbation theory, the amplitude for such a transition is

      C(t)=\frac{1}{i\hbar}\int_{-\infty}^{t}dt'\langle
n;N_{k,\lambda}-1,N_{k',\lambda'}=1|\mathcal{H}'_I(t')|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle

       +\frac{1}{(i\hbar)^2}\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt''\langle n;N_{k,\lambda}-1,N_{k',\lambda'}=1|\mathcal{H}'_I(t')\mathcal{H}'_I(t'')|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle

      The required transition can be made by the term proportional to \mathbf{A(r)}^2 (the diamagnetic term) in first order, while the term proportional to \mathbf{A(r)} (the paramagnetic term) gives a non-zero overlap in second order perturbation theory. Therefore we have:


       \begin{align}C(t)&=\frac{1}{i\hbar}\int_{-\infty}^{t}dt'\langle
n;N_{k,\lambda}-1,N_{k',\lambda'}=1|
e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}\left(
\frac{e^2}{2mc^2}\mathbf{A(r},t')\cdot \mathbf{A(r},t')\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}
|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle \\

&+ \frac{1}{(i\hbar)^2}\int_{-\infty}^{t}dt'
\int_{-\infty}^{t'}dt''\langle
n;N_{k,\lambda}-1,N_{k',\lambda'}=1|e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}\left(
-\frac{e}{mc}A(r,t')\cdot p\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}\times\\
 
&\times e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t''}\left(
-\frac{e}{mc}A(r,t'')\cdot p\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t''}
|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle\\

\end{align}

      We can ignore the \mathbf{r}-dependence of the gauge field by using the dipole approximation, that is, we expand e^{-i\mathbf{k}\cdot\mathbf{r}}=1-i\mathbf{k}\cdot\mathbf{r}+\ldots and keep only the leading term.

      \begin{align}C(t)&=\frac{1}{i\hbar}\frac{e^2}{2mc^2}\int_{-\infty}^{t}dt'e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)t'}\langle
N_{k,\lambda}-1,N_{k',\lambda'}=1| A(t')\cdot A(t')
|N_{k,\lambda},N_{k',\lambda'}=0\rangle\langle n|0\rangle\\

&+\frac{1}{(i\hbar)^2}\frac{e^2}{m^2c^2}\sum_{\alpha}\int_{-\infty}^{t}dt'
\int_{-\infty}^{t'}dt''e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_{\alpha})t'}e^{\frac{i}{\hbar}(\epsilon_{\alpha}-\epsilon_0)t''}\times\\
&\langle N_{k,\lambda}-1,N_{k',\lambda'}=1|A_{\mu}(t')A_{\nu}(t'')|N_{k,\lambda},N_{k',\lambda'}=0\rangle \langle n| p_{\mu} |\alpha\rangle \langle \alpha| p_{\nu} |0\rangle \\
\end{align}

      Let's define \mathbf{C(t)=C_1(t)+C_2(t)}

      where

      \begin{align}C_1(t) &=\frac{\delta_{n,0}}{i\hbar}\frac{e^2}{2mc^2}
\frac{1}{V}\frac{2\pi\hbar c^2}{\sqrt{\omega_{k}\omega_{k'}}}\hat{\lambda}\cdot {\hat{\lambda}^{'*}}
\langle
N_{k,\lambda}-1,N_{k',\lambda'}=1|(a_{k\lambda}a^{\dagger}_{k'\lambda'}+a^{\dagger}_{k'\lambda'}a_{k\lambda})
|N_{k,\lambda},N_{k',\lambda'}=0\rangle \times\\

&\int_{-\infty}^{t}dt'e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)t'}e^{-i(\omega_{k}-\omega_{k'})t'}e^{2\eta
t'}\\
  
 &=\frac{\delta_{n,0}}{i\hbar}\frac{e^2}{m}
\frac{1}{V}\frac{2\pi\hbar
}{\sqrt{\omega_{k}\omega_{k'}}}\hat{\lambda}\cdot {\hat{\lambda}^{'*}}\sqrt{N_{k\lambda}}
\times\frac{e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)t}e^{-i(\omega_{k}-\omega_{k'})t}e^{2\eta
t}}{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)-i(\omega_{k}-\omega_{k'})+2\eta}
\end{align}

      The second order term is

      \begin{align} C_2(t)&=
\frac{1}{(i\hbar)^2}\frac{e^2}{m^2c^2}\frac{1}{V}\frac{2\pi\hbar
c^2}{\sqrt{\omega_{k}\omega_{k'}}}\sqrt{N_{k\lambda}}
\sum_{\alpha}\int_{-\infty}^{t}dt' \int_{-\infty}^{t'}dt''
e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_{\alpha})t'}e^{\frac{i}{\hbar}(\epsilon_{\alpha}-\epsilon_0)t''}\times\\
&\left( \langle n| p |\alpha\rangle \cdot \hat{\lambda}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}e^{-i\omega_{k}t'}e^{\eta
t'}e^{i\omega_{k'}t''}e^{\eta t''}+
 \langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p |0\rangle\cdot{\hat{\lambda}}e^{i\omega_{k'}t'}e^{\eta
t'}e^{-i\omega_{k}t''}e^{\eta t''}\right)\\
&=
\frac{1}{(i\hbar)^2}\frac{e^2}{m^2}\frac{1}{V}\frac{2\pi\hbar}{\sqrt{\omega_{k}\omega_{k'}}}\sqrt{N_{k\lambda}}
\times\frac{e^{\frac{i}{\hbar}\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}\right)t}e^{2\eta
t}}{\frac{i}{\hbar}\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}-2i\hbar\eta\right)}
\times\\
&\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\frac{i}{\hbar}\left(\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta\right)}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\frac{i}{\hbar}\left(\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta\right)}\right)\\  \end{align}


      Therefore,

      \begin{align}C(t)&=C_1(t)+C_2(t)\\
&=-\frac{e^{\frac{i}{\hbar}\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}\right)t}e^{2\eta
t}}{\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}-2i\hbar\eta\right)}\frac{\sqrt{N_{k\lambda}}}{V}
\frac{2\pi \hbar e^2}{m\sqrt{\omega_{k}\omega_{k'}}}\times\\
&\left(\delta_{n,0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}-\frac{1}{m}
\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\right)\\
\end{align}

      The time dependent probability is

      \begin{align}\mathcal{P}(t)&=|C(t)|^2\\
&=\frac{e^{4\eta
t}}{\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}\right)^2+4\hbar^2\eta^2}\frac{N_{k\lambda}}{V^2}
\frac{4\pi^2 \hbar^2 e^4}{m^2\omega_{k}\omega_{k'}}\times\\
&\left|\delta_{n,0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}-\frac{1}{m}
\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\right|^2\\ \end{align}


      and the transition rate is

      \begin{align}\Gamma&=\frac{\partial \mathcal{P}(t)}{\partial
t}\\
&=\frac{2\pi}{\hbar} \frac{N_{k\lambda}}{V^2}
\frac{4\pi^2 \hbar^2 e^4}{m^2\omega_{k}\omega_{k'}}\times
\delta\left(\epsilon_n-\epsilon_0-\hbar\omega_{k}+\hbar\omega_{k'}\right)
\times\\
&\left|\delta_{n,0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}-\frac{1}{m}
\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\right|^2\\
\end{align}


      We observe that, \frac{i}{\hbar}[\mathcal{H}_0^{(at)},r]=\frac{1}{m}p\;\;\Rightarrow\;\;
\langle n| p |\alpha\rangle=\frac{i}{\hbar}m\langle
n|[\mathcal{H}_0^{(at)},r]
|\alpha\rangle=\frac{i}{\hbar}m(\epsilon_n-\epsilon_{\alpha})\langle n| r
|\alpha\rangle


      Taking the limit \eta\rightarrow 0 where appropriate, we get

      \begin{align}&\frac{1}{m} \sum_{\alpha}\left( \frac{\langle n| p
|\alpha\rangle \cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\frac{i}{\hbar} \sum_{\alpha}\left(
\frac{(\epsilon_n-\epsilon_{\alpha})\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_n+\hbar\omega_{k}-i\hbar\eta}+
 \frac{(\epsilon_{\alpha}-\epsilon_0)\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\frac{i}{\hbar} \sum_{\alpha}\left(-\langle n| r |\alpha\rangle
\cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}+ \langle n| p |\alpha\rangle
\cdot \hat{\lambda}^{'*}\langle \alpha| r
|0\rangle\cdot{\hat{\lambda}}\right)\\

&+i\omega_{k} \sum_{\alpha}\left( \frac{\langle n| r
|\alpha\rangle \cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_n+\hbar\omega_{k}-i\hbar\eta}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\delta_{n0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}+i\omega_{k}
\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_n+\hbar\omega_{k}-i\hbar\eta}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\ 
\end{align}

      where in the second line we have used the energy-conserving \delta-function, which gives \epsilon_n+\hbar\omega_{k'}=\epsilon_0+\hbar\omega_{k}. Using the above commutation relation again we finally find

      \begin{align}&\frac{1}{m} \sum_{\alpha}\left( \frac{\langle n| p
|\alpha\rangle \cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\delta_{n0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}+m\omega_{k}\omega_{k'}
\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| r
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
\end{align}

      Therefore

      \Gamma =\frac{2\pi}{\hbar} \frac{N_{k\lambda}}{V^2} \frac{4\pi^2
\hbar^2 e^4}{m^2\omega_{k}\omega_{k'}}\times
\delta\left(\epsilon_n-\epsilon_0-\hbar\omega_{k}+\hbar\omega_{k'}\right)m^2\omega^2_{k}\omega^2_{k'}\times

      \left|\sum_{\alpha}\left( \frac{\langle n|r|\alpha \rangle \cdot \hat{\lambda}\langle \alpha| r|0\rangle \cdot{\hat{\lambda}}^{'*}} {\epsilon_\alpha-\epsilon_0 + \hbar \omega_{k'}-i\hbar \eta} + \frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r|0\rangle \cdot\hat{\lambda}}{\epsilon_\alpha-\epsilon_0 - \hbar \omega_{k}-i\hbar \eta}\right)\right|^2


      To get the total transition rate we need to sum over all wavevectors in a solid angle dΩ'. \begin{align}dw\!\!&=\!\!\sum_{k'\in d\Omega'}\Gamma \\&= \frac{2\pi}{\hbar}
\frac{d\Omega'
\omega^2_{k'}}{8\pi^3c^3\hbar}\frac{N_{k\lambda}}{V}
\frac{4\pi^2 \hbar^2
e^4}{m^2\omega_{k}\omega_{k'}}m^2\omega^2_{k}\omega^2_{k'}\left|\sum_{\alpha}\left(
\frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}\langle
\alpha| r
|0\rangle\cdot {\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot {\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)
\right|^2\\
&=d\Omega'\frac{e^4\omega_{k}\omega^3_{k'}}{c^3}\frac{N_{k\lambda}}{V}
\left|\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| r
|0\rangle\cdot {\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
 \frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot {\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)
\right|^2\\\end{align}

      where \epsilon_n+\hbar\omega_{k'}=\epsilon_0+\hbar\omega_{k}. Finally, the differential cross-section is found by dividing by the photon flux c N_{k\lambda}/V to yield

      \frac{d\sigma}{d\Omega'}
=\frac{e^4\omega_{k}\omega^3_{k'}}{c^4}
\left|\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| r
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{0}-\epsilon_{\alpha}-\hbar\omega_{k'}+i\hbar\eta}+
 \frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{0}-\epsilon_{\alpha}+\hbar\omega_{k}+i\hbar\eta}\right)
\right|^2


      Therefore, for elastic scattering (\omega_{k'}=\omega_{k}) the scattering cross-section is inversely proportional to the fourth power of the wavelength. This explains why the sky is blue: blue light, having a shorter wavelength, is scattered more strongly than red light.
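      As a quick numerical illustration of the \lambda^{-4} dependence, the sketch below (plain Python; the two wavelengths are round illustrative values, not taken from the text) compares the relative elastic-scattering strength of blue and red light.

```python
# Rayleigh (elastic) scattering strength scales as 1/lambda^4.
# The wavelengths below are illustrative round numbers.
blue_nm = 450.0   # typical blue wavelength, nm
red_nm = 650.0    # typical red wavelength, nm

# Ratio of scattering cross-sections sigma_blue / sigma_red
ratio = (red_nm / blue_nm) ** 4
print(f"blue light scatters about {ratio:.1f}x more strongly than red")
```

      The factor of roughly four is why scattered skylight is dominated by the blue end of the visible spectrum.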


      Example: 2010 final exam problem [[1]]


      Interaction between an atom and an external field

      Now consider an atom interacting with a classical travelling wave of frequency \omega_p. Under the dipole approximation, the Hamiltonian of the system can be written as

       
\mathcal{H}=\underbrace{\frac{\vec{p}^2}{2m}+V(\vec{r})}_{\mathcal{H}_{0}} \underbrace{-\frac{e}{mc} \vec{A}(t)\cdot\vec{p} }_{\mathcal{H}_{int}}

      where

       
\mathbf{A}(t)=A \boldsymbol{\lambda}e^{-i\omega_p t}+A^{*} \boldsymbol{\lambda}^{*} e^{i\omega_p t}


      Assume the driving wave only induces transitions between two levels, |e\rangle and |g\rangle , whose energy difference is \hbar\omega_{0}. Then the Hamiltonian can be rewritten as:


       \begin{align} \mathcal{H}_{0}&=\left ( |e\rangle \langle e| + |g\rangle \langle g| \right) \left(\frac{\vec{p}^2}{2m}+V(r)\right)  \left( |e\rangle \langle e| + |g\rangle \langle g| \right) \\ &= \frac{\hbar\omega_{0}}{2}\left (|e\rangle \langle e|-|g\rangle \langle g| \right)   \end{align}

       \begin{align}
\mathcal{H}_{int}=-\frac{e}{mc  } \left ( 
   A   \boldsymbol{\lambda}    \cdot   \langle e| \vec{p} |g\rangle   |e\rangle \langle g| e^{-i\omega_p t}

+   A^{*} \boldsymbol{\lambda}^{*} \cdot   \langle e| \vec{p} |g\rangle   |e\rangle \langle g|        e^{i\omega_p t}    

+   A \boldsymbol{\lambda} \cdot   \langle g| \vec{p} |e\rangle   |g\rangle \langle e| e^{-i\omega_p t}

+  A^{*} \boldsymbol{\lambda}^{*} \cdot   \langle g| \vec{p} |e\rangle    |g\rangle \langle e| e^{i\omega_p t} \right )


\end{align}


      In the interaction picture, the Hamiltonian becomes

       \begin{align}
\mathcal{H'}=-\frac{e}{mc  } \left ( 
   A   \boldsymbol{\lambda}    \cdot   \langle e| \vec{p} |g\rangle   |e\rangle \langle g| e^{i(\omega_{0}-\omega_p) t}

+   A^{*} \boldsymbol{\lambda}^{*} \cdot   \langle e| \vec{p} |g\rangle   |e\rangle \langle g|        e^{i(\omega_{0}+\omega_p) t}    

+   A \boldsymbol{\lambda} \cdot   \langle g| \vec{p} |e\rangle   |g\rangle \langle e| e^{-i(\omega_{0}+\omega_p) t}

+  A^{*} \boldsymbol{\lambda}^{*} \cdot   \langle g| \vec{p} |e\rangle    |g\rangle \langle e| e^{-i(\omega_{0}-\omega_p) t} \right )


\end{align}


      If \omega_{0} \approx \omega_{p} , we can drop the fast oscillating terms. (This can be seen by integrating the Hamiltonian with respect to time: e^{it(\omega_{0}-\omega_{p})} yields a factor 1/(\omega_{0}-\omega_{p}) while e^{it(\omega_{0}+\omega_{p})} yields 1/(\omega_{0}+\omega_{p}), and clearly 1/(\omega_{0}-\omega_{p}) \gg 1/(\omega_{0}+\omega_{p}).) This technique is called the rotating wave approximation. The resulting Hamiltonian is:


       \begin{align}
\mathcal{H'}= \Omega  |e\rangle \langle g| e^{i(\omega_{0}-\omega_p) t}

+  \Omega^{*} |g\rangle \langle e| e^{-i(\omega_{0}-\omega_p) t} 

\end{align}


      where  \Omega \equiv  -\frac{e}{mc  }  A   \boldsymbol{\lambda}    \cdot   \langle e| \vec{p} |g\rangle     .


      If the driving field is exactly resonant with the atom, ω0 = ωp, then

       \begin{align}
\mathcal{H'}= \Omega  |e\rangle \langle g| 

+  \Omega^{*} |g\rangle \langle e| 

\end{align}


      Let |\psi (t) \rangle = a(t)|g \rangle + b(t)|e \rangle .

      Then

      
\begin{align}
i\hbar\dot{a}(t) & = \Omega^{*} b(t)\\
i\hbar\dot{b}(t) & = \Omega a(t)
\end{align}

      
Differentiating once more,
\begin{align}
\ddot{a}(t) & = -\frac{|\Omega|^{2}}{\hbar ^{2}} a(t)\\
\ddot{b}(t) & = -\frac{|\Omega|^{2}}{\hbar ^{2}} b(t)
\end{align}

      If the system initially sits in the excited state then, up to constant phase factors,

      
\begin{align}
a(t) & = \sin\left(\frac{|\Omega|}{\hbar }t\right) \\
b(t) & = \cos\left(\frac{|\Omega|}{\hbar }t\right)
\end{align}

       |\psi(t) \rangle =  \cos\left(\frac{|\Omega|}{\hbar }t\right) |e \rangle + \sin\left(\frac{|\Omega|}{\hbar }t\right) |g \rangle

      So the system oscillates between the upper and lower states with frequency  \frac{|\Omega|}{\hbar } ; this is called the Rabi frequency.
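      The coupled amplitude equations above are easy to check numerically. The sketch below (plain Python, units \hbar = 1, with an arbitrary illustrative complex coupling \Omega) integrates them with a fourth-order Runge-Kutta step and compares the excited-state population with the closed-form \cos^2(|\Omega|t/\hbar).

```python
import cmath
import math

# Numerical check of the Rabi solution: integrate (hbar = 1)
#   i da/dt = conj(Omega) * b,   i db/dt = Omega * a
# with a(0) = 0, b(0) = 1 (atom starts in |e>), then compare |b(t)|^2
# with cos^2(|Omega| t).  Omega is an arbitrary illustrative choice.
Omega = 0.8 * cmath.exp(0.3j)

def deriv(y):
    a, b = y
    return (-1j * Omega.conjugate() * b, -1j * Omega * a)

def rk4_step(y, dt):
    # one classical Runge-Kutta step for the two coupled amplitudes
    k1 = deriv(y)
    k2 = deriv(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k1)))
    k3 = deriv(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k2)))
    k4 = deriv(tuple(yi + dt * ki for yi, ki in zip(y, k3)))
    return tuple(yi + dt / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
                 for yi, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4))

steps, dt = 2000, 0.001
y = (0.0 + 0j, 1.0 + 0j)          # (a, b) with the atom in |e>
for _ in range(steps):
    y = rk4_step(y, dt)
t = steps * dt

p_excited = abs(y[1]) ** 2
print(p_excited, math.cos(abs(Omega) * t) ** 2)   # the two values agree
```

      The agreement (and the conserved norm |a|^2 + |b|^2 = 1) confirms the oscillation at the Rabi frequency |\Omega|/\hbar.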

      Interaction between a trapped ion and an external EM wave

      An ion can be trapped quite stably in a device such that it vibrates only along one direction, with frequency ΩT. Now consider such an ion interacting with a classical travelling wave of frequency ωp, where  \omega_p \approx \omega_0 . Under the dipole approximation, the Hamiltonian of the system can be written as


       
\mathcal{H}=\underbrace{  \frac{\hbar\omega_{0}}{2}\left (|e\rangle \langle e|-|g\rangle \langle g| \right)  + \hbar\Omega_{T}\hat{b}^{\dagger} \hat{b}   }_{\mathcal{H}_{0}} \underbrace{-\frac{e}{mc} \vec{A}(\vec{r},t)\cdot\vec{p} }_{\mathcal{H}_{int}}

      where

       \begin{align}
\mathbf{A}(\vec{r},t)&=A \boldsymbol{\lambda}e^{i(\vec{k}_{p}\cdot \vec{r}-\omega_p t) }+A^{*} \boldsymbol{\lambda}^{*} e^{-i (\vec{k}_{p}\cdot\vec{r}-\omega_p t)} \\ &= A \boldsymbol{\lambda}e^{i(\eta(\hat{b}^{\dagger} + \hat{b}  )-\omega_p t) }+A^{*} \boldsymbol{\lambda}^{*} e^{-i (\eta(\hat{b}^{\dagger} + \hat{b}  )-\omega_p t)}

\end{align}

      where \eta \equiv k_{p} \cos\theta \sqrt{\frac{\hbar}{ 2 M \Omega_{T}  }}  is called the Lamb-Dicke parameter of the trapped ion. Here M is the mass of the trapped ion and θ is the angle between the driving wave and the vibrational direction of the ion. We assume η is small, which means the vibrational amplitude of the ion is much smaller than the wavelength of the driving wave. Then

       \begin{align}
\mathbf{A}(\vec{r},t)&= A \boldsymbol{\lambda}  (1 + i\eta(\hat{b}^{\dagger} + \hat{b}  ) )  e^{-i\omega_p t }+A^{*} \boldsymbol{\lambda}^{*} (1 - i\eta(\hat{b}^{\dagger} + \hat{b}  ) )  e^{i\omega_p t}

\end{align}
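      To get a numerical feel for the Lamb-Dicke parameter, the sketch below evaluates the dimensionally consistent definition \eta = k_p \cos\theta \sqrt{\hbar/2M\Omega_T} for an illustrative set of parameters (a 729 nm drive on a 40 amu ion in a 2\pi \times 1 MHz trap, aligned with the vibrational axis); none of these numbers come from the text.

```python
import math

# Typical magnitude of the Lamb-Dicke parameter
#   eta = k_p * cos(theta) * sqrt(hbar / (2 * M * Omega_T)).
# The ion mass, wavelength, and trap frequency are illustrative choices.
hbar = 1.054571817e-34          # J*s
amu = 1.66053906660e-27         # kg
M = 40 * amu                    # mass of a 40-amu ion
Omega_T = 2 * math.pi * 1.0e6   # trap (vibrational) frequency, rad/s
wavelength = 729e-9             # drive wavelength, m
theta = 0.0                     # drive along the vibrational axis

k_p = 2 * math.pi / wavelength
eta = k_p * math.cos(theta) * math.sqrt(hbar / (2 * M * Omega_T))
print(eta)   # on the order of 0.1: zero-point spread << wavelength
```

      A value of order 0.1 justifies keeping only the first-order term in η in the expansion above.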


      In the interaction picture, the Hamiltonian under the rotating wave approximation is

       \begin{align}
\mathcal{H'}= \Omega (1 + i\eta(\hat{b}^{\dagger}e^{i\Omega_{T}t} + \hat{b}e^{-i\Omega_{T}t}  ) )  |e\rangle \langle g|     e^{i(\omega_{0}-\omega_p) t}

+  \Omega^{*}(1 - i\eta(\hat{b}^{\dagger}e^{i\Omega_{T}t} + \hat{b}e^{-i\Omega_{T}t}  ) )  |g\rangle \langle e|   e^{-i(\omega_{0}-\omega_p) t} 

\end{align}


      where  \Omega \equiv  -\frac{e}{mc  }  A   \boldsymbol{\lambda}    \cdot   \langle e| \vec{p} |g\rangle     .


      Assume that the system is initially in the excited internal state  | e \rangle and the vibrational state |n \rangle .


      If the resonance condition is the carrier resonance, ω0 = ωp, then dropping the non-resonant oscillating terms as before, we arrive at


       
\mathcal{H}_{1}= \Omega  |e\rangle \langle g| 

+  \Omega^{*} |g\rangle \langle e|


      i.e., the Hamiltonian is the same as that of a simple two-level atom interacting with an external field, and we get

       |\psi(t) \rangle =  \cos\left(\frac{|\Omega|}{\hbar }t\right) |e \rangle|n\rangle + \sin\left(\frac{|\Omega|}{\hbar }t\right) |g \rangle|n \rangle  .



      If the resonance condition is the first blue sideband resonance, i.e. ωp = ω0 + ΩT, then dropping the non-resonant oscillating terms we arrive at


       \begin{align}
\mathcal{H}_{2} = i\eta  \Omega \hat{b}^{\dagger} |e\rangle \langle g|   

-i\eta\Omega^{*}\hat{b}  |g\rangle \langle e|  

\end{align}


      The evolution operator can be obtained by direct series expansion (taking \Omega real for simplicity):  \begin{align}

e^{-i \frac{\mathcal{H}_{2}t}{\hbar}} =\cos\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}^{\dagger}\hat{b}}\right)|e\rangle\langle e|+

  \cos\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}\hat{b}^{\dagger}}\right)|g\rangle\langle g|  
 
 +\hat{b}^{\dagger}\frac{\sin\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}\hat{b}^{\dagger}}\right)}{\sqrt{\hat{b}\hat{b}^{\dagger}}} |e \rangle \langle g|
 
 -\hat{b}\frac{\sin\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}^{\dagger}\hat{b}}\right)}{\sqrt{\hat{b}^{\dagger}\hat{b}}} |g \rangle \langle e|


\end{align}

      Thus the state would evolve as

       |\psi(t)\rangle=\cos\left(\sqrt{n}\,\eta \frac{|\Omega|}{\hbar}t\right)|e\rangle|n\rangle
    
    -\sin\left(\sqrt{n}\,\eta \frac{|\Omega|}{\hbar}t\right)|g \rangle |n-1\rangle


      i.e., it oscillates between |g \rangle |n-1\rangle and |e \rangle |n\rangle



      On the other hand, if the resonance condition is the first red sideband resonance, i.e. ωp = ω0 − ΩT, then dropping the non-resonant oscillating terms we arrive at


       \begin{align}
\mathcal{H}_{3} = i\eta  \Omega \hat{b} |e\rangle \langle g|   

-i\eta\Omega^{*}\hat{b}^{\dagger}  |g\rangle \langle e|  

\end{align}


      The evolution operator can be obtained by direct series expansion (taking \Omega real for simplicity):  \begin{align}

e^{-i \frac{\mathcal{H}_{3}t}{\hbar}} =\cos\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}\hat{b}^{\dagger}}\right)|e\rangle\langle e|+

  \cos\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}^{\dagger}\hat{b}}\right)|g\rangle\langle g|  
 
 +\hat{b}\frac{\sin\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}^{\dagger}\hat{b}}\right)}{\sqrt{\hat{b}^{\dagger}\hat{b}}} |e \rangle \langle g|
 
 -\hat{b}^{\dagger}\frac{\sin\left(\eta \frac{|\Omega|}{\hbar}t\sqrt{\hat{b}\hat{b}^{\dagger}}\right)}{\sqrt{\hat{b}\hat{b}^{\dagger}}} |g \rangle \langle  e|


\end{align}

      Thus the state would evolve as

       |\psi(t)\rangle=\cos\left(\sqrt{n+1}\,\eta \frac{|\Omega|}{\hbar}t\right)|e\rangle|n\rangle
    
    -\sin\left(\sqrt{n+1}\,\eta \frac{|\Omega|}{\hbar}t\right)|g \rangle |n+1\rangle


      i.e., it oscillates between |g \rangle |n+1\rangle and |e \rangle |n\rangle .
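      Either sideband Hamiltonian connects only two states at a time, so the dynamics can be verified in the relevant two-dimensional subspace. The sketch below (plain Python, \hbar = 1, with an arbitrary illustrative real coupling \kappa and phonon number n) exponentiates the 2×2 block of a coupling of the form i\kappa\hat{b}^{\dagger}|e\rangle\langle g| - i\kappa\hat{b}|g\rangle\langle e| restricted to the pair \{|g,n\rangle, |e,n+1\rangle\} and confirms the \sqrt{n+1}-enhanced Rabi oscillation.

```python
import math

# Restricted to the pair {|g,n>, |e,n+1>}, a sideband-type coupling
# H = i*kappa*b^dag|e><g| - i*kappa*b|g><e|  (kappa real, hbar = 1)
# is the 2x2 matrix below; the flip probability oscillates at the
# phonon-enhanced Rabi frequency sqrt(n+1)*kappa.  kappa, n, t are
# illustrative values only.
kappa, n, t = 0.3, 4, 2.5
g = math.sqrt(n + 1) * kappa

# Basis ordering: index 0 -> |g,n>, index 1 -> |e,n+1>
H = [[0.0 + 0j, -1j * g],
     [1j * g, 0.0 + 0j]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(A, terms=60):
    # exp(A) for a 2x2 matrix by direct Taylor series (fine for small norms)
    result = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    term = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

U = expm2([[-1j * t * H[i][j] for j in range(2)] for i in range(2)])

# Starting from |g,n>, the probability of flipping to |e,n+1>:
p_flip = abs(U[1][0]) ** 2
print(p_flip, math.sin(g * t) ** 2)   # the two numbers agree
```

      The same two-level reduction, with n+1 replaced by n, reproduces the other sideband manifold.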

      Non-Perturbative Methods

      Apart from the conventional perturbative methods, there also exist non-perturbative methods to approximately determine the lowest energy eigenstate or ground state, and some excited states, of a given system. Superconductivity and the fractional quantum Hall effect are examples of problems that were solved using non-perturbative methods. One of the important methods in the approximate determination of the wave function and eigenvalues of a system is the Variational Method, which is based on the variational principle. The variational method is a very general one that can be used whenever the equations can be put into the variational form. The variational method is now a springboard to many numerical computations.

      Principle of the Variational Method

      Consider a completely arbitrary system with time-independent Hamiltonian \mathcal{H}, and assume that its entire spectrum is discrete and non-degenerate.

      \mathcal{H}|{\varphi}_{n}\rangle=\mathcal{E}_{n}|{\varphi}_{n}\rangle ; n = 0,1,2,\dots \!

      Let's apply the variational principle to find the ground state of the system. Let  |{\psi}\rangle be an arbitrary ket of the system. We can define the expectation value of the Hamiltonian as

      \langle\mathcal{H}\rangle=\frac{\langle{\psi}|\mathcal{H}|{\psi}\rangle}{\langle{\psi}|{\psi}\rangle}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.1)

      Of course, if the wavefunction is normalized so that \langle{\psi}|{\psi}\rangle=1 , then the expectation value of the Hamiltonian is just: \langle\mathcal{H}\rangle=\langle{\psi}|\mathcal{H}|{\psi}\rangle

      The variational principle states that,

      \langle\mathcal{H}\rangle\geq \mathcal{E}_0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.2)

      The equality \langle\mathcal{H}\rangle= \mathcal{E}_0 holds only if the wave function used in the expectation value is the exact ground state wave function of the Hamiltonian; it does not hold for unperturbed or approximate wave functions.

      Because the expectation value of the Hamiltonian is always greater than or equal to the ground state energy, this gives an upper bound for the ground state energy when using unperturbed wavefunctions to calculate the expectation value.

      If we are making a guess at the wavefunction but do not know it explicitly, we can write it in terms of a parameter and then minimize the expectation value of the Hamiltonian with respect to that parameter. For example, we can write a trial ground state wavefunction for the hydrogen atom, with variational parameter  b \! , as:

      \psi= \sqrt{\dfrac{b^{3}}{\pi}}\, e^{-b r}

      We would then minimize the expectation value of  \mathcal{H} \! with respect to  b \! , lowering the upper bound as far as possible to obtain a better estimate of the true ground state energy.
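      A minimal numerical sketch of this minimization, in atomic units (\hbar = m = e = 1, a_0 = 1): for the trial \psi \propto e^{-br} the standard integrals give \langle T\rangle = b^2/2 and \langle V\rangle = -b, so the variational energy is E(b) = b^2/2 - b.

```python
# Variational energy for the hydrogen trial function psi ~ exp(-b r),
# worked in atomic units (hbar = m = e = 1, a_0 = 1).  With this trial,
# <T> = b^2/2 and <V> = -b, so E(b) = b^2/2 - b.
def energy(b):
    return 0.5 * b * b - b

# Minimize by a simple scan over b in [0.5, 2.0].  The minimum lands at
# b = 1, E = -1/2 hartree, reproducing the exact ground state because
# this particular trial family happens to contain it.
bs = [0.5 + 0.001 * i for i in range(1501)]
b_best = min(bs, key=energy)
print(b_best, energy(b_best))   # approximately 1.0 and -0.5
```

      For a trial family that does not contain the exact eigenfunction, the same scan would return a strict upper bound rather than the exact energy.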

      In some cases a lower bound can also be found by a similar method. Writing \mathcal{H}=T+V, where the kinetic energy  T \! is a positive operator (\langle{\psi}|T|{\psi}\rangle\geq 0 for every |{\psi}\rangle), we have \mathcal{E}_0=\langle{\psi}_0|T+V|{\psi}_0\rangle \geq \langle{\psi}_0|V|{\psi}_0\rangle \geq V_{min}. Therefore, the minimum value of the potential is a lower bound for the ground state energy.


      Since the exact eigenfunctions  |{\varphi}_n\rangle form a complete set, we can express our arbitrary ket  |{\psi}\rangle as a linear combination of them. Therefore, we have

       |{\psi}\rangle=\sum_{n} C_n |{\varphi}_n\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.3)

      Applying \mathcal{H} and taking the inner product with \langle{\psi}| , using the orthonormality of the |{\varphi}_n\rangle , we get

      \langle{\psi}|\mathcal{H}|{\psi}\rangle= \sum_{n} |C_n|^{2}\langle{\varphi}_n| \mathcal{H} |{\varphi}_n\rangle =\sum_{n}|C_n|^{2} \mathcal{E}_n


      However,  \mathcal{E}_n \geq \mathcal{E}_0 . So, we can write the above equation as

      \langle{\psi}|\mathcal{H}|{\psi}\rangle \geq \mathcal{E}_0 \sum_{n} |C_n|^{2}

      Or

       \mathcal{E}_0 \leq \frac{ \langle{\psi}|\mathcal{H}|{\psi}\rangle } {\langle{\psi}|{\psi}\rangle} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.4)

      with {\langle{\psi}|{\psi}\rangle}=\sum_{n} |C_n|^{2}, thus proving eq. #4.1.2.


      Thus eq. #4.1.2 gives an upper bound to the exact ground state energy. For the equality in eq. #4.1.2 to hold, all coefficients except \mathcal{C}_0 must vanish; then |{\psi}\rangle is the ground state eigenvector of the Hamiltonian and \mathcal{E}_0 the ground state eigenvalue.

      Generalization of Variational Principle: The Ritz Theorem.

      We claim that the expectation value of the Hamiltonian is stationary in the neighborhood of its discrete eigenvalues. Let us again consider the expectation value of the Hamiltonian eq.#4.1.1.

      \langle\mathcal{H}\rangle=\frac{\langle{\psi}|\mathcal{H}|{\psi}\rangle}{\langle{\psi}|{\psi}\rangle}

      Here \langle\mathcal{H}\rangle is considered as a functional of |\psi\rangle. Let us define the variation of \langle\mathcal{H}\rangle such that |\psi\rangle goes to |\psi\rangle +| \delta \psi\rangle , where | \delta \psi\rangle is infinitesimally small. Let us rewrite eq.#4.1.1 as

      \langle\mathcal{H}\rangle\langle{\psi}|{\psi}\rangle=\langle{\psi}|\mathcal{H}|{\psi}\rangle\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.2.1).


      Differentiating the above relation, \langle{\psi}|{\psi}\rangle\delta\langle\mathcal{H}\rangle+\langle\mathcal{H}\rangle[\langle{\psi}|\delta{\psi}\rangle+\langle\delta{\psi}|{\psi}\rangle]=\langle{\psi}|\mathcal{H}|{\delta\psi}\rangle+\langle{\delta\psi}|\mathcal{H}|{\psi}\rangle\qquad \qquad \qquad \qquad \qquad (4.2.2)


      However, \langle\mathcal{H}\rangle is just a c-number, so we can rewrite eq #4.2.2 as

      \langle{\psi} | {\psi}\rangle\delta\langle\mathcal{H}\rangle =\langle{\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle] | {\delta\psi}\rangle+\langle{\delta\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle]|{\psi}\rangle\qquad \qquad \qquad \qquad \qquad (4.2.3).


      If \delta \langle \mathcal{H}\rangle=0 , then the mean value of the Hamiltonian is stationary.

      Therefore,

      \langle{\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle] | {\delta\psi}\rangle+\langle{\delta\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle]|{\psi}\rangle=0 .


      Define |{\varphi}\rangle =[\mathcal{H}-\langle\mathcal{H}\rangle] | {\psi}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.2.4).

      Hence,eq. #4.2.3 becomes \langle{\varphi}|\delta{\psi}\rangle+ \langle\delta{\psi}|{\varphi}\rangle=0 \qquad \qquad \qquad \qquad \qquad \qquad (4.2.5).


      We can define the variation of |{\psi}\rangle as

      |\delta{\psi}\rangle=\delta\lambda|{\varphi}\rangle,


      with \delta\lambda \! being an infinitesimally small real number. Therefore eq #4.2.5 can be written as

      2\langle{\varphi}|{\varphi}\rangle \delta\lambda=0 \qquad \qquad \qquad \qquad \qquad \qquad (4.2.6)

      Since \delta\lambda is arbitrary, the norm \langle{\varphi}|{\varphi}\rangle must vanish, and hence |{\varphi}\rangle itself must be zero. Looking back at eq #4.2.4, it is then clear that we can rewrite it as an eigenvalue problem.

      \mathcal{H}|{\psi}\rangle=\langle\mathcal{H}\rangle|{\psi}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.2.7).


      Finally, we can say that the expectation value of the Hamiltonian is stationary if and only if the wavefunction |{\psi}\rangle is an eigenvector of the Hamiltonian, the stationary values of \langle\mathcal{H}\rangle being precisely the eigenvalues of the Hamiltonian.

      The general method is to choose an approximate trial wavefunction that contains one or more parameters  \alpha, \beta, \gamma, \dots \! . If the expectation value \langle\mathcal{H}\rangle can be differentiated with respect to these parameters, its extrema can be found from the equations

      \frac{\partial\langle\mathcal{H}\rangle}{\partial\alpha}=\frac{\partial\langle\mathcal{H}\rangle}{\partial\beta}=\frac{\partial\langle\mathcal{H}\rangle}{\partial\gamma}= \dots = 0

      The absolute minimum of the expectation value of the Hamiltonian obtained by this method corresponds to an upper bound on the ground state energy. The other relative extrema correspond to excited states. A great virtue of the variational method is that even a poor approximation to the actual wave function can yield an excellent approximation to the actual energy.
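      To illustrate the one-parameter minimization on a problem with no exact solution in closed form, take the quartic oscillator H = p^2/2 + x^4 (units \hbar = m = 1 and unit quartic coupling, our choices) with a Gaussian trial \psi \propto e^{-\alpha x^2}. Standard Gaussian integrals give \langle T\rangle = \alpha/2 and \langle x^4\rangle = 3/(16\alpha^2), so E(\alpha) = \alpha/2 + 3/(16\alpha^2).

```python
# Variational sketch for the quartic oscillator H = p^2/2 + x^4
# (units hbar = m = 1) with a Gaussian trial psi ~ exp(-alpha x^2).
# Gaussian moments give <T> = alpha/2 and <x^4> = 3/(16 alpha^2).
def energy(alpha):
    return 0.5 * alpha + 3.0 / (16.0 * alpha * alpha)

alphas = [0.5 + 0.0001 * i for i in range(10001)]   # scan alpha in [0.5, 1.5]
a_best = min(alphas, key=energy)
e_best = energy(a_best)

# Analytic minimum: alpha = (3/4)**(1/3) ~ 0.909, E ~ 0.681 -- an upper
# bound sitting a couple of percent above the numerically known
# ground-state energy (about 0.668), despite the crude one-parameter trial.
print(a_best, e_best)
```

      Even this simple Gaussian family gets the energy to within a few percent, illustrating the remark above that a poor wavefunction can still give a good energy.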

      Upper Bound on First Excited State

      We claim that if \langle{\psi}|{\varphi}_0\rangle=0, then \langle\mathcal{H}\rangle \geq \mathcal{E}_1 where \mathcal{E}_1 is the energy of the first excited state and |{\varphi}_0\rangle is the exact ground state of the Hamiltonian.

      From eq. #4.1.3 it is clear that if the above condition is satisfied, then  \mathcal{C}_0=0 . Therefore, we can write the expectation value of the Hamiltonian as

      \langle\mathcal{H}\rangle =\sum_{n=1} |\mathcal{C}_n|^2\mathcal{E}_n \geq \mathcal{E}_1 \sum_{n=1} |\mathcal{C}_n|^2

      Thus, if we can find a suitable trial wavefunction that is orthogonal to the exact ground state wavefunction, then by calculating the expectation value of the Hamiltonian we get an upper bound on the first excited state energy. The trouble is that we usually do not know the exact ground state (which is one reason why we implement the variational principle in the first place). However, if the Hamiltonian has an even (symmetric) potential, then the exact ground state will be an even function, and hence any odd trial function will be a suitable candidate for the first excited state wavefunction.
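      As a concrete check of this idea, take the harmonic oscillator H = p^2/2 + x^2/2 (units \hbar = m = \omega = 1) and the odd trial \psi \propto x\,e^{-\alpha x^2}, which is automatically orthogonal to the even ground state. Gaussian moments give \langle T\rangle = 3\alpha/2 and \langle V\rangle = 3/(8\alpha).

```python
# Upper bound on the first excited state of the harmonic oscillator
# H = p^2/2 + x^2/2 (units hbar = m = omega = 1).  The odd trial
# psi ~ x * exp(-alpha x^2) is orthogonal to the (even) ground state,
# so its energy expectation bounds E_1 from above.  Gaussian moments:
# <T> = 3*alpha/2 and <V> = 3/(8*alpha).
def energy(alpha):
    return 1.5 * alpha + 3.0 / (8.0 * alpha)

alphas = [0.1 + 0.0001 * i for i in range(20001)]   # scan alpha in [0.1, 2.1]
a_best = min(alphas, key=energy)

# The minimum at alpha = 1/2 gives E = 3/2, the exact first excited level,
# because this trial family contains the exact eigenfunction.
print(a_best, energy(a_best))
```

      Here the bound is saturated only because the trial family happens to contain the exact first excited state; in general the scan returns a strict upper bound on E_1.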


      Trial Wavefunction with Linear Parameters and the Hylleraas-Undheim Theorem

      Let us consider a set of basis wavefunctions \left \{ \psi_n  \right \} . Any arbitrary trial wavefunction may be constituted as a linear combination of them. That is

      |\phi\rangle = \sum_{n} a_n |\psi_{n}\rangle \!

      where the a_n \! are linear variational parameters. The energy expectation value is now defined as

      E_{\phi} = \left \langle H \right \rangle= \frac{\langle\phi|H|\phi\rangle}{\langle\phi|\phi\rangle}

      E_{\phi} = \frac{\sum_{n}\sum_{n'} a_n a_{n'} \langle\psi _{n}|H| \psi _{n'}\rangle }{\sum_{n}\sum_{n'} a_n a_{n'} \langle\psi _{n}|\psi _{n'}\rangle } \qquad \qquad \qquad \qquad \qquad \qquad (A)



      Since we have not assumed anything about the symmetry or orthonormality of \left \{ \psi_n  \right \}, let us define

      H_{nn'}= \langle\psi _{n}| H |\psi _{n'}\rangle and \Delta_{nn'} = \langle\psi _{n}| \psi _{n'}\rangle


      If the \left \{ \psi_n  \right \} are orthonormal, then \Delta_{nn'}= \delta_{nn'}

      Therefore equation (A) stands as,

       E_{\phi}= \frac{\sum_{n}\sum_{n'} a_n a_{n'} H_{nn'}}{\sum_{n}\sum_{n'} a_n a_{n'} \Delta_{nn'}}

      E_{\phi}\sum_{n}\sum_{n'} a_n a_{n'} \Delta_{nn'} - \sum_{n}\sum_{n'} a_n a_{n'} H_{nn'} = 0


      Now minimizing the above equation with respect to a_{n'}\! , we have


      \sum_{n} a_n E_{\phi} \Delta_{nn^{'}} - \sum_{n} a_n  H_{nn^{'}} = 0

      Omitting the subscript \phi\! we can write

      \sum_{n} a_n ( H_{nn^{'}} - E \Delta_{nn^{'}})\! = 0

      For n' = 1,2,\dots,N we have

      a_1 ( H_{11} - E \Delta_{11}) + a_2 ( H_{12} - E \Delta_{12}) + \dots + a_N ( H_{1N} - E \Delta_{1N}) \! = 0

      a_1 ( H_{21} - E \Delta_{21}) + a_2 ( H_{22} - E \Delta_{22}) + \dots + a_N ( H_{2N} - E \Delta_{2N}) \! = 0

      \vdots

      a_1 ( H_{N1} - E \Delta_{N1}) + a_2 ( H_{N2} - E \Delta_{N2}) + \dots + a_N ( H_{NN} - E \Delta_{NN}) \! = 0


      Now in matrix form, 
\begin{pmatrix}
 H_{11} - E \Delta_{11} &  H_{12} - E \Delta_{12}  & \dots  & H_{1N} - E \Delta_{1N}\\ 
 .&  .&  .&. \\ 
 .&  .&  .&. \\ 
 H_{N1} - E \Delta_{N1} &  H_{N2} - E \Delta_{N2}  & \dots  & H_{NN} - E \Delta_{NN}
\end{pmatrix} \begin{pmatrix}
a_{1}\\ 
.\\ 
.\\ 
a_{N}
\end{pmatrix} = 0

      For a non-trivial solution for the a_n \! , the secular determinant must vanish:

      \begin{vmatrix}
 H_{11} - E \Delta_{11} &  H_{12} - E \Delta_{12}  & \dots  & H_{1N} - E \Delta_{1N}\\ 
 .&  .&  .&. \\ 
 .&  .&  .&. \\ 
 H_{N1} - E \Delta_{N1} &  H_{N2} - E \Delta_{N2}  & \dots  & H_{NN} - E \Delta_{NN}\end{vmatrix} = 0


      The solution of this N \times N secular determinant gives N roots, say E_{0}^{N}\leq E_{1}^{N}\leq\dots\leq E_{N-1}^{N}. The lowest of them, E_{0}^{N}, gives the upper bound to the ground state energy.

      Now, substituting E_{0}^{N} into the first equation of the set of linear equations and solving, we obtain the coefficients a_{n}\! and hence the wave function \phi\! . If we now add one more basis function, the secular equation will give (N+1)\! values of E\!, say E_{0}^{N+1},E_{1}^{N+1},E_{2}^{N+1},\dots

      The new roots interlace with the previous levels E_{0}^{N},E_{1}^{N},\dots, that is, E_{0}^{N+1}\leq E_{0}^{N}\leq E_{1}^{N+1}\leq E_{1}^{N}\leq \dots

      This theorem is known as the Hylleraas-Undheim Theorem.
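      For N = 2 the secular problem can be solved by hand. The sketch below (plain Python, with made-up symmetric H and \Delta matrices used purely for illustration) solves det(H - E\Delta) = 0 via the quadratic formula and checks the interlacing property against the N = 1 estimate.

```python
import math

# Secular-equation sketch for N = 2 basis functions with overlap:
# solve det(H - E*Delta) = 0 by the quadratic formula.  The matrix
# entries below are made-up symmetric numbers, purely illustrative.
H = [[1.0, 0.4],
     [0.4, 2.0]]
Delta = [[1.0, 0.1],
         [0.1, 1.0]]

# Expanding det(H - E*Delta) = a*E^2 + b*E + c gives
a = Delta[0][0] * Delta[1][1] - Delta[0][1] * Delta[1][0]
b = -(H[0][0] * Delta[1][1] + H[1][1] * Delta[0][0]
      - H[0][1] * Delta[1][0] - H[1][0] * Delta[0][1])
c = H[0][0] * H[1][1] - H[0][1] * H[1][0]

disc = math.sqrt(b * b - 4 * a * c)
E0, E1 = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# Hylleraas-Undheim interlacing: the N = 1 estimate H11/Delta11 must sit
# between the two N = 2 roots.
E0_1basis = H[0][0] / Delta[0][0]
print(E0, E0_1basis, E1)
assert E0 <= E0_1basis <= E1
```

      Enlarging the basis can therefore only lower (or leave unchanged) each variational level, which is the practical content of the theorem.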

      A Special Case where the Trial Functions form a Subspace

      Assume that we choose for the trial kets the set of kets belonging to a vector subspace \mathcal{F} of \mathcal{E}. In this case, the variational method reduces to the resolution of the eigenvalue equation of the Hamiltonian \mathcal{H} inside \mathcal{F}, and no longer in all of \mathcal{E}.

      To see this, we simply apply the argument of Sec. 4.2, limiting it to the kets |\psi\rangle of the subspace \mathcal{F}. The maxima and minima of \langle\mathcal{H}\rangle, characterized by \delta \langle\mathcal{H}\rangle=0, are obtained when |\psi\rangle is an eigenvector of \mathcal{H} in \mathcal{F}. The corresponding eigenvalues constitute the variational method approximation for the true eigenvalues of \mathcal{H} in \mathcal{E}.

      We stress the fact that the restriction of the eigenvalue equation of \mathcal{H} to a subspace \mathcal{F} of the state space \mathcal{E} can considerably simplify its solution. However, if \mathcal{F} is badly chosen, it can also yield results which are rather far from the true eigenvalues and eigenvectors of \mathcal{H} in \mathcal{E}. The subspace \mathcal{F} must therefore be chosen so as to simplify the problem enough to make it soluble, without too greatly altering the physical reality. In certain cases, it is possible to reduce the study of a complex system to that of a two-level system, or at least, to that of a system with a limited number of levels. Another important example of this procedure is the method of the linear combination of atomic orbitals, widely used in molecular physics. This method essentially consists of determining the wave functions of electrons in a molecule in the form of linear combinations of the eigenfunctions associated with the various atoms which constitute the molecule, treated as if they were isolated. It therefore limits the search for the molecular states to a subspace chosen using physical criteria. Similarly, one can choose as a trial wave function for an electron in a solid a linear combination of atomic orbitals relative to the various ions which constitute this solid.

      Applications of Variational Method

      One Dimensional Infinite Well Potential

      The 1-D infinite potential well is defined by:

      
V(x) = \begin{cases} 0, & \left | x \right |<a \\ \infty, & \left | x \right |>a \end{cases}

      The exact ground state solution is \psi(x) = \langle x|0\rangle  = \frac{1}{\sqrt{a}}\cos\left (\frac{\pi x}{2a}  \right ), with E_{0} = \left (\frac{\hbar^{2}}{2m}  \right )\left (\frac{\pi^{2}}{4a^{2}} \right ). Now let us suppose that we do not know the exact solution, but try to guess some trial wavefunction and use the variational method to arrive at an approximate answer. The wavefunction must vanish at x = \pm a, and the ground state wavefunction cannot have any nodes or wiggles. The simplest trial solution satisfying these requirements is a parabola passing through x = \pm a. So our trial solution becomes \phi(x) = \langle x|0_{trial} \rangle = a^{2} - x^{2}, up to a normalization factor.

      For this trial function,

       \bar{\mathcal{H}} = \frac{ \langle{\phi}|\mathcal{H}|{\phi}\rangle } {\langle{\phi}|{\phi}\rangle} = \frac{-\left (\frac{\hbar^{2}}{2m}  \right )\int_{-a}^{a}\left (a^2-x^2 \right)\frac{\partial^2 }{\partial x^2}\left (a^2-x^2 \right)dx}{\int_{-a}^{a}\left (a^2-x^2 \right)^{2} dx} =\frac{\left (\frac{\hbar^{2}}{2m}  \right )2 \int_{-a}^{a}\left (a^2-x^2 \right) dx}{\int_{-a}^{a}\left (a^2-x^2 \right)^{2} dx}

      =\frac{\left (\frac{\hbar^{2}}{2m}  \right )\left (\frac{8a^{3}}{3} \right )}{\frac{16a^5}{15}}

      = \left ( \frac{\hbar^{2}}{2m} \right )\left ( \frac{5}{2a^{2}} \right ) \simeq 1.0132\,E_{0}.

      Even with such a simple trial function, with no variational parameter, we come within 1.3\% of the true ground state energy.

      A much improved result can be obtained if we use a slightly more sophisticated wavefunction with a single variational parameter. Let us try: \langle x |\tilde{0} \rangle = \left | a \right |^{\lambda}-\left | x \right |^{\lambda}, where \lambda\! is the variational parameter.

      For this trial wavefunction, \bar{\mathcal{H}} = \frac{-\left (\frac{\hbar^{2}}{2m}  \right )\int_{0}^{a}\left (a^{\lambda}-x^{\lambda} \right)\frac{\partial^2 }{\partial x^2}\left (a^{\lambda}-x^{\lambda} \right)dx}{\int_{0}^{a}\left (a^{\lambda}-x^{\lambda} \right)^{2} dx}

      = \left (\frac{\hbar^{2}}{4ma^{2}} \right) \left [ \frac{(\lambda+1)(2\lambda+1)}{2\lambda-1} \right ]

      which has a minimum at \lambda = \frac{1+\sqrt{6}}{2} \simeq 1.72

      This gives \bar{\mathcal{H}} = \left ( \frac{5+2 \sqrt{6}}{\pi^{2}} \right )E_{0} \simeq 1.00298E_{0}

      This variational method gives the correct ground state energy to within 0.3\%.
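      The minimization over \lambda can be checked numerically. Below is a minimal sketch in Python, in units where \hbar^2/2m = 1 and a = 1, using the energy expression E(\lambda) = \frac{1}{2}\frac{(\lambda+1)(2\lambda+1)}{2\lambda-1} quoted above.

```python
# Variational bound for the infinite well with trial phi = a^lambda - |x|^lambda,
# in units hbar^2/(2m) = 1, a = 1, minimized by golden-section search.
import math

def E(lam):
    return 0.5 * (lam + 1) * (2*lam + 1) / (2*lam - 1)

lo, hi = 1.0, 3.0                 # E(lam) is unimodal on this interval
g = (math.sqrt(5) - 1) / 2
for _ in range(200):
    x1 = hi - g * (hi - lo)
    x2 = lo + g * (hi - lo)
    if E(x1) < E(x2):
        hi = x2
    else:
        lo = x1
lam_star = 0.5 * (lo + hi)        # -> (1 + sqrt(6))/2 ~ 1.7247

E0 = math.pi**2 / 4               # exact ground state energy in these units
ratio = E(lam_star) / E0          # -> (5 + 2 sqrt(6))/pi^2 ~ 1.00298
```

      The search reproduces the analytic minimum \lambda = (1+\sqrt{6})/2 and the 0.3\% bound.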

      Harmonic Potential

      Armed with the variational method, let us apply it first to a simple Hamiltonian. Consider the following Hamiltonian with a harmonic potential, whose eigenvalues and eigenfunctions are known exactly. We will determine how close we can get with a suitable trial function.


      \mathcal{H}=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+\frac{1}{2}m\omega^2x^2 \qquad \qquad\ \qquad \qquad \qquad \qquad (4.5.1.1)


      The above Hamiltonian is even; therefore, to find an upper bound on the ground state energy, we need to use an even trial function. Let us consider the following trial state with one parameter \alpha


      \psi(x)=A e^{-\alpha x^2}\qquad;\qquad\alpha>0   \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.5.1.2)


      where A \,\! is the normalization constant.

      Let us normalize the trial wavefunction to be unity

      1=\langle\psi|\psi\rangle= |A|^2\int_{-\infty}^{\infty}e^{-2\alpha x^2} dx =|A|^2\sqrt{\frac{\pi}{2\alpha}} \Rightarrow A=\left[ \frac{2\alpha}{\pi}\right] ^{1/4}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(4.5.1.3)


      Meanwhile, \langle\mathcal{H}\rangle= |A|^2 \int_{-\infty}^{\infty} dx e^{-\alpha x^2}\left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+\frac{1}{2}m\omega^2x^2 \right] e^{-\alpha x^2}=\frac{\hbar^2\alpha}{2m}+\frac{m\omega^2}{8\alpha}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(4.5.1.4)

      Minimizing the expectation value with respect to the parameter we get,

      \frac{\partial\langle\mathcal{H}\rangle}{\partial\alpha}=  \frac{\hbar^2}{2m}-\frac{m\omega^2}{8\alpha^2}=0 \Rightarrow \alpha=\frac{m\omega}{2\hbar}


      Putting this value back in the expectation value, we get

      \langle \mathcal{H}\rangle_{min}=\frac{1}{2}\hbar \omega

      Due to our judicious selection of trial wavefunction, we were able to find the exact ground state energy. If we want to find the first excited state, a suitable candidate for trial wavefunction would be an odd function.
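      This minimization is easy to verify numerically. Below is a minimal sketch in Python, in units \hbar = m = \omega = 1, locating the zero of d\langle\mathcal{H}\rangle/d\alpha by bisection; the starting bracket [0.1, 2] is an illustrative choice.

```python
# Gaussian trial for the harmonic oscillator, units hbar = m = omega = 1:
# <H>(alpha) = alpha/2 + 1/(8 alpha), from Eq. (4.5.1.4).
# Its derivative d<H>/dalpha = 1/2 - 1/(8 alpha^2) is monotone increasing,
# so a simple bisection finds the stationary point.

def dH(alpha):
    return 0.5 - 1 / (8 * alpha ** 2)

lo, hi = 0.1, 2.0                  # dH(0.1) < 0 < dH(2.0)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if dH(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha_star = 0.5 * (lo + hi)                     # -> m omega/(2 hbar) = 0.5
E_min = alpha_star / 2 + 1 / (8 * alpha_star)    # -> hbar omega/2 = 0.5
```

      The numerical minimum reproduces \alpha = m\omega/2\hbar and \langle\mathcal{H}\rangle_{min} = \hbar\omega/2 exactly, as expected for this trial family.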


      Rational wave functions

      The calculations of the previous sections enabled us to familiarize ourselves with the variational method, but they do not really allow us to judge its effectiveness as a method of approximation, since the families chosen always included the exact wave function. Therefore, we shall now choose trial functions of a totally different type, for example

      \psi_{a}(x)=\frac{1}{x^2+a}\qquad; \quad a>0

      A simple calculation then yields:

      \langle \psi_{a}|\psi_{a}\rangle=\int_{-\infty}^{+\infty}\frac{dx}{\left(x^2+a\right)^2}=\frac{\pi}{2a\sqrt{a}}

      and finally:

      \langle\mathcal{H}\rangle (a)=\frac{\hbar^2}{4m}\frac{1}{a}+\frac{1}{2}m \omega^2 a

      The minimum value of this function is obtained for:

      a=a_{0}=\frac{1}{\sqrt{2}}\frac{\hbar}{m \omega}

      and is equal to:

      \langle\mathcal{H}\rangle (a_{0})=\frac{1}{\sqrt{2}}\, \hbar \omega

      The minimum value is therefore equal to \sqrt{2} times the exact ground state energy \hbar \omega/2. To measure the error committed, we can calculate the ratio of \langle\mathcal{H}\rangle (a_{0})-\hbar \omega/2 to the energy quantum \hbar \omega:

      \frac{\langle\mathcal{H}\rangle (a_{0})-\frac{1}{2} \hbar \omega}{\hbar \omega}=\frac{\sqrt{2}-1}{2} \simeq 20 \%


      Discussions

      The example of the previous section shows that it is easy to obtain the ground state energy of a system, without significant error, starting with arbitrarily chosen trial kets. This is one of the principal advantages of the variational method. Since the exact eigenvalue is a minimum of the mean value \langle\mathcal{H}\rangle , it is not surprising that \langle\mathcal{H}\rangle does not vary much near this minimum.

      On the other hand, as the same reasoning shows, the "approximate" state can be rather different from the true eigenstate. Thus, in the example of the previous section, the wave function \frac{1}{\left(x^2+a_{0}\right)} decreases too rapidly for small values of x \! and much too slowly when x \! becomes large. The table below gives quantitative support for this qualitative assertion. It gives, for various values of x^2 \!, the values of the exact normalized eigenfunction:

      \varphi_{0}(x)=\left(\frac{2 \alpha_{0}}{\pi}\right)^{1/4} e^{-\alpha_{0} x^2}

      and of the approximate normalized eigenfunction of the wave function \frac{1}{\left(x^2+a_{0}\right)} :

      \sqrt{\frac{2}{\pi}} (a_{0})^{3/4} \psi_{a_{0}}(x) = \sqrt{\frac{2}{\pi}} \frac{(a_{0})^{3/4}}{x^2+a_{0}} = \sqrt{\frac{2}{\pi}} \left(2 \sqrt{2} \alpha_{0} \right)^{1/4} \frac{1}{1+2\sqrt{2} \alpha_{0} x^2},

      where  a_0 = \frac{1}{2\sqrt2 \alpha_0} .

       x\sqrt{\alpha_{0}} \qquad \left(\frac{2\alpha_0}{\pi}\right)^{1/4}e^{-\alpha_0 x^2} \qquad \sqrt{\frac{2}{\pi}} \frac{\left(2 \sqrt{2} \alpha_0 \right)^{1/4}}{1+2\sqrt{2} \alpha_{0} x^2}

      0        0.893      1.034
      1/2      0.696      0.606
      1        0.329      0.270
      3/2      0.094      0.141
      2        0.016      0.084
      5/2      0.002      0.055
      3        0.0001     0.039
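      The table entries can be reproduced directly. Below is a minimal sketch in Python, in units where \alpha_0 = 1 (which is what the tabulated numbers correspond to); the reproduced values agree with the table to about 0.001.

```python
# Reproduce the comparison table: exact Gaussian ground state vs the
# normalized rational trial function, tabulated against u = x*sqrt(alpha_0),
# with alpha_0 set to 1.
import math

def exact(u):
    # (2 alpha_0/pi)^{1/4} exp(-alpha_0 x^2) at alpha_0 = 1
    return (2 / math.pi) ** 0.25 * math.exp(-u * u)

def approx(u):
    # sqrt(2/pi) (2 sqrt(2) alpha_0)^{1/4} / (1 + 2 sqrt(2) alpha_0 x^2)
    return (math.sqrt(2 / math.pi) * (2 * math.sqrt(2)) ** 0.25
            / (1 + 2 * math.sqrt(2) * u * u))

for u in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    print(f"{u:4}  {exact(u):7.4f}  {approx(u):7.4f}")
```

      The printout makes the qualitative statement above quantitative: the rational function falls faster than the Gaussian near the origin and far slower in the tails.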

      Therefore, it is necessary to be very careful when physical properties other than the energy of the system are calculated using the approximate state obtained from the variational method. The validity of the result obtained varies enormously depending on the physical quantity under consideration. In the particular problem which we are studying here, we find, for example, that the approximate mean value of the operator X^2 \! is not very different from the exact value:

      \frac{\langle \psi_{a_{0}}|X^2|\psi_{a_{0}}\rangle}{\langle \psi_{a_{0}}|\psi_{a_{0}}\rangle}=\frac{1}{\sqrt{2}}\frac{\hbar}{m \omega}

      which is to be compared with \hbar/{2 m \omega}. On the other hand, the mean value of X^4 \! is infinite for the approximate normalized eigenfunction, while it is, of course, finite for the real wave function. More generally, the table shows that the approximation will be very poor for all properties which depend strongly on the behavior of the wave function for x \gtrsim 2/\sqrt{\alpha_{0}}.

      The drawback we have just mentioned is all the more serious as it is very difficult, if not impossible, to evaluate the error in a variational calculation if we do not know the exact solution of the problem (and, of course, if we use the variational method, it is because we do not know this exact solution).

      The variational method is therefore a very flexible approximation method, which can be adapted to very diverse situations and which gives great scope to physical intuition in the choice of trial kets. It gives good values for the energy rather easily, but the approximate state vectors may present certain completely unpredictable erroneous features, and we cannot check these errors. This method is particularly valuable when physical arguments give us an idea of the qualitative or semi-qualitative form of the solutions.


      Here is another problem related to the energy of the ground state and first excited state of a harmonic potential. -problem1

      Delta Function Potential

      As another example, let us consider the delta function potential.

      Suppose  H = \frac{-\hbar^2}{2m}\frac{d^2}{dx^2} - \alpha \delta(x) . As a trial wave function, use a Gaussian,  \Psi(x) = Ae^{-bx^2} .

      First normalizing this:

       1=|A|^2 \int_{-\infty}^{\infty} e^{-2bx^2}dx = |A|^2 \sqrt{\frac{\pi}{2b}} \rightarrow A=\left(\frac{2b}{\pi}\right)^{1/4}

      First calculate \langle T\rangle and then \langle V\rangle .

       \langle T\rangle= -\frac{\hbar^2}{2m}|A|^2\int_{-\infty}^{\infty} e^{-bx^2}\frac{d^2}{dx^2}(e^{-bx^2})dx = \frac{\hbar^2b}{2m}

       \langle V\rangle=-\alpha|A|^2\int_{-\infty}^{\infty} e^{-2bx^2}\delta(x)dx = -\alpha\sqrt{\frac{2b}{\pi}}

      Evidently  \langle H\rangle = \frac{\hbar^2b}{2m}-\alpha\sqrt{\frac{2b}{\pi}}

      Minimizing with respect to the parameter b:

       \frac{d}{db}\langle H\rangle = \frac{\hbar^2}{2m}-\frac{\alpha}{\sqrt{2\pi b}} = 0

      b = \frac{2m^2 \alpha^2}{\pi \hbar^4}

      So, plugging b back into the expression for the expectation value, we get

       \langle H\rangle_{min}=-\frac{m \alpha^2}{\pi \hbar^2}
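      A quick numerical check of these results, in units \hbar = m = \alpha = 1, where the exact ground state energy of the delta well is -m\alpha^2/2\hbar^2 = -1/2:

```python
# Delta-potential variational bound, units hbar = m = alpha = 1:
# <H>(b) = b/2 - sqrt(2b/pi), minimized at b = 2/pi with <H>_min = -1/pi.
import math

def H_mean(b):
    return b / 2 - math.sqrt(2 * b / math.pi)

b_star = 2 / math.pi          # b = 2 m^2 alpha^2 / (pi hbar^4) in these units
E_min = H_mean(b_star)        # = -1/pi = -m alpha^2/(pi hbar^2)

# b_star is indeed a local minimum:
assert H_mean(0.99 * b_star) > E_min and H_mean(1.01 * b_star) > E_min
# the variational bound lies above the exact ground state energy -1/2:
assert E_min > -0.5
```

      As the variational theorem requires, the bound -1/\pi \approx -0.318 lies above the exact value -1/2; the Gaussian is simply a poor imitation of the exact cusp-shaped bound state.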

      Ground State of Helium atom

      Let us use variational principle to determine the ground state energy of a Helium atom with a stationary nucleus. The Helium atom has two electrons and two protons. For simplicity, we ignore the presence of neutrons. We also assume that the atom is non-relativistic and ignore spin.

      The Hamiltonian can be written as

      
\mathcal{H} =  -\frac{\hbar^2}{2m}\left(\boldsymbol\nabla_1^2+\boldsymbol \nabla_2^2\right)- \frac{Ze^2}{|\vec{r}_1|} - \frac{Ze^2}{|\vec{r}_2|}+\frac{e^2}{|\vec{r}_1-\vec{r}_2|}, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.1)}

      where \vec{r}_1,\vec{r}_2 \! are the coordinates of the two electrons.

      If we ignore the mutual interaction term, then the wavefunction will be the product of the two individual electron wavefunctions which in this case is that of a hydrogen-like atom. Therefore, the ground state wavefunction can be written as

       \psi_0(\vec{r}_1,\vec{r}_2)=\psi_{100}( \vec{r}_1)\psi_{100}(\vec{r}_2),\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.2)}

      where we ignored spin and

      \psi_{100}\left(\vec{r}_{1,2}\right)=\left(\frac{Z^3}{\pi{a_0}^3}\right)^{1/2} \exp\left[-{\frac{Z |\vec{r}_{1,2}|}{a_0}}\right],
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.3)}

      where  a_0 = \frac{\hbar^2}{me^2}. \!

      Therefore we can write

      \psi_{0}\left(\vec{r}_1,\vec{r}_2\right)=\frac{Z^3}{\pi{a_0}^3} \exp\left[-{\frac{Z ( |\vec{r}_1|+|\vec{r}_2|)}{a_0}}\right]. \qquad \qquad\qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.4)}


      We can write the lowest unperturbed energy for this situation with Z=2\! as

       E_0' = 2 \left( -\frac{m(Ze^2)^2}{2\hbar^2}\right) = 2\left(-Z^2 \times 13.6\ eV \right) = - 8 \times 13.6\ eV = -108.8\ eV. \qquad \qquad \mbox{(4.5.3.5)}

      The first order correction to the energy is

      
\Delta E = \left\langle V \left( \vec{r}_1, \vec{r}_2 \right) \right\rangle 
         = \int \int \left| \psi_0\left(\vec{r}_1, \vec{r}_2 \right) \right|^2 
         \frac{e^2}{\left| \vec{r}_1 - \vec{r}_2 \right|} d^3 r_1 d^3 r_2 
         = \frac{5Ze^2}{8a_0} = \frac{5}{2} \times 13.6\ eV = 34\ eV.

      Therefore, the ground state energy in first approximation is

      
E_0 = E_0' + \Delta E = - 108.8 eV + 34 eV = - 74.8 eV
\!

      However, the ground state energy has been experimentally determined accurately to be  -78.86 eV \!. Therefore, our model is not a good one. Now, let us apply the variational method using a trial wavefunction. The trial wavefunction we are going to use is #4.5.3.2 itself, but we will allow Z \! to remain a free parameter. This argument is perfectly valid since each electron screens the nuclear charge seen by the other electron, and hence the effective atomic number is less than  2 \!.

      We can manipulate the Hamiltonian by rewriting each potential term \frac{Ze^2}{|\vec{r}|} as \frac{\sigma e^2}{|\vec{r}|}+\frac{(Z-\sigma) e^2}{|\vec{r}|}. So the Hamiltonian becomes

       
\mathcal{H}= -\frac{\hbar^2}{2m}(\boldsymbol\nabla_1^2+\boldsymbol \nabla_2^2)- \frac{\sigma e^2}{|\vec{r}_1|}-\frac{(Z-\sigma) e^2}{|\vec{r}_1|}-\frac{\sigma e^2}{|\vec{r}_2|}-\frac{(Z-\sigma) e^2}{|\vec{r}_2|}+\frac{e^2}{|\vec{r}_1-\vec{r}_2|}

      Now we can use the variational principle. The expectation value of the Hamiltonian is


      
\begin{align}
\langle \mathcal{H}\rangle 
 = & \int{d^3r_1}\int{d^3r_2} \psi^*_{100}(\vec{r}_1)\psi^*_{100}(\vec{r}_2) \\
 & \times \left[-\frac{\hbar^2}{2m}\boldsymbol\nabla_1^2-\frac{(Z-\sigma) e^2}{|\vec{r}_1|}- \frac{\sigma e^2}{|\vec{r}_1|} -\frac{\hbar^2}{2m}\boldsymbol\nabla_2^2-\frac{(Z-\sigma) e^2}{|\vec{r}_2|}-\frac{\sigma e^2}{|\vec{r}_2|}+\frac{e^2}{|\vec{r}_1-\vec{r}_2|}\right] \psi_{100}( \vec{r}_1)\psi_{100}(\vec{r}_2)\qquad \mbox{(4.5.3.6)}
\end{align}

      The first two terms give

      E_0^{(1)}(\sigma)=- \frac{(Z-\sigma)^2 me^4}{2\hbar^2}.

      The fourth and fifth term will give the same. The third term and sixth term will give

      E_0^{(2)}(\sigma)=-{\sigma e^2} \left\langle\frac{1}{r_1}\right\rangle = -{\sigma e^2}\frac{\left(Z-\sigma\right)}{a_0} = - \frac{me^4}{\hbar^2} \sigma \left(Z-\sigma\right).

      The seventh term will give an expectation value of

       E_0^{(3)}(\sigma)= \frac{5 \left(Z-\sigma\right) m e^4}{8\hbar^2}.

      Adding all this we get,


      
\begin{align}
E_0(\sigma) &= -\frac{m e^4}{\hbar^2}\left( \left(Z - \sigma\right)^2 + 2 \sigma\left(Z-\sigma\right) - \frac{5 \left(Z - \sigma\right)}{8}\right) \\
& = -\frac{e^2}{a_0}\left( Z^2 - \frac{5}{8}Z + \frac{5}{8}\sigma - \sigma^2 \right),
\end{align}  \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.7)}

      where  a_0 = \frac{\hbar^2}{me^2}. \!


      Exercise 18.22 of E. Merzbacher's Quantum Mechanics (3rd Ed.)

      Here is another problem regarding the Yukawa Potential in a Hydrogen atom: [[2]]


      Since  \sigma \! in #4.5.3.7 is the variational parameter, we can minimize the energy,  E_0(\sigma) \! with respect to  \sigma \! . That is,

      
\frac{\partial E_0(\sigma)}{\partial \sigma} = -\frac{e^2}{a_0} \left( \frac{5}{8} - 2 \sigma \right) = 0.

      This will give us

       \sigma = \frac{5}{16}.

      Therefore, putting this value into #4.5.3.7 , we have

      
E_0\left(\frac{5}{16}\right) = - \left(Z-\frac{5}{16}\right)^2 \frac{e^2}{a_0}
= - \frac{Z_{\mbox{eff}}^2 e^2}{a_0},

      where  Z_{\mbox{eff}} = \left( Z - \sigma \right) = \left( Z - \frac{5}{16} \right).

      Putting  Z=2 \!, we get  Z_{\mbox{eff}} = 1.6875. \! Substituting  Z_{\mbox{eff}} \! for  Z \! in #4.5.3.5, we get

       E_0 = -77.46 eV \!

      which is very close to the experimental value \left( \sim - 78.86 eV \right) \! . Thus, using variational principle we were able to calculate the ground state energy of the helium atom very close to the experimental value.
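      The minimization over \sigma can be reproduced with a simple grid scan. Below is a minimal sketch in Python, using Eq. #4.5.3.7 with Z = 2 and e^2/a_0 = 2 \times 13.6 = 27.2 eV.

```python
# Helium variational energy E0(sigma) = -(e^2/a0)(Z^2 - 5Z/8 + 5 sigma/8 - sigma^2),
# Eq. (4.5.3.7), scanned on a fine grid in the screening parameter sigma.
Z = 2
e2_over_a0 = 27.2  # eV

def E0(sigma):
    return -e2_over_a0 * (Z**2 - 5*Z/8 + 5*sigma/8 - sigma**2)

# scan sigma in [0, 1] in steps of 1e-5 and keep the minimizer
sigma_star = min((s * 1e-5 for s in range(100001)), key=E0)
E_best = E0(sigma_star)   # -> -(Z - 5/16)^2 e^2/a0, about -77.46 eV
```

      The scan lands on \sigma = 5/16 = 0.3125 and E_0 \approx -77.46 eV, matching the analytic minimization above.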


      A sample problem related to the Rayleigh-Ritz variational principle: Exercise 18.23 of Quantum Mechanics, 3rd Ed., by Eugen Merzbacher: [3]

      A problem related to the variational principle and non-degenerate perturbation theory: problem

      Solved problem demonstrating use of Variational Method [[4]]

      Example Problem

      A particle moves in an attractive central potential  V(r)= \frac{-g^2}{r^{3/2}} . Use the variational principle to find an upper bound to the lowest s-state energy.

      As a trial wave function we will use a hydrogenic wave function.

       \Psi = {(\frac{k^3}{8 \pi})}^{1/2} e^{-kr/2}

      For an s-state, l=0 and

       \langle H\rangle = \int \Psi^{*}H \Psi d\tau

       = \int \Psi^{*}\left[\frac{-\hbar^2}{2mr^2} \frac{\partial}{\partial r}\left(r^{2} \frac{\partial}{\partial r}\right) +V(r)\right] \Psi d\tau

       = \frac{k^3}{8 \pi} 4\pi \int_{0}^{\infty} r^2 e^{-kr/2}\left[\frac{-\hbar^2}{2mr^2} \frac{\partial}{\partial r}\left(r^{2} \frac{\partial}{\partial r}\right) - \frac{g^2}{r^{3/2}}\right] e^{-kr/2} dr

       = \frac{k^3}{2} \int_{0}^{\infty} e^{-kr}\left[-g^{2}r^{1/2} + \frac{\hbar^2}{2m}kr - \frac{k^2\hbar^2}{8m}r^2\right] dr

       = \frac{k^3}{2}[\frac{\hbar^2}{4mk} - \frac{\sqrt{\pi}g^2}{2k^{3/2}}]

       = \frac{\hbar^2}{8m}k^2 - \frac{\sqrt{\pi}g^2}{4}k^{3/2}

      For \langle H\rangle to be a minimum,  \frac{\partial \langle H\rangle}{\partial k} = 0

       \rightarrow \frac{\hbar^2}{4m}k- \frac{3 \sqrt{\pi}}{8}g^2 k^{1/2} = 0

      which gives the solution  k^{1/2} = \frac{3mg^{2}\sqrt{\pi}}{2\hbar^2} .

      So, \langle H\rangle reaches a minimum value of  \frac{-27 \pi^2 g^8 m^3}{128 \hbar^6} . This is the upper bound to the lowest s-state energy.
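      A quick numerical check of the stationary point and the resulting bound, in units \hbar = m = g = 1:

```python
# Check of <H>(k) = (hbar^2/8m) k^2 - (sqrt(pi) g^2/4) k^{3/2},
# in units hbar = m = g = 1: stationary point at k^{1/2} = 3 sqrt(pi)/2
# and minimum value -27 pi^2 / 128.
import math

def H_mean(k):
    return k**2 / 8 - math.sqrt(math.pi) / 4 * k**1.5

k_star = (3 * math.sqrt(math.pi) / 2) ** 2     # from k^{1/2} = 3 sqrt(pi)/2
E_min = H_mean(k_star)

# the numerical derivative vanishes at k_star, confirming a stationary point
h = 1e-6
deriv = (H_mean(k_star + h) - H_mean(k_star - h)) / (2 * h)
assert abs(deriv) < 1e-6
# and the value matches the quoted bound -27 pi^2 g^8 m^3 / (128 hbar^6)
assert abs(E_min + 27 * math.pi**2 / 128) < 1e-12
```

      Restoring units, the bound is -27\pi^2 g^8 m^3/128\hbar^6, as quoted above.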

      Spin

      The Stern-Gerlach Experiment and Electron Spin

      In 1922, Otto Stern and Walther Gerlach measured the possible values of the magnetic dipole moment for silver atoms by sending a beam of these atoms through a nonuniform magnetic field. A beam of neutral atoms is formed by evaporating silver from an oven. The beam is collimated by a diaphragm, and it enters a magnet which produces a field that increases in intensity in the z direction. As the atoms are neutral overall, the only net force acting on them is proportional to \mu_z. Therefore each atom is deflected in passing through the magnetic field by an amount which is proportional to \mu_z. Thus the beam is analyzed into components according to the various values of \mu_z. The deflected atoms strike a metallic plate, upon which they condense and leave a visible trace.

      If the orbital magnetic moment vector of the atom has a magnitude \mu, then in classical physics the z component \mu_z of this quantity can have any value from -\mu to +\mu. The reason is that classically the atom can have any orientation relative to the z axis, and so this will also be true of its orbital angular momentum and its magnetic dipole moment. The predictions of quantum mechanics are that \mu_z can have only the discretely quantized values

      \mu_z = -g_l \mu_B m_l

      where m_l \! is one of the integers

      m_l = -l, -l+1, ..., 0, ..., +l-1, +l

      Thus the classical prediction is that the deflected beam would be spread into a continuous band, corresponding to a continuous distribution of values of \mu_z from one atom to the next. The quantum mechanical prediction is that the deflected beam would be split into several discrete components. Furthermore, quantum mechanics predicts that this should happen for all orientations of the analyzing magnet. That is, the magnet is essentially acting as a measuring device which investigates the quantization of the component of the magnetic dipole moment along the z axis.

      Stern and Gerlach found that the beam of silver atoms is split into two discrete components, one component being bent in the positive z direction and the other bent in the negative z direction. They also found similar results for other kinds of atoms; in each case investigated, the deflected beam is split into two or more components. These results are, qualitatively, very direct experimental proof of the quantization of the z component of the magnetic dipole moments of atoms and, therefore, of their angular momenta. In other words, the experiments showed that the orientation in space of the magnetic dipole moments of atoms is quantized. The phenomenon is called space quantization.

      But the results of the Stern-Gerlach experiment are not quantitatively in agreement with the equations above. According to these equations, the number of possible values of \mu_z is equal to the number of possible values of m_l, which is 2l + 1. Since l is an integer, this is always an odd number. Also, for any value of l, one of the possible values of m_l is zero. Thus the fact that the beam of silver atoms is split into only two components, both of which are deflected, indicates either that something is wrong with the Schrodinger theory of the atom, or that the theory is incomplete.

      The theory is not wrong; but as it stands, the Schrodinger theory of the atom is incomplete. This is shown most clearly by an experiment performed in 1927 by Phipps and Taylor, who used the Stern-Gerlach technique on a beam of hydrogen atoms. The experiment is particularly significant because the atom contains a single electron, so the theory makes unambiguous predictions. Since the atoms in the beam are in their ground state because of the relatively low temperature of the oven, the theory predicts that the quantum number l has the value l = 0. Then there is only one possible value of m_l, namely m_l = 0, and we expect that the beam will be unaffected by the magnetic field since \mu_z will be equal to zero. However, Phipps and Taylor found that the beam is split into two symmetrically deflected components. Thus there is certainly some magnetic dipole moment in the atom which we have not hitherto considered.

      The idea of electron spin was introduced some time before the work of Phipps and Taylor. In the final sentence of a research paper on the scattering of x rays by atoms, published in 1921, Compton had written, "May I then conclude that the electron itself, spinning like a tiny gyroscope, is probably the ultimate magnetic particle." This was really more of a speculation than a conclusion, and Compton apparently never followed it up further.

      Credit for the introduction of electron spin is generally given to Goudsmit and Uhlenbeck. In 1925, as graduate students, they were trying to understand why certain lines of the optical spectra of the hydrogen and the alkali atoms are composed of a closely spaced pair of lines. This is the fine structure, which had been treated by Sommerfeld in terms of the Bohr model as due to a splitting of the atomic energy levels because of a small (about one part in 10^4) contribution to the total energy resulting from the relativistic variation of the electron mass with velocity. The results of Sommerfeld were in good numerical agreement with the observed fine structure of hydrogen. But the situation was not so satisfactory for the alkalis. In these atoms the electron responsible for the optical spectrum would be expected to move in a Bohr-like orbit of large radius at low velocity, so the relativistic variation of mass would be expected to be small. However, the fine structure splitting was observed to be very much larger than in hydrogen. Consequently, doubt arose concerning the validity of Sommerfeld's explanation of the origin of the fine structure. In considering other possibilities, Goudsmit and Uhlenbeck proposed that an electron has an intrinsic angular momentum and magnetic dipole moment, whose z components are specified by a fourth quantum number m_s, which can assume either of two values, -1/2 and +1/2. The splitting of the atomic energy levels could then be understood as due to a potential energy of orientation of the magnetic dipole moment of the electron in the magnetic field that is present in the atom because it contains moving charged particles. The energy of orientation would be either positive or negative depending on the sign of m_s, i.e. depending on whether the spin is "up" or "down" relative to the direction of the internal magnetic field of the atom. Uhlenbeck has described the circumstances as follows:

      "Goudsmit and myself hit upon this idea by studying a paper of Pauli, in which the famous exclusion principle was formulated and in which, for the first time, four quantum numbers were ascribed to the electron. This was done rather formally; no concrete picture was connected with it. To us this was a mystery. We were so conversant with the proposition that every quantum number corresponds to a degree of freedom, and on the other hand with the idea of point electron, which obviously had three degrees of freedom only, that we could not place the fourth quantum number. We could understand it only if the electron was assumed to be a small sphere that could rotate....

      Somewhat later we found a paper of Abraham, to which Ehrenfest drew our attention, that for a rotating sphere with surface charge the necessary factor two in the magnetic moment (g_s = 2) could be understood classically. This encouraged us, but our enthusiasm was considerably reduced when we saw that the rotational velocity at the surface of the electron had to be many times the velocity of light! I remember that most of these thoughts came to us on an afternoon at the end of September 1925. We were excited, but we had not the slightest intention of publishing anything. It seemed so speculative and bold, that something ought to be wrong with it, especially since Bohr, Heisenberg, and Pauli, our great authorities, had never proposed anything of the kind. But of course we told Ehrenfest. He was impressed at once, mainly, I feel, because of the visual character of our hypothesis, which was very much in his line. He called our attention to several points, e.g., to the fact that in 1921 A. H. Compton already had suggested the idea of a spinning electron as a possible explanation of the natural unit of magnetism, and finally said that it was either highly important or nonsense, and that we should write a short note for Naturwissenschaften (a physical research journal) and give it to him. He ended with the words 'and then we will ask Lorentz.' This was done. Lorentz received us with his well known great kindness, and was very much interested, although, I feel, somewhat skeptical too. He promised to think it over. And in fact, already next week he gave us a manuscript, written in his beautiful handwriting, containing long calculations on the electromagnetic properties of rotating electrons. We could not fully understand it, but it was quite clear that the picture of the rotating electron, if taken seriously, would give rise to serious difficulties. 
For one thing, the magnetic energy would be so large that by the equivalence of mass and energy the electron would have a larger mass than the proton, or, if one sticks to the known mass, the electron would be bigger than the whole atom! In any case, it seemed to be nonsense. Goudsmit and myself both felt that it might be better for the present not to publish anything; but when we said this to Ehrenfest, he answered: 'I have already sent your letter in long ago; you are both young enough to allow yourselves some foolishness!'" (from The Conceptual Development of Quantum Mechanics by Max Jammer, McGraw-Hill, 1966)

      The most recent experimental evidence indicates that the electron is a point particle, and certainly not "bigger than the whole atom." One set of experiments studies the scattering of electrons by electrons at very high kinetic energies. If these objects had appreciable extent in space, in collisions which were so close that they overlap, the force acting between them would be modified - just as in the close collision of an α particle and a nucleus. It was found that the electrons always act like two point objects, with charge e and magnetic dipole moment \mu_s, even in the closest collisions investigated. Thus electrons have an extent less than this collision distance, which is about 10^{-16} m. In comparison to the dimensions of an atom (10^{-10} m), or even the dimension of a nucleus (10^{-14} m), electrons have negligible dimensions.

      Although the electron seems to be a point particle, four quantum numbers are required to specify its quantum states. The first three arise because three independent coordinates are required to describe its location in three-dimensional space. The fourth arises because it is also necessary to describe the orientation in space of its spin, which can be either "up" or "down" relative to some z axis. For a classical point particle, there is room only for the first three quantum numbers. But the electron is not a classical particle.

      Schrodinger quantum mechanics is completely compatible with the existence of electron spin; but it does not predict it, so spin must be introduced as a separate postulate. The reason for this is that the theory is an approximation which ignores relativistic effects. Recall that the theory is based on the non-relativistic energy equation, E = p^2/2m + V. Dirac developed a relativistic theory of quantum mechanics in 1928. Using the same postulates as the Schrodinger theory, but replacing the energy equation by its relativistic form E = (c^2p^2 + m^2c^4)^{1/2} + V, Dirac showed that an electron must have an intrinsic s = 1/2 angular momentum, an intrinsic magnetic dipole moment with a g-factor of 2, and all the other properties that had previously been postulated. This was a great triumph for relativity theory; it put electron spin on a firm theoretical foundation and showed that electron spin is intimately connected with relativity.

      Problem 1 [5]

      Spin Kinematics

      A non-homogeneous magnetic field is created by two magnets with a gap between them. A particle passing through the gap is deflected. The particle enters the magnetic field at time t = 0 and leaves the field at time T.

       \mathbf B(x,y,z) = -\alpha x \hat{x} + (B_0 + \alpha z) \hat{z}


       H^' = \gamma \vec{S} \cdot \mathbf B = \gamma \vec{S} \cdot \left[ -\alpha x \hat{x} + \left( B_0 + \alpha z \right) \hat{z} \right]
                 = -\gamma \alpha x S_x + \gamma \left( B_0 + \alpha z \right) S_z

      But the term proportional to S_x \! oscillates rapidly, so its effect averages out as the particle moves through the field, and we can ignore it.

       H^' = 
   \begin{cases}
      \gamma \left( B_0 + \alpha z \right) S_z &\mbox{if} \qquad 0 \le t \le T\\
      0 &\mbox{otherwise}
   \end{cases}

       H^'| \chi \rangle = E | \chi \rangle

       det \left( H^' - E \right) = 0

       E_{\pm} = \mp \frac{\hbar}{2} \gamma \left( B_0 + \alpha z \right)

      for \ 0 \leq t \leq T

       \begin{align}
   \chi(t) &= a \chi_+ e^{-iE_+t/\hbar} + b \chi_-e^{-iE_-t/\hbar} \\
           &= ae^{i\gamma B_0t/2}\chi_+e^{i\gamma \alpha z t /2} + be^{-i\gamma B_0t/2}\chi_-e^{-i\gamma \alpha z t /2}
\end{align}

      for \ t > T

       \chi(t) = ae^{i\gamma B_0T/2}\chi_+e^{i\gamma \alpha z T /2} + be^{-i\gamma B_0T/2}\chi_-e^{-i\gamma \alpha z T /2}

      The components of χ(t) carry the factors  e^{\pm i\gamma \alpha z T /2} , which act as momentum translations in the z-direction: the positive exponent gives the spin-up component a momentum kick  +\hbar\gamma\alpha T/2 \! and moves it up, while the negative exponent moves the spin-down component down. This is the splitting observed in the Stern-Gerlach experiment.
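To see the momentum-translation reading of the phase factor concretely, here is an added NumPy sketch (with ħ = 1 and arbitrary illustrative values for γ, α, and T, not taken from the text): multiplying a Gaussian packet by e^{ikz} shifts its mean momentum by k = γαT/2.

```python
import numpy as np

hbar = 1.0
gamma, alpha, T = 2.0, 3.0, 0.5      # assumed illustrative values
k = gamma * alpha * T / 2            # phase e^{ikz} -> momentum kick hbar*k

z = np.linspace(-50, 50, 4096)
dz = z[1] - z[0]
psi = np.exp(-z**2 / 8)              # Gaussian packet at rest
psi_kicked = psi * np.exp(1j * k * z)  # spin-up component after the magnet

# Mean momentum from the Fourier transform (hbar = 1)
p = 2 * np.pi * np.fft.fftfreq(z.size, d=dz)
phi = np.fft.fft(psi_kicked)
p_mean = np.sum(p * np.abs(phi)**2) / np.sum(np.abs(phi)**2)
print(p_mean)    # close to k
```

The spin-down component carries the opposite phase and receives the opposite kick, which is exactly the two-beam splitting described above.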

      General Theory of Angular Momentum

      Until now we have worked with the rotational degree of freedom using the orbital angular momentum. Namely, we use the operator \mathbf L (the generator of rotations in \mathbb{R}^{3}) to construct wave functions which carry the rotational information of the system.

      To clarify, all that we have done consists of quantizing everything that we know from classical mechanics. Specifically:

      • Invariance of time translation  \rightarrow Conservation of the Hamiltonian
      • Invariance of Spatial translation  \rightarrow Conservation of the Momentum
      • Invariance of Spatial Rotations  \rightarrow Conservation of Orbital Angular Momentum

      However, nature shows that there are other kinds of degrees of freedom that have no classical analog. The first was observed in 1922 by Stern and Gerlach (see Cohen-Tannoudji, Chap. 4): electrons carry one additional angular momentum degree of freedom. This degree of freedom was called "spin 1/2" since a measurement yields just two possible results: up and down. It is interesting to note that, from the algebra of angular momenta, spins must take on either half-integer or integer values; there is no continuous range of possible spins. (For example, one will never find a spin 2/3 particle.)

      Spin 1/2 is the first truly revolutionary discovery of quantum mechanics. The properties of this physical quantity in itself, the importance of its existence, and the universality of its physical effects were totally unexpected. The physical phenomenon is the following. In order to describe completely the physics of an electron, one cannot use only its degrees of freedom corresponding to translations in space. One must take into account the existence of an internal degree of freedom that corresponds to an intrinsic angular momentum. In other words, the electron, which is a point-like particle, “spins” on itself. We use quotation marks for the word “spins”. One must be cautious with words, because this intrinsic angular momentum is purely a quantum phenomenon. It has no classical analogue, except that it is an angular momentum. One can use analogies and imagine that the electron is a sort of quantum top. But we must keep in mind the word “quantum”. The electron is a point-like object down to distances of 10^{-18} m. One must admit that a point-like object can possess an intrinsic angular momentum.

      The goal of this section is to extend the notion of orbital angular momentum to a general case. For this we use the letter \mathbf{J} for this abstract angular momentum. As we will see, orbital angular momentum is just a simple case of the general angular momentum.

      Experimental results

      Experimentally, this intrinsic angular momentum, called spin, has the following manifestations (we do not enter in any technical detail):

      1. If we measure the projection of the spin along any axis, whatever the state of the electron, we find either of two possibilities:

      \frac{\hbar}{2} or -\frac{\hbar}{2}

      There are two and only two possible results for this measurement.

      2. Consequently, if one measures the square of any component of the spin, the result is \frac{\hbar^{2}}{4} with a probability of one.

      3. Therefore, a measurement of the square of the spin S^{2}=S_{x}^{2}+S_{y}^{2}+S_{z}^{2} gives the result

      S^{2}=\frac{3\hbar^{2}}{4}

      4. A system that has a classical analogue, such as a rotating molecule, can rotate more or less rapidly on itself. Its intrinsic angular momentum can take various values. However, for the electron, as well as for many other particles, it is an amazing fact that the square of its spin \ S^2 is always the same. It is fixed: all electrons in the universe have the same value of the square of their spin, S^{2}=\frac{3\hbar^{2}}{4}. The electron “spins” on itself, but it is not possible to make it spin faster.

      One can imagine that people did not come to that conclusion immediately. The discovery of the 1/2 spin of the electron is perhaps the most breathtaking story of quantum mechanics.

      The elaboration of the concept of spin was certainly the most difficult step of all quantum theory during the first quarter of the 20th century. It is a real suspense that could be called "the various appearances of the number 2 in physics." There are many numbers in physics; it is difficult to find a simpler one than that.

      And that number 2 appeared in a variety of phenomena and enigmas that seemed to have nothing to do a priori with one another, or to have a common explanation. The explanation was simple, but it was revolutionary. For the first time people were facing a purely quantum effect, with no classical analogue. Nearly all the physical world depends on this quantity, the spin 1/2.

      The challenge existed for a quarter of a century (since 1897). Perhaps, there was never such a long collective effort to understand a physical structure. It is almost impossible to say who discovered spin 1/2, even though one personality dominates, Pauli, who put all his energy into finding the solution.

      We will see that, in order to manipulate spin 1/2 and understand the technicalities, we essentially know everything already: we have done it, more or less, with two-state systems.


      Note on Hund's Rules

      Hund's Rules determine the ground-state quantum numbers of an atom with a given configuration in the  L-S \! coupling approximation. The steps are listed below.

      1. Choose a maximum value of  S \! (total spin) consistent with Pauli exclusion principle

      2. Choose a maximum value of  L \! (angular momentum)

      3. If the shell is less than half full, choose J=J_{min} =|L-S| \!

      4. If the shell is more than half full, choose J=J_{max} =L+S \!

      For example, consider silicon: 1s^2 2s^2 2p^6 3s^2 3p^2 \!. Then S= \dfrac{1}{2} + \dfrac{1}{2} = 1 = S_{max} \! for the two spin-\dfrac{1}{2}\! valence electrons. The maximum angular momentum consistent with the Pauli principle is  L=1 \!, and since the 3p shell is less than half full,  J=|L-S|=0 \!.
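The silicon example can be transcribed directly in a few lines. This added sketch hard-codes the m_l bookkeeping for a p² configuration with parallel spins; it is an illustration of the rules for this one case, not a general Hund's-rule solver.

```python
from fractions import Fraction

# Silicon: [Ne] 3s^2 3p^2 -> two valence electrons in the 3p shell (l = 1)
n_elec, l = 2, 1
capacity = 2 * (2 * l + 1)        # a p shell holds 6 electrons

# Rule 1: maximize S. The two electrons can occupy different m_l orbitals,
# so their spins can be parallel: S = 1/2 + 1/2 = 1.
S = n_elec * Fraction(1, 2)

# Rule 2: maximize L consistent with Pauli. With parallel spins the largest
# total M_L is m_l = 1 plus m_l = 0, so L = 1.
L = 1

# Rules 3/4: the shell is less than half full, so J = |L - S|.
J = abs(L - S) if n_elec < capacity / 2 else L + S
print(S, L, J)    # S = 1, L = 1, J = 0
```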


      Since the equations have the same form (the same commutation relations) as in the case of orbital angular momentum, we can easily extend everything:

      \begin{align}
\mathbf{J}^2&|j,m \rangle = \hbar^{2} j(j+1)|j,m \rangle\\
J_z & |j,m \rangle = \hbar m|j,m \rangle;\;\;\;\;\;-j\le m\le j  \\
J_{\pm} & |j,m \rangle = \hbar \sqrt{j(j+1)-m(m\pm 1)}|j,m \pm 1 \rangle\\
\end{align}

      One important feature is that the allowed values for m\! are integers or half-integers (See Shankar). Therefore the possible values for j\! are

      \begin{align}
j=0,\;1/2,\;1,\;3/2,\;2,\;5/2...
\end{align}

      We can construct the following table:

      \begin{array}{c|r|r|r|r|r|r|r} 
j \rightarrow& 0 & 1/2 & 1 & 3/2 & 2 & 5/2 & ...\\
\hline
m& 0 & 1/2 & 1 & 3/2 & 2 & 5/2 & ...\\
\downarrow&   &-1/2 & 0 & 1/2 & 1 & 3/2 &    \\
&   &     &-1 &-1/2 & 0 & 1/2 &    \\
&   &     &   &-3/2 &-1 &-1/2 &    \\
&   &     &   &     &-2 &-3/2 &    \\
&   &     &   &     &   &-5/2 &    \\ 
\hline
&(2\cdot 0+1)&(2\cdot \frac{1}{2}+1)&(2\cdot1+1)&(2\cdot \frac{3}{2}+1)&(2\cdot2+1)&(2\cdot \frac{5}{2}+1)&(2\cdot j+1)    \\
&=1          &=2                    &=3         &=4                    &=5         &=6                    &\\
\hline
\end{array}


      Each of these columns represents a subspace of the basis |j,m \rangle \! that diagonalizes \mathbf{J}^{2}\! and J_z \!. For orbital angular momentum the allowed values for m\! are integers, due to the periodicity of the azimuthal angle.

      Electrons have an additional degree of freedom which takes the values "up" or "down". Physically this phenomenon appears when the electron is exposed to a magnetic field. Since the coupling to the magnetic field is via the magnetic moment, it is natural to regard this degree of freedom as an internal angular momentum. Since there are just two states, this angular momentum is represented by the subspace j=1/2\!.

      It is important to see explicitly the representation of this group. Namely, we want to see the matrix elements of the operators \mathbf{J}^2 \! ,  J_x \!, J_y \! and J_z \!. The procedure is as follows:

      • \mathbf{J}^{2} and J_z \! are diagonal since the basis are their eigenvectors.
      • To find J_x\! and J_y\!, we use the fact that
      \begin{align}
J_x&=\frac{1}{2}[J_+ + J_- ]\\
J_y&=\frac{1}{2i}[J_+ - J_- ]\\
\end{align}

      And the matrix elements of J_{\pm} \! are given by

      \begin{align}
\langle j',m'|J_{\pm}|j,m\rangle &= \langle j',m'|\hbar \sqrt{j(j+1)-m(m\pm1)}|j,m\pm 1\rangle \\
&= \hbar \sqrt{j(j+1)-m(m\pm1)}\delta_{j' j} \delta_{m' m\pm 1} \\
\end{align}
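These matrix elements can be assembled numerically for any j. The following added NumPy sketch (with ħ = 1 by default) builds J_z and J_± in the |j,m⟩ basis ordered m = j, j-1, ..., -j, exactly as in the table above, and checks the commutation relation [J_x, J_y] = iħJ_z.

```python
import numpy as np

def angular_momentum_matrices(j, hbar=1.0):
    """J_x, J_y, J_z in the |j,m> basis, ordered m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    Jz = hbar * np.diag(m)
    # <j,m+1|J+|j,m> = hbar*sqrt(j(j+1) - m(m+1)): a single superdiagonal
    m_low = m[1:]                   # the m values that J+ raises to m+1
    Jp = hbar * np.diag(np.sqrt(j * (j + 1) - m_low * (m_low + 1)), k=1)
    Jm = Jp.conj().T                # J- is the adjoint of J+
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

Jx, Jy, Jz = angular_momentum_matrices(1)
print(np.round(Jx.real, 3))         # reproduces hbar/sqrt(2) * [[0,1,0],[1,0,1],[0,1,0]]
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)   # [Jx, Jy] = i*hbar*Jz
```

The same function works for half-integer j, so it reproduces every column of the table below it as well.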

      Let's find the representations for the subspaces j=0,\frac{1}{2}\!, and  1 \!


      Subspace j = 0: (matrix 1x1)

      • \mathbf{J}^{2}=0 \!
      • J_z=0 \!
      • \langle 00|J_{\pm}|00\rangle =0 \;\;\;\rightarrow\;\;\;J_x=J_y=0 \!


      Subspace j = 1 / 2: (matrix 2x2)

      • \mathbf{J}^{2}=
\begin{array}{r|c|c} 
                   & |1/2,1/2\rangle      & |1/2,-1/2\rangle \\ \hline
\langle 1/2,1/2|   & \frac{3}{4}\hbar^{2} & 0                \\ \hline
\langle 1/2,-1/2|  &         0            &  \frac{3}{4}\hbar^{2} \\ \hline

\end{array}

=\frac{3}{4}\hbar^{2}
\begin{pmatrix}
  1 & 0 \\
  0 & 1 
\end{pmatrix}
      • J_z=
\begin{array}{r|c|c} 
                   & |1/2,1/2\rangle      & |1/2,-1/2\rangle \\ \hline
\langle 1/2,1/2|   & \frac{1}{2}\hbar & 0                \\ \hline
\langle 1/2,-1/2|  &         0            &  -\frac{1}{2}\hbar \\ \hline

\end{array}

=\frac{1}{2}\hbar
\begin{pmatrix}
  1 & 0 \\
  0 & -1 
\end{pmatrix}
      • J_+ \! and  J_- \! are given by
      \begin{align}
J_+ & = \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-m(m+1)}\;\;\;\delta_{\frac{1}{2},\frac{1}{2}} \delta_{m',m+1}\\

&=\begin{array}{r|c|c} 
                   & |1/2,1/2\rangle                                & |1/2,-1/2\rangle \\ \hline

\langle 1/2,1/2|   & 0                                              & \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-\left(-\frac{1}{2}\right)\left((-\frac{1}{2})+1\right)} \\ \hline

\langle 1/2,-1/2|  & 0                                              & 0                               \\ \hline

\end{array}

=\hbar
\begin{pmatrix}
  0 & 1 \\
  0 & 0\\ 
\end{pmatrix}
\end{align}
      \begin{align}
J_- & = \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-m(m-1)}\;\;\;\delta_{\frac{1}{2},\frac{1}{2}} \delta_{m',m-1}\\

&=\begin{array}{r|c|c} 
                   & |1/2,1/2\rangle                                & |1/2,-1/2\rangle \\ \hline

\langle 1/2,1/2|   & 0                                              & 0\\ \hline

\langle 1/2,-1/2|  & \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-\left(\frac{1}{2}\right)\left((\frac{1}{2})-1\right)}   & 0                               \\ \hline

\end{array}

=\hbar
\begin{pmatrix}
  0 & 0 \\
  1 & 0\\ 
\end{pmatrix}
\end{align}


      • The matrices for J_x \! and  J_y \! are given by
      \begin{align}

J_x & = \frac{1}{2}[J_+ + J_- ]

=\frac{1}{2}\left [

\hbar\begin{pmatrix}
  0 & 1 \\
  0 & 0\\ 
\end{pmatrix}
+
\hbar\begin{pmatrix}
  0 & 0 \\
  1 & 0\\ 
\end{pmatrix}

\right ]\\

&=\frac{\hbar}{2}\begin{pmatrix}
  0 & 1 \\
  1 & 0\\ 
\end{pmatrix}\\



J_y & = \frac{1}{2i}[J_+ - J_- ]

=\frac{1}{2i}\left [

\hbar\begin{pmatrix}
  0 & 1 \\
  0 & 0\\ 
\end{pmatrix}
-
\hbar\begin{pmatrix}
  0 & 0 \\
  1 & 0\\ 
\end{pmatrix}

\right ]\\

&=\frac{\hbar}{2i}\begin{pmatrix}
  0 & 1 \\
  -1 & 0\\ 
\end{pmatrix}

=\frac{\hbar}{2}\begin{pmatrix}
  0 & -i \\
  i & 0\\ 
\end{pmatrix}



\end{align}


      Subspace j = 1: (matrix 3x3)

      • \mathbf{J}^{2}=
\begin{array}{r|c|c|c} 
                   & |1,1\rangle      & |1,0\rangle     & |1,-1\rangle  \\ \hline
\langle 1,1|       & 2\hbar^{2}       & 0               &0              \\ \hline
\langle 1,0|       &         0        & 2\hbar^{2}      &0              \\ \hline
\langle 1,-1|      &         0        & 0               &2\hbar^{2}     \\ \hline
\end{array}

=2\hbar^{2}
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 1 & 0\\
  0 & 0 & 1\\ 
\end{pmatrix}
      •  J_z =
\begin{array}{r|c|c|c} 
                   & |1,1\rangle      & |1,0\rangle     & |1,-1\rangle  \\ \hline
\langle 1,1|       & \hbar            & 0               &0              \\ \hline
\langle 1,0|       &         0        & 0               &0              \\ \hline
\langle 1,-1|      &         0        & 0               &-\hbar         \\ \hline
\end{array}

=\hbar
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 0 & 0\\
  0 & 0 & -1\\ 
\end{pmatrix}
      • J_+ \! and  J_- \! are given by
      \begin{align}
J_+ & = \hbar \sqrt{1(1+1)-m(m+1)}\;\;\;\delta_{1,1} \delta_{m',m+1}\\

&=\begin{array}{r|c|c|c} 
                   & |1,1\rangle      & |1,0\rangle                & |1,-1\rangle  \\ \hline
\langle 1,1|       & 0                & \hbar \sqrt{1(1+1)-0(0+1)} &0                                      \\ \hline
\langle 1,0|       &         0        & 0                          &\hbar \sqrt{1(1+1)-(-1)((-1)+1)}              \\ \hline
\langle 1,-1|      &         0        & 0                          &0                                       \\ \hline
\end{array}

=\hbar
\begin{pmatrix}
  0 & \sqrt{2} & 0\\
  0 & 0        & \sqrt{2}\\
  0 & 0        & 0\\ 
\end{pmatrix}

\end{align}
      \begin{align}
J_- & = \hbar \sqrt{1(1+1)-m(m-1)}\;\;\;\delta_{1,1} \delta_{m',m-1}\\

&=\begin{array}{r|c|c|c} 
                   & |1,1\rangle               & |1,0\rangle                & |1,-1\rangle  \\ \hline
\langle 1,1|       & 0                         & 0                          &0              \\ \hline
\langle 1,0|       &\hbar \sqrt{1(1+1)-1(1-1)} & 0                          &0              \\ \hline
\langle 1,-1|      & 0                         &\hbar \sqrt{1(1+1)-0(0-1)}  &0              \\ \hline
\end{array}

=\hbar
\begin{pmatrix}
  0 & 0 & 0\\
  \sqrt{2} & 0        & 0\\
  0 & \sqrt{2}        &0 \\ 
\end{pmatrix}
\end{align}


      • The matrices for J_x \! and J_y \! are given by
      \begin{align}

J_x & = \frac{1}{2}[J_+ + J_- ]

=\frac{1}{2}\left [

\hbar
\begin{pmatrix}
  0 & \sqrt{2} & 0\\
  0 & 0        & \sqrt{2}\\
  0 & 0        & 0\\ 
\end{pmatrix}
+
\hbar
\begin{pmatrix}
  0 & 0 & 0\\
  \sqrt{2} & 0        & 0\\
  0 & \sqrt{2}        &0 \\ 
\end{pmatrix}

\right ]\\

&=\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & 1 & 0\\
  1 & 0 & 1\\
  0 & 1 & 0 \\ 
\end{pmatrix}\\



J_y & = \frac{1}{2i}[J_+ - J_- ]

=\frac{1}{2i}\left [

\hbar
\begin{pmatrix}
  0 & \sqrt{2} & 0\\
  0 & 0        & \sqrt{2}\\
  0 & 0        & 0\\ 
\end{pmatrix}
-
\hbar
\begin{pmatrix}
  0 & 0 & 0\\
  \sqrt{2} & 0        & 0\\
  0 & \sqrt{2}        &0 \\ 
\end{pmatrix}

\right ]\\

&=\frac{\hbar}{\sqrt{2}i}
\begin{pmatrix}
  0 & 1 & 0\\
  -1 & 0 & 1\\
  0 & -1 & 0 \\ 
\end{pmatrix}

=\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & -i & 0\\
  i & 0 & -i\\
  0 & i & 0 \\ 
\end{pmatrix}

\end{align}

      Summary

      The following table is the summary of above calculations.

      

\begin{array}{r|c|c|c|c|c|c|c|c} 
                   & j=0  & j=1/2 &j=1  \\ \hline
\mathbf{J}^{2}    
&0

&\frac{3}{4}\hbar^{2}
\begin{pmatrix}
  1 & 0 \\
  0 & 1 
\end{pmatrix}

&2\hbar^{2}
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 1 & 0\\
  0 & 0 & 1\\ 
\end{pmatrix}\\ \hline
 
J_z

&0
&\frac{\hbar}{2}
\begin{pmatrix}
  1 & 0 \\
  0 & -1 
\end{pmatrix}
&\hbar
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 0 & 0\\
  0 & 0 & -1\\ 
\end{pmatrix}\\ \hline

J_x
&0
&\frac{\hbar}{2}
\begin{pmatrix}
  0 & 1 \\
  1 & 0 
\end{pmatrix}
&\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & 1 & 0\\
  1 & 0 & 1\\
  0 & 1 & 0\\ 
\end{pmatrix}\\ \hline

J_y
&0
&\frac{\hbar}{2}
\begin{pmatrix}
  0 & -i \\
  i & 0 
\end{pmatrix}
&\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & -i & 0\\
  i & 0 & -i\\
  0 & i & 0\\ 
\end{pmatrix}\\ \hline
\end{array}
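As an added numerical check of the summary table (with ħ = 1), the matrices for each subspace should satisfy J² = J_x² + J_y² + J_z² = j(j+1)ħ² times the identity:

```python
import numpy as np

hbar = 1.0
# j = 1/2 matrices from the table
Sx = hbar / 2 * np.array([[0, 1], [1, 0]])
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]])

# j = 1 matrices from the table
Jx = hbar / np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
Jy = hbar / np.sqrt(2) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = hbar * np.diag([1, 0, -1])

# J^2 = j(j+1) hbar^2 * identity in each subspace
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz    # (1/2)(3/2) = 3/4
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz    # (1)(2) = 2
print(np.allclose(S2, 0.75 * np.eye(2)), np.allclose(J2, 2 * np.eye(3)))
```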

      Spin 1/2 Angular Momentum

      Many particles, such as electrons, protons, and neutrons, exhibit an intrinsic angular momentum which, unlike orbital angular momentum, has no relation to the spatial degrees of freedom. These are called spin 1/2 particles. An important concept about spin is that it is a purely quantum mechanical construct, with no classical analog, and it cannot be described by a differential operator. The angular momentum of a stationary spin 1/2 particle is found to be quantized to \pm\frac{\hbar}{2} regardless of the direction of the axis chosen to measure it. This means that there is a vector operator \vec{S}=(S_x, S_y, S_z) whose projection along an arbitrary axis \hat{m} satisfies the following equations:

      \vec{S}\cdot\hat{m}|\hat{m}\uparrow\rangle = \frac{\hbar}{2}|\hat{m}\uparrow\rangle

      \vec{S}\cdot\hat{m}|\hat{m}\downarrow\rangle = -\frac{\hbar}{2}|\hat{m}\downarrow\rangle

      |\hat{m}\uparrow\rangle and |\hat{m}\downarrow\rangle form a complete basis, which means that any state |\hat{n}\uparrow\rangle and |\hat{n}\downarrow\rangle with different quantization axis can be expanded as a linear combination of |\hat{m}\uparrow\rangle and |\hat{m}\downarrow\rangle.

      The spin operator obeys the standard angular momentum commutation relations

      [S_{\mu}, S_{\nu}]=i\hbar\epsilon_{\mu\nu\lambda}S_{\lambda}\Rightarrow [S_{x}, S_{z}]=-i\hbar S_{y}

      The most commonly used basis is the one which diagonalizes \vec{S}\cdot \hat{z} = S_{z}.

      By acting on the states |\hat{z}\uparrow\rangle and |\hat{z}\downarrow\rangle \! with S_z \!, we find S_{z}|\hat{z}\uparrow\rangle = \frac{\hbar}{2}|\hat{z}\uparrow\rangle, and S_{z}|\hat{z}\downarrow\rangle = -\frac{\hbar}{2}|\hat{z}\downarrow\rangle

      Now by acting to the left with another state, we can form a 2x2 matrix.

      \begin{align} S_{z} & =\left( \begin{array}{ll}
\langle\hat{z}\uparrow|S_{z}|\hat{z}\uparrow\rangle & \langle\hat{z}\uparrow|S_{z}|\hat{z}\downarrow\rangle \\ 
\langle\hat{z}\downarrow|S_{z}|\hat{z}\uparrow\rangle & \langle\hat{z}\downarrow|S_{z}|\hat{z}\downarrow\rangle
            \end{array} \right)\\ & =\left(\begin{array}{ll}
\hbar/2 & 0 \\ 
0 & -\hbar/2
      \end{array}\right)\\ & =\dfrac{\hbar}{2}\left(
\begin{array}{ll}
1 & 0 \\ 
0 & -1
      \end{array}\right)\\ &=\dfrac{\hbar}{2}\sigma_{z} \end{align}

      where \sigma_{z}\! is the  z \! component of the Pauli spin matrices. Repeating these steps (or applying the commutation relations), we can solve for the  x \! and  y \! components.

      S_{x}=\dfrac{\hbar}{2}\left(\begin{array}{ll}
0 & 1 \\ 
1 & 0
            \end{array} \right)=\dfrac{\hbar}{2}\sigma_{x}


      S_{y}=\dfrac{\hbar}{2}\left( \begin{array}{ll}
0 & -i \\ 
i & 0
             \end{array} \right)=\dfrac{\hbar}{2}\sigma_{y}


      In this basis,  \vec{S} = \frac{\hbar}{2} \vec{\sigma} \!. It should be noted that a spin lying along an axis may be rotated to any other axis using the proper rotation operator.


      Properties of the Pauli Spin Matrices

      Each Pauli matrix squared gives the identity matrix:

      \sigma_{x}^2=\sigma_{y}^2=\sigma_{z}^2=\left( \begin{array}{ll}
1 & 0 \\ 
0 & 1
            \end{array} \right)

      The commutation relation is as follows

      \mathcal{[\sigma_{\mu}, \sigma_{\nu}]}=2i\epsilon_{\mu\nu\lambda}\sigma_{\lambda}

      and the anticommutator relation

       \{\sigma_{\mu}, \sigma_{\nu} \}= [ \sigma_{\mu}, \sigma_{\nu} ]_+ = \sigma_{\mu}\sigma_{\nu}+\sigma_{\nu}\sigma_{\mu}=2\delta_{\mu\nu} \left( \begin{array}{ll}
1 & 0 \\ 
0 & 1
            \end{array} \right)

      Combining the commutator and anticommutator relations,

      \sigma_{\mu}\sigma_{\nu}=\frac{1}{2}\left\{\sigma_{\mu}, \sigma_{\nu}\right\} + \frac{1}{2}\left[\sigma_{\mu}, \sigma_{\nu}\right]
= \delta_{\mu\nu}\mathbb{I} + i\epsilon_{\mu\nu\lambda}\sigma_{\lambda}

      S_{\mu}S_{\nu}=\dfrac{\hbar^2}{4}\delta_{\mu\nu}+\dfrac{i\hbar}{2}\epsilon_{\mu\nu\lambda}S_{\lambda}

      Note that the above relation holds for spin 1/2\! only.

      In general,

       \begin{align}
(\vec{a} \cdot \vec\sigma)(\vec{b}\cdot\vec\sigma) & =(a_{x}\sigma_{x}+a_{y}\sigma_{y}+a_{z}\sigma_{z})(b_{x}\sigma_{x}+b_{y}\sigma_{y}+b_{z}\sigma_{z})\\ & = a_{\mu}\sigma_{\mu}b_{\nu}\sigma_{\nu}\\ & =a_{\mu}b_{\nu}\sigma_{\mu}\sigma_{\nu}\\ & =a_{\mu}b_{\nu} \left( \left(
\begin{array}{ll}
1 & 0 \\ 
0 & 1
\end{array} \right) \delta_{\mu\nu} + i\epsilon_{\mu\nu\lambda}\sigma_{\lambda} \right)\\ 
& = \left( \begin{array}{ll}
1 & 0 \\ 
0 & 1 
\end{array} \right) \vec{a}\cdot \vec{b} + i(\vec{a} \times \vec{b})\cdot\vec{\sigma} \end{align}
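The Pauli algebra above lends itself to a direct numerical check. This added NumPy sketch verifies the anticommutator relation and the identity (a·σ)(b·σ) = (a·b)𝟙 + i(a×b)·σ for random real vectors:

```python
import numpy as np

sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])
I2 = np.eye(2)

# {sigma_mu, sigma_nu} = 2 delta_{mu nu} * identity
for mu in range(3):
    for nu in range(3):
        anti = sig[mu] @ sig[nu] + sig[nu] @ sig[mu]
        assert np.allclose(anti, 2 * (mu == nu) * I2)

# (a.sigma)(b.sigma) = (a.b) 1 + i (a x b).sigma for arbitrary real vectors
rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)
a_sig = np.einsum('i,ijk->jk', a, sig)
b_sig = np.einsum('i,ijk->jk', b, sig)
assert np.allclose(a_sig @ b_sig,
                   np.dot(a, b) * I2 + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sig))
print("Pauli identities verified")
```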

      Finally, any  2 \times 2 \! matrix can be written in the form

       M=\alpha \left( \begin{array}{ll}
1 & 0 \\ 
0 & 1
\end{array} \right) +\vec\beta \cdot \vec\sigma= \left( \begin{array}{ll}
M_{11} & M_{12} \\ 
M_{21} & M_{22}
\end{array} \right)

      \Rightarrow\alpha=\frac{1}{2}\left(M_{11}+M_{22}\right)

      \Rightarrow\beta_{x}=\frac{1}{2}\left(M_{12}+M_{21}\right)

      \Rightarrow\beta_{y}=\frac{i}{2}\left(M_{12}-M_{21}\right)

      \Rightarrow\beta_{z}=\frac{1}{2}\left(M_{11}-M_{22}\right)
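The decomposition formulas can be sanity-checked numerically. This added sketch takes an arbitrary complex 2×2 matrix, extracts α and β from the formulas above, and reassembles M = α𝟙 + β·σ:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # arbitrary 2x2 matrix

# Coefficients from the formulas above
alpha  = (M[0, 0] + M[1, 1]) / 2
beta_x = (M[0, 1] + M[1, 0]) / 2
beta_y = 1j * (M[0, 1] - M[1, 0]) / 2
beta_z = (M[0, 0] - M[1, 1]) / 2

# Reassemble M = alpha*1 + beta . sigma
reconstructed = alpha * np.eye(2) + beta_x * sx + beta_y * sy + beta_z * sz
assert np.allclose(reconstructed, M)
```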

      For infinitesimal \vec{\alpha}

      Image:Spin.JPG

      \hat{n}=\hat{m}+\vec{\alpha} \times \hat{m} + O(\alpha^2)

      
\Rightarrow \vec{S}\cdot\hat{n}=\vec{S}\cdot\hat{m}+\vec{S} \cdot(\vec{\alpha} \times \hat{m})

      
\Rightarrow S_{\mu}\hat{n}_{\mu} = S_{\mu}\hat{m}_{\mu} + S_{\mu}\epsilon_{\mu\nu\lambda}\alpha_{\nu}\hat{m}_{\lambda}

      Note that, using the previously developed formulas, we find that

      
S_{\mu}\epsilon_{\mu\nu\lambda}=\frac{1}{i\hbar} \left[S_{\nu}, S_{\lambda}\right]

       
\begin{align} 
\Rightarrow \vec{S}\cdot\hat{n} & =\vec{S}\cdot\hat{m}+\frac{1}{i\hbar}[\vec{\alpha}\cdot\vec{S}, \hat{m}\cdot\vec{S}] \\ 
& =\vec{S}\cdot\hat{m}+\frac{i}{\hbar}[\vec{S}\cdot\hat{m}, \vec{S}\cdot\vec{\alpha}]
\end{align}

      To this order in \vec{\alpha}, this equation is equivalent to

      \vec{S}\cdot\hat{n}=e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} \left(\vec{S}\cdot\hat{m}\right) e^{\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}}.

      This equation is exact for any  \vec\alpha \!, not just infinitesimal  \vec\alpha \!, just as in the case of orbital angular momentum.

      Now consider  \vec{S} \cdot \hat{n} \! acting on  e^{-\frac{i}{\hbar} \vec{S} \cdot \vec{\alpha}} \left| \hat{m} \uparrow \right\rangle \!, which is therefore an eigenstate of  \vec{S} \cdot \hat{n} \!:

      
\begin{align}
\vec{S}\cdot\hat{n} \left( e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} |\hat{m} \uparrow\rangle \right) & = e^{-\frac{i}{\hbar} \vec{S}\cdot\vec{\alpha}} \left( \vec{S}\cdot\hat{m} \right) 
\left|\hat{m} \uparrow\right\rangle  \\
& = \frac{\hbar}{2}\left( e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} |\hat{m} \uparrow\rangle \right)
\end{align}


      Another way of expressing the rotation of the spin basis by an angle \vec \alpha about some axis \hat{\alpha} (and the one derived in class) is the following.

      Consider an operator e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} from the previous equation. This can also be written as

      
\begin{align}
e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} & = e^{-\frac{i}{2}\vec{\sigma}\cdot\vec{\alpha}} \\
& = 1 - \frac{i}{2} \vec{\sigma}\cdot\vec{\alpha} + \frac{1}{2}\left(-\frac{i}{2}\vec{\sigma}\cdot\vec{\alpha}\right)^2 + \cdots \\
& = \sum_{n=0}^{\infty} \frac{1}{n!} \left(-\frac{i}{2} \vec{\sigma}\cdot\vec{\alpha}\right)^n \\
& = \sum_{n=0}^{\infty} \frac{(-i)^n}{n! 2^n} |\vec{\alpha}|^n \left( \vec{\sigma}\cdot\hat{\alpha}\right)^n 
\end{align}
.

      Consider, 
\begin{align}
\left( \vec{\sigma}\cdot\hat{\alpha} \right)^2  
& = \left(\sigma_x \alpha_x + \sigma_y \alpha_y + \sigma_z \alpha_z \right)\left(\sigma_x \alpha_x + \sigma_y \alpha_y + \sigma_z \alpha_z\right) \\
& = \left(\alpha_x ^2 + \alpha_y ^2 + \alpha_z ^2 \right) 
+ \alpha_x \alpha_y \left(\sigma_x \sigma_y + \sigma_y \sigma_x\right) 
+ \alpha_x \alpha_z \left(\sigma_x \sigma_z + \sigma_z \sigma_x\right) 
+ \alpha_y \alpha_z \left(\sigma_y \sigma_z + \sigma_z \sigma_y\right) \\
& = 1
\end{align}
.

      The cross terms vanish because the Pauli matrices anticommute, and the squared terms sum to one because \hat{\alpha}\! is a unit vector.

      Therefore, (\vec{\sigma}\cdot\hat{\alpha})^{2n} = 1 ( n \! is an integer), thus the above equation can be split:

      
\begin{align}
e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} 
& = \sum_{n = even}^{\infty} \frac{(-i)^{n}}{n!2^n}\left|\vec{\alpha}\right|^n 
+ \vec{\sigma}\cdot\hat{\alpha} \sum_{n = odd}^{\infty} \frac{(-i)^n}{n!2^n} \left| \vec{\alpha} \right|^n \\
& = \cos\left(\frac{\left|\vec\alpha\right|}{2}\right) - i \vec{\sigma}\cdot\hat{\alpha} \sin\left(\frac{\left|\vec\alpha\right|}{2}\right)
\end{align}

      This form may be more convenient when performing rotations. A solved problem for spins

      A Solved Problem on General Spin Vectors.

      A Comparison of the Modern and Old Quantum Mechanics

      We shall now compare the modern quantum theories (Schrodinger, Dirac, and quantum electrodynamics) with the old quantum theories (Bohr and Sommerfeld).

      One of the most striking aspects of the modern quantum theories is the way they lead progressively to more and more accurate treatments of the hydrogen atom. The Schrodinger theory without electron spin accounts for the energy levels of the atom that are observed in spectroscopic measurements of moderate resolution. Measurements of high resolution reveal the fine-structure splitting of the energy levels. They can be explained almost completely by adding to the Schrodinger theory corrections for the electron spin-orbit interaction and for the relativistic dependence of mass on velocity. They can be explained completely by the Dirac theory. Spectroscopic measurements of very high resolution show the Lamb shift, which can be understood in terms of quantum electrodynamics. Extremely high-resolution measurements show the hyperfine splitting, which can be accounted for in the Schrodinger theory by an interaction involving the nuclear spin. Another great success of the modern quantum theories is their ability to give very satisfactory treatments of the transition rates and selection rules observed in the measurements of the spectra emitted by hydrogen atoms, and all other one-electron and multielectron atoms.

      The record of the old quantum theory is spotty. The Bohr model leads to correct values for the energies of the unsplit hydrogen atom levels. Sommerfeld's relativistic modification of the model agrees with the fine-structure splittings in hydrogen, but the agreement is accidental. The relativistic modification cannot account for the Lamb shift, nor for hyperfine splittings. Furthermore, it disagrees by orders of magnitude with the fine-structure splitting seen in typical multielectron atoms. In fact, the Bohr model itself fails completely to explain many of the most obvious features of the energy levels of multielectron atoms; it is already in serious trouble with the helium atom that contains only two electrons. The old quantum theory is unreliable in explaining selection rules, and incapable of explaining transition rates.

      A particularly helpful feature of the Schrodinger theory is that almost all of the work done in applying it to one-electron atoms carries over directly when it is applied to multielectron atoms. And the theory is certainly accurate enough to explain every important feature of multielectron atoms. Furthermore, it is not very much more complicated to apply Schrodinger quantum mechanics to such atoms than it is to apply it to one-electron atoms. Part of the reason is that most of the electrons in a multielectron atom group together with other electrons to form symmetrical and inert shells in which they do not have to be treated individually. Only the few electrons in the atom which are not in such shells require detailed treatment.

      Addition of Angular Momenta

      Formalism

      To treat the addition of angular momentum, consider two angular momenta,  \vec{J}_1 and  \vec{J}_2 , which belong to two different subspaces.  \vec{J}_1 acts on a Hilbert space of \left(2 j_{1} + 1\right) states, and  \vec{J}_2 acts on a Hilbert space of \left(2 j_{2} + 1\right) states. The total angular momentum is then given by:
      \vec{J}=\vec{J}_{1}+\vec{J}_{2}=\vec{J}_{1}\otimes\mathbb{I}_2 + \mathbb{I}_1\otimes\vec{J}_{2} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad (6.1.1)
      where  \mathbb{I}_1 and  \mathbb{I}_2 are the identity operators on \vec{J}_{1}'s and \vec{J}_{2}'s Hilbert spaces, and where the dimension of the total Hilbert space is \!(2j_1+1)(2j_2+1).

      The components of  \vec{J}_1 and  \vec{J}_2 obey the commutation relation:

      \left[J_{1\mu}, J_{1\nu}\right] = i\hbar\epsilon_{\mu\nu\lambda} J_{1\lambda}
\qquad \qquad
\left[J_{2\mu}, J_{2\nu}\right] = i\hbar\epsilon_{\mu\nu\lambda} J_{2\lambda} \qquad \qquad \qquad \qquad \qquad \qquad\;\ (6.1.2a)

      And since  \vec{J}_1 and  \vec{J}_2 belong to different Hilbert spaces:

       \left[J_{1\mu}, J_{2\nu}\right] = 0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\; (6.1.2b)

      Given the simultaneous eigenkets of  J_1^2 and \!J_{1z} denoted by |j_1 m_1\rangle , and of J_2^2 and \!J_{2z} denoted by |j_2 m_2\rangle we have the following relations:

      J_1^2|j_1 m_1\rangle = j_1(j_1+1)\hbar^2|j_1 m_1\rangle

      J_{1z}|j_1 m_1\rangle = m_1\hbar|j_1 m_1\rangle

      J_2^2|j_2 m_2\rangle = j_2(j_2+1)\hbar^2|j_2 m_2\rangle

      J_{2z}|j_2 m_2\rangle = m_2\hbar|j_2 m_2\rangle

      Now looking at the two subspaces together, the operators  J_1^2,  \!J_{1z},  J_2^2,  \!J_{2z} can be simultaneously diagonalized by their joint eigenstates. These eigenstates can be formed by the direct products of  |j_1 m_1\rangle and  |j_2 m_2\rangle :

       |j_1 m_1\rangle \otimes |j_2 m_2\rangle = |j_1,j_2; m_1,m_2\rangle

      This basis for the total system diagonalizes  J_1^2 \!,  \!J_{1z},  J_2^2 \! ,  \!J_{2z}, but these four operators DO NOT define the total angular momentum of the system. Therefore it is useful to relate these direct product eigenstates to the total angular momentum  \vec{J} = \vec{J}_{1} + \vec{J}_{2}.

      Recall that  J_{z} = J_{1z} + J_{2z} \! and \left[J_{\mu}, J_{\nu}\right] = i\hbar\epsilon_{\mu\nu\lambda} J_{\lambda}.

      We also know the relations: \left[J_{1,2}^2, J^2\right]=0 and \left[J_{1,2}^2, J_{z}\right] = 0 and \left[J^{2}, J_{z}\right] = 0

      This tells us that we have a set of four operators that commute with each other. From this we can specify J_{1}^2 , J_{2}^2 , J^2, and J_{z}\! simultaneously. The joint eigenstates of these four operators are denoted by |j m j_1 j_2\rangle. These four operators operate on the base kets according to:

      J^2|j m j_1 j_2\rangle = \hbar^2 j(j + 1)|j m j_1 j_2\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\;\; (6.1.7)

      J_{z}|j m j_{1} j_{2}\rangle=\hbar m |j m j_{1} j_{2}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\;\;\;\ (6.1.8)

      J_{1,2}^2|j m j_{1} j_{2}\rangle=\hbar^2 j_{1,2}(j_{1,2}+1) |j m j_{1} j_{2}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\ (6.1.9)

      The choice of basis is now dictated by the specific problem being solved because we can find the relationship between the direct product basis and total- J \! basis.

      For example, consider two spin 1/2 particles with basis  |\uparrow\uparrow\rangle, |\uparrow\downarrow\rangle, |\downarrow\uparrow\rangle , and  |\downarrow\downarrow\rangle . These states are eigenstates of J_{1}^2, J_{2}^2, J_{1z},\! and J_{2z}\!, but are they eigenstates of \!J^2 and J_z\!?
      Let us see what happens with the state |\uparrow\downarrow\rangle:

      J^2 |\uparrow\downarrow\rangle = \left(J_{x}^2+J_{y}^2+J_{z}^2\right)|\uparrow\downarrow\rangle = \left((J_{x}+iJ_{y})(J_{x}-iJ_{y})+i[J_{x}, J_{y}] + J_{z}^2 \right)|\uparrow\downarrow\rangle.

      Let's define  J_{\pm}=(J_{x}\pm iJ_{y}), then

       J^2 |\uparrow\downarrow\rangle =\left(J_{+}J_{-}-\hbar J_{z} + J_{z}^2 \right)| \uparrow\downarrow\rangle = \left((J_{1+}+J_{2+})(J_{1-}+J_{2-})+(J_{1z}+J_{2z})^2-\hbar (J_{1z}+J_{2z})\right)|\uparrow\downarrow\rangle

      Now (J_{1z}+J_{2z}) |\uparrow\downarrow\rangle = \left(\frac{\hbar}{2}-\frac{\hbar}{2}\right) |\uparrow\downarrow\rangle = 0

      Also, (J_{1+}+J_{2+})(J_{1-}+J_{2-})|\uparrow\downarrow\rangle = (J_{1+}+J_{2+})\hbar|\downarrow\downarrow\rangle = \hbar^2\left(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\right)

       \therefore J^2 |\uparrow\downarrow\rangle = \hbar^2\left(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\right)

      Which means that |\uparrow\downarrow\rangle is not an eigenstate of \!J^2. Similarly, it can be shown that the other three states are also not eigenstates of \!J^2.


      To find a relationship between the direct product basis and the total- J \! basis, begin by finding the maximum total  m \! state:

      \left|j =1, m=1; \frac{1}{2}, \frac{1}{2}\right\rangle = |\uparrow \uparrow\rangle

      This must be true because  |\uparrow \uparrow\rangle is the only state with  \!m = 1 . Now we can lower this state using  J_{-} \! to yield:

      \left|j =1, m=0; \frac{1}{2}, \frac{1}{2}\right\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow \downarrow\rangle + |\downarrow \uparrow\rangle\right)

      And then lower this state to yield:

      \left|j =1, m=-1; \frac{1}{2}, \frac{1}{2}\right\rangle = |\downarrow \downarrow\rangle

      All we are missing now is the antisymmetric combination of |\uparrow \downarrow\rangle and |\downarrow \uparrow\rangle:

      \left|j =0, m=0; \frac{1}{2}, \frac{1}{2}\right\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle\right)

      We now have a relationship between the two bases. Also, we can write  \mathbf{\frac{1}{2}} \otimes \mathbf{\frac{1}{2}} = \mathbf{1} \oplus \mathbf{0} \! where  \mathbf{1} \! and  \mathbf{0} \! represent the triplet and singlet states respectively.
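      The triplet/singlet decomposition above is easy to check numerically. A minimal sketch (assuming numpy is available, working in units where \hbar = 1) builds J^2 = (\vec{J}_1 + \vec{J}_2)^2 in the four-dimensional product basis of two spin 1/2 particles and diagonalizes it:

```python
import numpy as np

hbar = 1.0  # work in units of hbar

# Single spin-1/2 operators (Pauli matrices times hbar/2)
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Total angular momentum components J = J1 + J2 on the product space
Jx = np.kron(sx, I2) + np.kron(I2, sx)
Jy = np.kron(sy, I2) + np.kron(I2, sy)
Jz = np.kron(sz, I2) + np.kron(I2, sz)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

# Eigenvalues of J^2 are hbar^2 j(j+1): three states with j=1, one with j=0
evals = np.sort(np.linalg.eigvalsh(J2))
print(evals)  # [0. 2. 2. 2.] in units of hbar^2
```

      The single eigenvalue 0 is the singlet and the three eigenvalues 2\hbar^2 = \hbar^2\cdot 1(1+1) are the triplet, as found above by the lowering-operator construction.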

      Problem: CG coefficients[[6]]

      Another problem: CG coefficients[[7]]

      Another worked problem with adding angular momentum[[8]]

      Clebsch-Gordan Coefficients

      Now that we have constructed two different bases of eigenkets, it is imperative to devise a way such that eigenkets of one basis may be written as linear combinations of the eigenkets of the other basis. To achieve this, we write:

      |j m j_1 j_2\rangle = \sum_{m_1,m_2}|j_1 j_2 m_1 m_2\rangle\langle j_1 j_2 m_1 m_2|j m j_1 j_2\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\ (6.2.1)

      In above, we have used the completeness of the basis |j_1 j_2 m_1 m_2\rangle, given by:

      \sum_{m_1,m_2}|j_1 j_2 m_1 m_2\rangle\langle j_1 j_2 m_1 m_2| = 1 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\; (6.2.2)

      The coefficients \langle j_1 j_2 m_1 m_2|j m j_1 j_2\rangle are called Clebsch-Gordan coefficients (for an extensive list of these coefficients, see here), which have the following properties, giving rise to two "selection rules":

      1. If m \neq m_1 + m_2, then the coefficients vanish.

      Proof:  \because J_z = J_{1z} + J_{2z}, we get
      (J_z - J_{1z} - J_{2z})|j m j_1 j_2\rangle = 0
      \Rightarrow \langle j_1 j_2 m_1 m_2|(J_z - J_{1z} - J_{2z})|j m j_1 j_2\rangle = 0
       \therefore (m - m_1 - m_2)\langle j_1 j_2 m_1 m_2|j m j_1 j_2 \rangle = 0 . Q.E.D.

      2. The coefficients vanish, unless  |j_1 - j_2| \le j \le j_1 + j_2

      This follows from a simple counting argument. Let us assume, without any loss of generality, that \! j_1 > j_2 . The dimensions of the two bases should be the same. If we count the dimensions using the |j_1 j_2 m_1 m_2\rangle states, we observe that for any value of \! j , the values of \! m run from \! -j to \! j . Therefore, for \! j_1 and \! j_2 , the number of eigenkets is \! (2j_1 + 1)(2j_2 + 1) . Now, counting the dimensions using the  |j m j_1 j_2 \rangle eigenkets, we observe that, again, \! m runs from \! -j to \! j . Therefore, the number of dimensions is  N = \sum_{j=a}^{b} (2j + 1) . It is easy to see that \! a = j_1 - j_2 and \! b = j_1 + j_2 , so that  N = (2j_1 + 1)(2j_2 +1)\!.
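      This counting identity can be verified directly. A short sketch in Python, using exact fractions so half-integer j values are handled without rounding:

```python
from fractions import Fraction

def total_dim(j1, j2):
    """Sum (2j+1) over j = |j1-j2|, ..., j1+j2 in integer steps."""
    j1, j2 = Fraction(j1), Fraction(j2)
    j = abs(j1 - j2)
    total = 0
    while j <= j1 + j2:
        total += 2 * j + 1
        j += 1
    return total

# check the identity N = (2j1+1)(2j2+1) for a few (j1, j2) pairs
for j1, j2 in [(Fraction(1, 2), Fraction(1, 2)), (1, Fraction(1, 2)), (2, 1)]:
    assert total_dim(j1, j2) == (2 * Fraction(j1) + 1) * (2 * Fraction(j2) + 1)
```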

      Further, it turns out that, for fixed \!j_1, \!j_2 and \!j, coefficients with different values for \!m_1 and \!m_2 are related to each other through recursion relations. To derive these relations, we first note that: J_{\pm}|j m j_1 j_2\rangle = \sqrt{(j \mp m)(j \pm m + 1)}\hbar |j m \pm 1 j_1 j_2\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\ (6.2.3)

      Now we write, J_{\pm}|j m j_1 j_2 \rangle = (J_{1 \pm} + J_{2 \pm}) \sum_{m_1,m_2}|j_1 j_2 m_1 m_2 \rangle \langle j_1 j_2 m_1 m_2|j m j_1 j_2 \rangle \qquad \qquad \qquad \qquad (6.2.4)

      Using equations (6.2.3) and (6.2.4), we get (with  m_1 \to m'_1 ,  m_2 \to m'_2 ):

      
\begin{align} & \sqrt{(j \mp m)(j \pm m + 1)}|j m \pm 1 j_1 j_2 \rangle \\
& = \sum_{m'_1,m'_2} \left( \sqrt{(j_1 \mp m'_1)(j_1 \pm m'_1 + 1)}|j_1 j_2 m'_1 \pm 1 m'_2 \rangle + \sqrt{(j_2 \mp m'_2)(j_2 \pm m'_2 + 1)}|j_1 j_2 m'_1 m'_2 \pm 1 \rangle \right)\langle j_1 j_2 m'_1 m'_2|j m j_1 j_2 \rangle \end{align}


      The Clebsch-Gordan coefficients form a unitary matrix, and by convention, they are all taken real. Any real unitary matrix is orthogonal, as we study below.


      Additionally, another way to calculate the Clebsch-Gordan coefficients would be to go to www.volya.net, which is Dr. Alex Volya's website. Simply click the link on the side under the menu "Science Tools" for "Vector Coupling" and follow the directions for entering the angular momenta. The website will then calculate the Clebsch-Gordan coefficients for you.

      Example

      As an example, let's calculate some Clebsch-Gordan coefficients through applications of  S_{\pm}=S_x\pm iS_y on the states |Sm_s>.


      Let \vec{S} = \vec{S}_1 + \vec{S}_2 be the total angular momentum of two spin 1/2 particles (S_1 = S_2 = 1/2). Calculate the Clebsch-Gordan coefficients

      < m_1 m_2|Sm_s> by successive applications of  S_{\pm}=S_x\pm iS_y on the states |Sm_s>. Work separately in the two subspaces S=1 and S=0.

      In order to find the coefficients for the addition of spin 1/2, we shall use the following relations:

      I  S_{\pm}|Sm_s> = \hbar\sqrt{S(S+1)-m_s(m_s\pm 1)}|Sm_s\pm 1>

      II  S_{1\pm}|m_1m_2> = \hbar\sqrt{S_1(S_1+1)-m_1(m_1\pm 1)}|m_1\pm 1 m_2>

      III S_{2\pm}|m_1m_2> = \hbar\sqrt{S_2(S_2+1)-m_2(m_2\pm 1)}|m_1 m_2\pm 1>

      We shall also use the phase condition

      |S=S_1+S_2,m_s=\pm(S_1+S_2)> = |m_1=\pm S_1,m_2=\pm S_2>

      Note: The states  |S=S_1+S_2,m_s=\pm (S_1+S_2)> are eigenstates of S^2 and S_z, and the corresponding S_z eigenvalues  \lambda_{\pm} = \pm \hbar(S_1+S_2) are nondegenerate. Therefore,

      |S=S_1+S_2,m_s=\pm(S_1+S_2)> = e^{i \phi}|m_1=\pm S_1,m_2=\pm S_2>

      and the phase \phi may be chosen to be \phi = 0.


      (i) Subspace S=1: From the phase condition we immediately have |1,1> = |1/2,1/2> = |++>

      Then, operating with S_{-} = S_{1-} + S_{2-} on both sides of this, and using II and III, we obtain

      S_{-}|1,1>=\hbar\sqrt{1(1+1)-1(1-1)}|1,0>=\hbar\sqrt{2}|1,0>

      S_{-}|1,1>=(S_{1-}+S_{2-})|1/2,1/2>=\hbar|-1/2,1/2>+\hbar|1/2,-1/2>

      Thus,  |1,0>=\frac{1}{\sqrt{2}}(|1/2,-1/2>+|-1/2,1/2>) = \frac{1}{\sqrt{2}}(|+->+|-+>)

      Similarly, operating with S_{-} once again on the state |1,0>, we find

       S_{-}|1,0> = \hbar\sqrt{1(1+1)-0(0-1)}|1,-1>=\hbar\sqrt{2}|1,-1>

       S_{-}|1,0> = \frac{1}{\sqrt{2}}S_{1-}(|1/2,-1/2>+|-1/2,1/2>)+\frac{1}{\sqrt{2}}S_{2-}(|1/2,-1/2>+|-1/2,1/2>) = \frac{\hbar}{\sqrt{2}}(|-1/2,-1/2>+|-1/2,-1/2>) = \frac{2\hbar}{\sqrt{2}}|-1/2,-1/2>

      Therefore, in accordance with the phase condition, |1,-1> = |-1/2,-1/2> = |- ->


      (ii) Subspace S=0: Since m_s = m_1 + m_2 (in this case m_s = 0), we have

      |0,0> = \alpha|1/2,-1/2> + \beta|-1/2,1/2>

      Next, due to the orthonormality of the |Sm_s> basis we get

       <1,0|0,0> = 0 \rightarrow \frac{1}{\sqrt{2}}(\alpha+\beta)=0 \rightarrow \beta=-\alpha

       <0,0|0,0>=1 \rightarrow |\alpha|^2+|\beta|^2=1 \rightarrow 2|\alpha|^2=1 \rightarrow \alpha = \frac{1}{\sqrt{2}} (choosing \alpha real and positive)

      Therefore, we find  |0,0> = \frac{1}{\sqrt{2}}(|1/2,-1/2>-|-1/2,1/2>)
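      These coefficients can also be cross-checked symbolically. SymPy ships a Clebsch-Gordan class in sympy.physics.quantum.cg; the argument order CG(j1, m1, j2, m2, j, m) is assumed here and is worth confirming against its documentation:

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)

def cg(j1, m1, j2, m2, j, m):
    # numerical value of the coefficient <j1 m1; j2 m2 | j m>
    return float(CG(j1, m1, j2, m2, j, m).doit())

# Triplet |1,0> = (|+-> + |-+>)/sqrt(2)
assert abs(cg(half, half, half, -half, 1, 0) - 2 ** -0.5) < 1e-12
assert abs(cg(half, -half, half, half, 1, 0) - 2 ** -0.5) < 1e-12
# Singlet |0,0> = (|+-> - |-+>)/sqrt(2), Condon-Shortley sign convention
assert abs(cg(half, half, half, -half, 0, 0) - 2 ** -0.5) < 1e-12
assert abs(cg(half, -half, half, half, 0, 0) + 2 ** -0.5) < 1e-12
```

      The relative minus sign in the singlet matches the phase convention used in the ladder-operator derivation above.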

      Orthogonality of Clebsch-Gordan Coefficients

      We have the following symmetry:

      \langle j_1 j_2 m_1 m_2 | j_1 j_2 j m  \rangle 
= (-1)^{j-j_1-j_2} \langle j_2 j_1 m_2 m_1 | j_2 j_1 j m \rangle
=  \langle j_2 j_1, -m_2, -m_1 |j_2 j_1 j, -m \rangle

      If we put the coefficients into a matrix, it is real and unitary, meaning \langle j m j_1 j_2 |j_1 j_2 m_1 m_2 \rangle = \langle j_1 j_2 m_1 m_2 |j m j_1 j_2 \rangle ^*

      | j_1 j_2 m_1 m_2 \rangle = \sum_{j,m} |j m j_1 j_2 \rangle \langle j m j_1 j_2 |j_1 j_2 m_1 m_2 \rangle .

      For example,

      |\uparrow_1 \downarrow_2 \rangle = \dfrac{1}{\sqrt{2}}(|10 \rangle + |00 \rangle )

      |\downarrow_1 \uparrow_2 \rangle = \dfrac{1}{\sqrt{2}}(|10 \rangle - |00 \rangle )

      We have the following orthogonality relations:
      \sum_{jm}\langle j_1 m'_1 j_2 m'_2|jmj_1 j_2\rangle \langle jmj_1 j_2 | j_1 m_1 j_2 m_2\rangle = \delta_{m_1 m'_1} \delta_{m_2 m'_2}

      \sum_{m_1 m_2}\langle j m j_1 j_2|j_1 m_1 j_2 m_2\rangle \langle j_1 m_1 j_2 m_2 | j' m' j_1 j_2\rangle = \delta_{j j'} \delta_{m m'}

      Consider the hydrogen atom with spin-orbit coupling given by the following Hamiltonian:

      H'=\dfrac{e^2}{2m^2 c^2 r^3}\vec{L}\cdot\vec{S}

      Recall, the atomic spectrum for bound states

      E_n = -\frac{e^2}{2a_o n^2} where  n=1, 2, 3, ...\!

      The ground state, |1s\rangle, is doubly degenerate: \dfrac{\uparrow\downarrow}{1s}

      First excited state is 8-fold degenerate: \dfrac{\uparrow\downarrow}{2s}\dfrac{\uparrow\downarrow}{}\dfrac{\uparrow\downarrow}{2p}\dfrac{\uparrow\downarrow}{}

      n \!-th state is 2n^2\! fold degenerate.

      We can break apart the angular momentum and spin into its  x, y, z \!-components

      \vec{L}\cdot\vec{S} = L_x S_x + L_y S_y + L_z S_z

      Define lowering and raising operators

      \Rightarrow L_\pm = L_x \pm iL_y

      \Rightarrow S_\pm = S_x \pm iS_y

      \vec{L}\cdot\vec{S} = L_z S_z + \dfrac{1}{2} L_{+} S_{-} + \dfrac{1}{2} L_{-} S_{+}

      For the ground state, (|1s, \uparrow\rangle, |1s, \downarrow\rangle ), nothing happens. Kramers' theorem protects the double degeneracy.

      For the first excited state, (|2s, \uparrow\rangle, |2s, \downarrow\rangle ), once again nothing happens.

      For (|2p, \uparrow\rangle, |2p, \downarrow\rangle ), there is a four fold degeneracy.

      We can express the solutions in matrix form

      \left( \begin{array}{llllll}
\dfrac{\hbar^2}{2} & 0 & 0 & 0 & 0 & 0 \\ 
0 & 0 & 0 & 0 & 0 & 0 \\ 
0 & 0 & 0 & 0 & 0 & 0 \\ 
0 & 0 & 0 & 0 & 0 & 0 \\ 
0 & 0 & 0 & 0 & 0 & 0 \\ 
0 & 0 & 0 & 0 & 0 & \dfrac{\hbar^2}{2}
\end{array} \right)

      But there is a better, exact solution, which we can obtain by adding the angular momenta first.

      \vec{L}\cdot\vec{S} = \frac{1}{2} \left(\vec{L} + \vec{S}\right)^2 -\frac{1}{2}\vec{L}^2 -\frac{1}{2}\vec{S}^2 = \frac{1}{2}\left(J^2 -L^2 - S^2\right)

      add the angular momenta:

      |1s\rangle : l=0, s=\dfrac{1}{2}: 0\otimes \dfrac{1}{2}= \dfrac{1}{2}

      |2s\rangle : l=0, s=\dfrac{1}{2}: 0\otimes \dfrac{1}{2}= \dfrac{1}{2}

      |2p_m, 0 \rangle : l=1, s=\dfrac{1}{2}: 1\otimes \dfrac{1}{2}= \dfrac{3}{2} \oplus \dfrac{1}{2}

      So that

      \vec{L}\cdot\vec{S} \left|j=\dfrac{3}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle =\dfrac{1}{2} \left(\hbar^2\dfrac{3}{2}\dfrac{5}{2}-2 \hbar^2 - \dfrac{3}{4} \hbar^2\right) \left|j=\dfrac{3}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle = \dfrac{\hbar^2}{2} \left| j=\dfrac{3}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle

      \vec{L}\cdot\vec{S} \left|j=\dfrac{1}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle =\dfrac{1}{2} \left(\hbar^2\dfrac{1}{2}\dfrac{3}{2}-2 \hbar^2 - \dfrac{3}{4} \hbar^2\right) \left|j=\dfrac{1}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle = -\hbar^2 \left| j=\dfrac{1}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle

       \left|j=\dfrac{3}{2}, m= \dfrac{3}{2}, l=1, s=\dfrac{1}{2} \right\rangle = \left|l=1, m_l =1 \right\rangle \left|s=\dfrac{1}{2}, m_s = \dfrac{1}{2} \right\rangle

       \left|j= \dfrac{3}{2}, m= \dfrac{3}{2} \right\rangle = \left|m_l =1 \right\rangle \left|m_s = \dfrac{1}{2} \right\rangle

      Define  J_{-} = L_{-} + S_{-} \!

      J_{-} \left|\dfrac{3}{2}, \dfrac{3}{2} \right\rangle = \left(L_{-} + S_{-} \right)\left|l=1, m=1 \right\rangle \left| S=\frac{1}{2}, m_s=\frac{1}{2} \right\rangle


      
\Rightarrow \hbar \sqrt{\dfrac{3}{2} \dfrac{5}{2}- \dfrac{3}{2}\dfrac{1}{2}} \left|\dfrac{3}{2}, \dfrac{1}{2} \right\rangle = \hbar \sqrt{2} \left|l=1, m=0 \right\rangle \left|s=\frac{1}{2}, m_s=\frac{1}{2} \right\rangle + \hbar \sqrt{\dfrac{1}{2} \dfrac{3}{2} - \frac{1}{2}\left(\frac{1}{2}-1\right)}|l=1,m=1\rangle \left| s=\frac{1}{2}, m_s=-\frac{1}{2} \right\rangle

      
\Rightarrow \sqrt{3} \left|\dfrac{3}{2}, \dfrac{1}{2} \right\rangle =  \sqrt{2}\left|1,0 \right\rangle \left|\frac{1}{2}, \frac{1}{2}\right\rangle + \left|1,1 \right\rangle \left| \frac{1}{2}, -\frac{1}{2} \right\rangle

       
\Rightarrow \left|\frac{3}{2}, \frac{1}{2} \right\rangle = \sqrt{\frac{2}{3}}\left|1,0 \right\rangle \left|\dfrac{1}{2}, \frac{1}{2} \right\rangle + \sqrt{\dfrac{1}{3}}\left|1,1 \right\rangle \left| \frac{1}{2},- \dfrac{1}{2} \right \rangle

      Similarly,

       \left|\dfrac{3}{2}, -\dfrac{1}{2} \right\rangle =  \sqrt{\dfrac{2}{3}} \left|1,0 \right\rangle \left| \frac{1}{2},-\dfrac{1}{2} \right\rangle + \sqrt{\dfrac{1}{3}}\left|1,-1 \right\rangle \left| \frac{1}{2}, \dfrac{1}{2} \right\rangle,

       \left|\dfrac{3}{2}, \pm \dfrac{3}{2} \right\rangle =  \left|1, \pm 1 \right \rangle \left| \frac{1}{2}, \pm \dfrac{1}{2} \right\rangle

      We can express as follows:

      
\left|j=\dfrac{1}{2}, m =\dfrac{1}{2} \right\rangle = \alpha \left|1,0 \right\rangle \left| \frac{1}{2}, \dfrac{1}{2} \right\rangle + \beta \left|1,1 \right\rangle \left|\frac{1}{2}, -\dfrac{1}{2} \right\rangle ,

      \left|j=\dfrac{1}{2}, m = -\dfrac{1}{2} \right \rangle = \alpha ' \left|1,0 \right\rangle \left|\frac{1}{2},-\dfrac{1}{2} \right\rangle + \beta ' \left|1,-1 \right\rangle \left| \frac{1}{2}, \dfrac{1}{2} \right\rangle .

      When we project these states on the previously found states, we find that

      \alpha = \dfrac{1}{\sqrt{3}}, \beta = - \sqrt{\dfrac{2}{3}},

      and

      \alpha' = - \dfrac{1}{\sqrt{3}}, \beta' = \sqrt{\dfrac{2}{3}}.
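      The spin-orbit eigenvalues found here, \hbar^2/2 for the four j = 3/2 states and -\hbar^2 for the two j = 1/2 states, can be confirmed by brute force. A sketch with numpy (units \hbar = 1), building \vec{L}\cdot\vec{S} = L_z S_z + \tfrac{1}{2}L_+ S_- + \tfrac{1}{2}L_- S_+ in the six-dimensional 2p product basis:

```python
import numpy as np

hbar = 1.0

# Orbital l=1 operators in the basis {|m_l=1>, |m_l=0>, |m_l=-1>}
Lz = hbar * np.diag([1.0, 0.0, -1.0])
Lp = hbar * np.sqrt(2) * np.diag([1.0, 1.0], k=1)  # raising operator L+
Lm = Lp.T                                          # L- = (L+)^dagger (real)

# Spin-1/2 operators in the basis {|up>, |down>}
Sz = hbar / 2 * np.diag([1.0, -1.0])
Sp = hbar * np.diag([1.0], k=1)
Sm = Sp.T

# L.S = Lz Sz + (L+ S- + L- S+)/2 on the 6-dim product space
LdotS = (np.kron(Lz, Sz)
         + 0.5 * np.kron(Lp, Sm)
         + 0.5 * np.kron(Lm, Sp))

evals = np.sort(np.linalg.eigvalsh(LdotS))
print(evals)  # [-1. -1. 0.5 0.5 0.5 0.5]: two j=1/2 states, four j=3/2 states
```

      This reproduces the multiplicities 2j+1 = 2 and 4 expected from 1 \otimes \frac{1}{2} = \frac{3}{2} \oplus \frac{1}{2}.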

      For a more detailed account of these and other related results, see here.


      Wigner's 3j Symbols

      The addition of angular momentum \mathbf{J}=\mathbf{J}^{(1)}+\mathbf{J}^{(2)} may be rewritten in the form

      \mathbf{J}^{(1)}+\mathbf{J}^{(2)}+\mathbf{J}^{(3)}=0,

      by renaming \mathbf{J}^{(3)}=-\mathbf{J}. Therefore the expansion of \left |\mathit{j_{1}m_{1}} \right \rangle\otimes \left |\mathit{j_{2}m_{2}} \right \rangle in terms of \left |\mathit{jmj_{1}j_{2}} \right \rangle must be related to the direct product of three states \left |\mathit{j_{1}m_{1}} \right \rangle\otimes \left |\mathit{j_{2}m_{2}} \right \rangle\otimes \left |\mathit{j_{3}m_{3}} \right \rangle, where the corresponding total angular momentum is zero. The triple product may be expanded in terms of total angular momentum states \mathbf{J}_{total} by combining the states in pairs in two steps as follows

      \left |\mathit{j_{1}m_{1}} \right \rangle\otimes \left |\mathit{j_{2}m_{2}} \right \rangle\otimes \left |\mathit{j_{3}m_{3}} \right \rangle=\sum_{jm}\left ( |jmj_{1}j_{2}>\otimes|j_{3}m_{3}  \right )\times < jm|m_{1}m_{2}>

      =\sum_{j_{t}m_{t}}\sum_{jm}|j_{t}m_{t};jj_{3};j_{1}j_{2}> \times < j_{t}m_{t}|mm_{3}> < jm|m_{1}m_{2}>


      Now since \mathbf{J}_{total}=0, the only possible values for j_{t} and m_{t} are j_{t} = 0 = m_{t}. Furthermore, the coefficient < 00|mm_{3}> can be non-zero only if j = j_{3} and m = -m_{3}:

      < 00|mm_{3}> =\delta _{j,j_{3}}\delta _{m,-m_{3}}(-1)^{-j_{3}-m_{3}}\sqrt{\frac{1}{2j_{3}+1}}

      Therefore the triple product reduces to

      |j_{1}m_{1}> \otimes |j_{2}m_{2}>\otimes |j_{3}m_{3}>=|00,j_{1}j_{2}j_{3}> \begin{pmatrix}
j_{1} &j_{2}  &j_{3} \\ 
m_{1} &m_{2}  &m_{3} 
\end{pmatrix}

      where the 3j symbol is defined in terms of Clebsch-Gordan coefficients as

      \begin{pmatrix}
j_{1} &j_{2}  &j_{3} \\ 
m_{1} &m_{2}  &m_{3} 
\end{pmatrix}=(-1)^{j_{1}-j_{2}-m_{3}}\sqrt{\frac{1}{2j_{3}+1}}< j_{3},-m_{3}|j_{1}m_{1}j_{2}m_{2}>

      An extra phase of (-1)^{j_{3}+j_{1}-j_{2}} has been absorbed into this definition in order to define a state | 00,j1j2j3 > that is symmetric under permutations of the indices 1,2,3.

      The 3j symbols have some symmetry properties:

      \bullet The overall 3j symbol should be the same (up to a minus sign) if the indices 1,2,3 are interchanged. By using the properties of the Clebsch-Gordan coefficients one finds the following symmetry properties under cyclic and anti-cyclic permutations

      \begin{pmatrix}
j_{1} &j_{2}  &j_{3} \\ 
 m_{1}&m_{2}  &m_{3} 
\end{pmatrix}=\begin{pmatrix}
j_{3} &j_{1}  &j_{2} \\ 
 m_{3}&m_{1}  &m_{2} 
\end{pmatrix}=(-1)^{j_{1}+j_{2}+j_{3}}\begin{pmatrix}
j_{2} &j_{1}  &j_{3} \\ 
 m_{2}&m_{1}  &m_{3} 
\end{pmatrix}

      \bullet Under a reflection \textbf{J}^{(1,2,3)}\rightarrow -\textbf{J}^{(1,2,3)} the magnetic quantum numbers change sign, but the total angular momentum remains zero; therefore the 3j symbol can differ only up to a sign. Indeed one finds \begin{pmatrix}
j_{1} &j_{2}  &j_{3} \\ 
 -m_{1}&-m_{2}  &-m_{3} 
\end{pmatrix}=(-1)^{j_{1}+j_{2}+j_{3}}\begin{pmatrix}
j_{1} &j_{2}  &j_{3} \\ 
 m_{1}&m_{2}  &m_{3} 
\end{pmatrix}

      These symmetry properties together are useful to relate various Clebsch-Gordan coefficients. For example from the knowledge of the coefficients for \left ( \frac{3}{2}\otimes \frac{1}{2}\to 2 \right ) one can obtain the coefficients for \left ( 2\otimes \frac{1}{2}\to \frac{1}{2}\right ), etc.
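      SymPy also provides Wigner 3j symbols (sympy.physics.wigner.wigner_3j, with arguments (j1, j2, j3, m1, m2, m3)); a quick numerical check of the two symmetry properties above, for one illustrative choice of momenta:

```python
from sympy import Rational
from sympy.physics.wigner import wigner_3j

j1, j2, j3 = 1, Rational(1, 2), Rational(3, 2)
m1, m2, m3 = 1, -Rational(1, 2), -Rational(1, 2)  # m1 + m2 + m3 = 0

s = float(wigner_3j(j1, j2, j3, m1, m2, m3))
phase = (-1) ** int(j1 + j2 + j3)  # here j1 + j2 + j3 = 3, so phase = -1

# cyclic permutation of the columns leaves the symbol unchanged
assert abs(float(wigner_3j(j3, j1, j2, m3, m1, m2)) - s) < 1e-12
# anti-cyclic permutation (swap of two columns) picks up (-1)^(j1+j2+j3)
assert abs(float(wigner_3j(j2, j1, j3, m2, m1, m3)) - phase * s) < 1e-12
# reflection m_i -> -m_i picks up the same phase
assert abs(float(wigner_3j(j1, j2, j3, -m1, -m2, -m3)) - phase * s) < 1e-12
```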


      Additionally, another way to calculate the 3j symbols would be to go to www.volya.net, which is Dr. Alex Volya's website. Simply click the link on the side under the menu "Science Tools" for "Vector Coupling" and follow the directions for entering the appropriate values. The website will then calculate the Wigner 3j symbols for you.

      Addition of Three Angular Momenta

      To add three angular momenta \bold J_1, \bold J_2, \bold J_3, first we add \bold J_{12}=\bold J_1 + \bold J_2, and construct the simultaneous eigenstates of \bold J_{1} ^{2}, \bold J_2 ^{2}, \bold J_{12} ^{2}, \bold J_{12z}, \bold J_3 ^{2}, \bold J_{3z}. We write such states as |j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle. Such states can be given in terms of Clebsch-Gordan coefficients and |j_1 j_2 j_3 m_1 m_2 m_3 \rangle (eigenstates of \bold J_1^{2}, \bold J_2^{2}, \bold J_3^{2}, \bold J_{1z}, \bold J_{2z}, \bold J_{3z}):

      |j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle = \sum_{m_1,m_2} |j_1 j_2 j_3 m_1 m_2 m_3 \rangle \langle j_1 j_2 j_3 m_1 m_2 m_3|j_1 j_2 j_{12} m_{12} j_3 m_3\rangle

      Next we add \bold J_{12} to \bold J_3, forming simultaneous eigenstates |j_1 j_2 j_{12} j_3 j m \rangle of  \bold J_1^{2}, \bold J_2^{2}, \bold J_{12}^{2}, \bold J_3^{2}, \bold J^{2}, \bold J_{z}. These are given in terms of the | j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle by

      |j_1 j_2 j_{12} j_3 j m \rangle = \sum_{m_{12},m_{3}} | j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle \langle j_{12} m_{12} j_3 m_3 | j_{12} j_3 j m \rangle

      Therefore, we can construct eigenstates of  \bold J_1^{2}, \bold J_2^{2}, \bold J_{12}^{2}, \bold J_3^{2}, \bold J^{2}, \bold J_{z} in terms of eigenstates of \bold J_1^{2}, \bold J_2^{2}, \bold J_3^{2}, \bold J_{1z}, \bold J_{2z}, \bold J_{3z} as follows:

      |j_1 j_2 j_{12} j_3 j m \rangle = \sum_{m_{1},m_{2},m_{3}} |j_1 j_2 j_3 m_1 m_2 m_3 \rangle \sum_{m_{12}} \langle j_1 j_2 m_1 m_2|j_1 j_2 j_{12} m_{12}\rangle \langle j_{12} m_{12} j_3 m_3 | j_{12} j_3 j m \rangle

      Thus the analogous addition coefficients for three angular momenta are products of Clebsch-Gordan coefficients.

      Note that for addition of two angular momenta, the dimension of Hilbert space is \!(2J_1 + 1)(2J_2 + 1). For three angular momenta, it is \!(2J_1 + 1)(2J_2 + 1)(2J_3 + 1).
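      That the pairwise coupling through j_{12} exhausts the full product space can be checked by the same counting argument used for two momenta. A short sketch, again with exact fractions:

```python
from fractions import Fraction

def allowed_j(ja, jb):
    """Yield |ja-jb|, ..., ja+jb in integer steps."""
    ja, jb = Fraction(ja), Fraction(jb)
    j = abs(ja - jb)
    while j <= ja + jb:
        yield j
        j += 1

def dim_three(j1, j2, j3):
    # couple j1,j2 -> j12, then j12,j3 -> j; count all (j, m) states
    return sum(2 * j + 1
               for j12 in allowed_j(j1, j2)
               for j in allowed_j(j12, j3))

j1, j2, j3 = 1, Fraction(1, 2), Fraction(1, 2)
assert dim_three(j1, j2, j3) == (2 * j1 + 1) * (2 * j2 + 1) * (2 * j3 + 1)
```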


      Addition of three spin half particles [[9]]

      Schwinger's Oscillator model of Angular Momenta

      It is quite apparent that the algebra of angular momentum and its operators have a lot of connection with the algebra of simple harmonic oscillators. The connection between the algebra of addition of two angular momenta and the algebra of two uncoupled oscillators was worked out by J. Schwinger. Let us consider two harmonic oscillators, and label them as plus and minus type. The annihilation and creation operators for the plus-type oscillator are denoted by a_+ and a^\dagger_+ respectively. Similarly, we have a_- and a^\dagger_- for the minus oscillator. We define the number operators N_+ and N_- as follows: N_+ = a^\dagger_+ a_+ , N_- = a^\dagger_- a_-.

      Commutation relations within the two subspaces

      The usual commutation relations for a single harmonic oscillator hold separately within the subspace of each of the two oscillators:

      \left [a_+,a^\dagger _+ \right ] = 1 \qquad \qquad \qquad \left [a_-,a^\dagger _- \right ] = 1 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.1)

      \left [N_+,a^\dagger _+ \right ] = a^\dagger _+ \qquad \qquad \qquad \left [N_-,a^\dagger _- \right ] = a^\dagger _- \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.2)

      \left [N_+,a _+ \right ] = -a _+ \qquad \qquad \qquad \left [N_-,a _- \right ] = -a _- \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (6.7.3)

      However, since the oscillators are uncoupled, the operators belonging to different oscillators commute: \left [a_+,a^\dagger _- \right ] = \left [a_-,a^\dagger _+ \right ] = \left [N_+,N_- \right ] = 0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.4)

      and so on.

      Eigenvalue equations and angular momentum

      Since N_+ and N_- commute, they possess simultaneous eigenkets with eigenvalues n_+ and n_- respectively. The eigenvalue equations for N_\pm can therefore be written as:

      N_+|n_+,n_- \rangle = n_+|n_+,n_- \rangle, \qquad \qquad N_-|n_+,n_- \rangle = n_-|n_+,n_- \rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.5)

      The creation and annihilation operators act on  |n_+,n_- \rangle as follows:

       a_+^\dagger|n_+,n_-\rangle = \sqrt{n_+ +1}|n_+  +1,n_- \rangle, \qquad \qquad a_-^\dagger|n_+,n_- \rangle = \sqrt{n_- +1}|n_+,n_- +1 \rangle \qquad \qquad \qquad \qquad (6.7.6)

       a_+|n_+,n_-\rangle = \sqrt{n_+}|n_+  -1,n_- \rangle, \qquad \qquad a_-|n_+,n_- \rangle = \sqrt{n_-}|n_+,n_- -1 \rangle \qquad \qquad \qquad \qquad \qquad \qquad(6.7.7)

      Next, we define the vacuum ket as:

       a_+|0,0 \rangle = 0, \qquad \qquad \qquad a_-|0,0 \rangle = 0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\qquad \qquad \qquad(6.7.8)

      By applying a_+^\dagger and a_-^\dagger successively to the vacuum ket, the most general eigenkets of N_+ and N_- are obtained:

       |n_+,n_- \rangle = \frac{(a_+^\dagger)^{n_+}(a_-^\dagger)^{n_-}}{\sqrt{n_+!}\sqrt{n_-!}}|0,0 \rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.9)

      Further, on defining

       J_+ = \hbar a_+^{\dagger} a_-, \qquad J_- = \hbar a_- ^{\dagger} a_+ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (6.7.10a)

      and

       J_z = \frac{\hbar}{2}(a_+^{\dagger} a_+ - a_-^{\dagger} a_-) = \frac{\hbar}{2}(N_+ - N_-) \qquad \qquad \qquad \qquad \qquad \qquad (6.7.10b)

      it can be proved that these operators satisfy the usual angular momentum commutation relations:

      \qquad \qquad \left [J_z,J_\pm \right ] = \pm \hbar J_\pm , \qquad \qquad \qquad \left [J_+,J_- \right ] = 2\hbar J_z \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (6.7.11)

      Defining the total N to be

      N = N_+ + N_- = a_+^{\dagger}a_+ + a_-^{\dagger}a_- , \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.12)

      we get \mathbf{J}^{2} = J_z ^{2} + \frac{1}{2}(J_+J_- + J_-J_+) = \frac{\hbar^{2}}{2}N(\frac{N}{2} + 1) \qquad \qquad \qquad \qquad \qquad \qquad(6.7.13)
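      Equations (6.7.10) to (6.7.13) can be checked numerically with truncated oscillator matrices. The sketch below (numpy, units \hbar = 1, with an assumed cutoff of nmax quanta per oscillator) verifies [J_z, J_\pm] = \pm\hbar J_\pm on the whole truncated space, and [J_+, J_-] = 2\hbar J_z on a state away from the cutoff, where the truncation cannot spoil the algebra:

```python
import numpy as np

hbar = 1.0
nmax = 5  # keep oscillator states |0>, ..., |nmax> (assumed truncation)

# annihilation operator of a single truncated oscillator
a = np.diag(np.sqrt(np.arange(1.0, nmax + 1)), k=1)
I = np.eye(nmax + 1)

# a_+ acts on the first tensor factor, a_- on the second
ap, am = np.kron(a, I), np.kron(I, a)

Jp = hbar * ap.T @ am                      # J+ = hbar a+^dag a-
Jm = hbar * am.T @ ap                      # J- = hbar a-^dag a+
Jz = hbar / 2 * (ap.T @ ap - am.T @ am)    # Jz = hbar/2 (N+ - N-)

# [Jz, J+-] = +-hbar J+- holds exactly, even with the truncation
assert np.allclose(Jz @ Jp - Jp @ Jz, hbar * Jp)
assert np.allclose(Jz @ Jm - Jm @ Jz, -hbar * Jm)

# [J+, J-] = 2 hbar Jz, tested on |n+ = 2, n- = 1> (i.e. j = 3/2, m = 1/2)
psi = np.zeros((nmax + 1) ** 2)
psi[2 * (nmax + 1) + 1] = 1.0
assert np.allclose((Jp @ Jm - Jm @ Jp) @ psi, 2 * hbar * Jz @ psi)
```

      Since J_\pm conserve the total quantum number N = n_+ + n_-, states with N well below the cutoff feel no truncation effects, which is why the commutators close exactly on them.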

      Physical Interpretation

      Considering spin up (m = \frac{1}{2}) as one quantum unit of the plus-type oscillator and spin down (m = -\frac{1}{2}) as one quantum unit of the minus-type oscillator, and associating the eigenvalues n_+ and n_- with the numbers of up-spins and down-spins respectively, the meaning of J_+ is that it destroys one unit of spin down, having z-component of angular momentum -\frac{\hbar}{2}, and creates one unit of spin up, having z-component of angular momentum +\frac{\hbar}{2}, thus increasing the total z-component by a unit of \hbar. In the same way, J_- destroys one unit of spin up and creates one unit of spin down, decreasing the total z-component by \hbar. The J_z operator, on the other hand, counts the difference of n_+ and n_- in units of \frac{\hbar}{2}, giving the total z-component of the angular momentum. Making use of equations (6.7.6), (6.7.7) and (6.7.10), the action of J_\pm and J_z on |n_+,n_-\rangle is as follows:

      J_+|n_+,n_-\rangle = \hbar a_+^\dagger a_-|n_+,n_-\rangle = \sqrt{n_-(n_+ + 1)}\hbar|n_+ + 1,n_- -1\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.14a)

      J_-|n_+,n_-\rangle = \hbar a_-^\dagger a_+|n_+,n_-\rangle = \sqrt{n_+(n_- + 1)}\hbar|n_+ - 1,n_- +1\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.14b)

      J_z|n_+,n_-\rangle = \frac{\hbar}{2}(N_+ - N_-)|n_+,n_-\rangle  = \frac{1}{2}(n_+-n_-)\hbar|n_+,n_-\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(6.7.14c)

      It is interesting to note that the total number of spin-half particles, which is equal to the sum of n_+ and n_-, remains unchanged in all the three equations above.


      Connection between oscillator and angular momentum matrix elements

      It may be observed that, on substituting n_+\to j+m and n_-\to j-m, the equations in (6.7.14) reduce to the ones obtained before, in section 5.3, for J_\pm and J_z. Also, the eigenvalue of the \mathbf{J}^{2} operator, defined by equation (6.7.13), becomes \hbar^{2}j(j+1) . This is understandable, because J_\pm and \mathbf{J}^{2} were constructed out of oscillator operators in such a way that they satisfy the usual angular momentum commutation relations.

      Further, using \qquad\qquad\qquad\qquad j\equiv \frac{(n_++n_-)}{2}, m\equiv \frac{(n_+-n_-)}{2}\qquad\qquad\qquad\qquad(6.7.15)

      in equations (6.7.14), the action of J_+ keeps j unchanged and raises m to m + 1. Likewise, the J_- operator lowers m by one unit, without changing j.

      The most general N_+, N_- eigenket from equation (6.7.9) can now be written as:

      |j,m\rangle = \frac{(a_+^\dagger)^{j+m}(a_-^\dagger)^{j-m}}{\sqrt{(j+m)!(j-m)!}}|0,0\rangle \qquad\qquad\qquad\qquad\qquad\qquad(6.7.16)

      In general, as far as transformation properties under rotation are concerned, an object with angular momentum j can be visualized to be made up of 2j spin-half particles, j + m of them with spin-up and the remaining j - m of them with spin-down. The special case m = j, which physically means that, for a given j, the eigenvalue of J_z is as large as possible, is a state with 2j spin-half particles, with all the spins pointing in the positive z-direction. Equation (6.7.16) reduces to

      |j,j\rangle = \frac{(a_+^\dagger)^{2j}}{\sqrt{(2j)!}}|0,0\rangle \qquad\qquad\qquad\qquad\qquad\qquad\qquad(6.7.17)

      There is a difference however, from the point of view of angular momentum addition. From the formalism in section 6.1, spins of 2j spin-half particles can be added to obtain states with angular momenta j, j − 1, j − 2,.... For example, momenta of two spin-half particles can be added to get a total angular momentum of zero as well as one. But in Schwinger's oscillator scheme, only states with angular momentum j can be obtained. Only totally symmetrical states can be constructed by this method. Thus, this method holds good for bosons and helps to examine the properties under rotations of states characterized by j and m, without the information of how such states are built up initially.

      Elementary Applications of Group Theory in Quantum Mechanics

      Symmetry

      Mathematically, a group consists of a set of elements together with an operation that combines any two of the elements to form a third element. The group must also satisfy certain properties: closure, associativity, identity and invertibility.

      G=\lbrace A,B,C \dots\rbrace

      G is a group under the operation \cdot if

      . A\cdot B\in G \quad \forall A,B\in G (closure)

      . \left(A\cdot B\right)\cdot C=A\cdot\left(B\cdot C\right) (associativity)

      . \exists \mathbf I, such that \mathbf I\cdot A=A\cdot \mathbf I=A \quad \forall A (identity)

      . \forall A, \exists A^{-1}, such that A\cdot A^{-1}=A^{-1}\cdot A=\mathbf I (invertibility)
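      As a concrete illustration, the four axioms can be verified exhaustively for a small finite group. A sketch in Python, checking them for the integers \lbrace 0,1,2,3\rbrace under addition modulo 4:

```python
from itertools import product

# elements of Z_4; the operation is addition modulo 4
G = [0, 1, 2, 3]
op = lambda a, b: (a + b) % 4

# closure: A.B is in G for all A, B
assert all(op(a, b) in G for a, b in product(G, G))
# associativity: (A.B).C = A.(B.C)
assert all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(G, G, G))
# identity: I.A = A.I = A, with I = 0
e = 0
assert all(op(e, a) == a == op(a, e) for a in G)
# invertibility: every element has an inverse
assert all(any(op(a, b) == e for b in G) for a in G)
```

      This group is also abelian, since addition commutes; a non-abelian example would require at least six elements.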


      Broad Characteristics of Various Groups

      Discrete and Continuous Groups

      G can be discrete (isolated elements) or continuous (e.g. rotations).

      Examples(discrete group):

      \mathbf I_2=\lbrace\mathbf I,A\rbrace where A^{2}=\mathbf I

      \mathbf I_n=\lbrace n \rbrace, the integers, where the operation “\circ” is defined as n\circ m=n+m

      And continuous group:

      U\left(1\right)=\lbrace e^{i\theta}; \theta\in\lbrack 0,2\pi\rbrack\rbrace, in which e^{i\theta}e^{i\phi}=e^{i\left(\theta+\phi\right)}

      SU\left(2\right): the group of 2\times2 unitary matrices with unit determinant (special unitary group)

      O\left(3\right): the group of all orthogonal transformations in a 3D vector space, i.e. the group of all 3\times3 orthogonal matrices (rotations about the origin together with reflections).

      SU\left(3\right): a group of all unitary matrices with determinant +1\!

      Abelian Group

      If all the elements commute with each other, then the group is "Abelian."

In a group \ G, if AB=BA\ \ \forall A,B\in G,

then the group is said to be Abelian.

Example: The real numbers under addition form an Abelian group. However, the group of invertible N\times N matrices under multiplication is not Abelian.

      Non-Abelian Group

      If all the elements of the group do not commute with each other, then the group is "Non-Abelian."

Example: Matrices in general do not commute, so matrix groups are typically non-Abelian. For example, SU\left(2\right), the group of 2\times2 unitary matrices with unit determinant (special unitary group), is non-Abelian.

      Continuously Connected

A group is called continuously connected if a continuous variation of the group parameters leads from any arbitrary element of the group to any other.

Example: The translation group with elements \vec a = a_{x}\vec\varepsilon_{1}+a_{y}\vec\varepsilon_{2}+a_{z}\vec\varepsilon_{3} possesses three continuous parameters (a_{x},a_{y},a_{z})\!. We can generate each displacement vector in space by continuous variation of these parameters. However, rotations combined with reflections in space [the group \ O(3)] form a continuous but not connected group. Similarly, the Lorentz transformation group in relativity is not connected.


Definition of conjugate elements: B\! is conjugate to A\! if B=XAX^{-1}\! for some X\in G. This property is reciprocal, since A=X^{-1}BX\!.

Collecting all mutually conjugate elements gives a conjugacy class; in this way G can be divided into conjugacy classes.

Compact Group: In a compact group, every infinite sequence a_{n}\! of group elements contains a subsequence that converges to an element of the group, i.e. \lim_{n\to\infty}a_{n}=a,\ a\in G

Example: (a) the group of the translation vectors on a lattice, T = \lbrace a_{n} = n_{1}\varepsilon_{1}+n_{2}\varepsilon_{2}+n_{3}\varepsilon_{3}\rbrace, with n_{1},n_{2},n_{3} \in N.

      Theory of Group Representations

Group representations describe abstract group elements in terms of linear transformations of vector spaces. Usually, the elements of the group are represented by matrices, so that the group operation is represented by matrix multiplication.

Associate a matrix \Gamma\left(A\right) with each A\in G such that

      \Gamma\left(A\right)\Gamma\left(B\right)=\Gamma\left(AB\right)
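The homomorphism property \Gamma(A)\Gamma(B)=\Gamma(AB) can be illustrated concretely. The sketch below (our own example, not from the notes) represents the cyclic group Z_3 by 2\times2 rotation matrices and checks the property for every pair of elements:

```python
# A sketch: representing the cyclic group Z_3 by 2x2 rotation matrices
# (rotations by multiples of 2*pi/3) and checking Gamma(A) Gamma(B) = Gamma(AB).
import numpy as np

def gamma(n):
    """2x2 rotation by 2*pi*n/3, representing the element n of Z_3."""
    t = 2 * np.pi * n / 3
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

for a in range(3):
    for b in range(3):
        # the group operation is addition mod 3
        assert np.allclose(gamma(a) @ gamma(b), gamma((a + b) % 3))
```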


      Group Theory and Quantum Mechanics

      Angular Momentum Algebra and Representation of Angular Momentum Operators

      Irreducible Representation of the Rotation Group

The commutation relations of the angular momentum operators form the Lie algebra of the group SO(3).

      After defining the angular momentum operators, we can study the matrix elements of the rotation operator D(R). Specifying a rotation by \bold{\hat{n}} and \phi, we can define its matrix elements as

D_{{m}'m}^{j}(R) = \langle j,{m}'|e^{-\frac{i\bold{J}\cdot\hat{n}\phi}{\hbar}}|j,m\rangle

These matrix elements are called Wigner functions. Since D(R)|j,m\rangle is an eigenstate of \bold{J}^2, rotations cannot change the j value. The (2j+1)\times(2j+1) matrix formed by D_{{m}'m}^{j}(R) is called the (2j+1)-dimensional irreducible representation of the rotation operator D(R). The rotation matrices with a definite value of j form a group.

      Proof: The product of any two members is also a member, i.e., \sum_{{m}'} D_{{m}''{m}'}^{j}(R_1)D_{{m}'m}^{j}(R_2) = D_{{m}''m}^{j}(R_1R_2)

The identity is a member because it corresponds to no rotation (\phi = 0). Reversing the rotation angle while keeping the direction of the unit vector unchanged gives the inverse of each group member. Note that the rotation matrix is unitary.

      The operator D(R) mixes different values of m, i.e, D(R)| j,m\rangle = \sum_{{m}'} |j,{m}'\rangle D_{{m}'m}^{j}(R)
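The group property of the rotation matrices can be verified numerically. A minimal sketch (our own, with hbar = 1 assumed): for spin-1/2 rotations about a common y-axis, the Wigner matrix is the standard 2x2 real rotation matrix in (beta/2), and two rotations by beta1 and beta2 compose into one rotation by beta1 + beta2:

```python
# Checking D^j(R1) D^j(R2) = D^j(R1 R2) for spin-1/2 rotations about the
# y-axis, where the composed rotation angle is simply beta1 + beta2.
import numpy as np

def d_half(beta):
    """Spin-1/2 Wigner matrix for a rotation by beta about the y-axis."""
    c, s = np.cos(beta / 2), np.sin(beta / 2)
    return np.array([[c, -s],
                     [s,  c]])

b1, b2 = 0.7, 1.9
# product of two members equals the member for the composed rotation
assert np.allclose(d_half(b1) @ d_half(b2), d_half(b1 + b2))
# rotation matrices are unitary
assert np.allclose(d_half(b1) @ d_half(b1).T.conj(), np.eye(2))
```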


      Angular Momentum Addition by Character

Rotation matrices \ e^{{\mathit{i}}{\vec{\omega }\cdot\vec{J}}} are (2\mathit{j}+1)\times (2\mathit{j}+1) matrix functions of the rotation angles \vec{\omega } in the representation of spin \ \mathit{j}. To indicate the representation explicitly, we write them as D_{\mathit{j}}(\vec{\omega }). Let us define the character by \chi _{\mathit{j}}(\vec{\omega })=\mathit{tr}\; D_{\mathit{j}}(\vec{\omega}). For a rotation about the z-axis, the rotation matrix is diagonal:

D_{\mathit{j}}(\phi\hat{z})=\; \mathrm{diag}(e^{\mathit{ij}\phi},e^{\mathit{i(j-1)}\phi},\dots,e^{-\mathit{ij}\phi})

      and the character is easy to compute

      \chi_{\mathit{j}}(\phi)=\sum_{\mathit{m=-j}}^{\mathit{j}}\; e^{\mathit{im\phi}}

      =\frac{\epsilon^{\mathit{j+1}}-\epsilon^{-\mathit{j}}}{\epsilon-1}\; \; \; where\; \; \epsilon=e^{\mathit{i\phi}}

=\frac{\sin\left((\mathit{j}+\frac{1}{2})\phi\right)}{\sin(\frac{\phi}{2})}
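The closed form of the character is easy to verify numerically. A quick sketch (our own; the values j = 2 and phi = 0.83 are arbitrary choices):

```python
# Numerical check of the character formula:
# sum over m of e^{i m phi} equals sin((j + 1/2) phi) / sin(phi / 2).
import numpy as np

j, phi = 2, 0.83
m = np.arange(-j, j + 1)                      # m = -j, ..., j
chi_sum = np.sum(np.exp(1j * m * phi))        # the trace, summed directly
chi_closed = np.sin((j + 0.5) * phi) / np.sin(phi / 2)
assert np.allclose(chi_sum, chi_closed)
```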

      But any rotation may be brought to diagonal form by a similarity transform, so this is the most general character. It depends on the rotation angle, not the direction.

If we tensor together the states  |\mathit{j_{1}m_{1}}\rangle and |\mathit{j_{2}m_{2}}\rangle, they transform under the tensor product representation D_{\mathit{j_{1}}}\times D_{\mathit{j_{2}}}. These matrices have characters which are just products of the elementary characters:

D_{\mathit{j_{1}}}\times D_{\mathit{j_{2}}} :\; \; \; \chi_{\mathit{j_{1}}\times\mathit{j_{2}}}(\phi)=\chi _{\mathit{j_{1}}}(\phi)\chi _{\mathit{j_{2}}}(\phi).

      This expression can then be manipulated into a sum of the irreducible representation characters:

      \chi_{j_{1}}(\phi)\chi_{j_{2}}(\phi)=\left(\sum_{\mathit{m_{2}=-j_{2}}}^{\mathit{j_{2}}}\; \; \epsilon^{\mathit{m_{2}}}\right)\; \frac{\epsilon^{\mathit{j_{1}+1}}-\epsilon^{-\mathit{j_{1}}}}{\epsilon-1}

      =\sum_{\mathit{l=\left | j_{1}-j_{2} \right |}}^{\mathit{j_{1}+j_{2}}}\; \; \frac{\epsilon ^{\mathit{l+1}}-\epsilon^{\mathit{-l}}}{\epsilon -1}

=\chi _{j_{1}+j_{2}}(\phi )+\cdot \cdot \cdot +\chi _{|j_{1}-j_{2}|}(\phi )

      This shows that the product representation is reducible to a sum of the known irreducible representations:

D_{\mathit{j_{1}}}\times D_{_{\mathit{j_{2}}}}\; =\; D_{\mathit{j_{1}+j_{2}}}+\; ...\; +D_{|\mathit{j_{1}-j_{2}}|}.

This is another route to the essential content of the angular momentum addition theorem.
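The character decomposition can be checked numerically for a concrete case. A sketch (our own; the choice j1 = 1, j2 = 1/2, so that l runs over 3/2 and 1/2, is an arbitrary example):

```python
# Numerical check of chi_{j1} chi_{j2} = chi_{j1+j2} + ... + chi_{|j1-j2|}
# for j1 = 1 and j2 = 1/2.
import numpy as np

def chi(j, phi):
    """Character of the spin-j representation for rotation angle phi."""
    m = np.arange(-j, j + 1)
    return np.sum(np.exp(1j * m * phi))

phi = 1.3
lhs = chi(1.0, phi) * chi(0.5, phi)
rhs = chi(1.5, phi) + chi(0.5, phi)    # l = 3/2 and l = 1/2
assert np.allclose(lhs, rhs)
```

This is exactly the statement D_1 x D_{1/2} = D_{3/2} + D_{1/2} at the level of characters.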

      Problems

      7.1) Show that there is only one group of order 3. Solution to 7.1

      Irreducible Tensor Representations and the Wigner-Eckart Theorem

      Representation of Rotations

      If \bold J is the total angular momentum of a system (\bold J = \bold J_1 + \bold J_2), the operator R_{\vec \alpha}=e^{-\frac{i}{\hbar} \bold J \cdot \vec \alpha} acting to the right on a state of the system rotates it in a positive sense about the axis \vec \alpha by an angle |\vec \alpha|. This is similar to how  e^{-\frac{i}{\hbar} \bold L \cdot \vec \alpha} rotates in the plane, and  e^{-\frac{i}{\hbar} \bold S \cdot \vec \alpha} rotates spin states. Suppose that we act with R_{\vec \alpha} on an eigenstate |jm \rangle of \bold J^2 and \bold J_z. This generates a superposition of states. Under the rotation, the state is generally no longer an eigenstate of \bold J_z. However, the rotated state remains an eigenstate of \bold J^2, so the value of  \! j remains the same while the value of  \! m will change. This is because \bold J^2 commutes with every component of \bold J, (\bold J_x,\bold J_y,\bold J_z), and therefore \bold J^2 commutes with R_{\vec \alpha}. Indeed

\bold J^2 R_{\vec \alpha}|jm \rangle = R_{\vec \alpha} \bold J^2 |jm \rangle = j(j+1)\hbar^2 R_{\vec \alpha}|jm \rangle

      Therefore, when we act with the rotation operator on a state, we are only mixing the multiplet. For example, acting on a 3d state with the rotation operator will result in a mixture of only the five 3d states. There will be no mixing of the 3p or 3s states.

Considering \bold J_z, we know \bold J_z will not commute with R_{\vec \alpha} because \bold J_z does not commute with either \bold J_x or \bold J_y:

      \lbrack\bold J_z, \bold R_{\vec \alpha}\rbrack \ne0

      Therefore the rotated state can be expressed as a linear combination of |jm'' \rangle as follows:

      R_{\vec \alpha}|jm \rangle= \sum _{m''=-j} ^{j}|jm'' \rangle d_{m''m} ^{(j)}(\vec \alpha)

      Multiplying this equation on the left by a state  \langle jm'| , and using the orthonormality of the angular momentum eigenstates we find

d_{m'm} ^{(j)}(\vec \alpha) = \langle jm'|e^{-\frac{i}{\hbar} \bold J \cdot \vec \alpha}|jm \rangle

Thus we can associate with each rotation a (2j+1)\times(2j+1) matrix \bold d_{\vec \alpha} ^{(j)} whose matrix elements are d_{m'm} ^{(j)}(\vec \alpha). These matrix elements do not depend on the dynamics of the system; they are determined entirely by the properties of the angular momentum.

      The matrices have a very important property. Two rotations performed in a row, say \vec \alpha followed by \vec \beta, are equivalent to a single rotation, \vec \gamma. Thus

      R_{\vec \gamma}=R_{\vec \beta}R_{\vec \alpha}

Taking matrix elements of both sides and inserting the identity \sum_{j'm'}|j'm'\rangle \langle j'm'| between R_{\vec \beta} and R_{\vec \alpha}, we find

      \langle jm|R_{\vec \gamma}| jm'\rangle=\sum_{m''}\langle jm|R_{\vec \beta}| jm''\rangle \langle jm''|R_{\vec \alpha}| jm'\rangle,

      or

      d_{mm'} ^{(j)}(\vec \gamma)=\sum_{m''}d_{mm''} ^{(j)}(\vec \beta)d_{m''m'} ^{(j)}(\vec \alpha)

      or equivalently

      d ^{(j)}(\vec \gamma)=d ^{(j)}(\vec \beta)d ^{(j)}(\vec \alpha)

      A set of matrices associated with rotations having this property is called a representation of the rotation group.

The rotation operators R_{\vec \alpha} act on the set of states |jm \rangle for fixed j\! in an irreducible fashion. To see what this means, consider the effect of rotations on the combined set of eight states with j=1\! and j=2\!. Under any rotation a j=1\! state becomes a linear combination of j=1\! states, with no j=2\! components; conversely, a j=2\! state becomes a linear combination of j=2\! states with no j=1\! components. Thus this set of eight states splits into two subsets that transform into themselves under rotation with no mixing; one says that the rotations act on these eight states reducibly. On the other hand, for a set of states all with the same j\!, there is no smaller subset of states that transforms only into itself under all rotations; the rotations are said to act irreducibly. Put another way, if we start with any state |jm \rangle, we can rotate it into 2j+1\! linearly independent states, and therefore there cannot be any proper subspace of these 2j+1\! states that transforms only into itself under rotations. One can prove this in detail starting from the fact that one can generate all the |jm \rangle states starting from |jj \rangle by applying J_{-}\! enough times.

      Tensor Operators

Operators having simple transformation properties under rotations are known as tensor operators. By an irreducible tensor operator \bold T^{(k)} of order k we shall mean a set of 2k+1 operators T_q ^{(k)},\;q=\;-k,\;-k+1,....,k-1,\;k that transform among themselves under rotation according to the transformation law:

      R_{\vec \alpha} T_q ^{(k)}R_{\vec \alpha} ^{-1}=\sum_{q'=-k}^{k}T_{q'} ^{(k)}d_{q'q}^{(k)}(\vec \alpha)

If we consider an infinitesimal rotation \vec \epsilon, then

R_{\vec \epsilon } = e^{- \frac {i} {\hbar} \vec J \cdot \vec \epsilon} \approx 1- \frac {i} {\hbar} \vec J \cdot \vec \epsilon

To first order in \vec \epsilon,

T_q ^{(k)}-\frac {i}{\hbar}[\vec J \cdot \vec \epsilon , T_q ^{(k)}]=T_q ^{(k)}- \frac {i}{\hbar} \vec \epsilon \cdot \sum_{q'=-k}^{k}T_{q'} ^{(k)} \langle kq' |\vec J| kq \rangle

Comparing coefficients of \vec \epsilon, we see that tensor operators must obey the following commutation relation with the angular momentum:

      [\vec J , T_q ^{(k)}]=\sum_{q'=-k}^{k}T_{q'} ^{(k)} \langle kq' |\vec J| kq \rangle

      The z component of this relation is

[J _z, T_q ^{(k)}]=\hbar qT_{q} ^{(k)}

      while

[J _{\pm}, T_q ^{(k)}]=\sum_{q'=-k}^{k}T_{q'} ^{(k)}\langle kq' |J_{\pm}| kq \rangle=\hbar T_{q \pm 1} ^{(k)} \sqrt {k(k+1)-q(q \pm 1)}

Tensor operators have many simple properties. For example, T_q ^{(k)} acting on a state |\alpha j_1 m_1 \rangle of a system (\alpha refers to the other quantum numbers) creates a state whose z component of angular momentum is q + m_1. To prove this, let us consider the transformation properties of the state T_q ^{(k)}|\alpha j_1 m_1 \rangle under a rotation about the z axis by \phi :

R_{\phi}T_q ^{(k)}|\alpha j_1 m_1 \rangle = R_{\phi}T_q ^{(k)}R_{\phi}^{-1}R_{\phi}|\alpha j_1 m_1 \rangle = \sum_{q'} T_{q'} ^{(k)} d_{q'q}^{(k)}(\phi) \sum_{m'_{1}}|\alpha j_1 m'_1 \rangle d_{m'_1 m_1}^{(j_1)}(\phi)

but d_{m' m}^{(j)}(\phi)=\delta _{m'm} e^{-im \phi},

so that

R_{\phi}T_q ^{(k)}|\alpha j_1 m_1 \rangle = e^{-i(q+m_1)\phi}T_q ^{(k)}|\alpha j_1 m_1 \rangle

This is exactly the transformation law for an eigenstate of \ J_z with eigenvalue \ q+m_1. Thus T_q ^{(k)} is an operator that increases the eigenvalue of \ J_z by \ q.
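The commutation relations above can be verified on explicit matrices. A sketch (our own, with hbar = 1 assumed): the angular momentum operator J is itself a rank-1 tensor, with spherical components T_{+1} = -J_+ / sqrt(2), T_0 = J_z, T_{-1} = J_- / sqrt(2); we check the relations on the j = 1 matrices:

```python
# Verifying [J_z, T_q] = q T_q and [J_+, T_q] = sqrt(k(k+1) - q(q+1)) T_{q+1}
# (hbar = 1) for the rank-1 tensor built from J itself, in the j = 1 rep.
import numpy as np

s2 = np.sqrt(2)
Jz = np.diag([1.0, 0.0, -1.0])
Jp = s2 * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Jm = Jp.T

T = {+1: -Jp / s2, 0: Jz, -1: Jm / s2}   # spherical components of J

def comm(A, B):
    return A @ B - B @ A

for q in (-1, 0, 1):
    assert np.allclose(comm(Jz, T[q]), q * T[q])            # z-component relation
for q in (-1, 0):                                           # q = +1 is annihilated
    coeff = np.sqrt(1 * 2 - q * (q + 1))                    # k = 1
    assert np.allclose(comm(Jp, T[q]), coeff * T[q + 1])    # raising relation
```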

      The Wigner-Eckart Theorem

The Wigner-Eckart theorem states that in a total angular momentum basis, the matrix element of a tensor operator can be expressed as the product of a factor that is independent of the magnetic quantum numbers and a Clebsch-Gordan coefficient. To see how this is derived, we can start with the matrix element \langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle, where \underbrace{\alpha} represents all the properties of the state not related to angular momentum:

      \langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle = \int d\alpha \langle \underbrace{\alpha \prime} j\prime m\prime |R_{\alpha}^{-1}R_{\alpha}|\underbrace{\tilde{\alpha}} j m\rangle

      \Rightarrow \langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle = \sum_{m_1 ,m_1 \prime} \int d\alpha  \ d_{m_1 \prime m\prime}^{(j\prime )}(\alpha)^{*} d_{m_1 m}^{(j)}(\alpha)\langle \underbrace{\alpha \prime} j\prime m_1 \prime |\underbrace{\tilde{\alpha}} j m_1 \rangle

      Using the orthogonality of rotation matrices, this reduces to

      \langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle = \delta _{jj\prime} \delta _{mm\prime} \sum _{m_1} \frac{\langle \underbrace{\alpha \prime} j\prime m_1|\underbrace{\tilde{\alpha}} j\prime m_1 \rangle}{2j\prime +1}

      Finally, using the fact that |\underbrace{\tilde{\alpha}} j m\rangle = \sum_{q, \tilde{m}} T_{q}^{(k)}|\underbrace{\alpha} \tilde{j} \tilde{m}\rangle\langle k\tilde{j}q\tilde{m}|k\tilde{j}jm\rangle and the orthogonality of the Clebsch-Gordan coefficients, we obtain

      \langle \underbrace{\alpha \prime} j\prime m\prime |T_{q}^{(k)}|\underbrace{\alpha} j m\rangle = \sum_{m_1}\frac{\langle \underbrace{\alpha \prime} j\prime m_1 |\underbrace{\tilde{\alpha}} j\prime m_1 \rangle}{2j\prime +1} \langle kjqm|kjj\prime m\prime \rangle

      Historically, this is written as

      \langle \underbrace{\alpha \prime} j\prime m\prime |T_{q}^{(k)}|\underbrace{\alpha} j m\rangle = \frac{\langle \underbrace{\alpha \prime} j\prime || T_{q}^{(k)}|| \underbrace{\alpha} j \rangle}{\sqrt{2j\prime +1}} \langle kqjm|kjj\prime m\prime \rangle

      where \langle \underbrace{\alpha \prime} j\prime || T_{q}^{(k)}|| \underbrace{\alpha} j \rangle is referred to as the reduced matrix element.

In summary, the Wigner-Eckart theorem states that the matrix elements of spherical tensor operators T_q^{(k)} with respect to the total-\bold J eigenstates |j,m \rangle can be written in terms of the Clebsch-Gordan coefficients,  <kq;jm|j\prime m\prime;kj\rangle, and the reduced matrix elements of T_q^{(k)}, which do not depend on the orientation of the system in space, i.e., have no dependence on \! m\prime , \! m , and \! q : 
\langle \underbrace{\alpha \prime} j\prime m\prime |T_{q}^{(k)}|\underbrace{\alpha} j m\rangle = \frac{\langle \underbrace{\alpha \prime} j\prime || T_{q}^{(k)}|| \underbrace{\alpha} j \rangle}{\sqrt{2j\prime +1}} \langle kqjm|kjj\prime m\prime \rangle

      As an example of how this theory can be useful, consider the example of the matrix element T_{0}^{(1)} = z =r\cos \left(\theta \right) with hydrogen atom states |n \ell m\rangle. Because of the Clebsch-Gordan coefficients, the matrix element \langle n\prime \ell \prime m\prime | T_{0}^{(1)}|n\ell m\rangle is automatically zero unless \displaystyle{m=m\prime} and \displaystyle{\ell \prime = \ell \pm 1} or \displaystyle{\ell}. Also, because z is odd under parity, we can also eliminate the \ell \prime = \ell transition.

      Also, for x=\frac{1}{\sqrt{2}}\left(T_{-1}^{1}-T_{1}^{1}\right), the Wigner-Eckart Theorem reads

      \langle n \ell  m | x|n\ell m\rangle=\frac{1}{\sqrt{2}}\langle n \ell || T^1 || n \ell \rangle\left(C^{\ell m}_{\ell m11}-C^{\ell m}_{\ell m1-1}\right)

The result vanishes, since the Clebsch-Gordan coefficients on the right-hand side are zero.
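The theorem can be illustrated in the simplest nontrivial case. A sketch (our own, with hbar = 1 assumed): J_z is the q = 0 component of a rank-1 tensor, so within a j = 1 multiplet the matrix elements <1 m|J_z|1 m> must equal a single reduced matrix element times the Clebsch-Gordan coefficient <1 m; 1 0|1 m> = m / sqrt(2) (a standard tabulated value):

```python
# Wigner-Eckart in a simple case: the ratio of <1 m|J_z|1 m> to the
# Clebsch-Gordan coefficient <1 m; 1 0|1 m> must be independent of m.
import numpy as np

def cg(m):
    """<1 m; 1 0 | 1 m>, from standard Clebsch-Gordan tables."""
    return m / np.sqrt(2)

ratios = []
for m in (-1, 1):              # skip m = 0, where both sides vanish
    matrix_element = m         # <1 m|J_z|1 m> = m for hbar = 1
    ratios.append(matrix_element / cg(m))

# the common ratio is (proportional to) the reduced matrix element
assert np.allclose(ratios, ratios[0])
assert np.isclose(ratios[0], np.sqrt(2))
```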

      Problem [10]

      EXAMPLE PROBLEM [11]

      Application [12]

      Elements of Relativistic Quantum Mechanics

      General Remarks

      The theory we have been building so far is essentially a non-relativistic one. We have been working all the time with one particular Lorentz frame of reference and have set up the theory as an analogue of the classical non-relativistic dynamics. Let us now try to make the theory invariant under Lorentz transformations, so that it conforms to special relativity. This is necessary if we want the theory to apply to high-speed particles. There is no need to make the theory conform to general relativity, since general relativity is required only when one is dealing with gravitation, and gravitational forces are quite unimportant in atomic phenomena.

      Let us see how the basic ideas of quantum theory can be adapted to the relativistic point of view that the four dimensions of spacetime should be treated on the same footing. The general principle of superposition of states is a relativistic principle, since it applies to 'states' with the relativistic spacetime meaning. However, the general concept of an observable does not fit in, since an observable may involve physical things at widely separated points at one instant of time. Thus, if one works with a general representation referring to any complete set of commuting observables, the theory cannot display the symmetry between space and time required by relativity. In relativistic quantum mechanics one must be content with having one representation which displays this symmetry. One then has the freedom to transform to another representation referring to a special Lorentz frame of reference if it is useful for a particular calculation.

      Relativistic Wave Equations

The description of phenomena at high speeds and energies requires the investigation of relativistic wave equations, i.e. equations which are invariant under Lorentz transformations. The transition from a nonrelativistic to a relativistic description implies that several concepts of the nonrelativistic theory have to be reinvestigated. In particular:

      (1) Spatial and temporal coordinates have to be treated equally within the theory.

      (2) Since, from the Uncertainty principle, we know

      \triangle x \sim \frac{\hbar}{\triangle p} \sim \frac{\hbar}{m_{0} c},

a relativistic particle cannot be localized more accurately than \approx \hbar/{m_{0} c}; otherwise pair creation occurs for E > 2m_{0} c^2\!. Thus, the idea of a free particle only makes sense if the particle is not confined by external constraints to a volume smaller than approximately the Compton wavelength \lambda_c=\hbar/{m_{0} c}. Otherwise, the particle automatically has companions due to particle-antiparticle creation.

      (3) If the position of the particle is uncertain, i.e. if

      \triangle x > \frac{\hbar}{m_{0} c},

      then the time is also uncertain, because

      \triangle t \sim \frac{\triangle x}{c} > \frac{\hbar}{m_{0} c^2}.

In a nonrelativistic theory, \triangle t can be arbitrarily small, because c \to \infty. Thereby, we recognize the necessity of reconsidering the concept of a probability density, which describes the probability of finding a particle at a definite place r\! at a fixed time t\!.

(4) At high energies, i.e. in the relativistic regime, pair creation and annihilation processes occur, usually in the form of particle-antiparticle pairs. Thus, at relativistic energies, particle-number conservation is no longer a valid assumption. A relativistic theory must be able to describe phenomena such as pair creation, vacuum polarization, and particle-antiparticle annihilation.
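The localization scale appearing in point (2) is easy to evaluate numerically. A quick illustration (our own; CODATA values for the electron are used):

```python
# The (reduced) Compton wavelength hbar / (m0 c), below which a relativistic
# particle cannot be localized, evaluated for the electron.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
c = 2.99792458e8         # m / s

lambda_c = hbar / (m_e * c)
assert 3.8e-13 < lambda_c < 3.9e-13   # about 3.86e-13 m
```

For the electron this is a few hundred femtometers, far below atomic scales, which is why the nonrelativistic theory works so well for atoms.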

In nonrelativistic quantum mechanics, the states of particles are described by the Schrodinger equation:

      i\hbar\frac{\partial\psi(\bold r, t)}{\partial t}=\left(-\frac{\hbar^2}{2m}\nabla^2+V(\bold r, t)\right)\psi(\bold r, t)

The Schrodinger equation is a first-order differential equation in time. However, it is second order in space and is therefore not invariant under Lorentz transformations. As mentioned above, in relativistic quantum mechanics the equation describing the states must be invariant under Lorentz transformations. To satisfy this condition, the equation of state must contain derivatives of the same order with respect to time and space. The equations of state in relativistic quantum mechanics are the Klein-Gordon equation (for spinless particles) and the Dirac equation (for spin \frac {1}{2} particles). The former contains second-order derivatives while the latter contains first-order derivatives with respect to both time and space. The way to derive these equations is similar to that of the Schrodinger equation: making use of the correspondence principle, one starts from the equation connecting energy and momentum and substitutes E \to i\hbar \frac {\partial}{\partial t} and \bold p \to -i\hbar \nabla.

      Here is a link to Klein-Gordon equation [[13]]

      Here is a link to Dirac equation [[14]]

Here is another link to Dirac equation [[15]]

      Here is a worked problem for a free relativistic particle.

      Here is a worked problem to review the use of relativistic 4-vectors: relativistic 4-vectors

      Relativistic Quantum Mechanics and the Dirac Equation

      Starting from the relativistic relation between energy and momentum:

      E^2=\vec p \; ^{2}c^2+m^2c^4

      or E=c\sqrt{p^2+m^2c^2}

From this equation we cannot directly replace  E, \vec p by the corresponding operators, since we do not have a definition for the square root of an operator. Therefore, we first need to linearize this equation as follows:

      E=c\sqrt{p^2+m^2c^2}=c\sqrt{(p_{x}^2+p_{y}^2+p_{z}^2)+m^2c^2}=c(\alpha _{x}p_{x}+\alpha _{y}p_{y}+\alpha _{z}p_{z})+\beta mc^2

where \alpha _{x},\alpha _{y},\alpha _{z} and \beta are some operators independent of \vec p.

      From this it follows that:

      c^2(p_{x}^2+p_{y}^2+p_{z}^2+m^2c^2)=[c(\alpha _{x}p_{x}+\alpha _{y}p_{y}+\alpha _{z}p_{z})+\beta mc^2] . [c(\alpha _{x}p_{x}+\alpha _{y}p_{y}+\alpha _{z}p_{z})+\beta mc^2]

      Expanding the right hand side and comparing it with the left hand side, we obtain the following conditions for \bold \alpha _{x},\alpha _{y},\alpha _{z} and \bold \beta :

      \alpha _{i}^2=\beta ^2=1 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ (1)

      \bold \alpha_ {i}\alpha_ {j}+\alpha_ {j}\alpha_ {i}=\{\alpha_ {i},\alpha_ {j}\}=2\delta_{ij} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \ \ \ \ (2)

      \bold \alpha_ {i} \beta+\beta \alpha_ {i}=\{\alpha_ {i},\beta\}=0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \ \ \ \ \ (3)

      where i = 1,2,3 corresponds to x,y,z

In order to describe both particles (positive energy states) and antiparticles (negative energy states), each with spin-up and spin-down states, the wave function must have 4 components, and all operators acting on such states correspond to 4\times4 matrices. Therefore, \alpha _{x},\alpha _{y},\alpha _{z} and \beta are 4\times4 matrices. It is conventional to choose these matrices as follows (written as block matrices for brevity):

      \alpha_{x}=\left(\begin{array}{cc}0& \sigma_{x}\\ \sigma_{x}&0\end{array}\right); \qquad  \alpha_{y}=\left(\begin{array}{cc}0& \sigma_{y}\\ \sigma_{y}&0\end{array}\right); \qquad \alpha_{z}=\left(\begin{array}{cc}0& \sigma_{z}\\ \sigma_{z}&0\end{array}\right); \qquad \beta= \left(\begin{array}{cc}1&0\\0&-1\end{array}\right) \qquad (4)
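Conditions (1)-(3) can be verified directly for the matrices in Eq. (4). A sketch using numpy:

```python
# Verifying alpha_i^2 = beta^2 = 1, {alpha_i, alpha_j} = 2 delta_ij, and
# {alpha_i, beta} = 0 for the Dirac matrices built from the Pauli matrices.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

def block(a, b, c, d):
    """Assemble a 4x4 matrix from 2x2 blocks."""
    return np.block([[a, b], [c, d]])

alphas = [block(Z2, s, s, Z2) for s in (sx, sy, sz)]
beta = block(I2, Z2, Z2, -I2)
I4 = np.eye(4)

for i, ai in enumerate(alphas):
    assert np.allclose(ai @ ai, I4)                     # (1) alpha_i^2 = 1
    assert np.allclose(ai @ beta + beta @ ai, 0)        # (3) {alpha_i, beta} = 0
    for j, aj in enumerate(alphas):
        anti = ai @ aj + aj @ ai
        assert np.allclose(anti, 2 * (i == j) * I4)     # (2) {alpha_i, alpha_j} = 2 delta_ij
assert np.allclose(beta @ beta, I4)                     # (1) beta^2 = 1
```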

      Continuity Equation

The Dirac equation and its adjoint equation are:

      i \hbar \frac {\partial \psi}{\partial t}=(-i \hbar c \vec \alpha \vec \nabla + mc^2 \beta) \psi

      -i \hbar \frac {\partial \psi ^{\dagger}}{\partial t}=(i \hbar c \vec \nabla  \psi ^{\dagger} \vec \alpha + mc^2 \psi ^{\dagger} \beta )

Multiplying the Dirac equation by \psi ^{\dagger} from the left and the adjoint equation by \psi from the right, we get:

      i \hbar \psi ^{\dagger} \frac {\partial \psi}{\partial t}=-i \hbar c \psi ^{\dagger} \vec \alpha \vec \nabla \psi+ mc^2 \psi ^{\dagger} \beta \psi

      -i \hbar \frac {\partial \psi ^{\dagger}}{\partial t} \psi=i \hbar c \vec \nabla  \psi ^{\dagger} \vec \alpha \psi+ mc^2 \psi ^{\dagger} \beta \psi

      Subtracting one from the other, we get:

i \hbar \left ( \psi ^{\dagger} \frac {\partial \psi}{\partial t} + \frac {\partial \psi ^{\dagger}}{\partial t} \psi \right )=-i \hbar c \left [ \psi ^{\dagger} \vec \alpha \cdot \vec \nabla \psi + (\vec \nabla  \psi ^{\dagger}) \cdot \vec \alpha \psi \right ] = -i \hbar c \vec \nabla \cdot \left ( \psi ^{\dagger} \vec \alpha \psi \right )

\Rightarrow \frac {\partial}{\partial t} \left ( \psi ^{\dagger} \psi \right )+ \vec \nabla \cdot \left ( c \psi ^{\dagger} \vec \alpha \psi \right ) = 0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (11)

      Therefore, we can define:

      \rho = \psi ^{\dagger} \psi as probability density (12)

      \vec j = c \left ( \psi ^{\dagger} \vec \alpha \psi \right ) as probability current density (13)

Here \sigma_{x}, \;\sigma_{y}, \;\sigma_{z} in Eq. (4) are the 2\times2 Pauli matrices.

      Let's define:

      \vec \alpha=\alpha _{x} \hat x+\alpha _{y} \hat y+\alpha _{z} \hat z

      Then we can write:

      E=c \vec \alpha \vec p +\beta mc^2

      Substituting all quantities by their corresponding operators, we obtain Dirac equation:

      i \hbar \frac {\partial \psi}{\partial t}=H_{D} \psi \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (5)

      where H_{D}=c \vec \alpha \vec p + \beta mc^2

The Dirac equation can also be written explicitly as follows:

      i \hbar \frac {\partial \psi_{1}}{\partial t}=c(p_{x}-ip_{y}) \psi _{4}+cp_{z} \psi _{3} + mc^2 \psi _{1} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (6)

      i \hbar \frac {\partial \psi_{2}}{\partial t}=c(p_{x}+ip_{y}) \psi _{3}-cp_{z} \psi _{4} + mc^2 \psi _{2} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (7)

      i \hbar \frac {\partial \psi_{3}}{\partial t}=c(p_{x}-ip_{y}) \psi _{2}+cp_{z} \psi _{1} - mc^2 \psi _{3} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (8)

      i \hbar \frac {\partial \psi_{4}}{\partial t}=c(p_{x}+ip_{y}) \psi _{1}-cp_{z} \psi _{2} - mc^2 \psi _{4} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (9)

In the presence of an electromagnetic field, the Dirac equation becomes:

      (i \hbar \frac {\partial }{\partial t} -e \phi) \psi = \left [ c \vec \alpha (\frac {\hbar}{i} \vec \nabla - \frac {e}{c} \bold A)+\beta mc^2 \right ]\psi \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \ \ \ \ \ (10)


      Free Particle Solution

Substituting \psi(x,y,z,t)=\psi (\vec r, t)=\psi _{0}(\vec r)e^{(-i/\hbar)Et} into (5), we get the time-independent Dirac equation:

      E \psi_{0}(\vec r)=(c \vec \alpha \vec p +mc^2 \beta)\psi_{0}(\vec r)

Let us seek plane wave solutions with momentum \vec p:

      \psi_{0}(\vec r)=u e^{(i/ \hbar) \vec p \vec r}

      u satisfies the following equation:

      Eu=(c \vec \alpha \vec p +mc^2 \beta)u

      u can be written as follows:

      u= \left[\begin{array}{cc}u_{1}\\u_{2}\\u_{3}\\u_{4}\end{array}\right]=\left(\begin{array}{cc}W\\W'\end{array}\right)

      W= \left(\begin{array}{cc}u_{1}\\u_{2}\end{array}\right) \qquad W'= \left(\begin{array}{cc}u_{3}\\u_{4}\end{array}\right)

      The equation for u can be rewritten as:

      E\left(\begin{array}{cc}W\\W'\end{array}\right)= \left [ \left(\begin{array}{cc}0&c \vec \sigma \vec p \\c \vec \sigma \vec p & 0\end{array}\right)+\left(\begin{array}{cc}mc^2&0 \\0 & -mc^2\end{array}\right)\right ]\left(\begin{array}{cc}W\\W'\end{array}\right)

      or

      \left\{\begin{array}{cc}(E-mc^2)W-c \vec \sigma \vec p W'=0&\\-c \vec \sigma \vec p W+(E+mc^2)W'=0\end{array}\right.

      Condition for non-trivial solutions:

      \left|\begin{array}{cc}E-mc^2&-c \vec \sigma \vec p\\-c \vec \sigma \vec p&E+mc^2\end{array}\right|=0

      \Rightarrow E^2=c^2 \vec p \; ^{2} +m^2c^4 \Rightarrow E_{\pm} = \pm c \sqrt {\vec p \; ^{2} +m^2c^2} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (14)

So, for a given value of the momentum there are two values of the energy, one with positive sign and the other with negative sign.
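The spectrum (14) can be checked by direct diagonalization. A sketch (our own; units with c = m = 1 are assumed, and the momentum is taken along z):

```python
# Checking Eq. (14): for p along z, the Dirac Hamiltonian c alpha.p + beta m c^2
# has doubly degenerate eigenvalues +/- sqrt(p^2 c^2 + m^2 c^4).
import numpy as np

sz = np.diag([1.0, -1.0])
I2, Z2 = np.eye(2), np.zeros((2, 2))
alpha_z = np.block([[Z2, sz], [sz, Z2]])
beta = np.block([[I2, Z2], [Z2, -I2]])

p = 0.75                                  # arbitrary momentum (c = m = 1)
H = p * alpha_z + beta                    # c alpha_z p_z + beta m c^2
E = np.sqrt(p**2 + 1)
eigs = np.sort(np.linalg.eigvalsh(H))     # H is Hermitian
assert np.allclose(eigs, [-E, -E, E, E])
```

The double degeneracy at each sign of the energy corresponds to the two spin states.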

Substituting these values of the energy into the equations for W and W' yields:

      W = \frac {c \vec \sigma \vec p}{E-mc^2} W' \qquad for \qquad E=E_{-}

      W' = \frac {c \vec \sigma \vec p}{E+mc^2} W \qquad for \qquad E=E_{+}

      Choosing the momentum along z direction, the wave functions can be written as follows:

      u_{E_{-}}=\left[\begin{array}{cc} \frac {cp_{z}A}{E_{-}-mc^2} \\ - \frac {cp_{z}B}{E_{-}-mc^2} \\A\\B \end{array}\right] \qquad where \qquad W' = \left(\begin{array}{cc}A\\B\end{array}\right)

      u_{E_{+}}=\left[\begin{array}{cc}D\\F\\ \frac {cp_{z}D}{E_{+}+mc^2} \\ - \frac {cp_{z}F}{E_{+}+mc^2} \end{array}\right] \qquad where \qquad W = \left(\begin{array}{cc}D\\F\end{array}\right)

The general solution for free particles is as follows:

\psi (\vec r,t)= \int u_{E_{-}}(A,B)e^{(i/\hbar)(-E_{-}t+\vec p \cdot \vec r)}d \vec p + \int u_{E_{+}}(D,F)e^{(i/\hbar)(-E_{+}t+\vec p \cdot \vec r)}d \vec p \qquad \qquad \qquad \qquad \qquad (15)

A, B, D, F can be determined from the initial conditions.


      Nonrelativistic Limit

In this limit, v \ll c, \ E_{\pm} \approx \pm mc^2 and:

      u_{E_{-}} \approx \left[\begin{array}{cc}0\\0\\A\\B \end{array}\right] \qquad u_{E_{+}} \approx \left[\begin{array}{cc}D\\F\\0\\0 \end{array}\right]

Four independent solutions of the Dirac equation can be chosen as follows:

      u_{1}=\left[\begin{array}{cc}1\\0\\0\\0\end{array}\right] \qquad for \;\; E=mc^2 \;\; ; \;\; spin-up

      u_{2}=\left[\begin{array}{cc}0\\1\\0\\0\end{array}\right] \qquad for \;\; E=mc^2 \;\; ; \;\; spin-down

      u_{3}=\left[\begin{array}{cc}0\\0\\1\\0\end{array}\right] \qquad for \;\; E=-mc^2 \;\; ; \;\; spin-up

      u_{4}=\left[\begin{array}{cc}0\\0\\0\\1\end{array}\right] \qquad for \;\; E=-mc^2 \;\; ; \;\; spin-down


      Spin Operators

      Let the spin matrix \vec{\sigma} \prime be defined as

      \vec{\sigma} \prime = \left(\begin{array}{cc}  \vec{\sigma}&0\\ 0& \vec{\sigma}\end{array}\right)

      and the spin-1/2 operator \vec S be defined as

      \vec S = \frac{\hbar}{2} \vec{\sigma} \prime

      Using the spin-1/2 operator, we can determine many things from the Dirac equation. One of the more important things we can deduce is the g-factor that arises in the presence of a magnetic field. Starting with the time-independent Dirac equation in an electromagnetic field we can obtain

      [(E - e\phi)^2 - (c\vec p - e\vec A)^2 - m^2 c^4 + e\hbar c \vec{\sigma} \prime \cdot \vec H + ... ]\psi = 0

where \vec H = \vec{\nabla} \times \vec A. Now let E\prime = E - mc^2. Then (E - e\phi )^2 - m^2 c^4 \approx 2mc^2 (E\prime - e\phi), neglecting terms quadratic in the small quantity E\prime - e\phi. Thus,

      \Rightarrow E\prime \psi \approx [\frac{1}{2m}(\vec p - \frac{e}{c} \vec A )^2 +e\phi -2 \frac{e}{2mc} \vec S \cdot \vec H ] \psi

Since \frac{e\hbar}{2mc} = \mu _B, the factor of 2 multiplying \frac{e}{2mc} \vec S \cdot \vec H is the gyromagnetic ratio of the electron: the Dirac equation predicts g = 2.

      Another thing we can determine from the Dirac equation is the properties of spin-orbit coupling. First, let e\phi (\vec r) \rightarrow V(\vec r). Simplifying our problem by allowing  \psi = \left(\begin{array}{c} \psi _1 \\ \psi _2 \end{array}\right), where both \displaystyle{\psi _1} and \displaystyle{\psi _2} have two entries each, and using E = E\prime + mc^2, the Dirac equation tells us that

      (E\prime - V)\psi _1 - c\vec{\sigma} \cdot \vec p \psi _2 = 0

      (E\prime + 2mc^2 - V)\psi _2 - c\vec{\sigma} \cdot \vec p \psi _1 =0

      \Rightarrow \psi _2 = \frac{c\vec{\sigma} \cdot \vec p}{E\prime +2mc^2 -V} \psi _1

      \Rightarrow E\prime \psi _1 = \frac{1}{2m}(\vec{\sigma} \cdot \vec p)[1+\frac{E\prime -V}{2mc^2}]^{-1}(\vec{\sigma} \cdot \vec p) \psi _1 + V \psi _1

Now let's simplify this expression a bit. We can expand the inverse factor here so that [1+\frac{E\prime - V}{2mc^2}]^{-1} \approx 1-\frac{E\prime - V}{2mc^2}. We also know that \vec p V = V\vec p - i\hbar \vec{\nabla V}, (\vec{\sigma} \cdot \vec p)(\vec{\sigma} \cdot \vec p) = p^2, and (\vec{\sigma} \cdot \vec{\nabla V})(\vec{\sigma} \cdot \vec{p}) = \vec{\nabla V} \cdot \vec{p} + i\vec{\sigma} \cdot [\vec{\nabla V} \times \vec{p}]. Thus

      \Rightarrow E\prime \psi _1 = [(1-\frac{E\prime - V}{2mc^2})\frac{p^2}{2m} + V]\psi _1 -\frac{\hbar ^2}{4m^2 c^2} \vec{\nabla V} \cdot \vec{\nabla \psi _1} + \frac{\hbar}{4m^2 c^2}\vec{\sigma} \cdot [\vec{\nabla V} \times \vec{p} \psi _1]

      Assuming \displaystyle{V(\vec{r})} is a central potential, we can further simplify this expression by noting that \vec{\nabla V} \cdot \vec{\nabla} = \frac{dV}{dr}\frac{\partial}{\partial r} and \vec{\nabla V} = \frac{1}{r}\frac{dV}{dr}\vec{r}. We also take note that \displaystyle{E\prime - V \sim \frac{p^2}{2m}}. Thus

      \Rightarrow E\prime \psi _1 = [\frac{p^2}{2m} - \frac{p^4}{8m^3 c^2} + V - \frac{\hbar ^2}{4m^2 c^2}\frac{dV}{dr}\frac{\partial}{\partial r} + \frac{\hbar}{4m^2 c^2}\frac{1}{r}\frac{dV}{dr}\vec{\sigma} \cdot (\vec{r} \times \vec{p})]\psi _1

Finally, noting that \vec{\sigma} \cdot (\vec{r} \times \vec{p}) = \frac{2}{\hbar}\vec{S_0} \cdot \vec{L}, where \displaystyle{\vec{S_0}} is the spin-1/2 operator for \displaystyle{\vec{\sigma}}, we see that

      \Rightarrow E\prime \psi _1 = [...+\frac{1}{2m^2 c^2}\frac{1}{r}\frac{dV}{dr}\vec{S_0} \cdot \vec{L}]\psi _1

      Thus, the Dirac equation tells us how spin and the angular momentum of the particle interact in the relativistic limit.
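The Pauli-matrix identities used in this derivation can be verified numerically. The sketch below (Python with NumPy; the random test vectors are assumed values) checks (\vec\sigma\cdot\vec a)(\vec\sigma\cdot\vec b) = (\vec a\cdot\vec b)\,1 + i\vec\sigma\cdot(\vec a\times\vec b):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

def dot(v):
    """sigma . v for an ordinary 3-vector v."""
    return sum(vi * si for vi, si in zip(v, sigma))

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

lhs = dot(a) @ dot(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * dot(np.cross(a, b))
assert np.allclose(lhs, rhs)
# Special case used above: (sigma.p)(sigma.p) = p^2
assert np.allclose(dot(a) @ dot(a), np.dot(a, a) * np.eye(2))
```

Setting a = ∇V and b = p gives exactly the spin-orbit identity quoted in the derivation.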


      Dirac Hydrogen Atom

As with the Klein-Gordon equation, the Dirac equation can be solved exactly for the Coulomb potential. Starting with definitions, for the Coulomb potential, \displaystyle{V(\vec{r}) = -\frac{Ze^2}{r}}. Also, let p_r = -i\hbar \frac{1}{r}\frac{\partial}{\partial r} r, \alpha _r = (\vec{\alpha} \cdot \frac{\vec{r}}{r}), \hbar k = \beta(\vec{\sigma} \prime \cdot \vec{L} + \hbar ), and \vec{\alpha} \cdot \vec{p} = \alpha _r p_r + \frac{i\hbar}{r} \alpha _r \beta k. Using the condition that \displaystyle{E^2 = c^2 p^2 + m^2 c^4} in order to find conditions for these new components, we see that \alpha _r ^2 = \beta ^2 = 1 and \displaystyle{\{ \alpha _r , \beta \} = 0}. Thus,

      H_D = c\alpha _r p_r + \frac{i\hbar c}{r} \alpha _r \beta k + \beta mc^2 + V

It is easy to see that \displaystyle{[k, H_D] = 0}. Also, notice that \hbar ^2 k^2 =(\vec{\sigma}\prime \cdot \vec{L})^2 + 2\hbar (\vec{\sigma}\prime \cdot \vec{L}) + \hbar ^2 = \vec{J}^2 +\frac{\hbar ^2}{4} = \hbar ^2 j(j+1) +\frac{\hbar ^2}{4} = \hbar ^2 (j+\frac{1}{2})^2, which implies that \displaystyle{|k|=j+\frac{1}{2}}. With this in mind, we use the simplification \psi = \left(\begin{array}{c} \psi _1 \\ \psi _2 \end{array} \right), for which the radial components of \displaystyle{\psi _1} and \displaystyle{\psi _2} are \frac{F(\rho )}{\rho} and \frac{G(\rho )}{\rho}, where \rho = \frac{1}{\hbar c}(m^2 c^4 - E^2 )^{\frac{1}{2}}r. We solve these radial equations in the same manner as the Schrodinger or Klein-Gordon equation: finding their asymptotic properties (e^{-\frac{\rho}{2}} for \displaystyle{\rho \rightarrow \infty} and \displaystyle{\rho ^s} for \displaystyle{\rho \rightarrow 0}, where \displaystyle{s=\sqrt{k^2 - \gamma ^2}} and \gamma = \frac{Ze^2}{\hbar c}), applying these asymptotic properties to a power series expansion, using these adjusted power series expansions in the radial equation, and finding a recursion relation between the coefficients of the power series expansion. In the end, we find that the energies are given by

E = mc^2 [1+\frac{\gamma ^2}{(s+N)^2}]^{-\frac{1}{2}}, where N is a non-negative integer (the radial quantum number).

      Expanding the root in this energy, we find

E = mc^2 - \frac{Z^2 Ry}{n^2} - \frac{mc^2 \gamma ^4}{2n^4}(\frac{n}{j+\frac{1}{2}} - \frac{3}{4}) + ..., where \displaystyle{n=N+j+\frac{1}{2}}. Thus, we find that the Dirac equation yields the rest mass energy, the non-relativistic energy levels, and the correct fine-structure correction, in contrast to the Klein-Gordon equation. (For more detail in the evaluation of this problem, see "Klein-Gordon_equation with Coulomb potential" as an example.)
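The agreement between the exact Dirac energies and this expansion can be checked numerically. The sketch below (Python; Z = 1 is an assumed choice, so \gamma equals the fine-structure constant, and mc^2 is taken as the electron rest energy in eV):

```python
import numpy as np

# Hydrogen-like check with assumed values: Z = 1, so gamma equals the
# fine-structure constant; mc2 is the electron rest energy in eV.
mc2 = 0.5109989e6
gamma = 1 / 137.035999

def E_dirac(N, j):
    k = j + 0.5                      # |k| = j + 1/2
    s = np.sqrt(k**2 - gamma**2)
    return mc2 * (1 + gamma**2 / (s + N)**2) ** -0.5

def E_expanded(N, j):
    n = N + j + 0.5                  # principal quantum number
    return (mc2 - mc2 * gamma**2 / (2 * n**2)
            - mc2 * gamma**4 / (2 * n**4) * (n / (j + 0.5) - 0.75))

for N, j in [(0, 0.5), (1, 0.5), (1, 1.5)]:
    # Exact and expanded energies agree up to terms of order gamma^6
    assert abs(E_dirac(N, j) - E_expanded(N, j)) < 10 * mc2 * gamma**6
```

For the ground state (N = 0, j = 1/2) the exact formula reduces to E = mc^2\sqrt{1-\gamma^2}, which the expansion reproduces through order \gamma^4.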


      Dirac's Hole Theory

From the solution of the Dirac equation for a free particle we find that the energy eigenvalues can be both positive and negative,

      E = \pm \sqrt{p^{2}c^{2} + m^{2}c^{4}} = \pm E_{p}

So in the presence of an external field an electron can make a transition to a negative energy state. Since there is no ground state, the electron can fall to the E= - \infty state. If this were so, we would have a continuous emission of radiation, which is not observed physically. The theory developed so far is therefore in difficulty.

To remove these difficulties Dirac introduced the ideas now known as Dirac's hole theory. According to the hole theory, the vacuum or empty state is a sea of negative energy states which are completely filled with negative energy electrons. Due to the Pauli exclusion principle, electrons cannot jump into these lower states, so no radiation is emitted.

Since the sea of negative energy states is completely filled by the negative energy electrons, the charge density would be infinite, which is not observed physically. So in his second postulate Dirac states that electrons in the negative energy states do not exhibit their charge, momentum, field, intrinsic spin, etc., but they can interact with an external field.

Now, according to the hole theory, the positive and negative energy states are placed symmetrically about E = 0, and an electron in a negative energy state will be excited to a positive energy state if it receives an energy E \geq  2mc^{2}. When an electron from the negative energy sea goes to a positive energy state it exhibits all of its properties, such as charge, momentum, field, and intrinsic spin. At the same time it creates a hole in the negative energy sea. After the transition, the charge and energy of the vacuum are

Q = Q_{vac} - ( - e) = Q_{vac} + | e |

E = E_{vac} - ( - E_{p}) = E_{vac} + E_{p}

So it is now clear that the hole has a charge of | e | and energy E_{p}.

That is, a transition of an electron out of a negative energy state creates a particle of charge | e | and energy E_{p}. This particle is known as the hole, or positron.

On the other hand, if an energy level in the negative energy sea remains empty, an electron from a positive energy state can jump into it, and both the hole and the electron disappear, with radiation emitted due to the transition from the higher to the lower energy state. Therefore pair production and annihilation can both be explained with the help of the hole theory.

A perfect vacuum is a region where all the states of positive energy are unoccupied and all those of negative energy are occupied. In a perfect vacuum, Maxwell's equation

\nabla \cdot \vec E= 0

must of course be valid. This means that the infinite distribution of negative energy electrons does not contribute to the electric field; only departures from the vacuum distribution contribute to the charge density \rho _0 in Maxwell's equation

\nabla \cdot \vec E= \rho _0

Thus there is a contribution -e for each occupied state of positive energy and a contribution +e for each unoccupied state of negative energy.

So the hole theory leads to a new fundamental symmetry of nature: to each particle there corresponds an antiparticle, and in particular the existence of electrons implies the existence of positrons.

      The Adiabatic Approximation and Berry Phase

The adiabatic approximation can be applied to systems in which the Hamiltonian evolves slowly with time. The Hamiltonian of an adiabatic system contains several degrees of freedom. The basic idea behind the adiabatic approximation is to solve the Schrodinger equation for the "fast" degree of freedom and only then allow the "slow" degree of freedom to evolve slowly. For example, imagine a molecule with a heavy nucleus and light electrons. In this system there is a "slow" degree of freedom (the nucleus) and a "fast" degree of freedom (the electrons). Imagine that the nucleus is stationary, and the electrons align themselves. Now that the electrons have aligned themselves, allow the nucleus to move very slowly - which will cause the electrons to realign. This is the adiabatic approximation.

      Adiabatic Process

An adiabatic process is characterized by a gradual change in the external conditions. In other words, let T_{i}\! be the internal characteristic time scale and T_{e}\! the external one; an adiabatic process is one for which T_{e}\gg T_{i}\!.

      The Adiabatic Theorem

      The adiabatic theorem states that if a system is initially in the nth state and if its Hamiltonian evolves slowly with time, it will be found at a later time in the nth state of the new Hamiltonian. (Proof: Messiah Q.M. (wiley NY 1962) Vol II ch. XVII)

Application (Born-Oppenheimer Approximation) [16]

      Geometric Phase (Berry Phase)

      The phase of a wave function is often considered arbitrary, and it is canceled out for most physics quantities, such as  |\Psi |^2 \!. For that reason, the time-dependent phase factor on the wave function of a particle going from the nth eigenstate of \hat{H}_0 to the nth eigenstate of \hat{H}_t was ignored. However, Berry showed that if the Hamiltonian is evolved over a closed loop the relative phase is not arbitrary, and cannot be gauged away. For more information on this discovery, see this paper. This is called the Berry Phase.

      If \psi(x, 0) = |n (0)\rangle,

\psi (x,t)\simeq e^{i\theta _n(t)}e^{i\gamma _n(t)}|n(t)\rangle, where \theta _n(t)=-\frac{1}{\hbar }\int _{t_0}^tE_n\left(t'\right)dt' is called the dynamic phase, and \gamma _n(t)\! is the geometric phase.

To see where the geometric phase comes from, let's work it out. Substituting \psi (x,t)\simeq e^{i\theta _n(t)}e^{i\gamma _n(t)}|n(t)\rangle into the Schrodinger equation,

i\hbar \left[\frac{\partial }{\partial t}|n(t)\rangle -\frac{i}{\hbar }E_n(t)|n(t)\rangle +i\frac{d\gamma _n(t)}{dt}|n(t)\rangle \right]e^{i\theta (t)}e^{i\gamma (t)}=H(t)|n(t)\rangle e^{i\theta (t)}e^{i\gamma (t)}=E_n(t)|n(t)\rangle e^{i\theta (t)}e^{i\gamma (t)}

Taking the inner product with \langle n(t)| gives \frac{d\gamma _n(t)}{dt}=i\langle n(t)|\frac{\partial }{\partial t}|n(t)\rangle.

Since \frac{\partial }{\partial t}|n(t)\rangle=\frac{\partial |n(t)\rangle}{\partial R}\frac{\partial R}{\partial t}, where R(t)\! is the slowly varying parameter on which the Hamiltonian depends,

\therefore \gamma _n(t)=i\int _{t_0}^t\langle n\left(t'\right)|\frac{\partial }{\partial t'}|n\left(t'\right)\rangle dt'=i\int _{t_0}^t\langle n\left(t'\right)|\frac{\partial }{\partial R}|n\left(t'\right)\rangle\frac{\partial R}{\partial t'}dt'=i\int _{R_i}^{R_f}\langle n|\frac{\partial }{\partial R}|n\rangle dR


      This is the expression of geometric phase.

If it is a one-dimensional problem: \gamma _n(t)=0\!, there is no geometric phase change.

If more than one dimension: \gamma _n(t)=i\int _{R_i}^{R_f}\langle n\left(t'\right)|\nabla _R|n\left(t'\right)\rangle \cdot dR.

A parameter space of more than one dimension is what allows for a geometric phase change.


Berry's phase: If the Hamiltonian returns to its original form after a time T, the net geometric phase change is:

\gamma _n=i\oint \langle n\left(t'\right)|\nabla _R|n\left(t'\right)\rangle \cdot dR


Why is the geometric phase special? Because it has real physical meaning: it can be observed in interference experiments. First, is \gamma _{n}(t) real? If it were not, then e^{i\gamma _{n}} would not be a phase factor at all but an exponential factor, and the normalization of \psi _{n} would be lost. Since the time-dependent Schrodinger equation conserves probability, it must preserve normalization. It is comforting to check this explicitly, by showing that \gamma  _{n}(t)=i\int_{R_{i}}^{R_{f}}\langle\psi _{n}\mid \nabla _{R}\psi _{n}\rangle \cdot dR

yields a real \gamma _{n}. In fact, this is very easy to do. First, note that

\nabla _{R} \langle\psi _{n}\mid \psi_{n}\rangle=0

so \langle\nabla _{R}\psi _{n}\mid \psi _{n}\rangle+\langle\psi _{n}\mid\nabla _{R}\psi _{n}\rangle=\langle\psi _{n}\mid\nabla _{R}\psi _{n}\rangle^{*}+\langle\psi _{n}\mid\nabla _{R}\psi _{n}\rangle=0

Since \langle\psi _{n}\mid\nabla _{R}\psi _{n}\rangle plus its complex conjugate is zero, it follows that \langle\psi _{n}\mid\nabla _{R}\psi _{n}\rangle is purely imaginary, and therefore \gamma _{n}(t) is real.


Is Berry's phase measurable? We are accustomed to thinking that the phase of the wave function is arbitrary: physical quantities involve \left | \psi  \right |^{2} and the phase factor cancels out. But \gamma _{n}(t) can be measured if (for example) we take a beam of particles (all in the state \psi) and split it in two, so that one beam passes through an adiabatically changing potential while the other does not. When the two beams are recombined, the total wave function has the form

\psi =\frac{1}{2}\psi _{0}+\frac{1}{2}\psi _{0}e^{i\Gamma }

where \psi _{0} is the "direct" wave function and \Gamma is the extra phase (in part dynamic and in part geometric) acquired by the beam subjected to the varying H. In this case

\left | \psi  \right |^{2} =\frac{1}{4}\left | \psi_{0}  \right |^{2}(1+e^{i\Gamma })(1+e^{-i\Gamma })=\frac{1}{2}\left | \psi_{0}  \right |^{2}(1+\cos\Gamma )=\left | \psi_{0}  \right |^{2}\cos^{2} \frac{\Gamma }{2}

so by looking for points of constructive and destructive interference (where \Gamma is an even or odd multiple of \pi, respectively), one can easily measure \Gamma.
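The interference formula for the recombined beam can be checked directly. A small sketch (Python with NumPy; the amplitude \psi_0 = 1 is an assumed value):

```python
import numpy as np

# Recombine a split beam: one half picks up an extra phase Gamma.
# psi0 = 1 is an assumed amplitude for the "direct" beam.
psi0 = 1.0
for Gamma in np.linspace(0.0, 4 * np.pi, 9):
    psi = 0.5 * psi0 + 0.5 * psi0 * np.exp(1j * Gamma)
    # |psi|^2 = |psi0|^2 cos^2(Gamma/2): constructive at even multiples
    # of pi, destructive at odd multiples
    assert np.isclose(abs(psi)**2, abs(psi0)**2 * np.cos(Gamma / 2)**2)
```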

      Looking at Berry Phase and Adiabatic Approximation by a Simple Problem

The case of an infinite square well whose right wall expands at a constant velocity v can be solved exactly. A complete set of solutions is

\psi _{n}\left ( x,t \right )=\sqrt{\frac{2}{\omega }}\sin\left ( \frac{n\pi }{\omega }x \right )e^{i\left ( mvx^{2}-2E_{n}^{i}at \right )/2\hbar \omega }

      where

\omega \left ( t \right )=a+v t

is the width of the well and

E_{n}^{i}=\frac{n^{2}\pi ^{2}\hbar ^{2}}{2ma^{2}} is the nth allowed energy of the original well (width a). The general solution is a linear combination of the \psi _{n}'s:

\psi \left ( x,t \right )=\sum_{n=1}^{\infty }c_{n}\psi _{n}\left ( x,t \right )


The coefficients c_{n} are independent of t.

a) Suppose a particle starts out (at t = 0) in the ground state of the initial well:

\psi \left ( x,0 \right )=\sqrt{\frac{2}{a }}\sin\left ( \frac{\pi }{a }x \right )

      show that the expansion coefficients can be written in the form

c_{n}=\frac{2}{\pi }\int_{0}^{\pi }e^{-i\alpha z^{2}}\sin (nz)\sin(z)dz

      where

\alpha\equiv \frac{mva }{2\pi ^{2}\hbar}

      is a dimensionless measure of the speed with which the well expands.

\psi \left ( x,0 \right )=\sum_{n} c_{n}\psi _{n}\left ( x,0 \right )=\sum_{n} c_{n}\sqrt{\frac{2}{a}}\sin\left ( \frac{n\pi }{a}x \right )e^{imvx^{2}/2\hbar a}

      multiply by

\sqrt{\frac{2}{a}}\sin\left ( \frac{{n}'\pi }{a}x \right )e^{-imvx^{2}/2\hbar a}

      and integrate:

\sqrt{\frac{2}{a}}\int_{0}^{a}\psi \left ( x,0 \right )\sin\left ( \frac{{n}'\pi }{a}x \right )e^{-imvx^{2}/2\hbar a}dx=\sum_{n} c_{n}\left [ \frac{2}{a}\int_{0}^{a}\sin\left ( \frac{n\pi }{a}x \right )\sin\left ( \frac{{n}'\pi }{a}x \right )dx \right ]=c_{{n}'}

So, in general,

c_{n}=\sqrt{\frac{2}{a}}\int_{0}^{a}e^{-imvx^{2}/2\hbar a}\sin\left ( \frac{n\pi }{a}x \right )\psi \left ( x,0 \right )dx


      in this particular case,

c_{n}=\frac{2}{a}\int_{0}^{a} e^{-imvx^{2}/2\hbar a}\sin\left ( \frac{n\pi }{a}x \right ) \sin\left ( \frac{\pi }{a}x \right )dx

      Let

z\equiv \frac{\pi }{a}x

so that

dx=\frac{a}{\pi }dz

\frac{mvx^{2}}{2\hbar a}=\frac{mv}{2\hbar a}\frac{a^{2}}{\pi ^{2}}z^{2}=\frac{mva}{2\pi ^{2}\hbar}z^{2}=\alpha z^{2}

      we will have

c_{n}=\frac{2}{\pi }\int_{0}^{\pi} e^{-i\alpha z^{2}}\sin(nz)\sin(z)dz
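These coefficients can be evaluated numerically. A sketch (Python with NumPy; the grid size and the value of \alpha are assumed for illustration) computes c_{n}=\frac{2}{\pi}\int_{0}^{\pi}e^{-i\alpha z^{2}}\sin(nz)\sin(z)dz and confirms that for \alpha \ll 1 it approaches \delta _{n1}:

```python
import numpy as np

def c_n(n, alpha, num=20001):
    # c_n = (2/pi) * integral_0^pi exp(-i alpha z^2) sin(nz) sin(z) dz,
    # evaluated with the trapezoid rule (num grid points, an assumed value)
    z = np.linspace(0.0, np.pi, num)
    f = np.exp(-1j * alpha * z**2) * np.sin(n * z) * np.sin(z)
    dz = z[1] - z[0]
    return (2 / np.pi) * np.sum(0.5 * (f[1:] + f[:-1])) * dz

# Slow expansion, alpha << 1: the particle stays in the ground state
alpha = 1e-4
assert abs(c_n(1, alpha)) > 0.999
assert all(abs(c_n(n, alpha)) < 1e-3 for n in range(2, 6))
```

For larger \alpha (a fast-moving wall) the higher coefficients are no longer negligible, which is exactly the breakdown of the adiabatic regime discussed in part (b).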



b) Suppose we allow the well to expand to twice its original width, so that the “external” time is given by

      \ \omega (T_{e})=2a

The “internal” time is the period of the time-dependent exponential factor in the (initial) ground state. Determine T_{e} and T_{i}, and show that the adiabatic regime corresponds to \alpha \ll 1, so that

e^{-i\alpha z^{2}}\cong 1 over the domain of integration. Use this to determine the expansion coefficients c_{n}. Construct \psi (x,t) and confirm that it is consistent with the adiabatic theorem.

\omega (T_{e})=2a\Rightarrow a+v T_{e} =2a \Rightarrow v T_{e}=a \Rightarrow T_{e}=\frac{a}{v}

e^{-iE_{1}t/\hbar} \Rightarrow \omega =\frac{E_{1}}{\hbar}\Rightarrow T_{i}=\frac{2\pi }{\omega }=\frac{2\pi \hbar }{E_{1} }

or,

T_{i}=\frac{2\pi \hbar }{\pi ^{2} \hbar ^{2}/2ma^{2}}=\frac{4}{\pi }\frac{ma^{2}}{\hbar}


Adiabatic\Rightarrow T_{e}\gg T_{i}\Rightarrow \frac{a}{v }\gg \frac{4ma^{2}}{\pi \hbar}\Rightarrow \frac{4}{\pi }\frac{mv a}{\hbar}\ll 1

Or,

8\pi \left ( \frac{mv a}{2\pi ^{2}\hbar} \right )=8\pi \alpha \ll 1

      so, \alpha \ll 1.

      Then,

c_{n}=\frac{2}{\pi }\int_{0}^{\pi}\sin(nz)\sin (z) dz=\delta _{n1}.

Therefore,

\psi (x,t)=\sqrt{\frac{2}{\omega }}\sin (\frac{\pi x}{\omega })e^{i(mv x^{2}-2E_{1}^{i}at)/2\hbar \omega }

which (apart from a phase factor) is the ground state of the instantaneous well, of width \omega, as required by the adiabatic theorem. (Actually the first term in the exponent, which is of order \frac{mva}{\hbar }\sim \alpha \ll 1, could be dropped in the adiabatic regime.)


      c) Show that the phase factor in ψ(x, t) can be written in the form

\theta \left ( t \right )=-\frac{1}{\hbar }\int_{0}^{t}E_{1}\left ( {t}' \right )d{t}'

      where

E_{n}\left ( t \right )=\frac{n^{2}\pi ^{2}\hbar ^{2}}{2m\omega ^{2}}

is the nth instantaneous eigenvalue at time t.


      d) By knowing that

      \gamma  _{n}(t)=i\int_{0}^{t}\langle\psi_{n}\mid \frac{\partial \psi _{n}}{\partial R}\rangle\frac{dR}{d{t}'}d{t}'=i\int_{R_{i}}^{R_{f}}\langle\psi_{n}\mid \frac{\partial \psi _{n}}{\partial R}\rangle dR

where R_{i} and R_{f} are the initial and final values of R(t). Calculate the geometric phase change when the infinite square well expands adiabatically from width \omega _{1} to width \omega _{2}.

\psi _{n}(x)=\sqrt{\frac{2}{\omega }}\sin(\frac{n\pi }{\omega }x)


      in this case \ R = \omega .

\frac{\partial\psi _{n}}{\partial R}=\sqrt{2}\left(-\frac{1}{2}\frac{1}{\omega ^{\frac{3}{2}}}\right)\sin (\frac{n\pi }{\omega }x)+\sqrt{\frac{2}{\omega }}\left(-\frac{n\pi }{\omega ^{2}}x\right)\cos (\frac{n\pi }{\omega }x);


\langle\psi _{n}\mid \frac{\partial\psi _{n}}{\partial R}\rangle=\int_{0}^{\omega } \psi _{n}\frac{\partial\psi _{n}}{\partial R}dx=-\frac{1}{\omega ^{2}}\int_{0}^{\omega }\sin^{2}(\frac{n\pi }{\omega }x)dx-\frac{2n\pi }{\omega ^{3}}\int_{0}^{\omega }x \sin(\frac{n\pi }{\omega }x)\cos(\frac{n\pi }{\omega }x)dx=-\frac{1}{\omega ^{2}}\left(\frac{\omega }{2}\right)-\frac{n\pi }{\omega ^{3}}\int_{0}^{\omega }x\sin(\frac{2n\pi }{\omega }x)dx=-\frac{1}{2\omega }-\frac{n\pi }{\omega ^{3}}\left [ \left(\frac{\omega }{2n\pi }\right)^{2} \sin(\frac{2n\pi }{\omega }x)-\frac{\omega x}{2n\pi }\cos(\frac{2n\pi }{\omega }x)\right ]_{0}^{\omega }

Evaluating the bracket between x = 0 and x = \omega, we have

-\frac{1}{2\omega }-\frac{n\pi }{\omega ^{3}}\left [ -\frac{\omega^{2} }{2n\pi }\cos (2n\pi )\right ]=-\frac{1}{2\omega }+\frac{1}{2\omega }=0. Therefore,

      \ \gamma _{n}(t)=0.

(If the eigenfunctions are real, the geometric phase always vanishes.)
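This vanishing can also be checked numerically. A sketch (Python with NumPy; the well width w = 1, the grid sizes, and the finite-difference step are assumed values) approximates \langle\psi_n\mid\partial\psi_n/\partial\omega\rangle with a central difference in \omega and a Riemann sum in x:

```python
import numpy as np

def psi(n, w, x):
    # Real instantaneous eigenfunction of a box of width w (zero outside)
    inside = (x >= 0) & (x <= w)
    return np.where(inside, np.sqrt(2.0 / w) * np.sin(n * np.pi * x / w), 0.0)

w, dw = 1.0, 1e-6
x = np.linspace(0.0, w + dw, 400001)
dx = x[1] - x[0]
for n in (1, 2, 3):
    dpsi_dw = (psi(n, w + dw, x) - psi(n, w - dw, x)) / (2 * dw)
    val = np.sum(psi(n, w, x) * dpsi_dw) * dx
    # Real eigenfunctions give <psi_n | d psi_n / d omega> = 0, so gamma_n = 0
    assert abs(val) < 1e-3
```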


      e)

If the expansion occurs at a constant rate

\frac{d\omega }{dt}=v

what is the dynamic phase change for this process?

\theta _{n}(t)=-\frac{1}{\hbar }\int_{0}^{t}\frac{n^{2}\pi ^{2}\hbar ^{2}}{2m\omega ^{2}}d{t}'=-\frac{n^{2}\pi ^{2}\hbar }{2m}\int \frac{1}{\omega ^{2}}\frac{d{t}'}{d\omega }d\omega

\theta _{n}=-\frac{n^{2}\pi ^{2}\hbar }{2mv }\int_{\omega _{1}}^{\omega _{2}}\frac{1}{\omega ^{2}}d\omega =\frac{n^{2}\pi ^{2}\hbar }{2mv }\left(\frac{1}{\omega _{2}}-\frac{1}{\omega _{1}}\right)
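This closed form can be checked against direct numerical integration of the dynamic phase. A sketch (Python with NumPy; the unit choice \hbar = m = 1 and the values of v, \omega_1, \omega_2 are assumptions for illustration):

```python
import numpy as np

# Check theta_n = (n^2 pi^2 hbar / 2 m v)(1/omega_2 - 1/omega_1) against a
# direct numerical integral of -E_n(t)/hbar.
hbar, m, v = 1.0, 1.0, 0.01
w1, w2 = 1.0, 2.0
T = (w2 - w1) / v

t = np.linspace(0.0, T, 200001)
dt = t[1] - t[0]
w = w1 + v * t                      # omega(t), expanding at constant rate v

for n in (1, 2, 3):
    E = n**2 * np.pi**2 * hbar**2 / (2 * m * w**2)       # instantaneous E_n(t)
    theta_num = -np.sum(0.5 * (E[1:] + E[:-1])) * dt / hbar  # trapezoid rule
    theta_closed = n**2 * np.pi**2 * hbar / (2 * m * v) * (1 / w2 - 1 / w1)
    assert np.isclose(theta_num, theta_closed, rtol=1e-6)
```

Since \omega_2 > \omega_1, the dynamic phase is negative, as expected for an accumulated factor e^{-iE_n t/\hbar} with positive energies.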


      f) If the well now contracts back to its original size, what is Berry’s phase for the cycle?

Zero; the geometric phase vanishes on each leg of the cycle, because the instantaneous eigenfunctions are real.

      Berry Potentials

      It is possible to construct potentials that give rise to this phase, by carefully considering a general Hamiltonian of two interacting particles, where one is much larger (and hence slower) than the other. (This can also be done for more particles, but the construction is very similar.)

       \mathcal{H} = \frac{P^2}{2m_n} + \frac{p^2}{2m_e} + V(\vec{R},\vec{r}) where \vec{R} refers to the coordinate of the larger particle, and not the center of mass.

      After some work, it can be shown that terms similar to both a vector and scalar potential can be found that explicitly create the Berry Phase.

      The final result is:

      for the Vector Potential,

       A^{(n)} = i\hbar \langle n(R)|\vec{\nabla_R}|n(R)\rangle

      and for the Scalar Potential,

       \Phi^{(n)} = \frac{\hbar^2}{2m_n}\left(\langle\vec{\nabla_R} n(R)|\vec{\nabla_R} n(R) \rangle - \langle \vec{\nabla_R} n(R)|n(R)\rangle \langle n(R)|\vec{\nabla_R}n(R)\rangle \right)

where  |n(R)\rangle is the wavefunction of the smaller particle, depending on the position of the larger. More generally, it would be the wavefunction of the object with the 'fast' degree of freedom, depending on the state of the slower degree of freedom.

      Once these are found, an effective Hamiltonian may be constructed:

       \mathcal{H} = \frac{1}{2m_n}\left(\vec{P}-\vec{A^{(n)}}\right)^2 + \Phi^{(n)}

      Time Reversal Symmetry

T-symmetry is the symmetry of physical laws under a time reversal transformation:

 T: t \mapsto -t.

      Although in restricted contexts one may find this symmetry, the observable universe itself does not show symmetry under time reversal, primarily due to the second law of thermodynamics.

Time asymmetries are generally distinguished as either those which are intrinsic to the dynamical laws of nature, or those that are due to the initial conditions of our universe. The T-asymmetry of the weak force is of the first kind, while the T-asymmetry of the second law of thermodynamics is of the second kind.

Effect of time reversal on some variables of classical physics. Even classical variables, which do not change upon time reversal, include:

      \vec x\!, the position of a particle in three-space
      \vec a\!, the acceleration of the particle
      \vec f\!, the force on the particle
      E\!, the energy of the particle
      \phi\!, the electric potential (voltage)
      \vec E\!, the electric field
      \vec D\!, the electric displacement
      \rho\!, the density of electric charge
      \vec P\!, the electric polarization
      Energy density of the electromagnetic field
      Maxwell stress tensor
      All masses, charges, coupling constants, and other physical constants, except those associated with the weak force.

Odd classical variables, which are negated by time reversal, include:

      t\!, the time when an event occurs
      \vec v\!, the velocity of a particle
      \vec p\!, the linear momentum of a particle
      \vec l\!, the angular momentum of a particle (both orbital and spin)
      \vec A\!, the electromagnetic vector potential
      \vec B\!, the magnetic induction
      \vec H\!, the magnetic field
      \vec j\!, the density of electric current
      \vec M\!, the magnetization
      \vec S\!, Poynting vector
      Power (rate of work done).

      Time Reversal in Quantum Mechanics

      This section contains a discussion of the three most important properties of time reversal in quantum mechanics; chiefly,

      1. that it must be represented as an anti-unitary operator,
      2. that it protects non-degenerate quantum states from having an electric dipole moment,
3. that it has two-dimensional representations with the property K^2 = -1.

The strangeness of this result is clear if one compares it with parity. If parity transforms a pair of quantum states into each other, then the sum and difference of these two basis states are states of good parity. Time reversal does not behave like this. It seems to violate the theorem that all irreducible representations of abelian groups are one-dimensional. The reason is that it is represented by an anti-unitary operator. It thus opens the way to spinors in quantum mechanics.

The Schrodinger equation is: i\hbar\frac{\partial }{\partial t}\psi(r,t) =H\psi(r,t)

Taking  t \rightarrow -t yields: -i\hbar\frac{\partial }{\partial t}\psi(r,-t) =H\psi(r,-t)

This is obviously not a symmetric transformation. If, in addition, we take the complex conjugate: i\hbar\frac{\partial }{\partial t}\psi ^*(r,-t)=H^*\psi ^*(r,-t)

      Whether or not this equation is symmetric depends on the form of H we are working with.


      Problem about spin and time reversal symmetry [[17]]

      Anti-Unitary Representation of Time Reversal

Eugene Wigner showed that a symmetry operation S of a Hamiltonian is represented in quantum mechanics either by a unitary operator, S = U, or an antiunitary one, S = UC, where U is unitary and C denotes complex conjugation. These are the only operations that act on Hilbert space so as to preserve the length of the projection of any one state-vector onto another state-vector.

Consider the parity operator. Acting on the position, it reverses the directions of space, so that PxP^{-1} = -x. Similarly, it reverses the direction of momentum, so that PpP^{-1} = -p, where x and p are the position and momentum operators. This preserves the canonical commutator [x, p] = i\hbar, where \hbar is the reduced Planck constant, only if P is chosen to be unitary, PiP^{-1} = i.

      On the other hand, for time reversal, the time-component of the momentum is the energy. If time reversal were implemented as a unitary operator, it would reverse the sign of the energy just as space-reversal reverses the sign of the momentum. This is not possible, because, unlike momentum, energy is always positive. Since energy in quantum mechanics is defined as the phase factor exp(iEt) that one gets when one moves forward in time, the way to reverse time while preserving the sign of the energy is to reverse the sense of "i", so that the sense of phases is reversed.

      Similarly, any operation which reverses the sense of phase, which changes the sign of i, will turn positive energies into negative energies unless it also changes the direction of time. So every antiunitary symmetry in a theory with positive energy must reverse the direction of time. The only antiunitary symmetry is time reversal, together with a unitary symmetry which does not reverse time.

Given the time reversal operator K, it does nothing to the x-operator, KxK^{-1} = x, but it reverses the direction of p, so that KpK^{-1} = -p. The canonical commutator is invariant only if K is chosen to be anti-unitary, i.e., KiK^{-1} = -i. For a particle with spin, one can use the representation

      K= e^{-i\pi S_y/\hbar} C,

where S_y is the y-component of the spin, to find that KJK^{-1} = -J.
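For spin 1/2 this representation can be checked directly: e^{-i\pi S_y/\hbar} = e^{-i\pi\sigma_y/2} = -i\sigma_y, and for an antiunitary K = UC an operator transforms as KAK^{-1} = UA^{*}U^{\dagger}. The sketch below (Python with NumPy) verifies K\sigma K^{-1} = -\sigma component by component:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Spin 1/2: exp(-i pi S_y / hbar) = exp(-i pi sigma_y / 2) = -i sigma_y
U = -1j * sy
# For an antiunitary K = U C, operators transform as K A K^{-1} = U A* U^dagger
for s in (sx, sy, sz):
    assert np.allclose(U @ s.conj() @ U.conj().T, -s)   # K S K^{-1} = -S
```

The same conjugation leaves the position operator unchanged and flips the momentum, consistent with the defining relations above.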

Theorem: Suppose the Hamiltonian is invariant under time reversal and the energy eigenket |\ n \rangle is non-degenerate; then the corresponding energy eigenfunction is real (or, more generally, a real function times a phase factor independent of x).

      Proof: To prove this, first note that

       H K|\ n \rangle=KH|\ n \rangle=E_{n}K|\ n \rangle

so |\ n \rangle and K|\ n \rangle have the same energy. The nondegeneracy assumption prompts us to conclude that |\ n \rangle and K|\ n \rangle must represent the same state; otherwise there would be two different states with the same energy E_{n}, an obvious contradiction! Recall that the wave functions for |\ n \rangle and K|\ n \rangle are \langle\ x{}'|\ n\rangle and \langle\ x{}'|\ n\rangle^{*}, respectively. They must be the same, that is,

\langle\ x{}'|\ n\rangle=\langle\ x{}'|\ n\rangle^{*}

for all practical purposes; or, more precisely, they can differ at most by a phase factor independent of x.

      The Kramers' Theorem

      To find an expression for the time reversal operator, we consider the specific Hamiltonian for an electron:

       H = \frac{p^2}{2m} + V(r) + (\frac{1}{2m^2c^2}\frac{1}{r}\frac{dV}{dr})\mathbf{L}\cdot \mathbf{S}

      The time reversal operator for a one electron system is:

 \hat{K} = i \sigma _{y} C where C\! denotes complex conjugation

To verify this, note that

      K\psi=i\sigma_y\psi^*\!

KHK^{-1}=(i\sigma_yC) H (-i\sigma _yC)=\sigma _y H^{*}\sigma _y=H

We now use time reversal to derive a degeneracy (the Kramers degeneracy).

For n electrons:

      \hat{K}=i^n\sigma _{y_1}\sigma _{y_2}...\sigma _{y_n}C

For time reversal invariant H\!: H(K\psi)=KH\psi=E(K\psi)

      So, \psi\! and K\psi\! have same energy.

Assume \psi\! and K\psi\! are linearly dependent: K\psi=a\psi\!. Then

K^2\psi=K(a\psi)=a^*K\psi=a^*a\psi=|a|^2\psi=\psi\!

since |a|=1\! for normalized states. This would require K^2=1\!.

However,  K^2=(i^n\sigma_{y_1}\sigma_{y_2}...\sigma_{y_n} C)(i^n\sigma_{y_1}\sigma_{y_2}...\sigma_{y_n} C)=(-1)^n \!

And thus we have arrived at Kramers' degeneracy theorem: for an odd number of electrons K^2=-1\!, so \psi\! and K\psi\! must be linearly independent, and the energy levels of the system are at least doubly degenerate, as long as H\! is time reversal invariant.
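The sign K^2 = (-1)^n can be verified numerically. A sketch (Python with NumPy; the random test vectors are assumed values) builds the n-electron operator and applies it twice; for n = 1 it also checks that \psi and K\psi are orthogonal, which is the content of the twofold degeneracy:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def K(psi, n):
    # n-electron time reversal operator K = i^n sigma_y1 ... sigma_yn C
    U = (1j) ** n * np.array([[1.0 + 0j]])
    for _ in range(n):
        U = np.kron(U, sy)
    return U @ psi.conj()

rng = np.random.default_rng(1)
for n in (1, 2, 3):
    psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    # K^2 = (-1)^n: -1 for an odd number of electrons
    assert np.allclose(K(K(psi, n), n), (-1) ** n * psi)

# For one electron, psi and K psi are always orthogonal: Kramers degeneracy
psi1 = rng.normal(size=2) + 1j * rng.normal(size=2)
assert np.isclose(np.vdot(psi1, K(psi1, 1)), 0)
```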

      A solved problem for time reversal symmetry

      A problem about spin and Time reversal symmetry [[18]]

      Many Particle Systems and Identical Particles

      General Remarks

      If a system in atomic physics contains a number of particles of the same kind, e.g. a number of electrons, the particles are absolutely indistinguishable one from another. No observable change is made when two of them are interchanged. This circumstance gives rise to some curious phenomena in quantum mechanics having no analogue in the classical theory, which arise from the fact that in quantum mechanics a transition may occur resulting in merely the interchange of two similar particles, which transition then could not be detected by any observational means. A satisfactory theory ought, of course, to count two observationally indistinguishable states as the same state and to deny that any transition does occur when two similar particles exchange places. We shall find that it is possible to reformulate the theory so that this is so.

Suppose we have a system containing n similar particles. We may take as our dynamical variables a set of variables X1 describing the first particle, the corresponding set X2 describing the second particle, and so on up to the set Xn describing the nth particle. We shall then have the Xr's commuting with the Xs's for r \neq s. The Hamiltonian describing the motion of the system will now be expressible as a function of X1, X2, ..., Xn. The fact that the particles are similar requires that the Hamiltonian shall be a symmetrical function of X1, X2, ..., Xn, i.e. it shall remain unchanged when the sets of variables Xr are interchanged or permuted in any way. This condition must hold, no matter what perturbations are applied to the system. In fact, any quantity of physical significance must be a symmetrical function of the X's.

      Let |a_1\rangle, |b_1\rangle, ... be kets for the first particle considered as a dynamical system by itself. There will be corresponding kets |a_2\rangle, |b_2\rangle, ... for the second particle by itself, and so on. We can get a ket for the assembly by taking the product of kets for each particle by itself, for example

      |a_1\rangle|b_2\rangle|c_3\rangle...|g_n\rangle=|a_1 b_2 c_3 ... g_n\rangle (*)

This ket corresponds to a special kind of state for the assembly, which may be described by saying that each particle is in its own state, corresponding to its own factor on the left-hand side of (*). The general ket for the assembly is of the form of a sum or integral of kets like (*), and corresponds to a state for the assembly for which one cannot say that each particle is in its own state, but only that each particle is partly in several states. If the kets |a_1\rangle, |b_1\rangle, ... are a set of basic kets for the first particle by itself, the kets |a_2\rangle, |b_2\rangle, ... will be a set of basic kets for the second particle by itself, and so on, and the kets (*) will be a set of basic kets for the assembly. We call the representation provided by such basic kets for the assembly a symmetrical representation, as it treats all the particles on the same footing.

In (*) we may interchange the kets for the first two particles and get another ket for the assembly, namely

      |b_1\rangle|a_2\rangle|c_3\rangle...|g_n\rangle=|b_1 a_2 c_3 ... g_n\rangle.

      A ket for the assembly is called symmetrical if it is unchanged by any permutation, i.e. if

      P|X\rangle=|X\rangle

      for any permutation P. It is called antisymmetrical if it is unchanged by any even permutation and has its sigh changed by any odd permutation, i.e. if

      P|X\rangle=+-|X\rangle

      the + or - sing being taken according to whether P is even or odd. The state corresponding to a symmetrical ket is called a symmetrical state, and the state corresponding to an antisymmetrical ket is called an antisymmetrical state.

      In the Schrodinger picture, the ket corresponding to a state of the assembly will vary with time according to Schrodinger's equation of motion. If it is initially symmetrical it must always remain symmetrical, since, owing to the Hamiltonian being symmetrical, there is nothing to disturb the symmetry. Similarly if the ket is initially antisymmetrical it must always remain antisymmetrical. Thus a state which is initially symmetrical always remains symmetrical and a state which is initially antisymmetrical always remains antisymmetrical. In consequence, it may be that for a particular kind of particle only symmetrical states occur in nature, or only antisymmetrical states occur in nature. If either of these possibilities held, it would lead to certain special phenomena for the particles in question.

      According to the spin statistics theorem by Pauli, particles having half-integer spins obey the Fermi-Dirac statistics, while particles having integer spins obey the Bose-Einstein statistics. Particles of the first kind are called fermions, and those of the second kind are called bosons.

To illustrate the dramatic difference between classical and quantum statistics, imagine a situation in which there are three particles and only three states a, b, and c available to them. The total number of allowed, distinct configurations for this system is

      (1) 27 if they are labeled (classically)

      (2) 10 if they are bosons

      (3) 1 if they are fermions
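These counts can be verified by brute-force enumeration. A minimal sketch, using three illustrative state labels a, b, c:

```python
from itertools import product, combinations_with_replacement, combinations

# Three particles distributed over three single-particle states 'a', 'b', 'c'.
states = "abc"

# Classical (labeled) particles: every assignment of a state to each particle
# is a distinct configuration.
classical = list(product(states, repeat=3))

# Bosons: only the occupation numbers matter; the order of labels is irrelevant.
bosons = list(combinations_with_replacement(states, 3))

# Fermions: each state holds at most one particle, so all three must be used.
fermions = list(combinations(states, 3))

print(len(classical), len(bosons), len(fermions))  # 27 10 1
```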

In building up a theory of atoms to get agreement with experiment one must assume that two electrons are never in the same state. This rule is known as Pauli's exclusion principle. It shows us that electrons are fermions. Planck's law of radiation shows us that photons are bosons, as only the Bose statistics for photons will lead to Planck's law. Similarly, for each of the other kinds of particles known in physics, there is experimental evidence to show either that they are fermions, or that they are bosons. Protons, neutrons, and positrons are fermions; alpha-particles are bosons. It appears that all particles occurring in nature are either fermions or bosons, and thus only antisymmetrical or symmetrical states for an assembly of similar particles are met with in practice. Other more complicated kinds of symmetry are possible mathematically, but do not apply to any known particles. With a theory which allows only antisymmetrical or only symmetrical states for a particular kind of particle, one cannot make a distinction between two states which differ only through a permutation of the particles, so that the transitions mentioned in the beginning disappear.

      Two Identical Particles

At this point it is second nature to write down the Hamiltonian for a system if the potential and kinetic energy of the particle is known. The Hamiltonian is then given by:  \hat H = \frac{p^2}{2m} + V(\vec r)

      The next natural step is to investigate what the Hamiltonian and the resulting wavefunctions and energies would look like for systems with more than one particle. The easiest place to start is with two identical particles.

It is straightforward to generalize the Hamiltonian to two identical particles:

 \hat H = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} + V(\vec r_1) + V(\vec r_2) + u(\vec r_1, \vec r_2),

where the potential and kinetic energy are written down for each particle individually and the additional term  u(\vec r_1, \vec r_2) = u(\vec r_2, \vec r_1) \! represents the symmetric interaction between the two particles. For simplicity we will treat the interaction potential as a central force and from this point on write it as  u(| \vec r_1 - \vec r_2|) .

The above Hamiltonian for the two identical particles is invariant under the exchange of particle labels. As a consequence, the eigenfunctions can be chosen to be either even or odd under the exchange of particle labels. Ignoring spin-orbit coupling, the general solution will therefore be of the form:

       \psi (\eta_1, \eta_2) = \phi (\vec r_1, \vec r_2) \chi (\sigma_1, \sigma_2),

where  \sigma_1 \! and  \sigma_2 \! are just spin labels, not Pauli spin matrices. If  \psi (\eta_1, \eta_2) \! is a solution, then  \psi  (\eta_2, \eta_1) \! is also a solution, and as such there are two possible states, one symmetric and the other anti-symmetric:

       \Rightarrow \frac{1}{\sqrt{2}} \left(\psi (\eta_1, \eta_2) + \psi (\eta_2, \eta_1)\right)

      and

       \Rightarrow \frac{1}{\sqrt{2}} \left(\psi (\eta_1, \eta_2) - \psi (\eta_2, \eta_1)\right)

Although both the symmetric and the anti-symmetric combinations are mathematically allowed, nature selects only one of them for a given kind of particle. If the system consists of two fermions, which have half-integer spin, then only the anti-symmetric solution appears in nature. Likewise, if the system consists of two bosons, which have integer spin, then only the symmetric solution appears in nature.
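As an illustration (the orbitals below are arbitrary example functions, not from the text), one can construct both combinations for a two-particle product state and verify their exchange symmetry numerically:

```python
import numpy as np

# Hypothetical single-particle orbitals, chosen only for illustration.
def phi_a(x):
    return np.exp(-x**2)

def phi_b(x):
    return x * np.exp(-x**2)

def psi(x1, x2):          # unsymmetrized product state
    return phi_a(x1) * phi_b(x2)

def psi_sym(x1, x2):      # bosonic (symmetric) combination
    return (psi(x1, x2) + psi(x2, x1)) / np.sqrt(2)

def psi_anti(x1, x2):     # fermionic (anti-symmetric) combination
    return (psi(x1, x2) - psi(x2, x1)) / np.sqrt(2)

x1, x2 = 0.3, -1.2
assert np.isclose(psi_sym(x1, x2), psi_sym(x2, x1))     # even under exchange
assert np.isclose(psi_anti(x1, x2), -psi_anti(x2, x1))  # odd under exchange
assert np.isclose(psi_anti(x1, x1), 0.0)                # Pauli exclusion
```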

      N Particles

      If the Hamiltonian were for a three particle system, it would be:

 \hat H = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} + \frac{p_3^2}{2m} + V(\vec r_1) + V(\vec r_2) + V(\vec r_3) + u(|\vec r_1 - \vec r_2 |) + u(| \vec r_1 - \vec r_3 |) + u(| \vec r_2 - \vec r_3 |).

      In general, the Hamiltonian for a system with N particles can be written as:

\hat H = \sum _{j=1} ^{N} \left( \frac{p^2_j}{2m} + V(\vec r_j)\right) + \frac{1}{2} \sum_{j \neq k}^N u(|\vec r_j - \vec r_k|).

In general, it is difficult to solve this problem when the interaction terms are present, but suppose that one could do so. The only physically admissible states are either symmetric or antisymmetric under exchange of any two particle labels, as before; therefore the wavefunction is given by: \psi (\eta_1, \eta_2, \eta_3, ... , \eta_N) = \phi (\vec r_1, \vec r_2, \vec r_3, ... , \vec r_N) \chi (\sigma_1, \sigma_2, \sigma_3, ..., \sigma_N) ,

      and it follows the same rules as before for bosons and fermions.

It is important to note that if a solution does not satisfy the proper symmetry, then a linear combination of all of its permutations will result in a properly symmetrized solution that is still an eigenstate.

      Constructing Admissible Eigenstates

      As stated above, if a solution does not satisfy the necessary symmetry properties, then a linear combination of the different permutations of product states (that are completely symmetric for bosons and anti-symmetric for fermions) must be made.

      For spin-less bosons the normalized wavefunction is:

      \psi_{\mbox{bosons}} (1,2,....N) = \sqrt{\frac{N_a! N_b! .... N_n!}{N!}} \sum_P P \varphi_a (1) \varphi_b(2) ....... \varphi_n (N)

where the sum is over all distinct permutations of the indices 1 through  N \!, and  N_a, N_b, ..., N_n \! are the occupation numbers of the single-particle states.

      For spin-less fermions the normalized wavefunction is:

      \psi_{\mbox{fermions}} (1,2,....N) = \frac{1}{\sqrt{N!}}\sum_P  (-1)^P P \varphi_a (1) \varphi_b(2) ....... \varphi_n (N)

      where (-1)^P \! is  +1 \! if a permutation can be decomposed into an even number of two particle exchanges and  -1 \! for odd.

Another way of writing the sum that forms the antisymmetric wavefunction is through the use of the Slater determinant.
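A minimal numerical sketch of the Slater determinant construction, assuming three example orbitals (any linearly independent set would do):

```python
import math
import numpy as np

# Hypothetical example orbitals (Gaussians times polynomials), for illustration.
orbitals = [lambda x: np.exp(-x**2),
            lambda x: x * np.exp(-x**2),
            lambda x: (2 * x**2 - 1) * np.exp(-x**2)]

def slater(xs):
    """Antisymmetric N-fermion wavefunction: det[phi_a(x_i)] / sqrt(N!)."""
    mat = np.array([[orb(x) for orb in orbitals] for x in xs])
    return np.linalg.det(mat) / math.sqrt(math.factorial(len(xs)))

x = [0.1, 0.7, -0.4]
assert np.isclose(slater([0.7, 0.1, -0.4]), -slater(x))  # exchange flips sign
assert np.isclose(slater([0.1, 0.1, -0.4]), 0.0)         # Pauli exclusion
```

Swapping two particle coordinates swaps two rows of the matrix, which flips the sign of the determinant; two particles at the same coordinate give two equal rows and hence a vanishing wavefunction.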

      Second Quantization

Consider now a wavefunction pertaining to a many-particle system, \psi (\eta_1, \eta_2, \eta_3, ... , \eta_N) \!, which is considered to be a field variable. For the many-particle system, this field variable must also be quantized, by a process known as second quantization.

      In order to perform this quantization of the field variable, we must construct special raising and lowering operators, associated with the individual energy levels of the system,  \hat{a}_j ^{\dagger} \! and  \hat{a}_j \!, which add and subtract particles from the  j^{th} \! energy level, respectively. In the presence of spin, an additional subscript is added to separate the creation and annihilation operators for each case of spin, so that each operator only acts on particles with the same spin attributed to said operator. In the simple, although rather non-physical, case of spinless particles, this extra factor can be ignored for simplicity in examining how the operators work on the quantized field:

       \hat{a}_j ^{\dagger} |0\rangle = |1\rangle \!

       \hat{a}_j |1\rangle = |0\rangle \!

       \hat{a}_j |0\rangle = 0 \!

      For the case of fermions, an additional constraint on the operators is placed due to the exclusion principle:  \hat{a} ^{\dagger} |1\rangle = 0 \!

      Given the two classes of particles, fermions and bosons, two sets of relations result to relate the creation and annihilation operators.

      For the case of bosons, the operators obey a commutator relationship of the form:

       [\hat{a}_i, \hat{a}_j ^{\dagger}] = \delta_{ij};  [\hat{a}_i, \hat{a}_j] = [\hat{a}_i ^{\dagger}, \hat{a}_j ^{\dagger}] = 0 \!

      The state of the system,  |n_0, n_1, .., n_N\rangle \!, where  n_j \! refers to the number of particles in the  j^{th} \! state, is therefore of the form:

 |n_0, n_1, .., n_N\rangle = {\frac{(\hat{a}_0 ^{\dagger})^{n_0}}{\sqrt{n_0 !}}}{\frac{(\hat{a}_1 ^{\dagger})^{n_1}}{\sqrt{n_1 !}}}...{\frac{(\hat{a}_N ^{\dagger})^{n_N}}{\sqrt{n_N !}}} |0\rangle \!
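The bosonic relations above can be illustrated in a truncated single-mode Fock space (a sketch; the truncation dimension is an arbitrary choice):

```python
import numpy as np

# Single-mode bosonic operators in a truncated Fock space.
# a has matrix elements <n-1|a|n> = sqrt(n).
dim = 8
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
adag = a.conj().T                              # creation operator

n_op = adag @ a                                # number operator a+ a
assert np.allclose(np.diag(n_op), np.arange(dim))  # eigenvalues 0, 1, ..., dim-1

# [a, a+] = 1 holds exactly except in the last row/column, an artifact of
# truncating the infinite-dimensional Fock space.
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(dim - 1))
```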

      Fermions, however, obey anti-commutator relationships, of the following form:

       \{ \hat{a}_i, \hat{a}_j ^{\dagger} \} = \delta_{ij};  \{ \hat{a}_i, \hat{a}_j \} = \{ \hat{a}_i ^{\dagger}, \hat{a}_j ^{\dagger} \} = 0 \!

      For this type of system, the state  |n_0, n_1, .., n_N\rangle \! can be written as:

       |n_0, n_1, .., n_N\rangle = (\hat{a}_0 ^{\dagger})^{n_0}(\hat{a}_1 ^{\dagger})^{n_1}...(\hat{a}_N ^{\dagger})^{n_N} |0\rangle \!
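The fermionic anticommutation relations can be illustrated with explicit matrices; the sketch below uses the Jordan-Wigner construction for two modes (an illustrative choice, not part of the text):

```python
import numpy as np

# Two-mode fermionic operators via Jordan-Wigner:
# a_1 = s- (x) I,  a_2 = s_z (x) s-, with s- the 2x2 lowering matrix.
s_minus = np.array([[0., 1.], [0., 0.]])
s_z = np.diag([1., -1.])
I2 = np.eye(2)

a1 = np.kron(s_minus, I2)
a2 = np.kron(s_z, s_minus)

def anticomm(A, B):
    return A @ B + B @ A

assert np.allclose(anticomm(a1, a1.conj().T), np.eye(4))  # {a1, a1+} = 1
assert np.allclose(anticomm(a2, a2.conj().T), np.eye(4))  # {a2, a2+} = 1
assert np.allclose(anticomm(a1, a2.conj().T), 0)          # {a1, a2+} = 0
assert np.allclose(anticomm(a1, a2), 0)                   # {a1, a2} = 0
assert np.allclose(a1 @ a1, 0)                            # (a1)^2 = 0: Pauli
```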

      Furthermore, for both classes of particles, we can create an operator that, upon acting on the total state of the system, returns the number of particles in a given  n^{th} \! state (for fermions this will obviously be 0 or 1). This operator is of the form  \hat{a}_n ^{\dagger} \hat{a}_n \!. Therefore,  \hat{a}_j ^{\dagger} \hat{a}_j |n_0, n_1, .., n_N\rangle = n_j |n_0, n_1, .., n_N\rangle \!. From this it is easy to obtain an operator,  \hat{N} \! that returns the total number of particles in the system:

 \hat{N} = \sum_{n=0}^{\infty} \hat{a}_n ^{\dagger} \hat{a}_n \!

From this operator, the average particle number and its fluctuations can be calculated by the following:

 \langle \Psi | \hat{N} | \Psi\rangle = \langle \hat{N} \rangle \!

 \langle \Psi | (\hat{N} - \langle \hat{N} \rangle)^2 | \Psi\rangle = \langle \hat{N}^2 \rangle - {\langle \hat{N} \rangle}^2 \!

To continue the analysis of these second quantization operators, let's consider the projection of  \hat{a}_j ^{\dagger} \! on the state  |0\rangle \!, that is to say, define a function  \phi_j(\mathbf r) \! such that  \langle \mathbf{r} | \hat{a}_j ^{\dagger} |0\rangle = \phi_j(\mathbf r) \!

      Rearranging this yields an expression for the position state of a particular particle, weighted over the different energy levels by the function  \phi_j(\mathbf r) \!

       |\mathbf{r}\rangle = \sum_{j=0}^{\infty} \phi_j^*(\mathbf r)\hat{a}_j ^{\dagger} |0\rangle \!

The operator appearing in this expansion is called the field creation operator:

       \Psi ^{\dagger}(\mathbf r) \equiv \sum_{i=0}^{\infty} \phi_i^*(\mathbf r) \hat{a}_i ^{\dagger} \!

      Therefore, the position state of a system containing  n\! particles can be expressed as the following:

       |\mathbf{r}_1, \mathbf{r}_2, .., \mathbf{r}_n\rangle = {\frac{1}{\sqrt{n!}}}\Psi ^{\dagger}(\mathbf{r}_n)...\Psi ^{\dagger}(\mathbf{r}_2)\Psi ^{\dagger}(\mathbf{r}_1)|0\rangle\!

Here it is important to note that the permutation relationships differ for fermions and bosons, that is to say the following:

      For bosons:  |\mathbf{r}_1, \mathbf{r}_2, .., \mathbf{r}_n\rangle = |\mathbf{r}_2, \mathbf{r}_1, .., \mathbf{r}_n\rangle \!

      Whereas for fermions:  |\mathbf{r}_1, \mathbf{r}_2, .., \mathbf{r}_n\rangle = -|\mathbf{r}_2, \mathbf{r}_1, .., \mathbf{r}_n\rangle \!

Manipulation of the field creation and annihilation operators yields results congruent with those associated with the particle creation and annihilation operators discussed above, recognizing that  \sum_j \phi_j(\mathbf r)\phi_j^*(\mathbf{r'}) = \delta(\mathbf{r - r'}) \!. Therefore, we obtain the following expressions:

      For fermions,  \{ \Psi(\mathbf r), \Psi^{\dagger}(\mathbf r') \} = \delta(\mathbf{r - r'}); \{ \Psi(\mathbf r), \Psi(\mathbf r') \} = \{\Psi^{\dagger}(\mathbf r), \Psi^{\dagger}(\mathbf r') \}= 0 \!

      And for bosons,  [ \Psi(\mathbf r), \Psi^{\dagger}(\mathbf r') ] = \delta(\mathbf{r - r'}); [ \Psi(\mathbf r), \Psi(\mathbf r') ] = [\Psi^{\dagger}(\mathbf r), \Psi^{\dagger}(\mathbf r') ]= 0 \!

      From these relations, we show that adding a particle at position  \mathbf{r} \! is expressed by the automatically correctly symmetrized expression:

       \Psi^{\dagger}(\mathbf{r}) |\mathbf{r}_1, \mathbf{r}_2, .., \mathbf{r}_n\rangle = \sqrt{n+1}|\mathbf{r}_1, \mathbf{r}_2, .., \mathbf{r}_n, \mathbf{r}\rangle \!

The annihilation operator also results in a correctly symmetrized expression; however, it gives a nonzero result only if  \mathbf{r} \in \{ \mathbf{r}_1, \mathbf{r}_2, .., \mathbf{r}_n\} \!. The result is a correctly symmetrized combination of  n - 1\! particle states:

       \Psi(\mathbf{r})|\mathbf{r}_1, .., \mathbf{r}_n\rangle = \frac{1}{\sqrt{n}}(\delta(\mathbf{r} - \mathbf{r}_n)|\mathbf{r}_1, .., \mathbf{r}_{n-1}\rangle \pm \delta(\mathbf{r} - \mathbf{r}_{n-1})|\mathbf{r}_1, ..,\mathbf{r}_{n-2}, \mathbf{r}_n\rangle + \delta(\mathbf{r} - \mathbf{r}_{n-2})|\mathbf{r}_1, ..,\mathbf{r}_{n-3}, \mathbf{r}_{n-1}, \mathbf{r}_n\rangle \pm ... (\pm 1)^{n-1}\delta(\mathbf{r} - \mathbf{r}_1)|\mathbf{r}_2, ..,\mathbf{r}_n\rangle) \!

Finally, we will consider certain preexisting operators, expressed in the new second quantization format. Consider first the density operator  \hat{\rho}(\mathbf r) \!, which was previously  \sum_{j=1}^N \delta(\mathbf{r} - \mathbf{r}_j) \! in the first quantization. Recalling our earlier commutator relations, this operator can be rewritten as:

 \hat{\rho}(\mathbf r) = \Psi^{\dagger}(\mathbf{r})\Psi(\mathbf{r}) \!

In order to obtain the total kinetic energy and interaction potential operators, one must first express the annihilation and creation operators in momentum space to simplify the mathematics. Therefore, define  \Psi(\mathbf{r}) = \sum_{\mathbf{p}} \frac{e^{i\mathbf{p}\cdot\mathbf{r}/\hbar}}{\sqrt{V}}\hat{a}_{\mathbf{p}} \!. Inverting this yields:

 \hat{a}_{\mathbf{p}} = \frac{1}{\sqrt{V}} \int d^3r\, e^{-i \mathbf{p}\cdot\mathbf{r}/\hbar}\Psi(\mathbf{r}) \!

Then the kinetic energy operator  \hat{T} = \sum_{\mathbf{p}} \frac{{\mathbf{p}}^2}{2m} \hat{a}^{\dagger}_{\mathbf{p}} \hat{a}_{\mathbf{p}} \! can be rewritten as the following:

       \hat{T} = \frac{{\hbar}^2}{2m}\int d^3 \mathbf{r}( \mathbf{\nabla}\Psi^{\dagger}(\mathbf{r}))\cdot(\mathbf{\nabla}\Psi(\mathbf{r})) \!

      The interaction potential, taking care to include the  \frac{1}{2}\! term for symmetry, is thereby expressed as:

       \hat{V} = \frac{1}{2}\int d^3 \mathbf{r} d^3 \mathbf{r'} V(\mathbf{r} - \mathbf {r'}) \Psi^{\dagger}(\mathbf{r}) \Psi^{\dagger}(\mathbf{r'}) \Psi(\mathbf{r'}) \Psi (\mathbf{r}) \!

Note that this entire analysis has been done for the spinless particle case; however, the addition of spin only requires that we sum over the possible spins of the system, namely:

       \hat{T} = \frac{{\hbar}^2}{2m}\sum_s\int d^3 \mathbf{r}( \mathbf{\nabla}\Psi_s^{\dagger}(\mathbf{r}))\cdot(\mathbf{\nabla}\Psi_s(\mathbf{r})) \!

       \hat{V} = \frac{1}{2}\sum_{s, s'}\int d^3 \mathbf{r} d^3 \mathbf{r'} V(\mathbf{r} - \mathbf {r'}) \Psi_s^{\dagger}(\mathbf{r}) \Psi_{s'}^{\dagger}(\mathbf{r'}) \Psi_{s'}(\mathbf{r'}) \Psi_s (\mathbf{r}) \!

      Thus finally the total Hamiltonian for the system can be written as:

       \hat{H} = \hat{T} + \hat{V} = \frac{{\hbar}^2}{2m}\sum_s\int d^3 \mathbf{r}( \mathbf{\nabla}\Psi_s^{\dagger}(\mathbf{r}))\cdot(\mathbf{\nabla}\Psi_s(\mathbf{r})) + \frac{1}{2}\sum_{s, s'}\int d^3 \mathbf{r} d^3 \mathbf{r'} V(\mathbf{r} - \mathbf {r'}) \Psi_s^{\dagger}(\mathbf{r}) \Psi_{s'}^{\dagger}(\mathbf{r'}) \Psi_{s'}(\mathbf{r'}) \Psi_s (\mathbf{r}) \!

      Klein-Gordon Equation

      Starting from the relativistic connection between energy and momentum:

      E^2=\bold p^2c^2+m^2c^4

Substituting E \rightarrow i\hbar \frac{\partial}{\partial t} and \bold p \rightarrow -i\hbar \nabla, we get the Klein-Gordon equation for free particles as follows:

      -\hbar^2 \frac{\partial ^2\psi(\bold r, t)}{\partial t^2}=(-\hbar^2c^2\nabla^2+m^2c^4)\psi(\bold r, t)\qquad \qquad \qquad \qquad \qquad (9.1.1)

The Klein-Gordon equation can also be written as the following: (\square-K^2)\psi(\bold r, t)=0\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\;\;\;(9.1.2)

where \square=\nabla^2-\frac{1}{c^2}\frac{\partial^2}{\partial t^2} is the d'Alembert operator and K=\frac{mc}{\hbar}.

      Equation #9.1.2 looks like a classical wave equation with an extra term \ K^2.

      Potentials couple to the Klein-Gordon equation in a manner analogous to classical four-vectors. A good example of this is the potential four-vector:

       \Phi ^{ \mu } = (\phi ; \bold{A} )

      Coupling this to the momentum four-vector,

       p ^{ \mu } = (\frac{E}{c} ; \bold{p} )

      where E \rightarrow i\hbar \frac{\partial}{\partial t} and \bold{p} \rightarrow -i\hbar \bold{\nabla} in the quantum limit, we find the conjugate momentum four vector:

      P^{\mu} = p^{\mu}-\frac{e}{c}\Phi^{\mu} = ( \frac{E}{c}-\frac{e}{c}\phi ; \bold{p}-\frac{e}{c}\bold{A} )

Squaring the conjugate momentum four-vector and multiplying by c^2, we obtain the Klein-Gordon equation in an electromagnetic field by moving to the quantum limit and acting on \displaystyle{\psi}:

      c^2 P^{\mu}P_{\mu} = m^2 c^4 = (E-e\phi)^2-(c\bold{p}-e\bold{A})^2 \Rightarrow\left[ i\hbar \frac {\partial}{\partial t}-e\phi(\bold r, t) \right] ^2\psi(\bold r, t)=\left( \left[ -i\hbar\nabla-\frac{e}{c}\bold A(\bold r, t)\right] ^2c^2+m^2c^4\right) \psi(\bold r, t)\qquad \qquad (9.1.3)

The Klein-Gordon equation is second order in time. Therefore, to see how the states of a system evolve in time we need to know both \psi(\bold r, t) and \frac{\partial\psi(\bold r, t)}{\partial t} at a certain time, while in nonrelativistic quantum mechanics we only need \psi(\bold r, t).

      Also because the Klein-Gordon equation is second order in time, it has the solutions \psi(\bold r, t)=e^{i(\bold p \bold r - Et)/\hbar} with either sign of energy E=\pm c\sqrt{\bold p^2+m^2c^2}. The negative energy solution of Klein-Gordon equation has a strange property that the energy decreases as the magnitude of the momentum increases. We will see that the negative energy solutions of Klein-Gordon equation describe antiparticles, while the positive energy solutions describe particles.

Below is an image depicting the two different energy levels (positive and negative).

      Image:KleinGordonEnergy.JPG
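One can verify symbolically that plane waves with either sign of the energy solve the free Klein-Gordon equation; a sketch in one spatial dimension:

```python
import sympy as sp

# Check that exp(i(px - Et)/hbar) solves the free Klein-Gordon equation
# for both signs of E = ±sqrt(p^2 c^2 + m^2 c^4).
x, t = sp.symbols('x t', real=True)
p, m, c, hbar = sp.symbols('p m c hbar', positive=True)

for sign in (+1, -1):
    E = sign * sp.sqrt(p**2 * c**2 + m**2 * c**4)
    psi = sp.exp(sp.I * (p * x - E * t) / hbar)
    # Klein-Gordon residual: -hbar^2 psi_tt + hbar^2 c^2 psi_xx - m^2 c^4 psi
    residual = (-hbar**2 * sp.diff(psi, t, 2)
                + hbar**2 * c**2 * sp.diff(psi, x, 2)
                - m**2 * c**4 * psi)
    assert sp.simplify(residual) == 0
```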

      Continuity Equation

Multiplying (9.1.1) by \psi^{*} from the left, we get: -\frac{\hbar^2}{c^2}\psi^{*} \frac{\partial ^2\psi(\bold r, t)}{\partial t^2}=\psi^{*}(-\hbar^2\nabla^2+m^2c^2)\psi(\bold r, t)\qquad \qquad \qquad (9.2.1)

Multiplying the complex conjugate of (9.1.1) by \psi from the left, we get: -\frac{\hbar^2}{c^2}\psi \frac{\partial ^2\psi^{*}(\bold r, t)}{\partial t^2}=\psi(-\hbar^2\nabla^2+m^2c^2)\psi^{*}(\bold r, t)\qquad \qquad \qquad (9.2.2)

      Subtracting #9.2.2 from #9.2.1, we get:

      -\frac{\hbar^2}{c^2}\left( \psi^{*} \frac{\partial ^2\psi}{\partial t^2}-\psi \frac{\partial ^2\psi^{*}}{\partial t^2}\right) =\hbar^2\left( \psi\nabla^2\psi^{*}-\psi^{*}\nabla^2\psi\right)

\Rightarrow -\frac{\hbar^2}{c^2}\frac{\partial}{\partial t}\left( \psi^{*}\frac{\partial\psi}{\partial t}-\psi\frac{\partial\psi^{*}}{\partial t}\right) +\hbar^2\nabla\cdot\left( \psi^{*}\nabla\psi-\psi\nabla\psi^{*}\right) =0

\Rightarrow \frac {\partial}{\partial t}\left[ \frac {i\hbar}{2mc^2}\left( \psi^{*}\frac{\partial\psi}{\partial t}-\psi\frac{\partial\psi^{*}}{\partial t}\right) \right] +\nabla\cdot \left[ \frac {\hbar}{2mi}\left( \psi^{*}\nabla\psi-\psi\nabla\psi^{*}\right) \right] =0

This gives us the continuity equation: \frac {\partial \rho}{\partial t}+\nabla\cdot \bold j = 0 \qquad \qquad \qquad  \qquad \qquad \qquad \qquad \qquad \qquad \qquad(9.2.3)

      where \rho = \frac {i\hbar}{2mc^2}\left( \psi^{*}\frac{\partial\psi}{\partial t}-\psi\frac{\partial\psi^{*}}{\partial t}\right) \qquad \qquad \qquad  \qquad \qquad \; \; \; \;(9.2.4)

      \bold j = \frac {\hbar}{2mi}(\psi^{*}\nabla\psi-\psi\nabla\psi^{*})\qquad \qquad \qquad \qquad \qquad \qquad  \qquad \; \;(9.2.5)

From #9.2.3 we can see that the integral of the density \rho\! over all space is conserved. However, \rho\! is not positive definite. Therefore, we can neither interpret \rho\! as the particle probability density nor can we interpret \bold j as the particle current. The appropriate interpretations are charge density for e\rho(\bold r,t) and electric current for e\bold j(\bold r, t), since charge density and electric current can be either positive or negative.
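The continuity equation #9.2.3 can be checked symbolically for a superposition of free plane-wave solutions, with \rho and \bold j as defined in (9.2.4) and (9.2.5); a one-dimensional sketch:

```python
import sympy as sp

# Superposition of two plane-wave solutions of the free Klein-Gordon equation.
x, t = sp.symbols('x t', real=True)
m, c, hbar, p1, p2 = sp.symbols('m c hbar p1 p2', positive=True)

def plane_wave(p):
    E = sp.sqrt(p**2 * c**2 + m**2 * c**4)
    return sp.exp(sp.I * (p * x - E * t) / hbar)

psi = plane_wave(p1) + plane_wave(p2)
psis = sp.conjugate(psi)

# Density (9.2.4) and current (9.2.5), restricted to one spatial dimension.
rho = sp.I * hbar / (2 * m * c**2) * (psis * sp.diff(psi, t) - psi * sp.diff(psis, t))
j = hbar / (2 * m * sp.I) * (psis * sp.diff(psi, x) - psi * sp.diff(psis, x))

# d(rho)/dt + d(j)/dx = 0
assert sp.simplify(sp.diff(rho, t) + sp.diff(j, x)).equals(0)
```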

      Using #9.1.3 and the same procedure as before, it can be shown that the continuity equation still holds in an electromagnetic field with

      \bold{j} (\bold{r} , t) = \frac{1}{2m} [\psi^{*}(-i\hbar \bold{\nabla} - \frac{e}{c} \bold{A} )\psi - \psi(-i\hbar \bold{\nabla} + \frac{e}{c} \bold{A} )\psi^{*}]\qquad \ \ (9.2.6)

      and

\rho (\bold{r} , t) = \frac{1}{2mc^2} [\psi^{*}(i\hbar \frac{\partial}{\partial t} - e\phi )\psi - \psi(i\hbar \frac{\partial}{\partial t} + e\phi )\psi^{*}]\qquad \ \ \ (9.2.7)

      Nonrelativistic Limit

In the nonrelativistic limit, when v \ll c or p \ll mc, we have:

      E=[(pc)^2+(mc^2)^2]^{1/2}=mc^2\left[ 1+(\frac{p}{mc})^2\right] ^{1/2}\approx mc^2\left( 1+\frac{1}{2}\left( \frac{p}{mc}\right) ^2\right) =mc^2+\frac{p^2}{2m}

So, the relativistic energy differs from the classical energy by the rest energy mc^2\!. Therefore, we can expect that if we write the solution of the Klein-Gordon equation as \psi e^{-imc^2t/\hbar} and substitute it into the Klein-Gordon equation, we will get the Schrodinger equation for \psi\!.

      Indeed, doing so we get:

      -\hbar ^2\frac {\partial ^2 \psi}{\partial t^2}e^{-imc^2t/\hbar}+2\frac {\partial \psi}{\partial t}imc^2 \hbar e^{-imc^2t/\hbar}+\psi m^2c^4 e^{-imc^2t/\hbar}=-\hbar ^2 c^2\nabla ^2 \psi e^{-imc^2t/\hbar}+m^2c^4 \psi e^{-imc^2t/\hbar}

      \Rightarrow -\frac {\hbar ^2}{2mc^2} \frac{\partial ^2 \psi}{\partial t^2}+i\hbar \frac {\partial \psi}{\partial t}=-\frac {\hbar ^2}{2m}\nabla ^2 \psi

      In the nonrelativistic limit the first term is considered negligibly small. As a result, for free particles in this limit we get back the Schrodinger equation:

      i\hbar \frac {\partial \psi}{\partial t}=-\frac {\hbar ^2}{2m} \nabla ^2 \psi
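The nonrelativistic expansion of the energy used above can be checked numerically; a minimal sketch in units where m = c = 1:

```python
import math

# For p << mc, the relativistic energy is approximated by mc^2 + p^2/2m.
m, c = 1.0, 1.0
p = 1e-3 * m * c                      # a nonrelativistic momentum

E_exact = math.sqrt((p * c)**2 + (m * c**2)**2)
E_approx = m * c**2 + p**2 / (2 * m)

# The error is of order p^4/(m^3 c^2), far below the p^2/2m term itself.
assert abs(E_exact - E_approx) < 1e-12 * E_exact
```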


      Negative Energy States and Antiparticles

The solutions to the Klein-Gordon equation allow for both positive and negative energies. The positive energies are no cause for concern, but the negative energies seem counter-intuitive classically, as they allow for negative probability densities, spontaneous transitions from the positive energy states to the negative energy states, and particles propagating in both directions in time. We cannot simply "drop" the negative energy states, as they form part of the complete set of solutions. One way to interpret these negative energies was proposed by Stückelberg and Feynman: the negative energy solutions describe positive energy antiparticles, which are conjugates to particles and have been experimentally observed.

By examining the Klein-Gordon equation with negative energy states, we can deduce some properties of antiparticles. Consider a free particle at rest (\bold{p} = 0). The solution to the Klein-Gordon equation in a rest frame K is then

      \psi (\bold{r} , t) =  e^{-\frac{i}{\hbar }mc^2 t}

      If we now follow this same procedure for a negative energy state, the free particle wavefunction will be

      \psi (\bold{r} , t) = e^{\frac{i}{\hbar }mc^2 t}

      Notice that the wavefunction for the negative energy state is the complex conjugate of the wavefunction for the positive energy state. If we take the complex conjugate of the Klein-Gordon equation in an electromagnetic field, we find that

      \frac{1}{c^2}[i\hbar \frac{\partial }{\partial t} + e\phi ]^2 \psi ^{*} (\bold{r} ,t) = ([\frac{\hbar }{i}\bold{\nabla } + \frac{e}{c} \bold{A}]^2 + m^2 c^2 ) \psi ^{*} (\bold{r},t)

      This tells us that \psi ^{*} (\bold{r} , t) is a wavefunction satisfying the Klein-Gordon equation for a particle of mass m and charge -e, which is the antiparticle of a particle of mass m and charge e with wavefunction \psi (\bold{r} , t).

      Now, the density and current in a frame K' moving with velocity -\bold{v} relative to frame K for the particles will be \rho (\bold{r \prime}, t\prime) = \frac{E_p}{mc^2 } and \bold{j} (\bold{r \prime}, t\prime) = \frac{\bold{p}}{m} and for the antiparticles the density and current will be \rho (\bold{r \prime}, t\prime) = \frac{-E_p}{mc^2 } and \bold{j} (\bold{r \prime}, t\prime) = \frac{-\bold{p}}{m}, where E_p = \frac{mc^2 }{\sqrt{1-v^2 / c^2 }} and \bold{p} = \frac{m\bold{v}}{\sqrt{1-v^2 / c^2 }}.

      This tells us that an antiparticle moving in one direction has a current moving in the opposite direction. Thus, a particle current in one direction is equivalent to an antiparticle current in the opposite direction. This applies for propagation through time, as well: a negative energy solution describes equivalently a particle traveling backward in time and an antiparticle traveling forward in time. For charged particles, this is easy to reconcile since the charge of the particle and its antiparticle are opposite; thus, we expect the charge density and current to be opposite.

For neutrally charged particles, however, the difference between a particle and its antiparticle is a bit more subtle. By observation, we see that there are a few possibilities. First, the particle and its antiparticle may have a "charge" that is not electromagnetic, e.g. the \displaystyle{K^0} and \overline{K^0}, which have a strangeness of 1 and -1, respectively. It may also be the case that the neutral particle is in fact its own antiparticle, as with the \pi^0, in which case the wavefunction is always real, thus making the current and density zero.

      Klein-Gordon Equation with Coulomb Potential

      The Coulomb potential is exactly solvable in the Klein-Gordon equation. The potential four-vector of the Coulomb potential is

      \Phi ^{\mu} = ( \frac{-Ze}{r} ; 0 )

      which makes the Klein-Gordon equation

      [i\hbar \frac{\partial}{\partial t} + \frac{Ze^2}{r}]^2 \psi = -\hbar ^2 c^2 \nabla ^2 \psi + m^2 c^4 \psi

      Let \psi (\bold{r} , t) = R(\bold{r})Y_{\ell m}(\theta , \phi )e^{-\frac{i}{\hbar}Et}

      \Rightarrow [-\frac{1}{r^2}\frac{\partial}{\partial r} (r^2 \frac{\partial}{\partial r} ) + \frac{\ell (\ell +1)}{r^2}]R(\bold{r}) = \frac{(E+\frac{Ze^2}{r})^2 - m^2 c^4 }{\hbar ^2 c^2} R(\bold{r})

      Now, letting \gamma = \frac{Ze^2}{\hbar c}, \alpha ^2 = \frac{4(m^2 c^4 - E^2 )}{\hbar ^2 c^2}, \lambda = \frac{2E\gamma}{\hbar c \alpha}, and \displaystyle{\rho = \alpha r},

      \Rightarrow (\frac{1}{\rho ^2}\frac{\partial}{\partial \rho}(\rho ^2 \frac{\partial}{\partial \rho}) + [\frac{\lambda}{\rho} - \frac{1}{4} - \frac{\ell (\ell +1) -\gamma ^2}{\rho ^2}])R(\bold{r}) = 0

      Now, let's suppose that \displaystyle{R=f(\rho )g(\rho )v(\rho )}, where \displaystyle{f(\rho )} and \displaystyle{g(\rho )} are the asymptotically dominant properties of R at small and large \displaystyle{\rho}.

      As \rho \rightarrow 0, the \frac{1}{\rho ^2} term dominates, and R \rightarrow f(\rho )=\rho ^s, where s(s+1) = \ell (\ell +1) - \gamma ^2.

As \rho \rightarrow \infty, the constant term -\frac{1}{4} dominates, and R \rightarrow g(\rho )=e^{-\frac{\rho}{2}}.

      \Rightarrow R=\rho ^s e^{-\frac{\rho}{2}} v(\rho )

Now suppose that v(\rho ) = \sum_{k=0}^{\infty} a_k \rho ^k. Putting this solution of R into the radial equation and simplifying, we find that \displaystyle{a_k} follows a recursive relation which requires the series to terminate at \displaystyle{\lambda = N+s+1}, where N is a non-negative integer, and that v(\rho ) = _{1}F_{1}(-N; \ 2(s+1); \ \rho ). Thus,

      \psi (\bold{r} , t) = \rho ^s e^{-\frac{\rho}{2}} \ _{1}F_{1}(-N; \ 2(s+1); \ \rho )Y_{\ell m}(\theta , \phi ) e^{-\frac{i}{\hbar}Et}\qquad \ \ (9.5.1)

      and E_{n'}=mc^2 (1+\frac{\gamma ^2}{n'^2})^{-\frac{1}{2}}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(9.5.2)

      where n' = N+s+1. If we expand the square root in Eq. (9.5.2), we find

      E_n = mc^2 [1- \frac{\gamma ^2}{2n^2} - \frac{\gamma ^4}{2n^4}(\frac{n}{\ell +\frac{1}{2}} - \frac{3}{4}) +...] = mc^2 - \frac{Ry}{n^2} - \frac{\gamma ^4 mc^2}{2n^4}(\frac{n}{\ell +\frac{1}{2}} - \frac{3}{4}) +..., where \displaystyle{n=N+\ell +1}, as in the non-relativistic case.
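      This expansion can be checked numerically against the exact expression; the sketch below (units mc^2 = 1, hypothetical helper names) compares Eq. (9.5.2) with its expansion through order \gamma ^4 for the hydrogenic case Z=1:

```python
import math

def E_exact(gamma, N, l):
    """Exact Klein-Gordon Coulomb level, Eq. (9.5.2), in units mc^2 = 1."""
    s = -0.5 + math.sqrt((l + 0.5) ** 2 - gamma ** 2)
    n_prime = N + s + 1
    return (1.0 + (gamma / n_prime) ** 2) ** -0.5

def E_expanded(gamma, n, l):
    """Expansion through order gamma^4, with n = N + l + 1."""
    return (1.0 - gamma ** 2 / (2 * n ** 2)
            - gamma ** 4 / (2 * n ** 4) * (n / (l + 0.5) - 0.75))

gamma = 1.0 / 137.035999   # Z = 1
N, l = 0, 0                # ground state, n = 1
n = N + l + 1
diff = E_exact(gamma, N, l) - E_expanded(gamma, n, l)
print(diff)                # residual is of order gamma^6
```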

      Analyzing the energy levels, we see that the first term is the rest mass energy of the particle, the second term gives the non-relativistic energy levels of the Coulomb potential, and the third term is the fine structure correction. Note, however, that the Klein-Gordon equation does not incorporate spin. The observed fine structure correction for a particle with spin in a Coulomb potential is \frac{\gamma ^4 mc^2}{2n^4}(\frac{n}{j+\frac{1}{2}} - \frac{3}{4}), where j is the total angular momentum quantum number. The Klein-Gordon equation can never reproduce this correction because spin has no equivalent four-vector, and so cannot be fully coupled into the Klein-Gordon equation. In order to accommodate spin in any potential, we must use something else. For spin-1/2 particles, that something else is the Dirac equation.

      Dirac Equation Problem with an infinite square well potential

      Consider the Dirac equation in one space (x) and one time (t) dimension. In units for which c=\hbar=1, it can be written as i\frac{\partial}{\partial t}\psi(x,t)=(-i\alpha\frac{\partial}{\partial x}+m\beta)\psi(x,t)

      where α and β are matrices.

      (a) Find the conditions that α and β must satisfy if each component of ψ(x,t) is required to satisfy the Klein-Gordon equation:

      (\frac{\partial^{2}}{\partial t^{2}}-\frac{\partial^{2}}{\partial x^{2}}+m^{2})\psi(x,t)=0

      (b) Show that the 2\times 2 matrices \alpha=\sigma_{y}=\left(\begin{array}{cc}
0 & -i\\
i & 0\end{array}\right) and \beta=\sigma_{z}=\left(\begin{array}{cc}
1 & 0\\
0 & -1\end{array}\right) satisfy the conditions found in part (a).

      (c) Look for stationary solutions to the Dirac equation of the form \psi(x,t)=\left(\begin{array}{c}
F(x)\\
G(x)\end{array}\right)e^{-iEt} and find the differential equations satisfied by F(x) and G(x).

      (d) Apply these results to find the bound state energy levels of a relativistic particle of mass m confined to a one-dimensional box of length L with impenetrable walls.

      (e) Compare the energy levels of part (d) with the bound state energy levels of a non-relativistic particle of mass m confined to a one-dimensional box of length L with impenetrable walls.

      Solution:

      (a)[-i\alpha\frac{\partial}{\partial x}+m\beta-i\frac{\partial}{\partial t}]\psi(x,t)=0

      [-i\alpha\frac{\partial}{\partial x}+m\beta+i\frac{\partial}{\partial t}][-i\alpha\frac{\partial}{\partial x}+m\beta-i\frac{\partial}{\partial t}]\psi(x,t)=0

      [\frac{\partial^{2}}{\partial t^{2}}-\alpha^{2}\frac{\partial^{2}}{\partial x^{2}}+m^{2}\beta^{2}-i\alpha\frac{\partial}{\partial x}m\beta-m\beta i\alpha\frac{\partial}{\partial x}]\psi(x,t)=0

      This reduces to the Klein-Gordon equation if \alpha^{2} = \beta^{2} = 1 and

      \alpha\beta + \beta\alpha = 0, in which case

      [\frac{\partial^{2}}{\partial t^{2}}-\frac{\partial^{2}}{\partial x^{2}}+m^{2}]\psi(x,t)=0

      (b) The 2\times 2 Pauli matrices \sigma_{y} and \sigma_{z} satisfy the above equations,

      \sigma_{y}^{2}=\sigma_{z}^{2}=1

      \sigma_{y}\sigma_{z} + \sigma_{z}\sigma_{y} = 0
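      These algebraic conditions are straightforward to verify directly; a short numerical check using numpy (illustrative only):

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2)

# alpha^2 = beta^2 = 1
assert np.allclose(sigma_y @ sigma_y, identity)
assert np.allclose(sigma_z @ sigma_z, identity)

# anticommutator {alpha, beta} = 0
assert np.allclose(sigma_y @ sigma_z + sigma_z @ sigma_y, np.zeros((2, 2)))
print("conditions of part (a) are satisfied")
```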

      (c)

      \psi(x,t)=\left(\begin{array}{c}
F(x)\\
G(x)\end{array}\right)e^{-iEt}; insert into Dirac equation:

      E\left(\begin{array}{c}
F(x)\\
G(x)\end{array}\right)=\left(\begin{array}{cc}
m & -\frac{\partial}{\partial x}\\
\frac{\partial}{\partial x} & -m\end{array}\right)\left(\begin{array}{c}
F(x)\\
G(x)\end{array}\right) or

      (E-m)F(x)=-\frac{\partial}{\partial x}G(x)

      (E+m)G(x)=\frac{\partial}{\partial x}F(x)

      In the presence of a potential it reduces to:

      (E-V-m)F(x)=-\frac{\partial}{\partial x}G(x)

      (E-V+m)G(x)=\frac{\partial}{\partial x}F(x)

      (d) For a one-dimensional box of length L with impenetrable walls, consider 0<x<L, where V=0. From the second equation:

      G(x)=\frac{1}{E+m}\frac{\partial}{\partial x}F(x); insert into first equation

      (E-m)F(x)=-\frac{1}{E+m}\frac{\partial^{2}}{\partial x^{2}}F(x) or

      \frac{d^{2}F(x)}{dx^{2}}=-(E^{2}-m^{2})F(x)

      Proposed solution: F(x) = A\sin(kx) + B\cos(kx)

      This is a solution provided that k^{2} = E^{2} - m^{2}
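      As a quick consistency check (a sketch; the helper name check and the sample values are illustrative), F(x)=\sin(kx) together with G(x)=\frac{k}{E+m}\cos(kx) from part (c) satisfies both first-order equations precisely when k^{2} = E^{2} - m^{2}:

```python
import math

def check(k, m, x):
    """Verify (E-m)F = -dG/dx and (E+m)G = dF/dx for
    F = sin(kx), G = k cos(kx)/(E+m), with E = sqrt(k^2 + m^2)."""
    E = math.sqrt(k * k + m * m)
    F = math.sin(k * x)
    dF = k * math.cos(k * x)
    G = k * math.cos(k * x) / (E + m)
    dG = -k * k * math.sin(k * x) / (E + m)
    return abs((E - m) * F + dG) < 1e-12 and abs((E + m) * G - dF) < 1e-12

assert all(check(2.0, 1.0, x) for x in (0.1, 0.5, 1.3))
print("coupled equations satisfied")
```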

      Boundary conditions:

      F(0)=0\ \implies B=0

      F(L)=0\ \implies k_{n}=n\frac{\pi}{L},\ n=1,2,3,\cdots

      Hence, E_{n}=\sqrt{k_{n}^{2}+m^{2}}=\sqrt{(\frac{\pi n}{L})^{2}+m^{2}}

      (e) Compare with nonrelativistic solution:

      E_{n}^{NR}=\frac{k_{n}^{2}}{2m}=(\frac{\pi n}{L})^{2}\frac{1}{2m}

      E_{n}=m\sqrt{1+(\frac{\pi n}{L})^{2}\frac{1}{m^{2}}}\simeq m+\frac{1}{2m}(\frac{\pi n}{L})^{2}+m\cdot O(\frac{\pi n}{mL})^{4}
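      Numerically, the relativistic levels approach m + E_{n}^{NR} when \pi n/(mL)\ll 1. A minimal sketch (units \hbar=c=1; the sample values of m and L are arbitrary):

```python
import math

def E_rel(n, m, L):
    """Relativistic box level, E_n = sqrt(k_n^2 + m^2), with k_n = n pi / L."""
    k = n * math.pi / L
    return math.sqrt(k * k + m * m)

def E_nonrel(n, m, L):
    """Non-relativistic box level, E_n^NR = k_n^2 / (2m)."""
    k = n * math.pi / L
    return k * k / (2 * m)

m, L = 1.0, 100.0          # sample values: pi/(m L) ~ 0.03, weakly relativistic
for n in (1, 2, 3):
    gap = E_rel(n, m, L) - m - E_nonrel(n, m, L)
    # leading discrepancy is the O((pi n/(m L))^4) term in the expansion
    print(n, gap)
```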

      == Relativistic Quantum Mechanics and the Dirac Equation ==
