''' Welcome to the Quantum Mechanics B PHY5646 Spring 2009'''
[[Image:SchrodEq.png|thumb|The Schrödinger equation: the most fundamental equation of quantum mechanics, which describes the rule according to which a state <math>|\Psi\rangle</math> evolves in time.
]]
This is the second semester of a two-semester graduate level sequence, the first being [[phy5645|PHY5645 Quantum A]]. Its goal is to explain the concepts and mathematical methods of Quantum Mechanics, and to prepare a student to solve quantum mechanics problems arising in different physical applications. The emphasis of the courses is placed equally on a conceptual grasp of the subject and on problem solving. This sequence of courses builds the foundation for more advanced courses and graduate research in experimental or theoretical physics.
The key component of the course is the collaborative student contribution to the course Wiki-textbook. Each
team of students (see [[Phy5646 wiki-groups]]) is responsible for BOTH writing the assigned chapter AND editing chapters of others.
This course's website can be found [http://www.physics.fsu.edu/courses/Spring10/PHY5646/default.htm here].
'''Team assignments:''' [[Phy5646_Spring10_teams|Spring 2010 student teams]]
----
'''Outline of the course:'''
== <span style="color:#2B65EC"> Stationary state perturbation theory in Quantum Mechanics </span> ==
Very often, quantum mechanical problems cannot be solved exactly. An approximate technique can be very useful since it gives us quantitative insight into a larger class of problems which do not admit exact solutions. One technique is the WKB approximation, which holds in the asymptotic limit <math> \hbar\rightarrow 0 </math>.
Perturbation theory is another very useful approximate technique; it attempts to find corrections to exact solutions in powers of the terms in the Hamiltonian which render the problem insolvable.  The basic idea of perturbation theory rests on a notion of continuity: one must be able to write the given Hamiltonian as a solvable part plus very small additional terms that represent the insolvable parts.  In the case of non-degenerate perturbation theory the following assumption must hold: both the energy and the wavefunctions of the full Hamiltonian have analytic expansions in powers of a real parameter <math>\lambda</math> (ensuring no jump discontinuities as <math>\lambda \rightarrow 0</math>), where the perturbing term is taken to be <math>\lambda\mathcal{H}'</math>. The quantity <math>\lambda</math>, which is taken to satisfy <math>0 < \lambda < 1 </math>, has no physical significance, and is merely used as a way to keep track of order.
The Hamiltonian is taken to have the following structure:
<math>\mathcal{H}=\mathcal{H}_0+\lambda\mathcal{H}'</math>
where  <math>\mathcal{H}_0</math> is exactly solvable and <math>\mathcal{H}'</math> makes it insolvable by analytical methods. Therefore the eigenvalue problem becomes:
<math>(\mathcal{H}_0+\lambda\mathcal{H}')|\psi_n\rangle = E_n|\psi_n\rangle </math>
At the end of the calculation we set <math>\lambda=1</math>.
It is important to note that perturbation theory tends to yield fairly accurate energies, but usually yields very poor wavefunctions.
=== ''' <span style="color:#2B65EC"> Rayleigh-Schrödinger Perturbation Theory </span> ''' ===
We begin with an unperturbed problem, whose solution is known exactly.  That is, for the unperturbed Hamiltonian, <math>\mathcal{H}_0</math>, we have eigenstates, <math> |n\rangle </math>, and eigenenergies, <math> \epsilon_n </math>, that are known solutions to the Schrödinger equation:
<math>\mathcal{H}_0 |n\rangle  = \epsilon_n |n\rangle \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (1.1.1) </math>
To find the solution to the perturbed hamiltonian, <math>\mathcal{H}</math>, we first consider an auxiliary problem, parameterized by <math>\mathcal \lambda</math>:
<math> \mathcal{H} = \mathcal{H}_0 + \lambda \mathcal{H}' \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (1.1.2) </math>
The only reason for doing this is that we can now, via the parameter <math>\lambda</math>, expand the solution in powers of the component of the Hamiltonian <math>\mathcal{H}'</math>, which is presumed to be relatively small in comparison with <math>\mathcal{H}_0</math>. We do not know a priori that such an expansion will work for a given problem, and choosing a suitable perturbation will likely require some physical insight or guidance from a numerical solution.
We attempt to find eigenstates <math>|N(\lambda)\rangle</math> and eigenvalues <math> E_n</math> of the Hermitian operator <math>\mathcal{H}</math>, and assume that they can be expanded in a power series of <math>\lambda</math>:
<math>
E_n(\lambda) = E_n^{(0)} + \lambda E_n^{(1)}  + \cdots + \lambda^j E_n^{(j)} + \cdots </math>
<math>|N(\lambda)\rangle = |\Psi_n^{(0)}\rangle + \lambda|\Psi_n^{(1)}\rangle + \lambda^2 |\Psi_n^{(2)}\rangle + \cdots + \lambda^j |\Psi_n^{(j)}\rangle + \cdots \qquad\qquad\qquad\qquad\qquad\;\;\;\;\;\; (1.1.3)</math>
where <math>|\Psi_n^{(j)}\rangle</math> signifies the <math>j</math>-th order correction to the unperturbed eigenstate <math>|n\rangle</math> upon perturbation.  Then we must have
<math> \mathcal{H} |N(\lambda)\rangle = E_n(\lambda) |N(\lambda)\rangle \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;\;\;  (1.1.4)</math>
which, upon expansion, becomes:
<span id="1.1.5"></span>
<math> (\mathcal{H}_0 + \lambda \mathcal{H}')\left(\sum_{j=0}^{\infty}\lambda^j |\Psi_n^{(j)}\rangle \right) = \left(\sum_{l=0}^{\infty} \lambda^l E_n^{(l)}\right)\left(\sum_{j=0}^{\infty}\lambda^j |\Psi_n^{(j)}\rangle \right) \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\; (1.1.5)</math>
In order for this method to be useful, the perturbed energies must vary continuously with <math>\lambda</math>.  Knowing this we can see several things about our as yet undetermined perturbed energies and eigenstates.  For one, as  <math>\lambda \rightarrow 0, |N(\lambda)\rangle \rightarrow |\Psi_n^{(0)}\rangle = |n\rangle</math> and <math> E_n(\lambda) \rightarrow E_n^{(0)} = \epsilon_n</math> for some unperturbed state <math>|n\rangle</math>.
For convenience, assume that the unperturbed states are already normalized: <math> \langle n | n \rangle = 1</math>,
and choose normalization such that the exact states satisfy <math>\langle n|N(\lambda)\rangle=1</math>.
Then in general <math>|N\rangle</math> will not be normalized, and we must normalize it after we have found the states (see [[Phy5646#Renormalization]]). 
Thus, we have:
<math>\langle n|N(\lambda)\rangle= 1 = \langle n |\Psi_n^{(0)}\rangle + \lambda \langle n |\Psi_n^{(1)}\rangle + \lambda^2 \langle n |\Psi_n^{(2)}\rangle + ... \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;\;\;(1.1.6)</math>
Coefficients of the powers of <math>\lambda</math> must match, so,
<math> \langle n | \Psi_n^{(i)} \rangle = 0, i = 1, 2, 3, ... \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;(1.1.7)</math>
This shows that, if we start with the unperturbed state <math> |n\rangle </math>, then upon perturbation we add to this initial state a set of correction states, <math> |\Psi_n^{(1)}\rangle, |\Psi_n^{(2)}\rangle, \dots </math>, which are all orthogonal to the original state; the unperturbed states thus become mixed together.
Equating coefficients in the above expanded form of the perturbed Hamiltonian (eq. [[#1.1.5]]) provides the corrected eigenvalues at whichever order of <math>\lambda</math> we want.  The first few are as follows:
'''0th Order Energy'''
<math>\lambda^0  \rightarrow  E_n^{(0)} = \epsilon_n \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (1.1.8)</math> <br />
which we already had before.
'''1st Order Energy Corrections'''
<math>\lambda^1  \rightarrow  \mathcal{H}_0 |\Psi_n^{(1)}\rangle + \mathcal{H}' |\Psi_n^{(0)}\rangle = E_n^{(1)} |\Psi_n^{(0)}\rangle + E_n^{(0)} |\Psi_n^{(1)}\rangle </math>,
taking the scalar product of this result with <math>\langle n|</math>, and using our previous results, we get:
<math>E_n^{(1)} = \langle n|\mathcal{H}'|n\rangle \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;(1.1.9)</math>
'''2nd Order Energy Corrections'''
Taking the terms in eq. [[#1.1.5]] that are second order in <math>\lambda</math> and projecting them onto <math>\langle n|</math> provides us with <math>E_n</math> up to second order:
<math>E_n = \epsilon_n + \lambda\langle n|\mathcal{H}'|n\rangle + \lambda^2 \sum_{m \not= n} \frac{|\langle n|\mathcal{H}'|m\rangle|^2}{\epsilon_n - \epsilon_m}+ O(\lambda^3)</math>
One interesting thing to note is that <math>|V_{mn}|^2 = |\langle m |V| n\rangle|^2</math> is non-negative.  Therefore, since <math>\epsilon_0 - \epsilon_m < 0</math> for the ground state, the second order energy correction always lowers the ground state energy.
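As a concrete numerical check, the sketch below compares the energies through second order with exact diagonalization. The 3-level Hamiltonian and the coupling strength are made-up illustrative numbers, not taken from the text:

```python
import numpy as np

# Hypothetical 3-level system: H0 is diagonal (the exactly solvable part)
# and V = lambda*H' is a small Hermitian perturbation (illustrative numbers).
eps = np.array([0.0, 1.0, 2.5])          # unperturbed energies epsilon_n
V = 0.05 * np.array([[0.0, 1.0, 0.5],
                     [1.0, 0.0, 1.0],
                     [0.5, 1.0, 0.0]])

def E_second_order(n):
    """epsilon_n + <n|V|n> + sum_{m != n} |<m|V|n>|^2 / (eps_n - eps_m)."""
    first = V[n, n]
    second = sum(abs(V[m, n]) ** 2 / (eps[n] - eps[m])
                 for m in range(len(eps)) if m != n)
    return eps[n] + first + second

exact = np.linalg.eigvalsh(np.diag(eps) + V)   # exact eigenvalues, ascending
for n in range(3):
    print(n, E_second_order(n), exact[n])      # agree to O(V^3)
```

For the ground state the second-order term is negative, as argued above, so the perturbative estimate sits below <math>\epsilon_0</math>.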
'''kth order Energy Corrections'''
In general,
<math> E_n^{(k)} = \langle n | \mathcal{H}' | \Psi_n^{(k - 1)} \rangle \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;(1.1.10)</math>
This result provides us with a ''recursive'' relation for the eigenenergies of the perturbed state, so that we have access to the eigenenergies for a state of arbitrary order in <math>\lambda</math>.
What about the eigenstates?
Express the perturbed states in terms of the unperturbed states:
<math>|\Psi_n^{(k)}\rangle = \sum_{m \not= n}|m\rangle\langle m|\Psi_n^{(k)}\rangle</math>
Go back to equation [[#1.1.5]] and take the scalar product from the left with <math>\langle m |</math>
and then, compare orders of <math>\lambda</math> to find:
'''1st order Eigenkets'''
<math>\langle m|\Psi_n^{(1)}\rangle = \frac{\langle m |\mathcal{H}'| n\rangle}{\epsilon_n - \epsilon_m}</math>
The first order contribution is then the sum of this equation over all <math>m \neq n</math>; adding the zeroth order term, we get the eigenstates of the perturbed Hamiltonian to first order in <math>\lambda</math>:
<math>|N\rangle = |n\rangle + \lambda\sum_{m \not= n} |m\rangle \frac{\langle m |\mathcal{H}'| n\rangle}{\epsilon_n - \epsilon_m} + \cdots\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;(1.1.11)
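The first-order state can likewise be checked numerically. The sketch below (again with an invented, purely illustrative 3-level Hamiltonian) builds the first-order corrected ground state and compares it with the exact eigenvector:

```python
import numpy as np

# Illustrative 3-level system: build the first-order corrected state
# |N> = |n> + sum_{m != n} |m> <m|V|n> / (eps_n - eps_m).
eps = np.array([0.0, 1.0, 2.5])
V = 0.05 * np.array([[0.0, 1.0, 0.5],
                     [1.0, 0.0, 1.0],
                     [0.5, 1.0, 0.0]])
n = 0
N_state = np.zeros(3)
N_state[n] = 1.0                                  # zeroth order: |n>
for m in range(3):
    if m != n:
        N_state[m] = V[m, n] / (eps[n] - eps[m])  # first-order mixing

exact_vec = np.linalg.eigh(np.diag(eps) + V)[1][:, n]   # exact ground state
overlap = abs(N_state @ exact_vec) / np.linalg.norm(N_state)
print(overlap)   # close to 1: the first-order state nearly matches
```

The deviation of the overlap from unity is of higher order in the perturbation, consistent with the expansion above.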
Going beyond order one in <math>\lambda</math> gets increasingly messy, but can be done by the same procedure.
===Renormalization===
Earlier we assumed that <math>\langle n|N(\lambda)\rangle=1</math>, which means that our <math>|N(\lambda)\rangle</math> states are not themselves normalized.  To reconcile this we introduce the normalized perturbed eigenstates, denoted <math>|\bar{N}\rangle</math>.  These are related to the <math>|N(\lambda)\rangle</math> by:
<math> |\bar{N}\rangle = \frac{|N\rangle}{\sqrt{\langle N|N\rangle}} =z^{1/2}|N\rangle \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(1.1.12)</math>
where
<math> z(\lambda ) = \frac{1}{\langle N(\lambda )|N (\lambda ) \rangle} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;(1.1.13)</math>
Thus <math>z</math> gives us a measure of how close the perturbed state is to the original state.
To second order in <math>\lambda</math>
<math>\frac{1}{z (\lambda )} = \langle N(\lambda )|N(\lambda )\rangle = ( \langle n| + \lambda \langle\Psi_n^{(1)}| + \cdots)( |n\rangle + \lambda |\Psi_n^{(1)}\rangle + \cdots)</math>
<math>z(\lambda ) = \frac{1}{1 + \lambda^2\sum_{ n \not= m}\frac{|\langle m|V|n\rangle|^2}{(\epsilon_n - \epsilon_m)^2} + ...} = 1 - \lambda^2\sum_{ n \not= m}\frac{|\langle m|V|n\rangle|^2}{(\epsilon_n - \epsilon_m)^2} + ... \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;\;\;\;(1.1.14)</math>
[[Image:Renorm.jpg]]
where we use a Taylor expansion to arrive at the final result (valid provided that <math>\lambda^2\sum_{ m \not= n}\frac{|\langle m|V|n\rangle|^2}{(\epsilon_n - \epsilon_m)^2} < 1</math>).
Then, interestingly, we can show that <math>z</math> is related to the energies by employing equation 1.1.10:
<math>z(\lambda ) = \frac{\partial E_n}{\partial \epsilon_n}\Big|_{\langle m|V|n\rangle} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;(1.1.15)</math>
where the derivative is taken with respect to <math>\epsilon_n</math>, while holding <math>\langle m|V|n\rangle</math> constant.  Using Brillouin-Wigner perturbation theory (see next section) it can be shown that this relation holds exactly, without approximation.
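This relation can be verified numerically. In the sketch below (with an invented 3-level Hamiltonian, values purely illustrative), <math>z</math> is computed from the exact normalized eigenvector, and the derivative of the exact energy with respect to <math>\epsilon_n</math> is taken by finite differences while the matrix elements of <math>V</math> are held fixed:

```python
import numpy as np

# Illustrative 3-level system: H = diag(eps) + V.
eps = np.array([0.0, 1.0, 2.5])
V = 0.2 * np.array([[0.0, 1.0, 0.5],
                    [1.0, 0.0, 1.0],
                    [0.5, 1.0, 0.0]])
H = np.diag(eps) + V
n = 0

# With <n|N> = 1, z = 1/<N|N> equals |<n|N_bar>|^2 for the normalized state.
vec = np.linalg.eigh(H)[1][:, n]
z = abs(vec[n]) ** 2

# dE_n/d(eps_n) at fixed <m|V|n>, by central finite differences.
d = 1e-6
Hp, Hm = H.copy(), H.copy()
Hp[n, n] += d
Hm[n, n] -= d
dE = (np.linalg.eigvalsh(Hp)[n] - np.linalg.eigvalsh(Hm)[n]) / (2 * d)
print(z, dE)   # the two quantities agree
```

The agreement here is not accidental: by the Hellmann-Feynman theorem, <math>\partial E_n/\partial \epsilon_n = \langle \bar{N}|n\rangle\langle n|\bar{N}\rangle</math>, which is exactly <math>z</math>.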
Problem examples of non-degenerate perturbation theory :
-[http://wiki.physics.fsu.edu/wiki/index.php/Phy5646_PerturbationExample1 Problem 1]: demonstrating how linear algebra can be used to solve for the exact eigenstates, exact eigenvalues, first and second order corrections to the eigenvalues, and first order corrections to the eigenstates of a given Hamiltonian
-[http://wiki.physics.fsu.edu/wiki/index.php/Sample_problem_2 Problem 2]
-[http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/Non-degenerate_Perturbation_Theory_-_Problem_3 Problem 3]
-[http://wiki.physics.fsu.edu/wiki/index.php/Non_degenerate_perturbation_example Problem 4]
=== ''' <span style="color:#2B65EC"> Brillouin-Wigner Perturbation Theory </span> ''' ===
Brillouin-Wigner perturbation theory is an alternative perturbation method based on treating the right hand side of
<math> (E_n - H_0)|N\rangle = H'|N\rangle </math>
as a known quantity. This method is not strictly an expansion in <math>\lambda</math>.
Using a basic formula derived from the Schrödinger equation, one can find an approximation to any desired power of <math> \lambda </math> using an iterative process. This theory is less widely used than the RS theory. At first order the two theories are equivalent. However, the BW theory extends more easily to higher order and avoids the need for separate treatment of non-degenerate and degenerate levels. In addition, if we have a good approximation for the value of <math> E_n </math>, the BW series should converge more rapidly than the RS series.
Starting with the Schrödinger equation:
<math>
\begin{align}
({\mathcal H}_o+\lambda {\mathcal H}')|N\rangle &= E_n|N\rangle \\
\lambda {\mathcal H}'|N\rangle &= (E_n-{\mathcal H}_o)|N\rangle \\
\langle n|(\lambda {\mathcal H}'|N\rangle) &= \langle n|(E_n-{\mathcal H}_o)|N\rangle \\
\lambda \langle n|{\mathcal H}'|N\rangle &= (E_n-\epsilon_n)\langle n|N\rangle \\
\end{align}
</math>
If we choose to normalize <math> \langle n|N \rangle = 1 </math>, then so far we have:
<span id="1.2.1"></span>
<math> (E_n-\epsilon_n) = \lambda\langle n|{\mathcal H}'|N\rangle \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\; (1.2.1)</math>
which is still an exact expression (no approximations have been made yet).  The wavefunction we are interested in, <math> |N\rangle </math>, can be rewritten as a summation over the eigenstates of the unperturbed Hamiltonian <math> {\mathcal H}_o </math>:
<span id="1.2.2"></span>
<math>
\begin{align}
|N\rangle &= \sum_m|m\rangle\langle m|N\rangle\\
&= |n\rangle\langle n|N\rangle + \sum_{m\neq n}|m\rangle\langle m|N\rangle\\
&= |n\rangle + \sum_{m\neq n}|m\rangle\frac{\lambda\langle m|{\mathcal H}'|N\rangle}{(E_n-\epsilon_m)}\\
\end{align} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\;\;\; (1.2.2)</math>
The last step has been obtained by using eq [[#1.2.1]].  So now we have a recursive relationship for both <math> E_n </math> and <math> |N\rangle </math>
<math> E_n = \epsilon_n+\lambda\langle n|{\mathcal H}'|N\rangle </math> where <math> |N\rangle </math> can be written recursively to any order of <math> \lambda </math> desired,
<math> |N\rangle = |n\rangle+\sum_{m\neq n}|m\rangle\frac{\lambda\langle m|{\mathcal H}'|N\rangle}{(E_n-\epsilon_m)} </math> where <math> E_n </math> can be written recursively to any order of <math> \lambda </math> desired.
For example, the expression for <math> |N\rangle </math> to a third order in <math> \lambda </math> would be:
<math>
\begin{align}
|N\rangle &= |n\rangle + \lambda\sum_{m\neq n}|m\rangle\frac{\langle m|{\mathcal H}'}{(E_n-\epsilon_m)}\left(|n\rangle + \lambda\sum_{j\neq n}|j\rangle\frac{\langle j|{\mathcal H}'}{(E_n-\epsilon_j)}\left(|n\rangle + \lambda\sum_{k\neq n}|k\rangle\frac{\langle k|{\mathcal H}'|n\rangle}{(E_n-\epsilon_k)}\right)\right)\\
&= |n\rangle + \lambda\sum_{m\neq n}|m\rangle\frac{\langle m|{\mathcal H}'|n\rangle}{(E_n-\epsilon_m)} + \lambda^2\sum_{m,j\neq n}|m\rangle\frac{\langle m|{\mathcal H}'|j\rangle\langle j|{\mathcal H}'|n\rangle}{(E_n-\epsilon_m)(E_n-\epsilon_j)} + \lambda^3\sum_{m,j,k\neq n}|m\rangle\frac{\langle m|{\mathcal H}'|j\rangle\langle j|{\mathcal H}'|k\rangle\langle k|{\mathcal H}'|n\rangle}{(E_n-\epsilon_m)(E_n-\epsilon_j)(E_n-\epsilon_k)}\\
\end{align}
</math>,
where we have used the completeness relation <math>\sum_{j} |j \rangle \langle j | = 1</math>.
Note that we have chosen <math>\langle n|N \rangle = 1</math>, i.e. the correction is orthogonal to the unperturbed state. That is why, at this point, <math>|N \rangle</math> is not normalized. The normalized wave function can be written as
<math> |\bar{N} (\lambda) \rangle = \frac{|N(\lambda) \rangle}{\sqrt{\langle N (\lambda) | N (\lambda) \rangle}}  \equiv \sqrt{Z (\lambda)} | N(\lambda) \rangle </math>
Interestingly, the normalization constant <math>Z</math> turns out to be exactly equal to the derivative of the exact energy with respect to the unperturbed energy, i.e.
<math> \frac{\partial E_{n}(\lambda)}{\partial \epsilon_{n}}  = Z</math>
[[Normalization constant|The calculation for the normalization constant can be found through this link]].
We can expand [[#1.2.1]] with the help of [[#1.2.2]]; this gives:
<math>E_n = \epsilon_n + \lambda\langle n|H'|n\rangle + \lambda^2\sum_{m\neq n} \frac{|\langle m|H'|n\rangle|^2}{E_n - \epsilon_m} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\; (1.2.3)</math>.
Notice that if we replaced <math>E_n</math> with <math>\epsilon_n</math> we would recover the Rayleigh-Schrödinger perturbation theory.  By itself [[#1.2.2]] provides a transcendental equation for <math>E_n</math>, since <math>E_n</math> appears in the denominator of the right hand side.  If we have some idea of the value of a particular <math>E_n</math>, then we can use this as a numerical method to iteratively obtain better and better values for <math>E_n</math>.
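Such an iteration can be sketched as follows. The 3-level Hamiltonian below is an illustrative invention; starting from the unperturbed energy, each pass re-inserts the current estimate of <math>E_n</math> into the right hand side of (1.2.3):

```python
import numpy as np

eps = np.array([0.0, 1.0, 2.5])           # unperturbed energies
V = 0.2 * np.array([[0.0, 1.0, 0.5],      # lambda*H', illustrative numbers
                    [1.0, 0.0, 1.0],
                    [0.5, 1.0, 0.0]])
n = 0

E = eps[n]                                 # initial guess for E_n
for _ in range(100):                       # iterate to self-consistency
    E = eps[n] + V[n, n] + sum(abs(V[m, n]) ** 2 / (E - eps[m])
                               for m in range(3) if m != n)

# Rayleigh-Schrodinger second order uses eps_n in the denominators instead.
E_rs = eps[n] + V[n, n] + sum(abs(V[m, n]) ** 2 / (eps[n] - eps[m])
                              for m in range(3) if m != n)
exact = np.linalg.eigvalsh(np.diag(eps) + V)[n]
print(E, E_rs, exact)
```

For this example the converged BW estimate lands closer to the exact eigenvalue than the RS second-order result, illustrating the faster convergence claimed above.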
=== ''' <span style="color:#2B65EC"> Degenerate Perturbation Theory </span> ''' ===
Degenerate perturbation theory is an extension of standard perturbation theory which allows us to handle systems where one or more states of the system have non-distinct energies. If more than one eigenstate of the Hamiltonian <math> {\mathcal H}_o </math> has the same energy value, the problem is said to be degenerate.  Normal perturbation theory fails in these cases because the denominators in the expressions for the first-order corrected wave function and for the second-order corrected energy become zero: Rayleigh-Schrödinger PT includes terms like <math> 1/(\epsilon_n-\epsilon_m) </math>.
Instead of using the degenerate eigenstates themselves, we start with linear combinations of them chosen so that regular perturbation theory may be applied.  In other words, the first, and only, extra step of degenerate perturbation theory is to find the linear combinations that diagonalize the perturbation within the set of degenerate states, and then to proceed as usual with non-degenerate perturbation theory.
<math> \{|n_a\rangle,|n_b\rangle,|n_c\rangle,\dots\} </math> <math> \longrightarrow </math> <math> \{|n_{\alpha}\rangle,|n_{\beta}\rangle,|n_{\gamma}\rangle,\dots\}  </math> where <math> |n_{\alpha}\rangle = \sum_iC_{\alpha,i}|n_i\rangle </math> etc
The general procedure for this type of problem is to form the matrix with elements <math> \langle n_a|{\mathcal H}'|n_b\rangle </math> from the degenerate eigenstates of <math> {\mathcal H}_o </math>.  This matrix can then be diagonalized, and its eigenstates are the correct linear combinations to be used in non-degenerate perturbation theory. In other words, we choose the basis so that <math> \langle n_\alpha|{\mathcal H}'|n_\beta\rangle </math> vanishes for all <math>\alpha \ne \beta</math>. One can then apply the standard equation for the first-order energy correction, noting that the change in energy applies to the energy states described by the new basis set. (In general, the new basis will consist of some linear superposition of the existing state vectors of the original system.)
One of the well-known examples of an application of degenerate perturbation theory is the Stark effect.  Consider a hydrogen atom with <math> n=2 </math> in the presence of an external electric field <math> \vec{\mathcal E}={\mathcal E}\hat{z} </math>.  The Hamiltonian for this system is <math> {\mathcal H}={\mathcal H}_o-e{\mathcal E}z </math>.  The degenerate eigenstates of the system are <math> \{|2S\rangle,|2P_{-1}\rangle,|2P_0\rangle,|2P_{+1}\rangle\} </math>.  The matrix of the perturbation in this degenerate subspace is:
<math>
\begin{align}
\langle n_i|{\mathcal H}'|n_j\rangle &\longrightarrow \left(\begin{array}{cccc}\langle2S|-e{\mathcal E}z|2S\rangle&\langle2S|-e{\mathcal E}z|2P_{-1}\rangle&\langle2S|-e{\mathcal E}z|2P_0\rangle&\langle2S|-e{\mathcal E}z|2P_{+1}\rangle\\\langle2P_{-1}|-e{\mathcal E}z|2S\rangle&\langle2P_{-1}|-e{\mathcal E}z|2P_{-1}\rangle&\langle2P_{-1}|-e{\mathcal E}z|2P_0\rangle&\langle2P_{-1}|-e{\mathcal E}z|2P_{+1}\rangle\\\langle2P_0|-e{\mathcal E}z|2S\rangle&\langle2P_0|-e{\mathcal E}z|2P_{-1}\rangle&\langle2P_0|-e{\mathcal E}z|2P_0\rangle&\langle2P_0|-e{\mathcal E}z|2P_{+1}\rangle\\\langle2P_{+1}|-e{\mathcal E}z|2S\rangle&\langle2P_{+1}|-e{\mathcal E}z|2P_{-1}\rangle&\langle2P_{+1}|-e{\mathcal E}z|2P_0\rangle&\langle2P_{+1}|-e{\mathcal E}z|2P_{+1}\rangle\\\end{array}\right)\\
&\longrightarrow \left(\begin{array}{cccc}0&0&\langle2S|-e{\mathcal E}z|2P_0\rangle&0\\0&0&0&0\\\langle2P_0|-e{\mathcal E}z|2S\rangle&0&0&0\\0&0&0&0\\\end{array}\right)\\
&\longrightarrow \left(\begin{array}{cccc}0&0&-3e{\mathcal E}a_B&0\\0&0&0&0\\-3e{\mathcal E}a_B&0&0&0\\0&0&0&0\\\end{array}\right)\\
\end{align}
</math>
To briefly summarize how most of the terms in this matrix work out to be zero (the full arguments are worked out in G. Baym's "Lectures on Quantum Mechanics" in the section on Degenerate Perturbation Theory): first note that the <math> n=2 </math> hydrogen states have definite parity while <math> z </math> is odd under parity, so all the elements on the diagonal vanish.  The other elements vanish because of angular momentum: matrix elements of the perturbation between states with different eigenvalues of <math> L_{z}</math> vanish, since <math> e{\mathcal E}z </math> commutes with <math> L_{z} </math>.  For example,
<math>  0 = \langle 2P_{-1}| [e{\mathcal E}z, L_{z} ] | 2P_{+1} \rangle  = \langle 2P_{-1} | e{\mathcal E}z (L_{z}|2P_{+1} \rangle)
-( \langle 2P_{-1}|L_{z}) e{\mathcal E}z|2P_{+1} \rangle  = 2\hbar \langle 2P_{-1}|e {\mathcal E}z|2P_{+1} \rangle </math>  which means that  <math> \langle 2P_{-1}|e {\mathcal E}z| 2P_{+1} \rangle = 0 </math>
The correct linear combination of the degenerate eigenstates ends up being
<math> \{|2P_{-1}\rangle,|2P_{+1}\rangle,\frac{1}{\sqrt{2}}\left(|2S\rangle+|2P_0\rangle\right),\frac{1}{\sqrt{2}}\left(|2S\rangle-|2P_0\rangle\right)\} </math> 
Because of the perturbation due to the electric field, the <math> |2P_{-1}\rangle </math> and <math> |2P_{+1}\rangle </math> states are unaffected to first order.  However, the combinations <math> \frac{1}{\sqrt{2}}\left(|2S\rangle \pm |2P_0\rangle\right) </math> acquire first-order energy shifts of <math> \mp 3e{\mathcal E}a_B </math>, respectively.
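The diagonalization step can be carried out numerically. Below is a sketch with the perturbation matrix from above written in units of <math>W = 3e{\mathcal E}a_B</math>, using the basis order <math>(|2S\rangle, |2P_{-1}\rangle, |2P_0\rangle, |2P_{+1}\rangle)</math>:

```python
import numpy as np

W = 1.0                        # energy unit: W = 3 e E a_B
# Perturbation matrix in the degenerate n=2 subspace,
# basis order (|2S>, |2P_-1>, |2P_0>, |2P_+1>):
Hp = np.array([[0.0, 0.0, -W, 0.0],
               [0.0, 0.0, 0.0, 0.0],
               [-W, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.0]])
shifts, combos = np.linalg.eigh(Hp)   # eigenvalues ascending: -W, 0, 0, +W
print(shifts)
```

The eigenvector belonging to the shift <math>-W</math> has equal weight on <math>|2S\rangle</math> and <math>|2P_0\rangle</math>, reproducing the linear combinations quoted above; the two zero eigenvalues correspond to the unshifted <math>|2P_{\pm 1}\rangle</math> states.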
=== ''' <span style="color:#2B65EC"> Example: 1D harmonic oscillator</span> ''' ===
Consider a 1D harmonic oscillator perturbed by a constant force:
<math>V = -F\mathbf{x}</math>
The energy up to second order is given by
<span id="1.3.1"></span>
<math>E_{n}=\epsilon_{n}+\langle n|V|n\rangle +\sum_{m\neq n}  \frac{|\langle m|V|n\rangle |^{2}}{\epsilon_{n}-\epsilon_{m}} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (1.3.1) </math>
Let us evaluate the matrix elements:
<math>\begin{align}
\langle m|V|n\rangle &=-F\langle m|\mathbf{x}|n\rangle\\
&=-F\langle m|\sqrt{\frac{\hbar}{2m\omega}}\left( \mathbf{a}+\mathbf{a}^{\dagger}\right)|n\rangle\\
&=-F\sqrt{\frac{\hbar}{2m\omega}}\left( \langle m|\mathbf{a}|n\rangle+\langle m|\mathbf{a}^{\dagger}|n\rangle\right)\\
&=-F\sqrt{\frac{\hbar}{2m\omega}}\left( \sqrt{n}\langle m|n-1\rangle+\sqrt{n+1}\langle m|n+1\rangle\right)\\
&=-F\sqrt{\frac{\hbar}{2m\omega}}\left( \sqrt{n}\delta_{m,n-1}+\sqrt{n+1}\delta_{m,n+1}\right)\\
\end{align}</math>
We see that:
* The first order term in eq. [[#1.3.1]] is:
:<math>\langle n|V|n\rangle=0</math>
*The Second order term is:
:<math>\begin{align}
\sum_{m\neq n}  \frac{|\langle m|V|n\rangle |^{2}}{\epsilon_{n}-\epsilon_{m}}&=
\frac{|\langle n-1|V|n\rangle |^{2}}{\epsilon_{n}-\epsilon_{n-1}}+\frac{|\langle n+1|V|n\rangle |^{2}}{\epsilon_{n}-\epsilon_{n+1}}\\
&=\frac{|\langle n-1|V|n\rangle |^{2}}{\hbar \omega (n+\frac{1}{2})-\hbar \omega (n-1+\frac{1}{2})}+\frac{|\langle n+1|V|n\rangle |^{2}}{\hbar \omega (n+\frac{1}{2})-\hbar \omega (n+1+\frac{1}{2})}\\
&=\frac{|\langle n-1|V|n\rangle |^{2}}{\hbar \omega}+\frac{|\langle n+1|V|n\rangle |^{2}}{-\hbar \omega}\\
&=\frac{1}{\hbar \omega}\left(\frac{\hbar}{2m\omega}F^{2}n-\frac{\hbar}{2m\omega}F^{2}(n+1)\right)\\
&=\frac{-F^{2}}{2m\omega^{2}}
\end{align}</math>
Finally the energy is given by
<math>E_{n}=\epsilon_{n}-\frac{F^{2}}{2m\omega^{2}}
</math>
This result is exactly the same as the one obtained by solving the problem exactly, without perturbation theory (by completing the square in the Hamiltonian).
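This can be confirmed numerically by diagonalizing <math>H = \hbar\omega(a^{\dagger}a + \tfrac{1}{2}) - Fx</math> in a truncated number basis. The sketch below uses <math>\hbar = m = \omega = 1</math> and an illustrative <math>F = 0.1</math>; the truncation size is arbitrary but large enough that the low levels are converged:

```python
import numpy as np

hbar = m = w = 1.0
F = 0.1
N = 60                                        # basis truncation

a = np.diag(np.sqrt(np.arange(1, N)), 1)      # lowering operator a
x = np.sqrt(hbar / (2 * m * w)) * (a + a.T)   # x = sqrt(hbar/2mw)(a + a^+)
H0 = hbar * w * (np.diag(np.arange(N)) + 0.5 * np.eye(N))
E = np.linalg.eigvalsh(H0 - F * x)

shift = -F ** 2 / (2 * m * w ** 2)
print(E[:3])          # each low level equals (n + 1/2)*hbar*w + shift
```

The low-lying eigenvalues match <math>\hbar\omega(n+\tfrac{1}{2}) - F^2/2m\omega^2</math> to numerical precision, since the second-order result happens to be exact for this problem.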
== ''' <span style="color:#2B65EC"> Time dependent perturbation theory in Quantum Mechanics </span> ''' ==
=== ''' <span style="color:#2B65EC"> Formalism </span> ''' ===
Previously, we studied time independent perturbation theory, in which a small change in the Hamiltonian generates a correction in the form of a series expansion for the energies and wave functions. The problem for a time independent <math>\mathcal{H}</math> can be solved by finding a solution to the equation <math>\mathcal{H}|n\rangle = E_n|n\rangle</math>; changes in time can then be modeled by constructing the states <math> |\psi(t)\rangle = \sum_nc_n(t)|n\rangle </math> where <math>c_n(t) = e^{-\frac{i}{\hbar}E_n t}c_n(0) </math>. In principle this describes any closed system, and there would never be a reason for time-dependent problems if it were practical to consider all systems as closed. However, there are many systems in nature that are more easily described as open. For example, while the stationary approach can be used to describe the interaction of the electromagnetic field with atoms (i.e. a photon with a hydrogen atom), it is more practical to describe it as an open system with an explicitly time dependent term (due to the EM radiation). Therefore we explore time dependent perturbation theory.
One of the main tasks of this theory is the ''calculation of transition probabilities'' from one state <math>|\psi_n \rangle</math> to another state <math>|\psi_m \rangle</math> that occur under the influence of a time dependent potential. Generally, the transition of a system from one state to another only makes sense if the potential acts only within a finite time period, from <math>\!t = 0</math> to <math>\!t = T</math>. Outside this time period, the total energy is a constant of motion which can be measured.
We start with the time dependent Schrödinger equation,
<math>i\hbar\frac{\partial}{\partial t}|\psi_t^0 \rangle = H_0 |\psi_t^0\rangle,  \qquad t<t_0. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.1.1)</math>
Then, to answer any question about the behavior of the system at a later time, we must find its state <math> | \psi_{t} \rangle </math>.  Assuming that the perturbation acts after time <math>\!t_0</math>, we have
<math>i\hbar\frac{\partial}{\partial t}|\psi_t \rangle = (H_0 + V_t)|\psi_t\rangle,  \qquad t>t_0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\; (2.1.2)</math>
The problem therefore consists of finding the solution <math>|\psi_t\rangle</math> with boundary condition <math>|\psi_t\rangle = |\psi_t^0\rangle</math> for <math>t \leq t_0</math>. However, such a problem is usually impossible to solve completely in closed form. <br />
Therefore, we limit ourselves to problems in which <math>\!V_t</math> is small.  In that case we can treat <math>\!V_t</math> as a perturbation and seek its effect on the wavefunction in powers of <math>\!V_t</math>.
Since <math>\!V_t</math> is small, the time dependence of the solution will largely come from <math>\!H_0</math>. So we use
<math>|\psi_t\rangle = e^{-i H_0 t/\hbar}|\psi(t)\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\; (2.1.3)</math>,
which we substitute into the Schrodinger Equation to get <span id="(2.1.4)"></span>
<math>i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle=V(t)|\psi(t)\rangle \quad \text{where}\quad V(t) = e^{i H_0 t/\hbar}V_te^{-i H_0 t/\hbar}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.1.4)</math>.
In this equation <math>\psi(t)</math> and the operator <math>V(t)</math> are in the <i>interaction</i> representation. Now, we integrate equation [[#(2.1.4)]] to get
<math>\int_{t_0}^{t}dt' \frac{\partial}{\partial t'}|\psi(t')\rangle = |\psi(t)\rangle - |\psi(t_0)\rangle = \frac{1}{i\hbar}\int_{t_0}^{t}dt' V(t')|\psi(t')\rangle</math>
or <span id="(2.1.5)"></span>
<math>|\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt' V(t')|\psi(t')\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \;\;\ (2.1.5)</math>
Equation [[#(2.1.5)]] can be iterated by inserting this equation itself as the integrand in the r.h.s. We can then write equation [[#(2.1.5)]] as
<math>|\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt' V(t')\left(|\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t'}dt'' V(t'')|\psi(t'')\rangle\right), \qquad t''<t'\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\ (2.1.6)</math>
which can be written compactly as
<math>|\psi(t)\rangle = T e^{-\frac{i}{\hbar}\int_{t_0}^{t}V(t')dt'} |\psi(t_0) \rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\ (2.1.7)</math>
This is the general solution. <math>T</math> is called the <i>time ordering operator</i>, which ensures that the operators in the expansion of the exponential are arranged so that later times stand to the left. For now, we consider only the correction to first order in <math>\!V(t)</math>.
'''First Order Transitions'''
If we limit ourselves to the first order, we use
<math>|\psi(t)\rangle = |\psi(t_0)\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt'V(t')|\psi(t_0)\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\ (2.1.8)</math>
We want to see whether the system undergoes a transition to another state, say <math>|n\rangle</math>.  So we project the wave function <math>|\psi(t)\rangle</math> onto <math>|n\rangle</math>. From now on, let <math>|\psi(t_0)\rangle = |0\rangle</math> <br />
for brevity.  In other words, what is the probability of the state <math> |0\rangle </math> making a transition into the state <math> |n \rangle </math> at a given time <math>t</math>?
Projecting <math> |\psi(t)\rangle </math> into state <math>|n\rangle</math> and letting <math> \langle n|0\rangle =0 </math> if <math> n \neq 0 </math>, we get
<span id="(2.1.9)"></span>
<math>\begin{align}\langle n|\psi(t)\rangle & = \langle n|0\rangle + \frac{1}{i\hbar}\int_{t_0}^{t}dt'\langle n|V(t')|0\rangle\\ & = \frac{1}{i\hbar}\int_{t_0}^{t}dt'\langle n|e^{\frac{i}{\hbar}H_0 t'}V_{t'}e^{-\frac{i}{\hbar}H_0 t'}|0\rangle\\ & = \frac{1}{i\hbar}\int_{t_0}^{t}dt'e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}\langle n|V_{t'}|0\rangle \end{align}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\,\ (2.1.9)</math>
Expression [[#(2.1.9)]] is the probability amplitude for the transition. Squaring its modulus, we get the probability of finding the system in state <math>|n\rangle</math> at time <math> t </math>:
<math>P_{0 \rightarrow n}(t) = |\langle n|\psi(t)\rangle|^2 = \left|\frac{1}{i\hbar}\int_{t_0}^{t}dt' e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}\langle n|V_{t'}|0\rangle\right|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\, (2.1.10)</math>
For example, let us consider a potential <math>\!V_t</math> which is turned on sharply at time <math>\!t_0</math>, but independent of <math> t </math> thereafter. Furthermore, we let <math>\!t_0 = 0</math> for convenience. Therefore :
<math>V_t =
\begin{cases}
0 &\mbox{if} \qquad t<0\\
V &\mbox{if} \qquad t>0
\end{cases}
</math>
<span id="2.1.11"></span>
<math>
\begin{align}
P_{0 \rightarrow n}(t) & = \left|\frac{1}{i\hbar}\int_{0}^{t}dt' e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}\langle n|V|0\rangle\right|^2\\
& = \left|\frac{1}{i\hbar}\frac{e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t}-1}{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)}\langle n|V|0\rangle\right|^2\\
& = \frac{4 \sin^2\left(\frac{\epsilon_n - \epsilon_0}{2\hbar}t\right)}{\left(\epsilon_n - \epsilon_0\right)^2}|\langle n|V|0 \rangle|^2
\end{align}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;  (2.1.11)
</math>
The plot of the probability vs. <math>\! \epsilon_n</math> is given in the following plot:
[[Image:Amplitude.JPG]]
where <math>\Delta\epsilon\Delta t \geq 2\pi\hbar</math>. We conclude that as time grows the probability develops a very narrow peak, so approximate energy conservation is required for a transition with appreciable probability. However, this "uncertainty relation" is not the same as the fundamental <math> x - p </math> uncertainty relation: while <math> x </math> and <math> p </math> are both observables, time in non-relativistic quantum mechanics is just a parameter, not an observable.
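The first-order result [[#2.1.11]] can also be sanity-checked numerically against the exact evolution of a toy two-level system. The sketch below uses illustrative parameters of our own choosing (<math>\hbar = 1</math>; the names <code>eps</code> and <code>v</code> are not from the text):

```python
import numpy as np

# Toy check of eq. (2.1.11) with hbar = 1: H0 = diag(0, eps) plus a small
# constant off-diagonal coupling v switched on at t = 0.  First-order
# perturbation theory predicts P = 4 sin^2(eps*t/2)/eps^2 * v^2.
eps, v, t = 1.0, 0.01, 5.0
H = np.array([[0.0, v], [v, eps]])

# Exact evolution: |psi(t)> = exp(-i H t) |0>, via eigendecomposition
w, U = np.linalg.eigh(H)
psi_t = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T @ np.array([1.0, 0.0])
P_exact = abs(psi_t[1])**2

P_first = 4 * np.sin(eps * t / 2)**2 / eps**2 * v**2
print(P_exact, P_first)   # nearly equal for this weak coupling
```

Shrinking <code>v</code> improves the agreement further, while for strong coupling the exact (Rabi-type) result departs from the first-order formula, as expected of a perturbative expansion.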
Now imagine shining light of a certain frequency on a hydrogen atom. The atom may end up in some bound state, but it might be ionized as well. The problem with ionization is that the final state lies in a continuum, so we cannot simply pick a single state to end in, i.e. a plane wave with a specific <math> k </math>.
Furthermore, if the wave function is normalized, the plane-wave states will contain a factor of <math>1/\sqrt{V}</math>, which goes to zero if <math> V </math> is very large. But we know that ionization exists, so something must be missing. Instead of measuring the probability of a transition to a single wavenumber <math> k </math>, we should measure the probability of a transition to a group of states around a particular <math> k </math>, i.e. from <math> k </math> to <math> k+dk </math>.
Let's suppose that the state <math>|n\rangle</math> is one of the continuum states. Then what we can ask is the probability that the system makes a transition to a small group of states about <math>|n\rangle</math>, not to a specific value of <math>|n\rangle</math>. For example, for a free particle, what we can find is the transition probability from the initial state to a small group of states around <math>|\vec k\rangle</math>, or in other words, the transition probability to an element of phase space <math>\! d^3k / (2\pi)^3</math>.
The next step is a mathematical trick. We use the representation
<math>\delta(x) = \lim_{t \to \infty}\frac{\sin^2(xt)}{\pi x^2 t}</math>
Applying this to our result from above, we see that as <math> t \rightarrow \infty </math>,
<math>\frac{\sin^2\left(\frac{\epsilon_n - \epsilon_0}{2\hbar}t\right)}{\left(\frac{\epsilon_n - \epsilon_0}{2\hbar}\right)^2} \quad\longrightarrow\quad \pi t\, \delta\left(\frac{\epsilon_n - \epsilon_0}{2\hbar}\right) = 2 \pi \hbar\, t\, \delta\left( \epsilon_n - \epsilon_0 \right) \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\;\;\; (2.1.12)</math>
If this result is used in equation [[#2.1.11]], it gives
<math>
{P_{0 \rightarrow n}(t)}\quad\underset{t \rightarrow \infty}{\longrightarrow}\quad \frac{t}{\hbar}2\pi \delta(\epsilon_n - \epsilon_0)|\langle n|V|0\rangle|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;(2.1.13)
</math>
or as a rate of transition, <math>\Gamma_{0\rightarrow n}</math> :
<math>\Gamma_{0 \rightarrow n} = \frac{d}{dt}P_{0 \rightarrow n}(t)\quad\underset{t \rightarrow \infty}{\longrightarrow}\quad\frac{2\pi}{\hbar} \delta(\epsilon_n - \epsilon_0)|\langle n|V|0\rangle|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2.1.14)
</math>
which is called <b><i>the Fermi Golden Rule</i></b>. When using this formula, one must keep in mind to sum over the entire continuum of final states.
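As a check on the delta-function limit behind the Golden Rule, one can integrate the first-order probability over all final-state energies: the total grows linearly in <math>t</math>, which is exactly why the transition ''rate'' is constant. A minimal numerical sketch (<math>\hbar = 1</math>, unit matrix element):

```python
import numpy as np

# Numerical check (hbar = 1, |<n|V|0>| = 1): integrating the first-order
# probability 4 sin^2(x t/2)/x^2 over x = eps_n - eps_0 yields 2*pi*t,
# i.e. the total transition probability grows linearly in t, so the
# transition *rate* is constant -- the content of the Golden Rule.
def total_probability(t, xmax=2000.0, n=2_000_001):
    x = np.linspace(-xmax, xmax, n)
    x[n // 2] = 1e-12          # sidestep the removable singularity at x = 0
    dx = x[1] - x[0]
    return np.sum(4 * np.sin(x * t / 2)**2 / x**2) * dx

t = 3.0
P_tot = total_probability(t)
print(P_tot, 2 * np.pi * t)
```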
To make things clear, let's try to calculate the transition probability for a system from a state <math>|\vec{k}\rangle</math> to a final state <math>|\vec{k'}\rangle</math> due to a potential <math>\! V(r)</math>.
<math>
\langle \vec{k}'|V|\vec{k}\rangle = \int d^3 r \frac{e^{-i\vec{k}'\cdot\vec{r}}}{\sqrt{L^3}}V(r)\frac{e^{i\vec{k}\cdot\vec{r}}}{\sqrt{L^3}} = \frac{V_{k'k}}{L^3} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\ (2.1.15)
</math>
<math>\Gamma_{\vec{k} \rightarrow \vec{k}'} = \frac{2\pi}{\hbar} \delta(\epsilon_k - \epsilon_{k'})\frac{|V_{k'k}|^2}{L^6}</math>
What we want is the rate of transition, or actually scattering in this case, <math>\!d\Gamma</math> into a small solid angle <math>\!d\Omega</math>. So we must sum over the momentum states in this solid angle:
<math>\sum_{\vec{k}'\in d\Omega}\Gamma_{\vec{k}\rightarrow \vec{k}'} </math>
The sum over continuum states can be converted into an integral,
<math>\sum_{\vec{k}'\in d\Omega} \quad \longrightarrow \quad d\Omega\int d\epsilon_{\vec{k}'}\frac{L^3 m k'}{(2\pi)^3 \hbar^2}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\ (2.1.16)</math>
Therefore,
<math>d\Gamma_{\vec{k}\rightarrow{\vec{k}'\in d\Omega}} = \frac{d\Omega}{L^3}\frac{mk}{4\pi^2\hbar^3}|V_{k'k}|^2 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\;\ (2.1.17)</math>
The incident flux for one particle of momentum <math>\hbar \vec{k}</math> in a volume <math>\!L^3</math> is <math>\hbar k / m L^3</math>, so
<math>\frac{d\Gamma}{d\Omega \left(\frac{\hbar k}{m L^3}\right)} = \frac{m^2}{4\pi^2\hbar^4}\left|V_{k'k}\right|^2 = \frac{d\sigma}{d\Omega}</math>, in the Born Approximation.
This result makes sense since our potential does not depend on time, so what happened here is that we sent a particle with wave vector <math>\vec{k}</math> through a potential and later detect a particle coming out from that potential with wave vector <math>\vec{k'}</math>. So, it is a scattering problem solved using a different method.
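As an illustration of the matrix element in (2.1.15), take a hypothetical Yukawa (screened Coulomb) potential <math>V(r) = V_0 e^{-\mu r}/r</math>, whose Fourier transform is known in closed form. The sketch below (arbitrary parameters of our own choosing) compares a numerical radial integral with the analytic answer:

```python
import numpy as np

# Hypothetical example for eq. (2.1.15): a Yukawa potential
# V(r) = V0 exp(-mu r)/r.  For a spherically symmetric potential the
# matrix element reduces to the radial integral
#   V_{k'k} = (4 pi / q) * Int_0^inf  r V(r) sin(q r) dr,  q = |k' - k|,
# which for the Yukawa form equals 4 pi V0 / (q^2 + mu^2).
V0, mu, q = 2.0, 1.5, 0.7

r = np.linspace(1e-6, 60.0, 600_001)
dr = r[1] - r[0]
integrand = V0 * np.exp(-mu * r) * np.sin(q * r)   # r * V(r) * sin(q r)
V_num = 4 * np.pi / q * np.sum(integrand) * dr
V_exact = 4 * np.pi * V0 / (q**2 + mu**2)
print(V_num, V_exact)
```

Inserting <math>|V_{k'k}|^2</math> into the Born-approximation formula above then gives the familiar screened-Coulomb differential cross-section.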
This is [http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/A_Simple_Example_of_Transition_Probability_Calculation another simple example of a transition probability calculation in time-dependent perturbation theory] with a different potential.
Here is [http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/Another_example another example].
Here is an example of the [http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/hydrogen_atom_lifetime_lifetime lifetime of the first excited state of the hydrogen atom].
=== ''' <span style="color:#2B65EC"> Harmonic Perturbation Theory </span> ''' ===
Harmonic perturbations are among the most important cases in perturbation theory. In experiment, we usually perturb a system with a particular signal to extract information about it, for example the differences between its energy levels. We could send a photon with a certain frequency into a hydrogen atom to excite an electron, then let it decay and measure the frequency of the emitted photon, which gives the difference between the two energy levels. The photon acts as an electromagnetic signal, and it is harmonic (if we consider it as an electromagnetic wave).
In general, we write down the harmonic perturbation as
<math>\!V_t = V \cos(\omega t) e^{\eta t}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(2.2.1)</math>
where <math>\!e^{\eta t}</math> specifies the rate at which the perturbation is turned on. Since we assume the perturbation is turned on very slowly, <math>\eta</math> is a very small positive number which is set to zero at the end of the calculation.
We start from <math>\!t_0 = - \infty</math>, since there is no perturbation at that time. We want to find the probability that there will be a transition from the initial state to some other state <math>| n \rangle</math>.
The transition amplitude is,
<math>\!\langle n|\psi_t\rangle = \langle n|e^{\frac{-i}{\hbar}H_0 t}|\psi(t)\rangle = e^{\frac{-i}{\hbar}\epsilon_n t}\langle n|\psi(t)\rangle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad(2.2.2)</math>
To the first order of V we write
<math>
\begin{align}
\langle n|\psi(t)\rangle & = \frac{1}{i\hbar}\int_{-\infty}^{t}dt' \langle n|V(t')|0\rangle\\
& = \frac{1}{i\hbar}\int_{-\infty}^{t}dt' \langle n|e^{\frac{i}{\hbar}H_0 t'}V_t e^{\frac{-i}{\hbar}H_0 t'}|0\rangle\\
& = \frac{1}{i\hbar}\int_{-\infty}^{t}dt'\, e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}e^{\eta t'}\cos(\omega t')\langle n|V|0\rangle\\
& = \frac{\langle n|V|0\rangle}{2i\hbar}\sum_{s=\pm}\int_{-\infty}^{t}dt' e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t'}e^{\eta t'}e^{is\omega t'}\\
& = \frac{\langle n|V|0\rangle}{2i\hbar}\sum_{s=\pm}\frac{e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0)t}e^{\eta t}e^{is\omega t}}{i(\frac{\epsilon_n - \epsilon_0}{\hbar}+s\omega-i\eta)}\\
& = \frac{\langle n|V|0\rangle}{2} e^{\eta t}\sum_{s = \pm}\frac{e^{\frac{i}{\hbar}(\epsilon_n - \epsilon_0 - s\hbar \omega)t}}{\epsilon_0 - \epsilon_n - s\hbar \omega + i\eta \hbar}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad (2.2.3)
\end{align}
</math>
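The elementary integral used in the last two steps of (2.2.3) can be checked numerically. The sketch below (made-up values of the frequencies, <math>\hbar = 1</math>) verifies <math>\int_{-\infty}^{t}dt'\, e^{i(a + s\omega - i\eta)t'} = \frac{e^{i(a + s\omega - i\eta)t}}{i(a + s\omega - i\eta)}</math> for <math>\eta > 0</math>:

```python
import numpy as np

# Check of the integral used above (hbar = 1, made-up values): for eta > 0,
#   Int_{-inf}^{t} exp(i*(a + s*w - i*eta)*t') dt'
#     = exp(i*(a + s*w - i*eta)*t) / (i*(a + s*w - i*eta)).
a_sw, eta, t = 1.8, 0.05, 2.0     # a + s*omega lumped into one number
z = a_sw - 1j * eta               # combined complex frequency

tp = np.linspace(-400.0, t, 2_000_001)   # e^{eta t'} kills the remote past
dt = tp[1] - tp[0]
numeric = np.sum(np.exp(1j * z * tp)) * dt
exact = np.exp(1j * z * t) / (1j * z)
print(numeric, exact)
```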
Now we calculate the probability as usual:
<math>
\begin{align}
|\langle n|\psi_t\rangle|^2 & = \frac{1}{4} |\langle n|V|0\rangle|^2 e^{2\eta t}\sum_{ss'}\frac{e^{-i(s-s')\omega t}}{(\epsilon_0 - \epsilon_n - s\hbar \omega + i\eta \hbar)(\epsilon_0 - \epsilon_n - s'\hbar \omega - i\eta \hbar)}\\
\underset{0 \rightarrow n}{P(t)} & = \frac{1}{4}|\langle n|V|0\rangle|^2 e^{2\eta t}\left[\frac{1}{(\epsilon_0 - \epsilon_n -\hbar\omega)^2 +  \eta^2 \hbar^2}+\frac{1}{(\epsilon_0 - \epsilon_n + \hbar \omega)^2+  \eta^2 \hbar^2}\right]
\end{align}
</math>
where all oscillatory (<math>s \neq s'</math>) terms have been averaged to zero. The transition rate is given by:
<math>
\underset{0 \rightarrow n}{\Gamma(t)}=\frac{d{P(t)_{0 \rightarrow n}}}{d t} = \frac{1}{4}|\langle n|V|0\rangle|^2 e^{2\eta t}\left[\frac{2\eta}{(\epsilon_0 - \epsilon_n - \hbar \omega)^2+  \eta^2 \hbar^2}+\frac{2\eta}{(\epsilon_0 - \epsilon_n + \hbar \omega)^2+  \eta^2 \hbar^2}\right]
</math>
Now we take the limit <math>\eta \rightarrow 0</math>. Using <math>\lim_{\eta \to 0}\frac{2\eta}{x^2+\eta^2\hbar^2} = \frac{2\pi}{\hbar}\delta(x)</math>, we obtain:
<math>
\underset{0 \rightarrow n}{\Gamma(t)} = \frac{1}{4}|\langle n|V|0\rangle|^2 \frac{2\pi}{\hbar}\left[\delta(\epsilon_n - \epsilon_0 + \hbar \omega)+\delta(\epsilon_n - \epsilon_0 - \hbar \omega)\right]
</math>
which is the Fermi Golden Rule. This result shows that there is a non-zero transition probability only for <math>\epsilon_n - \epsilon_0 = \pm \hbar \omega</math>; roughly speaking, there will be significant transitions only when <math>\omega</math> is a "resonant frequency" for a particular transition. The Golden Rule also shows that the final transition rate does not depend on <math>\eta</math>, i.e. on how slowly the perturbation was switched on.
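The <math>\eta \rightarrow 0</math> limit rests on the Lorentzian representation of the delta function: <math>2\eta/(x^2 + \eta^2\hbar^2)</math> has constant area <math>2\pi/\hbar</math> while becoming arbitrarily narrow. A quick numerical check (<math>\hbar = 1</math>):

```python
import numpy as np

# Check (hbar = 1): the Lorentzian 2*eta/(x^2 + eta^2) has area 2*pi for
# every eta, so as eta -> 0 it tends to 2*pi*delta(x), producing the
# delta functions in the Golden Rule above.
def area(eta, xmax=2000.0, n=2_000_001):
    x = np.linspace(-xmax, xmax, n)
    dx = x[1] - x[0]
    return np.sum(2 * eta / (x**2 + eta**2)) * dx

for eta in (1.0, 0.1, 0.01):
    print(eta, area(eta), 2 * np.pi)
```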
=== ''' <span style="color:#2B65EC"> Second Order Transitions </span> ''' ===
Sometimes the first-order matrix element <math> \langle f|V|i \rangle </math> is identically zero (by parity, the Wigner-Eckart theorem, etc.) but other matrix elements are nonzero, and the transition can then be accomplished by an indirect route.
<math> c^{(2)}_{n}(t)=\left(\frac{1}{i \hbar}\right)^2 \sum_{n}\int_{0}^{t} \int_{0}^{t'} dt' dt''
e^{-i \omega_{f}\left(t-t'\right)}\langle f|V_{S}(t')|n\rangle e^{-i \omega_{n}\left(t'-t''\right)}
\langle n|V_{S}(t'')|i\rangle e^{-i \omega_{i} t''} </math>
where <math> c^{(2)}_{n}(t) </math> is the probability amplitude for the second-order process. Taking the gradually switched-on harmonic perturbation <math> V_{S}(t)=e^{\epsilon t} V e^{-i \omega t}</math> and the initial time
<math> -\infty </math>, as above,
<math> c^{(2)}_{n}(t)=\left(\frac{1}{i \hbar}\right)^2 \sum_{n}\langle f|V|n\rangle \langle n|V|i\rangle
e^{-i \omega_{f} t} \int_{-\infty}^{t} dt' \int_{-\infty}^{t'} dt'' e^{i \left(\omega_{f} -\omega_{n}
-\omega-i \epsilon\right)t'} e^{i \left(\omega_{n} -\omega_{i} -\omega-i \epsilon\right)t''}</math>
The integrals are straightforward, and yield
<math>c^{(2)}_{n}(t)=\left(\frac{1}{i \hbar}\right)^2 e^{-i \left(\omega_{i} -\omega_{f}\right)t}
\frac{e^{2 \epsilon t}}{\omega_{f} -\omega_{i} -2 \omega-2 i \epsilon}
\sum_{n} \frac{\langle f|V|n\rangle \langle n|V|i\rangle}{\omega_{n} -\omega_{i} -\omega-i \epsilon}</math>
Exactly as in the section above on the first-order Golden Rule, we can find the transition rate:
<math> \frac{d}{dt}\left|{c^{(2)}_{n}(t)}\right|^2 = \frac{2 \pi}{\hbar^4}
\left|{\sum_{n}\frac{\langle f|V|n\rangle \langle n|V|i\rangle}{\omega_{n} -\omega_{i} -\omega
-i \epsilon}}\right|^2 \delta \left(\omega_{f} -\omega_{i} -2 \omega \right)</math>
The <math> \hbar^4 </math> in the denominator goes to <math> \hbar </math> on replacing the frequencies <math> \omega </math> with energies <math> E = \hbar\omega </math>, both in the denominator and in the delta function.
This is a transition in which the system gains energy <math> 2 \hbar \omega </math> from the beam; in other words, two photons are absorbed. The first photon takes the system to the intermediate state <math> |n\rangle </math>, which is short-lived and therefore not well defined in energy. There is no energy conservation requirement for this intermediate state, only between the initial and final states.
Of course, if an atom in an arbitrary state is exposed to monochromatic light, other second order processes in which two photons are emitted, or one is absorbed and one emitted (in either order) are also possible.
=== ''' <span style="color:#2B65EC"> Example of Two Level System : Ammonia Maser </span> ''' ===
[[Image:Ammonia.JPG]]
This is a very complicated quantum system which cannot be solved exactly; however, a few assumptions make the problem tractable. In this model, we assume that the nitrogen atom, being much heavier than hydrogen, is motionless. The hydrogen atoms form a rigid equilateral triangle whose axis always passes through the nitrogen atom.
Since there are two distinct relevant states (the two positions of the hydrogen triangle relative to the nitrogen atom), we write the wave function as a time-dependent superposition of both:
<math>|\Psi_t\rangle = C_1(t)|1\rangle + C_2(t)|2\rangle</math>
Substituting this state into the time-dependent Schrodinger equation gives
<math>i\hbar
\begin{pmatrix}
  \dot{C}_1(t)\\
  \dot{C}_2(t)
\end{pmatrix}
=
\begin{pmatrix}
  E_0 & A\\
  A & E_0
\end{pmatrix}
\begin{pmatrix}
  C_1(t)\\
  C_2(t)
\end{pmatrix}
</math>
In the presence of an electric field, the additional energy enters only in the diagonal part of the Hamiltonian matrix:
<math>i\hbar
\begin{pmatrix}
  \dot{C}_1(t)\\
  \dot{C}_2(t)
\end{pmatrix}
=
\begin{pmatrix}
  E_0 + \mu \varepsilon(t) & A\\
  A & E_0 - \mu \varepsilon(t)
\end{pmatrix}
\begin{pmatrix}
  C_1(t)\\
  C_2(t)
\end{pmatrix}
</math>
Typically, <math>2 A \sim 10^{-4} \;\mbox{eV}</math>, which gives the frequency of oscillation of the hydrogen triangle <math>\nu \sim 2.4 \times 10^{10} \;\mbox{Hz}</math> and the wavelength <math>\lambda \approx 1.25 \;\mbox{cm}</math> (microwave region).
Solving for the Schrodinger equation we have above, we find the energy of the two states
<math>E_\pm = E_0 \pm \sqrt{(\mu \varepsilon)^2 + A^2}</math>
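This diagonalization is easy to verify numerically; the sketch below uses illustrative numbers, not real ammonia parameters:

```python
import numpy as np

# Sketch: diagonalizing the two-level Hamiltonian with the field term on
# the diagonal reproduces E_pm = E0 +/- sqrt((mu*eps)^2 + A^2).
# E0, A, mu_eps below are illustrative values only.
E0, A, mu_eps = 1.0, 0.05, 0.2
H = np.array([[E0 + mu_eps, A], [A, E0 - mu_eps]])
E_minus, E_plus = np.linalg.eigvalsh(H)      # eigenvalues, sorted ascending
print(E_plus, E0 + np.sqrt(mu_eps**2 + A**2))
print(E_minus, E0 - np.sqrt(mu_eps**2 + A**2))
```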
The following graphs show the eigenenergies as functions of the applied electric field <math> \varepsilon </math>:
[[Image:Crossing.JPG]]
Because of these two different states, ammonia molecules can be separated in an electric field. This can be used to select molecules with a certain energy.
[[Image:Ammonia_maser.JPG]]
It should be clear that if <math>\! \varepsilon(t) = 0</math>, our eigenstates are <math>\underbrace{\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \pm 1 \end{pmatrix}}_{basis \;for \;expansion}= \; \begin{pmatrix} C_1(0) \\ C_2(0) \end{pmatrix} </math>with energies <math>\! E_0 \pm A</math>
Let <math>\begin{pmatrix}C_1(t)\\C_2(t)\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix} \gamma_1(t)+\frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix} \gamma_2(t)</math>,
then we find
<math>
\begin{align}
i\hbar \dot{\gamma_1} & = & (E_0 +A)\gamma_1 + \mu \varepsilon(t)\gamma_2\\
i\hbar \dot{\gamma_2} & = & (E_0 -A)\gamma_2 + \mu \varepsilon(t)\gamma_1
\end{align}
</math>
Now, let
<math>
\begin{align}
\gamma_1(t) & = e^{-\frac{i}{\hbar}(E_0 + A)t}\alpha(t)\\
\gamma_2(t) & = e^{-\frac{i}{\hbar}(E_0 - A)t}\beta(t)
\end{align}
</math>
Also, we define the electric field as a function of time <math>\varepsilon(t) = 2\varepsilon_0 \cos\omega t = \varepsilon_0(e^{i\omega t}+e^{-i \omega t})</math> so the above expression can be written as
<math>
\begin{align}
i\hbar \dot{\alpha}(t) & = \mu \varepsilon_0 (e^{i(\omega + \frac{2A}{\hbar})t}+e^{-i(\omega - \frac{2A}{\hbar})t})\beta(t)\\
i\hbar \dot{\beta}(t) & = \mu \varepsilon_0 (e^{i(\omega - \frac{2A}{\hbar})t}+e^{-i(\omega + \frac{2A}{\hbar})t})\alpha(t)
\end{align}
</math>
Now, observe that as <math>\! \omega \rightarrow \frac{2A}{\hbar} = \omega_0</math>, the first term on the right-hand side of the first equation oscillates very rapidly compared to the second term, and similarly for the second equation. The average of these rapidly oscillating terms is zero, so we can drop them in what follows (the rotating wave approximation). We are left with
<math>
\begin{align}
i\hbar\dot{\alpha}(t) & = \mu \varepsilon_0 e^{-i(\omega - \omega_0)t}\beta(t)\\
i\hbar\dot{\beta}(t) & = \mu \varepsilon_0 e^{i(\omega - \omega_0)t}\alpha(t)
\end{align}
</math>
At resonance, <math>\! \omega = \omega_0</math>, these equations are simplified to
<math>
\begin{align}
i\hbar\dot{\alpha}(t) & = \mu \varepsilon_0 \beta(t)\\
i\hbar\dot{\beta}(t) & = \mu \varepsilon_0 \alpha(t)
\end{align}
</math>
We can then differentiate the first equation with respect to time and substitute the second equation into it to get
<math>i\hbar \ddot{\alpha} =\mu \varepsilon_0 \left(\frac{\mu \varepsilon_0}{i\hbar}\alpha\right) \Rightarrow \ddot{\alpha}=-\left(\frac{\mu\varepsilon_0}{\hbar} \right)^2 \alpha</math>
With solution (also for <math> \beta </math> with substitution)
<math>
\begin{align}
\alpha(t) & = a \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right) + b \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right)\\
\beta(t) & = ib \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right) - ia \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\end{align}
</math>
Let's assume that at time <math>\! t = 0</math>, the molecule is in state <math> |1\rangle </math> (experimentally, we can prepare the molecule to be in this state) so that <math> a = 1 \!</math> and <math>b = 0\!</math>. This assumption gives
<math>
\begin{matrix}
\alpha(t) \!\!\! &=& \!\!\! \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\\
\beta(t) \!\!\! &=& \!\!\! -i \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\end{matrix}
\; \Rightarrow \;
\begin{matrix}
\gamma_1(t) \!\!\! &=& \!\!\! e^{-\frac{i}{\hbar}(E_0 + A)t} \cos\left(\frac{\mu \varepsilon_0}{\hbar}t\right) \\
\gamma_2(t) \!\!\! &=& \!\!\! -ie^{-\frac{i}{\hbar}(E_0 - A)t} \sin\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\end{matrix}
</math>
Therefore the probabilities that the molecule is found in the states <math> |+\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} </math> and <math> |-\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix} </math> are:
<math>
\begin{align}
P_+(t) & = |\gamma_1(t)|^2 = \cos^2\left(\frac{\mu \varepsilon_0}{\hbar}t\right)\\
P_-(t) & = |\gamma_2(t)|^2 = \sin^2\left(\frac{\mu \varepsilon_0}{\hbar}t\right)
\end{align}
</math>
Note that the probabilities depend on time. The molecules enter the cavity in the upper energy state. If the length of the cavity is chosen appropriately, the molecules will exit in the lower energy state with certainty (<math>P_{-}=1</math>). In that case the molecules lose energy, and the cavity gains the same amount of energy. The cavity is thereby excited and then produces stimulated emission. That is the mechanism of a MASER, which stands for Microwave Amplification by Stimulated Emission of Radiation.
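The resonance solution can be confirmed by exponentiating the coupled equations directly: with <math>\hbar = 1</math> and <math>\Omega = \mu\varepsilon_0/\hbar</math>, the equations for <math>(\alpha, \beta)</math> are generated by <math>\Omega\sigma_x</math>. A sketch with an illustrative value of <math>\Omega</math>:

```python
import numpy as np

# Resonance equations (hbar = 1): d/dt (alpha, beta) = -i*Omega*sigma_x*(alpha, beta),
# with Omega = mu*eps0.  Exponentiating should reproduce P_+ = cos^2(Omega t)
# and P_- = sin^2(Omega t) for a molecule starting in the upper state.
Omega, t = 0.3, 2.0
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

w, U = np.linalg.eigh(Omega * sigma_x)
state = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T @ np.array([1.0, 0.0])
P_plus, P_minus = abs(state[0])**2, abs(state[1])**2
print(P_plus, np.cos(Omega * t)**2)
print(P_minus, np.sin(Omega * t)**2)
```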
=='''<span style="color:#2B65EC">Interaction of radiation and matter</span>''' ==
The conventional treatment of quantum mechanics uses time-independent wavefunctions with the Schrödinger equation to determine the energy levels (eigenvalues) of a system. To understand the interaction of radiation (electromagnetic radiation) and matter, we need to consider the time-dependent Schrödinger equation.
=== '''<span style="color:#2B65EC">Quantization of electromagnetic radiation</span>''' ===
===='''<span style="color:#2B65EC">Classical view</span>'''====
Let's use the '''transverse gauge''' (sometimes called the ''Coulomb gauge''), which gives us:
<math>\varphi (\mathbf{r},t)=0 </math>
<math>\nabla \cdot \mathbf{A}=0</math>
In this gauge the electromagnetic fields are given by:
<math>\mathbf{E}(\mathbf{r},t)=-\frac{1}{c}\frac{\partial \mathbf{A} }{\partial t}</math>
<math>\mathbf{B}(\mathbf{r},t)=\nabla \times \mathbf{A}</math>
The energy in this radiation is
<math>\varepsilon = \frac{1}{8\pi} \int d^{3} r (\mathbf{E}^{2}+\mathbf{B}^{2})</math>
The rate and direction of energy transfer are given by the Poynting vector
<math>\mathbf{P} = \frac{c}{4\pi} \mathbf{E} \times \mathbf{B}  </math>
The radiation generated by a classical current satisfies
<math>\Box \mathbf{A} = -\frac{4\pi}{c} \mathbf{j}</math>
where <math>\Box</math> is the [http://en.wikipedia.org/wiki/D'Alembert_operator d'Alembert operator]. Solutions in the region where <math>\mathbf{j}=0</math> are given by
<math>\mathbf{A}(\mathbf{r},t) = \alpha \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\alpha^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}} </math>
where <math>\omega=c|\mathbf{k}|</math> and <math>\boldsymbol{\lambda}\cdot \mathbf{k}=0 </math>, as we are considering EM waves in vacuum. The <math>\boldsymbol{\lambda}</math> and <math>\boldsymbol{\lambda^*}</math> are the two general polarization vectors, perpendicular to <math>\mathbf{k}</math>. Note that, in general,
<math> \hat{\mathbf{k}}\times\hat{\boldsymbol{\lambda}} = \hat{\boldsymbol{\lambda^*}}; \hat{\boldsymbol{\lambda}}\times\hat{\boldsymbol{\lambda^*}} = \hat{\mathbf{k}}; \hat{\boldsymbol{\lambda^*}}\times\hat{\mathbf{k}} = \hat{\boldsymbol{\lambda}} </math>
Here the plane waves are normalized with respect to some volume <math>V</math>. This is just for convenience; the physics won't change. Note that <math>\boldsymbol{\lambda}\cdot\boldsymbol{\lambda}^{*}=1</math>, as the polarization vectors are unit vectors, and that, written this way, <math>\mathbf{A}</math> is a real vector.
Let's compute <math>\varepsilon</math>. For this, we first need <math>\mathbf{E}^{2}</math>:
<math>
\begin{align}
\mathbf{E}(\mathbf{r},t) & =-\frac{1}{c}\frac{\partial \mathbf{A} }{\partial t} \\
& =-\frac{1}{c\sqrt{V}}\frac{\partial}{\partial t}\left[\alpha \boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}+\alpha^{*} \boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\
& =-\frac{i\omega}{c\sqrt{V}}\left[-\alpha \boldsymbol{\lambda} e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}+\alpha^{*} \boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\
\mathbf{E}^{2}(\mathbf{r},t) & = \frac{\omega^{2}}{c^{2}V}\left[\alpha\alpha^{*} \boldsymbol{\lambda}\cdot\boldsymbol{\lambda}^{*} -  \alpha\alpha \boldsymbol{\lambda}\cdot\boldsymbol{\lambda} e^{2i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*}\alpha^{*}\boldsymbol{\lambda}^{*}\cdot\boldsymbol{\lambda}^{*} e^{-2i(\mathbf{k}\cdot\mathbf{r}-\omega t)} + \alpha^{*}\alpha\boldsymbol{\lambda}\cdot\boldsymbol{\lambda}^{*}\right] \\
\end{align}
</math>
Taking the average, the oscillating terms will disappear. Then we have
<math>
\begin{align}
\mathbf{E}^{2}(\mathbf{r}) & = \frac{\omega^{2}}{c^{2}V}\left[\alpha\alpha^{*}+\alpha^{*}\alpha\right] \\
&=2\frac{\omega^{2}}{c^{2}V}|\alpha|^2 \\
\end{align}
</math>
It is well known that for plane waves  <math>\mathbf{B}=\mathbf{n}\times \mathbf{E} </math>, where <math>\mathbf{n}</math> is the direction of <math>\mathbf{k}</math>. This clearly shows that <math>\mathbf{B}^{2}=\mathbf{E}^{2}</math>. However, let's see this explicitly:
<math>
\begin{align}
\mathbf{B}(\mathbf{r},t) & =\nabla \times\mathbf{A}\\
& =\nabla \times \left[\alpha \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\alpha^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right] \\
\end{align}
</math>
Each component is given by
<math>
\begin{align}
\mathbf{B}_{i}(\mathbf{r},t)& =\frac{1}{{\sqrt{V}}}\left[\alpha \varepsilon _{ijk}\partial_{j} \left(\boldsymbol{\lambda}_{k}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right)+\alpha^{*} \varepsilon _{ijk}\partial_{j} \left(\boldsymbol{\lambda}^{*}_{k}e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right)\right] \\
& =\frac{i}{{\sqrt{V}}}\left[\alpha \varepsilon _{ijk}\mathbf{k}_{j} \boldsymbol{\lambda}_{k}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \varepsilon _{ijk}\mathbf{k}_{j} \boldsymbol{\lambda}^{*}_{k}e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\
\end{align}
</math>
Then
<math>
\begin{align}
\mathbf{B}(\mathbf{r},t) & =\frac{i}{{\sqrt{V}}}\left[\alpha \mathbf{k}\times\boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \mathbf{k}\times\boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \\
\mathbf{B}^{2}(\mathbf{r},t) & =\frac{1}{{V}}\left[\alpha \mathbf{k}\times\boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \mathbf{k}\times\boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right] \left[\alpha \mathbf{k}\times\boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \mathbf{k}\times\boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right]^{*} \\
& =\frac{1}{{V}}\left[\alpha\alpha^{*} \left(\mathbf{k}\times\boldsymbol{\lambda}\right)\cdot\left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right) -\alpha \alpha\left(\mathbf{k}\times\boldsymbol{\lambda}\right)\cdot\left(\mathbf{k}\times\boldsymbol{\lambda}\right) e^{2i(\mathbf{k}\cdot\mathbf{r}-\omega t)}-\alpha^{*} \alpha^{*} \left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right)\cdot\left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right) e^{-2i(\mathbf{k}\cdot\mathbf{r}-\omega t)} + \alpha^{*} \alpha \left(\mathbf{k}\times\boldsymbol{\lambda}^{*}\right)\cdot \left(\mathbf{k}\times\boldsymbol{\lambda}\right) \right] \\
\end{align}
</math>
Again, taking the average the oscillating terms vanish. Then we have
<math>
\begin{align}
\mathbf{B}^{2}(\mathbf{r}) & =\frac{1}{{V}}\left[\alpha \alpha^{*}+\alpha^{*} \alpha\right](\mathbf{k}\times\boldsymbol{\lambda})\cdot(\mathbf{k}\times\boldsymbol{\lambda}^{*}) \\
& =\frac{1}{{V}}\left[\alpha \alpha^{*}+\alpha^{*} \alpha\right][\mathbf{k}^{2}(\boldsymbol{\lambda}\cdot\boldsymbol{\lambda^{*}})-(\mathbf{k}\cdot\boldsymbol{\lambda^{*}})(\mathbf{k}\cdot\boldsymbol{\lambda})] \\
& =\frac{2}{{V}}|\alpha|^{2}\mathbf{k}^{2}\\
&=2\frac{\omega^{2}}{c^{2}V}|\alpha|^2 \\
&= \mathbf{E}^{2}(\mathbf{r},t)\\
\end{align}
</math>
Finally the energy of this radiation is given by 
<math>\begin{align}
\varepsilon &= \frac{1}{8\pi} \int d^{3}r (\mathbf{E}^{2}+\mathbf{B}^{2}) \\
&=\frac{1}{4\pi} \int d^{3}r\; \mathbf{E}^{2}\\
&=\frac{1}{4\pi} \int d^{3}r \left(2\frac{\omega^{2}}{c^{2}V}|\alpha|^2\right)\\
&=\frac{\omega^{2}}{2\pi c^{2}}|\alpha|^2\\
\end{align}</math>
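The time-averaging step can be checked numerically. The sketch below uses illustrative values in units with <math>c = V = 1</math>, evaluated at <math>\mathbf{r} = 0</math>, and confirms that the mean of <math>\mathbf{E}^{2}</math> over one period equals <math>2\omega^{2}|\alpha|^{2}</math>:

```python
import numpy as np

# Check of the averaging step (units with c = V = 1, evaluated at r = 0):
# for A(t) = alpha e^{-i w t} + alpha* e^{+i w t}, the field E = -dA/dt
# has period-averaged E^2 equal to 2 w^2 |alpha|^2, as derived above.
w_freq = 1.3                 # illustrative frequency
alpha = 0.8 + 0.5j           # illustrative complex amplitude

t = np.linspace(0.0, 2 * np.pi / w_freq, 200_001)
A = 2 * (alpha * np.exp(-1j * w_freq * t)).real   # alpha e^{-iwt} + c.c.
E = -np.gradient(A, t)                            # E = -(1/c) dA/dt
E2_avg = np.mean(E[:-1]**2)                       # average over one period
print(E2_avg, 2 * w_freq**2 * abs(alpha)**2)
```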
So far, we have treated the potential <math>\mathbf{A}(\mathbf{r},t)</math> as a combination of two waves with the same frequency. Now let's extend the discussion to any form of <math>\mathbf{A}(\mathbf{r},t)</math>. To do this, we can sum <math>\mathbf{A}(\mathbf{r},t)</math> over all values of <math>\mathbf{k}</math> and <math>\boldsymbol{\lambda}</math>:
<math>\begin{align}
\mathbf{A}(\mathbf{r},t)=\sum_{\mathbf{k}\boldsymbol{\lambda}} \left[A_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}      \right]\\
\end{align}</math>
To calculate the energy, we use the fact that any oscillating time-dependent term averages to zero. Therefore, in the previous sum, all cross terms with different <math>\mathbf{k}</math> vanish. Then it is clear that
<math>
\begin{align}
\mathbf{E}^{2}(\mathbf{r}) & = \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\omega^{2}}{c^{2}V}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\
\mathbf{B}^{2}(\mathbf{r}) & = \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\mathbf{k}^2}{V}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\
\end{align}
</math>
Then, the energy is given by
<math>\begin{align}
\varepsilon &= \frac{1}{8\pi} \int d^{3}r (\mathbf{E}^{2}+\mathbf{B}^{2}) \\
&=\frac{1}{4\pi} \int d^{3}r\; \mathbf{E}^{2}\\
&=\frac{1}{4\pi} \int d^{3}r \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\omega^{2}}{c^{2}V}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\
&=\frac{1}{4\pi} \sum_{\mathbf{k}\boldsymbol{\lambda}}\frac{\omega^{2}}{c^{2}}\left[A_{\mathbf{k}\boldsymbol{\lambda}}A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}A_{\mathbf{k}\boldsymbol{\lambda}}\right] \\
\end{align}</math>
Let's define the following quantities:
<math>\begin{align}
Q_{\mathbf{k}\boldsymbol{\lambda}}&=\frac{1}{\sqrt{4\pi}c}(A_{\mathbf{k}\boldsymbol{\lambda}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*})\\
P_{\mathbf{k}\boldsymbol{\lambda}}&=\frac{-i\omega}{\sqrt{4\pi}c}(A_{\mathbf{k}\boldsymbol{\lambda}}-A_{\mathbf{k}\boldsymbol{\lambda}}^{*})\\
\end{align}</math>
Notice that
<math>\begin{align}
\omega^{2} Q_{\mathbf{k}\boldsymbol{\lambda}}^{2}&=\frac{\omega^{2}}{4\pi c^{2}}(A_{\mathbf{k}\boldsymbol{\lambda}}^{2}+A_{\mathbf{k}\boldsymbol{\lambda}}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*2})\\
P_{\mathbf{k}\boldsymbol{\lambda}}^{2}&=\frac{-\omega^{2}}{4\pi c^{2}}(A_{\mathbf{k}\boldsymbol{\lambda}}^{2}-A_{\mathbf{k}\boldsymbol{\lambda}}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}^{*}-A_{\mathbf{k}\boldsymbol{\lambda}}^{*}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*2})\\
\end{align}</math>
Adding
<math>\begin{align}
P_{\mathbf{k}\boldsymbol{\lambda}}^{2}+\omega^{2} Q_{\mathbf{k}\boldsymbol{\lambda}}^{2}&=\frac{\omega^{2}}{2\pi c^{2}}(A_{\mathbf{k}\boldsymbol{\lambda}}\cdot A_{\mathbf{k}\boldsymbol{\lambda}}^{*}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*}\cdot A_{\mathbf{k}\boldsymbol{\lambda}})\\
\end{align}</math>
Then the energy (in this case the Hamiltonian) can be written as
<math>\begin{align}
H=\frac{1}{2}\sum_{\mathbf{k}\boldsymbol{\lambda}} [P_{\mathbf{k}\boldsymbol{\lambda}}^{2}+\omega^{2} Q_{\mathbf{k}\boldsymbol{\lambda}}^{2}]
\end{align}</math>
This has the same form as the familiar Hamiltonian for a harmonic oscillator.
Note that,
<math>\begin{align}
\frac{\partial H_{cl}}{\partial Q_{k, \lambda}} &= - \dot{P}_{k, \lambda} \\
\frac{\partial H_{cl}}{\partial P_{k, \lambda}} &= \dot{Q}_{k, \lambda}
\end{align}</math>
The newly defined variables <math>P_{k, \lambda}</math> and <math>Q_{k, \lambda}</math> are canonically conjugate.
We see that the classical radiation field behaves as a collection of harmonic oscillators, indexed by <math>\mathbf{k}</math> and <math>\boldsymbol{\lambda}</math>, whose frequencies depend on <math>|\mathbf{k}|</math>.
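As a quick sanity check (not part of the original derivation), the single-mode identity <math>P^{2}+\omega^{2} Q^{2}=\frac{\omega^{2}}{2\pi c^{2}}(A A^{*}+A^{*}A)</math> can be verified numerically; the values of <math>c</math>, <math>\omega</math>, and <math>A</math> below are arbitrary illustrative choices:

```python
import math
import random

# Check P^2 + omega^2 Q^2 = (omega^2 / (2 pi c^2)) * (A A* + A* A)
# with Q = (A + A*) / (sqrt(4 pi) c),  P = -i omega (A - A*) / (sqrt(4 pi) c).
# c, omega, A are arbitrary illustrative values (units are not physical).
random.seed(1)
c, omega = 1.0, 2.7
A = complex(random.uniform(-1, 1), random.uniform(-1, 1))

# (A + A*) is real and -i*(A - A*) is real, so Q and P are real numbers
Q = ((A + A.conjugate()) / (math.sqrt(4 * math.pi) * c)).real
P = (-1j * omega * (A - A.conjugate()) / (math.sqrt(4 * math.pi) * c)).real

lhs = P**2 + omega**2 * Q**2
rhs = omega**2 / (2 * math.pi * c**2) * 2 * abs(A)**2   # A A* + A* A = 2|A|^2

assert math.isclose(lhs, rhs, rel_tol=1e-12)
```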
===='''<span style="color:#2B65EC">From classical mechanics to quantum mechanics for radiation</span>'''====
As usual we proceed to do the canonical quantization:
<math>\begin{align}
P_{\mathbf{k}\boldsymbol{\lambda}} & \to \mathbf{P}_{\mathbf{k}\boldsymbol{\lambda}}\\
Q_{\mathbf{k}\boldsymbol{\lambda}} & \to \mathbf{Q}_{\mathbf{k}\boldsymbol{\lambda}}\\
\end{align}</math>
<math>\begin{align}
A_{\mathbf{k}\boldsymbol{\lambda}} & \to \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\;\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\; , \; [\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}},\mathbf{a}^{\dagger}_{\mathbf{k'}\boldsymbol{\lambda'}}]=\delta_{\mathbf{kk'}}\delta_{\boldsymbol{\lambda \lambda'}}\\
\end{align}</math>
where the latter are quantum operators. The Hamiltonian can be written as
<math>\begin{align}
\mathbf{H}_{radiation}&=\sum_{\mathbf{k}\boldsymbol{\lambda}}\hbar \omega_{\mathbf{k}}(\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} \mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}+\frac{1}{2})\\
&=\frac{1}{2}\sum_{\mathbf{k}\boldsymbol{\lambda}}\hbar \omega_{\mathbf{k}}(\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} \mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}+\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}} \mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}})\\
\end{align}</math>
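As a quick illustration (the truncated basis and parameter values are ours, not from the text), one can represent a single mode's <math>\mathbf{a}</math> and <math>\mathbf{a}^{\dagger}</math> as matrices in the number basis and verify that the Hamiltonian above has the oscillator spectrum <math>\hbar\omega_{\mathbf{k}}(n+\tfrac{1}{2})</math>:

```python
import numpy as np

# Single mode, truncated number basis |0>, ..., |N-1>; hbar, omega illustrative.
N = 8
hbar, omega = 1.0, 2.5
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator matrix
adag = a.conj().T                            # creation operator

# [a, a†] = 1 holds exactly except in the last diagonal entry (truncation artifact)
comm = a @ adag - adag @ a
assert np.allclose(np.diag(comm)[:-1], 1.0)

# H = hbar*omega*(a† a + 1/2) has eigenvalues hbar*omega*(n + 1/2)
H = hbar * omega * (adag @ a + 0.5 * np.eye(N))
energies = np.sort(np.linalg.eigvalsh(H))
expected = hbar * omega * (np.arange(N) + 0.5)
assert np.allclose(energies, expected)
```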
The classical potential can be written as 
<math>
\underbrace{\mathbf{A}(\mathbf{r},t)=\sum_{\mathbf{k}\boldsymbol{\lambda}} \left[A_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+A_{\mathbf{k}\boldsymbol{\lambda}}^{*} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]}_\textrm{Classical Vector potential}\;\;\;\longrightarrow\;\;\; \underbrace{\mathbf{A}_{int}(\mathbf{r},t)=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]}_\textrm{Quantum Operator}
</math>
Notice that the quantum operator is time dependent. Therefore we can identify it as the field operator in the interaction representation (which is the reason for the label ''int''). Let's find the Schrödinger representation of the field operator:
<math>\begin{align}
\mathbf{A}(\mathbf{r})&=e^{-\frac{i}{\hbar}\mathbf{H}_{rad}t}\mathbf{A}_{int}(\mathbf{r},t)e^{\frac{i}{\hbar}\mathbf{H}_{rad}t}\\
&=e^{-\frac{i}{\hbar}\mathbf{H}_{rad}t}\left[\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]\right]e^{\frac{i}{\hbar}\mathbf{H}_{rad}t}\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\left[e^{-\frac{i}{\hbar}\mathbf{H}_{rad}t} \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}e^{\frac{i}{\hbar}\mathbf{H}_{rad}t}\right] \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\left[ e^{-\frac{i}{\hbar}\mathbf{H}_{rad}t}\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} e^{\frac{i}{\hbar}\mathbf{H}_{rad}t}\right] \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}e^{i\omega t}\right] \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\left[ \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} e^{-i\omega t}\right] \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{V}}\right]\\
\end{align}</math>
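The rotation property used in the third line, <math>e^{-\frac{i}{\hbar}\mathbf{H}_{rad}t}\,\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\,e^{\frac{i}{\hbar}\mathbf{H}_{rad}t}=\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}e^{i\omega t}</math>, can be checked numerically in a truncated number basis (the mode frequency, time, and basis size below are illustrative):

```python
import numpy as np

# One mode, truncated number basis; hbar, omega, t are illustrative values.
N, hbar, omega, t = 10, 1.0, 1.7, 0.4
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator

# H = hbar*omega*(n + 1/2) is diagonal in the number basis, so
# exp(-i H t / hbar) is a diagonal phase matrix.
E = hbar * omega * (np.arange(N) + 0.5)
U = np.diag(np.exp(-1j * E * t / hbar))

# exp(-i H t/hbar) a exp(+i H t/hbar) = a * exp(+i omega t)
rotated = U @ a @ U.conj().T
assert np.allclose(rotated, a * np.exp(1j * omega * t))
```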
'''COMMENTS'''
<ul>
<li>The meaning of <math>\mathbf{H}_{radiation}</math> is the following: the classical electromagnetic field is quantized. This '''quantum field''' exists even in the absence of sources. This means that the '''vacuum''' is a physical object which can interact with matter. In classical mechanics this does not occur, because fields are created by sources.
<li>Due to this, the vacuum has to be treated as a quantum dynamical object. Therefore we can assign a quantum state to this object.
<li>An excitation of this quantum field is called a '''photon''' (the quantum of the electromagnetic field).
</ul>
'''ANALYSIS OF THE VACUUM AT GROUND STATE '''
Let's call <math>|0\rangle</math> the ground state of the vacuum. The following can be stated:
<ul>
<li>The energy of the ground state is '''infinite'''. To see this, notice that for the ground state we have
<math>\begin{align}
\langle 0|\mathbf{H}_{radiation}|0\rangle&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \frac{1}{2} \hbar \omega_{\mathbf{k}}=\infty
\end{align}</math>
<li>The state <math>\;\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle</math> represents an excited state of the vacuum with energy <math>\hbar \omega_{\mathbf{k}}(1+1/2)</math>. This means that the extra energy <math>\hbar \omega_{\mathbf{k}}</math> is carried by a single photon. Therefore <math>\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}</math> represents the creation operator of a single photon with energy <math>\hbar \omega_{\mathbf{k}}</math>. By the same reasoning, <math>\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}</math> represents the annihilation operator of a single photon.
<li>Consider the following normalized state of the vacuum:
<math>\frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle</math>. At first glance we may think that <math>\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}</math> creates a single photon with energy <math>2\hbar \omega_{\mathbf{k}}</math>. However, this interpretation is forbidden in our model. Instead, this operator creates two photons, each of them carrying the energy <math>\hbar \omega_{\mathbf{k}}</math>.
<p></p>
<p>'''Proof'''</p>
<p>Suppose that <math>\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}</math> creates a single photon with energy <math>2\hbar \omega_{\mathbf{k}}</math>. Then we could find an operator <math>\mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}</math> which creates a photon with the same energy <math>2\hbar \omega_{\mathbf{k}}</math>. This means that</p>
<math>
\frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle\overset{\underset{\mathrm{?}}{}}{=} \mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}|0\rangle \;\;\;\longrightarrow\;\;\;\frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}} \overset{\underset{\mathrm{?}}{}}{=} \mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\;\;\;\longrightarrow\;\;\;\frac{1}{\sqrt{2}}\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}} \overset{\underset{\mathrm{?}}{}}{=} \mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}}
</math>
<p>Let's see if this works. Using the commutation relations, we have</p>
<math>
\left[ \underbrace{\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}_{\mathbf{k} \boldsymbol{\lambda}}},\mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\right]=0
</math>
<p>Replace the highlighted part by <math>\mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}}</math>  </p>
<math>
\left[\mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}},\mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\right]=0
</math>
Since <math>\left[\mathbf{a}_{\mathbf{k'} \boldsymbol{\lambda}},\mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}\right]=1</math>, the initial assumption is wrong, namely:
<math>
\frac{1}{\sqrt{2}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}|0\rangle \ne \mathbf{a}^{\dagger}_{\mathbf{k'} \boldsymbol{\lambda}}|0\rangle
</math>
This means that <math>\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}\mathbf{a}^{\dagger}_{\mathbf{k} \boldsymbol{\lambda}}</math> cannot create a single photon with energy <math>2\hbar \omega_{\mathbf{k}}</math>. Instead, it will create two photons, each of them with energy <math>\hbar \omega_{\mathbf{k}}</math>. <math>\blacksquare</math>
</ul>
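The two-photon interpretation proved above can also be seen numerically in a truncated single-mode basis (the basis size is an illustrative choice): the normalized state <math>\tfrac{1}{\sqrt{2}}\mathbf{a}^{\dagger}\mathbf{a}^{\dagger}|0\rangle</math> is an eigenstate of the number operator with eigenvalue 2, not a one-photon state of doubled energy.

```python
import numpy as np

# Truncated single-mode number basis |0>, ..., |N-1>; N is illustrative.
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T
vac = np.zeros(N)
vac[0] = 1.0                                  # vacuum state |0>

# (1/sqrt(2)) a† a† |0> is correctly normalized ...
state = (adag @ adag @ vac) / np.sqrt(2)
assert np.isclose(np.linalg.norm(state), 1.0)

# ... and contains exactly TWO photons (number-operator expectation = 2)
num = adag @ a
assert np.isclose(state @ num @ state, 2.0)
```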
'''ALGEBRA OF VACUUM STATES'''
A general vacuum state can be written as
<math>
|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}};n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}};...;n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}};...\rangle
</math>
where <math>n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}</math> is the number of photons in the mode <math>\mathbf{k_{i}} \boldsymbol{\lambda_{i}}</math>. Using our knowledge of the harmonic oscillator, we conclude that this state can be written as
<math>
|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}};n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}};...;n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}};...\rangle=\prod_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}}\frac{(\mathbf{a}^{\dagger}_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}})^{n_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}}}}{\sqrt{n_{\mathbf{k_{j}} \boldsymbol{\lambda_{j}}}!}}|0\rangle
</math>
Also it is clear that
<math>
\mathbf{a}^{\dagger}_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}};n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}};...;n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}};...\rangle=\sqrt{n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}+1}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}};n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}};...;n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}+1;...\rangle
</math>
<math>
\mathbf{a}_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}};n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}};...;n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}};...\rangle=\sqrt{n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}}|n_{\mathbf{k_{1}} \boldsymbol{\lambda_{1}}};n_{\mathbf{k_{2}} \boldsymbol{\lambda_{2}}};...;n_{\mathbf{k_{i}} \boldsymbol{\lambda_{i}}}-1;...\rangle
</math>
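The ladder relations above can be verified directly with the same matrix representation (a single-mode sketch; the basis truncation is an illustrative choice):

```python
import numpy as np

# Verify  a† |n> = sqrt(n+1) |n+1>  and  a |n> = sqrt(n) |n-1>
# in a truncated number basis (one mode; N is illustrative).
N = 7
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

def ket(n):
    """Number eigenstate |n> as a basis vector."""
    v = np.zeros(N)
    v[n] = 1.0
    return v

n = 3
assert np.allclose(adag @ ket(n), np.sqrt(n + 1) * ket(n + 1))
assert np.allclose(a @ ket(n), np.sqrt(n) * ket(n - 1))
assert np.allclose(a @ ket(0), np.zeros(N))   # a annihilates the vacuum
```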
=== '''<span style="color:#2B65EC">Matter + Radiation</span>''' ===
==== '''<span style="color:#2B65EC">Hamiltonian of Single Particle in Presence of Radiation (Gauge Invariance)</span>''' ====
The Hamiltonian of a single charged particle in the presence of E&M potentials is given by
:<math>
\mathcal{H}=\frac{\left[\vec{p}-\frac{e}{c}\vec{A}(\vec{r},t)\right]^{2}}{2m}+e\phi (\vec{r},t) + V(\vec{r},t),
</math>
where the vector potential in the first term and the scalar potential in the second term describe the external electromagnetic interaction, while the third term represents internal interactions.
The time dependent Schrödinger equation is
:<math>
i\hbar \frac{\partial\psi (\vec{r},t)}{\partial t}=\left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}(\vec{r},t)\right]^{2}}{2m}+e\phi (\vec{r},t) + V(\vec{r},t)
\right]\psi(\vec{r},t)
</math>
Since a gauge transformation,
:<math>
A'_{\mu}=A_{\mu}-\partial_{\mu} \chi ,
</math>
leaves the E&M fields invariant, we expect that <math>|\psi|^{2} \!</math>, which is an observable, is also gauge independent. Since <math>|\psi|^{2} \!</math> is independent of the phase choice, we can relate this phase to the E&M gauge transformation. In other words, the phase transformation combined with the E&M gauge transformation must leave the Schrödinger equation invariant. This phase transformation is given by:
:<math>
\psi'(\vec{r},t)=e^{i\frac{e}{\hbar c}\chi(\vec{r},t)}\psi(\vec{r},t)
</math>
Let's see this in detail. We want to see if:
:<math>
\begin{align}
i\hbar \frac{\partial\psi' (\vec{r},t)}{\partial t}
& =\left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}'(\vec{r},t)\right]^{2}}{2m}+e\phi '(\vec{r},t) + V(\vec{r},t)
\right]\psi'(\vec{r},t)  \\
& = \left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}(\vec{r},t)\right]^{2}}{2m}+e\phi (\vec{r},t) + V(\vec{r},t)
\right]\psi(\vec{r},t) 
= i\hbar \frac{\partial\psi(\vec{r},t)}{\partial t}
\end{align}
</math>
Let's put the transformations:
:<math>\begin{align}
\psi'(\vec{r},t)&=e^{i\frac{e}{\hbar c}\chi(\vec{r},t)}\psi(\vec{r},t) \\
\vec{A}'(\vec{r},t)&=\vec{A}(\vec{r},t)+\vec{\nabla} \chi(\vec{r},t) \\
\phi'(\vec{r},t)&=\phi(\vec{r},t)-\frac{1}{c}\frac{\partial\chi(\vec{r},t)  }{\partial t}
\end{align}</math>
Replacing
:<math>\begin{align}
i\hbar \left[\frac{ie}{\hbar c} \frac{\partial \chi}{\partial t} e^{i\frac{e}{\hbar c}\chi}\psi + e^{i\frac{e}{\hbar c}\chi} \frac{\partial \psi}{\partial t} \right] &=
\left[\frac{\left[\vec{p}-\frac{e}{c}\vec{A}'\right]^{2}}{2m}+e\phi -\frac{e}{c} \frac{\partial \chi}{\partial t} + V \right]e^{i\frac{e}{\hbar c}\chi}\psi\\
i\hbar e^{i\frac{e}{\hbar c}\chi} \frac{\partial \psi}{\partial t} &=
\left[\frac{\left[\vec{p}-\frac{e}{c} \vec{A}'\right]^{2}}{2m}+e\phi + V \right]e^{i\frac{e}{\hbar c}\chi}\psi\\
i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]^{2}e^{i\frac{e}{\hbar c}\chi} +e\phi + V \right]\psi\\
i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]e^{i\frac{e}{\hbar c}\chi}e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]e^{i\frac{e}{\hbar c}\chi} +e\phi + V \right]\psi\\
i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(e^{-i\frac{e}{\hbar c}\chi}\left[\vec{p}-\frac{e}{c}\vec{A}'\right]e^{i\frac{e}{\hbar c}\chi}\right) ^{2} +e\phi + V \right]\psi\\
i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(e^{-i\frac{e}{\hbar c}\chi}\left[\frac{\hbar}{i}\vec{\nabla}-\frac{e}{c}\vec{A}-\frac{e}{c}\vec{\nabla} \chi\right]e^{i\frac{e}{\hbar c}\chi}\right) ^{2} +e\phi + V \right]\psi\\
i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(e^{-i\frac{e}{\hbar c}\chi}e^{i\frac{e}{\hbar c}\chi}\left[\frac{\hbar}{i} \frac{ie}{\hbar c}\nabla \chi + \frac{\hbar}{i}\vec{\nabla}-\frac{e}{c}\vec{A}-\frac{e}{c}\vec{\nabla} \chi\right]\right) ^{2} +e\phi + V \right]\psi\\
i\hbar \frac{\partial \psi}{\partial t} &=
\left[\frac{1}{2m} \left(\frac{\hbar}{i}\vec{\nabla}-\frac{e}{c}\vec{A} \right) ^{2} +e\phi + V \right]\psi\;\;\;\blacksquare\\
\end{align}</math>
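The key operator identity used in the proof above, namely that the gauge phase shifts the momentum operator by <math>\frac{e}{c}\vec{\nabla}\chi</math>, can be spot-checked numerically in one dimension (the functions <math>\chi</math>, <math>\psi</math> and all constants below are arbitrary illustrative choices):

```python
import numpy as np

# Check, at a sample point x0 with central differences, that
#   exp(-i theta) (hbar/i) d/dx [exp(i theta) psi] = (hbar/i) psi' + (e/c) chi' psi
# where theta(x) = (e / (hbar c)) * chi(x).
# hbar, e, c and the functions chi, psi are illustrative choices.
hbar, e, c = 1.0, 2.0, 137.0
chi = np.sin                         # arbitrary gauge function; chi'(x) = cos(x)
psi = lambda x: np.exp(-x**2)        # arbitrary wavefunction

def ddx(f, x, h=1e-6):
    """Central-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.7
theta = lambda x: e * chi(x) / (hbar * c)
phased = lambda x: np.exp(1j * theta(x)) * psi(x)

lhs = np.exp(-1j * theta(x0)) * (hbar / 1j) * ddx(phased, x0)
rhs = (hbar / 1j) * ddx(psi, x0) + (e / c) * np.cos(x0) * psi(x0)

assert abs(lhs - rhs) < 1e-6
```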
We recover the original, unprimed Schrödinger equation, as required. Finally, let's write the Hamiltonian in the following way:
:<math>
\mathcal{H}=\underbrace{\frac{\vec{p}^2}{2m}+V}_{\mathcal{H}_{0}} \underbrace{-\frac{e}{2mc}\left(\vec{p}\cdot\vec{A}+ \vec{A}\cdot\vec{p} \right)+\frac{e^{2}}{2mc^{2}}A^{2}+e\phi}_{\mathcal{H}_{int}}
</math>
where <math>\mathcal{H}_{0}</math> is the Hamiltonian without external fields (say, that of a hydrogen atom) and <math>\mathcal{H}_{int}</math> is the interaction with the radiation.
Example: [http://wiki.physics.fsu.edu/wiki/index.php/Electron_on_Helium_Surface electron on helium surface]
==== '''<span style="color:#2B65EC">Hamiltonian of Multiple Particles in Presence of Radiation</span>''' ====
If we have a system of <math> N \!</math> particles we have the following Hamiltonian
:<math>
\mathcal{H}=\sum_{i=1}^N \frac{\left[\vec{p}_{i}-\frac{e_{i}}{c}\vec{A}(\vec{r}_{i},t)\right]^{2}}{2m_{i}} +\sum_{i=1}^N e_{i}\phi(\vec{r}_{i},t) + V(\vec{r}_{1}...\vec{r}_{N})
</math>
(Where <math> e_i</math> and <math> m_i </math> are the charge and the mass of the i-th particle respectively and <math> \vec{r}_i </math> and <math> \vec{p}_i </math> are its coordinate and momentum operators.)
Let's assume that all particles have the same mass and the same charge. Then we have
:<math>\begin{align}
\mathcal{H}&=\sum_{i=1}^N \left[\frac{\vec{p}_{i}^{2}}{2m}-\frac{e}{2mc}\left(\vec{p}_{i} \cdot \vec{A}(\vec{r}_{i},t)+\vec{A}(\vec{r}_{i},t) \cdot \vec{p}_{i} \right) + \frac{e^{2}}{2mc^{2}} \vec{A}(\vec{r}_{i},t)^{2}\right]
+e\sum_{i=1}^N \phi(\vec{r}_{i},t) + V(\vec{r}_{1}...\vec{r}_{N})\\
&=\underbrace{\sum_{i=1}^N \frac{\vec{p}_{i}^{2}}{2m} + V(\vec{r}_{1}...\vec{r}_{N})}_{\mathcal{H}_{0}} \\
&{\;\;\;\;}\underbrace{+\sum_{i=1}^N -\frac{e}{2mc}\left(\vec{p}_{i} \cdot \vec{A}(\vec{r}_{i},t)+\vec{A}(\vec{r}_{i},t)\cdot \vec{p}_{i}  \right)
+\sum_{i=1}^N \frac{e^{2}}{2mc^{2}} \vec{A}(\vec{r}_{i},t)^{2}
+e\sum_{i=1}^N \phi(\vec{r}_{i},t)}_{\mathcal{H}_{int}}
\end{align}</math>
Using delta function operator <math>\delta (\vec{r}-\vec{r}_{i})</math> we can write
:<math>\begin{align}
\vec{A}(\vec{r}_{i},t)&=\int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \vec{A}(\vec{r},t)\\
\phi(\vec{r}_{i},t)&=\int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \phi(\vec{r},t)\\
\end{align}</math>
Then
:<math>\begin{align}
\mathcal{H}&=\mathcal{H}_{0}
+\sum_{i=1}^N -\frac{e}{2mc}\left(\vec{p}_{i} \cdot \int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \vec{A}(\vec{r},t)+\int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \vec{A}(\vec{r},t) \cdot \vec{p}_{i} \right)\\
&\;\;\;\;\;\;\;\;\;+\sum_{i=1}^N \frac{e^{2}}{2mc^{2}} \int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) A(\vec{r},t)^{2}
+e\sum_{i=1}^N \int d^{3}{r}\; \delta (\vec{r}-\vec{r}_{i}) \phi(\vec{r},t)\\
&=\mathcal{H}_{0}
-\int d^{3}{r}\;\frac{e}{c}\underbrace{\left[ \frac{1}{2}\sum_{i=1}^N \left[\frac{\vec{p}_{i}}{m} \delta (\vec{r}-\vec{r}_{i})+\delta (\vec{r}-\vec{r}_{i}) \frac{\vec{p}_{i}}{m} \right]\right]}_{\vec{j}(\vec{r})} \vec{A}(\vec{r},t)\\
&\;\;\;\;\;\;\;\;\;+\int d^{3}{r}\; \frac{e^{2}}{2mc^{2}} \underbrace{\left[ \sum_{i=1}^N \  \delta (\vec{r}-\vec{r}_{i}) \right]}_{\rho (\vec{r})} A(\vec{r},t)^{2}
+e\int d^{3}{r}\; \underbrace{\left[\sum_{i=1}^N  \delta (\vec{r}-\vec{r}_{i}) \right]}_{\rho (\vec{r})} \phi(\vec{r},t)\\
&=\mathcal{H}_{0}
+\underbrace{\int d^{3}{r}\; \left[-\frac{e}{c} \vec{j}(\vec{r})\cdot \vec{A}(\vec{r},t)+\frac{e^{2}}{2mc^{2}} \rho (\vec{r}) A(\vec{r},t)^{2}
+e\rho (\vec{r})\phi(\vec{r},t)\right]}_{\mathcal{H}_{int}}\\
&=\mathcal{H}_{0}+\mathcal{H}_{int}
\end{align}</math>
'''COMMENTS'''
<ul>
<li><math>\rho (\vec{r})=\sum_{i=1}^N \delta (\vec{r}-\vec{r}_{i})</math> can be interpreted as the particle density operator.
<li><math>\vec{j}(\vec{r})</math> is called the paramagnetic current. It is just one piece of the total current <math> \vec{J}(\vec{r})</math>.
Explicitly we have
:<math>\begin{align}
\vec{J}(\vec{r})&=\sum_{i=1}^N \frac{1}{2}\left[\vec{v}_{i}(\vec{p}_{i},\vec{r}_{i})\delta (\vec{r}-\vec{r}_{i}) + \delta (\vec{r}-\vec{r}_{i})\vec{v}_{i}(\vec{p}_{i},\vec{r}_{i}) \right]\;\;\;\leftarrow\;\;\;\vec{v}_{i}(\vec{p}_{i},\vec{r}_{i})=\frac{\vec{p}_{i}}{m}-\frac{e}{mc}\vec{A}(\vec{r}_{i},t)\\
&=\sum_{i=1}^N \frac{1}{2}\left[\frac{\vec{p}_{i}}{m}\delta (\vec{r}-\vec{r}_{i}) + \delta (\vec{r}-\vec{r}_{i})\frac{\vec{p}_{i}}{m}-\frac{2e}{mc}  \vec{A}(\vec{r}_{i},t)\delta (\vec{r}-\vec{r}_{i})\right]\\
&=\vec{j}(\vec{r})-\frac{e}{mc}\sum_{i=1}^N  \vec{A}(\vec{r}_{i},t)\delta (\vec{r}-\vec{r}_{i})\;\;\;\leftarrow\;\;\;\vec{A}(\vec{r}_{i},t)\delta (\vec{r}-\vec{r}_{i})=\vec{A}(\vec{r},t)\delta (\vec{r}-\vec{r}_{i})\\
&=\underbrace{\vec{j}(\vec{r})}_{paramagnetic}\underbrace{-\frac{e}{mc}  \vec{A}(\vec{r},t) \rho (\vec{r})}_{diamagnetic}
\end{align}</math>
</ul>
==== '''<span style="color:#2B65EC">Light Absorption and Induced Emission</span>''' ====
Generally, for atomic fields, <math>\mathbf{j}(\mathbf{r})\cdot \mathbf{A}(\mathbf{r},t)\gg\rho \mathbf{A}^{2}</math>. Using the transverse gauge, we can approximate the interaction Hamiltonian as
:<math>
\mathbf{H}_{int}=
\int d^{3}\mathbf{r}\; \left[-\frac{e}{c} \mathbf{j}(\mathbf{r})\cdot \mathbf{A}(\mathbf{r},t)\right]
</math>
Let's write <math>\mathbf{A}(\mathbf{r},t)</math> using the Fourier expansion as described above:
:<math>\begin{align}
\mathbf{H}_{int}&=-
\int d^{3}\mathbf{r}\; \left[\frac{e}{c} \mathbf{j}(\mathbf{r}) \cdot \sum_{\mathbf{k}\boldsymbol{\lambda}} \sqrt{\frac{2\pi \hbar c^{2}}{\omega_{\mathbf{k}}}}\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}\frac{e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} \frac{e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}}{\sqrt{V}}\right]\right]\\
&=-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}}\int d^{3}\mathbf{r}\;  \mathbf{j}(\mathbf{r})\cdot \left[  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}} \boldsymbol{\lambda}e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger} \boldsymbol{\lambda}^{*} e^{-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\right]\\
&=-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}} \left[  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\underbrace{\left[\int d^{3}\mathbf{r}\;  \mathbf{j}(\mathbf{r})e^{i\mathbf{k}\cdot\mathbf{r}} \right]}_{\mathbf{j}_{-\mathbf{k}}}\cdot \boldsymbol{\lambda}e^{-i\omega t}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\underbrace{\left[\int d^{3}\mathbf{r}\;  \mathbf{j}(\mathbf{r})e^{-i\mathbf{k}\cdot\mathbf{r}} \right]}_{\mathbf{j}_{\mathbf{k}}}\cdot \boldsymbol{\lambda}^{*} e^{i\omega t}\right]\\
&=-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}} \left[  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}e^{-i\omega t}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*} e^{i\omega t}\right]\\
\end{align}</math>
Where
:<math>\begin{align}
\mathbf{j}_{\mp\mathbf{k}}
&=\int d^{3}\mathbf{r}\;  \mathbf{j}(\mathbf{r})e^{\pm i\mathbf{k}\cdot\mathbf{r}}\\
&=\int d^{3}\mathbf{r}\;  \frac{1}{2}\sum_{i}
\left[\frac{\boldsymbol{p_{i}}}{m}\delta(\boldsymbol{r}-\boldsymbol{r_{i}})+\delta(\boldsymbol{r}-\boldsymbol{r_{i}})\frac{\boldsymbol{p_{i}}}{m}\right]
e^{\pm i\mathbf{k}\cdot\mathbf{r}}\\
&=\frac{1}{2m} \sum_{i}
\left[\boldsymbol{p_{i}}\left(\int d^{3}\mathbf{r}\;\delta(\boldsymbol{r}-\boldsymbol{r_{i}})e^{\pm i\mathbf{k}\cdot\mathbf{r}}\right)+\left(\int d^{3}\mathbf{r}\;\delta(\boldsymbol{r}-\boldsymbol{r_{i}})e^{\pm i\mathbf{k}\cdot\mathbf{r}} \right) \boldsymbol{p_{i}}\right]
\\
&=\frac{1}{2m} \sum_{i}
\left[\boldsymbol{p_{i}}e^{\pm i\mathbf{k}\cdot\mathbf{r}_{i}}+e^{\pm i\mathbf{k}\cdot\mathbf{r}_{i}}\boldsymbol{p_{i}}\right]\\
\end{align}</math>
Let's use the golden rule to calculate the transition rates for this time-dependent interaction. The evolution of the state, to first order, is
:<math>\begin{align}
|\psi(t)\rangle = |I\rangle+\frac{1}{i\hbar}\int^{t}_{t_{o}}dt'\;e^{\frac{i}{\hbar}\mathbf{H}_{o}t'}\mathbf{H}_{int}e^{\eta t'}e^{-\frac{i}{\hbar}\mathbf{H}_{o}t'}|I\rangle
\end{align}</math>
where <math>|I\rangle</math> is the initial state and <math>e^{\eta t'}</math> is the usual slow "switch" factor. The transition amplitude to a state <math>|F\rangle</math> is
:<math>\begin{align}
\langle F|\psi(t)\rangle =
\langle F|I\rangle+\frac{1}{i\hbar}\int^{t}_{t_{o}}dt'\;\langle F|e^{\frac{i}{\hbar}\mathbf{H}_{o}t'}\mathbf{H}_{int}e^{\eta t'}e^{-\frac{i}{\hbar}\mathbf{H}_{o}t'}|I\rangle
\end{align}</math>
<math>|F\rangle</math> and <math>|I\rangle</math> are eigenstates of <math>\mathbf{H}_{o}</math>. Then we have 
:<math>\begin{align}
\langle F|\psi(t)\rangle &=\frac{1}{i\hbar}\int^{t}_{t_{o}}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{o})+\eta ]t'}\langle F|\mathbf{H}_{int}|I\rangle\\
&=\frac{1}{i\hbar}\int^{t}_{t_{o}}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{o})+\eta ]t'}\langle F|
-\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega_{\mathbf{k}}V}}\cdot \left[  \mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}e^{-i\omega t'}+\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*} e^{i\omega t'}\right]|I\rangle\\
&=\frac{-1}{i\hbar}\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega V}}
\left[           
\left[\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle           
\int^{t}_{t_{o}=-\infty}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{o}-\hbar \omega )+\eta ]t'}
\right]+
\left[\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle                 
\int^{t}_{t_{o}=-\infty}dt'\;e^{[\frac{i}{\hbar}(E_{n}-E_{o}+\hbar \omega )+\eta ]t'}
\right]
\right]\\
&=\frac{-1}{i\hbar}\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega V}}
\left[           
\left[\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle           
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{o}-\hbar \omega )+\eta ]t}}{\frac{i}{\hbar}(E_{n}-E_{o}-\hbar \omega )+\eta }
\right]+
\left[\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle                 
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{o}+\hbar \omega )+\eta ]t}}{\frac{i}{\hbar}(E_{n}-E_{o}+\hbar \omega )+\eta }
\right]
\right]\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}} e\sqrt{\frac{2\pi \hbar }{\omega V}}
\left[           
\left[\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle           
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{o}-\hbar \omega )+\eta ]t}}{(E_{n}-E_{o}-\hbar \omega )-i\eta \hbar }
\right]+
\left[\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle                 
\frac{e^{[\frac{i}{\hbar}(E_{n}-E_{o}+\hbar \omega )+\eta ]t}}{(E_{n}-E_{o}+\hbar \omega )-i\eta\hbar }
\right]
\right]\\
\end{align}</math>
The transition probability is given by
:<math>\begin{align}
P_{0 \rightarrow n}&=|\langle F|\psi(t)\rangle|^{2}\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}} e^{2}\frac{2\pi \hbar }{\omega V}
\left[           
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}           
\frac{e^{2 \eta t}}{(E_{n}-E_{o}-\hbar \omega )^{2}+\eta^{2} \hbar^{2} }
\right]+
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle|^{2}
\frac{e^{2 \eta t}}{(E_{n}-E_{o}+\hbar \omega )^{2}+\eta^{2} \hbar^{2}}
\right]
\right]\\
\end{align}</math>
Here all oscillatory terms have been averaged to zero. Taking a time derivative, we obtain the transition rate
:<math>\begin{align}
\Gamma_{0 \rightarrow n}&=\frac{dP_{0 \rightarrow n}}{dt}\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}} e^{2}\frac{2\pi \hbar }{\omega V}
\left[           
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}           
\frac{2 \eta e^{2 \eta t}}{(E_{n}-E_{o}-\hbar \omega )^{2}+\eta^{2} \hbar^{2} }
\right]+
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle |^{2}               
\frac{2 \eta e^{2 \eta t}}{(E_{n}-E_{o}+\hbar \omega )^{2}+\eta^{2} \hbar^{2}}
\right]
\right]\\
&\overset{\underset{\mathrm{\eta \rightarrow 0 }}{}}{=}\sum_{\mathbf{k}\boldsymbol{\lambda}} e^{2}\frac{2\pi \hbar }{\omega V}
\left[           
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}           
\frac{2\pi}{\hbar}\delta (E_{n}-E_{o}-\hbar \omega)
\right]+
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle |^{2}                   
\frac{2\pi}{\hbar}\delta (E_{n}-E_{o}+\hbar \omega)
\right]
\right]\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}} \frac{4\pi^{2} e^{2} }{\omega V}
\left[           
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}           
\delta (E_{n}-E_{o}-\hbar \omega)
\right]+
\left[|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle|^{2}                 
\delta (E_{n}-E_{o}+\hbar \omega)
\right]
\right]\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}}
\left[           
\underbrace{
\left[\frac{4\pi^{2} e^{2} }{\omega V}|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle|^{2}           
\delta (E_{n}-E_{o}-\hbar \omega)
\right]
}_{\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}} }
+
\underbrace{
\left[\frac{4\pi^{2} e^{2} }{\omega V}|\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle|^{2}                 
\delta (E_{n}-E_{o}+\hbar \omega)
\right]
}_{\Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}} }
\right]\\
&=\sum_{\mathbf{k}\boldsymbol{\lambda}}  \left[\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}}+ \Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\right]\\
\end{align}</math>
The above equation shows that the transition rate between two states is composed of two contributions: absorption <math>\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}}</math> and induced emission <math>\Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}</math>. Let's analyze the matrix elements between states.
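The replacement <math>\frac{2\eta}{(\Delta E)^{2}+\eta^{2}\hbar^{2}}\rightarrow \frac{2\pi}{\hbar}\delta(\Delta E)</math> used in the <math>\eta \rightarrow 0</math> limit above can be illustrated numerically: the Lorentzian weight integrates to <math>2\pi/\hbar</math> for every <math>\eta</math>, so it concentrates into a delta function of that strength (grid and <math>\eta</math> values below are illustrative):

```python
import numpy as np

# Total weight of f_eta(x) = 2*eta / (x^2 + (eta*hbar)^2) over the real line.
# Analytically this equals 2*pi/hbar independent of eta, which is why the
# eta -> 0 limit acts as (2*pi/hbar) * delta(x) inside energy integrals.
hbar = 1.0

def total_weight(eta, xmax=500.0, n=2_000_001):
    x = np.linspace(-xmax, xmax, n)
    f = 2 * eta / (x**2 + (eta * hbar)**2)
    return np.sum(f) * (x[1] - x[0])   # Riemann sum over the grid

for eta in (0.5, 0.1, 0.02):
    assert abs(total_weight(eta) - 2 * np.pi / hbar) < 0.05
```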
'''Absorption'''
Let's suppose that initial and final states are:
:<math>\begin{align}
|I\rangle&=|o\rangle \otimes |N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle \\
|F\rangle&=|n\rangle \otimes |N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...\rangle \\
\end{align}</math>
where <math>{|o\rangle, |n\rangle}</math> are the initial and final states of <math>\mathbf{H}_{0}</math> (say, the hydrogen atom) with energies <math>E_{0}<E_{n}</math>, and <math>{|N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle, |N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...\rangle}</math> are the initial and final states of <math>\mathbf{H}_{rad}</math> (the radiation field).
The matrix element entering <math>\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}}</math> is given by:
:<math>\begin{align}
\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|I\rangle
&=\langle n|\otimes \langle N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...|\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}\right]|0\rangle \otimes |N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle\\
&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\langle N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}|N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle\\
&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\langle M_{K\boldsymbol{\lambda}}|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}|N_{K\boldsymbol{\lambda}}\rangle\\
&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\sqrt{N_{K\boldsymbol{\lambda}}}\langle M_{K\boldsymbol{\lambda}}|N_{K\boldsymbol{\lambda}}-1\rangle\\
&=\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\sqrt{N_{K\boldsymbol{\lambda}}}\delta_{M_{K\boldsymbol{\lambda}},N_{K\boldsymbol{\lambda}}-1}\\     
\end{align}</math>
The last line shows that in the absorption process the system absorbs a single photon from the radiation field. Namely, the final state is given by:
:<math>\begin{align}
|F\rangle&=|n\rangle \otimes |N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}}-1,...\rangle \\
\end{align}</math>
Finally, we can write the absorption transition rate as follows:
:<math>\begin{align}
\Gamma^{abs}_{0 \rightarrow n;\mathbf{k}\boldsymbol{\lambda}}
&=\frac{4\pi^{2} e^{2} }{\omega V}|\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
\sqrt{N_{K\boldsymbol{\lambda}}}|^{2}           
\delta (E_{n}-E_{o}-\hbar \omega)\\
&=\frac{4\pi^{2} e^{2} }{\omega V}|\langle n|\mathbf{j}_{-\mathbf{k}}\cdot \boldsymbol{\lambda}|0\rangle
|^{2}N_{K\boldsymbol{\lambda}}           
\delta (E_{n}-E_{o}-\hbar \omega)
\end{align}</math>
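As a quick numerical check (a sketch in Python, not part of the original derivation), the bosonic matrix elements <math>\langle N-1|\mathbf{a}|N\rangle=\sqrt{N}</math> and <math>\langle N+1|\mathbf{a}^{\dagger}|N\rangle=\sqrt{N+1}</math> used above can be reproduced with a truncated Fock-space matrix representation:

```python
import numpy as np

def annihilation(dim):
    # a|N> = sqrt(N)|N-1>, truncated to the Fock states |0>,...,|dim-1>
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 8
a = annihilation(dim)
adag = a.conj().T            # creation operator

N = 3
ket_N = np.zeros(dim); ket_N[N] = 1.0
bra_Nm1 = np.zeros(dim); bra_Nm1[N - 1] = 1.0
bra_Np1 = np.zeros(dim); bra_Np1[N + 1] = 1.0

amp_abs = bra_Nm1 @ a @ ket_N      # sqrt(N): one photon absorbed
amp_em  = bra_Np1 @ adag @ ket_N   # sqrt(N+1): one photon emitted
```

The factors <math>N_{K\boldsymbol{\lambda}}</math> and <math>N_{K\boldsymbol{\lambda}}+1</math> that appear in the absorption and induced emission rates are just the squares of these amplitudes.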
'''Induced Emission'''
Let's suppose that initial and final states are:
:<math>\begin{align}
|I\rangle&=|n\rangle \otimes |N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle \\
|F\rangle&=|0\rangle \otimes |N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...\rangle \\
\end{align}</math>
where <math>{|n\rangle, |0\rangle}</math> are the initial and final states of <math>\mathbf{H}_{0}</math> (say, the hydrogen atom) with energies <math>E_{0}<E_{n}</math>, and <math>{|N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle, |N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...\rangle}</math> are the initial and final states of <math>\mathbf{H}_{rad}</math> (the radiation field).
The matrix element entering <math>\Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}</math> is given by:
:<math>\begin{align}
\langle F|\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|I\rangle
&=\langle 0|\otimes \langle N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...|\left[\mathbf{a}_{\mathbf{k}\boldsymbol{\lambda}}^{\dagger}\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}\right]|n\rangle \otimes |N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle\\
&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\langle N_{1\boldsymbol{\lambda}},...,M_{K\boldsymbol{\lambda}},...|\mathbf{a}^{\dagger}_{\mathbf{k}\boldsymbol{\lambda}}|N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}},...\rangle\\
&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\langle M_{K\boldsymbol{\lambda}}|\mathbf{a}^{\dagger}_{\mathbf{k}\boldsymbol{\lambda}}|N_{K\boldsymbol{\lambda}}\rangle\\
&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\sqrt{N_{K\boldsymbol{\lambda}}+1}\langle M_{K\boldsymbol{\lambda}}|N_{K\boldsymbol{\lambda}}+1\rangle\\
&=\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\sqrt{N_{K\boldsymbol{\lambda}}+1}\delta_{M_{K\boldsymbol{\lambda}},N_{K\boldsymbol{\lambda}}+1}\\     
\end{align}</math>
The last line shows that in the emission process the system releases a single photon into the radiation field. Namely, the final state is given by:
:<math>\begin{align}
|F\rangle&=|0\rangle \otimes |N_{1\boldsymbol{\lambda}},...,N_{K\boldsymbol{\lambda}}+1,...\rangle \\
\end{align}</math>
Finally, we can write the induced emission transition rate as follows:
:<math>\begin{align}
\Gamma^{ind.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}
&=\frac{4\pi^{2} e^{2} }{\omega V}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
\sqrt{N_{K\boldsymbol{\lambda}}+1}|^{2}           
\delta (E_{0}-E_{n}+\hbar \omega)\\
&=\frac{4\pi^{2} e^{2} }{\omega V}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
|^{2} (N_{K\boldsymbol{\lambda}}+1)         
\delta (E_{n}-E_{o}-\hbar \omega)
\end{align}</math>
'''Important Phenomena: Spontaneous Emission'''
Let's suppose that the initial state is a single hydrogen atom in the 2P state in the vacuum (and nothing else!). The state can be written as
:<math>\begin{align}
|I\rangle&=|2P\rangle \otimes |0,...,0,...\rangle \\
\end{align}</math>
According to induced emission, there could be a process in which the final state is:
:<math>\begin{align}
|F\rangle&=|1S\rangle \otimes |0,...,1,...\rangle \\
\end{align}</math>
where a single photon has been emitted '''without any external perturbation'''. This emission process is called ''spontaneous emission''. For an experimental observation of a Lamb-like shift in a solid state setup see [http://www.sciencemag.org/cgi/reprint/322/5906/1357.pdf here].
==== '''<span style="color:#2B65EC">Einstein's Model of Absorption and Induced Emission </span>''' ====
Let's use statistical mechanics to study a cavity filled with radiation. For this we need the Planck distribution:
:<math>\begin{align}
\langle N_{\boldsymbol{k}\boldsymbol{\lambda}}\rangle=\frac{1}{e^{\frac{\hbar c k}{K_{B}T}}-1}
\end{align}</math>
This is just the occupation number of the state <math>\boldsymbol{k}\boldsymbol{\lambda}</math>. Let's suppose the following situation:
<ul>
<li>Our cavity is made of atoms with two quantum levels with energies <math>E_{n}</math> and <math>E_{0}</math> such that <math>E_{n}>E_{0}</math>. 
<li>The walls are emitting and absorbing radiation (thermal radiation) such that the system is at equilibrium. Since there are just two levels, the photons emitted by the atoms must have energy equal to <math>E_{n}-E_{0}</math>.
</ul>
The Boltzmann distribution tells us that the probabilities to find atoms at energies <math>E_{n}</math> and <math>E_{0}</math> are respectively
:<math>\begin{align}
P_{n}=\frac{1}{Q}e^{-\frac{E_{n}}{K_{B}T}}\\
P_{0}=\frac{1}{Q}e^{-\frac{E_{0}}{K_{B}T}}\\
\end{align}</math>
Let's call  <math> \langle N \rangle </math>  the number of photons at equilibrium. At equilibrium we have
:<math>\begin{align}
0&=\frac{dN}{dt}\\
0&=\left(\frac{dN}{dt}\right)_{abs}+\left(\frac{dN}{dt}\right)_{ind.em}
\end{align}</math>
It is natural to express the absorption and induced emission rates as:
:<math>\begin{align}
\left(\frac{dN}{dt}\right)_{abs}&=-BNP_{0}\\
\left(\frac{dN}{dt}\right)_{ind.em}&=BNP_{n}
\end{align}</math>
where <math>B</math> is some constant. Since <math>P_{n}<P_{0}</math>, we have
:<math>\left|\left(\frac{dN}{dt}\right)\right|_{abs}>\left|\left(\frac{dN}{dt}\right)\right|_{ind.em}</math>
This means that eventually all photons would be absorbed, leaving <math> \langle N \rangle =0</math>. This of course is not a physical situation. Einstein realized that there must be another kind of emission process that balances the rates in such a way that <math> \langle N \rangle \ne 0</math>. This emission is precisely the spontaneous emission and can be written as
:<math>\begin{align}
\left(\frac{dN}{dt}\right)_{spon.em}&=AP_{n}
\end{align}</math>
Then we have
:<math>\begin{align}
0&=\left(\frac{dN}{dt}\right)_{abs}+\left(\frac{dN}{dt}\right)_{ind.em}+\left(\frac{dN}{dt}\right)_{spon.em}\\
0&=-BNP_{0}+BNP_{n}+AP_{n}\\
\end{align}</math>
And solving for <math> A </math> we have
:<math>\begin{align}
A&=B \langle N \rangle \left(e^{\frac{E_{n}-E_{0}}{K_{B}T}}-1\right)\\
&=B \langle N \rangle \frac{1}{ \langle N \rangle }\\
&=B
\end{align}</math>
As a conclusion, we obtain for the total emission rate the following:
:<math>\begin{align}
\left(\frac{dN}{dt}\right)_{emission}&=\left(\frac{dN}{dt}\right)_{ind.em}+\left(\frac{dN}{dt}\right)_{spon.em}\\
&=BNP_{n}+AP_{n}\\
&=BP_{n}(N+1)\\
\end{align}</math>
Notice that the factor <math>(N+1)</math> matches our previous result.
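The detailed-balance argument above can be verified numerically (a sketch with an arbitrary choice of units and level spacing): taking <math> \langle N \rangle </math> from the Planck distribution, the steady-state condition forces <math>A/B=1</math>:

```python
import numpy as np

dE = 0.7                     # level spacing E_n - E_0, in units of k_B*T (arbitrary)

# Planck occupation for photons of energy dE
N = 1.0 / np.expm1(dE)

# unnormalized Boltzmann weights (the partition function Q cancels in the balance)
P0 = 1.0
Pn = np.exp(-dE)

# steady state 0 = -B*N*P0 + B*N*Pn + A*Pn, solved for A/B:
A_over_B = N * (P0 / Pn - 1.0)   # should equal 1 for any dE and T
```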
==== '''<span style="color:#2B65EC">Details of Spontaneous Emission</span>''' ====
'''Power of the emitted light'''
Using our previous result for <math>\Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}</math>, we can calculate the power <math>dP</math> of the light with polarization <math>\boldsymbol{\lambda}</math> per unit solid angle that the spontaneous emission produces:
:<math>\begin{align}
dP&=\sum_{k}\hbar \omega \;\Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\\
&=d\Omega V \int \frac{dk\;k^{2}}{(2\pi)^{3}}\;\hbar \omega \;\Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\\
&=d\Omega V \int \frac{d\omega\;\omega^{2}}{(2\pi c)^{3}}\;\hbar \omega \;\Gamma^{spon.em}_{n \rightarrow 0;\mathbf{k}\boldsymbol{\lambda}}\\
\end{align}
</math>
Then
:<math>\begin{align}
\frac{dP}{d\Omega}
&=V \int\frac{d\omega\;\omega^{2}}{(2\pi c)^{3}}\;\hbar \omega \left[ \frac{4\pi^{2} e^{2} }{\omega V}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2} \delta (E_{n}-E_{0}-\hbar \omega) \right]\\
&=\frac{e^{2}\hbar}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\int d\omega\;\omega^{2} \delta (E_{n}-E_{0}-\hbar \omega)\\
&=\frac{e^{2}\hbar}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\frac{(E_{n}-E_{0})^{2}}{\hbar^{3}}\;\;\;\leftarrow\;\;\;\hbar\omega_{n,0}=E_{n}-E_{0}\\
&=\frac{e^{2}\omega^{2}_{n,0}}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\\
\end{align}
</math>
'''Conservation of Momentum'''
Consider matter in an eigenstate of momentum <math>\hbar q_{n}</math>. Suppose that it makes a transition to an eigenstate with momentum <math>\hbar q_{0}</math> via spontaneous emission. Momentum must be conserved. Therefore we have a process where:
Initial momenta<math>\;\;\;\rightarrow\;\;\;</math><math>\begin{align}\text{matter}& \rightarrow \hbar q_{n}\\ \text{vacuum}& \rightarrow 0\end{align}</math>
Final momenta<math>\;\;\;\rightarrow\;\;\;</math><math>\begin{align}\text{matter}& \rightarrow \hbar q_{0}\\ \text{vacuum}& \rightarrow \hbar q_{n}-\hbar q_{0}\end{align}</math>
Let's calculate the matrix element  <math>\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle</math> for two cases.
Case 1: Single free charged particle
:<math>\begin{align}
\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle
&=\boldsymbol{\lambda}^{*}\cdot\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}|\mathbf{q_{n}}\rangle\\
&=\boldsymbol{\lambda}^{*}\cdot\left\langle \mathbf{q_{0}}\left|\frac{1}{2}
\left[\frac{\boldsymbol{p_{i}}}{m}e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}+e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}\frac{\boldsymbol{p_{i}}}{m}\right]\right|\mathbf{q_{n}}\right\rangle\\
&=\boldsymbol{\lambda}^{*}\cdot\frac{1}{2}\left\langle \mathbf{q_{0}}\left|
\left[\frac{\hbar \mathbf{q_{0}}}{m}e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}+e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}\frac{\hbar \mathbf{q_{n}}}{m}\right]\right|\mathbf{q_{n}}\right\rangle\\
&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}\langle \mathbf{q_{0}}|
e^{- i\mathbf{k}\cdot\mathbf{r}_{i}}|\mathbf{q_{n}}\rangle\\
&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}
\int d^{3}r_{i} \langle \mathbf{q_{0}}|\mathbf{r}_{i}\rangle \langle \mathbf{r}_{i}| e^{-i\mathbf{k}\cdot\mathbf{r}_{i}}|\mathbf{q_{n}}\rangle\\
&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}
\int d^{3}r_{i} e^{-i\mathbf{q_{0}}\cdot\mathbf{r}_{i}} e^{-i\mathbf{k}\cdot\mathbf{r}_{i}} e^{i\mathbf{q_{n}}\cdot\mathbf{r}_{i}}\\
&=\boldsymbol{\lambda}^{*}\cdot\frac{\hbar (\mathbf{q_{0}}+\mathbf{q_{n}})}{2m}
\delta(\mathbf{q_{n}}-\mathbf{q_{0}}-\mathbf{k}) \\
\end{align}
</math>
This result is very interesting! It says that the momentum of the emitted light must be
:<math>\begin{align}
\hbar \mathbf{k} =\hbar \mathbf{q_{n}} -\hbar \mathbf{q_{0}} 
\end{align}
</math>
However this is impossible from the point of view of conservation of energy:
:<math>\begin{align}
\hbar c k =\frac{\hbar^{2} q^{2}_{n}}{2m}-\frac{\hbar^{2} q^{2}_{0}}{2m}
\end{align}
</math>
This means that a single free charged particle cannot make such a transition: energy and momentum conservation cannot be satisfied simultaneously. In this sense a single free charged particle doesn't see the vacuum fluctuations.
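One standard way to make the contradiction explicit (an added remark, using the same nonrelativistic dispersion as above): substituting <math>\mathbf{q_{0}}=\mathbf{q_{n}}-\mathbf{k}</math> into the energy condition gives
:<math>\begin{align}
\hbar c k =\frac{\hbar^{2}}{2m}\left(q_{n}^{2}-|\mathbf{q_{n}}-\mathbf{k}|^{2}\right)=\frac{\hbar^{2}}{2m}\left(2\mathbf{q_{n}}\cdot\mathbf{k}-k^{2}\right)\leq\frac{\hbar^{2} q_{n}k}{m}
\end{align}</math>
so that <math>c\leq\hbar q_{n}/m=v_{n}</math>, which would require the particle to move at least at the speed of light. Hence no photon can be emitted by a free charge.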
Case 2: General Case (System of particles)
:<math>\begin{align}
\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle
&=\boldsymbol{\lambda}^{*}\cdot\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}|\mathbf{q_{n}}\rangle\\
&=\boldsymbol{\lambda}^{*}\cdot\left\langle \mathbf{q_{0}}\left|\int d^{3}r j(\mathbf{r}) e^{-i\mathbf{k}\cdot\mathbf{r}}\right|\mathbf{q_{n}}\right\rangle\\
&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|j(\mathbf{r})|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\
\end{align}
</math>
We can use the total momentum of the system <math>\mathbf{P}=\sum_{i}\mathbf{p}_{i}</math> as the generator of translations for <math>\mathbf{r}</math>, so that we can write
:<math>\begin{align}
j(\mathbf{r})=e^{-\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}j(\mathbf{r}=0)e^{\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}
\end{align}
</math>
Then
:<math>\begin{align}
\langle \mathbf{q_{0}}|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|\mathbf{q_{n}}\rangle
&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|j(\mathbf{r})|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\
&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|e^{-\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}j(0)e^{\frac{i}{\hbar}\mathbf{P}\cdot\mathbf{r}}|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\
&=\boldsymbol{\lambda}^{*}\cdot\int d^{3}r \langle \mathbf{q_{0}}|e^{-i\mathbf{q_{0}}\cdot\mathbf{r}}j(0)e^{i\mathbf{q_{n}}\cdot\mathbf{r}}|\mathbf{q_{n}}\rangle e^{-i\mathbf{k}\cdot\mathbf{r}}\\
&=\boldsymbol{\lambda}^{*}\cdot \langle \mathbf{q_{0}}|j(0)|\mathbf{q_{n}}\rangle\int d^{3}r e^{i\mathbf{q_{n}}\cdot\mathbf{r}} e^{-i\mathbf{q_{0}}\cdot\mathbf{r}} e^{-i\mathbf{k}\cdot\mathbf{r}}\\
&=\boldsymbol{\lambda}^{*}\cdot \langle \mathbf{q_{0}}|j(0)|\mathbf{q_{n}}\rangle\delta(\mathbf{q_{n}}-\mathbf{q_{0}}-\mathbf{k})\\
\end{align}
</math>
The last line shows that
:<math>\begin{align}
\hbar \mathbf{k} =\hbar \mathbf{q_{n}} -\hbar \mathbf{q_{0}} 
\end{align}
</math>
==== '''<span style="color:#2B65EC">Electric Dipole Transitions</span>''' ====
Let's consider an atom (say, the hydrogen atom) well localized in space. Typically the wavelength of the emitted light is much larger than the size of the electron's orbit around the nucleus (say, the Bohr radius <math>a_{B}</math>). For example, the wavelength of blue light is on the order of 4000 Angstrom, while the size of the electron's orbit in the hydrogen atom is on the order of 1 Angstrom.  This means that:
:<math>\;\;\;\;\lambda \gg a_{B}\;\;\;\;\leftrightarrow\;\;\;\;\;k a_{B}\ll 1</math>
The matrix element is then
:<math>\begin{align}
\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle
&=\mathbf{\lambda}^{*}\cdot\langle 0|\mathbf{j}_{\mathbf{k}}|n\rangle\\
&=\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;e^{-i\mathbf{k}\cdot\mathbf{r}} \langle 0|\mathbf{j}(\mathbf{r})|n\rangle\\
&=\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;\left[1-i\mathbf{k}\cdot\mathbf{r}+...\right] \langle 0|\mathbf{j}(\mathbf{r})|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;\langle 0|\mathbf{j}(\mathbf{r})|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\int d^{3}\mathbf{r}\;\langle 0|\frac{1}{2}\left[\sum_{i} \frac{\mathbf{p}_{i}}{m}  \delta(\mathbf{r}-\mathbf{r}_{i})+\delta(\mathbf{r}-\mathbf{r}_{i})\frac{\mathbf{p}_{i}}{m}\right]|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\langle 0|\sum_{i} \frac{\mathbf{p}_{i}}{m}|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\langle 0|\frac{\mathbf{P}}{m}|n\rangle\;\;\;\;\;\;\;
\leftarrow\;\;\;\;\;\frac{\mathbf{P}}{m}=\frac{[\mathbf{R},\mathbf{H}_{0}]}{i\hbar}\\
&\cong\mathbf{\lambda}^{*}\cdot\frac{1}{i\hbar}\langle 0|[\mathbf{R}\mathbf{H}_{0}-\mathbf{H}_{0}\mathbf{R}]|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\frac{1}{i\hbar}\langle 0|[\mathbf{R}E_{n}-E_{0}\mathbf{R}]|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\frac{E_{n}-E_{0}}{i\hbar}\langle 0|\mathbf{R}|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\frac{\hbar\omega_{n,0}}{i\hbar}\langle 0|\mathbf{R}|n\rangle\\
&\cong\mathbf{\lambda}^{*}\cdot\frac{\omega_{n,0}}{i}\underbrace{\langle 0|\mathbf{R}|n\rangle}_{\mathbf{d}_{0,n}}\\
&\cong\frac{\omega_{n,0}}{i}\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\\
\end{align}</math>
Notice that <math>\mathbf{d}_{0,n}</math> is the off-diagonal matrix element of the dipole moment operator. The power per unit solid angle for a given polarization <math>\lambda</math> is given by
:<math>\begin{align}
\frac{dP}{d\Omega}
&=\frac{e^{2}\omega^{2}_{n,0}}{2\pi c^{3}}|\langle 0|\mathbf{j}_{\mathbf{k}}\cdot \boldsymbol{\lambda}^{*}|n\rangle|^{2}\\
&\cong\frac{e^{2}\omega^{2}_{n,0}}{2\pi c^{3}}\left|\frac{\omega_{n,0}}{i}\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\right|^{2}\\
&\cong\frac{e^{2}\omega^{4}_{n,0}}{2\pi c^{3}}\left|\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\right|^{2}\\
\end{align}</math>
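As a concrete illustration (a numerical sketch, not part of the text; the formula <math>A=\tfrac{4}{3}\tfrac{e^{2}\omega^{3}}{\hbar c^{3}}|\mathbf{d}|^{2}</math> used below is the standard total dipole rate, obtained by integrating the power over angles and dividing by <math>\hbar\omega</math>), we can evaluate the dipole matrix element <math>\langle 1s|\mathbf{R}_{z}|2p_{0}\rangle</math> of hydrogen and recover the well-known <math>2p\rightarrow 1s</math> spontaneous emission rate of about <math>6\times 10^{8}\,s^{-1}</math> (lifetime of about 1.6 ns):

```python
import numpy as np

# hydrogen radial wavefunctions in atomic units (a_B = 1):
#   R_10(r) = 2 exp(-r),   R_21(r) = r exp(-r/2) / sqrt(24)
r = np.linspace(0.0, 60.0, 200001)
dr = r[1] - r[0]
R10 = 2.0 * np.exp(-r)
R21 = r * np.exp(-r / 2.0) / np.sqrt(24.0)

# <1s|R_z|2p0> = (1/sqrt(3)) * Int_0^inf R10(r) R21(r) r^3 dr
# (1/sqrt(3) is the angular integral of Y00* cos(theta) Y10)
d = np.sum(R10 * R21 * r**3) * dr / np.sqrt(3.0)   # ~0.745 a_B

# standard dipole result for the total spontaneous rate, in atomic units:
#   A = (4/3) alpha^3 omega^3 |d|^2,  with hbar*omega = E_2p - E_1s = 3/8 hartree
alpha = 1.0 / 137.036
omega = 3.0 / 8.0
A_atomic = (4.0 / 3.0) * alpha**3 * omega**3 * d**2
A_SI = A_atomic / 2.4188843e-17    # atomic unit of time, in seconds
lifetime_ns = 1e9 / A_SI           # ~1.6 ns
```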
'''Selection Rules'''
Let's assume that the initial and final states are eigenstates of <math>\mathbf{L}^{2}</math> and  <math>\mathbf{L}_{z}</math>. Using commutation relations we can obtain the following selection rules for the
vector <math>\mathbf{d}_{0,n}</math>:
1. Selection Rules for <math>m</math>
1.1 <math>[\mathbf{L}_{z},\mathbf{R}_{z}]=0</math>. From this we have
:<math>\begin{align}
0&=\langle l' m' |[\mathbf{L}_{z},\mathbf{R}_{z}]| l m \rangle\\
&=\langle l' m' |\mathbf{L}_{z} \mathbf{R}_{z} - \mathbf{R}_{z}\mathbf{L}_{z}| l m \rangle\\
&=\hbar(m'-m)\langle l' m' |\mathbf{R}_{z}| l m \rangle\\
\end{align}</math>
This means that <math>\langle l' m' |\mathbf{R}_{z}| l m \rangle=0</math> if <math>m'-m\neq 0</math>.
1.2 <ul> <li><math>[\mathbf{L}_{z},\mathbf{R}_{x}]=i\hbar\mathbf{R}_{y}</math>. From this we have
:<math>\begin{align}
\langle l' m' |[\mathbf{L}_{z},\mathbf{R}_{x}] |l m \rangle&=i\hbar \langle l' m' |\mathbf{R}_{y}| l m \rangle\\
(m'-m)\langle l' m' |\mathbf{R}_{x}| l m \rangle&=i\langle l' m' |\mathbf{R}_{y}| l m \rangle\\
\end{align}</math>
<li><math>[\mathbf{L}_{z},\mathbf{R}_{y}]=-i\hbar\mathbf{R}_{x}</math>. From this we have
:<math>\begin{align}
\langle l' m' |[\mathbf{L}_{z},\mathbf{R}_{y}] |l m \rangle&=-i\hbar \langle l' m' |\mathbf{R}_{x}| l m \rangle\\
(m'-m)\langle l' m' |\mathbf{R}_{y}| l m \rangle&=-i\langle l' m' |\mathbf{R}_{x}| l m \rangle\\
\end{align}</math>
</ul>
Combining
:<math>\begin{align}
(m'-m)^{2}\langle l' m'|\mathbf{R}_{x}|l m \rangle=\langle l' m'|\mathbf{R}_{x}|l m \rangle\\
(m'-m)^{2}\langle l' m'|\mathbf{R}_{y}|l m \rangle=\langle l' m'|\mathbf{R}_{y}|l m \rangle\\
\end{align}</math>
From here we see that
:<math>\begin{align}
(m'-m)^{2}\langle l' m'|\mathbf{R}_{x,y}|l m \rangle&=\langle l' m'|\mathbf{R}_{x,y}|l m \rangle\\
((m'-m)^{2}-1)\langle l' m'|\mathbf{R}_{x,y}|l m \rangle &=0\\
\end{align}</math>
This means that <math>\langle l' m'|\mathbf{R}_{x,y}|l m \rangle=0</math> whenever <math>[(m'-m)^{2}-1]\neq0</math>, i.e. unless <math>m'= m\pm 1</math>.
2. Selection Rule for <math>l</math>
Consider the following commutator identity proposed by Dirac:
;<math>[\mathbf{L}^{2},[\mathbf{L}^{2},\mathbf{R}]]=2\hbar ^{2}(\mathbf{R}\mathbf{L}^{2}+\mathbf{L}^{2}\mathbf{R})</math>
After some algebra we can see that
;<math>(l'+l)(l'+l+2)((l'-l)^{2}-1)\langle l' m'|\mathbf{R}|l m \rangle=0</math>
Since <math>l'</math> and <math>l</math> are non-negative, <math>(l'+l+2)\neq0</math> for all <math>l',l </math>. There are two possibilities:
<ul>
<li> <math>(l'+l)=0</math>, which happens only for <math>l'=l=0</math>. In this case the matrix element is <math>\langle 0 0|\mathbf{R}|0 0 \rangle</math>, which vanishes by parity, so this possibility is trivial and doesn't say anything new.
<li><math>\langle l' m'|\mathbf{R}|l m \rangle=0</math> if <math>((l'-l)^{2}-1)\neq 0</math>, i.e. unless <math>l'= l\pm 1</math>.
</ul>
'''Summary'''
If the initial and final states are eigenstates of <math>\mathbf{L}^{2}</math> and  <math>\mathbf{L}_{z}</math>, then the only transitions that can occur in the dipole approximation are
;<math>\begin{align}
l'&= l\pm 1\\
m'&= m\\
m'&= m\pm 1\\
\end{align}</math>
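For <math>m'=m=0</math> the rule <math>l'=l\pm 1</math> can also be checked directly (a numerical sketch, not in the text): the angular part of <math>\langle l' 0|\mathbf{R}_{z}|l 0 \rangle</math> is proportional to <math>\int_{-1}^{1}P_{l'}(x)\,x\,P_{l}(x)\,dx</math>, which vanishes unless <math>|l'-l|=1</math>:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Gauss-Legendre quadrature with 20 nodes is exact for the
# polynomial integrands below (degree at most 9)
x, w = leggauss(20)

def overlap(lp, l):
    # Int_{-1}^{1} P_{l'}(x) * x * P_l(x) dx
    return np.sum(w * Legendre.basis(lp)(x) * x * Legendre.basis(l)(x))

nonzero = {(lp, l) for lp in range(5) for l in range(5)
           if abs(overlap(lp, l)) > 1e-12}
# every surviving pair obeys |l' - l| = 1
```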
'''Example: Transitions Among Levels n=1,2,3 of Hydrogen Atom '''
Let's consider the levels n=1,2,3 of the hydrogen atom. The possible transitions to the state 1S allowed by the selection rules are the following
[[Image:1s.jpg|400px]]
The possible transitions to the state <math> 2p_0 </math> are the following
[[Image:2p0.jpg|400px]]
'''Power & Polarization of Emitted Light'''
Case <math>m'=m</math>: In this case the selection rules tell us that:
;<math>\begin{align}
\mathbf{d}_{0,n}= \langle 0|\mathbf{R}|n\rangle=
\begin{pmatrix}
  \langle 0|\mathbf{R}_{x}|n\rangle  \\
  \langle 0|\mathbf{R}_{y}|n\rangle  \\
  \langle 0|\mathbf{R}_{z}|n\rangle  \\
\end{pmatrix}
=\begin{pmatrix}
  0  \\
  0  \\
  \langle 0|\mathbf{R}_{z}|n\rangle  \\
\end{pmatrix}
\end{align}</math>
Then we can say
<ul>
<li>The light is always plane polarized in the plane defined by <math>\mathbf{k}</math> and the z axis.
</ul>
[[Image:Planepolarization.png]]
<ul>
<li>The power is given by
:<math>\begin{align}
\frac{dP}{d\Omega}
&\cong\frac{e^{2}\omega^{4}_{n,0}}{2\pi c^{3}}\left|\mathbf{d}_{0,n}\cdot\mathbf{\lambda}^{*}\right|^{2}\\
&\cong\frac{e^{2}\omega^{4}_{n,0}}{2\pi c^{3}}\left|\langle 0|\mathbf{R}_{z}|n\rangle\right|^{2}\;\sin^{2}\theta\\
\end{align}</math>
</ul>
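Integrating this <math>\sin^{2}\theta</math> distribution over all angles gives the total radiated power (a standard step, sketched numerically here as an added check): since <math>\int d\Omega\,\sin^{2}\theta=8\pi/3</math>, the total power for this polarization is <math>P=\frac{4}{3}\frac{e^{2}\omega^{4}_{n,0}}{c^{3}}|\langle 0|\mathbf{R}_{z}|n\rangle|^{2}</math>.

```python
import numpy as np

# Int dOmega sin^2(theta)
#   = Int_0^{2pi} dphi Int_0^{pi} sin^2(theta) sin(theta) dtheta = 8*pi/3
theta = np.linspace(0.0, np.pi, 100001)
dtheta = theta[1] - theta[0]
solid_angle_integral = 2.0 * np.pi * np.sum(np.sin(theta)**3) * dtheta
```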
Case <math>m'=m\pm 1</math>: In this case the selection rules tell us that:
:<math>\begin{align}
\mathbf{d}_{0,n}= \langle 0|\mathbf{R}|n\rangle=
\begin{pmatrix}
  \langle 0|\mathbf{R}_{x}|n\rangle  \\
  \langle 0|\mathbf{R}_{y}|n\rangle  \\
  \langle 0|\mathbf{R}_{z}|n\rangle  \\
\end{pmatrix}
=\begin{pmatrix}
\langle 0|\mathbf{R}_{x}|n\rangle  \\
  \langle 0|\mathbf{R}_{y}|n\rangle  \\
  0 \\
\end{pmatrix}
\end{align}</math>
From the previous result we have
:<math>\begin{align}
\mp \langle l' m' |\mathbf{R}_{y}| l m \rangle&=-i\langle l' m' |\mathbf{R}_{x}| l m \rangle\\
\end{align}</math>
Then
:<math>\begin{align}
\mathbf{d}_{0,n}= \langle 0|\mathbf{R}_{x}|n\rangle
\begin{pmatrix}
  1  \\
  \pm i  \\
  0  \\
\end{pmatrix}
\end{align}</math>
Then we can say
<ul>
<li><math>\mathbf{d}_{0,n}</math> lies in the XY plane. The polarization of the emitted light is circular.
<li>Let's put a detector to see the light coming along the positive z axis. Since right circularly polarized light has angular momentum <math>\hbar</math> while left circularly polarized light has angular momentum <math>-\hbar</math>, we can state the following:
<ul>
<li>If we see right circularly polarized light, then by conservation of angular momentum we know that
:<math>\begin{align} 
\hbar m=\hbar m' + \hbar\;\;\;\;\;\rightarrow \;\;\;\;\; m'-m=-1
\end{align}</math>
i.e., the transition was <math>m'-m=-1</math>.
<li>If we see left circularly polarized light, then by conservation of angular momentum we know that
:<math>\begin{align} 
\hbar m=\hbar m' - \hbar\;\;\;\;\;\rightarrow \;\;\;\;\; m'-m=1
\end{align}</math>
i.e., the transition was <math>m'-m=1</math>.
</ul>
==== '''<span style="color:#2B65EC">Scattering of Light</span>''' ====
''
''( Notes and LaTex code, courtesy of Dr. Oskar Vafek)''
We can analyze how a charged system interacts with photons and scatters them. The problem of light scattering can be considered as a transition from an initial state <math>|\chi_0\rangle=|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle</math> to a final state <math>|n;N_{k,\lambda}-1,N_{k',\lambda'}=1\rangle </math>. For this transition we can calculate the transition amplitude. Let us deal with some basics first.
First of all we can write the  Schrodinger equation for an electron in a potential
<math>V(r)</math> interacting with quantized EM radiation as:
<math>i\hbar\frac{\partial}{\partial t}|\psi\rangle
=\mathcal{H}|\psi\rangle </math>
where
<math>\mathcal{H}=\frac{1}{2m}\left(p-\frac{e}{c}A(r)\right)^2+V(r)+\sum_{k,\hat{\lambda}}\hbar\omega_{k}\left(\hat{a}_{k\hat{\lambda}}^{\dagger}\hat{a}_{k\hat{\lambda}}+\frac{1}{2}\right) </math>
We are considering the transverse gauge, in which the vector potential  operator can be defined as:
<math>\mathbf{\hat{A}(r)}=\frac{1}{\sqrt{V}}\sum_{k,\lambda}\left[\sqrt{\frac{2\pi\hbar}{\omega_{k}}}c\;\left(\hat{a}_{k,\hat{\lambda}}\hat{\lambda}e^{ik\cdot r}+\hat{a}^{\dagger}_{k,\hat{\lambda}}\hat{\lambda^*}e^{-ik\cdot r}\right)\right]</math>
where
<math>[\hat{a}_{k\hat{\lambda}},\hat{a}_{k'\hat{\lambda'}}^{\dagger}]=\delta_{kk'}\delta_{\hat{\lambda}\hat{\lambda'}};\;\;\;\;
[\hat{a}_{k\hat{\lambda}},\hat{a}_{k'\hat{\lambda'}}]=0 </math>
Let us define,
<math>\mathcal{H}=\mathcal{H}_0+\mathcal{H}'</math>
where
<math>\mathcal{H}_0=\mathcal{H}^{(at)}_0+\mathcal{H}^{(rad)}_0=\left(\frac{p^2}{2m}+V(r)\right)+\sum_{k,\hat{\lambda}}\hbar\omega_{k}\left(\hat{a}_{k\hat{\lambda}}^{\dagger}\hat{a}_{k\hat{\lambda}}+\frac{1}{2}\right)</math>
and
<math>\mathcal{H}'=-\frac{e}{mc}\mathbf{A(r)}\cdot p+\frac{e^2}{2mc^2}\mathbf{A(r)}\cdot \mathbf{A(r)}</math>
We can use the Dirac picture to represent the wavefunction as:
<math>|\psi(t)\rangle=e^{-\frac{i}{\hbar}\mathcal{H}_0t}|\chi(t)\rangle </math>
Therefore,
<math>i\hbar\frac{\partial}{\partial t}|\chi\rangle =\mathcal{H}'_I(t)|\chi\rangle
=e^{\frac{i}{\hbar}\mathcal{H}_0t}\mathcal{H}'e^{-\frac{i}{\hbar}\mathcal{H}_0t}|\chi\rangle
=e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}\left(e^{\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\mathcal{H}'
e^{-\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}|\chi\rangle</math>
More precisely,
<math>\mathcal{H}'_I(t)= e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}\left(e^{\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\mathcal{H}'
e^{-\frac{i}{\hbar}\mathcal{H}^{(rad)}_0t}\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}
=e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}\left(
-\frac{e}{mc}A(r,t)\cdot p+\frac{e^2}{2mc^2}A(r,t)\cdot A(r,t)\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t}</math>
where the vector potential operator which is now time dependent can be defined as,
<math>\mathbf{A(r,t)}=\frac{1}{\sqrt{V}}\sum_{k,\lambda}\left[\sqrt{\frac{2\pi\hbar}{\omega_{k}}}c\;\left(\hat{a}_{k,\hat{\lambda}}\hat{\lambda}e^{ik\cdot r-i\omega_{k} t}+\hat{a}^{\dagger}_{k,\hat{\lambda}}\hat{\lambda^*}e^{-ik\cdot r+i\omega_{k}t}\right)\right]</math>
Using time-dependent perturbation theory up to second order, we can write the wavefunction in the Dirac picture as
<math>|\chi(t)\rangle\approx|\chi_0\rangle+\frac{1}{i\hbar}\int_{-\infty}^{t}dt'\mathcal{H}'_I(t')|\chi_0\rangle+
\frac{1}{(i\hbar)^2}\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt''\mathcal{H}'_I(t')\mathcal{H}'_I(t'')|\chi_0\rangle </math>
where  the perturbation is slowly switched on at <math>t=-\infty</math>.
As mentioned before, we need to calculate the transition amplitude from
<math>|\chi_0\rangle=|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle</math>
to the final state
<math>|n;N_{k,\lambda}-1,N_{k',\lambda'}=1\rangle </math>
Therefore we need to calculate the following transition amplitude,
<math> C(t)=\langle n;N_{k,\lambda}-1,N_{k',\lambda'}=1|\chi(t)\rangle </math>
Using second order time-dependent perturbation theory, the amplitude for such a transition is
<math>C(t)=\frac{1}{i\hbar}\int_{-\infty}^{t}dt'\langle
n;N_{k,\lambda}-1,N_{k',\lambda'}=1|\mathcal{H}'_I(t')|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle </math>
<math> +\frac{1}{(i\hbar)^2}\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt''\langle n;N_{k,\lambda}-1,N_{k',\lambda'}=1|\mathcal{H}'_I(t')\mathcal{H}'_I(t'')|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle </math>
The required transition can be made by the term proportional to <math>\mathbf{A(r)}^2</math> (the diamagnetic term) in first order, while the term proportional to <math>\mathbf{A(r)}</math> (the paramagnetic term) gives a non-zero overlap in second order perturbation theory. Therefore we have:
<math> \begin{align}C(t)&=\frac{1}{i\hbar}\int_{-\infty}^{t}dt'\langle
n;N_{k,\lambda}-1,N_{k',\lambda'}=1|
e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}\left(
\frac{e^2}{2mc^2}\mathbf{A(r},t')\cdot \mathbf{A(r},t')\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}
|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle \\
&+ \frac{1}{(i\hbar)^2}\int_{-\infty}^{t}dt'
\int_{-\infty}^{t'}dt''\langle
n;N_{k,\lambda}-1,N_{k',\lambda'}=1|e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}\left(
-\frac{e}{mc}A(r,t')\cdot p\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t'}\times\\
&\times e^{\frac{i}{\hbar}\mathcal{H}^{(at)}_0t''}\left(
-\frac{e}{mc}A(r,t'')\cdot p\right)e^{-\frac{i}{\hbar}\mathcal{H}^{(at)}_0t''}
|0;N_{k,\lambda},N_{k',\lambda'}=0\rangle\\
\end{align}</math>
We can ignore the <math>\mathbf r</math>-dependence of the gauge field by using the dipole approximation, that is, we keep only the leading term of <math>e^{-i\mathbf{k}\cdot\mathbf{r}}=1-i\mathbf{k}\cdot\mathbf{r}+...\approx 1</math>
<math>\begin{align}C(t)&=\frac{1}{i\hbar}\frac{e^2}{2mc^2}\int_{-\infty}^{t}dt'e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)t'}\langle
N_{k,\lambda}-1,N_{k',\lambda'}=1| A(t')\cdot A(t')
|N_{k,\lambda},N_{k',\lambda'}=0\rangle\langle n|0\rangle\\
&+\frac{1}{(i\hbar)^2}\frac{e^2}{m^2c^2}\sum_{\alpha}\int_{-\infty}^{t}dt'
\int_{-\infty}^{t'}dt''e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_{\alpha})t'}e^{\frac{i}{\hbar}(\epsilon_{\alpha}-\epsilon_0)t''}\times\\
&\langle N_{k,\lambda}-1,N_{k',\lambda'}=1|A_{\mu}(t')A_{\nu}(t'')|N_{k,\lambda},N_{k',\lambda'}=0\rangle \langle n| p_{\mu} |\alpha\rangle \langle \alpha| p_{\nu} |0\rangle \\
\end{align}</math>
Let's define
<math>\mathbf{C(t)=C_1(t)+C_2(t)}</math>
where
<math>\begin{align}C_1(t) &=\frac{\delta_{n,0}}{i\hbar}\frac{e^2}{2mc^2}
\frac{1}{V}\frac{2\pi\hbar c^2}{\sqrt{\omega_{k}\omega_{k'}}}\hat{\lambda}\cdot {\hat{\lambda}^{'*}}
\langle
N_{k,\lambda}-1,N_{k',\lambda'}=1|(a_{k\lambda}a^{\dagger}_{k'\lambda'}+a^{\dagger}_{k'\lambda'}a_{k\lambda})
|N_{k,\lambda},N_{k',\lambda'}=0\rangle \times\\
&\int_{-\infty}^{t}dt'e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)t'}e^{-i(\omega_{k}-\omega_{k'})t'}e^{2\eta
t'}\\
 
&=\frac{\delta_{n,0}}{i\hbar}\frac{e^2}{m}
\frac{1}{V}\frac{2\pi\hbar
}{\sqrt{\omega_{k}\omega_{k'}}}\hat{\lambda}\cdot {\hat{\lambda}^{'*}}\sqrt{N_{k\lambda}}
\times\frac{e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)t}e^{-i(\omega_{k}-\omega_{k'})t}e^{2\eta
t}}{\frac{i}{\hbar}(\epsilon_n-\epsilon_0)-i(\omega_{k}-\omega_{k'})+2\eta}
\end{align}</math>
The second order term is
<math>\begin{align} C_2(t)&=
\frac{1}{(i\hbar)^2}\frac{e^2}{m^2c^2}\frac{1}{V}\frac{2\pi\hbar
c^2}{\sqrt{\omega_{k}\omega_{k'}}}\sqrt{N_{k\lambda}}
\sum_{\alpha}\int_{-\infty}^{t}dt' \int_{-\infty}^{t'}dt''
e^{\frac{i}{\hbar}(\epsilon_n-\epsilon_{\alpha})t'}e^{\frac{i}{\hbar}(\epsilon_{\alpha}-\epsilon_0)t''}\times\\
&\left( \langle n| p |\alpha\rangle \cdot \hat{\lambda}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}e^{-i\omega_{k}t'}e^{\eta
t'}e^{i\omega_{k'}t''}e^{\eta t''}+
\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p |0\rangle\cdot{\hat{\lambda}}e^{i\omega_{k'}t'}e^{\eta
t'}e^{-i\omega_{k}t''}e^{\eta t''}\right)\\
&=
\frac{1}{(i\hbar)^2}\frac{e^2}{m^2}\frac{1}{V}\frac{2\pi\hbar}{\sqrt{\omega_{k}\omega_{k'}}}\sqrt{N_{k\lambda}}
\times\frac{e^{\frac{i}{\hbar}\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}\right)t}e^{2\eta
t}}{\frac{i}{\hbar}\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}-2i\hbar\eta\right)}
\times\\
&\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\frac{i}{\hbar}\left(\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta\right)}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\frac{i}{\hbar}\left(\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta\right)}\right)\\  \end{align}</math>
Therefore,
<math>\begin{align}C(t)&=C_1(t)+C_2(t)\\
&=-\frac{e^{\frac{i}{\hbar}\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}\right)t}e^{2\eta
t}}{\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}-2i\hbar\eta\right)}\frac{\sqrt{N_{k\lambda}}}{V}
\frac{2\pi \hbar e^2}{m\sqrt{\omega_{k}\omega_{k'}}}\times\\
&\left(\delta_{n,0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}-\frac{1}{m}
\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\right)\\
\end{align}</math>
The time dependent probability is
<math>\begin{align}\mathcal{P}(t)&=|C(t)|^2\\
&=\frac{e^{4\eta
t}}{\left(\epsilon_n-\epsilon_0+\hbar\omega_{k'}-\hbar\omega_{k}\right)^2+4\hbar^2\eta^2}\frac{N_{k\lambda}}{V^2}
\frac{4\pi^2 \hbar^2 e^4}{m^2\omega_{k}\omega_{k'}}\times\\
&\left|\delta_{n,0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}-\frac{1}{m}
\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\right|^2\\ \end{align}</math>
and the transition rate is
<math>\begin{align}\Gamma&=\frac{\partial \mathcal{P}(t)}{\partial
t}\\
&=\frac{2\pi}{\hbar} \frac{N_{k\lambda}}{V^2}
\frac{4\pi^2 \hbar^2 e^4}{m^2\omega_{k}\omega_{k'}}\times
\delta\left(\epsilon_n-\epsilon_0-\hbar\omega_{k}+\hbar\omega_{k'}\right)
\times\\
&\left|\delta_{n,0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}-\frac{1}{m}
\sum_{\alpha}\left( \frac{\langle n| p |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\right|^2\\
\end{align}</math>
We observe that,
<math>\frac{i}{\hbar}[\mathcal{H}_0^{(at)},r]=\frac{1}{m}p\;\;\Rightarrow\;\;
\langle n| p |\alpha\rangle=\frac{i}{\hbar}m\langle
n|[\mathcal{H}_0^{(at)},r]
|\alpha\rangle=\frac{i}{\hbar}m(\epsilon_n-\epsilon_{\alpha})\langle n| r
|\alpha\rangle </math>
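As a sanity check (not part of the text's derivation), the commutator identity above can be verified numerically for a harmonic oscillator in a truncated number basis, with <math>\hbar=m=\omega=1</math> chosen purely for illustration:

```python
import numpy as np

# Sanity check (not part of the text's derivation): verify
# (i/hbar)[H0, x] = p/m for a harmonic oscillator in a truncated
# number basis, with hbar = m = omega = 1 chosen for illustration.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
x = (a + a.T) / np.sqrt(2)                  # position operator
p = 1j * (a.T - a) / np.sqrt(2)             # momentum operator
H = a.T @ a + 0.5 * np.eye(N)               # H0 = a†a + 1/2

commutator = 1j * (H @ x - x @ H)           # (i/hbar)[H0, x]
print(np.allclose(commutator, p))           # True
```

The identity holds exactly even in the truncated basis, because <math>[a^{\dagger}a,a]=-a</math> survives truncation entry by entry.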
Taking the limit <math>\eta\rightarrow 0</math>, we get
<math>\begin{align}&\frac{1}{m} \sum_{\alpha}\left( \frac{\langle n| p
|\alpha\rangle \cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\frac{i}{\hbar} \sum_{\alpha}\left(
\frac{(\epsilon_n-\epsilon_{\alpha})\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_n+\hbar\omega_{k}-i\hbar\eta}+
\frac{(\epsilon_{\alpha}-\epsilon_0)\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\frac{i}{\hbar} \sum_{\alpha}\left(-\langle n| r |\alpha\rangle
\cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}+ \langle n| p |\alpha\rangle
\cdot \hat{\lambda}^{'*}\langle \alpha| r
|0\rangle\cdot{\hat{\lambda}}\right)\\
&+i\omega_{k} \sum_{\alpha}\left( \frac{\langle n| r
|\alpha\rangle \cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_n+\hbar\omega_{k}-i\hbar\eta}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\delta_{n0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}+i\omega_{k}
\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_n+\hbar\omega_{k}-i\hbar\eta}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
\end{align}</math>
where in the second line we have used the energy conserving
<math>\delta-</math>function, giving
<math>\epsilon_n+\hbar\omega_{k'}=\epsilon_0+\hbar\omega_{k}</math>. Using the
above commutation relation again we finally find
<math>\begin{align}&\frac{1}{m} \sum_{\alpha}\left( \frac{\langle n| p
|\alpha\rangle \cdot \hat{\lambda}\langle \alpha| p
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| p |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| p
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
&=\delta_{n0}\hat{\lambda}\cdot{\hat{\lambda}}^{'*}+m\omega_{k}\omega_{k'}
\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| r
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)\\
\end{align}</math>
Therefore
<math>\begin{align}\Gamma &=\frac{2\pi}{\hbar} \frac{N_{k\lambda}}{V^2} \frac{4\pi^2
\hbar^2 e^4}{m^2\omega_{k}\omega_{k'}}\,m^2\omega^2_{k}\omega^2_{k'}\,
\delta\left(\epsilon_n-\epsilon_0-\hbar\omega_{k}+\hbar\omega_{k'}\right)\times\\
&\left|\sum_{\alpha}\left( \frac{\langle n|r|\alpha \rangle \cdot \hat{\lambda}\,\langle \alpha| r|0\rangle\cdot {\hat{\lambda}}^{'*}} {\epsilon_\alpha-\epsilon_0 + \hbar \omega_{k'}-i\hbar \eta} + \frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r|0\rangle\cdot \hat{\lambda}}{\epsilon_\alpha-\epsilon_0 - \hbar \omega_{k}-i\hbar \eta}\right)\right|^2
\end{align}</math>
To get the total transition rate, we need to sum over all wavevectors <math>k'</math> within a solid angle <math>d\Omega'</math>:
<math>\begin{align}dw\!\!&=\!\!\sum_{k'\in d\Omega'}\Gamma \\&= \frac{2\pi}{\hbar}
\frac{d\Omega'
\omega^2_{k'}}{8\pi^3c^3\hbar}\frac{N_{k\lambda}}{V}
\frac{4\pi^2 \hbar^2
e^4}{m^2\omega_{k}\omega_{k'}}m^2\omega^2_{k}\omega^2_{k'}\left|\sum_{\alpha}\left(
\frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}\langle
\alpha| r
|0\rangle\cdot {\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot {\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)
\right|^2\\
&=d\Omega'\frac{e^4\omega_{k}\omega^3_{k'}}{c^3}\frac{N_{k\lambda}}{V}
\left|\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| r
|0\rangle\cdot {\hat{\lambda}}^{'*}}{\epsilon_{\alpha}-\epsilon_0+\hbar\omega_{k'}-i\hbar\eta}+
\frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot {\hat{\lambda}}}{\epsilon_{\alpha}-\epsilon_0-\hbar\omega_{k}-i\hbar\eta}\right)
\right|^2\\\end{align}</math>
where <math>\epsilon_n+\hbar\omega_{k'}=\epsilon_0+\hbar\omega_{k}</math>. Finally
the differential cross-section is found by dividing by the photon
flux <math>c N_{k\lambda}/V</math> to yield
<math>\frac{d\sigma}{d\Omega'}
=\frac{e^4\omega_{k}\omega^3_{k'}}{c^4}
\left|\sum_{\alpha}\left( \frac{\langle n| r |\alpha\rangle \cdot
\hat{\lambda}\langle \alpha| r
|0\rangle\cdot{\hat{\lambda}}^{'*}}{\epsilon_{0}-\epsilon_{\alpha}-\hbar\omega_{k'}+i\hbar\eta}+
\frac{\langle n| r |\alpha\rangle \cdot \hat{\lambda}^{'*}\langle
\alpha| r
|0\rangle\cdot{\hat{\lambda}}}{\epsilon_{0}-\epsilon_{\alpha}+\hbar\omega_{k}+i\hbar\eta}\right)
\right|^2</math>
Therefore, for elastic scattering, the cross-section is inversely proportional to the fourth power of the wavelength. This explains why the sky is blue: blue light, having a shorter wavelength than red, is scattered more strongly.
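As a quick numerical illustration of this <math>\omega^4</math> (equivalently <math>1/\lambda^4</math>) dependence, taking representative wavelengths of 450 nm for blue and 700 nm for red (these values are an assumption, not from the text):

```python
# Illustration of the 1/lambda^4 dependence of the elastic cross-section:
# relative scattering of blue vs red light. The wavelengths are
# representative values (an assumption), not from the text.
blue, red = 450e-9, 700e-9          # wavelengths in metres
ratio = (red / blue) ** 4           # sigma ~ omega^4 ~ 1/lambda^4
print(f"blue is scattered ~{ratio:.1f}x more strongly than red")
```

The ratio comes out to roughly 6, which is why the scattered daylight we see is dominated by the blue end of the spectrum.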
== '''<span style="color:#2B65EC">Non-perturbative methods</span>''' ==
Apart from the conventional perturbative methods, there also exist non-perturbative methods for approximately determining the lowest energy eigenstate, or ground state, and some excited states of a given system. Superconductivity and the fractional quantum Hall effect are examples of problems that were solved using non-perturbative methods. One of the most important methods for the approximate determination of the wavefunction and eigenvalues of a system is the variational method, which is based on the variational principle. The variational method is very general and can be used whenever the equations can be put into variational form; it is also a springboard to many numerical computations.
===Principle of the Variational Method===
Consider a completely arbitrary system with a time-independent Hamiltonian <math>\mathcal{H}</math>, whose entire spectrum we assume to be discrete and non-degenerate:
<math>\mathcal{H}|{\varphi}_{n}\rangle=\mathcal{E}_{n}|{\varphi}_{n}\rangle\,; \qquad n = 0,1,2,\dots \!</math>
Let us apply the variational principle to find the ground state of the system. Let <math> |{\psi}\rangle </math> be an arbitrary ket of the system. We can define the expectation value of the Hamiltonian as
<span id="4.1.1"></span>
<math>\langle\mathcal{H}\rangle=\frac{\langle{\psi}|\mathcal{H}|{\psi}\rangle}{\langle{\psi}|{\psi}\rangle}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.1)
</math>
Of course, if the wavefunction is normalized so that <math>\langle{\psi}|{\psi}\rangle=1 </math>, then the expectation value of the Hamiltonian is simply <math>\langle\mathcal{H}\rangle=\langle{\psi}|\mathcal{H}|{\psi}\rangle </math>.
The variational principle states that,
<span id="4.1.2"></span>
<math>\langle\mathcal{H}\rangle\geq \mathcal{E}_0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.2)</math>
Equality, <math>\langle\mathcal{H}\rangle= \mathcal{E}_0</math>, holds only if the state used to compute the expectation value is the exact ground state of the Hamiltonian; it cannot be an unperturbed or otherwise approximate wavefunction.
Because the expectation value of the Hamiltonian is always greater than or equal to the ground state energy, calculating the expectation value with an approximate trial wavefunction yields an upper bound for the ground state energy.
If you are making a guess at the wavefunction but do not know it explicitly, you can write it in terms of a parameter and then minimize the expectation value of the Hamiltonian with respect to that parameter. For example, we can write a trial ground state wavefunction for the hydrogen atom, with parameter <math>b \!</math>, as:
<math>\psi= \dfrac{e^{-b r}}{\sqrt{\pi a_o^3}}</math>
You would then minimize the expectation value of <math> \mathcal{H} \!</math> with respect to <math> b \! </math>, lowering your upper bound as far as possible so that you have a better idea of the true value of the energy.
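A minimal sketch of this minimization, in atomic units (<math>\hbar=m=e=1</math>, <math>a_0=1</math>, an assumption chosen purely for illustration): for the exponential trial function above, the expectation value takes the closed form <math>\langle\mathcal{H}\rangle(b)=b^2/2-b</math>, and a simple grid scan recovers the minimum.

```python
# Sketch of the parameter-minimization step in atomic units
# (hbar = m = e = 1, a0 = 1 -- an assumption for illustration). For the
# hydrogen trial function psi ~ exp(-b*r) the expectation value takes
# the closed form <H>(b) = b^2/2 - b; a grid scan recovers the minimum
# b = 1, <H>_min = -1/2 (here exact, since the trial family happens to
# contain the true ground state).
def energy(b):
    return 0.5 * b**2 - b       # kinetic term b^2/2, potential term -b

bs = [i / 1000 for i in range(1, 3000)]
b_best = min(bs, key=energy)
print(b_best, energy(b_best))   # 1.0 -0.5
```

Because the trial family contains the exact ground state, the bound is saturated; for a generic trial family the scan would only give an upper bound.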
In some cases a lower bound can also be found by a similar method. If <math>V \!</math> is a positive operator, i.e. <math>\langle{\psi}|V|{\psi}\rangle\geq 0 </math> for all states, then <math>\langle{\psi}|\mathcal{H}_0+V|{\psi}\rangle \geq \langle{\psi}|\mathcal{H}_0|{\psi}\rangle</math>, so the ground state energy of <math>\mathcal{H}_0=\mathcal{H}-V</math> provides a lower bound for the energy of <math>\mathcal{H}</math>.
Since the exact eigenkets <math> |{\varphi}_n\rangle </math> of <math>\mathcal{H}</math> form a complete set, we can express our arbitrary ket <math> |{\psi}\rangle </math> as a linear combination of them. Therefore, we have
<math> |{\psi}\rangle=\sum_{n} C_n |{\varphi}_n\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.3)</math>
Taking the expectation value of <math>\mathcal{H}</math> in this state, we get
<math>\langle{\psi}|\mathcal{H}|{\psi}\rangle= \sum_{n} |C_n|^{2}\langle{\varphi}_n| \mathcal{H} |{\varphi}_n\rangle =\sum_{n}|C_n|^{2} \mathcal{E}_n </math>
However,  <math> \mathcal{E}_n \geq \mathcal{E}_0 </math>. So, we can write the above equation as
<math>\langle{\psi}|\mathcal{H}|{\psi}\rangle \geq \mathcal{E}_0 \sum_{n} |C_n|^{2}</math>
Or
<math> \mathcal{E}_0 \leq \frac{ \langle{\psi}|\mathcal{H}|{\psi}\rangle } {\langle{\psi}|{\psi}\rangle} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.1.4)</math>
with <math>{\langle{\psi}|{\psi}\rangle}=\sum_{n} |C_n|^{2}</math>, thus proving eq. [[#4.1.2]].
Thus eq. [[#4.1.2]] gives an upper bound to the exact ground state energy. For the equality in eq. [[#4.1.2]] to hold, all coefficients except <math>\mathcal{C}_0</math> must vanish; then <math>|{\psi}\rangle </math> is the ground state eigenvector of the Hamiltonian with eigenvalue <math>\mathcal{E}_0</math>.
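The inequality just proved can also be checked numerically: for any Hermitian matrix playing the role of <math>\mathcal{H}</math>, the Rayleigh quotient of a random trial vector never dips below the smallest eigenvalue. The <math>2\times 2</math> symmetric matrix below is an arbitrary choice for the demonstration:

```python
import math
import random

# Numerical illustration (not from the text) of eq. (4.1.2): for any
# trial vector, the Rayleigh quotient <psi|H|psi>/<psi|psi> never dips
# below the smallest eigenvalue. The 2x2 symmetric matrix H is an
# arbitrary choice for this demonstration.
H = [[2.0, 1.0], [1.0, 3.0]]
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
E0 = (tr - math.sqrt(tr**2 - 4 * det)) / 2    # smallest eigenvalue

random.seed(0)
for _ in range(1000):
    c = [random.gauss(0, 1), random.gauss(0, 1)]        # random trial vector
    Hc = [H[0][0] * c[0] + H[0][1] * c[1],
          H[1][0] * c[0] + H[1][1] * c[1]]
    rayleigh = (c[0] * Hc[0] + c[1] * Hc[1]) / (c[0]**2 + c[1]**2)
    assert rayleigh >= E0 - 1e-12                        # eq. (4.1.2)
print("bound E0 =", round(E0, 4), "never violated")
```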
===Generalization of Variational Principle: The Ritz Theorem.===
We claim that the expectation value of the Hamiltonian is stationary in the neighborhood of its discrete eigenvalues. Let us again consider the expectation value of the Hamiltonian eq.[[#4.1.1]].
<math>\langle\mathcal{H}\rangle=\frac{\langle{\psi}|\mathcal{H}|{\psi}\rangle}{\langle{\psi}|{\psi}\rangle}</math>
Here <math>\langle\mathcal{H}\rangle</math> is considered as a functional of <math>|\psi\rangle</math>. Let us define the variation of <math>\langle\mathcal{H}\rangle</math> such that <math>|\psi\rangle</math> goes to <math>|\psi\rangle +| \delta \psi\rangle </math>, where <math>| \delta \psi\rangle </math> is infinitesimally small. Let us rewrite eq.[[#4.1.1]] as
<math>\langle\mathcal{H}\rangle\langle{\psi}|{\psi}\rangle=\langle{\psi}|\mathcal{H}|{\psi}\rangle\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.2.1)</math>.
Differentiating the above relation,
<span id="4.2.2"></span>
<math>\langle{\psi}|{\psi}\rangle\delta\langle\mathcal{H}\rangle+\langle\mathcal{H}\rangle[\langle{\psi}|\delta{\psi}\rangle+\langle\delta{\psi}|{\psi}\rangle]=\langle{\psi}|\mathcal{H}|{\delta\psi}\rangle+\langle{\delta\psi}|\mathcal{H}|{\psi}\rangle\qquad \qquad \qquad \qquad \qquad (4.2.2)</math>
However, <math>\langle\mathcal{H}\rangle</math> is just a c-number, so we can rewrite eq [[#4.2.2]] as
<span id="4.2.3"></span>
<math>\langle{\psi} | {\psi}\rangle\delta\langle\mathcal{H}\rangle =\langle{\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle] | {\delta\psi}\rangle+\langle{\delta\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle]|{\psi}\rangle\qquad \qquad \qquad \qquad \qquad (4.2.3)</math>.
If <math>\delta \langle \mathcal{H}\rangle=0 </math>, then the mean value of the Hamiltonian is stationary.
Therefore, 
<math>\langle{\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle] | {\delta\psi}\rangle+\langle{\delta\psi} | [\mathcal{H}-\langle\mathcal{H}\rangle]|{\psi}\rangle=0 </math>.
Define,
<span id="4.2.4"></span>
<math>|{\varphi}\rangle =[\mathcal{H}-\langle\mathcal{H}\rangle] | {\psi}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.2.4)</math>.
Hence,eq. [[#4.2.3]] becomes
<span id="4.2.5"></span>
<math>\langle{\varphi}|\delta{\psi}\rangle+ \langle\delta{\psi}|{\varphi}\rangle=0 \qquad \qquad \qquad \qquad \qquad \qquad (4.2.5)</math>.
We can choose the variation of <math>|{\psi}\rangle</math> to be
<math>|\delta{\psi}\rangle=\delta\lambda|{\varphi}\rangle</math>,
with <math>\delta\lambda \!</math> being a small (real) number. Therefore eq [[#4.2.5]] can be written as
<math>2\,\delta\lambda\,\langle{\varphi}|{\varphi}\rangle=0 \qquad \qquad \qquad \qquad \qquad \qquad (4.2.6)</math>
Since the norm of <math>|{\varphi}\rangle</math> vanishes, <math>|{\varphi}\rangle</math> itself must be the null ket. Keeping this in mind, if we analyze eq [[#4.2.4]], it is clear that we can rewrite it as an eigenvalue problem:
<math>\mathcal{H}|{\psi}\rangle=\langle\mathcal{H}\rangle|{\psi}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.2.7)</math>.
Finally, we can say that the expectation value of the Hamiltonian is stationary if and only if the wavefunction <math>|{\psi}\rangle</math> is an eigenvector of the Hamiltonian, and the stationary values of <math>\langle\mathcal{H}\rangle </math> are precisely the eigenvalues of the Hamiltonian.
The general method is to find an approximate trial wavefunction that contains one or more parameters <math> \alpha, \beta, \gamma, \dots \! </math>. If the expectation value <math>\langle\mathcal{H}\rangle </math> can be differentiated with respect to these parameters, its extrema can be found from the equations
<math>\frac{\partial\langle\mathcal{H}\rangle}{\partial\alpha}=\frac{\partial\langle\mathcal{H}\rangle}{\partial\beta}=\frac{\partial\langle\mathcal{H}\rangle}{\partial\gamma}= \dots = 0 </math>
The absolute minimum of the expectation value of the Hamiltonian obtained by this method corresponds to an upper bound on the ground state energy; the other relative extrema correspond to excited states. A great virtue of the variational method is that even a poor approximation to the actual wavefunction can yield an excellent approximation to the actual energy.
===Upper Bound on First Excited State===
We claim that if <math>\langle{\psi}|{\varphi}_0\rangle=0</math>, then <math>\langle\mathcal{H}\rangle \geq \mathcal{E}_1</math>
where <math>\mathcal{E}_1</math> is the energy of the first excited state and <math>|{\varphi}_0\rangle</math> is the exact ground state of the Hamiltonian.
From eq. [[#4.1.3]] it is clear that if the above condition is satisfied, then <math> \mathcal{C}_0=0 </math>. Therefore, we can write the expectation value of the Hamiltonian as
<math>\langle\mathcal{H}\rangle =\sum_{n=1} |\mathcal{C}_n|^2\mathcal{E}_n \geq \mathcal{E}_1 \sum_{n=1} |\mathcal{C}_n|^2</math>
Thus, if we can find a suitable trial wavefunction that is orthogonal to the exact ground state wavefunction, then calculating the expectation value of the Hamiltonian gives an upper bound on the energy of the first excited state. The trouble is that we usually do not know the exact ground state (which is one reason we implement the variational principle in the first place). However, if the Hamiltonian is an even function, then the exact ground state will be an even function, and hence any odd trial function is a suitable candidate for the first excited state wavefunction.
===A Special Case where The Trial Functions form a Subspace===
Assume that we choose for the trial kets the set of kets belonging to a vector subspace <math>\mathcal{F}</math> of <math>\mathcal{E}</math>. In this case, the variational method reduces to the resolution of the eigenvalue equation of the Hamiltonian <math>\mathcal{H}</math> inside <math>\mathcal{F}</math>, and no longer in all of <math>\mathcal{E}</math>.
To see this, we simply apply the argument of Sec. <math>\text{4.2}</math>, limiting it to the kets <math>|\psi\rangle</math> of the subspace <math>\mathcal{F}</math>. The maxima and minima of <math>\langle\mathcal{H}\rangle</math>, characterized by <math>\delta \langle\mathcal{H}\rangle=0</math>, are obtained when <math>|\psi\rangle</math> is an eigenvector of <math>\mathcal{H}</math> in <math>\mathcal{F}</math>. The
corresponding eigenvalues constitute the variational method approximation for the true eigenvalues of <math>\mathcal{H}</math> in <math>\mathcal{E}</math>.
We stress the fact that the restriction of the eigenvalue equation of <math>\mathcal{H}</math> to a subspace <math>\mathcal{F}</math> of the state space <math>\mathcal{E}</math> can considerably simplify its solution. However, if <math>\mathcal{F}</math> is badly chosen, it can also yield results which are rather far from true eigenvalues and eigenvectors of <math>\mathcal{H}</math> in <math>\mathcal{E}</math>. The subspace <math>\mathcal{F}</math> must therefore be chosen so as to simplify the problem enough to make it soluble, without too greatly altering the physical reality. In certain cases, it is possible to reduce the study of a complex system to that of a two-level system, or at least, to that of a system of a limited number of levels. Another important example of this procedure is the method of the linear combination of atomic orbitals, widely used in molecular physics. This method essentially consists of the determination of the wave functions of electrons in a molecule in the form of linear combination of the eigenfunctions associated with the various atoms which constitute the molecule, treated as if they were isolated. It, therefore, limits the search for the molecular states to a subspace chosen using physical criteria. Similarly, in complement, we shall choose as a trial wave function for an electron in a solid a linear combination of atomic orbitals relative to the various ions which constitute this solid.
===Applications of Variational Method===
==== Harmonic Potential====
Armed with the variational method let us apply it first to a simple Hamiltonian. Consider the following Hamiltonian with harmonic potential whose eigenvalues and eigenfunctions are known exactly. We will determine how close we can get with a suitable trial function.
<math>\mathcal{H}=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+\frac{1}{2}m\omega^2x^2 \qquad \qquad\ \qquad \qquad \qquad \qquad (4.5.1.1)</math>
The above Hamiltonian is even; therefore, to find an upper bound on the ground state energy we should use an even trial function. Let us consider the following trial state with one parameter <math>\alpha</math>:
<math>\psi(x)=A e^{-\alpha x^2}\qquad;\qquad\alpha>0  \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4.5.1.2)</math>
where <math>A \,\!</math> is the normalization constant.
Let us normalize the trial wavefunction to be unity
<math>1=\langle\psi|\psi\rangle= |A|^2\int_{-\infty}^{\infty}e^{-2\alpha x^2} dx =|A|^2\sqrt{\frac{\pi}{2\alpha}} \Rightarrow A=\left[ \frac{2\alpha}{\pi}\right] ^{1/4}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(4.5.1.3)</math>
While,
<math>\langle\mathcal{H}\rangle= |A|^2 \int_{-\infty}^{\infty} dx\, e^{-\alpha x^2}\left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+\frac{1}{2}m\omega^2x^2 \right] e^{-\alpha x^2}=\frac{\hbar^2\alpha}{2m}+\frac{m\omega^2}{8\alpha}\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad(4.5.1.4)</math>
Minimizing the expectation value with respect to the parameter we get,
<math>\frac{\partial\langle\mathcal{H}\rangle}{\partial\alpha}=  \frac{\hbar^2}{2m}-\frac{m\omega^2}{8\alpha^2}=0 \Rightarrow \alpha=\frac{m\omega}{2\hbar}</math>
Putting this value back in the expectation value, we get
<math>\langle \mathcal{H}\rangle_{min}=\frac{1}{2}\hbar \omega</math>
Due to our judicious selection of trial wavefunction, we were able to find the exact ground state energy. If we want to find the first excited state, a suitable candidate for trial wavefunction would be an odd function.
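A minimal numerical sketch of this calculation, in units <math>\hbar=m=\omega=1</math> (an assumption for illustration), where the expectation value above reduces to <math>\langle\mathcal{H}\rangle(\alpha)=\alpha/2+1/(8\alpha)</math>:

```python
# Sketch of eq. (4.5.1.4) and the minimization that follows, in units
# hbar = m = omega = 1 (an assumption for illustration):
# <H>(alpha) = alpha/2 + 1/(8*alpha).
def energy(alpha):
    return alpha / 2 + 1 / (8 * alpha)

alphas = [i / 10000 for i in range(1, 20000)]
a_best = min(alphas, key=energy)
print(a_best, energy(a_best))   # 0.5 0.5 -- i.e. alpha = m*omega/(2*hbar)
                                # and <H>_min = hbar*omega/2
```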
'''Rational wave functions'''
The calculations of the previous sections enabled us to familiarize ourselves with the variational method, but they do not really allow us to judge its effectiveness as a method of approximation, since the families chosen always included the exact wave function. Therefore, we shall now choose trial functions of a totally different type, for example
<math>\psi_{a}(x)=\frac{1}{x^2+a}\qquad; \quad a>0 </math>
A simple calculation then yields:
<math>\langle \psi_{a}|\psi_{a}\rangle=\int_{-\infty}^{+\infty}\frac{dx}{\left(x^2+a\right)^2}=\frac{\pi}{2a\sqrt{a}}</math>
and finally:
<math>\langle\mathcal{H}\rangle (a)=\frac{\hbar^2}{4m}\frac{1}{a}+\frac{1}{2}m \omega^2 a</math>
The minimum value of this function is obtained for:
<math>a=a_{0}=\frac{1}{\sqrt{2}}\frac{\hbar}{m \omega}</math>
and is equal to:
<math>\langle\mathcal{H}\rangle (a_{0})=\frac{1}{\sqrt{2}}\, \hbar \omega </math>
The minimum value is therefore equal to <math>\sqrt{2}</math> times the exact ground state energy <math>\hbar \omega/2</math>. To measure the error committed, we can calculate the ratio of <math>\langle\mathcal{H}\rangle (a_{0})-\hbar \omega/2</math> to the energy quantum
<math>\hbar \omega</math>:
<math>\frac{\langle\mathcal{H}\rangle (a_{0})-\frac{1}{2} \hbar \omega}{\hbar \omega}=\frac{\sqrt{2}-1}{2} \simeq 20 \%</math>
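The same numbers can be reproduced in units <math>\hbar=m=\omega=1</math> (an assumption for illustration), where <math>\langle\mathcal{H}\rangle(a)=1/(4a)+a/2</math>:

```python
import math

# The rational-trial-function result in units hbar = m = omega = 1 (an
# assumption for illustration): <H>(a) = 1/(4a) + a/2, minimized at
# a0 = 1/sqrt(2), giving sqrt(2) times the exact energy 1/2.
def energy(a):
    return 1 / (4 * a) + a / 2

a0 = 1 / math.sqrt(2)
relative_error = energy(a0) - 0.5          # in units of hbar*omega
print(round(energy(a0), 4))                # 0.7071, i.e. 1/sqrt(2)
print(f"{100 * relative_error:.0f}%")      # ~21%, i.e. (sqrt(2)-1)/2
```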
''' Discussions '''
The example of the previous section shows that it is easy to obtain the ground state energy of a system, without significant error, starting with arbitrary chosen trial kets. This is one of the principal advantages of the variational method. Since the exact eigenvalue is a minimum of the mean value <math>\langle\mathcal{H}\rangle </math>, it is not surprising that <math>\langle\mathcal{H}\rangle </math> does not vary much near this minimum.
On the other hand, as the same reasoning shows, the "approximate" state can be rather different from the true eigenstate. Thus, in the example of the previous section, the wave function <math>\frac{1}{\left(x^2+a_{0}\right)}</math> decreases too rapidly for small values of <math>x \!</math> and much too slowly when <math>x \!</math> becomes large. The table below gives quantitative support for this qualitative assertion. It gives, for various values of <math>x^2 \!</math>, the values of the exact normalized eigenfunction:
<math>\varphi_{0}(x)=\left(\frac{2 \alpha_{0}}{\pi}\right)^{1/4} e^{-\alpha_{0} x^2}</math>
and of the approximate normalized eigenfunction of the wave function <math>\frac{1}{\left(x^2+a_{0}\right)}</math> :
<math>\sqrt{\frac{2}{\pi}} (a_{0})^{3/4} \psi_{a_{0}}(x) = \sqrt{\frac{2}{\pi}} \frac{(a_{0})^{3/4}}{x^2+a_{0}} = \sqrt{\frac{2}{\pi}} \left(2 \sqrt{2} \alpha_{0} \right)^{1/4} \frac{1}{1+2\sqrt{2} \alpha_{0} x^2}</math>,
where <math> a_0 = \frac{1}{2\sqrt2 \alpha_0} </math>.
{| class="wikitable" style="text-align:center"; border="1"
! <math> x\sqrt{\alpha_{0}}</math> !! <math>\left(\frac{2\alpha_0}{\pi}\right)^{1/4}e^{-\alpha_0 x^2}</math> !! <math>\sqrt{\frac{2}{\pi}} \frac{\left(2 \sqrt{2} \alpha_0
\right)^{1/4}}{1+2\sqrt{2} \alpha_{0} x^2}</math>
|-
| 0 || 0.893 || 1.034
|-
| 1/2 || 0.696 || 0.606
|-
| 1 || 0.329 || 0.270
|-
| 3/2 || 0.094 || 0.141
|-
| 2 || 0.016 || 0.084
|-
| 5/2 || 0.002 || 0.055
|-
| 3 || 0.0001 || 0.039
|}
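The table entries can be reproduced numerically; setting <math>\alpha_0=1</math> loses no generality since the entries depend only on <math>x\sqrt{\alpha_0}</math>:

```python
import math

# Reproduces the table above; alpha_0 = 1 is taken without loss of
# generality since the entries depend only on x*sqrt(alpha_0).
a0 = 1.0

def exact(x):
    """Exact normalized ground state eigenfunction."""
    return (2 * a0 / math.pi) ** 0.25 * math.exp(-a0 * x**2)

def approx(x):
    """Approximate normalized rational eigenfunction."""
    return math.sqrt(2 / math.pi) * (2 * math.sqrt(2) * a0) ** 0.25 \
           / (1 + 2 * math.sqrt(2) * a0 * x**2)

for x in [0, 0.5, 1, 1.5, 2, 2.5, 3]:
    print(f"{x:4}  {exact(x):6.3f}  {approx(x):6.3f}")
# values agree with the table entries to within rounding
```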
Therefore, it is necessary to be very careful when physical properties other than the energy of the system are calculated using the approximate state
obtained from the variational method. The validity of the result obtained varies enormously depending on the physical quantity under consideration. In the particular problem which we are studying here, we find, for example, that the approximate mean value of the operator <math>X^2 \!</math> is not very different from the exact value:
<math>\frac{\langle \psi_{a_{0}}|X^2|\psi_{a_{0}}\rangle}{\langle \psi_{a_{0}}|\psi_{a_{0}}\rangle}=\frac{1}{\sqrt{2}}\frac{\hbar}{m \omega}</math>
which is to be compared with <math>\hbar/{2 m \omega}</math>. On the other hand, the mean value of <math>X^4</math> is infinite for the approximate normalized eigenfunction, while it is, of course, finite for the real wave function. More generally, the table shows that the approximation will be very poor for all properties which depend strongly on the behavior of the wave function for <math>x \gtrsim 2/\sqrt{\alpha_{0}}</math>.
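The quoted value of <math>\langle X^2\rangle</math> (and the divergence of <math>\langle X^4\rangle</math>) can be checked symbolically; the sketch below assumes the sympy library is available:

```python
import sympy as sp

# Symbolic check of the <X^2> value quoted above: for psi_a = 1/(x^2+a),
#   <X^2> = Int x^2 psi_a^2 dx / Int psi_a^2 dx = a,
# so at a = a0 = hbar/(sqrt(2)*m*omega) this reproduces the text.
x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)
num = sp.integrate(x**2 / (x**2 + a)**2, (x, -sp.oo, sp.oo))  # pi/(2*sqrt(a))
den = sp.integrate(1 / (x**2 + a)**2, (x, -sp.oo, sp.oo))     # pi/(2*a**(3/2))
print(sp.simplify(num / den))  # a
# <X^4> involves Int x^4/(x^2+a)^2 dx, whose integrand tends to 1 at
# large |x|, so that integral (and hence <X^4>) diverges.
```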
The drawback we have just mentioned is all the more serious as it is very difficult, if not impossible, to evaluate the error in a variational calculation if we do not know the exact solution of the problem (and, of course, if we use the variational method, it is because we do not know this exact solution).
The variational method is therefore a very flexible approximation method, which can be adapted to very diverse situations and which gives great scope to physical intuition in the choice of trial kets. It gives good values for the energy rather easily, but the approximate state vectors may present
certain completely unpredictable erroneous features, and we can not check these errors. This method is particularly valuable when physical arguments give us an idea of the qualitative or semi-qualitative form of the solutions.
Here is another problem related to the energy of the ground state and first excited state of a harmonic potential.
-[http://wiki.physics.fsu.edu/wiki/index.php/Chapter4problem problem1]
==== Delta Function Potential====
As another example, let us consider the delta function potential
<math> H = \frac{-\hbar^2}{2m}\frac{d^2}{dx^2} - \alpha \delta(x) </math>
and use a Gaussian trial wavefunction, <math> \Psi(x) = Ae^{-bx^2} </math>, with <math>b \!</math> a variational parameter.
First, normalizing:
<math> 1=|A|^2 \int_{-\infty}^{\infty} e^{-2bx^2}dx = |A|^2 \sqrt{\frac{\pi}{2b}} \Rightarrow A=\left(\frac{2b}{\pi}\right)^{1/4} </math>
We calculate <math>\langle T\rangle</math> and <math>\langle V\rangle</math> separately:
<math> \langle T\rangle= -\frac{\hbar^2}{2m}|A|^2\int_{-\infty}^{\infty} e^{-bx^2}\frac{d^2}{dx^2}\left(e^{-bx^2}\right)dx = \frac{\hbar^2b}{2m} </math>
<math> \langle V\rangle=-\alpha|A|^2\int_{-\infty}^{\infty} e^{-2bx^2}\delta(x)dx = -\alpha\sqrt{\frac{2b}{\pi}}</math>
Evidently, <math> \langle H\rangle = \frac{\hbar^2b}{2m}-\alpha\sqrt{\frac{2b}{\pi}} </math>.
Minimizing with respect to the parameter <math>b \!</math>:
<math> \frac{d}{db}\langle H\rangle = \frac{\hbar^2}{2m}-\frac{\alpha}{\sqrt{2\pi b}} = 0 </math>
<math>b = \frac{2m^2 \alpha^2}{\pi \hbar^4} </math>
So, plugging <math>b \!</math> back into the expression for the expectation value, we get
<math> \langle H\rangle_{min}=-\frac{m \alpha^2}{\pi \hbar^2} </math>
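A numerical sketch in units <math>\hbar=m=\alpha=1</math> (an assumption for illustration), which also compares the bound with the exact delta-well ground state energy <math>E=-m\alpha^2/2\hbar^2</math>:

```python
import math

# Numerical sketch in units hbar = m = alpha = 1 (an assumption for
# illustration): <H>(b) = b/2 - sqrt(2b/pi), minimized at b = 2/pi.
# The bound -1/pi lies above the exact delta-well ground state energy
# -m*alpha^2/(2*hbar^2) = -1/2, as eq. (4.1.2) requires.
def energy(b):
    return b / 2 - math.sqrt(2 * b / math.pi)

b_min = 2 / math.pi                 # from d<H>/db = 0
print(round(energy(b_min), 4))      # -0.3183, i.e. -1/pi
```

Here the Gaussian trial family does not contain the exact (cusped) ground state, so the variational estimate recovers only about <math>2/\pi\approx 64\%</math> of the exact binding energy.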
====Ground State of Helium atom====
Let us use the variational principle to determine the ground state energy of a helium atom with a stationary nucleus. The helium atom has two electrons and two protons; for simplicity, we ignore the presence of the neutrons. We also treat the atom non-relativistically and ignore spin.
The Hamiltonian can be written as
<span id="4.5.3.1"></span>
<math>
\mathcal{H} =  -\frac{\hbar^2}{2m}\left(\boldsymbol\nabla_1^2+\boldsymbol \nabla_2^2\right)- \frac{Ze^2}{|\vec{r}_1|} - \frac{Ze^2}{|\vec{r}_2|}+\frac{e^2}{|\vec{r}_1-\vec{r}_2|}, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.1)}</math>
where <math>\vec{r}_1,\vec{r}_2 \!</math> are the coordinates of the two electrons.
If we ignore the mutual interaction term, then the wavefunction will be the product of the two individual electron wavefunction which in this case is that of a hydrogen like atom. Therefore, the ground state wavefunction can be written as
<span id="4.5.3.2"></span>
<math> \psi_0(\vec{r}_1,\vec{r}_2)=\psi_{100}( \vec{r}_1)\psi_{100}(\vec{r}_2),\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.2)}</math>
where we ignored spin and
<span id="4.5.3.3"></span>
<math>\psi_{100}\left(\vec{r}_{1,2}\right)=\left(\frac{Z^3}{\pi{a_0}^3}\right)^{1/2} \exp\left[-{\frac{Z |\vec{r}_{1,2}|}{a_0}}\right],
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.3)}</math>
where <math> a_0 = \frac{\hbar^2}{me^2}. \! </math>
Therefore we can write
<span id="4.5.3.4"></span>
<math>\psi_{0}\left(\vec{r}_1,\vec{r}_2\right)=\frac{Z^3}{\pi{a_0}^3} \exp\left[-{\frac{Z ( |\vec{r}_1|+|\vec{r}_2|)}{a_0}}\right]. \qquad \qquad\qquad \qquad \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.4)}</math>
We can write the lowest unperturbed energy for this situation with <math>Z=2\!</math> as
<span id="4.5.3.5"></span>
<math> E_0' = 2 \left( -\frac{m(Ze^2)^2}{2\hbar^2}\right) \simeq 2\left(-Z^2 \times 13.6 eV \right) = - 8 \times 13.6 eV = -108.8 eV. \qquad \qquad \mbox{(4.5.3.5)}      </math>
The first order correction to the energy is
<math>
\Delta E = \left\langle V \left( \vec{r}_1, \vec{r}_2 \right) \right\rangle
        = \int \int \left| \psi_0\left(\vec{r}_1, \vec{r}_2 \right) \right|^2
        \frac{e^2}{\left| \vec{r}_1 - \vec{r}_2 \right|} d^3 r_1 d^3 r_2
        = \frac{5Ze^2}{8a_0} = \frac{5}{2} \times 13.6 eV = 34 eV. 
</math>
Therefore, the ground state energy in first approximation is
<math>
E_0 = E_0' + \Delta E = - 108.8 eV + 34 eV = - 74.8 eV
\!</math>
However, the ground state energy has been experimentally determined accurately to be <math> -78.86 eV \!</math>. Therefore, our model is not a good one. Now, let us apply the variational method using a trial wavefunction. The trial wavefunction we are going to use is [[#4.5.3.2]] itself, but we will allow <math>Z \!</math> to remain a free parameter. This is physically reasonable, since each electron screens the nuclear charge seen by the other electron, and hence the effective atomic number is less than <math> 2 \!</math>.
We can manipulate the Hamiltonian by introducing a screening parameter <math> \sigma \! </math> and rewriting each nuclear attraction term <math>-\frac{Ze^2}{|\vec{r}|}</math> as <math>-\frac{(Z-\sigma) e^2}{|\vec{r}|}-\frac{\sigma e^2}{|\vec{r}|}</math>, with the trial wavefunction [[#4.5.3.2]] taken with effective charge <math>Z-\sigma \!</math>. So the Hamiltonian becomes
<math>
\mathcal{H}= -\frac{\hbar^2}{2m}(\boldsymbol\nabla_1^2+\boldsymbol \nabla_2^2)- \frac{\sigma e^2}{|\vec{r}_1|}-\frac{(Z-\sigma) e^2}{|\vec{r}_1|}-\frac{\sigma e^2}{|\vec{r}_2|}-\frac{(Z-\sigma) e^2}{|\vec{r}_2|}+\frac{e^2}{|\vec{r}_1-\vec{r}_2|}
</math>
Now we can use the variational principle. The expectation value of the Hamiltonian is
<span id="4.5.3.6"></span>
<math>
\begin{align}
\langle \mathcal{H}\rangle
= & \int{d^3r_1}\int{d^3r_2} \psi^*_{100}(\vec{r}_1)\psi^*_{100}(\vec{r}_2) \\
& \times \left[-\frac{\hbar^2}{2m}\boldsymbol\nabla_1^2-\frac{(Z-\sigma) e^2}{|\vec{r}_1|}- \frac{\sigma e^2}{|\vec{r}_1|} -\frac{\hbar^2}{2m}\boldsymbol\nabla_2^2-\frac{(Z-\sigma) e^2}{|\vec{r}_2|}-\frac{\sigma e^2}{|\vec{r}_2|}+\frac{e^2}{|\vec{r}_1-\vec{r}_2|}\right] \psi_{100}( \vec{r}_1)\psi_{100}(\vec{r}_2)\qquad \mbox{(4.5.3.6)}
\end{align}
</math>
The first two terms give
<math>E_0^{(1)}(\sigma)=- \frac{(Z-\sigma)^2 me^4}{2\hbar^2}.</math>
The fourth and fifth terms give the same. The third and sixth terms give
<math>E_0^{(2)}(\sigma)=-{\sigma e^2} \left\langle\frac{1}{r_1}\right\rangle = -{\sigma e^2}\frac{\left(Z-\sigma\right)}{a_0} = - \frac{me^4}{\hbar^2} \sigma \left(Z-\sigma\right).</math>
The seventh term will give an expectation value of
<math> E_0^{(3)}(\sigma)= \frac{5 \left(Z-\sigma\right) m e^4}{8\hbar^2}. </math>
Adding all this we get,
<span id="4.5.3.7"></span>
<math>
\begin{align}
E_0(\sigma) &= -\frac{m e^4}{\hbar^2}\left( \left(Z - \sigma\right)^2 + 2 \sigma\left(Z-\sigma\right) - \frac{5 \left(Z - \sigma\right)}{8}\right) \\
& = -\frac{e^2}{a_0}\left( Z^2 - \frac{5}{8}Z + \frac{5}{8}\sigma - \sigma^2 \right),
\end{align}  \qquad \qquad \qquad \qquad \qquad \mbox{(4.5.3.7)}
</math>
where <math> a_0 = \frac{\hbar^2}{me^2}. \!</math>
'''Exercise 18.22 of E. Merzbacher's Quantum Mechanics (3rd Ed.)'''
----
Since <math> \sigma \! </math> in [[#4.5.3.7]] is the variational parameter, we can minimize the energy, <math> E_0(\sigma) \!</math> with respect to <math> \sigma \! </math>. That is,
<math>
\frac{\partial E_0(\sigma)}{\partial \sigma} = -\frac{e^2}{a_0} \left( \frac{5}{8} - 2 \sigma \right) = 0.
</math>
This will give us
<math> \sigma = \frac{5}{16}.</math>
Therefore, putting this value into [[#4.5.3.7]], we have
<math>
E_0\left(\frac{5}{16}\right) = - \left(Z-\frac{5}{16}\right)^2 \frac{e^2}{a_0}
= - \frac{Z_{\mbox{eff}}^2 e^2}{a_0},
</math>
where <math> Z_{\mbox{eff}} = \left( Z - \sigma \right) = \left( Z - \frac{5}{16} \right). </math>
Putting <math> Z=2 \!</math>, we get <math> Z_{\mbox{eff}} = 1.6875. \!</math>
Substituting <math> Z_{\mbox{eff}} \! </math> for <math> Z \! </math> in [[#4.5.3.5]], we get
<math> E_0 = -77.46 eV \!</math>
which is very close to the experimental value of <math> \sim - 78.86 eV \!</math>. Thus, the variational principle gives a ground state energy of the helium atom in good agreement with experiment.
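The whole variational calculation above can be reproduced numerically from [[#4.5.3.7]]. A short sketch, using <math>e^2/a_0 = 27.2</math> eV:

```python
import numpy as np

Ry = 13.6   # eV; e^2/a_0 = 2 Ry = 27.2 eV
Z = 2

def E0(sigma):
    # Eq. (4.5.3.7): E0 = -(e^2/a0)(Z^2 - 5Z/8 + 5 sigma/8 - sigma^2)
    return -2.0 * Ry * (Z**2 - 5.0*Z/8 + 5.0*sigma/8 - sigma**2)

sigma = np.linspace(0.0, 1.0, 100001)
s_min = sigma[np.argmin(E0(sigma))]
print(s_min)                         # ~ 5/16 = 0.3125
print(E0(5.0/16))                    # ~ -77.46 eV
print(-2.0 * (Z - 5.0/16)**2 * Ry)   # same value via -Z_eff^2 e^2/a0
```

The grid minimum lands at <math>\sigma = 5/16</math> and the two expressions for the minimized energy agree, confirming the algebra leading to <math>Z_{\mbox{eff}} = Z - 5/16</math>.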
----
A sample problem related to the '''Rayleigh–Ritz variational principle''': '''Exercise 18.23''' of ''Quantum Mechanics'', 3rd Ed., by ''Eugen Merzbacher'': [http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/Rayleigh_Ritz_Variational_Principle_Ex]
A problem related to the variational principle and non-degenerate perturbation theory:
[http://wiki.physics.fsu.edu/wiki/index.php/Chapter4problem2 problem]
=='''<span style="color:#2B65EC">Spin</span>''' ==
==='''<span style="color:#2B65EC">General Theory of Angular Momentum </span>''' ===
Up to now we have been working with the rotational degree of freedom using the orbital angular momentum. Namely we use the operator <math>\mathbf L</math> (the generator of rotations in <math>\mathbb{R}^{3}</math>) to construct wave functions which carry the rotational information of the system.
To clarify, all that we have done consists of quantizing everything that we know from classical mechanics. Specifically:
* Invariance of time translation    <math> \rightarrow </math> Conservation of the Hamiltonian
* Invariance of Spatial translation  <math> \rightarrow </math> Conservation of the Momentum   
* Invariance of Spatial Rotations    <math> \rightarrow </math> Conservation of Orbital Angular Momentum
However, nature shows that there are other kinds of degrees of freedom that have no classical analog. The first one was observed in 1922 by Stern and Gerlach (see Cohen-Tannoudji Chap 4). They saw that the electron has one additional angular momentum degree of freedom. This degree of freedom was called "spin 1/2", since measuring it yields just two possible results: up and down. It is interesting to note that, from the algebra of angular momenta, spins must take on either half-integer or integer values; there is no continuous range of possible spins. (For example, one will never find a spin 2/3 particle.)
Spin 1/2 is the first truly revolutionary discovery of quantum mechanics. The properties of this physical quantity in itself, the importance of its existence, and the universality of its physical effects were totally unexpected.
The physical phenomenon is the following. In order to describe completely the physics of an electron, one cannot use only its degrees of freedom corresponding to translations in space. One must take into account the existence of an internal degree of freedom that corresponds to an intrinsic angular momentum. In other words, the electron, which is a pointlike particle, “spins” on itself. We use quotation marks for the word “spins”. One must be cautious with words, because this intrinsic angular momentum is purely a quantum phenomenon. It has no classical analogue, except that it is an angular momentum.
One can use analogies and imagine that the electron is a sort of quantum top. But we must keep in mind the word “quantum”. The electron is a pointlike object down to distances of <math>10^{-18}</math> m. One must admit that a pointlike object can possess an intrinsic angular momentum.
The goal of this section is to extend the notion of orbital angular momentum to a general case. For this we use the letter <math>\mathbf{J}</math> for this abstract angular momentum. As we will see, orbital angular momentum is just a simple case of the general angular momentum.
'''Experimental results'''
Experimentally, this intrinsic angular momentum, called spin, has the following manifestations (we do not enter into any technical detail):
1. If we measure the projection of the spin along any axis, whatever the state of the electron, we find either of two possibilities:
<math>\frac{\hbar}{2}</math>    or    <math>-\frac{\hbar}{2}</math> 
There are two and only two possible results for this measurement.
2. Consequently, if one measures the square of any component of the spin, the result is <math>\frac{\hbar^{2}}{4}</math> with probability one.
3. Therefore, a measurement of the square of the spin <math>S^{2}=S_{x}^{2}+S_{y}^{2}+S_{z}^{2}</math> gives the result
<math>S^{2}=\frac{3\hbar^{2}}{4}</math>
4. A system that has a classical analogue, such as a rotating molecule, can rotate more or less rapidly on itself. Its intrinsic angular momentum can take various values. However, for the electron, as well as for many other particles, it is an amazing fact that the square of its spin is always the same. It is fixed: all electrons in the universe have the same value of the square of their spin, <math>S^{2}=\frac{3\hbar^{2}}{4}</math>. The electron “spins” on itself, but it is not possible to make it spin faster.
One can imagine that people did not come to that conclusion immediately. The discovery of the spin 1/2 of the electron is perhaps the most breathtaking story of quantum mechanics.
The elaboration of the concept of spin was certainly the most difficult step of all quantum theory during the first quarter of the 20th century. It is a real suspense that could be called the various appearances of the number 2 in physics. There are many numbers in physics; it is difficult to find a simpler one than that.
And that number 2 appeared in a variety of phenomena and enigmas that seemed to have nothing to do a priori with one another, or to have a common explanation. The explanation was simple, but it was revolutionary. For the first time people were facing a purely quantum effect, with no classical analogue. Nearly all the physical world depends on this quantity, the spin 1/2.
The challenge existed for a quarter of a century (since 1897). Perhaps, there was never such a long collective effort to understand a physical structure. It is almost impossible to say who discovered spin 1/2, even though one personality dominates, Pauli, who put all his energy into finding the solution.
We show that in order to manipulate spin 1/2 and understand the technicalities, we essentially know everything already; we have done it more or less with two-state systems.
'''Note on Hund's Rules'''
Hund's rules determine the ground state of an atom with a given configuration in the <math> L-S \!</math> coupling approximation. The steps are listed below.
1. Choose a maximum value of <math> S \!</math> (total spin) consistent with Pauli exclusion principle
2. Choose a maximum value of <math> L \!</math> (angular momentum)
3. If the shell is less than half full, choose <math>J=J_{min} =|L-S| \!</math>
4. If the shell is more than half full, choose <math>J=J_{max} =L+S \!</math>
For example, consider silicon: <math>1s^2 2s^2 2p^6 3s^2 3p^2 \!</math>. Then <math>S= \dfrac{1}{2} + \dfrac{1}{2} = 1 = S_{max} \!</math> for the 2 spin-<math>\dfrac{1}{2}\!</math> valence electrons. The maximum consistent angular momentum is <math> L=1 \!</math>, and since the shell is less than half full, <math> J=|L-S|=0 \!</math>.
Since the operators obey the same algebra (the same commutation relations) as in the case of orbital angular momentum, we can easily extend everything:
:<math>\begin{align}
\mathbf{J}^2&|j,m \rangle = \hbar^{2} j(j+1)|j,m \rangle\\
J_z & |j,m \rangle = \hbar m|j,m \rangle;\;\;\;\;\;-j\le m\le j  \\
J_{\pm} & |j,m \rangle = \hbar \sqrt{j(j+1)-m(m\pm 1)}|j,m \pm 1 \rangle\\
\end{align}</math>
One important feature is that the allowed values for <math>m\!</math> are integers or half-integers (See Shankar). Therefore the possible values for <math>j\!</math> are
:<math>\begin{align}
j=0,\;1/2,\;1,\;3/2,\;2,\;5/2...
\end{align}</math>
We can construct the following table:
:<math>\begin{array}{c|r|r|r|r|r|r|r}
j \rightarrow& 0 & 1/2 & 1 & 3/2 & 2 & 5/2 & ...\\
\hline
m& 0 & 1/2 & 1 & 3/2 & 2 & 5/2 & ...\\
\downarrow&  &-1/2 & 0 & 1/2 & 1 & 3/2 &    \\
&  &    &-1 &-1/2 & 0 & 1/2 &    \\
&  &    &  &-3/2 &-1 &-1/2 &    \\
&  &    &  &    &-2 &-3/2 &    \\
&  &    &  &    &  &-5/2 &    \\
\hline
&(2\cdot 0+1)&(2\cdot \frac{1}{2}+1)&(2\cdot1+1)&(2\cdot \frac{3}{2}+1)&(2\cdot2+1)&(2\cdot \frac{5}{2}+1)&(2\cdot j+1)    \\
&=1          &=2                    &=3        &=4                    &=5        &=6                    &\\
\hline
\end{array}</math>
Each of these columns represent subspaces of the basis <math>|j,m \rangle \!</math> that diagonalize <math>\mathbf{J}^{2}\!</math> and <math>J_z \!</math>. For orbital angular momentum the allowed values for <math>m\!</math> are integers. This is due to the periodicity of the azimuthal angle.
Electrons have an additional degree of freedom which takes the values "up" or "down". Physically, this phenomenon appears when the electron is exposed to magnetic fields. Since the coupling with the magnetic field is via the magnetic moment, it is natural to consider this degree of freedom as an internal angular momentum. Since there are just 2 states, this angular momentum is represented by the subspace <math>j=1/2\!</math>.
It is important to see explicitly the representation of this group; namely, we want to see the matrix elements of the operators <math>\mathbf{J}^2 \! </math>, <math> J_x \!</math>, <math>J_y \!</math> and <math>J_z \!</math>. The procedure is as follows:
* <math>\mathbf{J}^{2}</math> and <math>J_z \!</math> are diagonal since the basis are their eigenvectors.
* To find <math>J_x\!</math> and <math>J_y\!</math>, we use the fact that
:<math>\begin{align}
J_x&=\frac{1}{2}[J_+ + J_- ]\\
J_y&=\frac{1}{2i}[J_+ - J_- ]\\
\end{align}</math>
And the matrix elements of <math>J_{\pm} \!</math> are given by
:<math>\begin{align}
\langle j',m'|J_{\pm}|j,m\rangle &= \langle j',m'|\hbar \sqrt{j(j+1)-m(m\pm1)}|j,m\pm 1\rangle \\
&= \hbar \sqrt{j(j+1)-m(m\pm1)}\delta_{j' j} \delta_{m' m\pm 1} \\
\end{align}</math>
Let's find the representations for the subspaces <math>j=0,\frac{1}{2}\!</math>, and <math> 1 \!</math>.
'''Subspace <math>j=0</math>: (matrix 1x1)'''
* <math>\mathbf{J}^{2}=0 \!</math>
* <math>J_z=0 \!</math>
* <math>\langle 00|J_{\pm}|00\rangle =0 \;\;\;\rightarrow\;\;\;J_x=J_y=0 \!</math>
'''Subspace <math>j=1/2</math>: (matrix 2x2)'''
* <math>\mathbf{J}^{2}=
\begin{array}{r|c|c}
                  & |1/2,1/2\rangle      & |1/2,-1/2\rangle \\ \hline
\langle 1/2,1/2|  & \frac{3}{4}\hbar^{2} & 0                \\ \hline
\langle 1/2,-1/2|  &        0            &  \frac{3}{4}\hbar^{2} \\ \hline
\end{array}
=\frac{3}{4}\hbar^{2}
\begin{pmatrix}
  1 & 0 \\
  0 & 1
\end{pmatrix}
</math>
* <math>J_z=
\begin{array}{r|c|c}
                  & |1/2,1/2\rangle      & |1/2,-1/2\rangle \\ \hline
\langle 1/2,1/2|  & \frac{1}{2}\hbar & 0                \\ \hline
\langle 1/2,-1/2|  &        0            &  -\frac{1}{2}\hbar \\ \hline
\end{array}
=\frac{1}{2}\hbar
\begin{pmatrix}
  1 & 0 \\
  0 & -1
\end{pmatrix}
</math>
* The matrices for <math>J_+ \!</math> and <math> J_- \!</math> are given by
:<math>\begin{align}
J_+ & = \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-m(m+1)}\;\;\;\delta_{\frac{1}{2},\frac{1}{2}} \delta_{m',m+1}\\
&=\begin{array}{r|c|c}
                  & |1/2,1/2\rangle                                & |1/2,-1/2\rangle \\ \hline
\langle 1/2,1/2|  & 0                                              & \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-\left(-\frac{1}{2}\right)\left((-\frac{1}{2})+1\right)} \\ \hline
\langle 1/2,-1/2|  & 0                                              & 0                              \\ \hline
\end{array}
=\hbar
\begin{pmatrix}
  0 & 1 \\
  0 & 0\\
\end{pmatrix}
\end{align}</math>
:<math>\begin{align}
J_- & = \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-m(m-1)}\;\;\;\delta_{\frac{1}{2},\frac{1}{2}} \delta_{m',m-1}\\
&=\begin{array}{r|c|c}
                  & |1/2,1/2\rangle                                & |1/2,-1/2\rangle \\ \hline
\langle 1/2,1/2|  & 0                                              & 0\\ \hline
\langle 1/2,-1/2|  & \hbar \sqrt{\frac{1}{2}\left(\frac{1}{2}+1\right)-\left(\frac{1}{2}\right)\left((\frac{1}{2})-1\right)}  & 0                              \\ \hline
\end{array}
=\hbar
\begin{pmatrix}
  0 & 0 \\
  1 & 0\\
\end{pmatrix}
\end{align}</math>
* The matrices for <math>J_x \!</math> and <math> J_y \!</math> are given by
:<math>\begin{align}
J_x & = \frac{1}{2}[J_+ + J_- ]
=\frac{1}{2}\left [
\hbar\begin{pmatrix}
  0 & 1 \\
  0 & 0\\
\end{pmatrix}
+
\hbar\begin{pmatrix}
  0 & 0 \\
  1 & 0\\
\end{pmatrix}
\right ]\\
&=\frac{\hbar}{2}\begin{pmatrix}
  0 & 1 \\
  1 & 0\\
\end{pmatrix}\\
J_y & = \frac{1}{2i}[J_+ - J_- ]
=\frac{1}{2i}\left [
\hbar\begin{pmatrix}
  0 & 1 \\
  0 & 0\\
\end{pmatrix}
-
\hbar\begin{pmatrix}
  0 & 0 \\
  1 & 0\\
\end{pmatrix}
\right ]\\
&=\frac{\hbar}{2i}\begin{pmatrix}
  0 & 1 \\
  -1 & 0\\
\end{pmatrix}
=\frac{\hbar}{2}\begin{pmatrix}
  0 & -i \\
  i & 0\\
\end{pmatrix}
\end{align}</math>
'''Subspace <math>j=1</math>: (matrix 3x3)'''
* <math>\mathbf{J}^{2}=
\begin{array}{r|c|c|c}
                  & |1,1\rangle      & |1,0\rangle    & |1,-1\rangle  \\ \hline
\langle 1,1|      & 2\hbar^{2}      & 0              &0              \\ \hline
\langle 1,0|      &        0        & 2\hbar^{2}      &0              \\ \hline
\langle 1,-1|      &        0        & 0              &2\hbar^{2}    \\ \hline
\end{array}
=2\hbar^{2}
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 1 & 0\\
  0 & 0 & 1\\
\end{pmatrix}
</math>
* <math> J_z =
\begin{array}{r|c|c|c}
                  & |1,1\rangle      & |1,0\rangle    & |1,-1\rangle  \\ \hline
\langle 1,1|      & \hbar            & 0              &0              \\ \hline
\langle 1,0|      &        0        & 0              &0              \\ \hline
\langle 1,-1|      &        0        & 0              &-\hbar        \\ \hline
\end{array}
=\hbar
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 0 & 0\\
  0 & 0 & -1\\
\end{pmatrix}
</math>
* The matrices for <math>J_+ \!</math> and <math> J_- \!</math> are given by
:<math>\begin{align}
J_+ & = \hbar \sqrt{1(1+1)-m(m+1)}\;\;\;\delta_{1,1} \delta_{m',m+1}\\
&=\begin{array}{r|c|c|c}
                  & |1,1\rangle      & |1,0\rangle                & |1,-1\rangle  \\ \hline
\langle 1,1|      & 0                & \hbar \sqrt{1(1+1)-0(0+1)} &0                                      \\ \hline
\langle 1,0|      &        0        & 0                          &\hbar \sqrt{1(1+1)-(-1)((-1)+1)}              \\ \hline
\langle 1,-1|      &        0        & 0                          &0                                      \\ \hline
\end{array}
=\hbar
\begin{pmatrix}
  0 & \sqrt{2} & 0\\
  0 & 0        & \sqrt{2}\\
  0 & 0        & 0\\
\end{pmatrix}
\end{align}
</math>
:<math>\begin{align}
J_- & = \hbar \sqrt{1(1+1)-m(m-1)}\;\;\;\delta_{1,1} \delta_{m',m-1}\\
&=\begin{array}{r|c|c|c}
                  & |1,1\rangle              & |1,0\rangle                & |1,-1\rangle  \\ \hline
\langle 1,1|      & 0                        & 0                          &0              \\ \hline
\langle 1,0|      &\hbar \sqrt{1(1+1)-1(1-1)} & 0                          &0              \\ \hline
\langle 1,-1|      & 0                        &\hbar \sqrt{1(1+1)-0(0-1)}  &0              \\ \hline
\end{array}
=\hbar
\begin{pmatrix}
  0 & 0 & 0\\
  \sqrt{2} & 0        & 0\\
  0 & \sqrt{2}        &0 \\
\end{pmatrix}
\end{align}</math>
* The matrices for <math>J_x \!</math> and <math>J_y \!</math> are given by
:<math>\begin{align}
J_x & = \frac{1}{2}[J_+ + J_- ]
=\frac{1}{2}\left [
\hbar
\begin{pmatrix}
  0 & \sqrt{2} & 0\\
  0 & 0        & \sqrt{2}\\
  0 & 0        & 0\\
\end{pmatrix}
+
\hbar
\begin{pmatrix}
  0 & 0 & 0\\
  \sqrt{2} & 0        & 0\\
  0 & \sqrt{2}        &0 \\
\end{pmatrix}
\right ]\\
&=\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & 1 & 0\\
  1 & 0 & 1\\
  0 & 1 & 0 \\
\end{pmatrix}\\
J_y & = \frac{1}{2i}[J_+ - J_- ]
=\frac{1}{2i}\left [
\hbar
\begin{pmatrix}
  0 & \sqrt{2} & 0\\
  0 & 0        & \sqrt{2}\\
  0 & 0        & 0\\
\end{pmatrix}
-
\hbar
\begin{pmatrix}
  0 & 0 & 0\\
  \sqrt{2} & 0        & 0\\
  0 & \sqrt{2}        &0 \\
\end{pmatrix}
\right ]\\
&=\frac{\hbar}{\sqrt{2}i}
\begin{pmatrix}
  0 & 1 & 0\\
  -1 & 0 & 1\\
  0 & -1 & 0 \\
\end{pmatrix}
=\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & -i & 0\\
  i & 0 & -i\\
  0 & i & 0 \\
\end{pmatrix}
\end{align}</math>
'''Summary'''
The following table is the summary of above calculations. 
<math>
\begin{array}{r|c|c|c|c|c|c|c|c}
                  & j=0  & j=1/2 &j=1  \\ \hline
\mathbf{J}^{2}   
&0
&\frac{3}{4}\hbar^{2}
\begin{pmatrix}
  1 & 0 \\
  0 & 1
\end{pmatrix}
&2\hbar^{2}
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 1 & 0\\
  0 & 0 & 1\\
\end{pmatrix}\\ \hline
J_z
&0
&\frac{\hbar}{2}
\begin{pmatrix}
  1 & 0 \\
  0 & -1
\end{pmatrix}
&\hbar
\begin{pmatrix}
  1 & 0 & 0\\
  0 & 0 & 0\\
  0 & 0 & -1\\
\end{pmatrix}\\ \hline
J_x
&0
&\frac{\hbar}{2}
\begin{pmatrix}
  0 & 1 \\
  1 & 0
\end{pmatrix}
&\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & 1 & 0\\
  1 & 0 & 1\\
  0 & 1 & 0\\
\end{pmatrix}\\ \hline
J_y
&0
&\frac{\hbar}{2}
\begin{pmatrix}
  0 & -i \\
  i & 0
\end{pmatrix}
&\frac{\hbar}{\sqrt{2}}
\begin{pmatrix}
  0 & -i & 0\\
  i & 0 & -i\\
  0 & i & 0\\
\end{pmatrix}\\ \hline
\end{array}
</math>
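The construction above works for any <math>j\!</math>. A short numerical sketch (in units <math>\hbar = 1</math>) that builds <math>J_x\!</math>, <math>J_y\!</math> and <math>J_z\!</math> from the ladder-operator matrix elements and checks the commutation relation and the value of <math>\mathbf{J}^2\!</math>:

```python
import numpy as np

def angular_momentum_matrices(j):
    """Jx, Jy, Jz (hbar = 1) in the |j,m> basis ordered m = j, ..., -j."""
    m = np.arange(j, -j - 1.0, -1.0)
    Jz = np.diag(m).astype(complex)
    mp = m[1:]  # m values of the columns that J+ raises
    # <j,m+1|J+|j,m> = sqrt(j(j+1) - m(m+1)) sits on the superdiagonal
    Jp = np.diag(np.sqrt(j*(j + 1) - mp*(mp + 1)), k=1).astype(complex)
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

for j in (0.5, 1.0, 1.5):
    Jx, Jy, Jz = angular_momentum_matrices(j)
    assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)            # [Jx,Jy] = i Jz
    J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
    assert np.allclose(J2, j*(j + 1) * np.eye(int(2*j + 1)))  # J^2 = j(j+1)
print("all subspaces check out")
```

For <math>j=1/2\!</math> and <math>j=1\!</math> this reproduces the matrices in the summary table.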
==='''<span style="color:#2B65EC">Spin 1/2 Angular Momentum</span>''' ===
Many particles, such as the electron, proton and neutron, exhibit an intrinsic angular momentum which, unlike orbital angular momentum, has no relation to the spatial degrees of freedom. These are called spin 1/2 particles. An important concept about spin is that it is a purely quantum mechanical construct, with no classical analog, and it cannot be described by a differential operator. The angular momentum of a stationary spin 1/2 particle is found to be quantized to <math>\pm\frac{\hbar}{2}</math> regardless of the direction of the axis chosen to measure it. This means that there is a vector operator <math>\vec{S}=(S_x, S_y, S_z)</math> whose projection along an arbitrary axis <math>\hat{m}</math> satisfies the following equations:
<math>\vec{S}\cdot\hat{m}|\hat{m}\uparrow\rangle = \frac{\hbar}{2}|\hat{m}\uparrow\rangle</math>
<math>\vec{S}\cdot\hat{m}|\hat{m}\downarrow\rangle = -\frac{\hbar}{2}|\hat{m}\downarrow\rangle</math>
<math>|\hat{m}\uparrow\rangle</math> and <math>|\hat{m}\downarrow\rangle</math> form a complete basis, which means that any state <math>|\hat{n}\uparrow\rangle</math> or <math>|\hat{n}\downarrow\rangle</math> with a different quantization axis can be expanded as a linear combination of <math>|\hat{m}\uparrow\rangle</math> and <math>|\hat{m}\downarrow\rangle</math>.
The spin operator obeys the standard angular momentum commutation relations
<math>[S_{\mu}, S_{\nu}]=i\hbar\epsilon_{\mu\nu\lambda}S_{\lambda}\Rightarrow [S_{x}, S_{z}]=-i\hbar S_{y}</math>
The most commonly used basis is the one which diagonalizes <math>\vec{S}\cdot \hat{z} = S_{z}</math>.
By acting on the states <math>|\hat{z}\uparrow\rangle</math> and <math>|\hat{z}\downarrow\rangle \!</math> with <math>S_z \!</math>, we find
<math>S_{z}|\hat{z}\uparrow\rangle = \frac{\hbar}{2}|\hat{z}\uparrow\rangle</math>, and
<math>S_{z}|\hat{z}\downarrow\rangle = -\frac{\hbar}{2}|\hat{z}\downarrow\rangle</math>
Now, acting from the left with each basis state, we can form a <math>2 \times 2</math> matrix:
<math>\begin{align} S_{z} & =\left( \begin{array}{ll}
\langle\hat{z}\uparrow|S_{z}|\hat{z}\uparrow\rangle & \langle\hat{z}\uparrow|S_{z}|\hat{z}\downarrow\rangle \\
\langle\hat{z}\downarrow|S_{z}|\hat{z}\uparrow\rangle & \langle\hat{z}\downarrow|S_{z}|\hat{z}\downarrow\rangle
            \end{array} \right)\\ & =\left(\begin{array}{ll}
\hbar/2 & 0 \\
0 & -\hbar/2
      \end{array}\right)\\ & =\dfrac{\hbar}{2}\left(
\begin{array}{ll}
1 & 0 \\
0 & -1
      \end{array}\right)\\ &=\dfrac{\hbar}{2}\sigma_{z} \end{align}</math>
where <math>\mathcal\sigma_{z}</math> is the <math> z \!</math> component of the Pauli spin matrices. Repeating the steps (or applying the commutation relations), we can solve for the <math> x \! </math> and <math> y \!</math> components.
<math>S_{x}=\dfrac{\hbar}{2}\left(\begin{array}{ll}
0 & 1 \\
1 & 0
            \end{array} \right)=\dfrac{\hbar}{2}\sigma_{x}</math>
<math>S_{y}=\dfrac{\hbar}{2}\left( \begin{array}{ll}
0 & -i \\
i & 0
            \end{array} \right)=\dfrac{\hbar}{2}\sigma_{y}</math>
In this basis, <math> \vec{S} = \frac{\hbar}{2} \vec{\sigma} \!</math>. It should be noted that a spin lying along an axis may be rotated to any other axis using the proper rotation operator.
'''Properties of the Pauli Spin Matrices'''
Each Pauli matrix squared gives the identity matrix
<math>\sigma_{x}^2=\sigma_{y}^2=\sigma_{z}^2=\left( \begin{array}{ll}
1 & 0 \\
0 & 1
            \end{array} \right)</math>
The commutation relation is as follows
<math>\mathcal{[\sigma_{\mu}, \sigma_{\nu}]}=2i\epsilon_{\mu\nu\lambda}\sigma_{\lambda}</math>
and the anticommutator relation
<math> \{\sigma_{\mu}, \sigma_{\nu} \}= [ \sigma_{\mu}, \sigma_{\nu} ]_+ = \sigma_{\mu}\sigma_{\nu}+\sigma_{\nu}\sigma_{\mu}=2\delta_{\mu\nu} \left( \begin{array}{ll}
1 & 0 \\
0 & 1
            \end{array} \right)</math>
Combining the commutator and anticommutator relations,
<math>\sigma_{\mu}\sigma_{\nu}=\frac{1}{2}\left\{\sigma_{\mu}, \sigma_{\nu}\right\} + \frac{1}{2}\left[\sigma_{\mu}, \sigma_{\nu}\right]
= i\epsilon_{\mu\nu\lambda}\sigma_{\lambda} + \delta_{\mu\nu}</math>
<math>S_{\mu}S_{\nu}=\dfrac{\hbar^2}{4}\delta_{\mu\nu}+\dfrac{i\hbar}{2}\epsilon_{\mu\nu\lambda}S_{\lambda}</math>
Note that this relation for <math>S_{\mu}S_{\nu}\!</math> holds for spin <math>1/2\!</math> only.
In general,
<math> \begin{align}
(\vec{a} \cdot \vec\sigma)(\vec{b}\cdot\vec\sigma) & =(a_{x}\sigma_{x}+a_{y}\sigma_{y}+a_{z}\sigma_{z})(b_{x}\sigma_{x}+b_{y}\sigma_{y}+b_{z}\sigma_{z})\\ & = a_{\mu}\sigma_{\mu}b_{\nu}\sigma_{\nu}\\ & =a_{\mu}b_{\nu}\sigma_{\mu}\sigma_{\nu}\\ & =a_{\mu}b_{\nu} \left( \left(
\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array} \right) \delta_{\mu\nu} + i\epsilon_{\mu\nu\lambda}\sigma_{\lambda} \right)\\
& = \left( \begin{array}{ll}
1 & 0 \\
0 & 1
\end{array} \right) \vec{a}\cdot \vec{b} + i(\vec{a} \times \vec{b})\cdot\vec{\sigma} \end{align}</math>
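This identity can be checked numerically for arbitrary real vectors <math>\vec{a}</math> and <math>\vec{b}</math>. A quick sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dot_sigma(v):
    return v[0]*sx + v[1]*sy + v[2]*sz  # v . sigma

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# (a.sigma)(b.sigma) = (a.b) 1 + i (a x b).sigma
lhs = dot_sigma(a) @ dot_sigma(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * dot_sigma(np.cross(a, b))
print(np.allclose(lhs, rhs))  # True
```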
Finally, any <math> 2 \times 2 \!</math> matrix can be written in the form
<math> M=\alpha \left( \begin{array}{ll}
1 & 0 \\
0 & 1
\end{array} \right) +\vec\beta \cdot \vec\sigma= \left( \begin{array}{ll}
M_{11} & M_{12} \\
M_{21} & M_{22}
\end{array} \right) </math>
<math>\Rightarrow\alpha=\frac{1}{2}\left(M_{11}+M_{22}\right)</math>
<math>\Rightarrow\beta_{x}=\frac{1}{2}\left(M_{12}+M_{21}\right)</math>
<math>\Rightarrow\beta_{y}=\frac{i}{2}\left(M_{12}-M_{21}\right)</math>
<math>\Rightarrow\beta_{z}=\frac{1}{2}\left(M_{11}-M_{22}\right)</math>
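These coefficient formulas can be verified by decomposing an arbitrary complex <math>2 \times 2</math> matrix and reassembling it. A sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

alpha = (M[0, 0] + M[1, 1]) / 2        # coefficient of the identity
bx = (M[0, 1] + M[1, 0]) / 2
by = 1j * (M[0, 1] - M[1, 0]) / 2
bz = (M[0, 0] - M[1, 1]) / 2

rebuilt = alpha * np.eye(2) + bx*sx + by*sy + bz*sz
print(np.allclose(rebuilt, M))  # True
```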
Now consider rotating the quantization axis <math>\hat{m}</math> by an infinitesimal <math>\vec{\alpha}</math>:
[[Image:Spin.JPG]]
<math>\hat{n}=\hat{m}+\vec{\alpha} \times \hat{m} + O(\alpha^2)</math>
<math>
\Rightarrow \vec{S}\cdot\hat{n}=\vec{S}\cdot\hat{m}+\vec{S} \cdot(\vec{\alpha} \times \hat{m})</math>
<math>
\Rightarrow S_{\mu}\hat{n}_{\mu} = S_{\mu}\hat{m}_{\mu} + S_{\mu}\epsilon_{\mu\nu\lambda}\alpha_{\nu}\hat{m}_{\lambda}</math>
Note that, using the previously developed formulas, we find that
<math>
S_{\mu}\epsilon_{\mu\nu\lambda}=\frac{1}{i\hbar} \left[S_{\nu}, S_{\lambda}\right]
</math>
<math>
\begin{align}
\Rightarrow \vec{S}\cdot\hat{n} & =\vec{S}\cdot\hat{m}+\frac{1}{i\hbar}[\vec{\alpha}\cdot\vec{S}, \hat{m}\cdot\vec{S}] \\
& =\vec{S}\cdot\hat{m}+\frac{i}{\hbar}[\vec{S}\cdot\hat{m}, \vec{S}\cdot\vec{\alpha}]
\end{align}
</math>
To this order in <math>\vec{\alpha}</math>, this equation is equivalent to
<math>\vec{S}\cdot\hat{n}=e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} \left(\vec{S}\cdot\hat{m}\right) e^{\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}}</math>.
This equation is exact for any <math> \vec\alpha \!</math>, not just infinitesimal <math> \vec\alpha \!</math>, just as in the case of orbital angular momentum.
Consider <math> \vec{S} \cdot \hat{n} \!</math> acting on <math> e^{-\frac{i}{\hbar} \vec{S} \cdot \vec{\alpha}} \left| \hat{m} \uparrow \right\rangle \!</math>, which turns out to be an eigenstate of <math> \vec{S} \cdot \hat{n} \!</math>:
<math>
\begin{align}
\vec{S}\cdot\hat{n} \left( e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} |\hat{m} \uparrow\rangle \right) & = e^{-\frac{i}{\hbar} \vec{S}\cdot\vec{\alpha}} \left( \vec{S}\cdot\hat{m} \right)
\left|\hat{m} \uparrow\right\rangle  \\
& = \frac{\hbar}{2}\left( e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} |\hat{m} \uparrow\rangle \right)
\end{align}
</math>
Another way of expressing the rotation of the spin basis by an angle <math>|\vec \alpha|</math> about the axis <math>\hat{\alpha}</math> (and the one derived in class) is the following. 
Consider an operator <math>e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}}</math> from the previous equation. This can also be written as
<math>
\begin{align}
e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}} & = e^{-\frac{i}{2}\vec{\sigma}\cdot\vec{\alpha}} \\
& = 1 - \frac{i}{2} \vec{\sigma}\cdot\vec{\alpha} + \frac{1}{2}\left(-\frac{i}{2}\vec{\sigma}\cdot\vec{\alpha}\right)^2 + \cdots \\
& = \sum_{n=0}^{\infty} \frac{1}{n!} \left(-\frac{i}{2} \vec{\sigma}\cdot\vec{\alpha}\right)^n \\
& = \sum_{n=0}^{\infty} \frac{(-i)^n}{n! 2^n} |\vec{\alpha}|^n \left( \vec{\sigma}\cdot\hat{\alpha}\right)^n
\end{align}
</math>.
Consider,
<math>
\begin{align}
\left( \vec{\sigma}\cdot\hat{\alpha} \right)^2 
& = \left(\sigma_x \alpha_x + \sigma_y \alpha_y + \sigma_z \alpha_z \right)\left(\sigma_x \alpha_x + \sigma_y \alpha_y + \sigma_z \alpha_z\right) \\
& = \left(\alpha_x ^2 + \alpha_y ^2 + \alpha_z ^2 \right)
+ \alpha_x \alpha_y \left(\sigma_x \sigma_y + \sigma_y \sigma_x\right)
+ \alpha_x \alpha_z \left(\sigma_x \sigma_z + \sigma_z \sigma_x\right)
+ \alpha_y \alpha_z \left(\sigma_y \sigma_z + \sigma_z \sigma_y\right) \\
& = 1
\end{align}
</math>. 
The non-squared terms vanish because of the anti-commutation property of the Pauli matrices. 
Therefore, <math>(\vec{\sigma}\cdot\hat{\alpha})^{2n} = 1</math> (<math> n \!</math> is an integer), thus the above equation can be split:
<math>
\begin{align}
e^{-\frac{i}{\hbar}\vec{S}\cdot\vec{\alpha}}
& = \sum_{n = even}^{\infty} \frac{(-i)^{n}}{n!2^n}\left|\vec{\alpha}\right|^n
+ \vec{\sigma}\cdot\hat{\alpha} \sum_{n = odd}^{\infty} \frac{(-i)^n}{n!2^n} \left| \vec{\alpha} \right|^n \\
& = \cos\left(\frac{\left|\vec\alpha\right|}{2}\right) - i \vec{\sigma}\cdot\hat{\alpha} \sin\left(\frac{\left|\vec\alpha\right|}{2}\right)
\end{align}
</math>
This form may be more convenient when performing rotations.
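The closed form can be compared against the exponential series directly. A sketch, summing the series term by term for an arbitrary rotation vector:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alpha = np.array([0.3, -1.1, 0.7])   # an arbitrary rotation vector
a = np.linalg.norm(alpha)
n_dot_sigma = (alpha[0]*sx + alpha[1]*sy + alpha[2]*sz) / a

# Sum exp(-i sigma.alpha / 2) = sum_n (-i sigma.alpha / 2)^n / n!
U_series = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(1, 40):
    U_series += term
    term = term @ (-0.5j * a * n_dot_sigma) / n

# Closed form: cos(|alpha|/2) 1 - i (sigma . alpha_hat) sin(|alpha|/2)
U_closed = np.cos(a/2) * np.eye(2) - 1j * np.sin(a/2) * n_dot_sigma
print(np.allclose(U_series, U_closed))  # True
```

The agreement also confirms that the rotation operator is unitary, since the closed form manifestly satisfies <math>U U^{\dagger} = 1</math>.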
[[A solved problem for spins]]
[[Phy5646/Grp3SpinProb|A Solved Problem on General Spin Vectors.]]
== Addition of angular momenta ==
==='''<span style="color:#2B65EC">Formalism</span>''' ===
In order to consider the addition of angular momentum, consider two angular momenta, <math> \vec{J}_1 </math> and <math> \vec{J}_2 </math> which belong to two different subspaces. <math> \vec{J}_1 </math> has a Hilbert space of <math>\left(2 j_{1} + 1\right)</math> states, and <math> \vec{J}_2 </math> has a Hilbert space of <math>\left(2 j_{2} + 1\right)</math> states. The total angular momentum is then given by: <br />
<math>\vec{J}=\vec{J}_{1}+\vec{J}_{2}=\vec{J}_{1}\otimes\mathbb{I}_2 + \mathbb{I}_1\otimes\vec{J}_{2} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad (6.1.1)</math> <br />
where <math> \mathbb{I}_1 </math> and <math> \mathbb{I}_2 </math> are the identity operators on <math>\vec{J}_{1}</math>'s and <math>\vec{J}_{2}</math>'s Hilbert spaces, and where the dimension of the total Hilbert space is <math>\!(2j_1+1)(2j_2+1)</math>.                                                             
The components of <math> \vec{J}_1 </math> and <math> \vec{J}_2 </math> obey the commutation relation:
<math>\left[J_{1\mu}, J_{1\nu}\right] = i\hbar\epsilon_{\mu\nu\lambda} J_{1\lambda}
\qquad \qquad
\left[J_{2\mu}, J_{2\nu}\right] = i\hbar\epsilon_{\mu\nu\lambda} J_{2\lambda} \qquad \qquad \qquad \qquad \qquad \qquad\;\ (6.1.2a)</math>
And since <math> \vec{J}_1 </math> and <math> \vec{J}_2 </math> belong to different Hilbert spaces:
<math> \left[J_{1\mu}, J_{2\nu}\right] = 0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\; (6.1.2b)</math>
Given the simultaneous eigenkets of <math> J_1^2</math> and <math>\!J_{1z}</math> denoted by <math>|j_1 m_1\rangle</math> , and of <math>J_2^2</math> and <math>\!J_{2z}</math> denoted by <math>|j_2 m_2\rangle</math> we have the following relations:
<math>J_1^2|j_1 m_1\rangle = j_1(j_1+1)\hbar^2|j_1 m_1\rangle </math>
<math>J_{1z}|j_1 m_1\rangle = m_1\hbar|j_1 m_1\rangle </math>
<math>J_2^2|j_2 m_2\rangle = j_2(j_2+1)\hbar^2|j_2 m_2\rangle </math>
<math>J_{2z}|j_2 m_2\rangle = m_2\hbar|j_2 m_2\rangle </math>
Now looking at the two subspaces together, the operators <math> J_1^2</math>, <math> \!J_{1z}</math>, <math> J_2^2</math>, <math> \!J_{2z}</math> can be simultaneously diagonalized by their joint eigenstates. These eigenstates can be formed by the '''''direct products''''' of <math> |j_1 m_1\rangle </math> and <math> |j_2 m_2\rangle </math>:
<math> |j_1 m_1\rangle \otimes |j_2 m_2\rangle = |j_1,j_2; m_1,m_2\rangle </math>
This basis for the total system diagonalizes <math> J_1^2 \!</math>, <math> \!J_{1z}</math>, <math> J_2^2 \! </math>, <math> \!J_{2z}</math>, but these four operators DO NOT define the total angular momentum of the system. Therefore it is useful to relate these direct product eigenstates to the total angular momentum <math> \vec{J} = \vec{J}_{1} + \vec{J}_{2}</math>.
Recall that <math> J_{z} = J_{1z} + J_{2z} \!</math> and <math>\left[J_{\mu}, J_{\nu}\right] = i\hbar\epsilon_{\mu\nu\lambda} J_{\lambda}</math>.
We also know the relations:
<math>\left[J_{1,2}^2, J^2\right]=0 </math> and <math>\left[J_{1,2}^2, J_{z}\right] = 0 </math> and <math>\left[J^{2}, J_{z}\right] = 0 </math>
This tells us that we have a set of four operators that commute with each other. From this we can specify <math>J_{1}^2 , J_{2}^2 , J^2</math>, and <math>J_{z}\!</math> simultaneously. The joint eigenstates of these four operators are denoted by <math>|j m j_1 j_2\rangle</math>. These four operators operate on the base kets according to:
<math>J^2|j m j_1 j_2\rangle = \hbar^2 j(j + 1)|j m j_1 j_2\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\;\;\; (6.1.7)</math>
<math>J_{z}|j m j_{1} j_{2}\rangle=\hbar m |j m j_{1} j_{2}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\;\;\;\ (6.1.8)</math>
<math>J_{1,2}^2|j m j_{1} j_{2}\rangle=\hbar^2 j_{1,2}(j_{1,2}+1) |j m j_{1} j_{2}\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\ (6.1.9)</math>
The choice of basis is now dictated by the specific problem being solved because we can find the relationship between the direct product basis and total-<math> J \!</math> basis.
For example, consider two spin 1/2 particles with basis <math> |\uparrow\uparrow\rangle, |\uparrow\downarrow\rangle, |\downarrow\uparrow\rangle ,</math> and <math> |\downarrow\downarrow\rangle </math>. These states are eigenstates of <math>J_{1}^2, J_{2}^2, J_{1z},\!</math> and <math> J_{2z}\!</math>, but are they eigenstates of <math>\!J^2</math> and <math>J_z\!</math>? <br />
Let us see what happens with the state <math>|\uparrow\downarrow\rangle</math>:
<math>J^2 |\uparrow\downarrow\rangle = \left(J_{x}^2+J_{y}^2+J_{z}^2\right)|\uparrow\downarrow\rangle = \left((J_{x}+iJ_{y})(J_{x}-iJ_{y})+i[J_{x}, J_{y}] + J_{z}^2 \right)|\uparrow\downarrow\rangle</math>.
Let's define <math> J_{\pm}=(J_{x}\pm iJ_{y})</math>, then
<math> J^2 |\uparrow\downarrow\rangle =\left(J_{+}J_{-}-\hbar J_{z} + J_{z}^2 \right)| \uparrow\downarrow\rangle = \left((J_{1+}+J_{2+})(J_{1-}+J_{2-})+(J_{1z}+J_{2z})^2-\hbar (J_{1z}+J_{2z})\right)|\uparrow\downarrow\rangle </math>
Now <math>(J_{1z}+J_{2z}) |\uparrow\downarrow\rangle = \left(\frac{\hbar}{2}-\frac{\hbar}{2}\right) |\uparrow\downarrow\rangle = 0 </math>
Also, <math>(J_{1+}+J_{2+})(J_{1-}+J_{2-})|\uparrow\downarrow\rangle = \hbar(J_{1+}+J_{2+})|\downarrow\downarrow\rangle = \hbar^2\left(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\right) </math>
<math> \therefore J^2 |\uparrow\downarrow\rangle = \hbar^2\left(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\right) </math>
Which means that <math>|\uparrow\downarrow\rangle </math> is not an eigenstate of <math>\!J^2</math>. Similarly, it can be shown that the other three states are also not eigenstates of <math>\!J^2</math>.
To find a relationship between the direct product basis and the total-<math> J \!</math> basis, begin by finding the maximum total <math> m \!</math> state:
<math>\left|j =1, m=1; \frac{1}{2}, \frac{1}{2}\right\rangle = |\uparrow \uparrow\rangle </math>
This must be true because <math> |\uparrow \uparrow\rangle </math> is the only state with <math> \!m = 1 </math>.
Now we can lower this state using <math> J_{-} \!</math> to yield:
<math>\left|j =1, m=0; \frac{1}{2}, \frac{1}{2}\right\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow \downarrow\rangle + |\downarrow \uparrow\rangle\right) </math>
And then lower this state to yield:
<math>\left|j =1, m=-1; \frac{1}{2}, \frac{1}{2}\right\rangle = |\downarrow \downarrow\rangle </math>
All we are missing now is the antisymmetric combination of <math>|\uparrow \downarrow\rangle</math> and <math>|\downarrow \uparrow\rangle</math>:
<math>\left|j =0, m=0; \frac{1}{2}, \frac{1}{2}\right\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle\right) </math>
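The triplet/singlet structure derived above can be checked numerically by building <math>J^2</math> on the four-dimensional product space with Kronecker products. The following is a quick sketch in Python with NumPy (the variable names are ours):

```python
import numpy as np

hbar = 1.0
# Spin-1/2 matrices
sx = hbar/2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar/2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = hbar/2 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Total spin components on the 4-dimensional product space
Jx = np.kron(sx, I2) + np.kron(I2, sx)
Jy = np.kron(sy, I2) + np.kron(I2, sy)
Jz = np.kron(sz, I2) + np.kron(I2, sz)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

up, dn = np.array([1., 0.]), np.array([0., 1.])
uu = np.kron(up, up)
ud = np.kron(up, dn)
du = np.kron(dn, up)
dd = np.kron(dn, dn)

# |up,down> is NOT an eigenstate of J^2: J^2|ud> = hbar^2(|ud> + |du>)
assert np.allclose(J2 @ ud, hbar**2 * (ud + du))

# Triplet (j=1): J^2 eigenvalue 2 hbar^2 ; singlet (j=0): eigenvalue 0
triplet0 = (ud + du) / np.sqrt(2)
singlet  = (ud - du) / np.sqrt(2)
assert np.allclose(J2 @ uu, 2*hbar**2 * uu)
assert np.allclose(J2 @ triplet0, 2*hbar**2 * triplet0)
assert np.allclose(J2 @ singlet, 0 * singlet)
```

The first assertion reproduces the calculation of <math>J^2|\uparrow\downarrow\rangle</math> above; the remaining ones confirm the eigenvalues of the triplet and singlet combinations.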
We now have a relationship between the two bases. We can also write <math> \mathbf{\frac{1}{2}} \otimes \mathbf{\frac{1}{2}} = \mathbf{1} \oplus \mathbf{0} \! </math>, where <math> \mathbf{1} \!</math> and <math> \mathbf{0} \! </math> represent the triplet and singlet states, respectively.
Problem: [http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/CG_coeff_example1#Find_the_CG_coefficients CG coefficients]
Another problem: [http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/CG_coeff_example2 CG coefficients]
==='''<span style="color:#2B65EC">Clebsch-Gordan Coefficients</span>''' ===
Now that we have constructed two different bases of eigenkets, it is imperative to devise a way such that eigenkets of one basis may be written as linear combinations of the eigenkets of the other basis. To achieve this, we write:
<math>|j m j_1 j_2\rangle = \sum_{m_1,m_2}|j_1 j_2 m_1 m_2\rangle\langle j_1 j_2 m_1 m_2|j m j_1 j_2\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\ (6.2.1) </math>
In above, we have used the completeness of the basis <math>|j_1 j_2 m_1 m_2\rangle</math>, given by:
<math>\sum_{m_1,m_2}|j_1 j_2 m_1 m_2\rangle\langle j_1 j_2 m_1 m_2| = 1 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\;\;\; (6.2.2)</math>
The coefficients <math>\langle j_1 j_2 m_1 m_2|j m j_1 j_2\rangle</math> are called Clebsch-Gordan coefficients (for an extensive list of these coefficients, see [http://en.wikipedia.org/wiki/Table_of_Clebsch-Gordan_coefficients here]), which have the following properties, giving rise to <b> two "selection rules": </b>
<b>1</b>. If <math>m \neq m_1 + m_2</math>, then the coefficients vanish.
:<i>Proof</i>: <math> \because J_z = J_{1z} + J_{2z}</math>, we get
:<math>(J_z - J_{1z} - J_{2z})|j m j_1 j_2\rangle = 0</math>
:<math>\Rightarrow \langle j_1 j_2 m_1 m_2|(J_z - J_{1z} - J_{2z})|j m j_1 j_2\rangle = 0</math>
:<math> \therefore (m - m_1 - m_2)\langle j_1 j_2 m_1 m_2|j m j_1 j_2 \rangle = 0 </math>. <b>Q.E.D.</b>
<b>2</b>. The coefficients vanish, unless <math> |j_1 - j_2| \le j \le j_1 + j_2 </math>
:This follows from a simple counting argument. Let us assume, without any loss of generality, that <math>\! j_1 > j_2 </math>. The dimensions of the two bases should be the same. If we count the dimensions using the <math>|j_1 j_2 m_1 m_2\rangle </math> states, the values of <math>\! m_1 </math> run from <math>\! -j_1</math> to <math>\! j_1 </math> and those of <math>\! m_2 </math> from <math>\! -j_2</math> to <math>\! j_2 </math>; therefore, the number of eigenkets is <math>\! (2j_1 + 1)(2j_2 + 1) </math>. Now, counting the dimensions using the <math> |j m j_1 j_2 \rangle </math> eigenkets, we observe that, for each value of <math>\! j </math>, <math>\! m </math> runs from <math>\! -j </math> to <math>\! j </math>. Therefore, the number of dimensions is <math> N = \sum_{j=a}^b (2j + 1) </math>. It is easy to see that, for <math>\! a = j_1 - j_2 </math> and <math>\! b = j_1 + j_2 </math>, this sum gives <math> N = (2j_1 + 1)(2j_2 +1)\!</math>.
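This counting can be verified directly. The following is a small Python sketch (the function name <code>total_dim</code> is ours):

```python
# Counting check for selection rule 2: the dimensions of the two bases agree.
def total_dim(j1, j2):
    """Sum of (2j+1) for j = |j1-j2|, |j1-j2|+1, ..., j1+j2 (integer steps)."""
    jmin, jmax = abs(j1 - j2), j1 + j2
    n_terms = int(jmax - jmin) + 1
    return sum(int(2*(jmin + k) + 1) for k in range(n_terms))

# Agrees with (2j1+1)(2j2+1) for integer and half-integer j1, j2
for j1, j2 in [(0.5, 0.5), (1, 0.5), (2, 1), (2.5, 1.5)]:
    assert total_dim(j1, j2) == int((2*j1 + 1) * (2*j2 + 1))
```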
Further, it turns out that, for fixed <math>\!j_1</math>, <math>\!j_2</math> and <math>\!j</math>, coefficients with different values for <math>\!m_1</math> and <math>\!m_2</math> are related to each other through recursion relations. To derive these relations, we first note that:
<span id="(6.2.3)"></span>
<math>J_{\pm}|j m j_1 j_2\rangle = \sqrt{(j \mp m)(j \pm m + 1)}\hbar |j m \pm 1 j_1 j_2\rangle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\;\ (6.2.3) </math>
Now we write,
<span id="(6.2.4)"></span>
<math>J_{\pm}|j m j_1 j_2 \rangle = (J_{1 \pm} + J_{2 \pm}) \sum_{m_1,m_2}|j_1 j_2 m_1 m_2 \rangle \langle j_1 j_2 m_1 m_2|j m j_1 j_2 \rangle \qquad \qquad \qquad \qquad (6.2.4)</math>
Using equations [[#(6.2.3)]] and [[#(6.2.4)]], we get (with <math> m_1 \to m'_1 </math>, <math> m_2 \to m'_2 </math>): <br />
<math>
\begin{align} & \sqrt{(j \mp m)(j \pm m + 1)}|j m \pm 1 j_1 j_2 \rangle \\
& = \sum_{m'_1,m'_2} \left( \sqrt{(j_1 \mp m'_1)(j_1 \pm m'_1 + 1)}|j_1 j_2 m'_1 \pm 1 m'_2 \rangle + \sqrt{(j_2 \mp m'_2)(j_2 \pm m'_2 + 1)}|j_1 j_2 m'_1 m'_2 \pm 1 \rangle \right)\langle j_1 j_2 m'_1 m'_2|j m j_1 j_2 \rangle \end{align}</math>
The Clebsch-Gordan coefficients form a unitary matrix, and by convention, they are all taken real. Any real unitary matrix is orthogonal, as we study below.
==='''<span style="color:#2B65EC">Example</span>''' ===
As an example, let's calculate some Clebsch-Gordan coefficients through successive applications of <math> S_{\pm}=S_x\pm iS_y</math> on the states <math> |S m_s\rangle </math>.
Let <math> S=S_1+S_2</math> be the total spin of two spin 1/2 particles <math> (S_1=S_2=1/2) </math>. Calculate the Clebsch-Gordan coefficients
<math>\langle m_1 m_2|S m_s\rangle </math> by successive applications of <math> S_{\pm}=S_x\pm iS_y </math> on the states <math> |S m_s\rangle </math>. Work separately in the two subspaces S=1 and S=0.
In order to find the coefficients for the addition of spin 1/2, we shall use the following relations:
'''I''' <math> S_{\pm}|S m_s\rangle = \hbar\sqrt{S(S+1)-m_s(m_s\pm 1)}|S, m_s\pm 1\rangle</math>
'''II''' <math> S_{1\pm}|m_1 m_2\rangle = \hbar\sqrt{S_1(S_1+1)-m_1(m_1\pm 1)}|m_1\pm 1, m_2\rangle</math>
'''III''' <math> S_{2\pm}|m_1 m_2\rangle = \hbar\sqrt{S_2(S_2+1)-m_2(m_2\pm 1)}|m_1, m_2\pm 1\rangle</math>
We shall also use the phase condition
<math>|S=S_1+S_2,m_s=\pm(S_1+S_2)\rangle = |m_1=\pm S_1,m_2=\pm S_2\rangle </math>
Note: The states <math> |S=S_1+S_2,m_s=\pm (S_1+S_2)\rangle </math> are eigenstates of <math> S^2 </math> and <math> S_z </math>, the <math> S_z </math> eigenvalues <math> \lambda_{\pm} = \pm \hbar(S_1+S_2) </math> being nondegenerate. Therefore,
<math>|S=S_1+S_2,m_s=\pm(S_1+S_2)\rangle = e^{i \phi}|m_1=\pm S_1,m_2=\pm S_2\rangle </math>
and the phase <math>\phi</math> may be chosen to be <math>\phi =0</math>.
'''i''' Subspace S=1: From the phase condition we immediately have <math>|1,1\rangle=|1/2,1/2\rangle=|++\rangle </math>
Then, operating with <math>S_-=S_{1-}+S_{2-} </math> on both sides of this, and using '''II''' and '''III''', we obtain
<math>S_{-}|1,1\rangle=\hbar\sqrt{1(1+1)-1(1-1)}|1,0\rangle=\hbar\sqrt{2}|1,0\rangle</math>
<math>S_{-}|1,1\rangle=(S_{1-}+S_{2-})|1/2,1/2\rangle=\hbar|-1/2,1/2\rangle+\hbar|1/2,-1/2\rangle</math>
Thus,
<math> |1,0\rangle=\frac{1}{\sqrt{2}}(|1/2,-1/2\rangle+|-1/2,1/2\rangle) = \frac{1}{\sqrt{2}}(|+-\rangle+|-+\rangle) </math>
Similarly, operating with <math> S_{-} </math> once again on the state <math>|1,0\rangle</math>, we find
<math> S_{-}|1,0\rangle = \hbar\sqrt{1(1+1)-0(0-1)}|1,-1\rangle=\hbar\sqrt{2}|1,-1\rangle</math>
<math> S_{-}|1,0\rangle = \frac{1}{\sqrt{2}}S_{1-}(|1/2,-1/2\rangle+|-1/2,1/2\rangle)+\frac{1}{\sqrt{2}}S_{2-}(|1/2,-1/2\rangle+|-1/2,1/2\rangle) </math>
<math>= \frac{\hbar}{\sqrt{2}}(|-1/2,-1/2\rangle+|-1/2,-1/2\rangle) = \hbar\sqrt{2}|-1/2,-1/2\rangle</math>
Therefore, in accordance with the phase condition, <math> |1,-1\rangle = |-1/2,-1/2\rangle = |--\rangle </math>
'''ii''' Subspace S=0: Since <math> m_s = m_1 + m_2</math> (in this case <math> m_s=0</math>), we have
<math> |0,0\rangle = \alpha|1/2,-1/2\rangle+ \beta |-1/2,1/2\rangle </math>
Next, due to the orthonormality of the <math> |S m_s\rangle </math> basis, we get
<math> \langle 1,0|0,0\rangle = 0 \rightarrow \frac{1}{\sqrt{2}}(\alpha+\beta)=0 \rightarrow \beta=-\alpha </math>
<math> \langle 0,0|0,0\rangle=1 \rightarrow |\alpha|^2+|\beta|^2=1 \rightarrow 2|\alpha|^2=1 \rightarrow \alpha = \frac{1}{\sqrt{2}} </math>
Therefore, we find <math> |0,0\rangle = \frac{1}{\sqrt{2}}(|1/2,-1/2\rangle-|-1/2,1/2\rangle) </math>
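The ladder-operator method of this example is easy to automate. The following Python sketch (with <math>\hbar=1</math>; variable names are ours) reproduces <math>|1,0\rangle</math> and <math>|1,-1\rangle</math> by repeated application of <math>S_-</math>:

```python
import numpy as np

hbar = 1.0
# S- for a single spin 1/2 in the (up, down) basis: S-|up> = hbar|down>
sm = hbar * np.array([[0., 0.], [1., 0.]])
I2 = np.eye(2)
S_minus = np.kron(sm, I2) + np.kron(I2, sm)   # total lowering operator

up, dn = np.array([1., 0.]), np.array([0., 1.])
uu = np.kron(up, up)                          # |1,1> = |++> (phase convention)

# Lower once: S-|1,1> is proportional to |1,0>
state = S_minus @ uu
one_zero = state / np.linalg.norm(state)
expected = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)
assert np.allclose(one_zero, expected)

# Lower again: proportional to |1,-1> = |-->
state = S_minus @ one_zero
assert np.allclose(state / np.linalg.norm(state), np.kron(dn, dn))
```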
==='''<span style="color:#2B65EC">Orthogonality of Clebsch-Gordan Coefficients</span>''' ===
We have the following symmetry:
<math>\langle j_1 j_2 m_1 m_2 | j_1 j_2 j m  \rangle
= (-1)^{j-j_1-j_2} \langle j_2 j_1 m_2 m_1 | j_2 j_1 j m \rangle
=  \langle j_2 j_1, -m_2, -m_1 |j_2 j_1 j, -m \rangle</math>
If we put the coefficients into a matrix, it is real and unitary, meaning <math>\langle j m j_1 j_2 |j_1 j_2 m_1 m_2 \rangle = \langle j_1 j_2 m_1 m_2 |j m j_1 j_2 \rangle ^*</math>
<math>| j_1 j_2 m_1 m_2 \rangle = \sum_{j,m} |j m j_1 j_2 \rangle \langle j m j_1 j_2 |j_1 j_2 m_1 m_2 \rangle </math>.
For example,
<math>|\uparrow_1 \downarrow_2 \rangle = \dfrac{1}{\sqrt{2}}(|10 \rangle + |00 \rangle )</math>
<math>|\downarrow_1 \uparrow_2 \rangle = \dfrac{1}{\sqrt{2}}(|10 \rangle - |00 \rangle )</math>
We have the following orthogonality relations: <br />
<math>\sum_{jm}\langle j_1 m'_1 j_2 m'_2|jmj_1 j_2\rangle \langle jmj_1 j_2 | j_1 m_1 j_2 m_2\rangle = \delta_{m_1 m'_1} \delta_{m_2 m'_2} </math>
<math>\sum_{m_1 m_2}\langle j m j_1 j_2|j_1 m_1 j_2 m_2\rangle \langle j_1 m_1 j_2 m_2 | j' m' j_1 j_2\rangle = \delta_{j j'} \delta_{m m'} </math>
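For the two-spin-1/2 case, these orthogonality relations amount to the statement that the <math>4\times 4</math> matrix of Clebsch-Gordan coefficients is orthogonal. A quick check in Python (the matrix is assembled from the triplet/singlet coefficients derived above):

```python
import numpy as np

s = 1 / np.sqrt(2)
# Columns: expansion of |1,1>, |1,0>, |0,0>, |1,-1> in the product
# basis (++, +-, -+, --) for two spin-1/2 particles
U = np.array([[1, 0,  0, 0],
              [0, s,  s, 0],
              [0, s, -s, 0],
              [0, 0,  0, 1]])

# Real + unitary => orthogonal: both orthogonality relations hold
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(U @ U.T, np.eye(4))
```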
As an application, consider the hydrogen atom with spin-orbit coupling, given by the Hamiltonian
<math>H'=\dfrac{e^2}{2m^2 c^2 r^3}\vec{L}\cdot\vec{S}</math>
Recall, the atomic spectrum for bound states
<math>E_n = -\frac{e^2}{2a_o n^2}</math> where <math> n=1, 2, 3, ...\!</math>
The ground state, <math>|1s\rangle</math>, is doubly degenerate: <math>\dfrac{\uparrow\downarrow}{1s}</math>
First excited state is 8-fold degenerate:  <math>\dfrac{\uparrow\downarrow}{1s}\dfrac{\uparrow\downarrow}{}\dfrac{\uparrow\downarrow}{2p}\dfrac{\uparrow\downarrow}{}</math>
<math>n \!</math>-th state is <math>2n^2\!</math> fold degenerate.
We can break apart the angular momentum and spin into its <math> x, y, z \!</math>-components
<math>\vec{L}\cdot\vec{S} = L_x S_x + L_y S_y + L_z S_z </math>
Define lowering and raising operators
<math>\Rightarrow L_\pm = L_x \pm iL_y</math>
<math>\Rightarrow S_\pm = S_x \pm iS_y</math>
<math>\vec{L}\cdot\vec{S} = L_z S_z + \dfrac{1}{2} L_{+} S_{-} + \dfrac{1}{2} L_{-} S_{+} </math>
For the ground state, <math>(|1s, \uparrow\rangle, |1s, \downarrow\rangle )</math>, nothing happens. Kramers' theorem protects the double degeneracy.
For the first excited state, <math>(|2s, \uparrow\rangle, |2s, \downarrow\rangle )</math>, once again nothing happens.
For the six <math>2p\!</math> states <math>(|2p_m, \uparrow\rangle, |2p_m, \downarrow\rangle )</math>, the spin-orbit term is nontrivial: it splits them into a four-fold degenerate <math>j=3/2\!</math> level and a doubly degenerate <math>j=1/2\!</math> level.
We can express <math>\vec{L}\cdot\vec{S}\!</math> in matrix form in the product basis <math>|m_l, m_s\rangle</math>, ordered <math>(1,\uparrow), (1,\downarrow), (0,\uparrow), (0,\downarrow), (-1,\uparrow), (-1,\downarrow)</math>:
<math>\left( \begin{array}{llllll}
\dfrac{\hbar^2}{2} & 0 & 0 & 0 & 0 & 0 \\
0 & -\dfrac{\hbar^2}{2} & \dfrac{\hbar^2}{\sqrt{2}} & 0 & 0 & 0 \\
0 & \dfrac{\hbar^2}{\sqrt{2}} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \dfrac{\hbar^2}{\sqrt{2}} & 0 \\
0 & 0 & 0 & \dfrac{\hbar^2}{\sqrt{2}} & -\dfrac{\hbar^2}{2} & 0 \\
0 & 0 & 0 & 0 & 0 & \dfrac{\hbar^2}{2}
\end{array} \right)</math>
But there is a better and more exact solution, which we can solve for by adding the momenta first.
<math>\vec{L}\cdot\vec{S} = \frac{1}{2} \left(\vec{L} + \vec{S}\right)^2 -\frac{1}{2}\vec{L}^2 -\frac{1}{2}\vec{S}^2 = \frac{1}{2}\left(J^2 -L^2 - S^2\right)</math>
add the angular momenta:
<math>|1s\rangle : l=0, s=\dfrac{1}{2}: 0\otimes \dfrac{1}{2}= \dfrac{1}{2}</math>
<math>|2s\rangle : l=0, s=\dfrac{1}{2}: 0\otimes \dfrac{1}{2}= \dfrac{1}{2}</math>
<math>|2p_m, 0 \rangle : l=1, s=\dfrac{1}{2}: 1\otimes \dfrac{1}{2}= \dfrac{3}{2} \oplus \dfrac{1}{2}</math>
So that
<math>\vec{L}\cdot\vec{S} \left|j=\dfrac{3}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle =\dfrac{1}{2} \left(\hbar^2\dfrac{3}{2}\dfrac{5}{2}-2 \hbar^2 - \dfrac{3}{4} \hbar^2\right) \left|j=\dfrac{3}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle = \dfrac{\hbar^2}{2} \left| j=\dfrac{3}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle </math>
<math>\vec{L}\cdot\vec{S} \left|j=\dfrac{1}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle =\dfrac{1}{2}\left(\dfrac{3}{4}\hbar^2 - 2\hbar^2 - \dfrac{3}{4}\hbar^2\right) \left|j=\dfrac{1}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle = -\hbar^2 \left| j=\dfrac{1}{2}, m, l=1, s=\dfrac{1}{2} \right\rangle </math>
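These two eigenvalues can also be obtained by brute-force diagonalization of <math>\vec{L}\cdot\vec{S} = L_z S_z + \tfrac{1}{2}L_+S_- + \tfrac{1}{2}L_-S_+</math> on the six-dimensional <math>2p</math> (orbital <math>\otimes</math> spin) space. A Python sketch with <math>\hbar=1</math>:

```python
import numpy as np

hbar = 1.0
# l=1 matrices in the m = +1, 0, -1 basis
Lz = hbar * np.diag([1., 0., -1.])
Lp = hbar * np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
Lm = Lp.T
# Spin-1/2 matrices
Sz = hbar/2 * np.diag([1., -1.])
Sp = hbar * np.array([[0., 1.], [0., 0.]])
Sm = Sp.T

# L.S = Lz Sz + (L+ S- + L- S+)/2 on the 6-dimensional product space
LS = (np.kron(Lz, Sz)
      + 0.5 * np.kron(Lp, Sm)
      + 0.5 * np.kron(Lm, Sp))

evals = np.sort(np.linalg.eigvalsh(LS))
# j=1/2 doublet: L.S = -hbar^2 ; j=3/2 quadruplet: L.S = +hbar^2/2
assert np.allclose(evals, [-1, -1, 0.5, 0.5, 0.5, 0.5])
```

The spectrum (a doubly degenerate <math>-\hbar^2</math> and a four-fold degenerate <math>+\hbar^2/2</math>) matches the result obtained from <math>\tfrac{1}{2}(J^2-L^2-S^2)</math>.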
<math> \left|j=\dfrac{3}{2}, m= \dfrac{3}{2}, l=1, s=\dfrac{1}{2} \right\rangle = \left|l=1, m_l =1 \right\rangle \left|s=\dfrac{1}{2}, m_s = \dfrac{1}{2} \right\rangle </math>
<math> \left|j= \dfrac{3}{2}, m= \dfrac{3}{2} \right\rangle = \left|m_l =1 \right\rangle \left|m_s = \dfrac{1}{2} \right\rangle </math>
Define <math> J_{-} = L_{-} + S_{-} \!</math>
<math>J_{-} \left|\dfrac{3}{2}, \dfrac{3}{2} \right\rangle = \left(L_{-} + S_{-} \right)\left|l=1, m=1 \right\rangle \left| S=\frac{1}{2}, m_s=\frac{1}{2} \right\rangle </math>
<math>
\Rightarrow \hbar \sqrt{\dfrac{3}{2} \dfrac{5}{2}- \dfrac{3}{2}\dfrac{1}{2}} \left|\dfrac{3}{2}, \dfrac{1}{2} \right\rangle = \hbar \sqrt{2} \left|l=1, m=0 \right\rangle \left|s=\frac{1}{2}, m_s=\frac{1}{2} \right\rangle + \hbar \sqrt{\dfrac{1}{2} \dfrac{3}{2} - \frac{1}{2}\left(\frac{1}{2}-1\right)}|l=1,m=1\rangle \left| s=\frac{1}{2}, m_s=-\frac{1}{2} \right\rangle</math>
<math>
\Rightarrow \sqrt{3} \left|\dfrac{3}{2}, \dfrac{1}{2} \right\rangle =  \sqrt{2}\left|1,0 \right\rangle \left|\frac{1}{2}, \frac{1}{2}\right\rangle + \left|1,1 \right\rangle \left| \frac{1}{2}, -\frac{1}{2} \right\rangle</math>
<math>
\Rightarrow \left|\frac{3}{2}, \frac{1}{2} \right\rangle = \sqrt{\frac{2}{3}}\left|1,0 \right\rangle \left|\dfrac{1}{2}, \frac{1}{2} \right\rangle + \sqrt{\dfrac{1}{3}}\left|1,1 \right\rangle \left| \frac{1}{2},- \dfrac{1}{2} \right \rangle</math>
In the same way,
<math> \left|\dfrac{3}{2}, -\dfrac{1}{2} \right\rangle =  \sqrt{\dfrac{2}{3}} \left|1,0 \right\rangle \left| \frac{1}{2},-\dfrac{1}{2} \right\rangle + \sqrt{\dfrac{1}{3}}\left|1,-1 \right\rangle \left| \frac{1}{2}, \dfrac{1}{2} \right\rangle</math>,
<math> \left|\dfrac{3}{2}, \pm \dfrac{3}{2} \right\rangle =  \left|1, \pm 1 \right \rangle \left| \frac{1}{2}, \pm \dfrac{1}{2} \right\rangle</math>
We can express the <math>j=\dfrac{1}{2}</math> states as follows:
<math>
\left|j=\dfrac{1}{2}, m =\dfrac{1}{2} \right\rangle = \alpha \left|1,0 \right\rangle \left| \frac{1}{2}, \dfrac{1}{2} \right\rangle + \beta \left|1,1 \right\rangle \left|\frac{1}{2}, -\dfrac{1}{2} \right\rangle </math>,
<math>\left|j=\dfrac{1}{2}, m = -\dfrac{1}{2} \right \rangle = \alpha ' \left|1,0 \right\rangle \left|\frac{1}{2},-\dfrac{1}{2} \right\rangle + \beta ' \left|1,-1 \right\rangle \left| \frac{1}{2}, \dfrac{1}{2} \right\rangle </math>.
When we project these states on the previously found states, we find that
<math>\alpha = \dfrac{1}{\sqrt{3}}</math>, 
<math>\beta = - \sqrt{\dfrac{2}{3}}</math>,
and
<math>\alpha' = - \dfrac{1}{\sqrt{3}}</math>, 
<math>\beta' = \sqrt{\dfrac{2}{3}}</math>.
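The coefficients <math>\sqrt{2/3}</math> and <math>\sqrt{1/3}</math> above can be confirmed numerically by applying <math>J_-=L_-+S_-</math> to the stretched state, just as in the derivation. A Python sketch with <math>\hbar=1</math> (variable names are ours):

```python
import numpy as np

# l=1 lowering operator in the m = +1, 0, -1 basis (hbar = 1)
Lm = np.sqrt(2) * np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
Sm = np.array([[0., 0.], [1., 0.]])
I3, I2 = np.eye(3), np.eye(2)
Jm = np.kron(Lm, I2) + np.kron(I3, Sm)

m1 = np.array([1., 0., 0.])   # |l=1, m_l=+1>
m0 = np.array([0., 1., 0.])   # |l=1, m_l=0>
up = np.array([1., 0.])
dn = np.array([0., 1.])

stretched = np.kron(m1, up)            # |j=3/2, m=3/2>
state = Jm @ stretched
state = state / np.linalg.norm(state)  # normalized |j=3/2, m=1/2>
expected = np.sqrt(2/3) * np.kron(m0, up) + np.sqrt(1/3) * np.kron(m1, dn)
assert np.allclose(state, expected)
```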
For a more detailed account of these and other related results, see [http://homepage.mac.com/thubsch/QM2/QM2WE.pdf here].
==='''<span style="color:#2B65EC">Addition of three angular momenta</span>''' ===
To add three angular momenta <math>\bold J_1, \bold J_2, \bold J_3</math>, first we add <math>\bold J_{12}=\bold J_1 + \bold J_2</math>, and construct the simultaneous eigenstates of <math>\bold J_{1} ^{2}, \bold J_2 ^{2}, \bold J_{12} ^{2}, \bold J_{12z}, \bold J_3 ^{2}, \bold J_{3z}</math>. We write such states as <math>|j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle</math>. Such states can be given in terms of Clebsch-Gordan coefficients and <math>|j_1 j_2 j_3 m_1 m_2 m_3 \rangle</math> (eigenstates of <math>\bold J_1^{2}, \bold J_2^{2}, \bold J_3^{2}, \bold J_{1z}, \bold J_{2z}, \bold J_{3z}</math>):
<math>|j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle = \sum_{m_1,m_2} |j_1 j_2 j_3 m_1 m_2 m_3 \rangle \langle j_1 j_2 j_3 m_1 m_2 m_3|j_1 j_2 j_{12} m_{12} j_3 m_3\rangle</math>
Next we add <math>\bold J_{12}</math> to <math>\bold J_3</math>, forming simultaneous eigenstates <math>|j_1 j_2 j_{12} j_3 j m \rangle</math> of <math> \bold J_1^{2}, \bold J_2^{2}, \bold J_{12}^{2}, \bold J_3^{2}, \bold J^{2}, \bold J_{z}</math>. These are given in terms of the <math>| j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle</math> by
<math>|j_1 j_2 j_{12} j_3 j m \rangle = \sum_{m_{12},m_{3}} | j_1 j_2 j_{12} m_{12} j_3 m_3 \rangle \langle j_{12} m_{12} j_3 m_3 | j_{12} j_3 j m \rangle </math>
Therefore, we can construct eigenstates of <math> \bold J_1^{2}, \bold J_2^{2}, \bold J_{12}^{2}, \bold J_3^{2}, \bold J^{2}, \bold J_{z}</math> in terms of eigenstates of <math>\bold J_1^{2}, \bold J_2^{2}, \bold J_3^{2}, \bold J_{1z}, \bold J_{2z}, \bold J_{3z}</math> as follows:
<math>|j_1 j_2 j_{12} j_3 j m \rangle = \sum_{m_{1},m_{2},m_{3}} |j_1 j_2 j_3 m_1 m_2 m_3 \rangle \sum_{m_{12}} \langle j_1 j_2 m_1 m_2|j_1 j_2 j_{12} m_{12}\rangle \langle j_{12} m_{12} j_3 m_3 | j_{12} j_3 j m \rangle </math>
Thus the analogous addition coefficients for three angular momenta are products of Clebsch-Gordan coefficients.
Note that for addition of two angular momenta, the dimension of Hilbert space is <math>\!(2J_1 + 1)(2J_2 + 1)</math>. For three angular momenta, it is <math>\!(2J_1 + 1)(2J_2 + 1)(2J_3 + 1)</math>.
== Elementary applications of group theory in Quantum Mechanics ==
==='''<span style="color:#2B65EC">Symmetry</span>''' ===
Mathematically, a group consists of a set of elements together with an operation that combines any two elements to form a third. The group must also satisfy certain axioms: closure, associativity, identity, and invertibility.
<math>G=\lbrace A,B,C \dots\rbrace</math>
G is a group under a given operation if
. <math>AB\in G</math>, for <math>\forall A,B\in G</math>  (closure)
. <math>\left(AB\right)C=A\left(BC\right)</math>  (associativity)
. <math>\exists \mathbf I</math>, such that <math>\mathbf IA=A\mathbf I=A</math>, for <math>\forall A</math>  (identity)
. <math>\forall A, \exists A^{-1}</math>, such that <math>AA^{-1}=A^{-1}A=\mathbf I </math>  (invertibility)
G can be discrete (isolated elements) or continuous (e.g. rotations).
Examples(discrete group):
<math>\mathbf I_2=\lbrace\mathbf I,A\rbrace</math> where <math>A^{2}=\mathbf I</math>
The integers under addition: <math>\mathbb{Z}=\lbrace n \rbrace</math>, with the operation <math>\circ</math> defined as <math>n\circ m=n+m</math>
And continuous group:
<math>U\left(1\right)=\lbrace e^{i\theta}; \theta\in\lbrack 0,2\pi\rbrack\rbrace</math>, in which <math>e^{i\theta}e^{i\phi}=e^{i\left(\theta+\phi\right)}</math>
<math>SU\left(2\right)</math>: the group of <math>2\times2</math> unitary matrices with unit determinant (special unitary group)
<math>O\left(3\right)</math>: the group of all <math>3\times3</math> orthogonal matrices, i.e. the set of all orthogonal transformations (rotations and reflections about the origin) in a 3D vector space.
<math>SU\left(3\right)</math>: the group of all <math>3\times3</math> unitary matrices with determinant <math>+1</math>
If all the elements commute with each other, then the group is called "Abelian"; otherwise, it is non-Abelian.
Definition of conjugate elements: <math>B</math> is conjugate to <math>A</math> if <math>B=XAX^{-1}</math> for some <math>X\in G</math>. This relation is reciprocal, since <math>A=X^{-1}BX</math>.
Collecting all mutually conjugate elements gives a conjugacy class; in this way we can divide G into conjugacy classes.
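As a concrete illustration (not from the text above), the following Python sketch computes the conjugacy classes of the permutation group <math>S_3</math>, representing each element as a tuple:

```python
from itertools import permutations

# S3: permutations of (0,1,2); composition (p o q)(i) = p[q[i]]
G = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj_class(a):
    """Conjugacy class of a: { x a x^-1 : x in G }."""
    return frozenset(compose(compose(x, a), inverse(x)) for x in G)

classes = set(conj_class(a) for a in G)
# S3 splits into 3 classes: the identity, the 3 transpositions, the 2 3-cycles
assert sorted(len(c) for c in classes) == [1, 2, 3]
```

Note that <math>S_3</math> is non-Abelian, yet its six elements still partition neatly into three conjugacy classes.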
==='''<span style="color:#2B65EC">Theory of group representations</span>''' ===
Group representations describe abstract group elements in terms of linear transformations of vector spaces. Usually, the elements of the group are represented by matrices, so that the group operation is represented by matrix multiplication.
That is, associate a matrix <math>\Gamma\left(A\right)</math> with each <math>A\in G</math>, such that
<math>\Gamma\left(A\right)\Gamma\left(B\right)=\Gamma\left(AB\right)</math>
==='''<span style="color:#2B65EC">Group theory and Quantum Mechanics</span>''' ===
== Irreducible tensor representations and Wigner-Eckart theorem ==
==='''<span style="color:#2B65EC">Representation of rotations</span>''' ===
If <math>\bold J</math> is the total angular momentum of a system (<math>\bold J = \bold J_1 + \bold J_2</math>), the operator <math>R_{\vec \alpha}=e^{-\frac{i}{\hbar} \bold J \cdot \vec \alpha}</math> acting to the right on a state of the system, rotates it in a positive sense about the axis <math>\vec \alpha</math> by an angle <math>|\vec \alpha|</math>. This is similar to how <math> e^{-\frac{i}{\hbar} \bold L \cdot \vec \alpha}</math> rotates the spatial part of a state, and <math> e^{-\frac{i}{\hbar} \bold S \cdot \vec \alpha}</math> rotates spin states. Suppose that we act with <math>R_{\vec \alpha}</math> on an eigenstate <math>|jm \rangle</math> of <math>\bold J^2</math> and <math>\bold J_z</math>. This generates a superposition of states. Under the rotation, the state is generally no longer an eigenstate of <math>\bold J_z</math>. However, the rotated state remains an eigenstate of <math>\bold J^2</math>, so the value of <math> \! j </math> remains the same while the value of <math> \! m </math> will change. This is because <math>\bold J^2</math> commutes with every component of <math>\bold J</math>, (<math>\bold J_x</math>,<math>\bold J_y</math>,<math>\bold J_z</math>), and therefore <math>\bold J^2</math> commutes with <math>R_{\vec \alpha}</math>. Indeed
<math>\bold J^2 R_{\vec \alpha}|jm \rangle = R_{\vec \alpha} \bold J^2 |jm \rangle = \hbar^2 j(j+1)R_{\vec \alpha}|jm \rangle</math>
Therefore, when we act with the rotation operator on a state, we are only mixing the multiplet. For example, acting on a 3d state with the rotation operator will result in a mixture of ''only'' the five 3d states. There will be no mixing of the 3p or 3s states.
Considering <math>\bold J_z</math>, we know <math>\bold J_z</math> will not commute with <math> \bold R_{\vec \alpha} </math> because <math>\bold J_z</math> does not commute with either <math>\bold J_x</math> or <math>\bold J_y</math>:
<math>\lbrack\bold J_z, \bold R_{\vec \alpha}\rbrack \ne0</math>
Therefore the rotated state can be expressed as a linear combination of <math>|jm'' \rangle</math> as follows:
<math>R_{\vec \alpha}|jm \rangle= \sum _{m''=-j} ^{j}|jm'' \rangle d_{m''m} ^{(j)}(\vec \alpha)</math>
Multiplying this equation on the left by a state <math> \langle jm'|</math> , and using the orthonormality of the angular momentum eigenstates we find
<math>d_{m'm} ^{(j)}(\vec \alpha) = \langle jm'|e^{-\frac{i}{\hbar} \bold J \cdot \vec \alpha}|jm \rangle </math>
Thus we can associate with each rotation a <math>(2j+1)\times(2j+1)</math> matrix <math>\bold d_{\vec \alpha} ^{(j)}</math> whose matrix elements are <math>d_{m'm} ^{(j)}(\vec \alpha)</math>. These matrix elements do not depend on the dynamics of the system; they are determined entirely by the properties of the angular momentum.
The matrices have a very important property. Two rotations performed in a row, say <math>\vec \alpha</math> followed by <math>\vec \beta</math>, are equivalent to a single rotation, <math>\vec \gamma</math>. Thus
<math>R_{\vec \gamma}=R_{\vec \beta}R_{\vec \alpha}</math>
Taking matrix elements of both sides and inserting a complete set of states <math>|jm''\rangle</math> between <math>R_{\vec \beta}</math> and <math>R_{\vec \alpha}</math>, we find
<math>\langle jm|R_{\vec \gamma}| jm'\rangle=\sum_{m''}\langle jm|R_{\vec \beta}| jm''\rangle \langle jm''|R_{\vec \alpha}| jm'\rangle</math>,
or
<math>d_{mm'} ^{(j)}(\vec \gamma)=\sum_{m''}d_{mm''} ^{(j)}(\vec \beta)d_{m''m'} ^{(j)}(\vec \alpha)</math>
or equivalently
<math>d ^{(j)}(\vec \gamma)=d ^{(j)}(\vec \beta)d ^{(j)}(\vec \alpha)</math>
A set of matrices associated with rotations having this property is called a representation of the rotation group.
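For rotations about a common axis the composition law is easy to check explicitly. A minimal Python sketch for <math>j=1/2</math>, where two z-rotations by <math>\alpha</math> and <math>\beta</math> compose to a single rotation by <math>\alpha+\beta</math>:

```python
import numpy as np

def d_half(alpha):
    """Spin-1/2 rotation about the z axis: d(alpha) = exp(-i sigma_z alpha / 2)."""
    return np.diag([np.exp(-1j * alpha / 2), np.exp(1j * alpha / 2)])

a, b = 0.7, 1.9
# Two rotations about the same axis compose to a single rotation by a+b
assert np.allclose(d_half(a + b), d_half(b) @ d_half(a))

# Each d is unitary
D = d_half(a)
assert np.allclose(D @ D.conj().T, np.eye(2))
```

For rotations about different axes the composed angle <math>\vec\gamma</math> is a more complicated function of <math>\vec\alpha</math> and <math>\vec\beta</math>, but the representation property <math>d^{(j)}(\vec\gamma)=d^{(j)}(\vec\beta)d^{(j)}(\vec\alpha)</math> still holds.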
The rotation operators <math>R_{\vec \alpha}</math> act on the set of states <math>|jm \rangle</math>, for fixed j, in an irreducible fashion. To see what this means, let's consider the effect of rotations on the combined set of eight states with <math>j=1</math> and <math>j=2</math>. Under any rotation a <math>j=1</math> state becomes a linear combination of <math>j=1</math> states, with no <math>j=2</math> components; conversely, a <math>j=2</math> state becomes a linear combination of <math>j=2</math> states with no <math>j=1</math> components. Thus this set of eight states splits into two subsets (of three and five states) which transform among themselves under rotations with no mixing; one says that the rotations act on these eight states reducibly. On the other hand, for a set of states all with the same j, there is no smaller subset of states that transforms privately among itself under all rotations; the rotations are said to act irreducibly. Put another way, if we start with any state <math>|jm \rangle</math>, then we can rotate it into <math>2j+1</math> linearly independent states, and therefore there cannot be any proper subspace of the j states that transforms among itself under rotations. One can prove this in detail starting from the fact that one can generate all the <math>|jm \rangle</math> states starting from <math>|jj \rangle</math>
by applying <math>J_{-}</math> enough times.
==='''<span style="color:#2B65EC">Tensor operators</span>''' ===
The types of operators having simple transformation properties under rotations are known as tensor operators. By an irreducible tensor operator <math>\bold T^{(k)}</math> of order k we shall mean a set of 2k+1 operators <math>T_q ^{(k)},\;q=\;-k,\;-k+1,....,k-1,\;k </math> that transform among themselves under rotation according to the transformation law:
<math>R_{\vec \alpha} T_q ^{(k)}R_{\vec \alpha} ^{-1}=\sum_{q'=-k}^{k}T_{q'} ^{(k)}d_{q'q}^{(k)}(\vec \alpha)</math>
If we consider an infinitesimal rotation <math>\vec \epsilon</math>, then
<math>R_{\vec \epsilon } = e^{- \frac {i} {\hbar} \vec J \cdot \vec \epsilon} \approx 1- \frac {i} {\hbar} \vec J \cdot \vec \epsilon</math>
to first order in <math>\vec \epsilon</math>,
<math>T_q ^{(k)}-\frac {i}{\hbar}[\vec J \cdot \vec \epsilon , T_q ^{(k)}]=T_q ^{(k)}- \frac {i}{\hbar} \vec \epsilon \cdot \sum_{q'=-k}^{k}T_{q'} ^{(k)} \langle kq' |\vec J| kq \rangle</math>
Comparing coefficients of <math>\vec \epsilon</math>, we see that tensor operators must obey the following commutation relation with the angular momentum:
<math>[\vec J , T_q ^{(k)}]=\sum_{q'=-k}^{k}T_{q'} ^{(k)} \langle kq' |\vec J| kq \rangle</math>
The z component of this relation is
<math>[J _z, T_q ^{(k)}]=\hbar q T_{q} ^{(k)}</math>
while
<math>[J _{\pm}, T_q ^{(k)}]=\sum_{q'=-k}^{k}T_{q'} ^{(k)}\langle kq' |J_{\pm}| kq \rangle=\hbar T_{q \pm 1} ^{(k)} \sqrt {k(k+1)-q(q \pm 1)}</math>
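These commutation relations can be verified numerically by taking <math>\vec J</math> itself as a <math>k=1</math> tensor operator, with spherical components <math>T_{\pm 1}=\mp(J_x\pm iJ_y)/\sqrt{2}</math>, <math>T_0=J_z</math>. A Python sketch using the <math>j=1</math> matrices, with <math>\hbar=1</math>:

```python
import numpy as np

hbar = 1.0
# j=1 angular momentum matrices, basis m = +1, 0, -1
Jz = hbar * np.diag([1., 0., -1.]).astype(complex)
Jp = hbar * np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2j)

# Spherical components of the vector operator V = J itself (a k=1 tensor)
T = { 1: -(Jx + 1j*Jy) / np.sqrt(2),
      0: Jz,
     -1:  (Jx - 1j*Jy) / np.sqrt(2)}

def comm(A, B):
    return A @ B - B @ A

k = 1
for q in (-1, 0, 1):
    # [Jz, T_q] = hbar q T_q
    assert np.allclose(comm(Jz, T[q]), hbar * q * T[q])
    # [J+, T_q] = hbar sqrt(k(k+1) - q(q+1)) T_{q+1}
    rhs = T[q + 1] if q < k else np.zeros((3, 3))
    assert np.allclose(comm(Jp, T[q]), hbar * np.sqrt(k*(k+1) - q*(q+1)) * rhs)
```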
Tensor operators have many simple properties. For example, <math>T_q ^{(k)}</math>, acting on a state <math>|\alpha j_1 m_1 \rangle</math> of a system (<math>\alpha</math> refers to all other quantum numbers), creates a state whose z component of angular momentum is <math>q+m_{1}</math>. To prove this, let us consider the transformation properties of the state <math>T_q ^{(k)}|\alpha j_1 m_1 \rangle</math> under rotation about the z axis by <math>\phi</math>:
<math>R_{\bold \phi}T_q ^{(k)}|\alpha j_1 m_1 \rangle = R_{\bold \phi}T_q ^{(k)}R_{\bold \phi}^{-1}R_{\bold \phi}|\alpha j_1 m_1 \rangle = \sum_{q'} T_{q'} ^{(k)} d_{q'q}^{(k)} \sum_{m'_{1}}|\alpha j_1 m'_1 \rangle d_{m'_1 m_1}^{(j_1)}(\bold \phi)</math>
but <math>d_{m' m}^{(j)}(\bold \phi)=\delta _{m'm} e^{-im \bold \phi}</math>
so that
<math>R_{\bold \phi}T_q ^{(k)}|\alpha j_1 m_1 \rangle = e^{-i(q+m_1)\bold \phi}T_q ^{(k)}|\alpha j_1 m_1 \rangle</math>
This is exactly the transformation law for an eigenstate of <math>J_z</math> with eigenvalue <math>q+m_1</math>. Thus <math>T_q ^{(k)}</math> is an operator that increases the eigenvalue of <math>J_z</math> by q.
==='''<span style="color:#2B65EC">Wigner-Eckart theorem</span>''' ===
The Wigner-Eckart theorem states that in a total angular momentum basis, the matrix element of a tensor operator can be expressed as the product of a factor that is independent of <math>\displaystyle{j_z}</math> and a Clebsch-Gordan coefficient. To see how this is derived, we can start with the matrix element <math>\langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle</math>, where <math>\underbrace{\alpha}</math> represents all the properties of the state not related to angular momentum:
<math>\langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle = \int d\alpha \langle \underbrace{\alpha \prime} j\prime m\prime |R_{\alpha}^{-1}R_{\alpha}|\underbrace{\tilde{\alpha}} j m\rangle</math>
<math>\Rightarrow \langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle = \sum_{m_1 ,m_1 \prime} \int d\alpha  \ d_{m_1 \prime m\prime}^{(j\prime )}(\alpha)^{*} d_{m_1 m}^{(j)}(\alpha)\langle \underbrace{\alpha \prime} j\prime m_1 \prime |\underbrace{\tilde{\alpha}} j m_1 \rangle</math>
Using the orthogonality of rotation matrices, this reduces to
<math>\langle \underbrace{\alpha \prime} j\prime m\prime |\underbrace{\tilde{\alpha}} j m\rangle = \delta _{jj\prime} \delta _{mm\prime} \sum _{m_1} \frac{\langle \underbrace{\alpha \prime} j\prime m_1|\underbrace{\tilde{\alpha}} j\prime m_1 \rangle}{2j\prime +1}</math>
Finally, using the fact that <math>|\underbrace{\tilde{\alpha}} j m\rangle = \sum_{q, \tilde{m}} T_{q}^{(k)}|\underbrace{\alpha} \tilde{j} \tilde{m}\rangle\langle k\tilde{j}q\tilde{m}|k\tilde{j}jm\rangle</math> and the orthogonality of the Clebsch-Gordan coefficients, we obtain
<math>\langle \underbrace{\alpha \prime} j\prime m\prime |T_{q}^{(k)}|\underbrace{\alpha} j m\rangle = \sum_{m_1}\frac{\langle \underbrace{\alpha \prime} j\prime m_1 |\underbrace{\tilde{\alpha}} j\prime m_1 \rangle}{2j\prime +1} \langle kjqm|kjj\prime m\prime \rangle </math>
Historically, this is written as
<math>\langle \underbrace{\alpha \prime} j\prime m\prime |T_{q}^{(k)}|\underbrace{\alpha} j m\rangle = \frac{\langle \underbrace{\alpha \prime} j\prime || T_{q}^{(k)}|| \underbrace{\alpha} j \rangle}{\sqrt{2j\prime +1}} \langle kqjm|kjj\prime m\prime \rangle </math>
where <math>\langle \underbrace{\alpha \prime} j\prime || T_{q}^{(k)}|| \underbrace{\alpha} j \rangle</math> is referred to as the reduced matrix element.
In summary, the '''Wigner-Eckart theorem''' states that the matrix elements of spherical tensor operators <math>T_q^{(k)}</math> with respect to the total-<math>\bold J</math> eigenstates <math>|j,m \rangle </math> can be written in terms of the Clebsch-Gordan coefficients, <math>\langle kqjm|kjj\prime m\prime \rangle</math>, and the reduced matrix elements of <math>T_q^{(k)}</math>, which ''do not'' depend on the orientation of the system in space, i.e., have no dependence on <math>\! m\prime </math>, <math>\! m </math>, and <math>\! q </math>:
<math>
\langle \underbrace{\alpha \prime} j\prime m\prime |T_{q}^{(k)}|\underbrace{\alpha} j m\rangle = \frac{\langle \underbrace{\alpha \prime} j\prime || T_{q}^{(k)}|| \underbrace{\alpha} j \rangle}{\sqrt{2j\prime +1}} \langle kqjm|kjj\prime m\prime \rangle
</math>
As an example of how this theorem can be useful, consider the matrix element of <math>T_{0}^{(1)} = z =r\cos \left(\theta \right)</math> between hydrogen atom states <math>|n \ell m\rangle</math>. Because of the Clebsch-Gordan coefficients, the matrix element <math>\langle n\prime \ell \prime m\prime | T_{0}^{(1)}|n\ell m\rangle</math> is automatically zero unless <math>\displaystyle{m=m\prime}</math> and <math>\displaystyle{\ell \prime = \ell \pm 1}</math> or <math>\displaystyle{\ell \prime = \ell}</math>. Also, because z is odd under parity, we can eliminate the <math>\ell \prime = \ell</math> transition as well.
Also, for <math>x=\frac{1}{\sqrt{2}}\left(T_{-1}^{1}-T_{1}^{1}\right)</math>, the Wigner-Eckart Theorem reads
<math>\langle n \ell  m | x|n\ell m\rangle=\frac{1}{\sqrt{2}}\langle n \ell || T^1 || n \ell \rangle\left(C^{\ell m}_{\ell m11}-C^{\ell m}_{\ell m1-1}\right)</math>
The result vanishes since the CG coefficients on the right hand side are zero.
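As a numerical illustration of these selection rules (a sketch; the low-order spherical harmonics are written out by hand rather than taken from a library), one can integrate <math>Y_{\ell' m'}^{*}\cos\theta\, Y_{\ell m}</math> over the sphere:

```python
import numpy as np

# Midpoint grid in theta, periodic grid in phi.
Nt, Np = 400, 400
theta = (np.arange(Nt) + 0.5) * np.pi / Nt
phi = np.arange(Np) * 2 * np.pi / Np
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dA = (np.pi / Nt) * (2 * np.pi / Np)

# Low-order spherical harmonics, written out explicitly.
Y00 = np.full_like(TH, 1 / np.sqrt(4 * np.pi), dtype=complex)
Y10 = np.sqrt(3 / (4 * np.pi)) * np.cos(TH) + 0j
Y11 = -np.sqrt(3 / (8 * np.pi)) * np.sin(TH) * np.exp(1j * PH)

def matrix_element(Ya, Yb):
    """<Y_a| cos(theta) |Y_b> integrated over the unit sphere."""
    return np.sum(np.conj(Ya) * np.cos(TH) * Yb * np.sin(TH)) * dA

print(abs(matrix_element(Y10, Y00)))  # allowed:   l: 0 -> 1, m unchanged; equals 1/sqrt(3)
print(abs(matrix_element(Y00, Y00)))  # forbidden: Delta-l = 0 (parity)
print(abs(matrix_element(Y11, Y00)))  # forbidden: Delta-m != 0
```

Only the angular integral is shown; the radial integral factors out and does not affect the selection rules.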
Problem [http://wiki.physics.fsu.edu/wiki/index.php/Editing_Matrix_Elements_and_the_Wigner_Eckart_Theorem_Example]
EXAMPLE PROBLEM [http://wiki.physics.fsu.edu/wiki/index.php/DetailedBalance]
Application [http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/Using_Wigner-Eckart_Theorem_to_get_selection_rule_of_spontaneous_emission]
== Elements of relativistic quantum mechanics ==
The description of phenomena at high energies requires the investigation of relativistic wave equations, that is, equations which are invariant under Lorentz transformations. The translation from a nonrelativistic to a relativistic description implies that several concepts of the nonrelativistic theory have to be reinvestigated, in particular:
(1) Spatial and temporal coordinates have to be treated equally within the theory.
(2) Since, from the Uncertainty principle, we know
<math>\triangle x \sim \frac{\hbar}{\triangle p} \sim \frac{\hbar}{m_{0} c}</math>,
a relativistic particle cannot be localized more accurately than <math>\approx \hbar/{m_{0} c}</math>; otherwise pair creation occurs for <math>E > 2m_{0} c^2</math>. Thus, the idea of a free particle only makes sense if the particle is not confined by external constraints to a volume smaller than approximately the Compton wavelength <math>\lambda_c=\hbar/{m_{0} c}</math>. Otherwise, the particle automatically has companions due to particle-antiparticle creation.
(3) If the position of the particle is uncertain, i.e. if
<math>\triangle x > \frac{\hbar}{m_{0} c}</math>,
then the time is also uncertain, because
<math>\triangle t \sim \frac{\triangle x}{c} > \frac{\hbar}{m_{0} c^2}</math>.
In a nonrelativistic theory, <math>\triangle t</math> can be arbitrarily small, because <math>c \to \infty</math>. We therefore recognize the necessity of reconsidering the concept of probability density, which describes the probability of finding a particle at a definite place <math>r</math> at a fixed time <math>t</math>.
(4) At high energies, i.e. in the relativistic regime, pair creation and annihilation processes occur, usually in the form of creating particle-antiparticle pairs. Thus, at relativistic energies, particle conservation is no longer a valid assumption. A relativistic theory must be able to describe phenomena such as pair creation, vacuum polarization, and particle non-conservation.
In nonrelativistic quantum mechanics, states of particles are described by Schrodinger equation of states:
<math>i\hbar\frac{\partial\psi(\bold r, t)}{\partial t}=\left(-\frac{\hbar^2}{2m}\nabla^2+V(\bold r, t)\right)\psi(\bold r, t)</math>
The Schrodinger equation is a first-order differential equation in time but second order in space; therefore, it is not invariant under Lorentz transformations. As mentioned above, in relativistic quantum mechanics the equation describing the states must be invariant under Lorentz transformations. In order to satisfy this condition, the equation of state must contain derivatives of the same order with respect to time and space. The equations of state in relativistic quantum mechanics are the Klein-Gordon equation (for spinless particles) and the Dirac equation (for spin <math>\frac {1}{2}</math> particles). The former contains second-order derivatives while the latter contains first-order derivatives with respect to both time and space. The way to derive these equations is similar to that of the Schrodinger equation: making use of the correspondence principle, one starts from the equation connecting energy and momentum and substitutes <math>E \rightarrow i\hbar \frac {\partial}{\partial t}</math> and <math>\bold p \rightarrow -i\hbar \nabla</math>.
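Concretely, for the Klein-Gordon case one starts from the relativistic energy-momentum relation and makes these substitutions:

<math>E^2 = p^2c^2 + m^2c^4 \quad \Rightarrow \quad -\hbar^2\frac{\partial^2 \psi}{\partial t^2} = \left(-\hbar^2 c^2 \nabla^2 + m^2c^4\right)\psi,</math>

which can be rearranged as

<math>\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2c^2}{\hbar^2}\right)\psi = 0.</math>

Both derivatives now appear at second order, so the equation treats space and time on an equal footing.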
Follow this link to learn about [[Klein-Gordon equation|Klein-Gordon equation]].
Follow this link to learn about [[Dirac equation|Dirac equation]].
Here is a worked problem for a [[Phy5646/Group3RelativisticProb | free relativistic particle]].
Here is a worked problem to review the use of relativistic 4-vectors: [[Phy5646/Group5RelativisticProb | relativistic 4-vectors]]
== '''The Adiabatic Approximation and Berry Phase''' ==
The adiabatic approximation can be applied to systems in which the Hamiltonian evolves '''slowly''' with time. The Hamiltonian of an adiabatic system contains several degrees of freedom. The basic idea behind the adiabatic approximation is to solve the Schrodinger equation for the "fast" degree of freedom and only then allow the "slow" degree of freedom to evolve slowly. For example, imagine a molecule with a heavy nucleus and an electron. In this system there is a "slow" degree of freedom (the nucleus) and a "fast" degree of freedom (the electrons). Imagine that the nucleus is stationary, and the electrons align themselves. Now that the electrons have aligned themselves, allow the nucleus to move very slowly - which will cause the electrons to realign. This is the adiabatic approximation.
==='''<span style="color:#2B65EC">Adiabatic Process</span>''' ===
An adiabatic process is one in which the external conditions of a system change gradually.
More precisely, let <math>T_{i}</math> be the internal characteristic time scale of the system and <math>T_{e}</math> the time scale on which the external conditions change; an adiabatic process is one for which <math>T_{e}\gg T_{i}</math>.
==='''<span style="color:#2B65EC">The Adiabatic Theorem</span>''' ===
The adiabatic theorem states that if a system is initially in the ''n''th state and if its Hamiltonian evolves slowly with time, it will be found at a later time in the ''n''th state of the new Hamiltonian. (Proof: Messiah Q.M. (wiley NY 1962) Vol II ch. XVII)
Application(Born-Oppenheimer Approximation)[http://wiki.physics.fsu.edu/wiki/index.php/Phy5646/Born-Oppenheimer_Approximation]
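A minimal numerical illustration of the adiabatic theorem (a sketch, not from the text: <math>\hbar = 1</math>, and an assumed two-level Hamiltonian whose field direction rotates slowly from <math>z</math> to <math>x</math>):

```python
import numpy as np

# Spin in a field whose direction rotates slowly from +z to +x.  Starting in
# the instantaneous ground state, the adiabatic theorem predicts the state
# tracks the instantaneous ground state of H(t) throughout the evolution.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(s):                      # s runs from 0 to 1
    th = s * np.pi / 2
    return np.cos(th) * sz + np.sin(th) * sx

T = 200.0                      # total time, large compared to 1/gap (gap = 2)
steps = 20000
dt = T / steps

vals, vecs = np.linalg.eigh(H(0))
psi = vecs[:, 0]               # initial state: ground state of H(0)

for n in range(steps):
    vals, vecs = np.linalg.eigh(H((n + 0.5) / steps))
    # evolve one step with the (piecewise-constant) instantaneous Hamiltonian
    psi = vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ psi))

vals, vecs = np.linalg.eigh(H(1.0))
overlap = abs(vecs[:, 0].conj() @ psi)
print(f"|<ground(T)|psi(T)>| = {overlap:.6f}")   # close to 1 for slow evolution
```

Rerunning with a much smaller T (fast sweep) makes the final overlap drop well below 1, which is the breakdown of the adiabatic approximation.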
==='''<span style="color:#2B65EC">Geometric Phase (Berry Phase)</span>''' ===
The phase of a wave function is often considered arbitrary, since it cancels out of most physical quantities, such as <math> |\Psi |^2 </math>. For that reason, the time-dependent phase factor on the wave function of a particle going from the ''n''th eigenstate of <math>\hat{H}_0</math> to the ''n''th eigenstate of <math>\hat{H}_t</math> was ignored. However, Berry showed that if the Hamiltonian is carried around a closed loop, the relative phase is not arbitrary and cannot be gauged away. This is called the Berry phase. For more information on this discovery, see [http://mecklenburg.bol.ucla.edu/Berry%20Berry%20Phase%201984.pdf this paper.]
If <math>\psi(x, 0) = |n (0)\rangle</math>,
<math>\psi (x,t)\simeq e^{i\theta _n(t)}e^{i\gamma _n(t)}|n(t)\rangle</math>,
where <math>\theta _n(t)=-\frac{1}{\hbar }\int _{t_0}^tE_n\left(t'\right)dt'</math> is called the dynamic phase, and <math>\gamma _n(t)</math> is the geometric phase.
To find an explicit expression for the geometric phase, substitute <math>\psi (x,t)\simeq e^{i\theta _n(t)}e^{i\gamma _n(t)}|n(t)\rangle</math> into the Schrodinger equation:
<math>i\hbar \left[\frac{\partial |n(t)\rangle}{\partial t} -\frac{i}{\hbar }E_n(t)|n(t)\rangle +i\frac{d\gamma _n(t)}{dt}|n(t)\rangle \right]e^{i\theta _n(t)}e^{i\gamma _n(t)}=H(t)|n(t)\rangle e^{i\theta _n(t)}e^{i\gamma _n(t)}=E_n(t)|n(t)\rangle e^{i\theta _n(t)}e^{i\gamma _n(t)}</math>
Taking the inner product with <math>\langle n(t)|</math> then gives <math>\frac{d\gamma _n(t)}{dt}=i\langle n(t)|\frac{\partial }{\partial t}|n(t)\rangle.</math>
Since the time dependence enters only through a slowly varying parameter <math>R(t)</math>, <math>\frac{\partial }{\partial t}|n(t)\rangle=\frac{\partial |n(t)\rangle}{\partial R}\frac{\partial R}{\partial t}</math>, and therefore
<math>\gamma _n(t)=i\int _{t_0}^t\langle n\left(t'\right)|\frac{\partial }{\partial t'}|n\left(t'\right)\rangle dt'=i\int _{t_0}^t\langle n\left(t'\right)|\frac{\partial }{\partial R}|n\left(t'\right)\rangle\frac{\partial R}{\partial t'}dt'=i\int _{R_i}^{R_f}\langle n\left(R\right)|\frac{\partial }{\partial R}|n\left(R\right)\rangle dR</math>
This is the expression for the geometric phase.
For a one-dimensional parameter space the eigenstates can be chosen real, so <math>\gamma _n(t)=0</math>: there is no geometric phase change.
For a parameter space of more than one dimension, <math>\gamma _n(t)=i\int _{R_i}^{R_f}\langle n\left(R\right)|\nabla _R|n\left(R\right)\rangle \cdot d\vec{R},</math>
and the larger number of dimensions allows for a nonzero geometric phase change.
Berry's phase: if the Hamiltonian returns to its original form after a time T, the net geometric phase change around the closed loop is
<math>\gamma _n=i\oint \langle n\left(R\right)|\nabla _R|n\left(R\right)\rangle \cdot d\vec{R}.</math>
The geometric phase is special because it has a direct physical meaning: it can be observed in interference experiments.
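As a concrete check, the classic example is a spin-1/2 aligned with a magnetic field that is swept around a cone; the Berry phase is minus half the solid angle enclosed. The sketch below (assumptions: <math>\hbar = 1</math>, and a discretized, gauge-invariant form of the loop integral) reproduces this:

```python
import numpy as np

# Berry phase for a spin-1/2 aligned with a field swept around a cone of
# opening angle theta0 about the z axis.  The known result is
#   gamma = -(1/2) * (solid angle) = -pi*(1 - cos(theta0)).
# We evaluate the gauge-invariant discretization
#   gamma = -arg prod_i <n(R_i)|n(R_{i+1})>.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def aligned_state(theta, phi):
    n_dot_sigma = (np.sin(theta) * np.cos(phi) * sx
                   + np.sin(theta) * np.sin(phi) * sy
                   + np.cos(theta) * sz)
    vals, vecs = np.linalg.eigh(n_dot_sigma)
    return vecs[:, 1]               # eigenvalue +1: spin along the field

theta0 = 0.6
phis = np.linspace(0.0, 2.0 * np.pi, 4001)
states = [aligned_state(theta0, p) for p in phis]
states[-1] = states[0]              # close the loop with the same vector

prod = 1.0 + 0.0j
for a, b in zip(states[:-1], states[1:]):
    prod *= np.vdot(a, b)
gamma = -np.angle(prod)

expected = -np.pi * (1.0 - np.cos(theta0))
print(gamma, expected)              # both approximately -0.549
```

Because the first and last states of the loop are identical, the arbitrary phases returned by the eigensolver cancel pairwise, so the discrete product is gauge invariant.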
==='''<span style="color:#2B65EC">Berry Potentials</span>''' ===
It is possible to construct potentials that give rise to this phase, by carefully considering a general Hamiltonian of two interacting particles, where one is much larger (and hence slower) than the other. (This can also be done for more particles, but the construction is very similar.)
<math> \mathcal{H} = \frac{P^2}{2m_n} + \frac{p^2}{2m_e} + V(\vec{R},\vec{r}) </math> where <math>\vec{R}</math> refers to the coordinate of the larger particle, and not the center of mass.
After some work, it can be shown that terms similar to both a vector and scalar potential can be found that explicitly create the Berry Phase.
The final result is:
for the Vector Potential,
<math> A^{(n)} = i\hbar \langle n(R)|\vec{\nabla_R}|n(R)\rangle </math>
and for the Scalar Potential,
<math> \Phi^{(n)} = \frac{\hbar^2}{2m_n}\left(\langle\vec{\nabla_R} n(R)|\vec{\nabla_R} n(R) \rangle - \langle \vec{\nabla_R} n(R)|n(R)\rangle \langle n(R)|\vec{\nabla_R}n(R)\rangle \right) </math>
where <math> |n(R)\rangle </math> is the wavefunction of the smaller particle, which depends on the position of the larger one. More generally, it would be the wavefunction of the object with the 'fast' degree of freedom, depending on the state of the slower degree of freedom.
Once these are found, an effective Hamiltonian may be constructed:
<math> \mathcal{H} = \frac{1}{2m_n}\left(\vec{P}-\vec{A^{(n)}}\right)^2 + \Phi^{(n)} </math>
== '''Time Reversal Symmetry (& Kramer's Degeneracy)''' ==
The Schrodinger equation is:
<math>i\hbar\frac{\partial }{\partial (t)}\psi(r,t) =H\psi(r,t)</math>
Taking <math>t \rightarrow -t</math> yields:
<math>-i\hbar\frac{\partial }{\partial (t)}\psi(r,-t) =H\psi(r,-t)</math>
This is obviously not a symmetric transformation. If we instead take the complex conjugate and then let <math>t \rightarrow -t</math>, we obtain <math>i\hbar\frac{\partial }{\partial t}\psi ^*(r,-t)=H^*\psi ^*(r,-t)</math>, so <math>\psi ^*(r,-t)</math> satisfies the original equation provided <math>H^*=H</math>.
Whether or not this equation is symmetric depends on the form of '''H''' we are working with.
To find an expression for the time reversal operator, we consider the specific Hamiltonian for an electron:
<math> H = \frac{p^2}{2m} + V(r) + (\frac{1}{2m^2c^2}\frac{1}{r}\frac{dV}{dr})\mathbf{L}\cdot \mathbf{S}</math>
The time reversal operator for a one electron system is:
<math> \hat{K} = i \sigma _{y} C </math>  where C indicates to take the complex conjugate
To verify that this Hamiltonian is time reversal invariant, note that <math>K\psi=i\sigma_y\psi^*</math> and
<math>KHK^{\dagger}=(i\sigma_yC) H (i\sigma _yC)^{\dagger}=\sigma_y H^* \sigma_y = H,</math>
since complex conjugation sends <math>\bold p \rightarrow -\bold p</math> and <math>\bold L \rightarrow -\bold L</math>, while <math>\sigma_y \sigma_i^* \sigma_y = -\sigma_i</math> reverses <math>\bold S</math>; thus <math>p^2</math>, <math>V(r)</math> and <math>\mathbf{L}\cdot \mathbf{S}</math> are all unchanged.
We now show how a degeneracy follows from time reversal symmetry (Kramers degeneracy).
For n-electrons:
<math>\hat{K}=i^n\sigma _{y_1}\sigma _{y_2}...\sigma _{y_n}C</math>
For time reversal invariant H:
<math>(KHK^{\dagger})K\psi=E(K\psi)</math>
So, <math>\psi</math> and <math>K\psi</math> have same energy.
Assume they are linearly dependent: <math>K\psi=\psi'=a\psi</math>. Then
<math>K^2\psi=K\psi'=K(a\psi)=a^*K\psi=a^*a\psi=|a|^2\psi=\psi,</math>
since the antiunitary operator <math>K</math> preserves the norm, so <math>|a|=1</math>. Linear dependence therefore requires <math>K^2=+1</math>.
However, <math>K^2=(i^n\sigma_{y_1}\sigma_{y_2}...\sigma_{y_n} C)(i^n\sigma_{y_1}\sigma_{y_2}...\sigma_{y_n} C)=(-1)^n</math>
And thus we have arrived at Kramers' Degeneracy Theorem: for an odd number of electrons, the energy levels of the system are ''at least'' doubly degenerate, as long as H is time reversal invariant.
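The sign <math>K^2=(-1)^n</math> can be checked directly with Pauli matrices (a small numerical sketch, not from the text):

```python
import numpy as np

# Check K^2 = (-1)^n for the time-reversal operator K = i^n sigma_y(1)...sigma_y(n) C,
# where C is complex conjugation.
sy = np.array([[0, -1j], [1j, 0]])

def K(psi):                              # one electron: K = i sigma_y C
    return 1j * sy @ np.conj(psi)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
assert np.allclose(K(K(psi)), -psi)      # K^2 = -1 for n = 1 (odd)

SY = np.kron(sy, sy)                     # two electrons: sigma_y(1) sigma_y(2)

def K2e(psi):                            # K = i^2 sigma_y(1) sigma_y(2) C
    return (1j) ** 2 * SY @ np.conj(psi)

psi2 = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.allclose(K2e(K2e(psi2)), psi2) # K^2 = +1 for n = 2 (even)
print("Kramers sign check passed")
```

Only for odd n does <math>K^2=-1</math> force <math>\psi</math> and <math>K\psi</math> to be linearly independent, giving the twofold degeneracy.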
== '''Many Particle Systems and Identical Particles''' ==
At this point it is second nature to write down the Hamiltonian for a system if the potential and kinetic energy of the particle are known.  The Hamiltonian is then denoted by: <math> \hat H = \frac{p^2}{2m} + V(\vec r) </math>
The next natural step is to investigate what the Hamiltonian, and the resulting wavefunctions and energies, look like for systems with more than one particle.  The easiest place to start is with two identical particles.
==='''<span style="color:#2B65EC">Two Identical Particles</span>''' ===
It is straightforward to generalize the Hamiltonian to two identical particles:
<math> \hat H = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} + V(\vec r_1) + V(\vec r_2) + (u(\vec r_1, \vec r_2) + u(\vec r_2, \vec r_1)), </math>
where the potential and kinetic energy are written down for each particle individually and there is an additional term which represents the interaction between the two particles.  For simplicity we will treat the interaction potential as a central force and from this point on write it as <math> u(| \vec r_1 - \vec r_2|) </math>.
The above Hamiltonian for the two identical particles is invariant under the exchange of particle labels, and as such its eigenfunctions can be chosen to be either even or odd under that exchange.  Ignoring spin-orbit coupling, the general solution will therefore be of the form:
<math> \psi (\eta_1, \eta_2) = \phi (\vec r_1, \vec r_2) \chi (\sigma_1, \sigma_2), </math>
where <math> \sigma_1 \!</math> and <math> \sigma_2 \!</math> are spin labels, not Pauli spin matrices. If <math> \psi (\eta_1, \eta_2) \! </math> is a solution, then <math> \psi  (\eta_2, \eta_1) \! </math> is also a solution, and from the two we can form one symmetric and one anti-symmetric combination:
<math> \Rightarrow \frac{1}{\sqrt{2}} \left(\psi (\eta_1, \eta_2) + \psi (\eta_2, \eta_1)\right) </math>
and
<math> \Rightarrow \frac{1}{\sqrt{2}} \left(\psi (\eta_1, \eta_2) - \psi (\eta_2, \eta_1)\right) </math>
Although mathematically this formula will result in symmetric and anti-symmetric solutions, in nature that is not the case, and the solution must be chosen to be one or the other.  If the system deals with two fermions, which have half-integer spin, then only the anti-symmetric solution appears in nature.  Likewise if the system deals with two bosons, which have integer spin, then only the symmetric solution appears in nature.
==='''<span style="color:#2B65EC">N Particles</span>''' ===
If the Hamiltonian were for a three particle system, it would be:
<math> \hat H = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} + \frac{p_3^2}{2m} + V(\vec r_1) + V(\vec r_2) + V(\vec r_3) +\left(u(|\vec r_1 - \vec r_2 |) + u(| \vec r_1 - \vec r_3 |) + u(| \vec r_2 - \vec r_3 |)\right).</math>
In general, the Hamiltonian for a system with N particles can be written as:
<math>\hat H = \sum _{j=1} ^{N} \left( \frac{p^2_j}{2m} + V(\vec r_j)\right) + \frac{1}{2} \sum_{j \neq k}^N u(|\vec r_j - \vec r_k|). </math>
In general it is difficult to solve this problem with the interaction terms, but suppose that we can. The only physically admissible states are, as before, either symmetric or antisymmetric under exchange of any two particle labels; therefore, the wavefunction is given by:
<math>\psi (\eta_1, \eta_2, \eta_3, ... , \eta_N) = \phi (\vec r_1, \vec r_2, \vec r_3, ... , \vec r_N) \chi (\sigma_1, \sigma_2, \sigma_3, ..., \sigma_N) </math>,
and it follows the same rules as before for bosons and fermions.
It is important to note that if a solution doesn't satisfy the proper symmetry, then a linear combination of all permutations will result in a properly symmetrized solution that will be an eigenstate.
==='''<span style="color:#2B65EC">Constructing Admissible Eigenstates</span>''' ===
As stated above, if a solution does not satisfy the necessary symmetry properties, then a linear combination of the different permutations of product states (that are completely symmetric for bosons and anti-symmetric for fermions) must be made. 
For spin-less bosons the normalized wavefunction is:
<math>\psi_{\mbox{bosons}} (1,2,....N) = \sqrt{\frac{N_a! N_b! .... N_n!}{N!}} \sum_P P \varphi_a (1) \varphi_b(2) ....... \varphi_n (N)</math>
where the sum runs over the distinct permutations of the indices 1 through <math> N \!</math> (permutations that merely interchange particles occupying the same orbital do not produce new terms).
For spin-less fermions the normalized wavefunction is:
<math>\psi_{\mbox{fermions}} (1,2,....N) = \frac{1}{\sqrt{N!}}\sum_P  (-1)^P P \varphi_a (1) \varphi_b(2) ....... \varphi_n (N)</math>
where <math>(-1)^P \!</math>  is <math> +1 \!</math> if a permutation can be decomposed into an even number of two particle exchanges and <math> -1 \!</math> for odd.
Another way of writing the sum that forms the anti-symmetric wavefunction is through the use of the Slater determinant.
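A small numerical sketch of the Slater-determinant construction (the plane-wave orbitals below are hypothetical placeholders chosen only for illustration; any orthonormal single-particle set would do):

```python
import math
import numpy as np

# Three spinless fermions in plane-wave orbitals on a ring.
def orbital(k, x):
    return np.exp(1j * k * x) / np.sqrt(2 * np.pi)

ks = [0, 1, 2]                        # occupied single-particle orbitals

def slater(xs):
    """Antisymmetric three-fermion wavefunction psi(x1, x2, x3)."""
    M = np.array([[orbital(k, x) for x in xs] for k in ks])
    return np.linalg.det(M) / math.sqrt(math.factorial(len(ks)))

x1, x2, x3 = 0.3, 1.1, 2.5
# Exchanging any two particle labels flips the sign of the wavefunction:
assert np.isclose(slater((x2, x1, x3)), -slater((x1, x2, x3)))
# Two fermions at the same point: the wavefunction vanishes (Pauli exclusion).
assert abs(slater((x1, x1, x3))) < 1e-12
print("Slater determinant antisymmetry verified")
```

The determinant automatically produces the signed sum over permutations, so antisymmetry and the exclusion principle come for free.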
==='''<span style="color:#2B65EC">Second Quantization</span>''' ===
Consider now a wavefunction pertaining to a many-particle system, which is treated as a field variable. For the many-particle system, this field variable must also be quantized, by a process known as second quantization.
In order to perform this quantization of the field variable, we must construct special raising and lowering operators associated with the individual energy levels of the system, <math>\hat{a}_k ^{\dagger}</math> and <math>\hat{a}_k</math>, which add and subtract particles from the <math>k^{th}</math> energy level, respectively. In the presence of spin, an additional subscript is added to separate the creation and annihilation operators for each case of spin, so that each operator only acts on particles with the same spin attributed to said operator. In the simple, although rather non-physical, case of spinless particles, this extra subscript can be ignored for simplicity in examining how the operators work on the quantized field.
For the case of fermions, an additional constraint on the operators is placed due to the exclusion principle:
<math> (\hat{a}_k ^{\dagger})^2 = 0 \!</math>
Given the two classes of particles, fermions and bosons, two sets of relations result to relate the creation and annihilation operators.
For the case of bosons, the operators obey a commutator relationship of the form:
<math> [ \hat{a}_i, \hat{a}_j ^{\dagger} ] = \delta_{ij}; \quad [ \hat{a}_i, \hat{a}_j ] = [ \hat{a}_i ^{\dagger}, \hat{a}_j ^{\dagger} ] = 0 \!</math>
The state of the system <math> |n_0, n_1, .., n_N\rangle \!</math> is therefore of the form:
<math> |n_0, n_1, .., n_N\rangle = {\frac{(\hat{a}_0 ^{\dagger})^{n_0}}{\sqrt{n_0 !}}}{\frac{(\hat{a}_1 ^{\dagger})^{n_1}}{\sqrt{n_1 !}}}...{\frac{(\hat{a}_N ^{\dagger})^{n_N}}{\sqrt{n_N !}}} |0\rangle \!</math>
Fermions, however, obey anti-commutator relationships, of the following form:
<math> \{ \hat{a}_i, \hat{a}_j ^{\dagger} \} = \delta_{ij}; \quad \{ \hat{a}_i, \hat{a}_j \} = \{ \hat{a}_i ^{\dagger}, \hat{a}_j ^{\dagger} \} = 0 \!</math>
For this type of system, the state <math> |n_0, n_1, .., n_N\rangle \!</math> can be written as:
<math> |n_0, n_1, .., n_N\rangle = (\hat{a}_0 ^{\dagger})^{n_0}(\hat{a}_1 ^{\dagger})^{n_1}...(\hat{a}_N ^{\dagger})^{n_N} |0\rangle \!</math>
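The bosonic ladder-operator algebra can be checked numerically by representing a single mode as matrices in a truncated Fock space (a sketch; the hard cutoff <math>N</math> is an artifact of the truncation, so the commutator is exact only below it):

```python
import math
import numpy as np

# Truncated single-mode bosonic Fock space (n = 0..N-1): matrix forms of a and
# a^dagger, checking [a, a^dagger] = 1 away from the truncation edge and the
# normalized n-particle state (a^dagger)^n |0> / sqrt(n!).
N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # a^dagger|n> = sqrt(n+1)|n+1>

comm = a @ ad - ad @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))   # = identity below the cutoff

vac = np.zeros(N)
vac[0] = 1.0                                 # vacuum |0>
n = 3
state = np.linalg.matrix_power(ad, n) @ vac / math.sqrt(math.factorial(n))
expected = np.zeros(N)
expected[n] = 1.0
assert np.allclose(state, expected)          # normalized |3>
print("bosonic ladder-operator checks passed")
```

The analogous fermionic check uses 2x2 matrices per level, where the anticommutation relations and <math>(\hat{a}_k^{\dagger})^2=0</math> hold exactly with no cutoff.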