The physical state of a system is represented by a set of probability amplitudes (wave functions), which form a linear vector space. This linear vector space is a particular type of space called a Hilbert space. Another way to think about the Hilbert space is as an infinite-dimensional space of square-normalizable functions. This is analogous to a three-dimensional space, where the basis is $(\hat{e}_1, \hat{e}_2, \hat{e}_3)$ in a generalized coordinate system. In the Hilbert space, the basis is formed by an infinite set of complex functions. The basis for a Hilbert space is written as $\{\psi_1, \psi_2, \ldots, \psi_n, \ldots\}$, where each $\psi_n$ is a complex vector function.
We denote a state vector $\psi$ in Hilbert space with Dirac notation as a “ket” $|\psi\rangle$, and its complex conjugate (or dual vector) $\psi^*$ is denoted by a “bra” $\langle\psi|$.
Therefore, in the space of wavefunctions that belong to the Hilbert space, any wave function can be written as a linear combination of the basis functions:
$$|\psi\rangle = \sum_n c_n\,|\psi_n\rangle,$$
where each $c_n$ is a complex number.
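To make the expansion concrete, here is a minimal numerical sketch (not from the text): it assumes a particle-in-a-box basis $\psi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L)$ on a grid, computes the coefficients $c_n = \langle\psi_n|\psi\rangle$ for an arbitrary wave function, and checks that the expansion reconstructs it.

```python
import numpy as np

# Grid and box length (illustrative values only)
L, N = 1.0, 2000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

# Orthonormal basis: particle-in-a-box states psi_n(x) = sqrt(2/L) sin(n*pi*x/L)
def basis(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# An arbitrary normalized wave function (a Gaussian bump inside the box)
psi = np.exp(-((x - 0.3 * L) ** 2) / (2 * 0.05 ** 2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Expansion coefficients c_n = <psi_n|psi> = integral of psi_n*(x) psi(x) dx
n_max = 50
c = np.array([np.sum(np.conj(basis(n)) * psi) * dx for n in range(1, n_max + 1)])

# Reconstruct |psi> = sum_n c_n |psi_n> and check convergence of the expansion
psi_rec = sum(c[n - 1] * basis(n) for n in range(1, n_max + 1))
print("sum |c_n|^2 =", np.sum(np.abs(c) ** 2))            # close to 1
print("max reconstruction error =", np.max(np.abs(psi - psi_rec)))
```

The sum $\sum_n |c_n|^2$ approaching 1 reflects both the normalization of $|\psi\rangle$ and the completeness of the chosen basis.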
By projecting the state vector $|\psi\rangle$ onto different bases, we can obtain the wave functions of the system in different bases. For example, if we project $|\psi\rangle$ onto the position basis $|x\rangle$, we would get
$$\psi(x) = \langle x|\psi\rangle,$$
while projecting onto the momentum basis $|p\rangle$ gives us
$$\tilde{\psi}(p) = \langle p|\psi\rangle.$$
We interpret $|\psi(x)|^2$ as the probability density of finding the system at position $x$, and $|\tilde{\psi}(p)|^2$ as the probability density of finding the system with momentum $p$.
In Dirac notation, the scalar product of two state vectors $|\psi\rangle$ and $|\phi\rangle$ is denoted by a “bracket” $\langle\phi|\psi\rangle$. In the position-space representation, the scalar product is given by
$$\langle\phi|\psi\rangle = \int \phi^*(x)\,\psi(x)\,dx,$$
and thus the normalization condition may now be written as
$$\langle\psi|\psi\rangle = \int |\psi(x)|^2\,dx = 1.$$
This additionally shows that any wave function is determined to within a phase factor $e^{i\alpha}$, where $\alpha$ is some real number.
The vectors in this space also obey some useful rules following from the fact that the Hilbert space is linear and complete:
$$\langle\phi|\psi\rangle = \langle\psi|\phi\rangle^*, \qquad
\langle\phi|\,c\,\psi\rangle = c\,\langle\phi|\psi\rangle, \qquad
\langle c\,\phi|\psi\rangle = c^*\,\langle\phi|\psi\rangle, \qquad
\sum_n |\psi_n\rangle\langle\psi_n| = \hat{1},$$
where $c$ is a complex number.
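The following short sketch (with illustrative states chosen arbitrarily, not from the text) approximates the bracket $\langle\phi|\psi\rangle = \int\phi^*(x)\,\psi(x)\,dx$ on a grid, checks the conjugate-symmetry rule, and confirms that a global phase $e^{i\alpha}$ leaves all probabilities unchanged.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2000)
dx = x[1] - x[0]

def braket(phi, psi):
    """<phi|psi> = integral phi*(x) psi(x) dx, approximated on the grid."""
    return np.sum(np.conj(phi) * psi) * dx

# Two normalized Gaussian states (illustrative choices)
psi = np.exp(-x ** 2 / 2).astype(complex)
psi /= np.sqrt(braket(psi, psi).real)
phi = np.exp(-(x - 1.0) ** 2 / 2).astype(complex)
phi /= np.sqrt(braket(phi, phi).real)

print("<psi|psi> =", braket(psi, psi))               # 1 (normalization)
print("<phi|psi> =", braket(phi, psi))
print("<psi|phi> =", braket(psi, phi))               # complex conjugate of the above

# A global phase factor e^{i*alpha} leaves the norm and all |psi(x)|^2 unchanged
alpha = 0.7
psi_phase = np.exp(1j * alpha) * psi
print("<psi'|psi'> =", braket(psi_phase, psi_phase))  # still 1
print("max deviation of |psi'|^2 from |psi|^2 =",
      np.max(np.abs(np.abs(psi_phase) ** 2 - np.abs(psi) ** 2)))
```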
In Dirac notation, the Schrödinger equation is written as
$$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle.$$
By projecting this equation onto position space, we recover the previous form of the Schrödinger equation,
$$i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi(x,t)}{\partial x^2} + V(x)\,\psi(x,t).$$
On the other hand, we can also project it onto momentum space and obtain
$$i\hbar\,\frac{\partial \tilde{\psi}(p,t)}{\partial t} = \frac{p^2}{2m}\,\tilde{\psi}(p,t) + \frac{1}{\sqrt{2\pi\hbar}}\int \tilde{V}(p-p')\,\tilde{\psi}(p',t)\,dp',$$
where $\tilde{V}$ is the Fourier transform of the potential, and $\psi(x,t)$ and $\tilde{\psi}(p,t)$ are related through a Fourier transform, as described in the next section.
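For a free particle ($V = 0$) the momentum-space equation reduces to an independent ordinary differential equation for each $p$, so $\tilde{\psi}(p,t) = \tilde{\psi}(p,0)\,e^{-ip^2 t/2m\hbar}$. The sketch below (natural units $\hbar = m = 1$, an assumed Gaussian initial packet, and NumPy's FFT standing in for the Fourier transform) evolves a packet this way and shows that the norm is conserved while the position-space packet spreads.

```python
import numpy as np

hbar = m = 1.0                               # natural units (assumption)
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)   # momentum grid matching np.fft

# Initial Gaussian packet with mean momentum p0, normalized on the grid
sigma, p0 = 1.0, 1.5
psi0 = np.exp(-x ** 2 / (4 * sigma ** 2) + 1j * p0 * x / hbar)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

# Free particle (V = 0): psi_tilde(p, t) = psi_tilde(p, 0) e^{-i p^2 t / (2 m hbar)},
# so evolution is a multiplication by a phase in momentum space
def evolve(psi, t):
    psi_p = np.fft.fft(psi)
    psi_p *= np.exp(-1j * p ** 2 * t / (2 * m * hbar))
    return np.fft.ifft(psi_p)

for t in (0.0, 2.0, 5.0):
    psi_t = evolve(psi0, t)
    prob = np.abs(psi_t) ** 2
    norm = np.sum(prob) * dx
    width = np.sqrt(np.sum(x ** 2 * prob) * dx - (np.sum(x * prob) * dx) ** 2)
    print(f"t={t:4.1f}  norm={norm:.6f}  width={width:.3f}")
# The norm stays 1 while the packet spreads; |psi_tilde(p,t)|^2 itself never changes,
# since only the phase of each momentum component evolves.
```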
For time-independent Hamiltonians, the wave function may be separated into a position-dependent part and a time-dependent part,
$$|\psi(t)\rangle = |\psi\rangle\,e^{-iEt/\hbar},$$
as described previously, thus yielding the equation for stationary states in Dirac notation:
$$\hat{H}\,|\psi\rangle = E\,|\psi\rangle.$$
The eigenfunctions (now also referred to as eigenvectors) are replaced by eigenkets. Use of this notation makes solving the Schrödinger equation much simpler for some problems, where the Hamiltonian can be rewritten in the form of matrix operators having some algebra (a defined set of operations on the basis vectors) over the Hilbert space of the eigenvectors of that Hamiltonian. (See the section on operators.)
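As a sketch of this matrix point of view (assuming a discretized position basis and a harmonic-oscillator potential in natural units, neither of which comes from the text), one can write the Hamiltonian as a finite matrix and obtain its eigenvalues and eigenkets by numerical diagonalization; the lowest eigenvalues approach the familiar $(n + \tfrac{1}{2})\hbar\omega$.

```python
import numpy as np

hbar = m = omega = 1.0                 # harmonic oscillator in natural units (assumption)
N, L = 800, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Hamiltonian as a matrix over a discretized position basis:
# kinetic term from a second-order finite difference, potential on the diagonal
kinetic = -(hbar ** 2) / (2 * m * dx ** 2) * (
    np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N) + np.diag(np.full(N - 1, 1.0), 1)
)
potential = np.diag(0.5 * m * omega ** 2 * x ** 2)
H = kinetic + potential

# Diagonalizing H gives the eigenvalues E_n and the eigenkets |psi_n> (columns of V)
E, V = np.linalg.eigh(H)
print("lowest eigenvalues:", E[:4])                       # ~ (n + 1/2) hbar*omega
print("exact values:      ", [(n + 0.5) * hbar * omega for n in range(4)])
```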
We now ask how an arbitrary state $|\psi(0)\rangle$ evolves in time. The initial state can be expressed as a linear superposition of the energy eigenstates $|\psi_n\rangle$ (with $\hat{H}\,|\psi_n\rangle = E_n\,|\psi_n\rangle$):
$$|\psi(0)\rangle = \sum_n c_n\,|\psi_n\rangle.$$
Solving the time-dependent Schrödinger equation, we then obtain, for a time-independent Hamiltonian,
$$|\psi(t)\rangle = \sum_n c_n\,e^{-iE_n t/\hbar}\,|\psi_n\rangle.$$
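A minimal sketch of this recipe, assuming an arbitrary two-level Hamiltonian (not from the text): diagonalize $\hat{H}$, expand $|\psi(0)\rangle$ in its eigenkets, attach the phases $e^{-iE_n t/\hbar}$, and recombine.

```python
import numpy as np

hbar = 1.0                              # natural units (assumption)

# A simple two-level Hamiltonian (illustrative values only)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)
E, V = np.linalg.eigh(H)                # eigenvalues E_n; eigenkets are the columns of V

# Arbitrary initial state, expanded in the energy eigenbasis: c_n = <psi_n|psi(0)>
psi0 = np.array([1.0, 0.0], dtype=complex)
c = V.conj().T @ psi0

def psi_t(t):
    """|psi(t)> = sum_n c_n e^{-i E_n t / hbar} |psi_n>."""
    return V @ (c * np.exp(-1j * E * t / hbar))

for t in (0.0, 1.0, 2.0):
    st = psi_t(t)
    print(f"t={t:.1f}  norm={np.vdot(st, st).real:.6f}  P(up)={abs(st[0]) ** 2:.4f}")
# The norm is conserved; only the relative phases of the c_n evolve, so the
# populations |c_n|^2 in the energy basis stay fixed while P(up) oscillates.
```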