Basic concepts of quantum mechanics

Objectives

In this chapter we introduce the Schrödinger equation, the dynamical equation describing a quantum mechanical system. We discuss the role of energy eigenvalues and eigenfunctions in the process of finding its solutions. Through the problem of the linear harmonic oscillator, the way one solves quantum mechanical problems is demonstrated. Finally, the expectation value and the variance of an operator are discussed.

Prerequisites

Elements of differential equations, and the material of Chapter 1. The classical equations of motion. The classical harmonic oscillator.

The Schrödinger equation

As we have seen in the previous section, the state of a particle in position space is described by a wave function $\Psi (x,t)$ in one, or $\Psi (\mathbf{r},t)$ in three dimensions. This is a probability amplitude, and $|\Psi (\mathbf{r},t)|^{2}$ is the probability density of finding the particle somewhere in space around the position given by the vector $\mathbf{r}$. It was Erwin Schrödinger who wrote down the equation whose solutions give us the concrete form of this function. His fundamental equation is called the time dependent Schrödinger equation, or dynamical equation, and has the form:

  \begin{equation} i\hbar \frac{\partial \Psi (\mathbf{r},t)}{\partial t}=-\frac{\hbar ^{2}}{2m}\Delta \Psi (\mathbf{r},t)+V(\mathbf{r})\Psi (\mathbf{r},t). \label{sch0} \end{equation}   (2.1)

To simplify the treatment we consider in this subsection the one dimensional motion along the $x$ axis:

  \begin{equation} i\hbar \frac{\partial \Psi (x,t)}{\partial t}=-\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}\Psi (x,t)}{\partial x^{2}}+V(x)\Psi (x,t). \label{sch1} \end{equation}   (2.2)
\includegraphics[width=450px]{schrodinger.png}
Figure 2.1:

In quantum mechanics the right hand side of the Schrödinger equation is written shortly as $\hat{H}\Psi (x,t)$, so (2.2) can be written as

  \begin{equation} i\hbar \frac{\partial \Psi (x,t)}{\partial t}=\hat{H}\Psi (x,t) \label{sch} \end{equation}   (2.3)

This notation has a deeper reason, which is explained here shortly. The operation $-\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}\Psi (x,t)}{\partial x^{2}}+V(x)\Psi (x,t)$ can be considered as a transformation of the function $\Psi (x,t)$, and the result is another function of $x$ and $t$. This means that the right hand side is a (specific) mapping that takes a function and yields again a function. The mapping is linear, and

  \begin{equation} \hat{H}=-\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}}{\partial x^{2}}+V(x) \label{Ham} \end{equation}   (2.4)

is a linear transformation, or linear operator, as it is called in quantum mechanics. The operator in question, $\hat{H}$, is called the operator of the energy, and bears the name Hamilton operator, or Hamiltonian for short. Later on, besides $\hat{H}$, we shall encounter other types of linear operators, corresponding to physical quantities other than the energy. The form (2.4) of the Hamiltonian is used for the specific problem of a particle moving in one dimension $x$, where the external force field is given by the potential energy function $V(x)$. For other problems the expression of the Hamiltonian is different, but the dynamical equation in the form (2.3) is the same. This is like classical mechanics, where the equation of motion for one particle is always $\mathbf{\dot{p}}=\mathbf{F}$, but the force $\mathbf{F}$ depends on the physical situation in question. In quantum mechanics the Schrödinger equation replaces Newton’s equation of motion of classical mechanics, which shows its fundamental significance.
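To make the action of $\hat{H}$ concrete, here is a minimal numerical sketch (an illustration only, using units with $\hbar =m=1$ and a harmonic potential chosen as an example) that applies a finite-difference version of (2.4) to a sample function:

```python
import numpy as np

# Grid and a sample wave function (hbar = m = 1, illustrative units)
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)            # a Gaussian test function
V = 0.5 * x**2                     # harmonic potential, chosen as an example

# H psi = -(1/2) psi'' + V psi, second derivative by central differences
d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
Hpsi = -0.5 * d2psi + V * psi

# For this particular Gaussian and potential, H psi = (1/2) psi:
# the test function happens to be an eigenfunction with eigenvalue 1/2
ratio = Hpsi[500:1500] / psi[500:1500]
print(ratio.min(), ratio.max())    # both close to 0.5
```

The point is only that $\hat{H}$ maps the array representing $\Psi$ into another array; that the ratio is constant here is a special feature of this test function, anticipating the eigenvalue problem discussed below.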

We enumerate here a few important properties of the equation (2.2), or equivalently those of eq. (2.3):

  • It is a linear equation, which means that if $\Psi _{1}(x,t)$ and $\Psi _{2}(x,t)$ are solutions of the equation, then any linear combination of the form $c_{1}\Psi _{1}+c_{2}\Psi _{2}$, where $c_{1}$ and $c_{2}$ are complex constants, is also a solution. The summation can be extended to infinite sums (with certain mathematical restrictions). Linearity is valid because on both sides of the equation we have linear operations: differentiations and multiplication by the given function $V(x)$.

  • Another important property is that the equation is of first order in time. Therefore if we give an initial function at $t=t_{0}$ depending only on $x$: $\Psi (x,t_{0})\equiv \psi _{0}(x)$, then by solving the equation we can find – in principle – a unique solution that satisfies this initial condition. There are of course infinitely many possible solutions, but they correspond to different initial conditions. The time dependence of the wave function from a given initial condition is called time evolution.

  • The equation conserves the norm of the wave function, which means that if $\Psi (x,t)$ obeys (2.2), then the integral of the position probability density with respect to $x$ is constant independently of $t$. Specifically, if this integral is equal to 1 at $t_{0}$, then it remains so for all times.

      \begin{equation} \int \limits _{-\infty }^{\infty }\left\vert \psi _{0}(x)\right\vert ^{2}dx=1\Longrightarrow \int \limits _{-\infty }^{\infty }\left\vert \Psi (x,t)\right\vert ^{2}dx=1,\quad \forall t \label{intpsi} \end{equation}   (2.5)

    In other words the normalization property remains valid for all times. This property is called unitarity of the time evolution. The proof of this statement is left as a problem.
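The unitarity property can also be checked numerically. The sketch below (an illustration with $\hbar =m=1$ and a harmonic potential chosen as an example) propagates a wave packet with the Crank–Nicolson scheme, which preserves the norm up to round-off, and prints the norm after many steps:

```python
import numpy as np

# Crank-Nicolson propagation of (2.2) with hbar = m = 1 (illustrative units)
x = np.linspace(-20, 20, 800)
dx = x[1] - x[0]
dt = 0.01
V = 0.5 * x**2                      # example potential

# Tridiagonal finite-difference Hamiltonian
H = np.diag(V + 1.0 / dx**2) \
    + np.diag(-0.5 / dx**2 * np.ones(len(x) - 1), 1) \
    + np.diag(-0.5 / dx**2 * np.ones(len(x) - 1), -1)

# (1 + i dt H/2) psi_new = (1 - i dt H/2) psi_old  -> one step as a matrix
A = np.eye(len(x)) + 0.5j * dt * H
B = np.eye(len(x)) - 0.5j * dt * H
step = np.linalg.solve(A, B)

psi = np.exp(-(x - 3.0)**2).astype(complex)   # displaced Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize per (2.5)

for n in range(500):
    psi = step @ psi
norm = np.sum(np.abs(psi)**2) * dx
print(norm)                          # stays equal to 1 to high accuracy
```

The packet oscillates in the well, but the integral of $|\Psi (x,t)|^{2}$ does not change, in accordance with (2.5).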

Problem 2.1

Using (2.2) prove that the time derivative of the normalization condition is zero, which means the validity of (2.5).

Stationary states

Specific solutions of eq. (2.2) can be found by making the "ansatz" (a German word also used in English mathematical texts, meaning an educated guess) that the solution is a product of two functions

  \begin{equation} \Psi (x,t)=\tau (t)u(x). \label{prod} \end{equation}   (2.6)

where $\tau (t)$ depends only on time, while $u(x)$ is a space dependent function. This procedure is called separation of variables. Not all solutions of Eq. (2.2) have this separated form, but it is useful to find this kind of solution first. Substituting the product (2.6) back into (2.2), we find

  \begin{equation} i\hbar u(x)\frac{\partial \tau (t)}{\partial t}=\left( -\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}u(x)}{\partial x^{2}}+V(x)u(x)\right) \tau (t). \end{equation}   (2.7)

Dividing by $\tau (t)u(x)$, we get:

  \begin{equation} i\hbar \frac{1}{\tau (t)}\frac{\partial \tau (t)}{\partial t}=\frac{1}{u(x)}\left( -\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}u(x)}{\partial x^{2}}+V(x)u(x)\right). \end{equation}   (2.8)

As the function on the left hand side depends only on time, while the one on the right hand side depends only on $x$, and they must be equal for all values of these variables, this is possible only if both sides are equal to a constant independent of $x$ and $t$. It is easy to check that the constant has to be of dimension of energy, and will be denoted by $\varepsilon $. We get then two equations. One of them is

  \begin{equation} i\hbar \frac{\partial \tau (t)}{\partial t}=\tau (t)\varepsilon . \end{equation}   (2.9)

with the solution

  \begin{equation} \tau (t)=Ce^{-i\varepsilon t/\hbar }, \label{taut} \end{equation}   (2.10)

where $C$ is an integration constant.

The other equation takes the form:

  \begin{equation} -\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}u(x)}{\partial x^{2}}+V(x)u(x)=\varepsilon u(x). \label{ensaj} \end{equation}   (2.11)

We recognize that on the left hand side we have now again the operator $\hat{H}$ acting this time on $u(x):$

  \begin{equation} \hat{H}u(x)=\varepsilon u(x) \label{Heigen} \end{equation}   (2.12)

This is a so called eigenvalue problem: the effect of the operator is such that it gives back the function itself multiplied by a constant. As the operator here is the Hamiltonian, i.e. the energy operator, (2.12) is an energy eigenvalue equation. Sometimes it is also called the (time independent) Schrödinger equation, but we shall not use this terminology.

Animation

\includegraphics[width=110px]{./animaciok/eigenvector_anim.png}

This simple gif animation tries to visualize the concept of eigenvectors.

http://upload.wikimedia.org/wikipedia/commons/0/06/Eigenvectors.gif

Animation

\includegraphics[width=110px]{./animaciok/eigenvec_demo.jpg}

Drag each vector until the coloured parallelogram vanishes. If you can do this for two independent vectors, they form a basis of eigenvectors and the matrix of the linear map becomes diagonal, that is, nondiagonal terms are zero. This is impossible for some of the initial matrices—try them all. When you have found an eigenvector, check that it can be prolonged in its own direction while remaining an eigenvector; it is interesting to keep an eye on the matrix at the same time.

http://demonstrations.wolfram.com/EigenvectorsByHand/

It is important to stress here that Eq. (2.11) must be considered together with certain boundary conditions to be satisfied by the solutions $u(x)$ at the boundaries of their domain. In other words the boundary conditions are part of the definition of the differential operator $\hat{H}$ in (2.11). The boundary conditions can be chosen in several different ways, and in general, physical considerations are used to choose the appropriate ones. In other words, among the solutions of (2.11) one has to select those special ones where $u(x)$ obeys the boundary conditions, which is usually possible only for certain specific values of $\varepsilon $. The functions $u_{\varepsilon }(x)$ obeying the equation with the given boundary conditions are called the eigenfunctions of $\hat{H}$ belonging to the corresponding energy eigenvalue $\varepsilon $: $\hat{H}u_{\varepsilon }(x)=\varepsilon u_{\varepsilon }(x)$.

According to the separation condition all the functions of the form

  \begin{equation} \psi _{\varepsilon }(x,t)=u_{\varepsilon }(x)e^{-i\varepsilon t/\hbar } \label{stac1} \end{equation}   (2.13)

will be solutions of the Schrödinger equation (2.2). The allowed energy eigenvalues $\varepsilon $, selected by the boundary conditions, can be discrete, in which case they are usually labelled by integers: $\varepsilon _{n}$ $(n=1,\ 2,\cdots )$, but they can also be continuous. In the latter case one finds proper solutions for all $\varepsilon $-s within a certain energy interval. It can be easily shown that the physically acceptable solutions must belong to real energy eigenvalues.

Problem 2.2

Show that the normalization condition allows only real values of $\varepsilon $.

In general, there are infinitely many solutions of the form (2.13), but they do not exhaust all the solutions. As the Hamiltonian, and hence the Schrödinger equation, is linear, appropriate linear combinations of the specific solutions (2.13) also obey the Schrödinger equation, and have the general form:

  \begin{equation} \Psi (x,t)=\sum _{n}c_{n}u_{n}(x)e^{-i\varepsilon _{n}t/\hbar }+\int c(\varepsilon )u_{\varepsilon }(x)e^{-i\varepsilon t/\hbar }d\varepsilon . \label{gensol} \end{equation}   (2.14)

Here the complex numbers $c_{n}$ and the complex function $c(\varepsilon )$ are arbitrary; the only condition is that the resulting $\Psi (x,t)$ must be normalizable, i.e. the condition $\int |\Psi (x,t)|^{2}dx=1$ must hold.

It can also be shown – although this is usually far from simple – that for physically adequate potential energy functions, or $\hat{H}$ operators, all the solutions of (2.2) can be written in the form as given by (2.14).

The solutions $\psi _{\varepsilon }(x,t)$ of the form (2.13) are seen to be specific in the sense that they contain only a single term from the sum or integral in (2.14). Therefore the probability distributions corresponding to these solutions: $|\psi _{\varepsilon }(x,t)|^{2}=|u_{\varepsilon }(x)|^{2}$ do not depend on time, while the wave functions i.e. the probability amplitudes are time dependent. These wave functions, and the corresponding physical states are called stationary states. According to (2.14) a general solution can be obtained by an expansion in terms of the stationary states.

The set of all eigenvalues is called the spectrum of the $\hat{H}$ operator, or the energy spectrum. Note that this terminology is different from the notion of the spectrum in experimental spectroscopy, but – as we will see – they are related to each other. In experimental spectroscopy an energy eigenvalue is called a term, and the spectrum seen e.g. in optical spectroscopy consists of the frequencies corresponding to energy eigenvalue differences. The energy eigenfunctions belonging to given eigenvalues can be identified with the stationary states (orbits) postulated by Bohr. Therefore the existence of stationary states is the quantum mechanical proof of Bohr’s first postulate. In addition, QM also gives us the method to find the energies and wave functions of the stationary states.

Now we shall introduce the following important property of the eigenfunctions of $\hat{H}$. It can be proven that the integral of the product $u_{n}^{\ast }(x)u_{n^{\prime }}(x)$ of eigenfunctions belonging to different eigenvalues $\varepsilon _{n}\neq \varepsilon _{n^{\prime }}$ vanishes: $\int u_{n}^{\ast }(x)u_{n^{\prime }}(x)dx=0$, where the integration is taken over the domain of the eigenfunctions, usually between $-\infty $ and $\infty $. This property is called orthogonality of the eigenfunctions. If we also prescribe the normalization of the eigenfunctions, which can always be achieved by multiplying with an appropriate constant, we can write

  \begin{equation} \int u_{n}^{\ast }(x)u_{n^{\prime }}(x)dx=\delta _{nn^{\prime }} \end{equation}   (2.15)

where $\delta _{nn^{\prime }}$ is Kronecker’s symbol. Orthogonality and normalization are taken together here, and we say therefore that the eigenfunctions are orthonormal.
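Orthonormality is easy to verify numerically for any concrete eigenfunction set. As an illustration (using the well-known particle-in-a-box eigenfunctions $u_{n}(x)=\sqrt{2/L}\sin (n\pi x/L)$ as an example set, rather than any potential treated in this chapter), the sketch below builds the matrix of overlap integrals, which should be the identity matrix as in (2.15):

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 20001)
dx = x[1] - x[0]

def u(n):
    # Eigenfunctions of the infinite square well: a standard orthonormal set
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Numerical version of (2.15): the matrix of overlaps should be the identity
gram = np.array([[np.sum(u(n) * u(m)) * dx for m in range(1, 5)]
                 for n in range(1, 5)])
print(np.round(gram, 6))             # approximately the 4x4 identity matrix
```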

There is a slightly more complicated but important point here. It may turn out that one finds several linearly independent solutions of (2.11) with one and the same $\varepsilon $. Let the number of these solutions belonging to $\varepsilon $ be $g_{\varepsilon }$. We say that $\varepsilon $ is $g_{\varepsilon }$ times degenerate. It can also be proven that among the degenerate solutions one can choose $g_{\varepsilon }$ mutually orthogonal ones. We can write this in the following way:

  \begin{equation} \int u_{nk}^{\ast }(x)u_{nk^{\prime }}(x)dx=\delta _{kk^{\prime }} \end{equation}   (2.16)

where $u_{nk}(x)$ means the $k$-th solution belonging to a given $\varepsilon _{n}:k=1,2\ldots g_{\varepsilon _{n}}$.

Problem 2.3

Show that the probability density obtained from the linear combination of two stationary states belonging to different energy eigenvalues depends on time. What is the characteristic time dependence of this probability density?

An example: the linear harmonic oscillator

This is a very important system both in classical and in quantum physics, so besides demonstrating the way one solves quantum mechanical problems, it has far reaching applications in all branches of theoretical physics.

The potential energy is $\frac{1}{2}Dx^{2}=\frac{1}{2}m\omega ^{2}x^{2}$, and the eigenvalue equation is

  \begin{equation} -\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}u}{\partial x^{2}}+\frac{1}{2}m\omega ^{2}x^{2}u(x)=\varepsilon u(x),\label{Hhosc}\end{equation}   (2.17)

where $-\infty <x<\infty $.

It will be useful to introduce the dimensionless coordinate $\xi $ and the dimensionless energy $\epsilon $ by the relations:

  \begin{equation} \xi =\sqrt {m\omega /\hbar }x,\qquad \epsilon =\frac{2\varepsilon }{\hbar \omega }\end{equation}   (2.18)

We obtain from (2.17)

  \begin{equation} \frac{\partial ^{2}u}{\partial \xi ^{2}}+(\epsilon -\xi ^{2})u=0.\label{dlesH}\end{equation}   (2.19)

We first find the asymptotic solution of this equation. For large values of $|\xi |$ we have $\frac{\partial ^{2}u}{\partial \xi ^{2}}-\xi ^{2}u=0$, with the approximate solution $e^{-\xi ^{2}/2}$. (The term $\xi ^{2}e^{-\xi ^{2}/2}$ will dominate over $e^{-\xi ^{2}/2}$ for large $|\xi |$.) The other possibility, $e^{+\xi ^{2}/2}$, must be omitted, because it is not square integrable, so it is not an allowed function to describe a quantum state.

The exact solutions of (2.19) are to be found in the form

  \begin{equation} u(\xi )=\mathcal{H}(\xi )e^{-\xi ^{2}/2}\label{ansat}\end{equation}   (2.20)

where $\mathcal{H}(\xi )$ is a polynomial of $\xi $. Substituting this assumption into (2.19), we get the equation

  \begin{equation} \frac{d^{2}\mathcal{H}}{d\xi ^{2}}-2\xi \frac{d\mathcal{H}}{d\xi }+(\epsilon -1)\mathcal{H}=0\label{Hermp}\end{equation}   (2.21)

This is known as the differential equation for the Hermite polynomials. Looking for its solution as a power series $\mathcal{H=}\sum _{k}a_{k}\xi ^{k}$, we find that the coefficients have to obey the recursion condition

  \begin{equation} (2k-\epsilon +1)a_{k}=(k+2)(k+1)a_{k+2}.\label{recur}\end{equation}   (2.22)

Problem 2.4

Derive (2.21) from the assumption (2.20) and (2.19). Derive the recursion formula (2.22) for the coefficients of the power series.

If the series were infinite, for large $k$ we would have $a_{k+2}\simeq \frac{2}{k}a_{k}$, which is obtained by keeping only the highest order terms in $k$ on both sides of (2.22). This, however, is the recursion property of the series of $e^{\xi ^{2}}$, which would lead to a non square integrable function, as the asymptotic form of $u$ would then again be $e^{\xi ^{2}}e^{-\xi ^{2}/2}=e^{\xi ^{2}/2}$. Therefore the series determining $\mathcal{H}$ must remain finite, i.e. it has to terminate at a certain integer power, which we shall denote by $v$. This means that all the coefficients of order higher than $v$ are zero, and in particular:

  \begin{equation} a_{v}\neq 0,\qquad a_{v+1}=0,\quad a_{v+2}=0\label{acoef}\end{equation}   (2.23)

In other words $\mathcal{H}$ is a polynomial of degree $v$, and the condition $a_{v+1}=0$ then requires that all the coefficients $a_{v-1}=a_{v-3}=\ldots =0$, so the polynomials are either even or odd, depending on $v$. In view of (2.23) we get from (2.22) that $2v-\epsilon +1=0$, or $\epsilon =2v+1$.

To summarize the result the eigenvalue equation has square integrable solutions only if

  \begin{equation} \varepsilon _{v}=\hbar \omega \left(v+\frac{1}{2}\right),\qquad v=0,1,2\ldots \end{equation}   (2.24)

where $v$ is called the vibrational quantum number. The possible energy values of the harmonic oscillator are equidistant with the separation $\Delta \varepsilon =\hbar \omega $. The lowest energy, obtained for $v=0$, is $\varepsilon _{0}=\hbar \omega /2$; it is not zero, but equals half of the level separation. $\varepsilon _{0}$ is called the zero-point energy.
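The spectrum (2.24) can be reproduced numerically by diagonalizing a finite-difference version of the eigenvalue equation (2.17); a minimal sketch with $\hbar =m=\omega =1$ (so that the exact eigenvalues are $v+\frac{1}{2}$):

```python
import numpy as np

# hbar = m = omega = 1, so the exact eigenvalues are v + 1/2
N = 2000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

# Finite-difference Hamiltonian of (2.17): kinetic part by central differences
H = np.diag(0.5 * x**2 + 1.0 / dx**2) \
    + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1) \
    + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1)

eps = np.linalg.eigvalsh(H)
print(np.round(eps[:5], 4))          # close to [0.5, 1.5, 2.5, 3.5, 4.5]
```

The equidistant level spacing $\Delta \varepsilon =\hbar \omega $ and the zero-point energy $\varepsilon _{0}=\hbar \omega /2$ are both visible in the lowest computed eigenvalues.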

The corresponding eigenfunctions are

  \begin{equation} u_{v}(x)=\mathcal{N}_{v}\mathcal{H}_{v}\left( \sqrt {\frac{m\omega }{\hbar }}x\right) e^{-\frac{m\omega }{\hbar }x^{2}/2}\end{equation}   (2.25)

where $\mathcal{H}_{v}$-s are polynomials of degree $v$, called Hermite polynomials, and $\mathcal{N}_{v}$-s are normalization coefficients. The square integrability is ensured by the exponential factor. For the highest nonvanishing coefficient of $\mathcal{H}_{v}$, which is not determined by the recursion, the convention is to set $a_{v}=2^{v}$.
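The recursion (2.22), together with $\epsilon =2v+1$ and the convention $a_{v}=2^{v}$, determines all the coefficients of $\mathcal{H}_{v}$. A small sketch running the recursion downwards from the leading coefficient (the helper name is ours, for illustration):

```python
def hermite_coeffs(v):
    """Coefficients a_0..a_v of the Hermite polynomial H_v.

    Uses the recursion (2k - eps + 1) a_k = (k + 2)(k + 1) a_{k+2} of (2.22)
    with eps = 2v + 1, run downwards from the conventional a_v = 2**v.
    Coefficients of parity opposite to v stay zero.
    """
    a = [0] * (v + 1)
    a[v] = 2**v
    for k in range(v - 2, -1, -2):
        # from (2.22): a_k = (k+2)(k+1) a_{k+2} / (2k - 2v); division is exact
        a[k] = (k + 2) * (k + 1) * a[k + 2] // (2 * k - 2 * v)
    return a

# H_2 = 4 xi^2 - 2,  H_3 = 8 xi^3 - 12 xi
print(hermite_coeffs(2))             # [-2, 0, 4]
print(hermite_coeffs(3))             # [0, -12, 0, 8]
```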

\includegraphics[width=450px]{oszcill_e.png}
Figure 2.2: Energy eigenfunctions of the linear harmonic oscillator.

Problem 2.5

Determine the first 3 Hermite polynomials.

It can be shown that the $\varepsilon _{v}$ eigenvalues are nondegenerate, so the eigenfunction $u_{v}(x)$ is unique (up to a constant factor). The eigenfunctions have also the important property of being orthonormal:

  \begin{equation} \int \limits _{-\infty }^{\infty }u_{v^{\prime }}(x)u_{v}(x)dx=\delta _{v^{\prime }v}. \end{equation}   (2.26)

The general time dependent solutions of the problem of the harmonic oscillator are of the form:

  \begin{equation} \Psi (x,t)=\sum \limits _{v=0}^{\infty }c_{v}e^{-i\varepsilon _{v}t/\hbar }u_{v}(x)=e^{-i\omega t/2}\sum \limits _{v=0}^{\infty }c_{v}e^{-iv\omega t}u_{v}(x) \end{equation}   (2.27)

where the $c_{v}$ coefficients are complex constants, obeying the normalization condition $\sum \limits _{v=0}^{\infty }|c_{v}|^{2}=1$. They are determined by the initial condition

  \begin{equation} \Psi (x,0)=\sum \limits _{v=0}^{\infty }c_{v}u_{v}(x),\quad \text {as \quad }c_{v}=\int \limits _{-\infty }^{\infty }u_{v}(x)\Psi (x,0)dx \end{equation}   (2.28)

according to the orthonormality condition.

Animation

\includegraphics[width=110px]{./animaciok/harmosc01.png}

This animation shows the time evolution of the simple harmonic oscillator if it is initially in the superposition of the ground state ($n=0$) and the $n=1$ state. $\Psi (x,0)=\frac{1}{\sqrt {2}}\left(\varphi _0(x)+\varphi _1(x)\right)$

http://titan.physx.u-szeged.hu/~mmquantum/videok/Harmonikus_oszcillator_szuperpozicio_0_1.flv

Animation

\includegraphics[width=110px]{./animaciok/harmoscillsupnb.png}

This interactive animation gives us a tool to play with the eigenfunctions of the linear harmonic oscillator. We can construct different linear combinations of the energy eigenfunctions and study their evolution in time.

http://titan.physx.u-szeged.hu/~mmquantum/download.php?download_file=HarmonikusOszcillatorIdofuggoSzuperpozicio.nbp

Expectation values and operators

The wave function gives the probability distribution of the position of a particle with the property (2.5). According to probability theory the expectation value or mean value of the position of the particle moving in one dimension is given by

  \begin{equation} \langle \hat{X}\rangle _{\psi }=\int _{-\infty }^{\infty }x|\psi (x)|^{2}dx=\int \psi ^{\ast }(x)x\psi (x)dx \label{Xexp} \end{equation}   (2.29)

the reason for writing capital $\hat{X}$ will be seen below. We deliberately did not put the limits in the second definite integral. In what follows, if the limits are not shown explicitly, the integration always goes from $-\infty $ to $+\infty $ in one dimension, and over the whole three dimensional space in three dimensions. In an analogous way, for a three-dimensional motion we can define the expectation value of the radius vector as

  \begin{equation} \langle \mathbf{\hat{R}}\rangle _{\psi }=\int \mathbf{r}|\psi (\mathbf{r})|^{2}d^{3}\mathbf{r}=\int \psi ^{\ast }(\mathbf{r})\mathbf{r}\psi (\mathbf{r})d^{3}\mathbf{r} \end{equation}   (2.30)

which actually means three different integrals, one for each component of $\mathbf{r}$. Now assume that the particle moves in an external force field given by the potential energy $V(\mathbf{r})$. If we know only the probability distribution and not the exact value of the particle’s position, we cannot speak about the value of the potential energy either, only about its probability distribution. The expectation value of the potential energy is

  \begin{equation} \langle V(\mathbf{\hat{R}})\rangle _{\psi }=\int \psi ^{\ast }(\mathbf{r})V(\mathbf{r})\psi (\mathbf{r})d^{3}\mathbf{r} \end{equation}   (2.31)

Note that the expectation value depends on the wave function, i.e. on the physical state of the system, in which the quantities are measured. The general expectation value of a measurable quantity $\hat{A}$ (which is sometimes called an observable) is defined as

  \begin{equation} \langle \hat{A}\rangle _{\psi }=\int \psi ^{\ast }(\mathbf{r})\hat{A}\psi (\mathbf{r})d^{3}\mathbf{r} \label{Aexp} \end{equation}   (2.32)

where $\hat{A}$ is the operator corresponding to the physical quantity. An operator in the present context is an operation that transforms a square integrable function into another function. According to (2.29) the operator $\hat{X}$ corresponding to the coordinate multiplies the wave function by the coordinate $x$:

  \begin{equation} \hat{X}\Psi (x,t)=x\Psi (x,t), \end{equation}   (2.33)

or more generally, in three dimensions:

  \begin{equation} \mathbf{\hat{R}}\Psi (\mathbf{r},t)=\mathbf{r}\Psi (\mathbf{r},t), \end{equation}   (2.34)

We may raise the question: what is the operator of the other fundamental quantity, the momentum $\mathbf{p}$? It turns out that the corresponding operator takes the derivative of $\psi $ and multiplies it by the factor $\frac{\hbar }{i}$, or equivalently by $-i\hbar $:

  \begin{equation} \hat{P}_{x}\Psi (x,t)=-i\hbar \frac{\partial }{\partial x}\Psi (x,t). \label{Pop} \end{equation}   (2.35)
  \begin{equation} \mathbf{\hat{P}}\Psi (\mathbf{r},t)=-i\hbar \nabla \Psi (\mathbf{r},t) \end{equation}   (2.36)

Then according to the general rule 2.32 we have in one dimension

  \begin{equation} \langle \hat{P}_{x}\rangle _{\psi }=-i\hbar \int \psi ^{\ast }(x)\frac{\partial }{\partial x}\psi (x)dx \end{equation}   (2.37)

A justification of this statement is left to the following series of problems.
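A numerical sketch of (2.29) and (2.37) for a Gaussian wave packet $\psi (x)\propto e^{ik_{0}x}e^{-(x-x_{0})^{2}/2\sigma ^{2}}$, for which $\langle \hat{X}\rangle =x_{0}$ and $\langle \hat{P}_{x}\rangle =\hbar k_{0}$ (here with $\hbar =1$; all parameter values are illustrative choices):

```python
import numpy as np

# hbar = 1; packet centred at x0 with mean momentum k0 (illustrative values)
x0, k0, sigma = 1.5, 2.0, 1.0
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]

psi = np.exp(1j * k0 * x) * np.exp(-(x - x0)**2 / (2 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize per (2.5)

mean_x = (np.sum(np.conj(psi) * x * psi) * dx).real       # (2.29)
dpsi = np.gradient(psi, dx)                               # d psi / dx
mean_p = (np.sum(np.conj(psi) * (-1j) * dpsi) * dx).real  # (2.37)
print(mean_x, mean_p)               # close to 1.5 and 2.0
```

Note that although $\psi $ is complex and $-i\,d\psi /dx$ is complex as well, both expectation values come out real, as they must for self-adjoint operators (see Problems 2.7–2.9).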

Problem 2.6

Using the Schrödinger equation 2.2 prove that with the definitions above one has (in one dimension)

  \begin{equation} \langle \hat{P}_{x}\rangle _{\psi }=m\frac{d}{dt}\langle \hat{X}\rangle _{\psi } \end{equation}   (2.38)

Hints:

  • Show that the time derivative of the expectation value of the coordinate can be calculated as

      \begin{equation} \frac{d}{dt}\langle \hat{X}\rangle =\frac{\hbar }{2im}\int \left( \frac{\partial ^{2}\Psi ^{\ast }(x,t)}{\partial x^{2}}x\Psi (x,t)-\Psi ^{\ast }(x,t)x\frac{\partial ^{2}\Psi (x,t)}{\partial x^{2}}\right) dx \end{equation}   (2.39)
  • Rewrite the integrand as $\frac{\partial }{\partial x}\left( \frac{\partial \Psi ^{\ast }}{\partial x}x\Psi -\Psi ^{\ast }x\frac{\partial \Psi }{\partial x}-|\Psi |^{2}\right) +2\Psi ^{\ast }\frac{\partial \Psi }{\partial x}$, and assuming that $\Psi (x,t)$ goes to zero at $\pm \infty $ argue that only the last term contributes to the result.

Problem 2.7

Show that the expectation value $\langle \hat{P}\rangle $ is real.

Problem 2.8

An operator $\hat{A}$ is called self-adjoint (or Hermitian) if the following property holds for all square integrable functions $\varphi (\mathbf{r})$ and $\psi (\mathbf{r})$:

  \begin{equation} \int \varphi ^{\ast }(\mathbf{r})\left[ \hat{A}\psi (\mathbf{r})\right] d^{3}\mathbf{r}=\int \left[ \hat{A}\varphi (\mathbf{r})\right] ^{\ast }\psi (\mathbf{r})d^{3}\mathbf{r}. \end{equation}   (2.40)

Show that the components of $\mathbf{\hat{R}}$ and $\mathbf{\hat{P}}$ are selfadjoint.

Problem 2.9

Show that the expectation value of a selfadjoint operator is real.

Noncommutativity of $\hat{X}$ and $\hat{P}$ operators

The fact that in quantum mechanics the coordinate and momentum are represented by operators – and they have the form we have given above – implies that their action on a wave function $\Psi (\mathbf{r},t)$ gives a different result if they act on it in a different order:

  \begin{equation} \hat{X}\hat{P}_{x}\Psi (\mathbf{r},t)-\hat{P}_{x}\hat{X}\Psi (\mathbf{r},t) =\displaystyle -i\hbar x\frac{\partial }{\partial x}\Psi (\mathbf{r},t)+i\hbar \frac{\partial }{\partial x}\left[ x\Psi (\mathbf{r},t)\right] = i\hbar \Psi (\mathbf{r},t), \end{equation}   (2.41)

or written in another way:

  \begin{equation} (\hat{X}\hat{P}_{x}-\hat{P}_{x}\hat{X})\Psi (\mathbf{r},t)=i\hbar \Psi (\mathbf{r},t) \end{equation}   (2.42)

for any function. This means that the action of the two operators is different when they are taken in the reverse order, so this pair of operators is noncommutative. It is easily seen that the same is true for $\hat{Y}$ and $\hat{P}_{y}$ and for $\hat{Z}$ and $\hat{P}_{z}$, while say $\hat{X}$ and $\hat{P}_{y}$ do commute, because the partial derivative with respect to $y$ does not act on $x$. Similarly the components of $\mathbf{\hat{R}}$ among themselves, as well as the components of $\mathbf{\hat{P}}$ among themselves, commute with each other. Introducing the notation

  \begin{equation} \hat{X}\hat{P}_{x}-\hat{P}_{x}\hat{X}=:[\hat{X},\hat{P}_{x}] \end{equation}   (2.43)

which is called the commutator of the operators $\hat{X}$ and $\hat{P}_{x}$, we see that the commutator vanishes if the operators commute, and it is nonzero if this is not the case. To summarize we write down here the following canonical commutation relations:

  \begin{equation} \lbrack \hat{X}_{i},\hat{X}_{j}]=0,\qquad \lbrack \hat{P}_{i},\hat{P}_{j}]=0,\qquad \lbrack \hat{X}_{i},\hat{P}_{j}]=i\hbar \delta _{ij}. \end{equation}   (2.44)

From the operators of the coordinate and the momentum we can build up other operators depending on these quantities. The rule is that in the classical expression of a function of $\mathbf{r}$ and $\mathbf{p}$ we replace them by the corresponding operators. We will see for instance that the operator of orbital angular momentum is $\mathbf{\hat{L}}=\mathbf{\hat{R}}\times \mathbf{\hat{P}}=-i\hbar \mathbf{r}\times \mathbf{\nabla }$, and the operator of energy in a conservative system is $\hat{H}=\frac{\mathbf{\hat{P}}^{2}}{2m}+V(\mathbf{\hat{R}})=-\frac{\hbar ^{2}}{2m}\Delta +V(\mathbf{r})$, in agreement with the definition given in (2.4).
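The canonical commutator can also be checked symbolically; a sketch using sympy, applying the operators (2.33) and (2.35) to an arbitrary test function:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)          # an arbitrary test function

X = lambda f: x * f                             # position operator (2.33)
P = lambda f: -sp.I * hbar * sp.diff(f, x)      # momentum operator (2.35)

# X P psi - P X psi should reduce to i hbar psi, i.e. (2.42)
commutator = sp.simplify(X(P(psi)) - P(X(psi)))
print(commutator)                    # I*hbar*psi(x)
```

The derivative terms cancel and only the $i\hbar \psi $ term from the product rule survives, exactly as in the calculation (2.41).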

Animation

\includegraphics[width=110px]{./animaciok/pauli_spin_demo.jpg}

Noncommutativity arises also in the case of matrix multiplication. This demonstration shows noncommutativity for special $2\times 2$ complex matrices, the so-called Pauli matrices.

http://demonstrations.wolfram.com/PauliSpinMatrices/

Variance of an operator

There is an important number characterising a probability distribution, namely the property showing how sharp the distribution is. A measure of this property can be obtained if we take the measured values of the random variable in question, and consider a mean value of certain differences between the measured values and the expectation value. As an example we consider again the one dimensional coordinate $x$. Taking the expectation value of the simple differences between the measured values and the expectation value yields no information, as this gives zero for all possible distributions due to the identity: $\int _{-\infty }^{\infty }\left[ x^{\prime }-\int _{-\infty }^{\infty }x|\psi (x)|^{2}dx\right] |\psi (x^{\prime })|^{2}dx^{\prime }=0$. A good measure is therefore the expectation value of the square of the differences from the expectation value, i.e. the quantity:

  \begin{equation} (\Delta \hat{X})_{\psi }^{2}:=\int _{-\infty }^{\infty }\left[ x^{\prime }-\int _{-\infty }^{\infty }x|\psi (x)|^{2}dx\right] ^{2}|\psi (x^{\prime })|^{2}dx^{\prime }, \end{equation}   (2.45)

which is called the variance of $x$, and which is the usual definition in probability theory, with a probability density $|\psi (x)|^{2}=\rho (x)$. We shall also call this the variance of the operator $\hat{X}$ in the state $\psi $. The latter terminology is due to the reformulation of the above definition as:

  \begin{equation} (\Delta \hat{X})_{\psi }^{2}=\langle (\hat{X}-\langle \hat{X}\rangle _{\psi })^{2}\rangle _{\psi } \end{equation}   (2.46)

or in general for any linear and selfadjoint operator (2.32).

  \begin{equation} (\Delta \hat{A})_{\psi }^{2}=\langle (\hat{A}-\langle \hat{A}\rangle _{\psi })^{2}\rangle _{\psi }. \end{equation}   (2.47)

The variance is also called the second central moment of the probability distribution. The square root $\sqrt {(\Delta \hat{A})_{\psi }^{2}}=:(\Delta \hat{A})_{\psi }$ is called the root mean square deviation of the physical quantity $\hat{A}$ in the state given by $\psi $. We can rewrite this formula in two different ways. First it is simply seen that

  \begin{equation} (\Delta \hat{A})_{\psi }^{2}=\langle \hat{A}^{2}-2\hat{A}\langle \hat{A}\rangle _{\psi }+\langle \hat{A}\rangle _{\psi }^{2}\rangle _{\psi }=\langle \hat{A}^{2}\rangle _{\psi }-\langle \hat{A}\rangle _{\psi }^{2}, \end{equation}   (2.48)

because $\langle \hat{A}\rangle _{\psi }$ is already a number. An important statement follows if we write

  \begin{equation} \begin{array}{rcl} (\Delta \hat{A})_{\psi }^{2} & =& \displaystyle \left\langle \left(\hat{A}-\langle \hat{A}\rangle _{\psi }\right)^{2}\right\rangle _{\psi }=\int \psi ^{\ast }(\mathbf{r})(\hat{A}-\langle \hat{A}\rangle _{\psi })^{2}\psi (\mathbf{r})d^{3}\mathbf{r}= \\ & =& \displaystyle \int \left[ \left(\hat{A}-\langle \hat{A}\rangle _{\psi }\right)\psi (\mathbf{r})\right] ^{\ast }\left[ \left(\hat{A}-\langle \hat{A}\rangle _{\psi }\right)\psi (\mathbf{r})\right] d^{3}\mathbf{r}=\int \left|\left(\hat{A}-\langle \hat{A}\rangle _{\psi }\right)\psi (\mathbf{r})\right|^{2}d^{3}\mathbf{r}. \end{array} \end{equation}   (2.49)

Based on this formula, we can simply answer the question: what kind of wave functions are those, where we can measure the quantity corresponding to $\hat{A}$ with zero variance, i.e. with a value which is always the same? In order to have $(\Delta \hat{A})_{\psi }^{2}=0$ the integral in the last expression must be zero. But as it is an integral of a squared absolute value being nonnegative everywhere, it can be zero if and only if $(\hat{A}-\langle \hat{A}\rangle _{\psi })\psi (\mathbf{r})=0$, or stated otherwise, if and only if

  \begin{equation} \hat{A}\psi (\mathbf{r})=\langle \hat{A}\rangle _{\psi }\psi (\mathbf{r}). \label{Aeigen} \end{equation}   (2.50)

Any function obeying this equation is called an eigenfunction of the operator $\hat{A}$. In the form above this equation has only conceptual significance, as in most cases, knowing the operator, the eigenfunctions are not known a priori. Therefore the problem is usually to determine the eigenvalues and the eigenfunctions of $\hat{A}$ from the equation:

  \begin{equation} \hat{A}\varphi (\mathbf{r})=\alpha \varphi (\mathbf{r}). \end{equation}   (2.51)

This was the case above, in particular, for the Hamilton operator in (2.12).

Problem 2.10

Show that the expectation value of $\hat{A}$ in the state given by the normalized $\varphi (\mathbf{r})$ is just $\alpha $.

An important theorem shows that the product of variances of two noncommuting operators has a lower bound, which is generally positive. In the case of the coordinate and momentum operators it takes the form

  \begin{equation} (\Delta \hat{X})_{\psi }\cdot (\Delta \hat{P}_{x})_{\psi }\geq \frac{\hbar }{2} \end{equation}   (2.52)

for any wave function $\psi $, and a similar inequality holds for the other two components $(y,z)$. The mathematical proof of this inequality, which is customarily called Heisenberg’s uncertainty relation, will not be given here.
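For the oscillator ground state $u_{0}(x)\propto e^{-m\omega x^{2}/2\hbar }$ the bound is in fact saturated: $(\Delta \hat{X})=\sqrt{\hbar /2m\omega }$ and $(\Delta \hat{P}_{x})=\sqrt{m\omega \hbar /2}$, so the product equals $\hbar /2$. A numerical check of this (with $\hbar =m=\omega =1$, so the product should be $1/2$):

```python
import numpy as np

# hbar = m = omega = 1: oscillator ground state saturates (2.52)
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)
psi /= np.sqrt(np.sum(psi**2) * dx)            # normalize per (2.5)

mean_x = np.sum(psi * x * psi) * dx
var_x = np.sum(psi * (x - mean_x)**2 * psi) * dx          # variance (2.45)

dpsi = np.gradient(psi, dx)
mean_p = (np.sum(psi * (-1j) * dpsi) * dx).real           # zero for real psi
var_p = np.sum(np.abs(-1j * dpsi - mean_p * psi)**2) * dx # via (2.49)

product = np.sqrt(var_x) * np.sqrt(var_p)
print(product)                       # close to 0.5, the minimum hbar/2
```

The momentum variance is computed here in the form $\int |(\hat{P}_{x}-\langle \hat{P}_{x}\rangle )\psi |^{2}dx$ of (2.49), which only needs first derivatives of $\psi $.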

Licensed under the Creative Commons Attribution 3.0 License