Unit 8: The Microcanonical Ensemble


The microcanonical ensemble

Finding the probability distribution

We consider an isolated system of N particles, in the sense that the energy is a constant of motion:

E \le H(p, q) \le E + \Delta
We are not able to derive the probability distribution from first principles. There are two typical alternative approaches:

1. Postulate of equal a priori probability: construct the entropy expression from it and show that the result is consistent with thermodynamics.
2. Use (information) entropy as the starting concept: derive the distribution from the maximum entropy principle.
Information entropy*
*Introduced by Shannon. See textbook and E.T. Jaynes, Phys. Rev. 106, 620 (1957)

Idea: find a constructive, least biased criterion for setting up probability distributions on the basis of only partial knowledge (missing information).

What do we mean by that? Let's start with an old mathematical problem.

Consider a quantity x with discrete random values x_1, x_2, ..., x_n.

Assume we do not know the probabilities p_1, p_2, ..., p_n. We do have some information, however, namely the normalization

\sum_{i=1}^{n} p_i = 1

and we know the average of a function f(x) (we will also consider cases where we know more than one average):

\langle f(x) \rangle = \sum_{i=1}^{n} p_i f(x_i)

With this information, can we calculate the average of another function g(x)?

To do so we would need all of p_1, p_2, ..., p_n, but we have only the two equations

\sum_{i=1}^{n} p_i = 1 \qquad \text{and} \qquad \langle f(x) \rangle = \sum_{i=1}^{n} p_i f(x_i),

so we are lacking (n-2) additional equations.

What can we do with this underdetermined problem? There may be more than one probability distribution that reproduces \langle f(x) \rangle = \sum_{i=1}^{n} p_i f(x_i). We want the one that requires no further assumptions: we do not want to prefer any p_i if there is no reason to do so.

Information theory tells us how to find this unbiased distribution (we now call the probabilities \rho rather than p).

Shannon defined the information entropy S_i:

S_i = -k \sum_n \rho_n \ln \rho_n \qquad \text{or} \qquad S_i = -k \int dp\, dq\, \rho(p, q) \ln \rho(p, q) \quad \text{for a continuous distribution,}

with normalization

\sum_n \rho_n = 1 \qquad \text{and} \qquad \int dp\, dq\, \rho(p, q) = 1
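As a small numerical companion (my own sketch, not part of the lecture; the function name and the choice k = 1 are mine), the discrete form of S_i can be coded directly:

```python
import numpy as np

def information_entropy(rho, k=1.0):
    """Discrete information entropy S_i = -k * sum_n rho_n ln(rho_n)."""
    rho = np.asarray(rho, dtype=float)
    assert np.isclose(rho.sum(), 1.0), "distribution must be normalized"
    nz = rho[rho > 0]                 # terms with rho_n = 0 contribute nothing
    return -k * np.sum(nz * np.log(nz))
```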

We make plausible that:

- S_i is a measure of our ignorance of the microstate of the system.
- More quantitatively: S_i is a measure of the width of the distribution of the \rho_n.
Let's consider an extreme case: an experiment with N potential outcomes (such as rolling a die), where outcome 1 has probability \rho_1 = 1 and outcomes 2, 3, ..., N have \rho_n = 0. Then

S_i = -k \sum_n \rho_n \ln \rho_n = 0

- Our ignorance regarding the outcome of the experiment is zero.
- We know precisely what will happen.
- The probability distribution is sharp (a delta peak).
Let's make the distribution broader: outcome 1 has probability \rho_1 = 1/2, outcome 2 has probability \rho_2 = 1/2, and outcomes 3, 4, ..., N have \rho_n = 0. Then

S_i = -k \sum_n \rho_n \ln \rho_n = -k \left( \tfrac{1}{2} \ln \tfrac{1}{2} + \tfrac{1}{2} \ln \tfrac{1}{2} + 0 + ... + 0 \right) = k \left( \tfrac{1}{2} \ln 2 + \tfrac{1}{2} \ln 2 \right) = k \ln 2
Let's consider the more general case: outcomes 1, 2, ..., M each have probability \rho_n = 1/M, and outcomes M+1, ..., N have \rho_n = 0. Then

S_i = -k \sum_n \rho_n \ln \rho_n = -k \left( \tfrac{1}{M} \ln \tfrac{1}{M} + \tfrac{1}{M} \ln \tfrac{1}{M} + ... + \tfrac{1}{M} \ln \tfrac{1}{M} + 0 + ... \right) = k \ln M
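Reusing the information_entropy helper from the sketch above, the three cases just worked out can be checked numerically (values in units of k):

```python
print(information_entropy([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))      # sharp distribution: 0.0
print(information_entropy([0.5, 0.5, 0.0, 0.0, 0.0, 0.0]))      # two equal outcomes: ln 2 ~ 0.693
print(information_entropy([0.25, 0.25, 0.25, 0.25, 0.0, 0.0]))  # M = 4 equal outcomes: ln 4 ~ 1.386
```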
So far our considerations suggest:

Ignorance/information entropy increases with increasing width of the distribution

Which distribution brings the information entropy to a maximum?

For simplicity, let's start with an experiment with 2 outcomes: a binary distribution with \rho_1, \rho_2 and \rho_1 + \rho_2 = 1.

S_i = -k (\rho_1 \ln \rho_1 + \rho_2 \ln \rho_2) \qquad \text{with} \qquad \rho_1 + \rho_2 = 1

S_i = -k \left[ \rho_1 \ln \rho_1 + (1 - \rho_1) \ln(1 - \rho_1) \right]

\frac{dS_i}{d\rho_1} = -k \left[ \ln \rho_1 + 1 - \ln(1 - \rho_1) - 1 \right] = -k \ln \frac{\rho_1}{1 - \rho_1}
\frac{dS_i}{d\rho_1} = -k \ln \frac{\rho_1}{1 - \rho_1} = 0 \quad \Rightarrow \quad \rho_1 = 1/2 = \rho_2: the uniform distribution.

[Figure: S_i versus \rho_1 for the binary distribution; the maximum S_i = k \ln 2 sits at \rho_1 = 1/2.]
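The curve in the figure is easy to regenerate; a minimal sketch (my own, in units of k):

```python
import numpy as np

rho1 = np.linspace(1e-6, 1 - 1e-6, 1001)
Si = -(rho1 * np.log(rho1) + (1 - rho1) * np.log(1 - rho1))   # binary entropy / k

print(rho1[np.argmax(Si)], Si.max(), np.log(2))   # maximum at 0.5 with value ln 2
```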
Lagrange multiplier technique

Once again: maximize S_i = -k (\rho_1 \ln \rho_1 + \rho_2 \ln \rho_2) under the constraint \rho_1 + \rho_2 = 1.

F(\rho_1, \rho_2, \lambda) = -k (\rho_1 \ln \rho_1 + \rho_2 \ln \rho_2) + \lambda (\rho_1 + \rho_2 - 1)

\frac{\partial F}{\partial \rho_1} = -k (\ln \rho_1 + 1) + \lambda = 0

\frac{\partial F}{\partial \rho_2} = -k (\ln \rho_2 + 1) + \lambda = 0

\frac{\partial F}{\partial \lambda} = \rho_1 + \rho_2 - 1 = 0

\Rightarrow \rho_1 = \rho_2 = 1/2: the uniform distribution.

[Figure: finding an extremum of f(x,y) under the constraint g(x,y) = c. From http://en.wikipedia.org/wiki/File:LagrangeMultipliers2D.svg]
Let's use the Lagrange multiplier technique to find the distribution that maximizes

S_i = -k \sum_{n=1}^{M} \rho_n \ln \rho_n

F(\rho_1, \rho_2, ..., \rho_M, \lambda) = -k \sum_{n=1}^{M} \rho_n \ln \rho_n + \lambda \left( \sum_{n=1}^{M} \rho_n - 1 \right)

\frac{\partial F}{\partial \rho_j} = -k (\ln \rho_j + 1) + \lambda = 0 \quad \Rightarrow \quad \rho_1 = \rho_2 = ... = \rho_M = e^{\lambda/k - 1}

With the normalization \sum_{n=1}^{M} \rho_n = 1 this gives \rho_n = 1/M: the uniform distribution maximizes the entropy, with

S_i^{max} = k \ln M
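The same maximization can be cross-checked with a generic constrained optimizer; a sketch (my own, using scipy's SLSQP, with k = 1 and M = 5 as arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

M = 5

def neg_entropy(rho):                        # minimizing -S_i maximizes S_i
    rho = np.clip(rho, 1e-12, None)          # keep the logarithm well defined
    return np.sum(rho * np.log(rho))

normalization = {"type": "eq", "fun": lambda rho: np.sum(rho) - 1.0}
result = minimize(neg_entropy, x0=np.random.dirichlet(np.ones(M)),
                  bounds=[(0.0, 1.0)] * M, constraints=[normalization],
                  method="SLSQP")

print(result.x)                 # ~[0.2, 0.2, 0.2, 0.2, 0.2], i.e. rho_n = 1/M
print(-result.fun, np.log(M))   # maximal S_i / k ~ ln M
```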

In a microcanonical ensemble, where each system has N particles, volume V, and energy fixed between E and E+\Delta, the entropy is at a maximum in equilibrium.

Distribution function (when identifying information entropy with thermodynamic entropy):

\rho(p, q) = \frac{1}{Z(E)} = const. \quad \text{if } E \le H(p,q) \le E + \Delta, \qquad \rho(p, q) = 0 \quad \text{otherwise}

where Z(E) = number of microstates with energy in [E, E+\Delta], called the partition function of the microcanonical ensemble.
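A toy illustration of this prescription (my own construction, not from the lecture): for a small discrete system one can enumerate the microstates in the shell [E, E+\Delta], count them to get Z(E), and give each the same weight 1/Z(E):

```python
import itertools

# Toy system: 4 two-level "particles", each contributing energy 0 or 1 (arbitrary units).
microstates = list(itertools.product([0, 1], repeat=4))

E, delta = 2, 0.5                                        # energy shell [E, E + delta]
shell = [s for s in microstates if E <= sum(s) <= E + delta]

Z = len(shell)                                           # microcanonical partition function
rho = {state: 1.0 / Z for state in shell}                # equal a priori probabilities

print(Z)                                                 # 6 microstates in the shell
print(abs(sum(rho.values()) - 1.0) < 1e-12)              # normalized
```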
Information entropy and thermodynamic entropy

When identifying k = k_B,

S = -k_B \sum_n \rho_n \ln \rho_n

has all the properties we expect from the thermodynamic entropy (for details see the textbook).

We show here that S is additive: S(1+2) = S(1) + S(2).

\rho_n^{(1)}: probability distribution for system 1
\rho_m^{(2)}: probability distribution for system 2

Statistical independence of systems 1 and 2 means that the probability of finding system 1 in state n and system 2 in state m is \rho_n^{(1)} \rho_m^{(2)}. Then

S(1+2) = -k_B \sum_{n,m} \rho_n^{(1)} \rho_m^{(2)} \ln\left( \rho_n^{(1)} \rho_m^{(2)} \right) = -k_B \sum_{n,m} \rho_n^{(1)} \rho_m^{(2)} \left[ \ln \rho_n^{(1)} + \ln \rho_m^{(2)} \right]

= -k_B \sum_m \rho_m^{(2)} \sum_n \rho_n^{(1)} \ln \rho_n^{(1)} - k_B \sum_n \rho_n^{(1)} \sum_m \rho_m^{(2)} \ln \rho_m^{(2)}

= -k_B \sum_n \rho_n^{(1)} \ln \rho_n^{(1)} - k_B \sum_m \rho_m^{(2)} \ln \rho_m^{(2)} = S(1) + S(2)

where the last step uses the normalizations \sum_n \rho_n^{(1)} = \sum_m \rho_m^{(2)} = 1.
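A quick numerical check of this additivity (my own sketch; the two distributions are arbitrary examples):

```python
import numpy as np

def S(rho):                                    # entropy in units of k_B
    rho = rho[rho > 0]
    return -np.sum(rho * np.log(rho))

rho1 = np.array([0.5, 0.3, 0.2])               # system 1
rho2 = np.array([0.7, 0.2, 0.1])               # system 2
joint = np.outer(rho1, rho2)                   # independence: rho_nm = rho_n^(1) * rho_m^(2)

print(S(joint.ravel()), S(rho1) + S(rho2))     # the two numbers agree
```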
Relation between entropy and the partition function Z(E)

S = -k_B \sum_n \rho_n \ln \rho_n = -k_B \sum_n \frac{1}{Z(E)} \ln \frac{1}{Z(E)} = k_B \ln Z(E) \, \frac{1}{Z(E)} \sum_n 1

S = k_B \ln Z(E)

Derivation of thermodynamics in the microcanonical ensemble

Where is the temperature? In the microcanonical ensemble the energy E is fixed:

E(S, V) = U(S, V)

With dU = T\, dS - P\, dV:

\frac{1}{T} = \left( \frac{\partial S}{\partial U} \right)_V \qquad \text{and} \qquad \frac{P}{T} = \left( \frac{\partial S}{\partial V} \right)_U

Let's derive the ideal gas equation of state from the microcanonical ensemble, despite the fact that there are easier ways to do so.
Major task: find Z(U) := number of states in the energy shell [U, U+\Delta].

[Sketch: energy shell between H = U and H = U + \Delta in (p, q) phase space.]

Z(U) = \frac{1}{N! \, h^{3N}} \int_{U \le H(p,q) \le U+\Delta} d^{3N}p \, d^{3N}q

- The factor h^{3N} is a leftover from quantum mechanics (phase-space quantization); it makes Z a dimensionless number.
- The factor 1/N! is the correct Boltzmann counting; it requires the quantum-mechanical origin of the indistinguishability of atoms and solves the Gibbs paradox. We derive it when discussing the classical limit of the quantum gas.

For a gas of N non-interacting particles we have

H = \sum_{i=1}^{N} \frac{p_i^2}{2m}

Z(U) = \frac{1}{N! \, h^{3N}} \int d^{3N}p \, d^{3N}q = \frac{V^N}{N! \, h^{3N}} \int_{U \le \sum_i p_i^2 / 2m \le U+\Delta} d^3p_1 \, d^3p_2 ... d^3p_N

The condition \sum_{i=1}^{N} p_i^2 = 2mU defines a 3N-dimensional sphere in momentum space of radius \sqrt{2mU}; the integration runs over the shell between the radii \sqrt{2mU} and \sqrt{2m(U+\Delta)}.

Z(U) = \frac{V^N}{N! \, h^{3N}} \left[ V_{3N}\!\left( p \le \sqrt{2m(U+\Delta)} \right) - V_{3N}\!\left( p \le \sqrt{2mU} \right) \right] = \frac{V^N C_{3N}}{N! \, h^{3N}} \left[ \left( 2m(U+\Delta) \right)^{3N/2} - \left( 2mU \right)^{3N/2} \right]

Remember:

V_{2\,dim} = \int S_{2\,dim} \, dr = \int 2\pi r \, dr = \pi r^2

V_{3\,dim} = \int S_{3\,dim} \, dr = \int 4\pi r^2 \, dr = \frac{4}{3} \pi r^3

V_{3N\,dim} = \int S_{3N\,dim} \, dr \propto \int r^{3N-1} \, dr = C_{3N} \, r^{3N}
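The constant C_{3N} drops into the additive constant of the entropy below, so its value is not needed; for completeness, the standard result is C_n = \pi^{n/2} / \Gamma(n/2 + 1) (not derived on the slide), which a short Monte Carlo estimate can confirm for small n (my own sketch):

```python
import numpy as np
from math import gamma, pi

def unit_ball_volume_mc(n, samples=200_000, seed=0):
    """Estimate the volume of the unit n-ball by sampling the cube [-1, 1]^n."""
    pts = np.random.default_rng(seed).uniform(-1.0, 1.0, size=(samples, n))
    inside = np.count_nonzero(np.sum(pts**2, axis=1) <= 1.0)
    return (2.0 ** n) * inside / samples

for n in (2, 3, 6):
    exact = pi ** (n / 2) / gamma(n / 2 + 1)       # C_n for the unit n-ball
    print(n, unit_ball_volume_mc(n), exact)
```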

Z(U) = \frac{V^N}{N!} \, U^{3N/2} \left[ \left( 1 + \frac{\Delta}{U} \right)^{3N/2} - 1 \right] \times const, \qquad const = \frac{C_{3N} (2m)^{3N/2}}{h^{3N}}

S = k_B \ln Z(U) = k_B \left\{ N \ln V + \frac{3}{2} N \ln U + \ln \left[ \left( 1 + \frac{\Delta}{U} \right)^{3N/2} - 1 \right] \right\} + const

(the constant is independent of U and V, which is all that matters for the derivatives below)
In the thermodynamic limit N \to \infty, V \to \infty, U \to \infty (with N/V and U/N held constant), the \Delta-dependent term \ln[(1 + \Delta/U)^{3N/2} - 1] stays bounded and is negligible compared with the terms proportional to N, so it can be absorbed into the constant (using \lim_{n\to\infty} a^n = 0 for 0 \le a < 1 and \ln 1 = 0; see http://en.wikipedia.org/wiki/Exponentiation). Hence

S = k_B \left[ N \ln V + \frac{3}{2} N \ln U \right] + const

With

\frac{1}{T} = \left( \frac{\partial S}{\partial U} \right)_V = \frac{3 N k_B}{2 U} \quad \Rightarrow \quad U = \frac{3}{2} N k_B T

\frac{P}{T} = \left( \frac{\partial S}{\partial V} \right)_U = \frac{N k_B}{V} \quad \Rightarrow \quad P V = N k_B T
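As a final sanity check (my own sketch; the parameter values are arbitrary), one can differentiate S(U, V) = k_B [N \ln V + (3/2) N \ln U] + const numerically and recover both relations:

```python
import numpy as np

kB = 1.380649e-23            # J/K
N = 1.0e22                   # particle number
U, V = 50.0, 1.0e-3          # internal energy (J) and volume (m^3)

def S(U, V):                 # entropy up to an additive constant
    return kB * (N * np.log(V) + 1.5 * N * np.log(U))

h = 1e-6                                                    # relative step for central differences
dS_dU = (S(U * (1 + h), V) - S(U * (1 - h), V)) / (2 * h * U)
dS_dV = (S(U, V * (1 + h)) - S(U, V * (1 - h))) / (2 * h * V)

T = 1.0 / dS_dU              # 1/T = (dS/dU)_V
P = T * dS_dV                # P/T = (dS/dV)_U

print(U, 1.5 * N * kB * T)   # U  = (3/2) N kB T
print(P * V, N * kB * T)     # PV = N kB T
```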
