Chapter 03
Lecturer: McGreevy
To counter your impatience to get to statistical mechanics, let me offer the following [from
Prof. David Tong]:
...the weakness of thermodynamics is also its strength. Because the theory is ignorant of the
underlying nature of matter, it is limited in what it can tell us. But this means that
the results we deduce from thermodynamics are not restricted to any specific system.
They will apply equally well in any circumstance, from biological systems to quantum
gravity. And you can't say that about a lot of theories!
3.1 Basic terminology
Consider two systems separated by a wall, which can be one of two types:
• adiabatic: No heat can flow between the systems. Adiabatic walls keep the systems isolated.
[Figure: system 1 at temperature T1 and system 2 at temperature T2, separated by an adiabatic wall; T1 and T2 need not be equal.]
• diathermal: Heat can flow. (Particles cannot.) This means that after a little while
T1 = T2 = T .
[Figure: system 1 and system 2 separated by a diathermal wall, both at the common temperature T.]
The macroscopic state of each system is described by thermodynamic variables.
These can be extensive or intensive.
The number of particles N is extensive. If I divide the room in half, (roughly) half the
atoms will end up in each half. Similarly, energy is extensive. On the other hand, the
temperature of each half of the room will be unchanged; temperature is intensive.
I claim that no other dependence on N will arise in the thermodynamic (large N ) limit, at
least in 8.044.
Examples: (V, −P ) for a fluid, (L, f ) for a stretched rod, (q, E) for a capacitor, (M, H) for a paramagnet.
Such pairs of thermodynamic state variables and their associated generalized force are
called conjugate variables.
T and S are also called conjugate variables, although T ∆S is related to heat, not work.
Thermodynamic, or Thermal, Equilibrium
When the surroundings of a system change, the system itself will change. After a time, no
further macroscopically-detectable changes take place. Transients have died down. We say
the system is in equilibrium.
The equilibrium state of a system is completely specified by a certain number (this number
is called the number of (thermodynamic) degrees of freedom) of independent variables. In
equilibrium, these specify the other variables via equations of state.
These examples above have two thermodynamic degrees of freedom (dofs). It’s easy to
generalize: e.g. consider a magnetic gas, where the atoms have spins. Then we need P, V
and M, H.
Thermodynamic Reversibility
A series of changes undergone by a system is reversible iff its direction can be reversed by
an infinitesimal change in the conditions. (e.g.: yanking the piston out of the cylinder is not
reversible. Moving it out very slowly may be.)
Reversibility requires:
1. Quasistatic processes: (slowness) All changes must be made so slowly that every
state through which the system passes may be considered an equilibrium state.
In particular: P V = N kT is always satisfied (for an ideal gas);
no shock waves;
no energy dissipated due to friction: frictional forces ∝ v, so ∆E(lost to friction) ∝ v × distance → 0 as v → 0.
2. No hysteresis: e.g. bar of iron. Even if we quasistatically vary the applied field, the state of a ferromagnet depends on its history.
e.g. rod under tension, when it deforms.
Having said this, in 8.044 we will only discuss systems for which quasistatic = reversible.
That is, we will have no truck with hysteresis here. That’s for 8.08.
3.2 0th Law of Thermodynamics and definition of temperature
Define: A is "in equilibrium with" B if when we stick them next to each other, separated by
a diathermal wall, nothing happens – no macroscopic change occurs.
A in equilibrium with B, and B in equilibrium with C =⇒ A in equilibrium with C.
Observation: many macroscopic states of C can be in equilibrium with a given state of A.
On the right, I’ve drawn the locus of states of C that are all in equilibrium with the same
state of A. This is an isotherm.
Similarly, if I fix the state of C, there will be a locus (a curve in this case) of states of
system A which are in equilibrium with it.
We can use the 0th Law to define an “empirical temperature” called Θ by Adkins.
Idea: the temperature of a system is some function of the other state variables that deter-
mines whether or not the system is in equilibrium with other systems.
1. definition of equilibrium
2. the 0th Law
3. definition of isotherm
4. definition of temperature
A pedantic implementation of this definition
Now bring them in contact, and adjust VB (move a piston) until they are in equilibrium.
[Figure: system A (PA , VA ) in contact with system B (PB , VB ) through a diathermal wall.]
(If you like, it is some additional input from our experience of the world that it is possible
to do this.) Demanding equilibrium imposes one relation among the four state variables:
e.g. VB |equilibrium with A = f1 (PA , VA , PB ) (1)
Now repeat for C and B, i.e. bring a second system C in contact with B.
[Figure: system B (PB , VB ) in contact with system C (PC , VC ) through a diathermal wall.]
Hold fixed the parameters of B and adjust VC to put B + C in equilibrium. Again this gives
one relation between the thermodynamic variables (this time of B + C), which I choose to
express as:
e.g. VB |equilibrium with C = f2 (PC , VC , PB )
Claim: f1 (PA , VA , PB ) = f2 (PC , VC , PB ) (2)
by the 0th Law – if we got different values for VB in these two cases, “equilibrium with”
wouldn’t be transitive. This equation (2) is one equation for 5 variables. Solve (2) for VA :
VA = g(PA , PC , VC , PB ) (3)
Now repeat the story for A + C (adjust VA to put it in equilibrium with C):
VA = f3 (PA , PC , VC ) (4)
Eqn. (4) determines VA in terms of PA , PC , VC .
But now combine the information in (3) and (4): if we plug the expression in (4) for VA into
(3) we discover that we can find g without knowing anything about PB – g is independent of
PB . This is only possible if the equation (2) relating f1 and f2 is independent of PB .1 This
allows us to rewrite equation (2) in a way that makes no reference to B:
f1 (PA , VA ) = f2 (PC , VC ) ≡ Θ
1. Note that in fact f1,2 themselves can depend on PB in such a way that the dependence cancels. This
is what happens e.g. for the ideal gas, where PA VA = N kB T , so in this case f1 (PA , VA , PB ) = PA VA /PB
and f2 (PC , VC , PB ) = PC VC /PB . Equating these, we learn that PA VA = PC VC , each side of which is independent of
system B (and is proportional to the temperature). Thanks to Sabrina Gonzalez Pasterski for correcting an
error on this point.
We’ve found a function of state variables of system C which is equal to a function of the
state variables of system A when they are in equilibrium.
For each of the systems in equilibrium, there is a function of its state variables that takes
on the same value:
Note that we could have called P, V here X, Y for some generic state variables. Details
about gases are not relevant at all.
For any of the systems, we can now map out isotherms with varying Θ.
We can also determine that Θ3 > Θ2 > Θ1 by watching the direction of heat flow when we
bring systems on the respective isotherms into contact. Θ1 > Θ2 if heat goes from 1 to 2.
Q: Suppose A, B have the same temperature. How do we know that the direction of heat
flow is transitive – i.e. that it is the same between A + C and between B + C?
A: We know because the states of A, B define one isotherm of C, by the 0th Law.
Thermodynamic temperature
... is a concept defined by Adkins, roughly Θ with units. It is measured in Kelvin (defined
below). Based on the fact (we’ll explain this later) that all gases in the dilute limit satisfy
P V = N kB T .
The dilute limit is P → 0, V → ∞, with P V fixed. (Note that the curves I’ve been drawing
are the hyperbolae in (P, V ) specified by constant T in this ideal gas law.)
Then:
T (water boils, at 1 atm) = 373.15 K = 100 ◦C
T (water freezes, at 1 atm) = 273.15 K = 0 ◦C
(The number 273.16 K, the defined temperature of the triple point of water, was chosen so that we get 100◦ between freezing and boiling of water.)
Although it’s reproducible, this scale is not so practical. We can now use this dilute gas
thermometer to calibrate other, more practical, thermometers. Any physically observable
quantity which varies with temperature can be used. You can imagine therefore that lots of
things have been tried. e.g.
expansion of liquids,
resistance (temperature dependence of resistance of metals and semiconductors),
thermocouples...
Different systems are useful for different temperature ranges and different sensitivities (there
is generally a tradeoff between how sensitive a thermometer is and how large a range it can
measure).
Combining all these together is something called the "International Practical Temperature
Scale", which you can read more about in Adkins.
Then we can use these thermometers to map out isotherms of other systems. They are not
always hyperbolae.
e.g. a paramagnet:
[Figure: isotherms (curves of T = const) in the (M, H) plane.]
3.3 Differential changes of state and exact differentials
I remind you that when we have several independent variables, we need to be careful in
specifying what we mean by a derivative – we have to specify what is held fixed; this is
indicated by the subscript: (∂x/∂y)_z means the derivative of x with respect to y with z held
fixed.
Similarly,
P = P (T, V ) =⇒ dP = (∂P/∂V )_T dV + (∂P/∂T )_V dT
T = T (P, V ) =⇒ dT = (∂T /∂V )_P dV + (∂T /∂P )_V dP
Partial derivatives like these are called response functions. How does the system's something respond when we make a
perturbation of its something else under conditions where a third thing is held fixed?
This kind of thing is often what gets measured in experiments.
We’ve written six of them on the board. They are not all independent.
You can show (see Adkins and recitation and subsection 3.3.1) that if three variables x, y, z
are related by one equation of the form F (x, y, z) = 0, then
(∂x/∂y)_z = 1 / (∂y/∂x)_z
and the two other equations related by permuting the variables. Note that the same quantity
is held fixed on both sides. Furthermore,
(∂x/∂y)_z (∂y/∂z)_x (∂z/∂x)_y = −1 .
These identities (just math, no physics) reduce six response functions to two independent
ones. We’ll use these over and over.
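As a quick sanity check (not part of the notes; a sketch with arbitrary numbers), we can verify the triple-product identity numerically for the ideal gas P V = T (units with N k = 1), computing each partial derivative by a central finite difference:

```python
# Numerical check of (dP/dV)_T (dV/dT)_P (dT/dP)_V = -1
# for the ideal gas P V = T (units with N k = 1; values are arbitrary).

def dPdV_T(V, T, h=1e-6):           # P(V, T) = T/V at fixed T
    P = lambda V: T / V
    return (P(V + h) - P(V - h)) / (2 * h)

def dVdT_P(P, T, h=1e-6):           # V(T, P) = T/P at fixed P
    V = lambda T: T / P
    return (V(T + h) - V(T - h)) / (2 * h)

def dTdP_V(P, V, h=1e-6):           # T(P, V) = P V at fixed V
    T = lambda P: P * V
    return (T(P + h) - T(P - h)) / (2 * h)

V, T = 2.0, 300.0
P = T / V
product = dPdV_T(V, T) * dVdT_P(P, T) * dTdP_V(P, V)
print(product)                      # close to -1, as the identity demands
```

Note that each factor holds a different variable fixed – that is where the surprising minus sign comes from.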
Exact differentials.
∫ from state 1 to state 2 of dx = x(2) − x(1) , independent of the path.
This identity says that it doesn't matter which path you take from the point 1 to the point
2. The end result, x(2), is the same no matter which path you take to get there. This is
true as long as x(y, z) is a smooth, single-valued function.
dx = A(y, z) dy + B(y, z) dz ,   with A = (∂x/∂y)_z and B = (∂x/∂z)_y .
Integrating exact differentials.
Suppose you know A, B (e.g. because you measured them), in dx = Ady + Bdz. How can
you reconstruct the state variable x(y, z)?
First, in order for x to exist, it’s necessary that its mixed partials be equal:
(∂A/∂z)_y = (∂B/∂y)_z
If your functions A, B pass this check, you can proceed as follows. The overall additive
constant in x isn’t fixed. Pick a random starting point y0 , z0 and declare x(y0 , z0 ) = 0.
(Usually there will be some additional physical input that fixes this ambiguity.) To get x at
some other point (y, z) we integrate along a path from (y0 , z0 ) to (y, z); since the answer is
path-independent, we can pick a convenient path. I pick this one:
2. Optional cultural comment for mathy people: This assumes that there is no interesting topology of the
space of values of the thermodynamic variables. We'll assume this always.
An alternative procedure, which is a little slicker (but gives the same answer, of course, up
to the additive constant), is:
1. x(y, z) = ∫ dy A(y, z) + f (z)
The last term here is like an integration constant, except it can still depend on z. We
need to figure it out. To do that consider:
2. B = (∂x/∂z)_y = ∫ dy ∂z A + df /dz
3. f (z) = ∫ dz [ B − ∫ dy ∂z A ]
Result: x(y, z) = ∫ dy A(y, z) + ∫ dz B(y, z) − ∫ dy ∫ dz (∂A/∂z)_y
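The path-integration procedure can be checked numerically. Here is a sketch (the functions A and B are invented for illustration): take dx = A dy + B dz with A = 2yz and B = y² + 3z², which is exact since ∂A/∂z = ∂B/∂y = 2y, and integrate along the path (0,0) → (y,0) → (y,z):

```python
# Reconstruct x(y, z) from dx = A dy + B dz by integrating along the
# two-leg path (0,0) -> (y,0) -> (y,z).  The example functions are
# invented for illustration: A = 2yz, B = y^2 + 3z^2, so x = y^2 z + z^3.

def A(y, z): return 2 * y * z
def B(y, z): return y**2 + 3 * z**2

def integrate(f, a, b, n=10000):            # simple midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def x(y, z):
    leg1 = integrate(lambda yp: A(yp, 0.0), 0.0, y)   # along z = 0
    leg2 = integrate(lambda zp: B(y, zp), 0.0, z)     # then at fixed y
    return leg1 + leg2

print(x(1.5, 2.0))          # ~ 12.5, matching y^2 z + z^3 = 4.5 + 8
```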
Example: a non-ideal gas
Suppose you have the following information (e.g. because you measured these things):
1. P → N kT /V in the dilute limit (large V );
2. (∂P/∂T )_V = N k/(V − N b) ;
3. (∂P/∂V )_T = −N kT /(V − N b)² + 2aN ²/V ³ .
We want to reconstruct P (T, V ) from
dP = (∂P/∂T )_V dT + (∂P/∂V )_T dV .
First check that it is an exact differential. I made sure that it is.
P = ∫ dT N k/(V − N b) + f (V ) = N kT /(V − N b) + f (V )
The additive constant is stuffed into f (V ) here.
(∂P/∂V )_T = −N kT /(V − N b)² + f ′(V ) , which by fact 3 equals −N kT /(V − N b)² + 2aN ²/V ³ .
Therefore:
f (V ) = ∫ dV 2aN ²/V ³ = −aN ²/V ² + const
=⇒ P = N kT /(V − N b) − a N ²/V ² + const
The constant must be zero so that P V = N kT at large V . (This is called the Van der Waals
equation of state; it includes the leading corrections to a slightly non-ideal gas.)
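As a check of this example (a numeric sketch, in units where N k = 1 and with arbitrary values of a and b), we can verify that the reconstructed P reproduces the two "measured" response functions:

```python
# Check the reconstructed van der Waals P(T, V) against the measured
# response functions.  Units with N k = 1, N^2 a -> a, N b -> b (arbitrary).
Nk, a, b = 1.0, 2.0, 0.1

def P(T, V):                         # reconstructed equation of state
    return Nk * T / (V - b) - a / V**2

def dPdT_V(T, V, h=1e-6):
    return (P(T + h, V) - P(T - h, V)) / (2 * h)

def dPdV_T(T, V, h=1e-6):
    return (P(T, V + h) - P(T, V - h)) / (2 * h)

T, V = 300.0, 2.0
given_dPdT = Nk / (V - b)                          # "measured" fact 2
given_dPdV = -Nk * T / (V - b)**2 + 2 * a / V**3   # "measured" fact 3

print(abs(dPdT_V(T, V) - given_dPdT) < 1e-4)       # True
print(abs(dPdV_T(T, V) - given_dPdV) < 1e-3)       # True
```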
3.3.1 Appendix: Reciprocity theorem, derivation
In case you are curious, here is the derivation of the 'Reciprocity Theorem', which applies
also for n variables instead of 3 [Adkins, page 12, discusses the case n = 3].
The variation of any given variable as we move along the constraint surface can be expressed
in terms of the variations of the others:
dx1 = Σ_{j≠1} (∂x1 /∂xj )|_{x_{l≠1,j}} dxj    (7)
dx2 = Σ_{j≠2} (∂x2 /∂xj )|_{x_{l≠2,j}} dxj    (8)
Note that you can actually ignore the annoying |_{x_{l≠i,j}} bits here – they always express that
all the other variables are fixed besides the two that are involved in the derivative. So I'm
going to suppress them here to make the equations look better – you have to remember that
they are there. Now substitute in (7) using (8) for dx2 :
dx1 = Σ_{j≠1,2} (∂x1 /∂xj ) dxj + (∂x1 /∂x2 ) ( Σ_{j≠1,2} (∂x2 /∂xj ) dxj + (∂x2 /∂x1 ) dx1 )
Now since we can vary x1 and x3 , x4 ...xn independently, this is actually n − 1 equations.
Varying only x1 we learn that:
0 = −1 + (∂x1 /∂x2 ) (∂x2 /∂x1 )
i.e.
(∂x1 /∂x2 )|all others fixed = 1 / (∂x2 /∂x1 )|all others fixed    (9)
Varying instead the other xj , one finds similarly
(∂xi /∂xj ) (∂xj /∂xk ) (∂xk /∂xi ) = −1    (10)
for any i, j, k distinct. Note that the left-hand side is dimensionless because each of xi,j,k appears
once in the top and once in the bottom. This is sometimes called the "reciprocity theorem".
Let's go back to the case of three variables for simplicity, and call them (x1 , x2 , x3 ) = (x, y, z).
If we combine this last relation (10) with the reciprocal relation (9), we have
(∂y/∂x)_z = − (∂y/∂z)_x (∂z/∂x)_y .
This way of writing (10) makes the minus sign seem to conflict with the chain rule in single-
variable calculus, dY /dX = (dY /dZ)(dZ/dX). There is actually no conflict, because the latter formula
applies to a different situation, namely where X, Y, Z each determine each other, i.e. we
have two independent relations X = X(Y ) AND Y = Y (Z) among the three variables
(which specifies a curve in space, rather than a surface).
An even simpler example
To check that this funny-seeming sign is really there, let’s do a simple example. Take
0 = F (x, y, z) = x + y + z.
Then
x(y, z) = −y − z, y(x, z) = −x − z, z(x, y) = −x − y
(∂x/∂y)_z = −1,  (∂y/∂z)_x = −1,  (∂z/∂x)_y = −1
So indeed their product is
(∂x/∂y)_z (∂y/∂z)_x (∂z/∂x)_y = −1 .
3.4 Work in quasistatic processes
Consider the example of a hydrostatic system. This just means a fluid that exerts a uniform
pressure on its surroundings, for example, gas in a cylinder:
A is the cross-sectional area. In order for the piston to stay where it is, something must be
exerting a force F = P A to the left. If the piston is moved by some infinitesimal amount,
Newton tells us that:
d¯W = − P dV = (P A) (−dV /A)
Here P is intensive and dV extensive; P A is the force and −dV /A the displacement.
Work done depends on the path:
Wiaf ≠ Wibf
=⇒ d¯W is not an exact differential.
d¯W is NOT d of some single-valued function W (P, V ). That’s the point of the little slash.
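A tiny numeric illustration of this (the values are arbitrary): take two paths from (Vi , Pi ) to (Vf , Pf ), each built from an isobar and an isochore, and compare the work W = −∫P dV :

```python
# W = -∫ P dV depends on the path taken in the (V, P) plane.
Vi, Pi = 1.0, 4.0     # initial state (arbitrary units)
Vf, Pf = 2.0, 1.0     # final state

# path a: expand at P = Pi, then drop the pressure at fixed V = Vf
W_a = -Pi * (Vf - Vi)          # the isochoric leg does no work
# path b: drop the pressure at fixed V = Vi, then expand at P = Pf
W_b = -Pf * (Vf - Vi)

print(W_a, W_b)                # -4.0 vs -1.0: different, so dW is not exact
```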
Some other examples of how to do work on various systems. Most of the point here is the
names and the signs.
Spring:
d¯W = + f dL
where f is the applied force and dL the change in length.
Note sign convention: assumes f > 0 means spring is under tension. dL > 0 =⇒ d¯W > 0.
If the spring is being compressed, f < 0. This is the opposite of the convention for pressure.
Capacitor:
d¯W = E dq
where E is the voltage drop and dq the change in the charge.
Polarized dielectric:
d¯W = E · dp
where E is the electric field and dp the change in dipole moment.
The total dipole moment is p~ = P~e V where P~e is the electric polarization (the dipole moment
per unit volume). To see this, imagine placing the dielectric between the plates of a capacitor:
The charge on the plates is q = D A, where A is the area of the plates and D is the electric
displacement; it is related to the electric field E by
D = ϵ0 E + Pe .
If x is the separation between the plates, the potential difference across them is E x, so
d¯W = (E x) dq = E x A dD = V E dD
where V = A x is the volume of dielectric. Then
d¯W = ϵ0 V E dE + E dp .
The first term is energy cost for making an electric field in vacuum and has nothing to do
with the dielectric. The second term is what we want.
Magnetized paramagnet:
If we have a chunk of paramagnetic material, with magnetization M , in an applied magnetic
field H , what's the work done in changing that magnetization?
Claim: d¯W = H dM
where H is the applied magnetic field and dM the change in magnetization.
Here is a derivation of the claim, using Ampere and Faraday. It’s interesting, and confus-
ingly treated in Adkins §3.5.4.
d¯W = (A L/4π) H dB = (V /4π) H dB .
What's dB? Taking d of (12):
dB = dH + (4π/V ) dM
d¯W = (V /4π) H dH + H dM
The second term is the extra work done because of the presence of the paramagnet; the first term is d (V H²/8π) .
The first term is the work needed to create H in vacuum. We don’t care about this. The
second term is the work done to magnetize the paramagnet. Therefore d¯W = HdM .
3.5 First law of thermodynamics; internal energy and heat
As usual, this Law is a statement of our experience of the world. This one we can prove
microscopically: it’s conservation of energy. Ultimately this follows (via Noether’s theorem)
from the fact that any time is the same as any other. That’s a story for another time.
First Law: “If a system is changed from an initial state to a final state by adiabatic means
only, the work done is the same for any path connecting the two states.”
It applies to quasistatic processes and to sudden irreversible processes, as long as the system
in question is thermally isolated.
In the figure: We can change the state quasistatically with the pistons.
We can change the state non-quasistatically with the pistons.
We can change the state by running current through the resistor.
We can change the state by stirring vigorously with the paddle wheel.
Heat can flow within the system.
No matter what combination of processes gets you from the initial state to the final state,
the sum of the work done is the same.
Internal Energy.
This suggests to us that we should define a quantity whose differences give the work done:
Wi→f (adiabatic) = Uf − Ui
U is called the internal energy. It’s the thing that’s not escaping through the adiabatic walls.
(It’s called U in thermodynamics, and E in stat mech. Get used to it.)
U is a state function.
That is, Uf − Ui does not depend on the path we pick between i and f .
Now, suppose we take a system from state i to state f NON-adiabatically, i.e. while it’s
not isolated.
Then Uf − Ui ≠ W . Define heat Q to be the quantity such that:
Uf − Ui = W + Q
where W is the work done on the system and Q is the heat flowing into the system.
This is (a more general version of) the 1st Law of Thermodynamics which applies without
the restriction to adiabatic changes. It is less contentful in that it is really a definition of heat.
It is more obviously the statement that energy is conserved.
Note the sign convention. It is important to be clear about this convention whenever we
talk about heat or work. Also, don't forget that W and Q can be negative! A negative
amount of heat flowing into the system is a positive amount of heat flowing out.
dU = d¯W + d¯Q
(dU is an exact differential; d¯W and d¯Q are not.)
We have learned how to write d¯W in terms of state variables: e.g. d¯W = −P dV for
quasistatic processes, so that
dU = d¯Q − P dV .
(For non-quasistatic processes we can't make such an accurate account of the energy.)
To do the same for d¯Q we need a better statement of the 2nd Law. (Spoiler alert: d¯Q = T dS
for quasistatic processes.)
3.6 Heat Capacities
Now we begin our quantitative use of the 1st Law. Soon we will determine the shape of
adiabats.
CV ≡ (d¯Q/dT )|_V ,   CP ≡ (d¯Q/dT )|_P
It is the answer to the natural question: how much heating d¯Q does it take to change the
temperature of an object by dT ? The answer depends on what we hold fixed when we shoot
the heat gun (or do whatever we’re doing to heat the object), e.g. whether we do this while
fixing the volume V or fixing the pressure.
Don’t be tricked by the name “heat capacity” into becoming a believer in the Caloric
Theory – heat is not a conserved fluid that enters a body and stays there. The thing that
stays is energy, which can also take many other forms.
For thermodynamic systems with other variables, we can hold other things fixed instead.
e.g. For a rod: CL or Cf with Cf > CL .
e.g. For a paramagnet: CM or CH with CH > CM .
3.7 CV and CP for a hydrostatic system; enthalpy; heat reservoirs
Let’s find an expression for CV in terms of other things we know for a hydrostatic system –
a fluid in a cylinder.
d¯Q = dU + P dV
    = (∂U/∂T )_V dT + [ (∂U/∂V )_T + P ] dV
=⇒ CP ≡ (d¯Q/dT )_P = (∂U/∂T )_V + [ (∂U/∂V )_T + P ] (∂V /∂T )_P
where (∂U/∂T )_V = CV and (∂V /∂T )_P = V α (α is the thermal expansivity).
Enthalpy
Can we write CP = (∂H/∂T )_P for some quantity H? Let's figure out what H has to be for this equality to be true (Spoilers:
H stands for 'enthalpy').
d¯Q = dU + P dV (17)
We’re going to set dP = 0, so let’s consider U as a function of T and P .
dU = (∂U/∂T )_P dT + (∂U/∂P )_T dP .    (18)
Combining (17) and (18):
CP ≡ (d¯Q/dT )_P = (∂U/∂T )_P + P (∂V /∂T )_P .
Finally, notice that (∂(P V )/∂T )_P = P (∂V /∂T )_P . So
CP = (∂H/∂T )_P    if we define H ≡ U + P V
H is called enthalpy. Since it’s made from a simple combination of state functions it is
clearly also a state function. (This particular way of assembling state variables is called a
Legendre transformation. It will come up again.)
dH = dU + P dV + V dP = d¯Q + V dP .
A sometimes useful relation, which can be shown using the same methods as above, is:
CV − CP = (∂P/∂T )_V [ (∂H/∂P )_T − V ] .
Heat reservoirs
This definition is most easily stated in terms of heat capacity, so it has waited until now.
Def: A heat reservoir is a system with such a large CV and CP (for whatever reason –
usually this means it’s made of lots of stuff and is very big and heavy) that it may absorb
or give up “unlimited” quantities of heat (for purposes of discussion) without appreciable
change in temperature or in any other intensive thermodynamic variable.
Note that this definition is meant to be flexible – the meaning of 'unlimited' depends on
the system of interest; really we just want the reservoir to have a CV much bigger than
that of the system of interest. For example, if we're doing a little chemistry experiment in
a beaker, the air in the room is a useful heat reservoir, which is keeping the beaker at fixed
T and P . This is to be contrasted with what happens if our little experiment blows up
and creates a room-sized conflagration. Then the room can no longer be considered a heat
reservoir.
3.8 CV and CP for ideal gas
An ideal gas is a fluid where collisions between the constituents may be ignored. Mean free
paths are infinite. How can it have a nonzero pressure? Only because of collisions with the
walls of its container.
For an ideal gas, the equation of state is P V = N kT . We'll derive this later. k = kB =
Boltzmann's constant
≈ 1.38 × 10^−16 erg/Kelvin
≈ 8.62 × 10^−5 eV/Kelvin.
A useful statement about these to remember is: at room temperature, kB T ≈ 1/40 eV.
What about U ? Consider the following experiment: Free expansion of an ideal gas [Joule]:
Open the valve suddenly, then, as usual, wait until the system reaches a new equilibrium
state.
Conclusions: ∆Q = 0 (adiabatic walls)
∆W = 0 (free expansion – no piston gets moved)
1st Law =⇒ ∆U = 0
In general, U = U (V, T ):
dU = (∂U/∂V )_T dV + (∂U/∂T )_V dT ,   with (∂U/∂T )_V ≡ CV .
Joule observed that T is unchanged in the free expansion of a dilute gas; since U and T are
unchanged while V changes, (∂U/∂V )_T = 0 for an ideal gas.
(A hard-to-resist comment about the microscopic picture: Particles in an ideal gas have
no interactions. This means that microscopically U is made up of energies of the individual
particles, and not interaction energies that depend on the separations between the particles
– no dependence on the spacing between the particles means no dependence on V .)
=⇒ dU = CV dT
=⇒ U (T ) = ∫_0^T dT ′ CV (T ′) + const ,
where we choose the zero of energy to set the const to zero.
This formula is true for any system (not just ideal gas) if we heat it while holding its volume
fixed. It’s true for an ideal gas no matter how we do it.
Now, let's examine the formula we derived above for CP in this case:
CP − CV = [ (∂U/∂V )_T + P ] V α ,   with (∂U/∂V )_T = 0 for the ideal gas,
        = V α P = P (∂V /∂T )_P = N k .
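A quick numeric check of CP − CV = V α P (a sketch; units with N k = 1 and arbitrary T and P ), computing the expansivity α by a finite difference:

```python
# For the ideal gas, C_P - C_V = V * alpha * P should equal N k,
# with alpha = (1/V)(dV/dT)_P.  Units with N k = 1; values arbitrary.
Nk = 1.0
T, P = 300.0, 2.0
V = Nk * T / P

h = 1e-6
dVdT_P = (Nk * (T + h) / P - Nk * (T - h) / P) / (2 * h)
alpha = dVdT_P / V             # = 1/T for the ideal gas

print(V * alpha * P)           # -> 1.0, i.e. N k
```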
Different kinds of ideal gases are distinguished by the form of CV . To go farther we will
need more microphysical information4 . For a monatomic ideal gas (e.g. He, Ne, Ar, Kr, Xe),
experiments show that (in this chapter this phrase is almost always code for “we’ll show this
later using stat mech”):
CV = (3/2) N kB .
This implies that U = (3/2) N kB T and CP = (5/2) N kB and
γ ≡ CP /CV = 5/3   for a monatomic ideal gas.
Note that γ > 1 for any gas since CP > CV .
The U of diatomic ideal gas molecules must include the energies of vibration and rotation
of the molecules, not just their translational motion.5
4. I'm making a lot of promises here about what we're going to learn from statistical mechanics.
5. (We'll see that the equation of state (i.e. the pressure P ) doesn't care about this.)
For a typical diatomic gas:
[Figure: CV /(N kB ) vs. T , increasing in steps as rotational and then vibrational degrees of freedom unfreeze.]
This picture (which you could imagine obtaining by a series of measurements involving
adding a little heat and measuring temperature differences) is trying to tell us about the internal
structure of the constituents.
Two types of adiabatic expansion of an ideal gas:
[Figure: (1) free expansion into vacuum; (2) quasistatic expansion against a piston.]
In case (2) it can be adiabatic and quasistatic – the piston moves slowly and the gas does
work on it. All the energy is accounted for, and the process can be reversed.
Case (1), free expansion, is not quasistatic – it is "sudden". We might want to write ∆W =
−∫P dV but we can't – the pressure isn't even well-defined during the sudden escape of the
gas.
(1): ∆Q = 0 (adiabatic),  ∆W = 0 (no piston moves),  ∆U = 0 .
(2): ∆Q = 0 (adiabatic),  ∆W = −∫P dV < 0 ,  ∆U < 0 =⇒ Tf < Ti .
We used this in Chapter I. Now we can be quantitative about the shapes of various curves
describing reversible paths.
For the quasistatic adiabatic case, 0 = d¯Q = CV dT + P dV , so
dT /dV = − N kT /(CV V )
        = − (CP − CV )/CV · T /V
i.e.  dT /T = −(γ − 1) dV /V   for adiabatic and quasistatic expansion of an ideal gas,
or equivalently dT /dV = −(γ − 1)/(V α) (using α = 1/T for the ideal gas).
We can go further if we make the approximation that γ = constant. You saw from the
plots of CP above that this is approximately true away from the special temperatures where
new degrees of freedom appear. So this will be correct away from the steps.
=⇒ T V^(γ−1) = const
If we use the equation of state P V ∝ T :
P V^γ = const
or P ∝ V^(−γ) . This is the shape of an adiabat. Since γ > 1 always, this is always a more
steeply-falling curve than an isotherm. For a monatomic ideal gas, this is P ∼ V^(−5/3), which
is plotted here:
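We can also check the shape of the adiabat numerically (a sketch; arbitrary starting point, γ = 5/3): integrate dT /dV = −(γ − 1) T /V with small Euler steps and confirm that T V^(γ−1) stays constant:

```python
# Integrate dT/dV = -(gamma - 1) T / V (quasistatic adiabatic expansion
# of a monatomic ideal gas, gamma = 5/3) from V0 to 2 V0 and check the
# invariant T V^(gamma - 1).  Starting values are arbitrary.
gamma = 5.0 / 3.0

T, V = 300.0, 1.0
C_T = T * V**(gamma - 1)                  # should stay constant

dV = 1e-5
while V < 2.0:                            # expand slowly to 2 V0
    T += -(gamma - 1) * T / V * dV        # Euler step
    V += dV

print(T)                                  # ~ 300 * 2**(-(gamma - 1)) ~ 189
print(abs(T * V**(gamma - 1) - C_T))      # small: the invariant holds
```

The temperature drops during the quasistatic adiabatic expansion, as the table above for case (2) says it must.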