Materials Science
Fall, 2008
PART II: THERMOCHEMICAL PROPERTIES  144

CHAPTER 7: THERMODYNAMICS  146
7.1 Introduction  146
7.2 Entropy  147
    7.2.1 Entropy and time  147
    7.2.2 Entropy and heat  149
    7.2.3 Entropy and randomness  151
7.3 The Conditions of Equilibrium  154
    7.3.1 The equilibrium of an isolated system  154
    7.3.2 Internal equilibrium  156
    7.3.3 Non-equilibrium states; constrained equilibria  158
7.4 The Thermodynamic Potentials  160
    7.4.1 The Helmholtz free energy  161
    7.4.2 The Gibbs free energy  163
    7.4.3 The work function  165
7.5 The Fundamental Equation  168
    7.5.1 The entropy function  168
    7.5.2 The energy function  169
    7.5.3 Alternate forms of the fundamental equation  172
    7.5.4 The integrated form of the fundamental equation  174
    7.5.5 The statistical form of the fundamental equation  175
7.6 The Thermodynamics of Surfaces  176
    7.6.1 The Gibbs construction  177
    7.6.2 The fundamental equation of an interface  178
    7.6.3 The conditions of equilibrium at an interface  179

CHAPTER 8: SIMPLE SOLIDS  185
8.1 Introduction  185
8.2 The Perfect Crystal  186
    8.2.1 The internal energy  186
    8.2.2 Lattice vibrations  187
    8.2.3 The vibrational energy  196
    8.2.4 The vibrational contribution to the specific heat  197
    8.2.5 A qualitative version of the Debye model  203
    8.2.6 The electronic contribution to the specific heat  205
    8.2.7 The Helmholtz free energy  209
    8.2.8 Thermodynamic properties  211
8.3 The Random Solid Solution  213
    8.3.1 The Bragg-Williams model  213
    8.3.2 The internal energy  214
    8.3.3 The configurational entropy  215
    8.3.4 The free energy and thermodynamic behavior  216
8.4 Equilibrium Defect Concentrations  218
    8.4.1 The equilibrium vacancy concentration  218
    8.4.2 Dislocations and grain boundaries  220

CHAPTER 9: PHASES AND PHASE EQUILIBRIUM  222
9.1 Introduction  222
9.2 Phase Equilibria in a One-Component System  223
    9.2.1 Phase equilibria and equilibrium phase transformations  224
    9.2.2 Metastability  226
    9.2.3 First-order phase transitions: latent heat  226
    9.2.4 Transformation from a metastable state  228
9.3 Mutations  228
    9.3.1 The nature of a mutation  228
    9.3.2 Common transitions that are mutations  229
9.4 Phase Equilibria in Two-Component Systems  232
    9.4.1 The free energy function  233
    9.4.2 The common tangent rule  236
    9.4.3 The phases present at given T, P  239
    9.4.4 Equilibrium at a congruent point  241
    9.4.5 Equilibrium at the critical point of a miscibility gap  242
9.5 Binary Phase Diagrams  243
9.6 The Solid Solution Diagram  244
    9.6.1 The thermodynamics of the solid solution diagram  245
    9.6.2 Equilibrium information contained in the phase diagram  247
    9.6.3 Equilibrium phase changes  247
9.7 The Eutectic Phase Diagram  249
    9.7.1 Thermodynamics of the eutectic phase diagram  249
    9.7.2 Equilibrium phase changes  251
    9.7.3 Precipitation from the α phase  251
    9.7.4 The eutectic microstructure  252
    9.7.5 Mixed microstructures in a eutectic system  253
    9.7.6 Phase diagrams that include a eutectoid reaction  254
9.8 Common Binary Phase Diagrams  254
    9.8.1 Solid solution diagrams  255
    9.8.2 Low-temperature behavior of a solid solution  256
    9.8.3 Phase diagrams with eutectic or peritectic reactions  259
    9.8.4 Structural transformations in the solid state  263
    9.8.5 Systems that form compounds  265
    9.8.6 Mutation lines in binary phase diagrams  271
    9.8.7 Miscibility gap in the liquid  272

CHAPTER 10: KINETICS  274
10.1 Introduction  274
10.2 Local Equilibrium  275
10.3 The Conduction of Heat  277
    10.3.1 Heat conduction in one dimension: Fourier's Law  277
    10.3.2 Heat conduction in three dimensions  279
    10.3.3 Heat sources  281
10.4 Mechanisms of Heat Conduction  282
    10.4.1 Heat conduction by a gas of colliding particles  283
    10.4.2 Heat conduction by mobile electrons  285
    10.4.3 Heat conduction by phonons  287
    10.4.4 Heat conduction by photons  292
10.5 Non-equilibrium Thermodynamics  293
    10.5.1 The thermodynamic forces  293
    10.5.2 The non-equilibrium fluxes and the kinetic equations  294
    10.5.3 Simplification of the kinetic equations  295
10.6 Diffusion  298
    10.6.1 Fick's First Law for the diffusion flux  298
    10.6.2 Fick's Second Law for the composition change  299
    10.6.3 Solutions of the diffusion equation  301
10.7 The Mechanism of Diffusion in the Solid State  304
    10.7.1 The mobility of interstitial species  304
    10.7.2 The mobility of substitutional species  305
    10.7.3 Random-walk diffusion; Fick's First Law  307
    10.7.4 The mean diffusion distance in random walk diffusion  309
    10.7.5 Uses of the mean diffusion distance  311
10.8 Microstructural Effects in Diffusion  313
    10.8.1 The vacancy concentration  313
    10.8.2 Grain boundary diffusion  315
    10.8.3 Diffusion through dislocation cores  317

CHAPTER 11: PHASE TRANSFORMATIONS  318
11.1 Common Types of Phase Transformations  318
11.2 The Basic Transformation Mechanisms  319
    11.2.1 Nucleated transformations and instabilities  319
    11.2.2 First-order transitions and mutations  321
11.3 Homogeneous Nucleation  321
    11.3.1 Nucleation as a thermally activated process  321
    11.3.2 The activation energy for homogeneous nucleation  323
    11.3.3 The nucleation rate  324
    11.3.4 The initiation time  326
11.4 Heterogeneous Transformations  327
    11.4.1 Nucleation at a grain boundary  327
    11.4.2 Other heterogeneous nucleation sites  328
    11.4.3 Implications for materials processing  329
11.5 The Thermodynamics of Nucleation  330
    11.5.1 The thermodynamic driving force for nucleation  331
    11.5.2 Nucleation in a one-component system  332
    11.5.3 Nucleation from a supersaturated solid solution  334
11.6 Nucleation of Non-Equilibrium States  336
    11.6.1 Congruent nucleation  337
    11.6.2 Coherent nuclei  341
    11.6.3 Nucleation of metastable phases: the Ostwald rule  343
    11.6.4 Sequential nucleation in a eutectic  344
11.7 Recrystallization  347
11.8 Growth  349
    11.8.1 Primary growth and coarsening  349
    11.8.2 Time-temperature-transformation (TTT) curves  350
11.9 Interface-Controlled Growth  351
    11.9.1 Isotropic growth of a congruent phase  351
    11.9.2 Interface control at stable surfaces  353
11.10 Diffusion-Controlled Growth  357
    11.10.1 Growth controlled by thermal diffusion  357
    11.10.2 Growth controlled by chemical diffusion  359
11.11 Chemical Segregation During Growth  364
11.12 Grain Growth and Coarsening  366
11.13 Instabilities  367
11.14 Martensitic Transformations  368
11.15 Spinodal Decomposition  372
    11.15.1 Spinodal instability within a miscibility gap  372
    11.15.2 Spinodal decomposition  374
    11.15.3 Spinodal decomposition to a metastable phase  377
    11.15.4 Use of spinodal decomposition in materials processing  378
11.16 Ordering Reactions  378
    11.16.1 Ordering reactions that are mutations  379
    11.16.2 First-order transitions that change the state of order  380
    11.16.3 Ordering at the stoichiometric composition  382
    11.16.4 Ordering at an off-stoichiometric composition  382
    11.16.5 Implications for materials processing  384

CHAPTER 12: ENVIRONMENTAL INTERACTIONS  385
12.1 Introduction  385
12.2 Chemical Changes near the Surface  385
    12.2.1 Thermal treatment  385
    12.2.2 Diffusion across the interface  387
    12.2.3 Ion implantation  390
12.3 Chemical Reactions at the Surface: Oxidation  392
    12.3.1 Thermodynamics of oxidation  392
    12.3.2 The kinetics of oxidation  394
    12.3.3 Protecting against oxidation  399
12.4 Electrochemical Reactions  402
    12.4.1 The galvanic cell  403
    12.4.2 The electromotive series and the galvanic series  405
    12.4.3 The influence of concentration: concentration cells  407
    12.4.4 Reactions at the cathode  408
    12.4.5 The influence of an impressed voltage  411
    12.4.6 Thermodynamics of the galvanic cell  411
12.5 The Kinetics of Electrochemical Reactions  418
    12.5.1 The current in an electrochemical cell  418
    12.5.2 The current-voltage characteristic  420
12.6 Aqueous Corrosion in Engineering Systems  427
    12.6.1 Corrosion cells in engineering systems  427
    12.6.2 The corrosion rate  430
    12.6.3 Corrosion protection  431
Part II: Thermochemical Properties

"...In war the victorious strategist seeks battle after the victory has been won, while he who is destined for defeat fights first and looks for victory afterwards."
- Sun Tzu, The Art of War
The thermochemical properties of materials govern two kinds of behavior: internal reactions that occur within the material and determine its microstructure, and environmental interactions that alter the chemistry of the material or its environment. The most important of these are the reactions that are manipulated to control microstructure and fix engineering properties. The management of thermochemical processes within the material is the essence of materials processing.

Thermochemical properties also affect behavior in service. If the microstructure evolves as a material is used, its engineering properties change accordingly. Such changes must be anticipated and, if necessary, prevented.

To understand thermochemical processes it is necessary to recognize the interplay of two central themes: thermodynamics and kinetics. Thermodynamics governs whether a given process is possible and fixes the magnitude of the forces that drive it. Kinetics determines how quickly the process can happen, given the thermodynamic driving force.

In many cases, kinetic constraints are used to prevent thermodynamically favorable processes from happening at all. Virtually all useful materials have microstructures that would spontaneously change if they were free to do so. They are, nonetheless, useful because the rate of change is negligible. Silica glasses and amorphous polymers are used routinely despite the fact that the crystalline state is preferred, and crystallization will eventually occur. Almost all metallic structures are polygranular despite the fact that grain boundaries are unfavorable, high-energy defects that will eventually be removed by grain growth. Electronic devices employ silicon chips that are doped to create chemical heterogeneities that will eventually homogenize and disappear. These are all examples of useful microstructures that are maintained by kinetic constraints.
Because the kinetics of change are slow, "eventually" is a time much longer than the expected life of the device in which the engineer uses it (and, often, much longer than the expected life of the engineer). Materials are made to have useful microstructures by combining thermodynamics and kinetics in a constructive way. The material is brought into a condition in which the microstructure that is wanted is both thermodynamically possible and kinetically achievable. The conditions are then altered, usually by cooling, to prevent further microstructural changes. For example, glasses are manufactured by melting, then cooling
quickly to a temperature at which both the amorphous and the crystalline solid states are thermodynamically preferred to the liquid. Since the amorphous structure is easier to achieve, it forms first. If the composition is such that crystallization is difficult and the glass is kept reasonably cool, it will remain amorphous for as long as desired. Chemically heterogeneous silicon chips are manufactured by allowing chemical species to diffuse into particular regions of the silicon crystal at high temperature, then cooling to a temperature at which the rate of subsequent diffusion is negligible. Almost all other useful engineering materials are made in a conceptually similar way. To appreciate how this is done it is necessary to understand both thermodynamics and kinetics.

In the following chapter we discuss thermodynamics. Thermodynamics defines the conditions of equilibrium, which lead to the notions of homogeneous phases, which are volumes of material that have uniform structure and composition, and equilibrium phase diagrams, which are maps that show the phases that are preferred in a material as a function of its temperature and composition. These concepts identify the two ways in which the microstructures of materials evolve: heterogeneities in single-phase regions diminish and vanish with time, and distinct phases appear or disappear as suggested by the phase diagram. The rate at which these processes occur is governed by the kinetics of diffusion in the former case, and the kinetics of structural change in the latter. The kinetics of thermochemical changes will be treated in general, and then specifically applied to the problems of chemical diffusion and structural change in the solid state.

The environmental interactions we shall specifically consider are those that govern the deterioration of materials at high temperature or in aqueous environments: high-temperature oxidation and aqueous corrosion.
Both are important problems that concern almost all branches of engineering. Finally, we shall consider two types of catalytic behavior: wetting, in which a solid determines the morphology of a second phase that may coat it, and chemical catalysis, in which a solid promotes a chemical reaction in which it does not directly participate. Wetting phenomena are important in coating, bonding and the catalysis of structural change. Chemical catalysis is particularly relevant to chemical engineering, and has given rise to a particular branch of materials science, the science of catalytic materials.
Chapter 7: Thermodynamics

"The great German physicist, Boltzmann, spent a lifetime deciphering the laws of thermodynamics, and died by his own hand. His intellectual successor, Paul Ehrenfest, also killed himself. We shall now study thermodynamics..."
- I can't recall who wrote this. However, per Oscar Wilde, a good quote should not be held responsible for its source.
7.1 INTRODUCTION

While the origins of materials engineering are lost in the mists of prehistory, it is arguably possible to date the beginning of materials science. The year was 1876, and the occasion was the publication of a paper by a then obscure Professor of Mathematical Physics at Yale University named Josiah Willard Gibbs. The paper appeared in an even more obscure scientific journal, the Transactions of the Connecticut Academy. It was entitled "On the Equilibrium of Heterogeneous Substances", and it was the first part of a two-part paper (the second part appeared in the same journal in 1878) that discussed how the recently formulated laws of thermodynamics might be used to understand the microstructures of materials.

For centuries before Gibbs, perceptive engineers had recognized that the properties of materials were determined by their nature, or, as we would say today, by their composition and microstructure. They also recognized that the nature of a material could be intentionally modified or controlled by processing it in an appropriate way. Even in the ancient world engineers achieved a degree of control over the materials they used that is impressive today. But the rules that governed processing were almost entirely empirical. The competent materials engineer was an artisan who resembled a master chef more than a modern technologist. Recipes were passed from master to apprentice, to be memorized rather than understood. Progress almost invariably reflected accidental discovery or observations derived from casual experimentation. The development of the thermodynamics of materials in the hands of Gibbs and his successors revolutionized materials engineering and turned it into materials science.
The keys lay in Gibbs' derivation of the fundamental equation that governs the properties of a material, in his demonstration that the fundamental equation could be written in alternative forms to define convenient thermodynamic potentials that rule behavior in common experimental situations, and in his formulation of the conditions of equilibrium and stability that determine when microstructural changes can occur. Much of this information can be presented in the form of equilibrium phase diagrams, which are plots of the equilibrium states of materials under given conditions. The equilibrium phase diagrams identify the state a material will seek under a given set of conditions, for
example, a given composition, temperature and pressure, and can be used to infer the microstructural changes that will happen when these conditions are altered.

Thermodynamics is a field in its own right that is the subject of a number of courses in various disciplines. I assume that you have some familiarity with it from your general background in chemistry and physics, and will give only a short overview here. The content of thermodynamics that is essential to understand the thermochemical behavior of materials includes the fundamental equation and the conditions of equilibrium. Both of these are consequences of the Second Law of Thermodynamics, which defines the entropy of a system. I shall assume that you are generally familiar with the concept of energy and its conservation, which is the content of the First Law of Thermodynamics, and move immediately to the Second Law.
7.2 ENTROPY

The concept of entropy is one of the most fundamental in science, and is also one of the more difficult for the typical student to grasp. A major reason is that entropy is not a thing that is easily seen or measured, and hence does not call up a concrete physical picture of the type that often helps illustrate other fundamental principles of physics. To understand entropy it is useful to think of it in connection with the more familiar concepts with which it is intimately associated: time, heat, equilibrium, and probability.

7.2.1 Entropy and time

The classical definition of entropy likens it to time, in the sense of real, or evolutionary, time. To illustrate how and why, we consider the simple system shown in Fig. 7.1. The system contains a uniform gas in a container whose walls insulate it from thermal or chemical interaction with its environment. Such a container is called adiabatic; it establishes a situation in which the state of the gas within the container can only be changed by doing mechanical work on it. In the example shown, mechanical work can be done by displacing the piston at one end of the container or by rotating the paddle-wheel in its interior.

The state of the gas is defined by its fixed chemical composition ({N}, the set of atom or mole numbers of the components it contains), by its volume (V), which can be changed by displacing the piston, and by its internal energy (E), which can be changed in the positive sense by compressing the piston or turning the paddle wheel to do mechanical work on the system, and in the negative sense by displacing the piston to expand the gas so that it does work on its environment.

It is found experimentally that all of the possible states of a gas in an adiabatic container can be divided into three sets on the basis of their accessibility from the initial state. The first set (1) includes states that can be reached in a reversible way from the initial state.
These states are achieved by displacing the piston so slowly and smoothly that friction and turbulence are negligible. The initial state can be recovered by slowly returning the piston to its original position. The second set (2) includes all other states that can be reached from the initial state without changing the nature of the system or its
adiabatic container. The transitions that lead to these states are irreversible: the system can never return to its original state as long as it remains adiabatic. All states that are reached by processes that stir the gas by turning the paddle wheel fall into this category. One can introduce an arbitrary amount of energy into the gas by turning the paddle wheel, but once this is done and the gas has become quiescent the wheel will never spontaneously turn itself to do work on the environment. The third set (3) includes all states that are inaccessible in the sense that they cannot be obtained from the initial state by any adiabatic process at all. These states have the property that they could only be reached if the energy could be decreased at constant volume, but this could only happen if the gas spontaneously turned the paddle wheel.
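The three sets can be illustrated numerically. The sketch below uses the entropy of a classical monatomic ideal gas, S/(Nk) = ln V + (3/2) ln E + const, as the accessibility index; this particular entropy function is an assumed outside example (it is not derived in this section), and the final states are made-up numbers chosen to land in each of the three sets.

```python
import math

# Entropy per particle of a classical monatomic ideal gas, in units of k,
# with the additive constant dropped.  This specific S(E, V) is an assumed
# illustration; the classification rule (compare S with S0) is the point.
def s(E, V):
    return math.log(V) + 1.5 * math.log(E)

E0, V0 = 1.0, 1.0          # initial state (arbitrary units)
S0 = s(E0, V0)

final_states = [
    (2.0 ** (-2.0 / 3.0), 2.0, "slow (reversible) expansion to 2V"),
    (2.0, 1.0, "energy doubled by stirring at fixed V"),
    (0.5, 1.0, "energy removed at fixed V"),
]

for E, V, label in final_states:
    dS = s(E, V) - S0
    if abs(dS) < 1e-12:
        verdict = "set 1: reversibly accessible"
    elif dS > 0:
        verdict = "set 2: irreversibly accessible"
    else:
        verdict = "set 3: adiabatically inaccessible"
    print(f"{label}: dS = {dS:+.3f} -> {verdict}")
```

On the reversible adiabat of this gas the product E V^(2/3) is constant, which is why the slow expansion has zero entropy change; stirring raises E at fixed V and lands in set 2; removing energy at fixed V would require the entropy to fall, which no adiabatic process can do.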
Fig. 7.1: A gas in an adiabatic container fitted with a piston and a stirrer.

Each of the three sets of states that are defined above includes an infinite number of distinct states that have a common property: their accessibility from the initial state by adiabatic processes (adiabatic accessibility). Since adiabatic accessibility can be measured experimentally, it is possible to give this property a name and define it in an objective way. We call this property the entropy, S, and define it so that states that are mutually accessible have the same entropy, those that are irreversibly accessible have greater entropy, and those that are inaccessible have lower entropy. By changing the initial state it is possible to define the entropy of all the states of the system. In any adiabatic process ΔS ≥ 0. If ΔS = 0 the process is reversible; if ΔS > 0 it is not. The definition of the entropy and its monotonic behavior in adiabatic processes form the content of the Second Law of Thermodynamics.

The concept of entropy may appear to be restricted by the fact that it was defined for a particular type of system, the adiabatic system. In fact, this is not a restriction. An example of an adiabatic system is an isolated system, which does not interact with its environment at all. Any system can be made into an isolated system by joining it to the environment with which it interacts. The composite system is adiabatic and, hence, can only evolve in such a way that its entropy increases or remains the same. The end of its evolution, its preferred or ultimate equilibrium state, is that which has the greatest possible entropy.

In a very real sense the entropy is the thermodynamic measure of time. The entropy of an adiabatic system orders its states in time. Those whose entropies are higher must necessarily come after those whose entropies are lower. Since the entropy can only
increase, time moves forward and cannot be reversed. The association between entropy and time is not casual, but fundamental; the Second Law of Thermodynamics is the only fundamental principle of theoretical physics in which time has a direction.

7.2.2 Entropy and heat

A body of material that is homogeneous in its composition and structure is called a phase. The thermodynamic state of a homogeneous phase of a simple material, such as the gas of Fig. 7.1, is fixed by its energy, its volume and its chemical content, that is, by the variables E, V, and {N}. Since each state of the system has an entropy, S, the entropy can be written as a function

    S = S(E,V,{N})        7.1

of the variables that characterize the state. It is possible to choose the entropy so that the entropy function, eq. 7.1, is a continuous function of its variables that is additive in the sense that the joint entropy of a composite of two or more systems is the sum of their individual entropies. Moreover, S can be defined so that the partial derivatives of the function S(E,V,{N}) have the following simple physical values:

    (∂S/∂E) = 1/T        7.2

where T is the absolute temperature,

    (∂S/∂V) = P/T        7.3

where P is the mechanical pressure, and

    (∂S/∂Nk) = -µk/T        7.4

where Nk is the mole number of the kth component and µk is its chemical potential. This definition of S is called the metrical entropy.

As we shall discuss below, the entropy function, eq. 7.1, that expresses the entropy as a function of the thermodynamic content of the system is Gibbs' fundamental equation that governs the thermodynamic behavior of the system. When the entropy is given by equation 7.1 its change in an infinitesimal change of state, that is, a change of state that involves an infinitesimal change in one or more of the variables E, V, and {N}, is
    dS = (∂S/∂E)dE + (∂S/∂V)dV + ∑k (∂S/∂Nk)dNk
       = (1/T)(dE + PdV - ∑k µkdNk)        7.5

Solving equation 7.5 for the incremental change in the energy, dE, yields the relation

    dE = TdS - PdV + ∑k µkdNk        7.6

Eq. 7.6 has a simple physical interpretation. If the state is changed quasi-statically, that is, slowly enough that friction can be ignored, then the second term on the right-hand side, -PdV, is the mechanical work done. The third term on the right is the energy due to the chemical change, {dN}, and is hence the chemical work. The energy change that is unaccounted for is that due to the thermal interaction, the thermal work, or heat, dQ. Hence in a quasi-static change of state

    dQ = TdS        7.7

and there is a direct association between the entropy change and the heat, or thermal work.

The association between the entropy change and the quasi-static heat that is expressed by eq. 7.7 makes it relatively easy to measure the entropy of a phase as a function of its energy and composition. If the system is heated incrementally and quasi-statically at constant volume and composition then the change in entropy is given by the change in energy, which can be found independently, divided by the absolute temperature. It is also possible to evaluate the second and third terms on the right-hand side of equation 7.6 independently, so the entropies of states that differ in volume and composition can be related to one another.

Such measurements evaluate the entropy with respect to its value in a fixed reference state. It is, however, possible to proceed further to define the absolute value of the metrical entropy. The Third Law of Thermodynamics states that in the limit of zero temperature the most stable state of a system is a state of perfect order whose entropy may be set equal to zero. If this state is used as a reference the metrical entropy is fixed to within a multiplicative constant that sets the scale of the temperature.

When the process that changes the state of the system is not quasi-static, for example, when friction cannot be neglected, then the simple association between the heat added and the change in the entropy is lost. Since energy is conserved and the change in energy is equal (by convention) to the sum of the work done on the system and the heat added to it then

    dE = dQ + dW        7.8
for any infinitesimal change of state whatever. (In the context of the First Law of Thermodynamics, eq. 7.8 is the definition of the heat.) Since non-static phenomena such as friction or turbulence increase the energy of the system (work is done on the system by the frictional forces or the stirring that introduces turbulence), it follows from eqs. 7.6 and 7.8 that

    dW ≥ -PdV + ∑k µkdNk        7.9

and hence that

    dQ ≤ TdS        7.10

The entropy change is always greater than or equal to the heat added divided by the temperature. The additional entropy is due to internal changes in the system of the sort that are caused by frictional forces or turbulence. Since these processes are inherently irreversible, the inequality (7.10) measures the irreversibility of a change of state.

7.2.3 Entropy and randomness

The metrical entropy of a homogeneous phase of an isolated simple system can be calculated statistically from the relation

    S(E,V,{N}) = k ln Ω(E,V,{N})        7.11
where Ω(E,V,{N}) is the degeneracy of the phase. If the atoms contained in the system can be treated as classical particles, then the degeneracy is the total number of distinct ways of assigning positions and momenta to the particles that are consistent with the definition of the phase and yield the right value of the total energy. In the context of quantum mechanics the degeneracy is the total number of distinct quantum states of the system that are consistent with its energy and phase. We shall not prove eq. 7.11 here, but will show that the entropy defined by 7.11 is consistent with the nature of the entropy as we have described it.
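The derivative definitions of eqs. 7.2 and 7.3 can also be checked numerically on a concrete entropy function. The sketch below assumes the classical monatomic ideal gas entropy, S = Nk[ln(V/N) + (3/2)ln(E/N)], with the constant offset dropped; this form is an outside assumption, not a result from the text. Finite differences of S then recover the temperature and the ideal gas law.

```python
import math

kB = 1.380649e-23   # Boltzmann's constant, J/K

# Assumed entropy function S(E, V, N) for a classical monatomic ideal gas
# (Sackur-Tetrode form, constant offset dropped); used here only to
# illustrate the derivative definitions of eqs. 7.2 and 7.3.
def S(E, V, N):
    return N * kB * (math.log(V / N) + 1.5 * math.log(E / N))

N = 6.022e23                  # one mole of atoms
T_true = 300.0                # K
E = 1.5 * N * kB * T_true     # internal energy of the monatomic gas
V = 0.0248                    # m^3, about one mole of gas at 300 K, 1 atm

# Eq. 7.2: (dS/dE) at constant V, {N} equals 1/T
h = 1e-6 * E
T = 2 * h / (S(E + h, V, N) - S(E - h, V, N))

# Eq. 7.3: (dS/dV) at constant E, {N} equals P/T
h = 1e-6 * V
P = T * (S(E, V + h, N) - S(E, V - h, N)) / (2 * h)

print(T)                      # recovers ~300 K
print(P * V / (N * kB * T))   # ~1, i.e. PV = N kB T
```

The recovered temperature matches the one used to set E, and the recovered pressure satisfies the ideal gas law, as the derivative definitions require for this entropy function.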
Properties of the statistical entropy

First, eq. 7.11 predicts that the preferred phase of an isolated system is the one that maximizes its entropy. Suppose that there are two possible phases of an isolated system with parameters E, V, and {N}. Let the phases be designated α and β, and let them have, respectively, degeneracies Ωα and Ωβ. If there is no constraint on the system that prevents it from taking on the configurations of either phase then its instantaneous state is equally likely to be in any one of the possible states of either phase. The probability that an instantaneous measurement of the state of the system will find it in phase α is
    Pα = Ωα/(Ωα + Ωβ)        7.12

which is greater than Pβ if Ωα > Ωβ, or, by eq. 7.11, if Sα > Sβ. In the usual case the system is virtually certain to be in phase α if its entropy is greater; the statistical degeneracy of a state, Ω, increases exponentially with the number of particles it contains, and Ωα >> Ωβ if it is greater at all. It follows that an isolated system evolves into the phase (or mixture of phases) that has the highest statistical entropy.

Second, the statistical entropy is additive. Let system 1 have energy E1, volume V1 and chemical content {N1} while system 2 has E2, V2 and {N2}. When they are separated, system 1 has degeneracy Ω1(E1,V1,{N1}) and system 2 has degeneracy Ω2(E2,V2,{N2}). The degeneracy of the two separated systems taken together is the product

    Ω0 = Ω1Ω2        7.13

since each individual state in Ω1 can coexist with any state in Ω2, and conversely. Hence the joint entropy of the separated systems is

    S0 = k ln(Ω1Ω2) = k ln(Ω1) + k ln(Ω2) = S1 + S2        7.14

and the statistical entropy is additive.

Third, if two isolated systems are joined so that they interact with one another, the interaction can only increase the statistical entropy. When the two systems are joined and interact, all of the states in Ω1 and Ω2 are still possible. Since the total energy, volume and chemical content are E = E1 + E2, V = V1 + V2 and {N} = {N1 + N2}, it is still possible for the volume V1 to have the chemical content {N1} and the energy E1, in which case V2 contains {N2} and E2. The total system has at least the degeneracy Ω1Ω2 and the entropy S1 + S2. But the interaction between the two systems creates the possibility of new states that, depending on the nature of the interaction, correspond to new distributions of E, V or {N}. If the interaction does result in a redistribution of E, V or {N}, the states associated with the new distribution add to the degeneracy. Hence, after the interaction,

    Ω ≥ Ω0 = Ω1Ω2        7.15

and

    S ≥ S0 = S1 + S2        7.16

It follows that the statistical entropy of an isolated system can only increase.
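The claim that Ωα >> Ωβ "if it is greater at all" is easy to quantify. Working with entropies rather than degeneracies (Ω itself overflows any floating-point type), eq. 7.12 can be rewritten as Pα = 1/(1 + exp[-(Sα - Sβ)/k]). The per-atom entropy difference of 0.001 k used below is a made-up number chosen only to show how the system size N controls the outcome.

```python
import math

def p_alpha(delta_S_over_k):
    """Eq. 7.12 rewritten as P_alpha = 1/(1 + Omega_beta/Omega_alpha),
    with the degeneracy ratio expressed through the entropy difference:
    Omega_beta/Omega_alpha = exp(-(S_alpha - S_beta)/k)."""
    return 1.0 / (1.0 + math.exp(-delta_S_over_k))

ds = 0.001   # assumed entropy difference per atom, in units of k

for N in (10, 1_000, 1e20):
    # total entropy difference (S_alpha - S_beta)/k = N * ds
    print(N, p_alpha(N * ds))
```

For ten atoms the two phases are nearly equally likely; for a thousand, phase α already dominates; for a macroscopic N the probability of finding phase β underflows to zero, which is the statistical content of "virtually certain."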
The statistical entropy of a system that is not isolated can also be calculated. The method is given in standard texts on Statistical Mechanics. We limit this discussion to the simpler, isolated system.
The statistical entropy of a solid of given energy The degeneracy in the energy states of a typical crystalline solid has three principle sources. First, the valence electrons in the solid can be distributed in many different ways over the quantum states available to them, which produces the electronic entropy, Se. The electronic entropy is much higher in a metal than in a semiconductor or insulator. There are many empty electron states that have energies comparable to those that are occupied by the valence electrons, and hence many distinguishable ways in which the electrons can be distributed without changing the energy or violating the Pauli Exclusion Principle. The electronic entropy of a semiconductor or insulator is much smaller since almost all of the valence electrons are confined to particular atomic or bonding states that are very nearly full. Second, a crystalline solid has a vibrational entropy, Sv, that is due to the thermal oscillations of its atoms about their equilibrium positions on the crystal lattice. The small displacements associated with the lattice vibrations significantly increase the degeneracy of the crystalline phase (the degeneracy remains finite because the motions of the individual atoms are correlated in quantized vibrational states called phonons). The vibrational entropy decreases with the strength and directionality of the crystal bonds, which inhibit atom displacements, and increases with the openness of the crystal structure. Hence very stable, high-melting solids tend to have relatively low vibrational entropies, and metals with the more open BCC structure tend to have higher vibrational entropies than the same metals in the close-packed structures. Third, a multi-component solid has a configurational entropy, Sc, which is due to the many different ways in which the various chemical species can be distributed over the different atom sites in its structure. 
The number of distinct configurations is relatively easy to calculate, and it is useful to do so. Assume that NA atoms of type A and NB atoms of type B are distributed over N = NA + NB atom sites. There are N! different ways of distributing N particles over N sites (the number of permutations of an ordered list of N objects). However, distributions that differ only through the interchange of A atoms with one another, or of B atoms with one another, are physically indistinguishable. Since there are NA! ways of redistributing the A atoms over the sites occupied by A atoms in a given configuration, and NB! ways of redistributing the B atoms, the total number of distinguishable configurations is

Ω = N!/(NA! NB!)

7.17
If the energy of the crystal is approximately independent of the way in which the atoms are distributed then its configurational entropy is
Materials Science
Fall, 2008
Sc = k ln(Ω) = k [ln(N!) - ln(NA!) - ln(NB!)]
7.18
This expression can be simplified considerably by using Stirling's approximation,

ln(N!) ≈ N ln(N) - N

7.19

which is a very good approximation when N is greater than about 10. The configurational entropy can then be written
Sc = k [N ln(N) - NA ln(NA) - NB ln(NB)] = - kN [x ln(x) + (1-x) ln(1-x)]
7.20
where x is the atom fraction of component A, x = NA/N.

An amorphous solid or glass has an additional entropy due to the irregularity of its spatial configuration; the atom positions are not confined to a crystal lattice but are distributed in a less regular way. It follows that the entropy of an amorphous configuration of atoms or molecules is always greater than that of a crystalline state of the same material.
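The counting argument of eqs. 7.17 to 7.20 is easy to check numerically. The Python sketch below (an illustration, not part of the text) compares the exact value of ln Ω, computed with the log-gamma function so the factorials do not overflow, against the Stirling form of eq. 7.20:

```python
import math

# Exact ln(Omega) for Omega = N!/(NA! NB!), eq. 7.17; the result is Sc/k
# by eq. 7.18. lgamma(n+1) = ln(n!) avoids overflowing the factorials.
def sc_exact(NA, NB):
    N = NA + NB
    return math.lgamma(N + 1) - math.lgamma(NA + 1) - math.lgamma(NB + 1)

# Stirling form of eq. 7.20: Sc/k = -N [x ln x + (1-x) ln(1-x)], x = NA/N.
def sc_stirling(NA, NB):
    N = NA + NB
    x = NA / N
    return -N * (x * math.log(x) + (1 - x) * math.log(1 - x))

# The approximation improves rapidly with N (equiatomic case shown):
for N in (10, 100, 10_000):
    NA = N // 2
    exact = sc_exact(NA, N - NA)
    approx = sc_stirling(NA, N - NA)
    print(N, exact, approx, abs(approx - exact) / exact)
```

For a mole of atoms (N of order 10^23) the relative error of the Stirling form is entirely negligible, which is why eq. 7.20 is used without further comment.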
Low-temperature behavior: the Third Law

The Third Law of Thermodynamics states that the entropy of the preferred, or ultimate equilibrium phase of a system vanishes in the limit of zero absolute temperature. This is equivalent to the statement that the ground state of the system, the state of lowest energy, is unique (non-degenerate). The electronic and vibrational entropies vanish naturally at zero temperature. The electrons are reduced to their ground state. While the atoms continue to vibrate in the zero-temperature limit, these ground state vibrations are quantum phenomena that have no entropy since they are associated with a single quantum state. However, the configurational entropy is independent of the temperature. It vanishes at zero temperature only if the solid takes on a perfectly ordered state that has no configurational entropy. This leads to the important conclusion that the preferred state of a solid at low temperature is a perfectly ordered crystal or macromolecule.
7.3 THE CONDITIONS OF EQUILIBRIUM

7.3.1 The equilibrium of an isolated system

If the entropy of an isolated system can be increased by an infinitesimal change in its state, then that change will inevitably occur. The reason is that any real system is in constant thermal agitation on a microscopic scale. Through local fluctuations it constantly samples thermodynamic states that are incrementally close to whatever
macroscopic state it is currently in. If any of these nearby states has higher entropy, the system reaches it, but then cannot return. It necessarily evolves in the direction of increasing entropy. Its evolution continues until it finds itself in a state that provides a local extremum in the entropy, such that no infinitesimally adjacent state increases the entropy further. The states that correspond to local extrema in the entropy can be maintained, at least momentarily, and are hence possible equilibrium states. They satisfy a condition of equilibrium that can be written

(∂S)E,V,{N} ≤ 0
7.21
where ∂S is the infinitesimal change in the entropy that would result if the state of the system were changed infinitesimally at the given values of E, V and {N}, and the inequality holds for every possible infinitesimal change.

There are four kinds of equilibria that satisfy the condition expressed in the inequality 7.21, of which only two are of any real interest. The four are illustrated in Fig. 7.2, where we have assumed that a single parameter, x, describes the path between states in order to make a two-dimensional plot (since the possible states differ in microstructure, their differences are described by many independent variables; the entropy should be plotted in a multi-dimensional space).
Fig. 7.2: Illustration of four kinds of equilibrium: metastable, unstable (inflection point), unstable (minimum), and stable equilibrium.

A local minimum or saddle point in the entropy creates a state of unstable equilibrium. Such states satisfy the mathematical condition for equilibrium but cannot be preserved in practice, since they are unstable under small, finite perturbations. A local maximum in the entropy defines a state that is stable with respect to small changes and can, in principle, be preserved for a very long time. However, if the locally stable state does not provide the maximum entropy for all possible states of the system, then it will transform if it is given a sufficiently large perturbation in the right direction. Such states are called metastable states. They must eventually evolve to the state of maximum entropy, or
stable equilibrium state, since the natural fluctuations of the system will eventually cause an appropriate perturbation, however large that perturbation must be. How long a metastable state can be preserved is a kinetic issue that depends on the size of the perturbation that is required to change it and the frequency with which perturbations occur. Many engineering materials are used in metastable states that are preserved almost indefinitely, including all amorphous materials and glasses and many crystalline solids.

7.3.2 Internal equilibrium

The general condition of equilibrium, equation 7.21, leads to specific conditions of thermal, mechanical and chemical equilibrium that are satisfied in all equilibrium states. To find the conditions of internal equilibrium we consider two subsystems of the system that have volumes V1 and V2, energies E1 and E2 and chemical contents {N1} and {N2}, and are in contact with one another, as illustrated in Fig. 7.3. If the system is to be in equilibrium, its entropy must not increase if the energy, volume or chemical content of the two subsystems is redistributed between them.
Fig. 7.3: Two subsystems within an isolated system that interact only with one another.
Thermal equilibrium

First let the two subsystems exchange an infinitesimal amount of energy without changing the volume or chemical content of either. This exchange describes a thermal interaction. Since energy is conserved, the energy gained by subsystem 1 must be lost by subsystem 2. Hence

dE2 = - dE1
7.22
By eq. 7.5 the total entropy change in the process is

dS = dS1 + dS2 = dE1/T1 + dE2/T2 = dE1 (1/T1 - 1/T2)
7.23
where T1 and T2 are the temperatures of the two subsystems. Eq. 7.23 shows that the system can only be in equilibrium if T1 = T2. If this is not the case, then dS is positive for a transfer of energy from the high-temperature subsystem to the low-temperature one. Hence the two subvolumes are not in equilibrium with one another unless their temperatures are the same.

The same reasoning applies to any choice of subvolumes within the system. It also applies when the system is not isolated, since a system cannot be in equilibrium if it is out of equilibrium with respect to internal changes that do not affect its environment. We are therefore led to the condition of thermal equilibrium, which holds in general: a system in equilibrium has a uniform temperature,

T = const.
7.24
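The sign argument behind eq. 7.23 can be made concrete with a few numbers. The small function below (an illustration, not from the text) evaluates dS = dE1 (1/T1 - 1/T2) for a small energy transfer dE1 into subsystem 1:

```python
# Entropy change for a small thermal transfer dE1 into subsystem 1,
# eq. 7.23: dS = dE1 (1/T1 - 1/T2).
def dS_thermal(dE1, T1, T2):
    return dE1 * (1.0 / T1 - 1.0 / T2)

print(dS_thermal(+1.0, 300.0, 400.0))  # > 0: energy flows from hot (2) to cold (1)
print(dS_thermal(-1.0, 300.0, 400.0))  # < 0: the reverse flow would lower S
print(dS_thermal(+1.0, 350.0, 350.0))  # = 0: equal temperatures, equilibrium
```

Only the transfer that carries energy from the hotter subsystem to the colder one increases the entropy; at equal temperatures dS vanishes and no direction of transfer is favored.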
Mechanical equilibrium

A second possible interaction between two subsystems is a mechanical interaction in which they distort one another. Mechanical interactions in a solid can be rather complicated; they may be elastic, in the sense that they stretch the bonds without rearranging the atoms, or plastic, in the sense that the atoms are permanently displaced. We will discuss these deformations at a later point in the course. However, if the system is in mechanical equilibrium then it must at least be in equilibrium with respect to the simple, fluid-like mechanical interaction that exchanges volume between the two subsystems by displacing the boundary between them, and must satisfy the condition of mechanical equilibrium that governs that interaction.

Even then the mechanical interaction is complicated by the presence of external fields, such as the gravitational field, that impose mechanical forces directly on the material. However, external fields, including the gravitational field, have a negligible effect in most situations and we shall ignore them (the usual negligibility of gravity is one reason why materials processing in space has turned out to be less attractive than many had hoped, although there are potential applications that may prove important). Assuming fluid-like deformation and neglecting external fields, the mechanical interaction displaces the boundary between the two subvolumes so that

dV2 = - dV1
7.25
Since energy is conserved (dE2 = - dE1) and the temperature is uniform (thermal equilibrium), the associated change in the entropy is

dS = (1/T)[P1 dV1 + P2 dV2] = (dV1/T)(P1 - P2)
7.26
If P1 ≠ P2, dS is positive when the volume of the high-pressure subsystem increases at the expense of that of the low-pressure one. We are hence led to the condition of mechanical equilibrium: in the absence of external fields, a system in mechanical equilibrium has a uniform pressure,

P = const.
7.27
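The same kind of numerical check applies to eq. 7.26; the sketch below (an illustration, not from the text) shows that the entropy rises only when the boundary moves so that the high-pressure subsystem expands:

```python
# Entropy change for a small boundary displacement dV1 at uniform
# temperature T, eq. 7.26: dS = (P1 - P2) dV1 / T.
def dS_mech(dV1, P1, P2, T):
    return (P1 - P2) * dV1 / T

print(dS_mech(+1e-6, 2.0e5, 1.0e5, 300.0))  # > 0: high-pressure side expands
print(dS_mech(-1e-6, 2.0e5, 1.0e5, 300.0))  # < 0: the reverse is forbidden
print(dS_mech(+1e-6, 1.5e5, 1.5e5, 300.0))  # = 0: equal pressures, equilibrium
```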
Chemical equilibrium

The third type of interaction between the two subsystems is a chemical interaction in which they exchange matter. Since any one of the independent chemical components may be exchanged, there is a separate condition of chemical equilibrium for each. As in the case of mechanical equilibrium, the conditions of chemical equilibrium are affected by external fields, since external fields such as the gravitational field apply forces to the individual atoms that do work when they are moved. We again assume that external fields can be neglected. Since the total amount of the kth component is conserved, the quantity added to subsystem 2 is equal to that lost from subsystem 1:

dNk(2) = - dNk(1)
7.28
Letting the exchange occur at constant volume (we have already found the condition of mechanical equilibrium), the entropy change at constant total energy and uniform temperature is

dS = - (1/T)[µk(1) dNk(1) + µk(2) dNk(2)] = - (1/T)[µk(1) - µk(2)] dNk(1)
7.29
where µk(i) is the chemical potential of the kth component in the ith subsystem. If µk(1) ≠ µk(2), the entropy increases if a quantity of the kth component is transferred from the subsystem of higher chemical potential to that in which the potential is lower. We are therefore led to the condition of chemical equilibrium: in the absence of external fields, a system in chemical equilibrium has a uniform value of the chemical potential of each of its components,

µk = const.
7.30
7.3.3 Non-equilibrium states; constrained equilibria

The materials that are used in engineering are not usually in equilibrium states, even in the sense of metastable equilibrium. They are in non-equilibrium states that evolve continuously. The materials can be used as if their properties were constant only because the rate of evolution is so slow that it can be neglected.
A familiar example of a non-equilibrium material is a semiconductor that has been chemically doped to create islands that contain local concentrations of electrically active solutes. The chemical potentials of the solute species are not uniform; the material evolves toward an equilibrium state in which the solutes are spread uniformly through the semiconductor crystal. However, this non-equilibrium behavior can ordinarily be neglected, because the material is used at relatively low temperature, where the rate of solute diffusion is so slow that the solute distribution changes very little over the useful life of the device.

A second example is a polygranular crystalline solid. The crystal grains never have equilibrium shapes. Their surfaces tend to become slightly curved to achieve local mechanical equilibrium. It can be shown that the chemical potential is slightly higher on the convex side than on the concave side of the grain boundary. Atoms move across the grain boundary in response to the chemical potential gradient, so the grain boundary migrates toward its center of curvature. The smaller grains disappear with time and the average grain size grows. However, at the normal temperatures at which polygranular materials are used, the rate of grain growth is negligible.

A third example is a crystalline solid that contains dislocations. The dislocations are non-equilibrium defects. They exert forces on one another and, given sufficient time, move by glide and climb to annihilate by interacting with one another or with free surfaces. Again, this process is very slow at ordinary temperatures, so the dislocation distribution is nearly fixed in the absence of mechanical forces that are large enough to force dislocation motion.
To apply thermodynamics to the behavior of non-equilibrium systems like these, which include almost all real materials, we treat them as idealized systems in constrained equilibria; that is, we analyze the behavior of a hypothetical system that is physically similar to the material of interest, but includes physical constraints that maintain it in thermodynamic equilibrium. Formally, we replace the kinetic constraints that maintain the non-equilibrium state by imaginary physical constraints that accomplish the same purpose in a static way.

For example, an idealized polygranular solid whose grain boundaries are rigid, impermeable membranes behaves in many ways like a real polygranular solid, but its grains cannot grow. One can gain insight into the thermochemical behavior of polygranular solids by considering the behavior of solids with impermeable grain boundaries, which can reach thermodynamic equilibrium in the polygranular state. A silicon crystal in which certain regions have been doped with active solutes behaves at low temperature very much like a hypothetical crystal in which the doped regions are surrounded by impermeable membranes. The low-temperature thermochemical behavior of a dislocated solid is very much like that of a hypothetical solid whose dislocations are artificially pinned in space.

We implicitly use constrained equilibrium models of this sort almost whenever we apply thermodynamics to real systems. The constraints fix those non-equilibrium features of the microstructure that remain nearly constant with time. The conditions of equilibrium govern those features of the microstructure that are kinetically capable of
reconfiguration during the time of experimental interest. The microstructural evolution of a real solid can be incorporated in this kind of model by periodically relaxing the hypothetical constraints so that the grains can grow, the solutes migrate or the dislocations move.
7.4 THE THERMODYNAMIC POTENTIALS

The condition of maximum entropy is sufficient to analyze the equilibrium states of any system; it applies to an isolated system, and any system can be made part of an isolated system by joining it to its environment. However, it is usually inconvenient to do this. An engineering material is almost never used in a condition that can be regarded as isolated. It interacts with its environment. The relevant isolated system is a composite system that contains both the material and the environment that interacts with it. But the physical nature of the environment is rarely of interest. In engineering practice the "environment" may be nothing more than a heat source, such as an oven or furnace, that is used to maintain temperature, or a mechanical linkage that maintains the load on a material that is used as a structural member, or an electrical condenser that sets the electric field in a material that is used as a capacitor. Its only function is to establish the conditions under which the material is used; its detailed internal state is uninteresting. If the environment is regarded as part of the system to which the conditions of equilibrium are applied, then one has to worry about it. It is far preferable to find an alternate way of phrasing the conditions of equilibrium so that they can be applied to a system that is not isolated, and involve only the state of the material itself.

We accomplish this by representing the environment as a thermodynamic reservoir. A thermodynamic reservoir is a system that is so large compared to the system of interest that it can interact by exchanging energy, volume or chemical species without its own state being affected in any sensible way. The reservoir is assumed to be in equilibrium to the extent that it has a well-defined temperature, pressure and set of chemical potentials.
Since its state is unaffected by its interaction with the system, the values of these intensities remain constant, and the reservoir serves to fix their values within the system. When the environment can be approximated as a reservoir it is always possible to treat the behavior of the system in terms of its own properties and thermodynamic states, without reference to the physical nature of the environment.

However, when the environment is treated as a reservoir the thermodynamic quantity that governs the equilibrium of the system is not its entropy, but an appropriate thermodynamic potential whose identity depends on the nature of the interaction between the system and the environment. The thermodynamic potentials that are most often useful are the Helmholtz free energy, which governs the equilibrium of a system with fixed volume and chemical content that interacts with a thermal reservoir, and the Gibbs free energy, which governs the equilibrium of a system with fixed composition that interacts with a reservoir that sets both its temperature and its pressure. Other common potentials include the enthalpy, which governs a system of fixed entropy and composition that interacts with a pressure reservoir (the enthalpy is frequently used in fluid dynamics, but
rarely in materials science), and the work function, which governs the behavior of an open system: a system that is enclosed by imaginary boundaries that fix its volume, and that interacts with a reservoir that sets its temperature and chemical potentials.

The various thermodynamic potentials are readily defined by considering the experimental situations to which they naturally apply. Solids are ordinarily used in one of three experimental situations. First, a solid of given composition and volume has its temperature controlled by a reservoir. An example is a material that is heated in a furnace, when we can neglect the thermal expansion of the solid and the possibility of chemical interaction with the atmosphere in the furnace. In this case the controlled variables are V, {N} and T, and, as we shall see, the relevant thermodynamic potential is the Helmholtz free energy, F = E - TS.

Second, and more realistically, the solid has a constant composition, but has both its temperature and pressure controlled by the environment. The temperature may be controlled by a furnace, or, simply, by the ambient air, and the pressure may be fixed by the ambient, or by some mechanical device that loads the solid. In this case the controlled variables are T, P and {N}, and the relevant thermodynamic potential is the Gibbs free energy, G = E - TS + PV.

Third, the solid has a fixed size and shape, and has its temperature and chemical potentials controlled by its environment. The common example is a small subvolume within a larger body, which we define by simply drawing an imaginary boundary around it. This open system has fixed volume, but can freely exchange energy and matter with its environment. In this case the controlled variables are T, V and {µ} (the set of chemical potentials), and the relevant thermodynamic potential is the work function, Ω = E - TS - ∑k µkNk. We consider each case in turn.
7.4.1 The Helmholtz free energy

Consider a system that has fixed volume, V, and chemical content, {N}, and is in contact with a reservoir with fixed temperature, T. The system (1) and reservoir (2) together form an isolated system, as shown in Fig. 7.4. The system can exchange energy with the reservoir through a thermal interaction across its boundary, but the boundary is rigid and impermeable. By eq. 7.21 the system is in equilibrium only if every possible infinitesimal change of state leads to a decrease in the entropy, that is, if

(∂S)E,V,{N} ≤ 0
7.21
Assume an infinitesimal transfer of energy (∂E) between the system and the reservoir. The reservoir acquires the energy increment ∂E2 = - ∂E1. The total entropy change in the system is ∂S1, which may include an entropy increment due to internal changes as
well as that associated with the energy transfer. The entropy change in the reservoir may also include internal changes, but these must be positive, and hence cannot affect the equilibrium of the system. We can, therefore, neglect them and use eq. 7.5 to write

∂S2 = ∂E2/T = - ∂E1/T
7.31
Fig. 7.4: A system of fixed volume and composition that is in contact with a thermal reservoir.

The total entropy change is

∂S = ∂S1 - ∂E1/T = - (1/T)[∂E1 - T∂S1] = - (1/T) ∂[E1 - TS1] ≤ 0
7.32
where the last form of the right-hand side follows since T is constant. Defining the quantity

F1 = E1 - TS1
7.33
and using the fact that the temperature, T, is constant and the same in the system and the reservoir, the condition of equilibrium becomes

(∂F1)T,V,{N} ≥ 0
7.34
The quantity F = E - TS is called the Helmholtz free energy. Eq. 7.34 states that if a system with fixed volume and chemical composition interacts with a thermal reservoir, its behavior is governed by its Helmholtz free energy, F, which has a minimum value for an equilibrium state.

It is useful to examine the functional dependence of the Helmholtz free energy. Using eq. 7.6, the total differential of F is
dF = dE - d(TS) = TdS - PdV + ∑k µkdNk - TdS - SdT
   = - SdT - PdV + ∑k µkdNk

7.35
If the Helmholtz free energy is evaluated as a function of the temperature, T, volume, V, and composition, {N}, as

F = F̂(T,V,{N})
7.36
then

dF = (∂F̂/∂T) dT + (∂F̂/∂V) dV + ∑k (∂F̂/∂Nk) dNk

7.37
It follows by comparison with equation 7.35 that

(∂F̂/∂T) = - S
7.38
(∂F̂/∂V) = - P
7.39
(∂F̂/∂Nk) = µk
7.40
which show the dependence of the Helmholtz free energy on its natural variables, T, V and {N}.

7.4.2 The Gibbs free energy
Fig. 7.5: A system with fixed composition, {N}, that is in mechanical and thermal contact with a reservoir with fixed P and T.
Page 163
Materials Science
Fall, 2008
In a second common situation (which is, ordinarily, more realistic) the system has a given chemical content, but interacts with a reservoir that fixes both its temperature and pressure. This case is diagrammed in Fig. 7.5. The system can exchange volume and energy with the reservoir through mechanical and thermal interactions at its boundary, but the boundary is impermeable. As in the previous example, the system (1) and reservoir (2) together make up an isolated system, which is in equilibrium only if every possible infinitesimal change of state leads to a decrease in the entropy.

Assume an infinitesimal transfer of energy (∂E) between the system and the reservoir that is accompanied by an infinitesimal displacement of the boundary so that the volume of the system changes by ∂V. The reservoir acquires the energy increment ∂E2 = - ∂E1 and the volume increment ∂V2 = - ∂V1. The total entropy change in the system is ∂S1. The relevant entropy change in the reservoir can be written, according to eq. 7.5,

∂S2 = (1/T)[∂E2 + P∂V2] = - (1/T)[∂E1 + P∂V1]
7.41
Hence the total entropy change is

∂S = ∂S1 - (1/T)[∂E1 + P∂V1] = - (1/T)[∂E1 - T∂S1 + P∂V1] = - (1/T) ∂[E1 - TS1 + PV1] ≤ 0
7.42
where the last form of the right-hand side follows since T and P are constant. Defining the quantity

G1 = E1 - TS1 + PV1
7.43
and using the fact that T and P are constant, the condition of equilibrium becomes

(∂G1)T,P,{N} ≥ 0
7.44
The quantity G = E - TS + PV is called the Gibbs free energy. Eq. 7.44 states that if a system with fixed chemical content interacts both thermally and mechanically with a reservoir that fixes its pressure and temperature, its behavior is governed by its Gibbs free energy, G, which has a minimum value for an equilibrium state.

The functional dependence of the Gibbs free energy can also be found with the help of eq. 7.6. The total differential of G is

dG = dE - d(TS) + d(PV)
= - SdT + VdP + ∑k µkdNk

7.45
If the Gibbs free energy is written as a function of the temperature, T, pressure, P, and composition, {N}, as

G = Ĝ(T,P,{N})
7.46
then

dG = (∂Ĝ/∂T) dT + (∂Ĝ/∂P) dP + ∑k (∂Ĝ/∂Nk) dNk

7.47
It follows by comparison with eq. 7.45 that

(∂Ĝ/∂T) = - S
7.48
(∂Ĝ/∂P) = V
7.49
(∂Ĝ/∂Nk) = µk
7.50
which show the dependence of the Gibbs free energy on its natural variables, T, P and {N}.

In real systems the environment usually fixes the pressure and temperature, so the Gibbs free energy is the pertinent thermodynamic potential. However, when the system is a solid at atmospheric pressure, which is the case that is most frequently of interest, the PV term is almost always negligible compared to F and there is virtually no numerical difference between the Gibbs and Helmholtz free energies.

7.4.3 The work function
Fig. 7.6: An open system, with fixed volume, in contact with a reservoir that fixes T and {µ}.
We are often interested in behavior in the interior of a material. Since regions in the interior of a material can, ordinarily, exchange both energy and chemical content with one another, the best way to study their behavior is usually by defining an open system, of the kind diagrammed in Fig. 7.6. A volume within the material is surrounded by an imaginary boundary. Since the boundary is fixed in space, its volume is given. However, its energy and chemical content are not. The material beyond the boundary acts as a thermal and chemical reservoir that fixes the temperature and chemical potentials within the system.

Again, the combination of system (1) and reservoir (2) makes up an isolated system, which is in equilibrium only if every possible infinitesimal change of state leads to a decrease in the entropy. Assume an infinitesimal transfer of energy (∂E) that may be accompanied by infinitesimal transfers (∂Nk, k = 1,...,n) of each of the n chemical species present. The reservoir acquires the energy increment ∂E2 = - ∂E1 and the chemical additions ∂Nk(2) = - ∂Nk(1), k = 1,...,n. The total entropy change in the system is ∂S1. The entropy change in the reservoir is (eq. 7.5)

∂S2 = (1/T)[∂E2 - ∑k µk∂Nk(2)] = - (1/T)[∂E1 - ∑k µk∂Nk(1)]
7.51
Hence the total entropy change is

∂S = ∂S1 - (1/T)[∂E1 - ∑k µk∂Nk(1)] = - (1/T)[∂E1 - T∂S1 - ∑k µk∂Nk(1)] = - (1/T) ∂[E1 - TS1 - ∑k µkNk(1)] ≤ 0
7.52
where the last form of the right-hand side follows since T and the chemical potentials, µk, are held constant by the reservoir. Defining the work function,

Ω = E - TS - ∑k µkNk

7.53
the condition of equilibrium becomes

(∂Ω)T,V,{µ} ≥ 0
7.54
Eq. 7.54 states that if a system with fixed boundaries interacts both thermally and chemically with a reservoir that fixes the temperature and chemical potentials, its behavior is governed by its work function, Ω, which has a minimum value for an equilibrium state.
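The three potentials introduced so far, and the conditions under which each governs equilibrium, can be collected in one place (notation as in eqs. 7.33, 7.43 and 7.53):

```latex
\begin{aligned}
F &= E - TS
  &\qquad (\partial F)_{T,V,\{N\}} &\ge 0 \\
G &= E - TS + PV
  &\qquad (\partial G)_{T,P,\{N\}} &\ge 0 \\
\Omega &= E - TS - \sum_k \mu_k N_k
  &\qquad (\partial \Omega)_{T,V,\{\mu\}} &\ge 0
\end{aligned}
```

In each case the potential is a minimum in the equilibrium state, with the listed variables held fixed by the reservoir.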
Using eq. 7.6, the total differential of Ω is

dΩ = dE - d(TS) - d[∑k µkNk] = - SdT - PdV - ∑k Nkdµk

7.55
If the work function is written as a function of the temperature, T, volume, V, and chemical potentials, {µ}, as

Ω = Ω̂(T,V,{µ})
7.56
then

dΩ = (∂Ω̂/∂T) dT + (∂Ω̂/∂V) dV + ∑k (∂Ω̂/∂µk) dµk

7.57
It follows by comparison with eq. 7.55 that

(∂Ω̂/∂T) = - S
7.58
(∂Ω̂/∂V) = - P
7.59
(∂Ω̂/∂µk) = Nk
7.60
which show the dependence of the work function on its natural variables.

A fourth thermodynamic potential, the enthalpy,

H = E + PV
7.61
governs the behavior of systems whose entropy, pressure, and chemical content are controlled. This is seldom the situation in Materials Science, where the enthalpy is rarely used for any purpose other than as a shorthand notation for the sum E + PV. However, the enthalpy is often the preferred potential in fluid mechanics, since entropy is locally conserved in many types of fluid flow.
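The claim made above, that the PV term is negligible for a solid at ambient pressure so that G ≈ F, is easy to quantify. The numbers below are typical handbook values for copper at room temperature (assumed for illustration; they are not taken from the text):

```python
# Rough magnitude comparison of the PV and TS terms for a solid at
# atmospheric pressure. Values are typical handbook data for copper
# at 300 K (assumed for illustration).
P = 101_325.0    # ambient pressure, Pa
V_m = 7.1e-6     # molar volume of Cu, m^3/mol
S_m = 33.2       # molar entropy of Cu at 300 K, J/(mol K)
T = 300.0        # temperature, K

PV = P * V_m     # ~0.7 J/mol
TS = T * S_m     # ~1.0e4 J/mol
print(PV, TS, PV / TS)   # the PV term is roughly 1e-4 of the TS term
```

The PV term is smaller than the TS term by about four orders of magnitude, which is why the Gibbs and Helmholtz free energies of a solid at atmospheric pressure are numerically indistinguishable for most purposes.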
7.5 THE FUNDAMENTAL EQUATION

The various thermodynamic functions defined in the previous section are different but equivalent forms of the fundamental equation of the system. The fundamental equation is a concept introduced by Gibbs, who recognized that there is a single thermodynamic function that contains a complete description of the thermodynamic state of a material, provides values for all of its thermodynamic properties, and governs equilibrium.

7.5.1 The entropy function

The entropy function,

S = Ŝ(E,V,{N})
7.62
is the form of the fundamental equation that is provided directly by the Second Law of thermodynamics. It is the most convenient form of the fundamental equation when the system of interest is isolated (or adiabatic). It then has the following features.

First, the natural variables of the entropy function, E, V and {N}, are precisely the variables that can be controlled experimentally when the system is isolated. Their values are fixed by setting the content of the system at the time it is isolated, and cannot be altered afterwards.

Second, the values of E, V and {N} are sufficient to fix the equilibrium state of an isolated system. The Second Law asserts that the equilibrium state has a maximum value of the entropy, S, with respect to all other ways of configuring the system with the given values of E, V and {N}. The entropy function, 7.62, is just the function that gives the entropy of the equilibrium state as E, V and {N} are varied. Of course, there may be physical or kinetic constraints on the system that limit the configurations it can take on, and set it in a metastable equilibrium state or a constrained equilibrium state. In that case the entropy function incorporates the constraints; the entropy has the largest value it can have for given E, V and {N} when those constraints are imposed.

Third, the conjugate forces, T, P and {µ}, are determined by the first partial derivatives of the entropy function:

(∂Ŝ/∂E) = 1/T
7.63
(∂Ŝ/∂V) = P/T
7.64
(∂Ŝ/∂Nk) = - µk/T
7.65
That is, each thermodynamic force is specified by the partial derivative of the entropy function with respect to the thermodynamic quantity that is conjugate to it.

Fourth, the thermodynamic properties of the system are specified by the second and higher derivatives of the fundamental equation. The thermodynamic properties are material properties that govern the changes in the thermodynamic contents, E, V and {N}, with the values of the thermodynamic forces. The most commonly used are the specific heat, which governs the change of energy with temperature, the compressibility, which governs the change of volume with pressure, and the coefficient of thermal expansion, which governs the change of volume with temperature. The thermodynamic properties are discussed in more detail in the following section.

Finally, the fundamental equation can be transformed so that its independent variables are changed from the set E, V and {N} to some other set that is more natural for a particular experimental situation. The simplest transformation is accomplished by solving the entropy function for the energy. The result is the energy function,

E = Ê(S,V,{N})
7.66
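The derivative relations 7.63 and 7.64 can be checked concretely. The sketch below is not part of the original notes: it uses the entropy function of a monatomic ideal gas (with k = 1 and additive constants dropped, which does not affect the derivatives) and differentiates it numerically.

```python
import math

# Entropy function of a monatomic ideal gas, playing the role of S(E,V,{N}).
# Units: k = 1; additive constants are dropped.
def S(E, V, N):
    return N * (math.log(V / N) + 1.5 * math.log(E / N))

def dS(f, args, i, h=1e-6):
    """Central-difference partial derivative of f with respect to argument i."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

E, V, N = 150.0, 2.0, 1.0
inv_T = dS(S, (E, V, N), 0)      # eq. 7.63: dS/dE = 1/T
P_over_T = dS(S, (E, V, N), 1)   # eq. 7.64: dS/dV = P/T
T = 1.0 / inv_T
print(T)                  # ~ 100.0, i.e. E = (3/2)NkT
print(P_over_T * T * V)   # ~ 100.0 = NkT, i.e. the ideal gas law PV = NkT
```

The derivatives recover the familiar caloric and mechanical equations of state of the ideal gas, which is the sense in which the entropy function contains all the thermodynamic information about the system.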
which, as we shall see, contains precisely the same information as the entropy function, but with the different set of independent variables {S,V,{N}} replacing {E,V,{N}}. Since the energy function contains the same information, it is also a form of the fundamental equation. Other useful forms of the fundamental equation are obtained by applying a technique known as the Legendre transform to the energy function. The various Legendre transforms of the energy function are the thermodynamic potentials we discussed in the previous section.

7.5.2 The energy function

The independent variables that appear in the energy function (7.66) are the set S, V and {N}. While we shall not prove it here, it is a consequence of the Second Law that the energy of an equilibrium state has the minimum value that it can have for any way of reconfiguring the system at given values of S, V and {N}. This is the energy minimum principle, which you have probably encountered fairly frequently in elementary physics. It is the complement of the entropy maximum principle, which holds that the entropy has the largest possible value for given E, V and {N}. The thermodynamic quantities S, V and {N} that appear as independent variables in the energy function (7.66) characterize the system; the equilibrium state of the system is that which has the least value of the energy for the given values of S, V and {N}. The function E(S,V,{N}) gives the minimum (equilibrium) value of the energy.

While it is not easy to construct experimental systems in which the entropy is controlled, there are hypothetical situations in which the variables S, V and {N} are fixed, and these are important in theoretical analysis. First, in the limit of low temperature the Third Law of thermodynamics asserts that the entropy of a system approaches a least
value that is independent of V and {N}; in this limit the value of the entropy is fixed. Since it is much easier to compute the energy of a solid than its entropy, most of the available theoretical analysis of the solid state concerns the identification and properties of minimum energy states. Since many stable crystalline solids assume their minimum-energy structures at temperatures well above room temperature, this approach is often fruitful. In many other cases we can usefully discuss the properties of solids at finite temperature by identifying the properties of the minimum-energy state and inferring the changes that should occur as the temperature is raised. We have already used this approach to discuss the electronic configurations of atoms and solids in terms of their minimum-energy, ground states.

The change in the energy in an infinitesimal change in the state is

dE = T dS - P dV + ∑_k µ_k dN_k    7.67
where the successive terms on the right-hand side correspond to the thermal, mechanical and chemical work done if the change of state is accomplished without turbulence or friction. Note that each term in this expression has the form of a thermodynamic force, T, P or µ_k, multiplied by the differential change in one of the thermodynamic quantities, S, V or N_k. Each of the forces, f = T, P or µ_k, is said to be conjugate to the corresponding quantity, x = S, V or N_k, in that the work done by that force in an infinitesimal change of state is obtained by multiplying it by the differential change in its conjugate quantity.

To complete the specification of the state of a system that has given values of S, V and {N} we need to know the values of the conjugate forces, T, P and {µ}. These are determined from the fundamental equation by the first partial derivatives of the energy function

∂E/∂S = T    7.68

∂E/∂V = - P    7.69

∂E/∂N_k = µ_k    7.70
That is, each thermodynamic force is given by the partial derivative of the function E(S,V,{N}) with respect to its conjugate quantity.

The thermodynamic properties of the system are the materials properties that govern the changes in the values of the thermodynamic forces with a change of state. For example, the isometric specific heat, C_V, governs the change in the temperature with a
differential change in the energy at constant volume and composition according to the relation

dT = dE/C_V    7.71

while the isentropic compressibility, κ_S, gives the change in pressure on a differential change in volume at constant entropy and composition according to the relation

dP = - dV/(Vκ_S)    7.72
These and the other first-order thermodynamic properties of the system are given by the second partial derivatives of the energy function. For example,

∂²E/∂S² = (∂T/∂S)_{V,{N}} = T/C_V    7.73

where the subscripts on the partial derivative indicate that it is to be taken at constant volume and composition, and the final form of the right-hand side follows from the fact that if a sample is heated reversibly at constant V and {N} the change in entropy is

dS = dQ/T = dE/T = C_V dT/T    7.74

The isentropic compressibility is given by the second derivative of the energy function with respect to the volume,

∂²E/∂V² = - (∂P/∂V)_{S,{N}} = 1/(Vκ_S)    7.75

where the isentropic compressibility is defined as

κ_S = - (1/V)(∂V/∂P)_{S,{N}}    7.76
If the system is an n-component fluid there is a total of (n+2)² second partial derivatives, of which only (1/2)(n+2)(n+3) are independent, since the value of a second partial derivative is independent of the order of differentiation. The only other first-order thermodynamic property that is commonly given a name is the isentropic coefficient of thermal expansion,

α_S = (1/V)(∂V/∂T)_{S,{N}}    7.77
This property is related to the cross-derivative of the energy with respect to entropy and volume:

∂²E/∂S∂V = (∂T/∂V)_{S,{N}} = 1/(Vα_S)    7.78

The third and higher derivatives of the energy function are higher-order thermodynamic properties that determine how the first-order thermodynamic properties vary with a change in state.

To recapitulate, the energy function is a form of the fundamental equation. Its independent variables are a sufficient set of thermodynamic quantities to determine the thermodynamic state. Its first partial derivatives with respect to these quantities give the thermodynamic forces. Its second and higher partial derivatives give the thermodynamic properties of the system. Moreover, the fundamental equation determines the conditions of equilibrium for a system in which the variables S, V and {N} are experimentally controlled; by the minimum energy principle the energy has a minimum value for all possible changes in the state of the system that maintain the total values of these quantities.

7.5.3 Alternate forms of the fundamental equation

Ordinarily the variables that are controlled when a material is processed, used, or experimented on are some mixture of thermodynamic forces and quantities. For example, when the system is in a rigid, impermeable, diathermal container, T, V and {N} are controlled. Note, however, that one can never control a thermodynamic force and its conjugate thermodynamic quantity at the same time: to control the temperature of a system it must interact thermally with a reservoir, and hence its entropy cannot be controlled; to control the pressure it must interact mechanically with a reservoir, and hence its volume cannot be controlled; to control its chemical potential it must interact chemically, and hence the composition cannot be controlled. More generally, if we divide the thermodynamic variables into the conjugate pairs (S,T), (V,P), (N_k,µ_k), then we can control only one variable within each conjugate pair during an experiment.
As we have already seen, it is useful to express the condition of equilibrium for a given experimental situation in terms of the minimum of the thermodynamic potential whose natural variables are the variables that are controlled experimentally (or the maximum, in the case of the entropy of an isolated system). We can now see that when these thermodynamic potentials are written as functions of their natural variables they are alternate forms of the fundamental equation.
Consider, for example, the Gibbs free energy, which governs the equilibrium of a fluid system whose temperature, pressure and composition are controlled. In terms of its natural variables, the Gibbs free energy is given by the function

G = E - TS + PV = G(T,P,{N})    7.79
We have already found that the first partial derivatives of the Gibbs free energy are

∂G/∂T = - S    7.48

∂G/∂P = V    7.49

∂G/∂N_k = µ_k    7.50
Hence the thermodynamic quantities and forces that are not fixed by the experimental situation are determined by the first derivatives of the Gibbs free energy function. The second derivatives of the Gibbs free energy function give the first-order thermodynamic properties. These appear in a slightly different form from those derived from the energy function. For example, the second partial derivative of the Gibbs free energy with respect to the temperature is

∂²G/∂T² = - (∂S/∂T)_{P,{N}} = - C_P/T    7.80
which defines the isobaric specific heat, C_P, rather than the isometric specific heat, C_V. The second derivative of G(T,P,{N}) with respect to the pressure is

∂²G/∂P² = (∂V/∂P)_{T,{N}} = - Vκ_T    7.81
where κ_T is the isothermal compressibility,

κ_T = - (1/V)(∂V/∂P)_{T,{N}}    7.82
The cross-derivative of G(T,P,{N}) with respect to temperature and pressure is

∂²G/∂T∂P = (∂V/∂T)_{P,{N}} = Vα_T    7.83
where α_T is the coefficient of thermal expansion,

α_T = (1/V)(∂V/∂T)_{P,{N}}    7.84
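The Gibbs-function relations 7.80-7.84 can also be checked numerically. The sketch below is not from the notes: it uses the Gibbs function of a monatomic ideal gas, G = NkT[ln P - (5/2) ln T], with k = 1 and additive constants dropped, for which C_P = (5/2)Nk, κ_T = 1/P and α_T = 1/T should be recovered.

```python
import math

# Gibbs function of a monatomic ideal gas (k = 1, additive constants dropped).
def G(T, P, N=1.0):
    return N * T * (math.log(P) - 2.5 * math.log(T))

def C_P(T, P, h=1e-2):
    # eq. 7.80: d2G/dT2 = -C_P/T
    d2 = (G(T + h, P) - 2 * G(T, P) + G(T - h, P)) / h**2
    return -T * d2

def kappa_T(T, P, h=1e-3):
    # eq. 7.49: dG/dP = V;  eq. 7.81: d2G/dP2 = -V*kappa_T
    V = (G(T, P + h) - G(T, P - h)) / (2 * h)
    d2 = (G(T, P + h) - 2 * G(T, P) + G(T, P - h)) / h**2
    return -d2 / V

def alpha_T(T, P, h=1e-3):
    # eq. 7.83: d2G/dTdP = (dV/dT)_P = V*alpha_T
    V = (G(T, P + h) - G(T, P - h)) / (2 * h)
    cross = (G(T + h, P + h) - G(T + h, P - h)
             - G(T - h, P + h) + G(T - h, P - h)) / (4 * h**2)
    return cross / V

print(C_P(300.0, 2.0))      # ~ 2.5  (= 5/2 Nk)
print(kappa_T(300.0, 2.0))  # ~ 0.5  (= 1/P)
print(alpha_T(300.0, 2.0))  # ~ 1/300 (= 1/T)
```

Each property comes out of a second derivative of one function of (T, P, {N}), which is the practical content of calling G a form of the fundamental equation.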
While the thermodynamic properties defined by second derivatives of the Gibbs free energy function differ slightly from those derived from the energy function, it can be shown that these (and the properties defined by the second derivatives of any other thermodynamic potential) can be derived from those defined by the second derivatives of the energy function. There is a general mathematical method for doing this, called the Jacobian method, which we shall not review here.

The same results follow for any of the other thermodynamic potentials defined in the previous section. When they are written as functions of their natural variables, their first derivatives give the values of the thermodynamic forces and quantities that are conjugate to the set of independent variables, and the second and higher partial derivatives give the thermodynamic properties. When the natural variables of a given thermodynamic potential are the variables that are experimentally controlled, that potential governs the equilibrium of the system. Hence, as Gibbs recognized, the problem of determining the macroscopic equilibrium behavior of a material is reduced to the problem of finding its fundamental equation. This can be done experimentally, or, in principle, theoretically using the methods of statistical thermodynamics.

7.5.4 The integrated form of the fundamental equation

The energy of a system can be written in the integrated form

E = TS - PV + ∑_k µ_k N_k    7.85
To prove this equation, we use the fact that E, S, V and N are all additive quantities. If we double the size of the system without changing its internal state, we simply double the value of each. However, the energy is a function of S, V and {N}: E = E(S,V,{N}). This function must have the mathematical property that, if we multiply each of the independent variables S, V and {N} by the same constant, α, the energy is also multiplied by α. That is,

E(αS, αV, {αN}) = αE(S,V,{N}) = αE    7.86

Functions that have this property are called homogeneous functions of the first order. If we now differentiate both sides of eq. 7.86 with respect to α (which we can do, since α can be any number), the result is
(∂E/∂(αS)) S + (∂E/∂(αV)) V + ∑_k (∂E/∂(αN_k)) N_k = E    7.87
Since eq. 7.87 holds for any value of α, it holds if α = 1. Setting α = 1 and using eqs. 7.68-7.70 yields eq. 7.85. Given eq. 7.85, the integrated forms of the other common thermodynamic potentials are:

F = E - TS = - PV + ∑_k µ_k N_k    7.88

G = E - TS + PV = ∑_k µ_k N_k    7.89

Ω = E - TS - ∑_k µ_k N_k = - PV    7.90
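The integrated form 7.85 can be verified numerically for any first-order homogeneous energy function. The sketch below is not from the notes: it inverts the monatomic-ideal-gas entropy function to get E(S,V,N) = N((N/V)e^{S/N})^{2/3} (k = 1, constants dropped), computes T, P and µ from eqs. 7.68-7.70 by numerical differentiation, and checks the Euler relation.

```python
import math

# Energy function obtained by solving the ideal-gas entropy function for E
# (k = 1, constants dropped); it is homogeneous of first order in (S, V, N).
def E(S, V, N):
    return N * ((N / V) * math.exp(S / N)) ** (2.0 / 3.0)

def d(f, args, i, h=1e-6):
    """Central-difference partial derivative of f with respect to argument i."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

S, V, N = 3.0, 2.0, 1.0
T = d(E, (S, V, N), 0)     # eq. 7.68: dE/dS = T
P = -d(E, (S, V, N), 1)    # eq. 7.69: dE/dV = -P
mu = d(E, (S, V, N), 2)    # eq. 7.70: dE/dN = mu
# Euler relation, eq. 7.85: the two printed numbers agree.
print(T * S - P * V + mu * N, E(S, V, N))
```

The agreement holds for any values of (S, V, N), which is exactly the content of the homogeneity argument in eqs. 7.86-7.87.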
Eq. 7.90 reveals why Ω is called the work function of the system; it is given by the product of pressure and volume. Eq. 7.89 shows that the Gibbs free energy is associated with the total chemical energy of the material.

7.5.5 The statistical form of the fundamental equation

One can, in principle, calculate the fundamental equation of a material by applying the techniques of statistical thermodynamics. In the formal sense, these are straightforward, and were introduced in Section 7.2.3 when we discussed the relation between the entropy and the degeneracy, or randomness, of the system. In that case we considered a material that was isolated from its environment, so that its energy, E, volume, V, and composition, {N}, were fixed. Suppose that we are able to identify every state that this material can possibly have, consistent with the rules of quantum mechanics. The possible states include the various ways of configuring the ion cores into static equilibrium configurations, the various ways of distributing the electrons among the allowable electron states for a given distribution of the ion cores, and the possible vibrational states of the ion cores about their equilibrium positions, under the restriction that all of these states have the same energy, E. If the number of these states (the degeneracy of the system) is Ω(E,V,{N}), then the entropy is given by

S(E,V,{N}) = k ln Ω(E,V,{N})    7.11
and we have evaluated the fundamental equation of the material. The problem, of course, is counting all the states. This is always a difficult exercise, and is made particularly
difficult by the restriction that the states must have the same energy. When the configuration of atoms or electrons is changed, the energy ordinarily changes as well.

It is almost always easier to evaluate the Helmholtz free energy, F(T,V,{N}). To do this we let the material have a given volume and composition, but let it exchange energy with a reservoir that fixes its temperature. In this case the state of the material can have any energy, provided that it has the given volume and composition. Let an arbitrary state be denoted by the index, n, and let the material have energy, E_n, when it is in its nth state. To compute the fundamental equation of the solid we form the canonical partition function,

Z(T,V,N) = ∑_n e^(-βE_n)    7.91

where the coefficient β = 1/kT and the sum is taken over all admissible states. The partition function, Z, is a function of T, V and N, since the energy of the nth state depends on the volume of the system and the number of particles, while the coefficient, β, is the reciprocal temperature. The Helmholtz free energy, F(T,V,N), is obtained directly from the partition function by the relation

F(T,V,N) = - kT ln[Z(T,V,N)]    7.92
Once the Helmholtz free energy has been found all other thermodynamic potentials and properties can be computed from it. The statistical relations that evaluate the fundamental equation, eqs. 7.91-92, are formally simple. The problem is to identify and compute the energies of all the possible states, which is not simple at all. While a number of interesting problems have been solved, at least approximately, with the techniques of statistical thermodynamics, in almost all cases the thermodynamic properties of real materials must be measured experimentally.
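Eqs. 7.91-92 can be made concrete with a toy system whose states are easy to enumerate. The sketch below is illustrative and not part of the notes (units with k = 1): it computes F for a two-level system and recovers the entropy by differentiating F, approaching k ln 2 at high temperature, where both states are equally likely.

```python
import math

def helmholtz_free_energy(levels, kT):
    """F = -kT ln Z for a toy system with the given state energies (eq. 7.92).
    `levels` lists the energies E_n; kT is the thermal energy (k = 1)."""
    Z = sum(math.exp(-E / kT) for E in levels)   # partition function, eq. 7.91
    return -kT * math.log(Z)

def entropy(levels, kT, h=1e-6):
    """S = -dF/dT, evaluated by a central difference (k = 1, so T = kT)."""
    return -(helmholtz_free_energy(levels, kT + h)
             - helmholtz_free_energy(levels, kT - h)) / (2 * h)

# Two-level system with energies 0 and 1: at kT >> 1 both states are equally
# probable and the entropy approaches ln 2.
print(entropy([0.0, 1.0], kT=100.0))   # ~ ln 2 = 0.693...
```

The same two-line recipe (sum over states, then differentiate F) is what a real statistical-thermodynamic calculation does; the difficulty, as the text notes, lies entirely in enumerating the states and their energies.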
7.6 THE THERMODYNAMICS OF SURFACES

External surfaces and internal interfaces in solids influence their behavior in many important ways. Their energies influence the shapes of solids, including grains and phases within solids, and, as we shall see in a later chapter, have a strong influence on whether and where a new phase will form when it is thermodynamically favorable for it to do so. Their permeability controls the exchange of material from phase to phase. Interfaces often have chemical content of their own. Adsorbed species influence the reactivity of a solid, and may strongly affect its mechanical properties. For example, superficially minor concentrations of metalloid impurities in structural metals, such as sulfur or phosphorus in steel, can cause catastrophic embrittlement when they are adsorbed on grain boundaries.
The thermodynamics of interfaces is made difficult by their complex structure. The interfaces that separate phases in contact are not strict discontinuities. They are rather thin transition shells across which the materials properties and thermodynamic densities change from the values appropriate to one phase to those appropriate to the other. The reason for the thickness of the interface is relatively straightforward: the two phases perturb one another over a distance that is at least equal to the effective range of atomic interaction. Even in the simplest case, a crystalline solid that presents a close-packed surface to a vacuum, the atomic packing of the first few layers below the surface is distorted; there is an asymmetry in bonding since there are no atoms beyond the surface. The interface structure is difficult to predict, and often impossible to observe. Even the best modern characterization tools reveal very little about the internal structure of real interfaces. We are hence faced with the problem of describing the behavior of an inhomogeneous material whose internal structure we know very little about.

The thermodynamics of surfaces was developed by Gibbs (Equilibrium of Heterogeneous Substances), who devised a mathematical technique that acknowledges the finite thickness of the interface while avoiding it in a simple formal way. The method uses a geometric representation of the interface, the Gibbs construction, together with the assumption that while the internal state of the interfacial shell may be unknown, it is fixed by equilibrium with the bulk phases on either side.

7.6.1 The Gibbs construction
Fig. 7.7: The Gibbs model of an interface between phases α and β. The three-dimensional transition shell is replaced by a two-dimensional dividing surface.
The model that is used for the thermodynamics of surfaces is illustrated in Fig. 7.7. The figure shows two homogeneous phases separated by an interface. The transition shell between the two phases is shaded, and includes the whole volume of material that is perturbed by the interface. In the Gibbs construction the three-dimensional transition shell is replaced by a hypothetical, two-dimensional dividing surface, which is placed so that it is roughly coincident with the physical interface. The bounding phases are imagined to extend homogeneously up to the dividing surface from either side. The model system is then given a thermodynamic content that duplicates that of the actual
transition shell by defining surface excess quantities of energy, entropy and mass that are imputed to the dividing surface itself.

Let E^I, S^I and N_k^I be the actual quantities of energy, entropy and mole number of the kth chemical species contained in a segment of the interfacial shell. Let the dividing surface that replaces that segment have area, A, and let it divide the volume, V, of the transition shell into subvolumes V^α and V^β, which lie on the α and β sides of the interface, respectively. If the thermodynamic content of the model of the interfacial shell is to reproduce the content of the actual shell, then the dividing surface must have surface excesses of energy, E^S, entropy, S^S, and chemical content, N_k^S, such that

E^S = E^I - E_V^α V^α - E_V^β V^β    7.93

S^S = S^I - S_V^α V^α - S_V^β V^β    7.94

N_k^S = N_k^I - n_k^α V^α - n_k^β V^β    (k = 1,...,m)    7.95

where E_V and S_V are the energy and entropy per unit volume, n_k is the molar density of the kth component, and there are m chemical components in the system. The surface excess quantities are said to be adsorbed on the surface. Their densities are two-dimensional (quantity per unit area), and are given by

E_S = E^S/A    7.96

S_S = S^S/A    7.97

Γ_k = N_k^S/A    (k = 1,...,m)    7.98
7.6.2 The fundamental equation of an interface

Since the thermodynamic state of the transition region is given by its entropy, chemical content and volume, the state of the surface can be characterized by the adsorbed entropy and chemical species and the surface area. (As Gibbs showed, this is true even when the surface is curved, if the dividing surface is placed properly at the interface.) It follows that the fundamental equation for the interface can be written

E^S = E^S(S^S, {N^S}, A)    7.99
By analogy to the energy function for a bulk phase, the partial derivatives of this function with respect to entropy and chemical content define the temperature and chemical potentials at the interface:

∂E^S/∂S^S = T    7.100

∂E^S/∂N_k^S = µ_k    7.101

The change of energy with area defines the interfacial tension, σ:

∂E^S/∂A = σ    7.102
The interfacial, or surface, tension is the two-dimensional analog of the pressure, and is the force that resists the extension of the surface. It is necessarily positive:

σ > 0    7.103

since, if it were negative, the surface would grow spontaneously. Since the adsorbed energy, entropy and chemical content of a homogeneous interface increase linearly with its area, the fundamental equation of the surface has the integrated form (cf. Sec. 7.5.4 above)

E^S = TS^S + σA + ∑_k µ_k N_k^S    7.104
7.6.3 The conditions of equilibrium at an interface

Since the interface can exchange energy and chemical species with either of the adjacent phases, it can be easily shown that, when the system is in equilibrium, its temperature, T, and chemical potentials, µ_k (k = 1,...,m), are the same as those in the surrounding phases. It follows that the bulk phases act as a thermochemical reservoir for the interface, which is, effectively, an open system. The controllable variables are T, {µ} and A, where the temperature and chemical potentials are controlled by setting their values in the bulk phases, and A is the area of the interface. As discussed in Sec. 7.4.3, the equilibrium of an open system requires that its work function,

Ω = E - TS - ∑_k µ_k N_k    7.105
have a minimum value for given values of T, V and the set {µ}. The work function of the interface is

Ω^S = E^S - TS^S - ∑_k µ_k N_k^S = σA    7.106
Equation 7.106 shows that the surface tension is just the surface excess of the work function per unit area, just as the pressure is the (negative) work function per unit volume of the bulk. The general condition of equilibrium is, by analogy to eq. 7.54,

(δΩ^S)_{T,{µ}} = δ(σA)_{T,{µ}} ≥ 0    7.107

that is, the interface is in equilibrium only if every possible change in the state of the interface that does not alter its temperature or chemical potentials increases Ω^S. The possible changes in the interface are of two types: those that change the state of the interface at fixed area (that is, those that change the internal state of the transition shell), and those that change the area. Consider each of these in turn.
Changes in the internal state of the interface

If we fix the area, eq. 7.107 becomes

(δσ)_{T,{µ}} ≥ 0    7.108

Hence, the equilibrium state of the interface is that which has the least value of σ for given values of T and {µ}. If we were trying to compute the internal structure of the interface, we would do that by comparing various possible structures and accepting that which leads to the lowest value of σ.

It is useful to recognize that the condition 7.108 can be read in reverse: any spontaneous change in the state of the interface must decrease its interfacial tension. Many important interfacial phenomena can be easily understood on the basis of this simple rule. For example, an impurity that is adsorbed onto a solid surface from the atmosphere will remain on the surface if its presence decreases σ, but will be drawn into the interior, or repelled back into the atmosphere, if it increases σ. An impurity in a polygranular solid, such as sulfur in steel, will segregate to grain boundaries if it lowers the interfacial tension of the boundary, but will otherwise remain in the bulk. Two solids will spontaneously bond together only if their interfacial tension is less than the sum of their surface tensions in air. We will encounter other examples later in the course.
Equilibrium at a curved interface

Next, consider changes in the shape of the interface at constant interfacial tension, for example, when a curved interface is deformed or displaced. In the simplest example,
the interface is spherical, and separates phases (call them α and β) that have different pressures (P^α and P^β), as illustrated in Fig. 7.8. If the radius, R, of the interface changes, it also changes the volumes of the bounding phases. The work function for the system shown in the figure is

Ω = - P^α V^α - P^β V^β + σA    7.109

When R is changed, infinitesimally, to R + δR, the change in Ω is

δΩ = - (P^β - P^α) 4πR² δR + σ(8πR) δR = - 4πR² [(P^β - P^α) - 2σ/R] δR    7.110

The change in Ω can always be made negative, by choosing the sign of δR, unless the bracketed term vanishes. Hence the sphere is only in equilibrium with respect to expansion or contraction of the interface if

(P^β - P^α) = 2σ/R    7.111

Note that equilibrium is only possible if the pressure inside the sphere, P^β, is greater than that outside, P^α. In the limit of a plane interface, R → ∞, the pressures are the same.
Fig. 7.8: A sphere of phase β, with pressure, P^β, and radius, R, enclosed by a surface and embedded in phase α with pressure P^α.
Equation 7.111 is a special case of a condition of mechanical equilibrium that applies to curved surfaces in general. Whatever the shape of a curved surface, it is always possible to characterize its local curvature by measuring its radius of curvature along two perpendicular axes that lie in the surface. Its mean curvature, K̄, is related to these two radii by the equation

K̄ = 1/R₁ + 1/R₂    7.112

The condition for mechanical equilibrium across the interface is, then,

(P^β - P^α) = σK̄    7.113

where P^β is the pressure in the interior, which is defined as the side of the interface that contains the shortest radius of curvature. In the case of a sphere, K̄ = 2/R, which regenerates eq. 7.111. In the case of a cylinder, K̄ = 1/R, where R is the cylinder radius.
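A quick numerical illustration of eq. 7.113 (a sketch, not part of the notes; σ ≈ 0.072 J/m² is the room-temperature surface tension of water, used here only as an example value):

```python
# Pressure jump across a curved interface, dP = sigma * Kbar (eq. 7.113),
# with mean curvature Kbar = 1/R1 + 1/R2 (eq. 7.112).
def pressure_jump(sigma, r1, r2=float('inf')):
    """Return P_inside - P_outside, in Pa, for principal radii r1, r2 in m."""
    return sigma * (1.0 / r1 + 1.0 / r2)

sigma_water = 0.072   # J/m^2, approximate value for a water surface at room T
print(pressure_jump(sigma_water, 1e-6, 1e-6))  # sphere, R = 1 um: 2*sigma/R = 144 kPa
print(pressure_jump(sigma_water, 1e-6))        # cylinder, R = 1 um: sigma/R = 72 kPa
```

The default infinite second radius covers the cylinder (one principal curvature zero) and the plane interface limit, where the pressure jump vanishes as R → ∞.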
Equilibrium at a three-phase junction line

A two-phase interface cannot simply end; it either closes on itself, as it does in the case of the sphere shown in Fig. 7.8, or terminates along a line where three interfaces meet, as illustrated in Figs. 7.9 and 7.10. We distinguish two types of three-phase junction lines. In the first type, which is exemplified by the line around the periphery of a drop of oil floating on water and by a three-grain junction line in a polygranular solid, all three phases can change shape by growth or mechanical deformation. In this case the three interfaces meet at angles that are, ordinarily, not all that far from 120º, as illustrated in Fig. 7.9.
Fig. 7.9: View perpendicular to a three-phase junction line where phases α, β and γ meet, showing the interfacial tensions σ_αβ, σ_βγ and σ_αγ.
In the second case, which is exemplified by a liquid droplet sitting on a solid surface, one of the phases is much more rigid than the other two, and the surface of that phase continues straight through the junction line (Fig. 7.10). To find the equilibrium configuration at a three-phase junction line it is simplest to use a force balance, in which each interface is imagined to exert a perpendicular pull on the junction line with a force per unit length equal to its surface tension, σ. The justification for this approach is illustrated in Fig. 7.11, which shows an element of surface that terminates in a junction line. If a unit length of the junction line is displaced normal to itself, as shown in the figure, the work done is

W = δΩ^S = σδA    7.114
which is exactly equal to the work done by a force, σ, per unit length of junction line that acts perpendicular to it. It follows that equilibrium can only be obtained at a junction line if the three interfacial tensions are in balance.
Fig. 7.10: A liquid-like droplet (β) sitting on a rigid surface (γ). The periphery of the droplet is a three-phase junction line where the αβ and βγ interfaces form the contact angle, θ.

In the case illustrated in Fig. 7.9, the force balance leads to an equation that is known as the Neumann triangle of forces. If we define the vector force, σ_ij, as a force that has a magnitude equal to the interfacial tension of the ij interface, and a direction that lies in the plane of the ij interface perpendicular to the three-phase junction line, then the condition of equilibrium at the line illustrated in Fig. 7.9 is that

σ_αβ + σ_βγ + σ_αγ = 0    7.115
When the three tensions are identical, as they are, for example, when the interfaces are grain boundaries in a solid that is isotropic in its surface properties, then the equilibrium condition is that the three interfaces make angles of 120º to one another.
Fig. 7.11: Illustration of the action of the surface tension on a three-phase junction line.

In the case illustrated in Fig. 7.10, the force balance leads to a relation that is known as the Young equation. Since the substrate solid is rigid, we need only balance forces parallel to the substrate surface. These forces are in balance if the αβ interface meets the γ surface at an angle, θ, the contact angle, that satisfies the relation

cos(θ) = (σ_αγ - σ_βγ)/σ_αβ    7.116
Of course, cos(θ) must have a value that lies between -1 and 1. If the right-hand side of eq. 7.116 is greater than 1 or less than -1, the equation has no solution. In the former case the β phase spreads over the γ surface to form a continuous film, and is said to wet the surface. In the latter case, the β phase is repelled by γ, and is separated from it by a thin film of α.
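The three regimes of eq. 7.116 can be sketched as a small routine (the tension values below are hypothetical, chosen only to illustrate each regime; labels follow Fig. 7.10, a β droplet on rigid γ in surrounding phase α):

```python
import math

def contact_angle(sigma_ag, sigma_bg, sigma_ab):
    """Young equation (eq. 7.116): return theta in degrees, or the wetting
    regime when the right-hand side falls outside [-1, 1]."""
    rhs = (sigma_ag - sigma_bg) / sigma_ab
    if rhs > 1:
        return "beta wets the surface (spreads as a film)"
    if rhs < -1:
        return "beta is repelled (separated by a film of alpha)"
    return math.degrees(math.acos(rhs))

print(contact_angle(1.0, 0.5, 1.0))   # cos(theta) = 0.5, so theta = 60 degrees
print(contact_angle(2.0, 0.1, 1.0))   # rhs > 1: wetting
print(contact_angle(0.1, 2.0, 1.0))   # rhs < -1: dewetting
```

A small contact angle thus signals that the βγ interface is cheap relative to the αγ surface it replaces, which is the thermodynamic driving force for wetting.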
The macroscopic shape of a crystal

If a crystal grows in free contact with the atmosphere then it can take on any shape it chooses. The preferred shape is that which minimizes the total surface energy

Ω^S = ∑_k σ_k A_k    7.117

where σ_k is the surface tension of the kth element of the external surface of the crystal, A_k is its area, and the sum is taken over the whole external surface. Every crystal is at least slightly anisotropic in its surface tension; close-packed or, in ionic crystals, electrically neutral surface planes invariably have lower surface tension. On the other hand, the total surface area is minimized if the crystal is a sphere, which uses all planes, and increases as the crystal develops facets with preferred orientations. The macroscopic shape of a crystal reflects the competition between the drive to minimize area, which favors a spherical shape, and the drive to present low-tension surfaces, which favors a faceted shape. In most metallic and covalently bonded solids the anisotropy in the surface tension is not great enough to drive a faceted shape. The faceted "crystals" that form in nature are almost always strongly ionic materials whose interfacial anisotropy is due to the fact that only certain planes are electrically neutral.
CHAPTER 8: SIMPLE SOLIDS

Once out of nature I shall never take
My bodily form from any natural thing
But such a form as Grecian goldsmiths make
Of hammered gold and gold enamelling

- William Butler Yeats, "Sailing to Byzantium"
8.1 INTRODUCTION

While it is difficult to calculate the thermodynamic properties of solids with numerical accuracy, it is relatively easy to develop a qualitative understanding of them. For this purpose, it is useful to consider three model solids: a perfect crystal, a random solid solution with near-neighbor bonding, and a slightly imperfect crystal.

First, we shall consider the perfect crystal. Its energy can be written as the sum of three terms: the net binding energy at zero temperature, which is the binding energy when the atoms are located at their equilibrium positions on the crystal lattice, the vibrational energy, which is the energy of atom motion about the equilibrium positions, and the electronic energy, which is the energy due to thermal excitations of the electrons. As we have defined it, the binding energy depends only on the volume of the crystal. The vibrational and electronic energies increase with temperature. That increase, which is measured by the specific heat, C_V, is primarily due to the thermal excitation of lattice vibrations. In most solids, the vibrational specific heat is governed by a material property, the Debye temperature, Θ_D, which provides a rough measure of the energy required to excite all possible lattice vibrations. In the low-temperature limit (T < Θ_D/4), the specific heat increases with the cube of the temperature, C_V ∝ T³, essentially because more and more lattice vibrational modes are excited as the temperature is raised. At high temperature, T > Θ_D, the specific heat approaches a constant, C_V ≈ 3Nk, essentially because all possible lattice vibrations are excited. The electrons also contribute to the specific heat, since they are also excited to higher energy levels as the temperature is raised. However, because of the Pauli exclusion principle, only those electrons that are very close to the Fermi energy can be excited.
Since the number of such electrons is always small, their contribution to CV is negligible at ordinary temperatures. Given the specific heat, it is possible to find the thermal contribution to the Helmholtz free energy of a crystal, which establishes the form of the fundamental equation. Differentiating this equation leads to the entropy, and differentiating it again produces the compressibility and the coefficient of thermal expansion (as well as regenerating CV). The entropy depends on the Debye temperature. The compressibility and coefficient of thermal expansion depend both on ΘD and on its derivative with respect to volume, which is specified by a material property known as the Grüneisen parameter, γ.
Materials Science
Fall, 2008
Second, we shall consider a random solid solution. When the solid is a solution with more than one chemical component, as are most of the materials of interest to us, its free energy is affected by the configurational entropy that arises from the many different ways in which distinct kinds of atoms can be arranged over the lattice sites. When all configurations are equally likely, the configurational entropy can be calculated as described in the previous chapter. We use it to compute the fundamental equation and explore the behavior of a simple solution, in which each atom is assumed to bond to its nearest neighbors only. The model illustrates how the configurational entropy dominates the behavior when the temperature is sufficiently high, and produces mutual solubility between chemical species that would segregate (or form ordered compounds) at lower temperature.

Finally, we shall consider an imperfect crystal and calculate the equilibrium density of defects as a function of temperature. The results show that the crystal always contains vacancies, whose concentration increases exponentially with the temperature. However, the other, high-energy defects, such as dislocations and grain boundaries, are non-equilibrium features that would seldom be found if an unconstrained equilibrium were easily attained.
8.2 THE PERFECT CRYSTAL

8.2.1 The internal energy

Consider a perfect, crystalline solid that contains a given number of atoms (N) in an essentially fixed volume (V), and whose temperature, T, is fixed by its environment. As discussed in the previous chapter, the Helmholtz free energy of this solid can be found by identifying all of its possible states, computing their energies, and using the results to find the canonical partition function (eq. 7.91). This method is difficult, but it works for any system. When the system is a single, perfect crystal, however, there is an easier way. We need only calculate the mean value of the internal energy, E, of the material as a function of temperature. The equation that relates the internal energy of a system to its temperature, volume and particle (or mole) number is called the caloric equation of state:

E = E(T,V,N)        8.1

In the general case, the caloric equation of state is not equivalent to the fundamental equation. From the definition of the Helmholtz free energy,

E = F + TS = F - T(∂F/∂T) = - T² [∂(F/T)/∂T]        8.2
Eq. 8.2 shows that the caloric equation of state is related to the partial derivative of F(T,V,N) with respect to T. If there is a part of F that is linear in T (that is, an additive term of the form Tg(V,N), where g is a function of V and N only), that part is not determined by the caloric equation of state. To determine F completely we also need the thermal equation of state,

P = P(T,V,N) = - (∂F/∂V)        8.3

to fix the volume dependence of F. However, in the specific case of a perfect crystal, the fundamental equation contains no term linear in T that would drop out of eq. 8.2. It follows that the fundamental equation of a perfect crystal, F(T,V,N), can be calculated from the caloric equation of state.

To compute the internal energy of a perfect crystal we write it in the form

E = E0(V) + ED(V,T)        8.4
where we have left the dependence on the particle number, N, implicit (N is fixed). E0(V) is the energy of the solid in the limit T = 0. It is essentially equal to the binding energy of the solid when all atoms are in their static equilibrium positions and the electrons are in the lowest-energy electron states. (There is also a small zero-point vibrational energy, a quantum-mechanical effect that makes a negligible contribution to the energy at finite temperature.) We discussed the binding energy in Chapter 3. In this chapter we are concerned with the temperature-dependent part of the energy, ED(V,T), the thermal energy.

There are two potentially significant contributions to the thermal energy: the vibration of atoms about their equilibrium positions, which becomes increasingly violent as the temperature is raised, and the excitation of electrons, particularly the valence electrons that have energies near the Fermi level and can be excited to relatively high-energy states. We consider these in turn, and will find that, except in the case of a metal at very low temperature, almost all of the thermal energy is due to the atomic vibrations. To appreciate the vibrational energy we need to understand the nature of lattice vibrations in a solid.

8.2.2 Lattice vibrations

At finite temperature the atoms in a solid are in continuous thermal motion about their equilibrium positions. To describe the vibration of an atom it is necessary to specify its motion along each of the three perpendicular directions in space. Since an atom can move at different velocities along the three directions in space, each atom has three vibrational degrees of freedom. It follows that a solid that contains N atoms has 3N vibrational degrees of freedom, three for each atom.

However, the motions of the atoms in a solid are not independent of one another. The atoms are coupled together by strong bonding forces. If a particular atom is displaced from its equilibrium position, the lengths and angles of the bonds it makes with its neighbors are changed. The resulting forces act both to restore the displaced atom to its equilibrium position and to displace its neighbors to accommodate its displaced position. The displaced neighbors, in turn, exert forces that tend to displace their neighbors. In this way the displacement of an atom generates a vibrational wave that propagates through the solid. These waves are called the normal modes of vibration of the solid. Since there are a total of 3N vibrational degrees of freedom of the atoms, there are 3N normal modes.
Transverse vibration of a linear chain of atoms

A particular example of a displacement wave in a linear chain of atoms is shown in Fig. 8.1. Each atom displaces its neighbor, with the result that the vertical displacements of the atoms are described by a sinusoidal wave with wavelength, λ. The normal vibrations include waves with all possible values of the wavelength, λ.

Fig. 8.1: A transverse displacement wave of wavelength, λ, in a linear chain of atoms with interatomic spacing, a.

To find all of the possible vibrational waves in a linear chain like that shown in Fig. 8.1, let the chain have a total of N atoms, located at positions x = na, where n is an integer. A transverse wave like that illustrated can be described mathematically by the relation

u(x) = u0 e^(ikx)        8.5

where u(x) is the vertical displacement of the atom at x = na and k is the wave vector,

k = 2π/λ        8.6

Since

e^(ik(x+λ)) = e^(ikx) e^(ikλ) = e^(ikx) e^(2πi) = e^(ikx)        8.7
the displacement is periodic with period, λ.

The displacement waves that actually can appear in a finite chain of N atoms are affected by end effects at the terminations of the chain. If N is large, these are surface effects that make a negligible contribution to the energy. To remove them we adopt what are called periodic boundary conditions. We imagine the chain of N atoms to be embedded in an infinite chain that has the property that every N-atom segment is like every other, in the sense that identically situated atoms in every segment have the same displacement. The displacement is then periodic with the macroscopic period, Na:

u(x + Na) = u0 e^(ik(x+Na)) = u(x) = u0 e^(ikx)        8.8

By an analysis like that used in eq. 8.7, equation 8.8 holds only if

kNa = 2πm        8.9

where m is an integer. Hence the allowed values of the wave vector, k, are

k = 2πm/(Na)        8.10
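The discreteness of eq. 8.10 can be explored numerically: the displacement is only defined at the lattice sites x = na, and sampling a wave at those points shows that increasing m by N leaves every atomic displacement unchanged. A minimal sketch (the chain length N = 8 and spacing a = 1 are arbitrary illustrative choices):

```python
import cmath

# Illustrative chain: N atoms with interatomic spacing a (arbitrary units).
N, a = 8, 1.0

def displacement(m, n):
    """Displacement u(x) = exp(ikx) at lattice site x = n*a,
    for the allowed wave vector k = 2*pi*m/(N*a) of eq. 8.10 (u0 = 1)."""
    k = 2.0 * cmath.pi * m / (N * a)
    return cmath.exp(1j * k * n * a)

# A wave of index m + N reproduces the wave of index m at every lattice site.
m = 3
same = all(abs(displacement(m, n) - displacement(m + N, n)) < 1e-12
           for n in range(N))
print(same)  # True: only N wave vectors are physically distinct
```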
There are exactly N independent values of the integer, m. To see this, let m = N + p, where p is an integer. Then

e^(ikx) = e^(ikna) = exp(i2πn + 2πinp/N) = exp(2πinp/N) = e^(ik'x)        8.11

where k' = 2πp/(Na). Hence the waves that are generated by wave vectors for which m > N simply repeat those for which m ≤ N. It is most convenient to choose the N values of m to be the set

- N/2 ≤ m ≤ N/2        8.12

(there are only N independent values, since -N/2 gives the same wave as N/2). With this choice the N independent values of k are

- π/a < k ≤ π/a        8.13

For T > ΘD, the high-temperature limit has certainly been reached. If we substitute these approximations and results into equation 8.36, the vibrational energy is given by the integral
Ev(T,V,N) = E° + ∫_0^ωD ⟨n(ω,T)⟩ ℏω g(ω) dω

          = E° + (4πn0/c³) ∫_0^ωD ℏω³/(e^(ℏω/kT) - 1) dω        8.41

Now defining the integration variable,

x = ℏω/kT        8.42

and substituting the value of n0, we obtain, after a bit of algebra,
Ev(T,V,N) = E° + [3V(kT)⁴/(2π²(ℏc)³)] ∫_0^xD x³/(e^x - 1) dx

          = E° + 9NkT (T/ΘD)³ ∫_0^xD x³/(e^x - 1) dx        8.43

where the limit of integration is

xD = ΘD/T        8.44

Since the value of the integral depends on its upper limit, xD, equation 8.43 can be written in the simpler form

Ev(T,V,N) = E°(V,N) + 3NkT f(ΘD/T)        8.45

where E° is the zero-point vibrational energy and f(ΘD/T) is a universal function (the Debye function) of the variable ΘD/T:

f(ΘD/T) = 3 (T/ΘD)³ ∫_0^xD x³/(e^x - 1) dx        8.46

The Debye function estimates the correction to the Dulong-Petit relation for the vibrational energy when the temperature is below ΘD. Its value is always less than 1.
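Equation 8.46 is straightforward to evaluate numerically. The sketch below (a pure-Python midpoint rule; the step count is an arbitrary accuracy choice) confirms the two limits quoted in the text: f → 1 when T >> ΘD, and f → (π⁴/5)(T/ΘD)³ when T << ΘD:

```python
import math

def debye_f(theta_over_t, steps=100_000):
    """The Debye function f(theta_D/T) of eq. 8.46, by midpoint-rule quadrature."""
    x_d = theta_over_t
    dx = x_d / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x**3 / (math.exp(x) - 1.0) * dx
    return 3.0 * total / x_d**3

# High-temperature limit, T = 100 theta_D: f is very close to 1 (Dulong-Petit).
print(debye_f(0.01))

# Low-temperature limit, T = theta_D/50: f approaches (pi^4/5)(T/theta_D)^3.
low_t = debye_f(50.0)
print(low_t / ((math.pi**4 / 5.0) * (1.0 / 50.0)**3))  # ratio close to 1
```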
The specific heat at low temperature

In the limit of low temperature xD → ∞, and the Debye function can be found analytically. The definite integral is

∫_0^∞ x³/(e^x - 1) dx = π⁴/15        8.47

so the vibrational energy is

Ev(T,V,N) = E°(V,N) + (3π⁴/5) NkT (T/ΘD)³        8.48

The vibrational specific heat in the low-temperature limit is obtained by differentiating equation 8.48 with respect to T, and is

CV = 3Nk {(4π⁴/5) (T/ΘD)³}        8.49
which differs from the Dulong-Petit law by the factor in braces. Equation 8.49 is exact in the limit of low temperature, and shows that the vibrational specific heat of a solid varies as T³ in the limit of low temperature.
Fig. 8.8: The vibrational specific heat, CV, as a function of temperature in the Debye approximation. For T > ΘD/2 the specific heat is approximately constant, and given by the Dulong-Petit Law.
The specific heat at finite temperature in the Debye model

The Debye function can be solved numerically to estimate the vibrational energy and the specific heat for all temperatures. The result is important because it is universal. In the Debye approximation the vibrational energy depends on only a single material parameter, the Debye temperature, ΘD. In particular, the vibrational specific heat depends only on the dimensionless combination, T/ΘD. To see this we differentiate equation 8.43:

CV = (∂/∂T)[Ev(T,V,N)] = 3Nk {f(ΘD/T) - (ΘD/T) f'(ΘD/T)}        8.50

where f'(ΘD/T) is the derivative of f(ΘD/T) with respect to its argument:

f'(ΘD/T) = df(ΘD/T)/d(ΘD/T)        8.51

Eq. 8.50 shows that, to within the accuracy of the Debye approximation, the vibrational specific heats of all simple solids should have the same form when expressed as a function of the dimensionless temperature, T/ΘD. This universal function is shown in Fig. 8.8. It is reasonably well obeyed by all solids that have primitive unit cells.
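The universality can be checked by differentiating Ev = 3NkT f(ΘD/T) numerically. In the sketch below temperature is measured in units of ΘD and the specific heat in units of 3Nk (the quadrature step count and difference step h are arbitrary accuracy choices):

```python
import math

def debye_f(x_d, steps=20_000):
    """Debye function f(theta_D/T) of eq. 8.46 (midpoint-rule quadrature)."""
    dx = x_d / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x**3 / (math.exp(x) - 1.0) * dx
    return 3.0 * total / x_d**3

def cv_over_3nk(t, h=1e-4):
    """C_V/3Nk from a central difference of E_v/3Nk = t*f(1/t),
    where t = T/theta_D (compare eq. 8.50)."""
    e_v = lambda s: s * debye_f(1.0 / s)
    return (e_v(t + h) - e_v(t - h)) / (2.0 * h)

print(cv_over_3nk(2.0))   # T = 2 theta_D: close to the Dulong-Petit value 1
print(cv_over_3nk(0.05))  # T = theta_D/20: small, varying as T^3
```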
Solids with non-primitive unit cells

Strictly speaking, the Debye model is restricted to solids that have only acoustic vibrational modes, that is, to solids that have primitive unit cells, such as FCC and BCC solids. It can be extended to solids with non-primitive cells by accounting for the optical modes of vibration. It is often possible to do this to a reasonable approximation by assigning a single frequency to all of the optical modes. In any case, the Debye T³ law governs the vibrational specific heat at very low temperature, since only the acoustic modes are activated when T is small. We can simply define a value of ΘD for solids with non-primitive structures, such as diamond and the HCP metals, by fitting equation 7.126 to the low-temperature specific heat. With this approximation the Debye model can be applied to crystals with non-primitive structures, and works reasonably well for elemental solids, solid solutions and ordered substitutional compounds. The model is less useful for interstitial compounds and molecular solids, since the optical modes in these solids often have high frequencies. The same approximation can be used to define the Debye temperatures of amorphous solids, and works reasonably well for amorphous metals, semiconductors and simple glasses. It is less useful for polymeric and other molecular glasses.
The Debye temperature as a material property

Given that the Debye temperature characterizes the vibrational specific heat of a solid, it is useful to see how it can be related to other material properties. According to equation 7.117, the only material property on which it depends is the mean speed of sound, the mean velocity of propagation of waves of long wavelength. The speed of sound can be calculated from the elastic constants of the material, which we shall discuss later, but can also be related to the strength of bonding in the solid. The stronger the bonds, the stronger the force that tends to restore a displaced atom to its equilibrium position, and, hence, the higher the frequency and velocity of a sound wave. Hence strongly bonded solids have high Debye temperatures (diamond has ΘD ≈ 2000K), while weakly bonded solids have low values (Pb has ΘD ≈ 95K). For most metals, ΘD lies in the range 200-500K.

8.2.5 A qualitative version of the Debye model

As the theoretical models of solid state physics go, the Debye model is a relatively simple one. Nonetheless, it involves some rather messy integrals whose form obscures the qualitative features of the physics that govern the lattice contribution to the specific heat. For that reason, it is useful to describe an even simpler model that may help to clarify the physics of the problem. While it is seriously deficient in mathematical rigor, it does give the right functional dependence of the specific heat.

Following Debye, let us assume that the frequency of a lattice vibration is linearly related to its wave number, ω = ck, as in eq. 8.37. At temperature, T, the lattice vibrations can be divided into two sets: those for which the phonon energy, ℏω, is small compared to kT, and those for which it is large compared to kT. The former, low-energy vibrational modes are strongly excited, while the latter are not. Let ωT be the cut-off frequency that divides strongly excited vibrations from weakly excited ones, as illustrated in Fig. 8.9. A crude representation of the Bose-Einstein function suggests that ℏωT ≈ 2kT is a reasonable choice.

Fig. 8.9: Schematic drawing illustrating the division between excited modes (ω < ωT) and quiescent modes at temperature T.
Now make the approximation that all vibrations with ω < ωT are excited into the high-temperature limit, with energy kT, while those with ω > ωT remain quiet. Since ℏωD = kΘD, and since the 3N vibrational modes are spherically distributed through the Brillouin zone, the number of excited modes is

N(T) ~ 3N (2T/ΘD)³        (T < ΘD/2)
N(T) ~ 3N                 (T > ΘD/2)        8.52

the vibrational energy is

E(T) ~ 3NkT (2T/ΘD)³      (T < ΘD/2)
E(T) ~ 3NkT               (T > ΘD/2)        8.53

and the specific heat is

Cv(T) ~ 12Nk (2T/ΘD)³     (T < ΘD/2)
Cv(T) ~ 3Nk               (T > ΘD/2)        8.54
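The two-branch estimate of eqs. 8.52-8.54 is simple enough to write down directly. In the sketch below the Debye temperature of 300 K is an arbitrary illustrative value; note that this crude model is discontinuous at the crossover T = ΘD/2:

```python
THETA_D = 300.0  # illustrative Debye temperature, in kelvin

def excited_mode_fraction(t):
    """Fraction of the 3N modes that are strongly excited (eq. 8.52)."""
    return min((2.0 * t / THETA_D) ** 3, 1.0)

def cv_over_3nk(t):
    """Specific heat of the qualitative model in units of 3Nk (eq. 8.54)."""
    if t < THETA_D / 2.0:
        return 4.0 * (2.0 * t / THETA_D) ** 3
    return 1.0

print(cv_over_3nk(600.0))  # high temperature: the Dulong-Petit value
print(cv_over_3nk(30.0))   # low temperature: proportional to T^3
```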
The vibrational energy and specific heat given by eqs. 8.53 and 8.54 are in reasonable numerical agreement with those predicted by the Debye model, and have the right temperature dependence in both the high-T and low-T limits.

A useful qualitative feature of this model is that it makes it clear why the specific heat changes with temperature. At high temperature all of the available vibrational modes are strongly activated. Increasing the temperature further raises the level of activation of each of the 3N modes by the same amount, so the specific heat is constant. At low temperature, however, only some of the modes are strongly activated. Increasing the temperature raises the energy in two ways: the energy of the activated modes is raised, and vibrational modes that were previously quiet are activated. In this case the temperature increases both the energy per mode and the effective number of modes. As a consequence, the specific heat is a strong function of the temperature.

8.2.6 The electronic contribution to the specific heat

The second contribution to the thermal energy of a perfect crystal is made by the valence electrons, which reconfigure into excited states above the Fermi energy, EF, as the temperature rises. There are Nz electrons, where N is the number of atoms and z is the valence. Since these are in rapid motion through the crystal lattice, one might expect them to make a large contribution to the specific heat. However, they do not. The electronic specific heat is very small at all realistic temperatures, even in metals with high electrical conductivity. The reason is that only a few of the valence electrons are excited by an increase in temperature.
Fig. 8.10: Plot of CV/T against T². The slope gives the coefficient of the vibrational contribution; the intercept is the coefficient of the electronic term.
On the other hand, the electronic contribution to the thermal energy is responsible for an anomalous feature of the specific heats of metals at very low temperature. In the limit of low temperature the specific heat of a metal obeys an equation of the form

CV = AT + BT³        8.55
where A and B are constants. The T³ term is due to lattice vibrations. While A is much smaller than B, at temperatures so low that AT >> BT³ the linear term dominates. If one plots the specific heat of a metal in the manner shown in Fig. 8.10, as CV/T against T², then the result is a straight line with slope, B, and intercept, A.

The behavior of the electronic specific heat has its source in the Pauli exclusion principle, which requires that there be only one electron in each permissible state. Quantum particles that obey the Pauli exclusion principle are called Fermions. At finite temperature they are distributed over the set of states available to them according to the Fermi-Dirac distribution function, which states that the expected number of particles, ⟨n(E)⟩, in a state with energy, E, at temperature, T, is

⟨n(E)⟩ = 1/(e^((E-EF)/kT) + 1)        8.56

where EF is the Fermi energy. Note that ⟨n(E)⟩ is always less than 1, as required by the Pauli exclusion principle. The Fermi energy is the energy at which the probability that a state is occupied is exactly 1/2:

⟨n(EF)⟩ = 1/2        8.57

In the limit T → 0 the exponential in the denominator of eq. 8.56 vanishes if E < EF, and is arbitrarily large if E > EF. It follows that

lim(T→0) ⟨n(E)⟩ = 1        (E < EF)
lim(T→0) ⟨n(E)⟩ = 0        (E > EF)        8.58
As we assumed in Chapter 2, all states below EF are filled in the ground state of the solid (T ≈ 0) while all states above EF are empty. The Fermi level, EF, itself lies half-way between the highest filled and lowest empty states. In a metal the allowed electron states are so close together at EF that the Fermi energy is essentially equal to the energy of the highest filled state. In an intrinsic semiconductor or insulator the highest filled state lies at the top of the valence band while the lowest empty state is at the bottom of the conduction band; EF is in the center of the band gap.

The Fermi-Dirac distribution function is plotted for some finite T in Fig. 8.11. In the limit T → 0 all states below EF are filled, all states above EF are empty. At finite temperature the distribution differs significantly from the low-temperature limit only for states whose energies differ from EF by no more than the thermal energy, kT. When E - EF > kT,

⟨n(E)⟩ ≈ e^(-(E-EF)/kT)        (E - EF > kT)        8.59
and rapidly asymptotes to 0 as the energy increases. When EF - E > kT,

⟨n(E)⟩ ≈ 1 - e^((E-EF)/kT)        (EF - E > kT)        8.60

and asymptotes rapidly to 1 as the energy falls below EF. [To derive equation 8.60 note that (1 + x)⁻¹ ≈ 1 - x when x is small.]
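The distribution of eq. 8.56 and its limiting forms, eqs. 8.59 and 8.60, are easy to verify numerically. In the sketch below energies are measured in units of kT relative to the Fermi level, and the value 5 is an arbitrary choice of "several kT":

```python
import math

def fermi_dirac(x):
    """Expected occupancy <n(E)> of eq. 8.56, where x = (E - E_F)/kT."""
    return 1.0 / (math.exp(x) + 1.0)

print(fermi_dirac(0.0))                # exactly 1/2 at E = E_F (eq. 8.57)
print(fermi_dirac(5.0), math.exp(-5))  # E - E_F = 5kT: nearly exp(-5), eq. 8.59
print(1.0 - fermi_dirac(-5.0))         # E_F - E = 5kT: deviation from full
                                       # occupancy is ~ exp(-5), eq. 8.60
```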
Fig. 8.11: The Fermi-Dirac distribution function for electrons at finite temperature. Note that the difference from the ground state is essentially confined to the shaded band of energies within kT of EF.
The electronic specific heat of a metal

In a metal the Fermi energy lies within a band of allowed electron states. As the temperature increases the electron distribution is smeared over the states within a distance, kT, of EF, as shown in Fig. 8.11. Since the widths of bands of electron states are of the order of several electron volts (eV), and 1 eV is the value of the thermal energy at nearly 10⁴ K, only a very small fraction of the electrons within the valence band of the metal are affected.

The thermal contribution to the electron energy can be roughly estimated in the following way. Let Ng(EF)dE be the electron density of states at the Fermi level, the number of electron states with energies between EF and EF+dE. The factor g(EF) is the density of states per atom, and is a small number since there are only 1-4 valence electrons per atom. Assuming that g(EF) is approximately constant for a range of width kT on either side of EF, the total number of electrons that are excited at temperature, T, is approximately

Ne(T) ≈ (NkT/2) g(EF)        8.61
where we have also assumed that 1/2 of the available electrons are excited. Each of these electrons acquires a thermal energy of the order of kT. Hence the thermal contribution to the electron energy is, approximately,

Ee(T) ≈ (N/2) g(EF) (kT)²        8.62

Fig. 8.12: Fermi-Dirac distribution for a narrow-gap semiconductor at moderate temperature.

It follows from eq. 8.62 that the electronic specific heat is of the form suggested by eq. 8.55:

(CV)e = AT        8.63
where the coefficient, A, is

A ≈ Nk² g(EF)        8.64

A much more rigorous theoretical calculation [J. Ziman, Principles of the Theory of Solids, p. 144] gives the result

A = (π²/3) Nk² g(EF)        8.65
which shows that the simple model is not so bad. The coefficient, A, is a small number. At ordinary temperatures the electronic term accounts for only about 1% of the value of the specific heat. The electronic contribution is only observable at very low temperatures, where AT >> BT³. The electronic contribution to the thermal energy is otherwise negligible, and is not ordinarily included in calculations of the fundamental equation of solids.
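The smallness of A relative to B in eq. 8.55 can be illustrated with order-of-magnitude numbers. The coefficients below are hypothetical values of roughly the size found for simple metals, not data for any particular element:

```python
import math

# Hypothetical low-temperature coefficients for a simple metal, per mole:
A = 7e-4  # electronic coefficient, J/(mol K^2) -- illustrative value only
B = 5e-5  # lattice (T^3) coefficient, J/(mol K^4) -- illustrative value only

def cv_low_t(t):
    """Low-temperature specific heat C_V = A*T + B*T^3 (eq. 8.55)."""
    return A * t + B * t**3

# The linear electronic term dominates only below T* = sqrt(A/B).
t_star = math.sqrt(A / B)
print(t_star)  # a few kelvin for these coefficients

# At room temperature the electronic term is a ~1% correction to the
# Dulong-Petit value 3R.
R = 8.314  # gas constant, J/(mol K)
print(A * 300.0 / (3.0 * R))
```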
Semiconductors and insulators

The electronic contribution to the specific heat of a semiconductor or insulator is significantly smaller than in a metal. The reason is shown schematically in Fig. 8.12, in which the Fermi-Dirac distribution is plotted against the distribution of allowed states in an intrinsic semiconductor or insulator. The Fermi energy is located within the energy gap. The magnitude of kT is less than 0.1 eV at normal temperatures. The band gaps of typical semiconductors are of the order of an electron volt, while the band gaps of insulators are several eV. Hence very few electrons are excited across the band gap at finite temperature, and the electronic specific heat is negligible.

8.2.7 The Helmholtz free energy

We can find the Helmholtz free energy of the perfect crystal by integrating the differential expression given in eq. 7.82:

E = - T² [∂(F/T)/∂T]        8.2
Since eq. 8.2 is a partial differential equation in which the partial derivative is taken at constant volume, it actually determines F/T only to within an additive function of volume. However, in the case of the simple vibrating solid we are considering here no such additive function appears (a fact that can be proved by invoking the Third Law of thermodynamics).
The low-temperature form of the fundamental equation

Equation 8.2 is easily solved in the low-temperature limit. Using eq. 8.48,

∂(F/T)/∂T = - E°/T² - (3π⁴Nk/5ΘD³) T²        8.66

whose solution is

F(T,V,N) = E°(V,N) - (π⁴/5) NkT (T/ΘD)³        8.67

The Debye temperature, ΘD, is a function of the volume per atom, since expanding the solid changes the strength of binding between the atoms and, hence, changes the vibrational frequencies. Hence the right-hand side of eq. 8.67 is a function of T, V and N. Since these are the natural variables for the Helmholtz free energy, eq. 8.67 is the low-temperature form of the fundamental equation of a simple solid. The entropy of the solid at low temperature is the vibrational entropy
S = - (∂F/∂T) = (4π⁴/5) Nk (T/ΘD)³        8.68
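As a quick consistency check, eq. 8.68 can be recovered by differentiating eq. 8.67 numerically. In the sketch below the units are chosen so that Nk = 1 and E° = 0, with ΘD = 300 as an arbitrary illustrative value:

```python
import math

NK = 1.0         # Nk in chosen units
THETA_D = 300.0  # illustrative Debye temperature
E0 = 0.0         # binding-energy offset, set to zero for the check

def helmholtz(t):
    """Low-temperature fundamental equation, eq. 8.67."""
    return E0 - (math.pi**4 / 5.0) * NK * t * (t / THETA_D) ** 3

def entropy_numeric(t, h=1e-3):
    """S = -(dF/dT), evaluated by central difference."""
    return -(helmholtz(t + h) - helmholtz(t - h)) / (2.0 * h)

t = 20.0
s_analytic = (4.0 * math.pi**4 / 5.0) * NK * (t / THETA_D) ** 3
print(entropy_numeric(t), s_analytic)  # the two agree (eq. 8.68)
```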
The pressure; the Grüneisen equation of state

The pressure of the solid is given as a function of (T,V,N) by the partial derivative

P = - (∂F/∂V) = - (∂E°/∂V) - (3π⁴/5) NkT⁴ (1/ΘD⁴)(dΘD/dV)

  = - (∂E°/∂V) - (3π⁴/5) NkT (T/ΘD)³ (1/V) [dln(ΘD)/dln(V)]        8.69

where dln(x) = dx/x. The Debye temperature depends on the volume through the atomic volume, v = V/N; the vibrational spectrum of the solid does not change if we simply increase the size of the solid by adding atoms at constant atomic volume. Since the partial derivative in eq. 8.69 is taken at constant N, dln(V) = dln(v). If we now define the Grüneisen parameter

γ = - dln(ΘD)/dln(v) = - (V/ΘD)(∂ΘD/∂V)        8.70

which specifies the volume dependence of the Debye temperature, the pressure is given by

P = P(N,V,T) = - (∂E°/∂V) - (3π⁴/5) NkT (T/ΘD)³ (1/V) [dln(ΘD)/dln(v)]

             = - (∂E°/∂V) + γED/V        8.71

where ED is the vibrational energy. It can be shown that equation 8.71 holds at all temperatures for solids whose specific heats are well represented by the Debye interpolation formula. Equation 8.71 is called the Grüneisen equation of state of a solid, and is obeyed reasonably well by all simple solids, including simple compounds. The Grüneisen parameter, γ, is a dimensionless constant whose value lies in the range 1-3 for elemental solids, solutions and compounds that have simple crystal structures.

The two terms that contribute to the pressure represent two different physical processes. The first term contains the effect of the interatomic bonding. The binding energy has a minimum at V0, the volume of the solid in the limit of zero pressure and temperature. If the solid is compressed to V < V0 the bonding term exerts a positive pressure
that tends to expand it to restore V0. If the solid is expanded to V > V0 the solid tends to contract, and the bonding contribution to the pressure is negative. The second term includes the effect of the lattice vibrations. It is always positive. The lattice vibrations tend to push the atoms apart.

8.2.8 Thermodynamic properties

Given the fundamental equation, we can also find the thermodynamic properties of the solid. We have already discussed the specific heat, CV. The isothermal compressibility was defined in Chapter 7, and is

κT = - (1/V)(∂V/∂P)_T,{N}        8.72
Its reciprocal (the bulk modulus) is obtained by differentiating the Grüneisen equation of state with respect to V. Assuming that γ is constant, the low-temperature result is

(κT)⁻¹ = - V(∂P/∂V)_T,{N}

       = V(∂²E°/∂V²)_{N} + γED/V - γ(∂ED/∂V)_T,{N}

       = V(∂²E°/∂V²)_{N} - (γED/V)[3γ - 1]        8.73

Fig. 8.13: The binding potential, φ(r), of a diatomic molecule, showing the vibrational energy states and the increase of the equilibrium radius, r0, as the vibrational energy increases.
The first term on the right in 8.73 is the bulk modulus at zero T, which is determined by the interatomic binding. The second term is much smaller in magnitude and is due to the lattice vibrations. Since 3γ > 1, the vibrational term decreases (κT)⁻¹, and hence decreases the bulk modulus. The physical reason for this behavior is that, since γ is positive, an increase in volume decreases the Debye temperature, ΘD, and hence increases the vibrational energy, which is proportional to [ΘD]⁻³. In effect, the binding strength, which provides the spring constant for the lattice vibrations, goes down. This has the result that the solid contracts more for a given increase in the pressure than it would at T = 0. Hence the compressibility increases with the temperature.

The coefficient of thermal expansion is (Chapter 7):

αT = (1/V)(∂V/∂T)_P,{N}        8.74

It can be shown that

(1/V)(∂V/∂T)_P,{N} = - (1/V)(∂V/∂P)_T,{N} (∂P/∂T)_V,{N} = κT (∂P/∂T)_V,{N}        8.75

By differentiating the Grüneisen equation of state (8.71) with respect to the temperature we find

(∂P/∂T)_V,{N} = γCV/V        8.76

and, hence,

αT = γCVκT/V        8.77
Equation 8.77 is known as the Grüneisen relation. Its validity is not confined to low temperature; it applies whenever the Grüneisen equation of state is valid.

The physical source of the coefficient of thermal expansion lies in the anharmonicity of the atom vibrations in the solid. To visualize this consider a diatomic molecule, as we did in Chapter 2. The binding potential of the molecule, φ(r), has the form shown in Fig. 8.13. The vibrations of the molecule have quantized values that are indicated by the horizontal lines within the potential well. Because of the anharmonic shape of the binding potential, the restoring force on the atoms is smaller when r > r0 than it is when r < r0, and, hence, the atoms spend a greater fraction of the time at r > r0. This has the consequence that the time-average interatomic separation, r0(T), increases as the vibrational energy increases, as shown in the figure.

The situation in a vibrating solid is qualitatively the same. The total binding potential between atoms is anharmonic, roughly as shown in the figure. As the vibrational energy increases the time average of the interatomic separation increases, so the solid expands.
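The Grüneisen relation is easy to exercise with round numbers. The values below are copper-like order-of-magnitude figures chosen purely for illustration, not precise data for any material:

```python
# Illustrative room-temperature values of roughly the size found for copper:
gamma = 2.0      # Grueneisen parameter, dimensionless -- illustrative
c_v = 24.0       # molar specific heat, J/(mol K)      -- illustrative
kappa_t = 7e-12  # isothermal compressibility, 1/Pa    -- illustrative
v = 7e-6         # molar volume, m^3/mol               -- illustrative

# Grueneisen relation, eq. 8.77: volumetric coefficient of thermal expansion.
alpha_t = gamma * c_v * kappa_t / v
print(alpha_t)  # of order 1e-5 to 1e-4 per kelvin, typical of a metal
```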
8.3 THE RANDOM SOLID SOLUTION

8.3.1 The Bragg-Williams model

The previous section showed how the fundamental equation of a perfect crystal could be found by considering its binding energy at zero temperature together with the changes in its energy due to vibrational and electronic excitations as the temperature is raised. In this section we formulate the fundamental equation of a random solid solution, so that we can gain some insight into the influence of composition on the thermochemical properties of solid solutions. To do this, we shall use the Bragg-Williams model, which is the simplest model of a solid solution that includes compositional effects.

The Bragg-Williams model treats a binary solution of two kinds of atoms, A and B, that are distributed over the sites of a fixed crystal lattice. It assumes that the internal energy of the solution is equal to its binding energy, and that E is independent of temperature (we discussed the thermal contribution to the energy in the previous section, and can add it to the model later, if we wish). To evaluate the binding energy, it assumes that each atom interacts only with the atoms that are closest to it on the crystal lattice. Since the energy is independent of temperature, the entropy is just the configurational entropy of the solution (Section 7.2.3), and is also independent of temperature. The free energy of the solution is, then,

F(T,{N}) = E°({N}) - TS({N})        8.78
where E° is the binding energy at the equilibrium volume, V0, and S is the configurational entropy of the distribution of atoms over the atom sites of the solid solution. Both E° and S depend on the composition.

Fig. 8.14: Solution of A and B atoms, illustrating AA, BB and AB bonds.

To evaluate E°, let each atom interact only with those atoms that are nearest neighbors to it, as illustrated in Fig. 8.14, and assume that the bonding interaction is independent of temperature and concentration. The atom fraction of component B is x = NB/N, where NB is the number of B-atoms and N is the total number of atoms (equal to the total number of lattice sites). The atom fraction of A is (1-x).

8.3.2 The internal energy

If VAA is the energy of a bond between A atoms, VBB the energy of a B-B bond, and VAB the energy of an A-B bond, then the energy of the solution is

E = VAA NAA + VBB NBB + VAB NAB        8.79
where Nij is the number of nearest-neighbor bonds of type ij (ij = AA, BB or AB). Let each atom have z nearest neighbors. If the solution is random, the probability that an atom site is occupied by a B atom is x, the atom fraction of B, while the probability that it contains an A atom is (1-x). Hence each A atom has, on average, z(1-x) AA bonds and zx AB bonds, while each B atom has z(1-x) AB bonds and zx BB bonds. The total energy is, then,

E = (1/2){NA z[(1-x)VAA + xVAB] + NB z[(1-x)VAB + xVBB]}   8.80
where NA is the number of A atoms, NB is the number of B atoms, and the factor 1/2 is included because each bond is counted twice in the expression in braces. Since the number of A atoms is NA = (1-x)N and NB = xN, equation 8.80 can be rewritten:

E = (Nz/2){(1-x)²VAA + x²VBB + 2x(1-x)VAB}
  = (Nz/2){(1-x)VAA + xVBB + 2x(1-x)[VAB - (1/2)(VAA + VBB)]}
  = (Nz/2){(1-x)VAA + xVBB + 2x(1-x)V}   8.81
where V is the relative binding energy

V = VAB - (1/2)(VAA + VBB)   8.82
The relative binding energy, V, has a simple physical meaning: it is the difference between the energy of an AB bond and the average of the energies of AA and BB bonds. If V is positive, the energy of the system is decreased if two AB bonds are replaced by an AA bond and a BB bond, that is, if like atoms associate preferentially with one another. In this case the energy of the system is lowered if it decomposes into A-rich and B-rich solutions. If V is negative, on the other hand, the energy is decreased if AA and BB bonds are replaced by AB bonds, and we should expect the system to prefer a solid solution that eventually orders into an AB-type compound. The energy of the random solution is graphed in Fig. 8.15 for the cases V > 0, V < 0 and V = 0.
... Fig. 8.15: The energy of the random solution for the cases V > 0, V = 0 and V < 0.
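The algebra leading from eq. 8.80 to eq. 8.81 is easy to verify numerically. In the sketch below the bond energies VAA, VBB and VAB, the coordination number z, and the atom count N are hypothetical values chosen only for illustration:

```python
# Illustrative bond energies (eV per bond) and lattice parameters; these are
# hypothetical values, not data from the text.
VAA, VBB, VAB = -1.0, -0.8, -0.7
z, N = 8, 1000   # coordination number and total number of atoms

def E_direct(x):
    """Eq. 8.80: sum over A- and B-centered bonds, halved for double counting."""
    NA, NB = (1 - x) * N, x * N
    return 0.5 * (NA * z * ((1 - x) * VAA + x * VAB)
                  + NB * z * ((1 - x) * VAB + x * VBB))

def E_regular(x):
    """Eq. 8.81: the same energy written in terms of the relative binding energy V."""
    V = VAB - 0.5 * (VAA + VBB)
    return (N * z / 2) * ((1 - x) * VAA + x * VBB + 2 * x * (1 - x) * V)
```

For these values V = VAB - (VAA + VBB)/2 = +0.2 eV > 0, so the 2x(1-x)V term raises the energy of mixing and the system prefers to decompose, as discussed above.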
Since Sdis > Sord, the random solution has the lower free energy when T is large, and is thermodynamically preferred. However, since Eord < Edis, there is a temperature low enough that the ordered state has the lower free energy. It follows that the random solid solution becomes at least metastable with respect to the ordered phase as the temperature is lowered. This is precisely the behavior observed in β-brass (CuZn). It is a solid solution with a BCC structure at high T, but an ordered compound with the CsCl structure at low T.

While eqs. 8.86-8.88 are written for a specific case, the qualitative result is general. When V < 0 the random solid solution is always at least metastable with respect to transformation into an ordered phase (or a mixture of ordered phases) at sufficiently low temperature. This result is in keeping with the Third Law of thermodynamics: when the temperature is very small, the equilibrium state of a multicomponent solid is always an ordered structure or a mixture of ordered structures.
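The competition between an ordered and a disordered state can be made concrete with a pair of linear free-energy functions. The energies and entropies below are illustrative assumptions, not the specific eqs. 8.86-8.88; the point is only the crossover temperature:

```python
import math

# Free energies F = E - T·S for hypothetical ordered and disordered states of
# the same alloy: energies in J/mol, entropies in J/(mol·K). The numbers are
# illustrative assumptions, not values from the text.
R = 8.314
E_ord, S_ord = -50_000.0, 0.0              # ordered: lower energy, lower entropy
E_dis, S_dis = -45_000.0, R * math.log(2)  # disordered: ideal mixing entropy at x = 0.5

def F(E, S, T):
    return E - T * S

# The two lines cross where E_ord - T·S_ord = E_dis - T·S_dis:
T_star = (E_dis - E_ord) / (S_dis - S_ord)

# Below T_star the ordered phase has the lower free energy; above it, the
# random solution does.
```

With these numbers the crossover falls near 870 K; the structure of the result, T* = (Edis - Eord)/(Sdis - Sord), is what matters.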
Like-atom attraction; low-temperature decomposition

If V > 0, then the energy is a convex function of x while the entropy term, -Ts, is concave. The behavior of the free energy per atom, f(x) (= F(x)/N), is plotted in Fig. 8.17 for three values of the temperature. At high temperature the entropy dominates and the function f(x) is concave; the system is a random solid solution at all compositions. At low temperature the energy dominates the behavior of f(x) at intermediate values of x but, because of the singularity in its compositional derivative, the entropy still dominates in the limits x → 0 or 1. Hence f(x) is a concave function near x = 0 and x = 1, but is convex (curved downward) at intermediate values of the composition.
... Fig. 8.17: Free energy curves for a random solution with V > 0 at three temperatures, T1 > T2 > T3.

As we shall see in the following chapter, a solid solution cannot exist when its free energy function, f(x), is convex (curved downward) in x. If we try to mix equal concentrations of the two components so that x = 0.5 at temperature T2 or T3 in Fig. 8.17, we will discover that we cannot do so; the solution will spontaneously decompose into a mixture of two solutions with different concentrations, one A-rich (x < 0.5) and one B-rich (x > 0.5). We say that the A-B solution has a miscibility gap. We can form a solid solution at all compositions at high temperature, where the configurational entropy dominates the free energy, but, if V > 0, we cannot make a solid solution at intermediate compositions at low temperature, where the energy dominates.

Note, however, that the singularity in the slope of the free energy at x = 0 and 1, which is due to the singularity in the slope of the configurational entropy, has the consequence that the free energy function is always concave when x is close to 0 or 1. This is true no matter how large V may be, that is, no matter how much the species A and B dislike one another chemically. As we shall see in the following chapter, this has the consequence that there is always some solubility for A in B and B in A. If the solution decomposes, it does not decompose into pure A and pure B, but into two solid solutions, one of which is predominantly A with some B in solution, and the other predominantly B with some A in solution. This is one example of the general thermodynamic principle that everything is at least slightly soluble in everything else.
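The miscibility-gap behavior of Fig. 8.17 follows directly from the model: the free energy per atom is the regular-solution energy of eq. 8.81 plus the ideal configurational entropy term. The sketch below (the bond energies, z, and V are illustrative assumptions) tests the sign of the curvature of f at x = 0.5; in the text's terminology, downward curvature ("convex") marks the unstable, decomposing region:

```python
import math

kB = 8.617e-5            # Boltzmann constant, eV/K
z, V = 8, 0.02           # coordination number; V > 0 (eV): like atoms attract
VAA = VBB = -1.0         # symmetric bond energies (eV), illustrative

def f(x, T):
    """Free energy per atom: regular-solution energy plus ideal mixing entropy."""
    e = (z / 2) * ((1 - x) * VAA + x * VBB + 2 * x * (1 - x) * V)
    s = -kB * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return e - T * s

def curvature(x, T, h=1e-4):
    """Numerical d²f/dx²; negative means curved downward (unstable)."""
    return (f(x - h, T) + f(x + h, T) - 2 * f(x, T)) / h**2

# High temperature: entropy wins; f curves upward everywhere → full solubility.
# Low temperature: f curves downward near x = 0.5 → miscibility gap.
c_high = curvature(0.5, 2000.0)
c_low  = curvature(0.5,  300.0)
```

Near x = 0 or 1 the entropy slope diverges, so f always curves upward there regardless of V, which is the source of the "everything is slightly soluble" principle stated above.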
8.4 EQUILIBRIUM DEFECT CONCENTRATIONS

All real crystals are defective. They not only contain impurities in solution, but are also filled with internal defects, including vacancies and interstitialcies, dislocations, and grain boundaries. It is important to recognize that vacancies and, to a much lesser extent, interstitialcies are inherent defects that are present in finite concentration in the equilibrium state of the crystal. They can be created or destroyed by processes that occur spontaneously within the solid. Moreover, there are active mechanisms in the solid that maintain the vacancy concentration at near-equilibrium values. Other crystal defects, such as dislocations and grain boundaries, have very high formation energies and essentially zero concentrations in the equilibrium state. They are non-equilibrium defects that are created during processing, and are preserved because of the slow kinetics of the mechanisms that eliminate them. To establish this, we compute the equilibrium concentration of defects in an otherwise perfect crystal.

8.4.1 The equilibrium vacancy concentration

When a crystalline solid is held at fixed temperature and pressure, its equilibria are controlled by the Gibbs free energy. The addition of a single vacancy to a crystalline solid increases its Gibbs free energy by the amount

Δgv = Δev + PΔvv - TΔsv   8.89
where Δev is the formation energy of the vacancy, Δvv is the associated volume increase, and Δsv is the entropy change which, in the case of an isolated vacancy, is due to the change in the vibrational and electronic entropies of the atoms in the neighborhood of the vacancy. In most crystalline solids it is reasonable to neglect the volume and entropy increments for the addition of a single vacancy, in which case

Δgv ≈ Δev   8.90
Let there be n vacancies in the solid, where n is sufficiently small compared to the number of atoms, N, that the vacancies do not interact significantly with one another. The free energy increment due to the vacancies is, then,

ΔG = nΔgv - kT ln(Ω) ≈ nΔev - kT[(N+n) ln(N+n) - n ln(n) - N ln(N)]   8.91
where the second term on the right-hand side is the configurational entropy, which we have evaluated with the help of eq. 7.20. N+n is the total number of lattice sites; since the number of atoms is conserved, the number of lattice sites in the solid changes when vacancies are added. The equilibrium number of vacancies (ne) minimizes the Gibbs free energy. Differentiating eq. 8.91 and setting the derivative equal to zero yields the condition

[d(ΔG)/dn]n=ne = 0 = Δev - kT[ln(N+ne) - ln(ne)]   8.92
The solution is

ne/(N+ne) = xv^e = exp(-Δev/kT)   8.93

where xv^e is the equilibrium concentration of vacancies, the fraction of lattice sites that are vacant in the equilibrium state. [Note that the left-hand side of eq. 8.92 is the definition of the chemical potential of a vacancy, regarded as a chemical species that occupies atomic positions in the crystal. Eq. 8.92 shows that, in equilibrium, the chemical potential of a lattice vacancy is zero. This is true because the number of vacancies is not conserved. Vacancies can be created and destroyed, so their concentration adjusts to minimize G.]
... Fig. 8.18: Illustration of the creation or annihilation of vacancies by exchange with adsorbed atoms at an interface.
While the vacancy concentration in a typical crystalline solid at moderate temperature is not large, it can be significant. The energy to form a vacancy, Δev, is of the order of 1 eV (electron volt) in a typical solid. An electron volt corresponds to the value of kT at about 10^4 K. Hence the vacancy concentration in a typical solid at 1000 K (≈ 700 ºC) is on the order of 10^-4, giving about 10^18 vacancies per cm³ of crystal. The equilibrium vacancy concentration drops exponentially as the temperature decreases.

At moderate temperature most solids maintain vacancy concentrations that are close to the equilibrium value. The reason is that defects within the solid act as sources and sinks for vacancies, which are reasonably mobile at moderate temperature. Efficient sources and sinks for vacancies include free surfaces, which can add or subtract vacancies by adjusting the number of adsorbed atoms (as illustrated in Fig. 8.18); grain boundaries, which can absorb or emit vacancies by reconfiguring the atoms in the boundary; and dislocations, which absorb or emit vacancies during climb, as discussed in Chapter 4.

At low temperature the vacancy mobility decreases dramatically. When a solid is quenched from high temperature, the vacancy concentration is frozen in for some time after the quench. This behavior affects atomic diffusion in the solid, as we shall discuss in the next chapter. It is also very important in the processing of semiconductor crystals such as silicon. Vacancies are electrically active defects in a semiconductor and must be held to a low concentration so that they do not alter its electrical properties. Hence, after silicon crystals are processed at high temperature, they are cooled slowly to a temperature where the equilibrium vacancy concentration is acceptably small, and only then cooled to room temperature, where Si vacancies are nearly immobile.
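Eq. 8.93 is easily evaluated with the exact Boltzmann constant (k ≈ 8.617×10^-5 eV/K). Note that the value at 1000 K then comes out nearer 10^-5 than 10^-4, because the estimate above rounds 1 eV to kT at 10^4 K; both are consistent as order-of-magnitude statements:

```python
import math

kB = 8.617e-5   # Boltzmann constant, eV/K

def vacancy_fraction(dE_v, T):
    """Eq. 8.93: equilibrium fraction of vacant lattice sites."""
    return math.exp(-dE_v / (kB * T))

# Formation energy ~1 eV, typical per the text; the neglected vibrational-
# entropy prefactor would shift these values somewhat.
x_1000K = vacancy_fraction(1.0, 1000.0)   # ~1e-5
x_300K  = vacancy_fraction(1.0,  300.0)   # ~1e-17: vanishingly small at room T
```

The exponential collapse between 1000 K and 300 K is the quantitative content of the statement that the equilibrium concentration "drops exponentially as the temperature decreases."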
8.4.2 Dislocations and grain boundaries

It is possible to compute the equilibrium concentrations of dislocations and grain boundaries just as we calculated the concentration of vacancies. However, the formation energies of these defects are so high that their equilibrium concentrations are negligible at all normal temperatures. The energy of formation of a dislocation is of the order of 1 eV for each plane through which the dislocation threads. The energy of formation of a grain boundary is a fraction of an electron volt for each atom it contains (as one can see by recognizing that a low-angle, relatively low-energy grain boundary is often an array of dislocations). These are non-equilibrium defects in normal solids at normal temperatures.

Dislocations form and multiply not only during plastic deformation, but also during crystal growth, processing, structural phase transformations, and heating or cooling, particularly when the material contains more than one phase. The principal source of dislocations in the latter cases is the mechanical stress that is introduced when different grains or phases do not quite fit together in the solid, or contract at different rates when the solid is cooled. Since dislocations are non-equilibrium defects, there is a thermodynamic driving force to eliminate them. However, a dislocation can only disappear at a position in the crystal where the crystal discontinuity (the extra half-plane in the case of an edge dislocation) can be eliminated, such as a boundary or a matching dislocation of opposite sense. Dislocations are eliminated at a finite rate at moderate temperature; the process is called recovery. However, since recovery requires solid-state diffusion, its rate is relatively slow at all temperatures and is nearly zero at low temperature.

Grain boundaries are formed during the original solidification of the solid or during recrystallization or structural transformations that reconstitute the grain structure. Grain boundaries are always non-equilibrium, high-energy defects, so there is a thermodynamic driving force to eliminate them. The smaller the grain, the higher its surface-to-volume ratio; hence larger grains grow at the expense of smaller ones to decrease the grain boundary area, causing gradual grain growth. In order for a grain to grow, atoms must move across its boundary into its interior. This diffusional process is slow at ordinary temperatures, so grain growth only occurs at an appreciable rate at high temperature. Moreover, the decrease in boundary area per incremental increase in volume becomes smaller as the grain grows. As a consequence, the rate of grain growth decreases as the grain size increases. In many cases the rate of grain growth is such that the grain size increases with the square root of the aging time.

The equilibrium state of a macroscopic solid is a single crystal, and in some cases it is possible to make single crystals by growing grains until the individual crystallites become large enough to separate and use individually. However, in most cases it is not practical to make single crystals by simply growing grains, since the rate of growth is slow, and the driving force for growth becomes vanishingly small as the grain size becomes large.
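The square-root growth law quoted above follows if the boundary velocity is proportional to the driving force, which scales as 1/d. A minimal sketch, with a hypothetical rate constant k and initial size d0:

```python
import math

# Growth rate proportional to the driving force, which scales as 1/d:
#   d(d)/dt = k/d   →   d(t) = sqrt(d0² + 2·k·t),
# the square-root law stated in the text. k and d0 are illustrative values
# in hypothetical units (µm and µm²/s); real values depend on boundary
# mobility and boundary energy.
def grain_size(t, d0=1.0, k=0.5):
    return math.sqrt(d0**2 + 2.0 * k * t)

d1 = grain_size(100.0)   # ≈ 10.0 µm
d2 = grain_size(400.0)   # ≈ 20.0 µm: 4× the time gives ~2× the size
```

The instantaneous growth rate k/d falls as d grows, which is the slowing of grain growth with increasing grain size described above.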
CHAPTER 9: PHASES AND PHASE EQUILIBRIUM

Phases and stages
Circles and cycles
Scenes that we've all seen before...
- Willie Nelson, "Phases and Stages"
9.1 INTRODUCTION

All pure (one-component) materials take on several different structures, or phases, as the temperature is varied at atmospheric pressure. At high temperature the material is a vapor; at lower temperature it condenses into a liquid phase; and at still lower temperature it freezes into a solid. Moreover, many common solids can be found in several different crystal structures. In many cases the multiple crystal structures of the solid are different equilibrium phases, as in iron, which transforms from FCC to BCC as the temperature is lowered, and in quartz, which takes on several different crystal structures as the temperature is lowered. In other cases the multiple structures are due to metastable equilibria. Common examples include diamond, which forms at high temperature and pressure but can be retained indefinitely under atmospheric conditions because of the difficulty of its structural transformation into graphite, which is the more stable form, and silica glasses, which form from the liquid and can be retained indefinitely because of the difficulty of the structural transformation to crystalline quartz.

When a system contains more than one component, it is often a multiphase mixture in which volumes that have two or more distinct structures or compositions are intermingled on a very fine scale. For example, if water that contains a small amount of salt is held at a temperature slightly below 0 ºC, it becomes a mixture of solid ice that contains only a small concentration of salt and liquid water that is relatively salty. This two-phase mixture persists with time, and hence must represent an equilibrium state.
If iron that contains a bit of carbon in solution is cooled from a high temperature at which the FCC structure is preferred to a lower temperature, the iron not only transforms in structure from FCC to BCC, but also segregates into a two-phase mixture in which small precipitates of carbide, Fe3C, appear in a parent matrix of BCC iron that has a very small carbon content in solid solution. These and many other phenomena are examples of phase equilibria and equilibrium phase transformations in solids. They can be understood and predicted from the thermodynamic behavior of materials, and can be represented graphically in equilibrium phase diagrams, which show the equilibrium phases that appear at given values of the thermodynamic variables. We treat phase equilibria and phase diagrams in this chapter.
The principles of phase equilibria help to explain why materials have the microstructures they do, and why these microstructures are often complex mixtures of two or more distinct phases. However, the equilibrium behavior gives only part of the story. To understand microstructure we must also appreciate why the equilibrium distribution of phases is often not observed, and why the phases that do appear have the shapes and distributions that they do. These phenomena reflect the kinetics of transformations in materials. Even though they are influenced by kinetics, many of the important features of real microstructures can be inferred from the equilibrium phase diagrams, and will be discussed in this chapter. Other important features can only be understood after a more detailed study of the kinetics of changes in solids, which will be presented in the following chapter.

The second law of thermodynamics provides a general criterion that governs the equilibrium of all systems: the equilibrium state of an isolated system maximizes the entropy of the system for given values of its energy, volume, and chemical content. The material systems in which we are interested usually are not isolated, but interact with the environment. As we discussed in Chapter 7, it is possible to rephrase the condition of equilibrium so that it applies to a system under the experimental conditions that pertain to it. Often the temperature and pressure are controlled by the environment, and the chemical content is fixed. In this situation the condition of equilibrium is that the Gibbs free energy of the material has a minimum value with respect to all changes that may occur at the given T, P and {N}. Mathematically, if

(ΔG)T,P,{N} ≥ 0   9.1
for all possible changes in the way the atoms are configured in the material at the given T, P, and {N}, the system is at equilibrium. Since this is the most common experimental situation, we ordinarily describe the equilibrium states of materials in terms of their Gibbs free energy.

We saw in the previous chapter that the Helmholtz free energy (F) of a material is somewhat easier to compute theoretically than the Gibbs free energy. In fact, when the material is solid it does not matter very much whether we choose the Gibbs or the Helmholtz free energy to describe its equilibrium states. The two differ by the term PV:

G = E - TS + PV = F + PV   9.2
At atmospheric pressure the PV product is very small, and the Gibbs and Helmholtz free energies are numerically almost equal. While we shall use the Gibbs free energy in the following, we can often speak of equilibrium in the solid state as representing the minimum free energy without being specific about which free energy is meant.
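The claim that the PV product is negligible for a solid at atmospheric pressure is easy to quantify. The sketch below uses approximate handbook-style numbers for iron; the molar volume and the cohesive-energy scale are assumptions for illustration, not values from the text:

```python
# Order-of-magnitude comparison of PV with the energy scale of a solid.
P = 101_325.0         # Pa (1 atm)
V_molar = 7.1e-6      # m³/mol, approximate molar volume of solid iron (assumed)
E_scale = 4.0e5       # J/mol, order of the cohesive energy (~4 eV/atom, assumed)

PV = P * V_molar      # ≈ 0.7 J/mol

# PV is roughly six orders of magnitude smaller than the bond energies that
# set E, so G = F + PV and F are numerically almost identical for a solid
# at atmospheric pressure.
ratio = PV / E_scale
```

This is why, in the solid state, one can speak of "the" free energy minimum without specifying whether G or F is meant.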
9.2 PHASE EQUILIBRIA IN A ONE-COMPONENT SYSTEM
The phases of one-component systems (including molecular solids and compounds with fixed composition) are distinguished by their structures. In both the vapor and the liquid the atoms or molecules are randomly distributed; the two phases differ in that the liquid is condensed while the vapor is not. Crystalline solid phases differ in crystal structure. Amorphous solids (glasses) are essentially continuations of the liquid state that differ only in properties. Phase relations like that between a liquid and a glass are examples of what we shall call mutations, in which one phase simply becomes another at a particular value of T and P.

The phase behavior of one-component systems can be understood by comparing the Gibbs free energies of the various phases. In particular, we can understand why the equilibrium phase changes, usually discontinuously, as the temperature is lowered at constant pressure.
... Fig. 9.1: The Gibbs free energy of a phase as a function of T.

Fig. 9.1 shows a plot of the Gibbs free energy of a phase as a function of temperature at constant pressure. The slope of the curve at any temperature is given by equation 7.48, and is equal to -S. Since S is positive, the curve slopes downward. However, as T → 0 the entropy of an ordered phase vanishes according to the Third Law; even when the phase is a non-equilibrium one that retains some entropy, the entropy becomes small. Hence the Gibbs free energy approaches the constant value E + PV as the temperature approaches zero (in the usual case PV is negligible compared to E).

9.2.1 Phase equilibria and equilibrium phase transformations

A one-component material has at least three possible phases: solid (S), liquid (L) and vapor (V). The internal energy of these phases increases in the order S → L → V, since the energy is determined by the bonding. The entropy increases in the same sequence, since the system is increasingly disordered as it changes from solid to liquid to gas. As a consequence the Gibbs free energy curves for the three phases appear roughly as shown in Fig. 9.2. At low temperature the solid has the least free energy and is the stable phase. Because of the higher entropy of the liquid, however, the free energy of the liquid drops below that of the solid at the melting temperature, Tm, and the liquid is the stable phase at higher temperature. The vapor has still higher entropy, so its free energy eventually falls below that of the liquid, at Tv, and the vapor phase is preferred at all higher temperatures.

The free energy relations that lead to multiple solid structures are essentially identical to those shown in Fig. 9.2. Suppose that a particular one-component solid has three possible crystal structures, as shown in Fig. 9.3 and labeled α, β and γ. If the β structure has the lowest energy then it will be the equilibrium phase at sufficiently low temperature. If another structure (α) has greater entropy then its free energy may drop below that of the β phase as the temperature rises. The α phase is then the equilibrium structure. The range of preference of the α and β phases in a hypothetical material is shown in Fig. 9.3. The figure also includes the free energy curve for a third phase, γ, which does not appear at equilibrium because its free energy is always above that of the preferred phase.
... Fig. 9.2: Free energies and equilibrium temperatures for solid, liquid, and vapor phases of a hypothetical material.

... Fig. 9.3: Free energy relations in a material that has three phases. Note that while phase γ does not appear at equilibrium, it can still form as a metastable phase.
9.2.2 Metastability

In addition to revealing the equilibrium state of a material, the free energy curves also suggest how metastable phases can be formed. Suppose, for example, that the material of Fig. 9.3 is made in the α phase and then cooled. The equilibrium diagram suggests that it should transform to the phase β at the temperature Tαβ. However, if the transformation is kinetically difficult, as structural transformations generally are, then phase α may be retained at temperatures below Tαβ. When this happens the free energy varies along the continuation of the α free energy curve as the temperature drops. As the system is cooled further, the thermodynamic driving force for the α → β transformation, the free energy difference between the two phases, increases monotonically. If the driving force becomes sufficient to force the transformation, then α will transform to β at a lower temperature. If it does not, then α will be retained as a metastable structure.

The possible retention of phase α in a metastable state may also make it possible to form phase γ, which does not appear in the equilibrium sequence at all. When phase α is cooled below the temperature Tαγ, there is a net thermodynamic driving force for the transformation α → γ, and this transformation will occur if it is kinetically possible. The phase γ may then transform to the equilibrium phase, β, but if the γ → β transformation is kinetically difficult, phase γ may be retained indefinitely as a metastable phase. It is possible to form γ from α at any temperature in the hatched region of the figure.

Metastable behavior of the type exhibited by the α, β and γ phases in Fig. 9.3 is common, and greatly increases the variety of structures that can be realized in engineering solids. A classic example of the use of this behavior is the formation of an amorphous film from a vapor phase. In this case the phase α is the vapor, β is the crystalline solid, and γ is the amorphous solid.
By condensing the vapor onto a cold substrate, it is brought below the temperature Tαγ at which it can transform to the amorphous phase. Since the transformation to the amorphous phase is kinetically easier, the film takes on an amorphous structure and is trapped in it.

Metastability also increases the variety of transformation paths that can be used in the processing of a material. By exploiting the metastability of the α phase one can often control the temperature at which it transforms to β, and can also sometimes force it to take an indirect path in which it first forms an intermediate structure like γ. Since the defect type and concentration in the microstructure of the final product are sensitive to the path of the transformation, metastable states are often utilized in materials processing to control the ultimate microstructure.

9.2.3 First-order phase transitions: latent heat

The phase changes that we considered above are all first-order phase transitions. One phase transforms into another because the free energy of the product phase is lower than that of the parent phase. The phases are distinct at the transition temperature, and will continue to exist as metastable phases at temperatures beyond the transition temperature unless there is an easy kinetic path that facilitates the transformation between them. Transitions of this type are called first-order phase transitions because they involve discontinuous changes in the first derivatives of the free energy function: the entropy and the volume. The substantial majority of the phase transitions that are important in engineering are of this type, including normal vapor-liquid and liquid-solid transitions, and transitions between solid phases that differ in crystal structure.

Since the entropy changes discontinuously in a first-order phase transition, the transition always involves a latent heat: heat is released or absorbed when the transformation happens. To find the latent heat, note that the equilibrium transformation occurs when the Gibbs free energies of the two phases are the same. When this is true,

ΔG = 0 = ΔE + PΔV - TΔS = ΔH - TΔS   9.3
where H is the enthalpy, H = E + PV. Hence the entropy change is

ΔS = ΔH/T ≈ ΔE/T = -Q/T   9.4
where

Q = -ΔH ≈ -ΔE   9.5
is the latent heat of the transition, the heat released to the environment when the transformation happens.

It follows from eq. 9.5 that the latent heat is positive when the transformation is from a relatively high-energy phase to a relatively low-energy one. Heat is released, for example, when a vapor condenses, a liquid solidifies, or a high-temperature solid phase transforms to a phase that is preferred at low temperature. This has the consequence, for example, that the temperature remains almost constant while a system is undergoing a phase transformation that happens on cooling, such as solidification. If the system is cooled, the rate of transformation increases, and the heat released causes the temperature to rise back to the transformation temperature. The temperature cannot increase further, since the transformation would then reverse itself.

Conversely, the latent heat is negative in a transition from a low-energy phase to a phase of higher energy, as, for example, when a liquid vaporizes or a solid melts or vaporizes. This phenomenon is used in a number of practical ways. An intelligent but overheated undergraduate pours water on his head; the evaporation of the water absorbs heat and cools him. In an identical, but more high-tech example, the exposed surfaces of missiles and spacecraft are often coated with ablative materials that vaporize at high temperature with very high latent heat. These materials are used to cool the exposed surfaces of a spacecraft during its re-entry into the atmosphere, since atmospheric friction would otherwise raise the surface temperature to values that might destroy it.

9.2.4 Transformation from a metastable state

The latent heat of a transformation is only slightly changed by heating or cooling beyond the transformation point, which reflects the fact that ΔS and ΔH are approximately constant. Hence the thermodynamic driving force for transformation from a metastable phase at a temperature T that is not too far from the equilibrium transformation temperature, T0, is

ΔG(T) = ΔH - TΔS = Q(T/T0 - 1) = (Q/T0)[T - T0]   9.6
Equation 9.6 is reasonably accurate even at temperatures well below or well above the transformation point.
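The relations in eqs. 9.4-9.6 can be checked numerically. The sketch below uses standard handbook values for the fusion of ice (ΔH ≈ 6010 J/mol, Tm = 273.15 K), which are not taken from the text, and then evaluates the driving force of eq. 9.6 at a modest superheating:

```python
# Entropy of a first-order transition from eq. 9.4: ΔS = ΔH/T0.
# Handbook values for the melting of ice (solid → liquid, ΔH > 0):
dH = 6010.0     # J/mol, enthalpy of fusion of ice
T0 = 273.15     # K, equilibrium melting temperature

dS = dH / T0    # ≈ 22.0 J/(mol·K)
Q = -dH         # eq. 9.5: melting absorbs heat, so Q (heat released) is negative

def dG(T):
    """Eq. 9.6: driving force for the transformation at temperature T (J/mol)."""
    return (Q / T0) * (T - T0)

# Superheating above T0 makes ΔG negative, favoring melting; by symmetry,
# undercooling below T0 would favor the reverse (freezing) reaction.
drive = dG(283.15)   # ≈ -220 J/mol at 10 K of superheating
```

Note that ΔG vanishes at T = T0, grows linearly with the departure from T0, and changes sign across it, exactly as the free-energy-curve picture of Fig. 9.3 requires.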
9.3 MUTATIONS

9.3.1 The Nature of a Mutation

The phase changes we have considered thus far are first-order transitions that connect states that are physically different from one another. Since both of the phases in a first-order transition are stable at the equilibrium transformation point, metastability is always possible for at least a small range of conditions around the transformation point. There is, however, a second type of phase transformation, in which two phases that are otherwise distinct become identical at the transformation point. Rather than transforming discontinuously, the two phases simply mutate into one another at the transformation point, although they have quite different properties in states some distance away from it. We therefore call these transformations mutations.
... Fig. 9.4: (a) The free energy curve of a system that passes through a mutation at Tc. (b) Possible behavior of the specific heat near the mutation.

Fig. 9.4 shows the free energy curve of a system that mutates at the temperature Tc. The free energy curve is uninteresting and uninformative unless it is examined in minute detail. The free energy and its first derivatives, the entropy and volume, have well-defined values at the mutation, so the curve is continuous there to first order. A mutation affects the second and higher derivatives of the free energy. The specific heat is ordinarily singular at Tc, and usually behaves near Tc as shown in Fig. 9.4b. The singularities in the thermodynamic properties at Tc have the consequence that both phases are unstable there. Metastability is impossible; the two phases become one another at Tc.

The kind of transition we have called a mutation here is called by several other names in the literature. The most common is second-order transition, since the transition affects the second or higher derivatives of the free energy. Mutations whose specific heat near the transition point resembles Fig. 9.4b (most do) are sometimes called λ-transitions because of the shape of the curve.

9.3.2 Common Transitions that are Mutations

The simplest mutation is the glass transition we discussed in Chapter 5. At the glass transition point, Tg, the liquid simply becomes solid. The free volume available to the molecules in the liquid is no longer sufficient for them to move around one another, so they are frozen into position. In contrast to the case of solidification into a crystalline solid, there is no discontinuity in the volume at a glass transition. The discontinuities at the glass transition occur in physical properties; for example, the coefficient of thermal expansion changes discontinuously there.
Fig. 9.5: Examples of mutations: (a) ferromagnetic crystal; (b) ordered crystal structure of β'-CuZn; (c) ferroelectric displacement in BaTiO3.

There are also important examples of mutations in crystalline solids. However, the requirement that two otherwise distinct phases become identical at a mutation severely restricts the kinds of phases that can be connected by one. Their crystallographic or physical symmetries must satisfy a relation called the Landau symmetry rule. Four classes of crystalline phase transitions that are important in materials science satisfy this rule and are always or often mutations. They are magnetic transitions from paramagnetic to ferromagnetic or antiferromagnetic order, some chemical ordering reactions, ferroelectric transitions, and superconducting transitions. In each case the mutation introduces a type of order that is not present above the mutation temperature. The first three classes of mutation are illustrated in Fig. 9.5.
Ferromagnetism and antiferromagnetism

The mutation that is probably most familiar is the ferromagnetic transition that occurs in many metals, alloys and ceramics. We shall discuss this transition in some detail when we come to consider magnetic properties. For the moment it is sufficient to recognize that magnetic materials contain transition metals, such as Fe and Ni, or rare earth elements, that have partly filled d- or f-shells in their electron configurations. The electrons in the partly filled shells have aligned spins, so the ion core of the atom has a net magnetic moment. At high temperature the magnetic moments of adjacent atoms are uncorrelated (to maximize the entropy) and the crystal has no net magnetic moment. In this state it is said to be paramagnetic. At a critical temperature, called the Curie temperature, Tc, the magnetic moments spontaneously align as shown in Fig. 9.5a. At the Curie point itself the degree of alignment is zero. However, at a temperature very slightly below Tc there is a measurable degree of magnetic order. The degree of alignment of the magnetic moments can be measured by the value of a long-range order parameter, η, that is zero when the spin directions are random, but takes a finite value when the spins are aligned and approaches 1 in the limit of complete alignment. The long-range order parameter is zero at and above Tc, but finite at any lower temperature, as illustrated in Fig. 9.6a. At Tc the ferromagnetic and paramagnetic states become identical; the transition is a mutation at Tc.
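The continuous growth of the order parameter just below Tc can be illustrated numerically. The sketch below is not from this chapter: it assumes the standard Weiss mean-field model of a ferromagnet, in which the reduced magnetization obeys the self-consistency condition η = tanh(ηTc/T), and solves that equation by fixed-point iteration. Note how η rises continuously from zero just below Tc, the signature of a mutation.

```python
import math

def weiss_magnetization(t_reduced, tol=1e-12, max_iter=10000):
    """Solve eta = tanh(eta / t) by fixed-point iteration, where t = T/Tc.
    Returns the spontaneous (positive) solution; zero at and above Tc."""
    if t_reduced >= 1.0:
        return 0.0
    eta = 1.0  # start from complete alignment
    for _ in range(max_iter):
        eta_new = math.tanh(eta / t_reduced)
        if abs(eta_new - eta) < tol:
            break
        eta = eta_new
    return eta

# eta is nearly 1 far below Tc and falls continuously to 0 at Tc:
for t in (0.2, 0.8, 0.99, 1.0, 1.2):
    print(f"T/Tc = {t:4.2f}  eta = {weiss_magnetization(t):.4f}")
```

The iteration converges because the map η → tanh(η/t) is a contraction near its stable fixed point; just below Tc the convergence is slow, which is why a generous iteration limit is used.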
Fig. 9.6: The behavior of the long-range order parameter (η) near (a) a mutation; (b) a first-order transition.

All transition metals and rare earth elements with unpaired inner-shell electrons have magnetic moments. It is a consequence of the Third Law of thermodynamics that these moments must align at sufficiently low temperature. However, alignment does not always lead to ferromagnetism. Adjacent magnetic moments can also align anti-parallel, in which case they cancel one another and the crystal has no net magnetic moment. This type of order is called antiferromagnetism, and appears through a mutation at a critical temperature called the Néel point. There are even materials that are ferromagnetic and antiferromagnetic at the same time; this behavior is called ferrimagnetism. These are materials, like the ferrites, that have antiferromagnetic order, but are chemically ordered so that species with different magnetic moments appear on adjacent sites, and the net magnetic moment is non-zero. The Curie and Néel points of these materials are the same.
Chemical order

A second class of mutations includes chemical ordering reactions like the one that leads to the β'-CuZn structure (CsCl) that was discussed in Chapter 3 and is drawn in Fig. 9.5b. Above the transition point the crystal is disordered; all sites have the same probability of being filled by either chemical species. At the transition the lattice sites divide into two sets, each of which is preferentially occupied by one species. In the example given, β'-CuZn, the corner sites of the basic BCC cell are preferentially filled by one species while the center sites are filled by the other. Let x be the composition of the solute in CuZn, chosen so that x ≤ 0.5, and let the solute fill the body-centered sites. Then the long-range order parameter (η) is defined by the equation

x1 = x(1 + η)
9.7
where x1 is the fraction of body-center sites filled by solute atoms. Conservation of solute gives the solute fraction on the corner sites as x2 = 2x - x1 = x(1 - η). As T → 0, η → 1 and all the solute atoms are in center sites. As T → Tc there are two possibilities. If η → 0 continuously, as in Fig. 9.6a, the ordering reaction at Tc is a mutation; if η is discontinuous at Tc, as in Fig. 9.6b, the ordering reaction is a first-order transition. Only certain ordering reactions can be mutations, since only these have symmetries that satisfy the Landau rule. An ordering reaction that can be a mutation may still be a first-order transition for some systems or under some conditions. Theory tells us which reactions can be mutations; we must rely on more precise analysis or experiment to know which of these are mutations, and under what conditions. For example, the ordering reaction that creates β'-CuZn from the disordered BCC solution in the Cu-Zn system is a mutation, the ordering reaction that creates the Ni3Al structure from disordered FCC is always first-order, and the ordering reaction that creates Fe3Al in the Fe-Al system is a mutation for one range of composition and temperature and a first-order transition for another.
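As a quick numerical check of these definitions, the hypothetical helper below computes the sublattice solute fractions from x and η, using x1 = x(1 + η) for the body-center sites and solute conservation for the corner sites.

```python
def sublattice_fractions(x, eta):
    """Solute site fractions on the two sublattices of a B2 (CsCl-type)
    ordered BCC structure, per the Bragg-Williams definition x1 = x(1 + eta).
    x  : overall solute atom fraction (x <= 0.5)
    eta: long-range order parameter, 0 (random) to 1 (fully ordered)"""
    x1 = x * (1 + eta)   # body-center sites, preferred by the solute
    x2 = 2 * x - x1      # corner sites, fixed by solute conservation
    return x1, x2

# Stoichiometric CuZn (x = 0.5): full order puts all solute on center sites
print(sublattice_fractions(0.5, 1.0))   # (1.0, 0.0)
# eta = 0 recovers the disordered solution: both sublattices at x
print(sublattice_fractions(0.4, 0.0))   # (0.4, 0.4)
```

For any η the two sublattice fractions average to the overall composition x, as they must.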
Ferroelectric transitions
The third class of mutations illustrated in Fig. 9.5 is the ferroelectric transition. The classic examples of materials with ferroelectric behavior are perovskite crystals like barium titanate (BaTiO3), which is drawn in Fig. 9.5c. The cations in BaTiO3 have a β'-CuZn configuration, which is drawn with Ti4+ in the center of the cell and Ba2+ at the corners. At high temperature the Ti ion has an average position precisely in the center of the cube. However, at low temperature the Ti ion is slightly displaced from the cube center so that the cell has a permanent dipole moment. The displacement is shown, though greatly exaggerated, by the arrow in Fig. 9.5c. The displacements in neighboring cells are correlated so that the crystal has a net dipole moment and a very high dielectric constant. This is the phenomenon of ferroelectricity, which has technological applications in many electronic devices. The ferroelectric transition is a mutation. Letting the order parameter, η, measure the average displacement of the central ion as a fraction of its displacement in the limit T → 0, the order parameter behaves as shown in Fig. 9.6a. It vanishes at Tc, the Curie point for the transition.
Superconducting transitions

The final class of mutations that is of great interest in materials science is the superconducting transition, which is discussed in some detail in Chapter 19. While the mechanism of superconductivity is not fully understood in all cases (the mechanism in the high-Tc oxides is still under dispute), the normal source of superconductivity is the formation of coupled pairs of electrons (Cooper pairs) that also couple to lattice vibrations (phonons) so that they can move through the crystal without being scattered. The ordering reaction is the formation of Cooper pairs, and it occurs through a mutation at a specific temperature, the superconducting critical temperature, Tc.
9.4 PHASE EQUILIBRIA IN TWO-COMPONENT SYSTEMS

When a system contains more than one component it is not always meaningful to speak of the single phase that minimizes its free energy. The equilibrium state can also be a mixture of two or more phases with different compositions. While the conditions of equilibrium require that the temperature and chemical potentials be the same everywhere in the system, it is possible to have the same values of the chemical potentials in two or more phases that have very different compositions. These phases can then coexist at equilibrium.

We shall confine this discussion to binary (two-component) systems, since binary systems are relatively easy to understand and exhibit most of the new phenomena that appear in multicomponent systems, such as the coexistence of phases of different composition. Moreover, systems of engineering interest can often be treated as binary systems, either because they are essentially binary or because the phenomena of interest involve changing the content of one component while the remainder of the system is
fixed in its chemistry. Phase equilibria in two-component systems are often presented in binary phase diagrams, which are simply maps that show which phases appear at given values of the temperature and composition. To understand the equilibrium relations that give phase diagrams their characteristic forms it is necessary to understand the concept of free energy curves and the common tangent rule.

9.4.1 The free energy function

Consider a two-component system for which the temperature, pressure and composition are controlled. Let the two components be labeled A and B, and let component A be taken as the solvent, or reference component. If the mole fraction of the solute, B, is given by the variable x, the mole fraction of A is (1-x). The equilibrium of the system is governed by the Gibbs free energy. The Gibbs free energy per mole is given by the function

g = g(T,P,x)
9.8
To find the partial derivatives of the molar free energy, begin from equation 7.45 of Chapter 7:

dG = - SdT + VdP + µAdNA + µBdNB
9.9
If we fix the total mole number, N = NA + NB, and divide through by it, the result is

dg = - sdT + vdP + µAd(1-x) + µBdx = - sdT + vdP + (µB - µA)dx
9.10
Defining the relative chemical potential,

Δµ = µB - µA
9.11
the partial derivatives of the function g(T,P,x) are

(∂g/∂T)P,x = - s
9.12
(∂g/∂P)T,x = v
9.13
(∂g/∂x)T,P = Δµ = µB - µA
9.14
We have already studied the behavior of the free energy function as T and P are varied. We now study the behavior of the function g(x) when the temperature and pressure are fixed. Three useful general statements can be made about the free energy function, g(x), that governs a particular phase at given T and P.

First, the free energy curve of a given phase of the system must be concave upward, as drawn in Fig. 9.7. The slope of the curve at a given point (say, x1) is the relative chemical potential, Δµ, defined by equation 9.14. The condition of concavity is that

(∂²g/∂x²)T,P = (∂Δµ/∂x)T,P > 0
9.15
which is the condition of local stability. If 9.15 is not satisfied for a state of the system then, as we shall demonstrate below, that state is unstable with respect to decomposition into regions of slightly different composition and can only exist as a transient state in an ongoing transformation.
Fig. 9.7: The free energy curve of a phase of a binary system.

There is some subtlety in the definition of a phase, since the free energy function of a continuous set of states of a system, for example, a random solution of two atomic species on the sites of an FCC lattice, may include unstable states whose free energies can be computed or even measured by achieving them in a transient sense. One example is the simple binary solution whose free energy was computed in the previous chapter. When the temperature is low and the relative bonding potential is positive, V > 0, its free energy curve is like that shown in Fig. 9.8, with two minima and an intervening maximum as x is varied from zero to one. All states in the region shown shaded in the figure, where ∂Δµ/∂x < 0, are unstable and cannot persist. Hence these states cannot meaningfully be ascribed to any phase. The stable states, where ∂Δµ/∂x > 0, are divided into two sets that fall in different ranges of composition. The corresponding portions of the free energy curve are marked α1 and α2 in the figure. Not only are these states stable; we shall see below that states of α1 can coexist in equilibrium with states of α2 to form two-phase mixtures. Hence a free energy curve like that shown in Fig. 9.8 contains two distinct phases; each separate concave segment of the free energy curve represents a different phase.
Fig. 9.8: Free energy curve of a system that has two distinct phases separated by a continuous set of unstable states.

An important theorem follows from this definition of a phase of a binary system:
two distinct examples of the same phase can never be in equilibrium with one another.

This important result is a consequence of the conditions of equilibrium. If two states of the same phase were in equilibrium then they would have to have the same values of T and Δµ. But since ∂Δµ/∂x > 0 for a given phase, the chemical potential increases monotonically with composition. Two states of the same phase cannot have the same chemical potential, Δµ, and hence cannot be in equilibrium with one another. When a system contains more than one distinguishable state at equilibrium, these states must be different phases.

The second universal property of the free energy curve is its behavior in the limit of zero concentration (x → 0). It can be shown that as x → 0 the relative chemical potential takes the form

Δµ = Δµ0(T,P) + RT ln[x/(1-x)]
9.16
where Δµ0(T,P) is a function of the temperature and pressure only. We shall not prove this relation, which is due to Gibbs, but note that it is obeyed by the model solid solution studied in Chapter 8. Eq. 9.16 has the consequence that the free energy curve asymptotically becomes parallel to the g-axis as x → 0, as drawn in Figs. 9.7 and 9.8. Since eq. 9.16 holds whatever the chemical nature of the solvent and solute, it has the consequence that everything is at least slightly soluble in everything else.
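Eq. 9.16 can be checked numerically against a model free energy. The sketch below assumes an ideal binary solution, g = (1-x)gA + xgB + RT[x ln x + (1-x) ln(1-x)], with hypothetical pure-component free energies gA and gB (this model, for which Δµ0 = gB - gA, is an assumption, not a result from this section), and compares a finite-difference slope of g(x) with the logarithmic form of Δµ.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def g_ideal(x, T, gA=0.0, gB=5000.0):
    """Molar free energy of an ideal binary solution (model assumption):
    g = (1-x)gA + x gB + RT[x ln x + (1-x) ln(1-x)]"""
    return ((1 - x) * gA + x * gB
            + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x)))

def dmu_analytic(x, T, gA=0.0, gB=5000.0):
    """Relative chemical potential predicted by eq. 9.16, with
    dmu0 = gB - gA for this model."""
    return (gB - gA) + R * T * math.log(x / (1 - x))

# Central finite-difference slope of g(x) matches eq. 9.16 closely
T, x, h = 1000.0, 0.01, 1e-7
slope = (g_ideal(x + h, T) - g_ideal(x - h, T)) / (2 * h)
print(slope, dmu_analytic(x, T))  # the two values agree to several digits
```

The logarithm drives Δµ to -∞ as x → 0, which is the analytic statement that the free energy curve turns parallel to the g-axis at the pure-component limit.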
Fig. 9.9: The behavior of the free energy curve as T decreases toward zero: (a) solution; (b) compound stoichiometric at x0.

The third universal property of the free energy curve concerns its behavior in the limit of zero temperature. The Third Law of thermodynamics requires that the entropy of an equilibrium phase vanishes (or reaches a small constant value in the case of a metastable equilibrium) in the limit of zero temperature. This has the consequence that the free energy curve develops a sharper and sharper trough as T → 0, as illustrated in Fig. 9.9. When the phase is a random solution the free energy curve gradually folds onto the axis, as shown in Fig. 9.9a. When the phase is a compound with a stoichiometric composition, x0, the free energy curve becomes more and more sharply centered about the equilibrium composition, as shown in Fig. 9.9b. This behavior has the consequence that solubility disappears as T approaches zero; the only possible phases are pure components or stoichiometric compounds.

9.4.2 The common tangent rule

In a binary system two-phase, three-phase and four-phase equilibria are possible. However, when the pressure is fixed it is extremely unlikely that more than two phases will appear at equilibrium. Four-phase equilibrium requires that the pressure, temperature and relative chemical potential have specific values. It is extremely unlikely that the pressure of interest for a particular system would be the four-phase equilibrium pressure. At any other pressure the maximum number of possible phases is three. But when the pressure is fixed, three-phase equilibria occur at specific values of T and Δµ. Since Δµ = Δµ(T,x) at given P, three-phase equilibrium requires that the temperature and composition have mathematically precise values. But it is impossible to control these variables to mathematically precise values in a real system. Hence only two-phase equilibria are ordinarily observed.
Two-phase equilibria occur when the Gibbs free energy of a two-phase mixture is lower than that of either phase alone. We can find a general mathematical condition that must be satisfied for two-phase equilibrium directly from the conditions of equilibrium and stability. However, there is an alternative, geometric method that is less formal and somewhat easier to visualize.
Fig. 9.10: Free energy curves for a binary system. The overall composition is x. A possible two-phase state would include α at composition xα and β at xβ. The free energy of this state is marked by the dot in the figure.

Consider a binary system that has two possible phases, α and β, at given T, P. Possible free energy curves for the two phases are drawn in Fig. 9.10. Let the overall composition of the system be x, as indicated by the vertical dashed line in the figure. Since x is beyond the limit of the β free energy curve, the system cannot be in a single-phase state of β. However, it can be either in a single-phase state of α with composition x, or in a two-phase state that is a mixture of α and β phases with different compositions. The equilibrium state is the one that minimizes the Gibbs free energy.

To determine whether a two-phase state minimizes the free energy we compute the free energy of an arbitrary two-phase mixture with net composition x. Since states of phase α cannot be in equilibrium with one another, we need only consider equilibria between states of phase α and states of phase β. The states must be on opposite sides of the composition, x, so that the average composition can be x. The free energy of a two-phase mixture that has a mole fraction fα of phase α at composition xα and a fraction fβ of β at composition xβ is

g = fαgα(xα) + fβgβ(xβ) = gα(xα) + fβ[gβ(xβ) - gα(xα)]
9.17
where we have used the condition fα + fβ = 1. The mole fraction of the β phase can be found from the condition that the overall composition is x:

x = fαxα + fβxβ = xα + fβ(xβ - xα)
9.18
from which it follows that

fβ = (x - xα)/(xβ - xα)
9.19
Equation 9.19 is called the lever rule, since the fraction of a phase, say fβ, is equal to the length on the concentration axis, x - xα, on the far side of the fulcrum at x, divided by the total length, xβ - xα. It follows from equation 9.17 and the lever rule that
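A minimal sketch of eq. 9.19 as code, with hypothetical phase-boundary compositions:

```python
def lever_rule(x, x_alpha, x_beta):
    """Phase fractions for overall composition x in a two-phase field
    bounded by x_alpha and x_beta (eq. 9.19)."""
    f_beta = (x - x_alpha) / (x_beta - x_alpha)
    return 1.0 - f_beta, f_beta

# Overall x = 0.30 midway between x_alpha = 0.10 and x_beta = 0.50:
f_a, f_b = lever_rule(0.30, 0.10, 0.50)
print(f_a, f_b)  # each phase makes up half of the system
```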
the free energy of a two-phase mixture of α with composition xα and β with composition xβ is given by the intersection of a straight line connecting the free energies of the two states on a g-x plot with a vertical line at the average composition, x.

This relation is illustrated in Fig. 9.10. The equilibrium state of a binary system of overall composition, x, is given by the phase or two-phase mixture that intersects a vertical line at x at the lowest value of g. There are two cases.

First, let the composition be such that a tangent line to the free energy curve of the phase with least free energy never intersects the free energy curve of the other. This situation is diagrammed in Fig. 9.12 for a case in which phase α has the least free energy. As can be seen by inspecting the figure, in this case there is no two-phase combination of α and β that leads to a free energy below that of the α phase alone. This result is an alternative proof of the general theorem that was established in the previous section. When the tangent to the free energy curve of the α phase at x never touches the free energy curve of the β phase, phase α is stable with respect to the formation of β and the equilibrium state is single-phase.
Fig. 9.12: Free energy relations for a binary system at a composition where the α phase is stable.

Second, let the composition be such that a tangent line to the phase with the lower free energy at x cuts the free energy curve of the other phase, as illustrated in Fig. 9.13. In this case it is simple to find two-phase states that lead to free energies lower than that of the homogeneous α phase. It can be seen by inspection that the two-phase mixture that provides the least free energy at composition x is a mixture of phases α and β with the compositions xα and xβ indicated in the figure. These are the states of α and β that
are connected by a common tangent, a straight line that just touches the free energy curves of both phases.
Fig. 9.13: Free energy relations in a binary system at a composition where a two-phase equilibrium is stable.

Note that the same result holds for every composition between xα and xβ, that is, every composition of the system that is internal to the common tangent. Throughout this range of composition the least free energy is obtained when the system is a two-phase mixture of phase α with composition xα and phase β with composition xβ. The states of the phases in the two-phase mixture are the same for all x; however, their proportions change according to the lever rule, eq. 9.19. The rule that governs this behavior is called the common tangent rule:
if the composition of a binary system is internal to a common tangent between the free energy curves of two phases, and if the common tangent line lies below the free energy curve of any other possible phase, then the equilibrium state of the system is a two-phase mixture in which the states of the two phases are the states at the terminal points of the common tangent and the fractions of the two phases are given by the lever rule.

9.4.3 The phases present at given T, P

We are now in a position to construct the minimum free energy curve for a system and find the equilibrium phases as a function of composition. To do this we construct the common tangents between the free energy curves of the possible phases. The states of least free energy lie on a curve that can be drawn by connecting segments of the free energy curves of the individual phases with their common tangents.
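The common tangent construction can be carried out numerically. The sketch below assumes two hypothetical parabolic free energy curves and finds, by brute-force grid search over the two phase compositions, the mixture of least free energy at a fixed overall composition. The minimizing compositions are the terminal points of the common tangent (analytically xα ≈ 0.208 and xβ ≈ 0.808 for these particular parabolas).

```python
def g_alpha(x):
    return 50.0 * (x - 0.2) ** 2          # hypothetical curve, minimum at x = 0.2

def g_beta(x):
    return 50.0 * (x - 0.8) ** 2 + 0.5    # hypothetical curve, minimum at x = 0.8

def best_mixture(x, n=401):
    """Grid search for the two-phase mixture of least free energy at overall
    composition x; returns (g_min, x_alpha, x_beta)."""
    best = (g_alpha(x), x, x)             # single-phase alpha as starting candidate
    grid = [i / (n - 1) for i in range(n)]
    for xa in grid:
        if xa >= x:
            break                         # alpha must lie below the overall x
        for xb in grid:
            if xb <= x:
                continue                  # beta must lie above the overall x
            f = (x - xa) / (xb - xa)      # lever rule for the beta fraction
            g_mix = (1 - f) * g_alpha(xa) + f * g_beta(xb)
            if g_mix < best[0]:
                best = (g_mix, xa, xb)
    return best

g, xa, xb = best_mixture(0.5)
print(round(xa, 3), round(xb, 3))  # close to the analytic tangent points
```

Minimizing the mixture free energy over both phase compositions is exactly the common tangent condition: at the minimum, both curves share the tangent line whose value at x is the mixture free energy.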
Fig. 9.14: The minimum free energy curve and equilibrium phases for a binary system at given temperature and pressure.

The minimum free energy curve for an example binary system that contains two stable phases is drawn in Fig. 9.14. When the composition of this system is x < xα, the equilibrium state at the given values of T and P is homogeneous phase α. When the composition is x > xβ the equilibrium state is homogeneous phase β. When the composition is in the range xα < x < xβ the equilibrium state is a two-phase mixture of α and β in which phase α has composition xα and phase β has composition xβ. The relative fractions of the two phases are given by the lever rule.

It is common to find three or more phases in a binary system as the composition is varied at given temperature and pressure. Such a case is diagrammed in Fig. 9.15. Its analysis is straightforward. To find the minimum free energy curve we construct the common tangents between the free energy curves of each of the phases. The common tangent lines join segments of the free energy curves of the individual phases. If a segment of a single-phase curve has the lowest free energy for a given composition, x, then the equilibrium state at that composition is a homogeneous state of the preferred phase. If a common tangent has the lowest free energy at x then the equilibrium state is a two-phase mixture. The compositions of the two phases are the compositions of the states that are connected by the common tangent. The mole fractions of the two phases are determined by the lever rule.
Fig. 9.15: Minimum free energy curve and phase relations in a binary system in which three phases appear at T, P.

In the case illustrated the equilibrium state of the system is a homogeneous α solution when 0 ≤ x < x1, a two-phase mixture of α (composition x1) and γ (composition x2) when x1 < x < x2, a homogeneous γ phase when x2 < x < x3, a two-phase mixture of γ (composition x3) and β (composition x4) when x3 < x < x4, and a homogeneous β solution when x4 < x ≤ 1. More complex situations are treated in the same way.

Note, however, that it is possible for three phases, say α, γ and β in Fig. 9.15, to touch the same common tangent line. For given pressure, this happens at a unique value of the temperature; since the phases have different entropies, their free energy curves shift by different amounts as the temperature changes. Since it is impossible to control an exact temperature, three-phase equilibrium is rarely important in the engineering sense. If three phases touch the common tangent all three can coexist at equilibrium. The compositions of the three phases are given by the compositions at which their free energy curves touch the tangent, just as in the case of two-phase equilibrium. However, the lever rule does not apply, and the relative fractions of the phases cannot be determined from the information contained in a plot like Fig. 9.15.

9.4.4 Equilibrium at a congruent point

In a one-component system two phases are in equilibrium when their free energies are equal. Their equilibrium is said to be congruent because they have the same composition and can transform into one another without changing composition. A similar situation often occurs in two-component systems; the free energy curves of two different phases touch at particular values of T and x so that the two phases are in equilibrium there. An example of the situation that leads to a congruent point is drawn in Fig. 9.16b.
In order that the two-phase equilibrium at the congruent point be an equilibrium state, the free energy curves of the two phases must touch without crossing; hence they have the same common tangent there (Δµα = Δµβ). However, since the phases are distinct, ∂Δµα/∂x ≠ ∂Δµβ/∂x, so they touch only at the congruent point. It follows that if α and β have a congruent point at a given temperature, one of the phases (say, β) is present in the equilibrium state only at the congruent composition; α is the equilibrium phase at all surrounding compositions.
Fig. 9.16: Possible relation between the free energy curves of phases α and β that have a congruent point at (T0,x0). (a) T > T0; phase α is stable. (b) T = T0; phase α is stable, two-phase equilibrium at x0. (c) T < T0; α, β and α+β phase fields appear.

The usual reason for the appearance of a congruent point is that the system has two phases of different entropy whose free energy curves pass through one another as the temperature is varied, and first touch at some non-zero value of x. The situation is illustrated in Fig. 9.16 for the case in which α is a high-temperature phase whose free energy curve is penetrated by that of the low-temperature phase β as the temperature decreases. Above the congruent point, that is, when T > T0, phase α is stable at all compositions near x0 (Fig. 9.16a). At the congruent point the two free energy curves just touch (Fig. 9.16b). Just below the congruent point the α phase appears on both sides of the β phase, with two-phase regions separating them (Fig. 9.16c).

9.4.5 Equilibrium at the critical point of a miscibility gap

Sometimes a phase that is stable develops an instability at an interior point of its composition range as the temperature changes, so that it decomposes into two phases. This behavior is exhibited by many systems that are solutions at high temperature, but decompose on cooling. An example is the model solution discussed in Chapter 8. There are also unusual systems (long-chain polymers or complex organic systems) that decompose on heating. The temperature and composition at which the instability that leads to decomposition first appears is called the critical point of a miscibility gap, since a single-phase solution divides into two phases of distinct composition (i.e., the components become immiscible) as the state of the system passes through the critical point. An example of the behavior of the free energy function near the critical point of a miscibility gap is shown in Fig. 9.17.
In this case a high-temperature solution decomposes on cooling. When the temperature is above the critical point, T > Tc, a homogeneous α solution is stable everywhere near the critical composition, xc; that is, ∂Δµα/∂x > 0 in this composition range. As the critical point is approached, ∂Δµα/∂x decreases near xc, and vanishes at xc when the critical temperature, Tc, is reached. At Tc the α phase is stable everywhere with the possible exception of the isolated point at xc. When the temperature is below Tc there is a range of compositions about xc for which
∂Δµ/∂x < 0, so the homogeneous state is unstable. The single α phase has decomposed into two distinct phases, α' and α", which produce a two-phase equilibrium at compositions near xc. Above Tc the equilibrium state is a single-phase solution; the two components are miscible. Below Tc the equilibrium state near xc is a two-phase mixture; the components are immiscible in this composition range. The temperature, Tc, and composition, xc, define the critical state at the miscibility gap.
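The critical point can be made concrete with a regular-solution model (essentially the simple solution of Chapter 8; the interaction parameter Ω below is hypothetical). For g = Ωx(1-x) + RT[x ln x + (1-x) ln(1-x)], the stability condition ∂²g/∂x² > 0 first fails at x = 0.5 when T falls to Tc = Ω/2R:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def d2g_dx2(x, T, omega):
    """Second composition derivative of the regular-solution molar free energy
    g = omega*x*(1-x) + RT[x ln x + (1-x) ln(1-x)]  (model assumption)."""
    return -2.0 * omega + R * T / (x * (1 - x))

omega = 20000.0          # J/mol, hypothetical interaction parameter
Tc = omega / (2.0 * R)   # critical temperature of the miscibility gap
print(f"Tc = {Tc:.0f} K")

# Just above Tc the solution is stable at x = 0.5; just below, it is unstable:
print(d2g_dx2(0.5, Tc + 10, omega) > 0)   # True
print(d2g_dx2(0.5, Tc - 10, omega) > 0)   # False
```

Since RT/[x(1-x)] is smallest at x = 0.5, that is where ∂²g/∂x² first touches zero on cooling, which fixes both xc = 0.5 and Tc = Ω/2R for this symmetric model.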
Fig. 9.17: Behavior of the free energy curve of a phase that decomposes at a miscibility critical point. (a) T > Tc. (b) T = Tc; α is stable except at the point xc. (c) T < Tc; two distinct phases, α' and α", have a two-phase equilibrium at xc.
9.5 BINARY PHASE DIAGRAMS

Equilibrium phase relations are represented by phase diagrams, which are maps that present the equilibrium phases as a function of the thermodynamic variables. The simplest phase diagrams are binary phase diagrams, which show the equilibrium phases of a two-component system as a function of the temperature, T, and composition, x. While the Gibbs free energy of a two-component system also depends on the pressure, the pressure is ordinarily fixed by the atmosphere. Binary phase diagrams have the advantage that they can be drawn in two dimensions.

While most engineering materials are multicomponent systems, binary phase diagrams are often useful for analyzing their equilibria. Many important materials have only two dominant components, and can hence be treated as binary systems that are perturbed by the addition of other components as minor solutes or impurities. Examples include most alloy steels, which are solutions of Fe with C, Ni, Mn or Cr, with third, fourth, and often fifth, sixth and seventh species included as minor alloying additions. Other important engineering materials can be approximated as two-component systems in which the components are compounds. Examples include many of the oxide ceramics, which are approximately binary systems of oxide compounds. Still other engineering materials can be approximated as two-component systems in which one element is added against a fixed background provided by the other elements. An example that is of great current interest is the high-temperature superconductor, YBa2Cu3O6+δ, whose variable
oxygen content is expressed by the inclusion of the variable, δ, in the chemical formula, where δ varies between 0 and 1.

The solid phases that appear when two elements are joined are of two types. The first type is a primary solution, in which the basic structure of the phase is the structure of the pure component, for example, phase α of component A, and the second element, B, is added to form a substitutional or interstitial solution. Since every element has at least a small solubility in every possible phase, a primary solid solution is always formed when a sufficiently small quantity of B is added to a phase α of the pure component, A, or when a sufficiently small quantity of A is added to phase β of the pure component, B. The second type of solid phase is an ordered compound, which is an ordered arrangement of A and B over the sites of a crystal lattice in stoichiometric proportions that are expressed by the chemical formula AxBy. Ordered compounds also give rise to solid solutions. It is always possible to add some excess of A or B to the compound AxBy to create a nonstoichiometric compound, which is essentially a solid solution of A or B in AxBy.

In the previous section we discussed how the equilibrium phase fields at any given temperature and pressure are determined by the free energy curves. The phase diagram in the T-x plane is a plot of the equilibrium phase fields as the temperature is varied at fixed pressure. The phase fields, or regions of T and x over which the primary solid solutions or compounds exist at equilibrium, are separated by two-phase fields in which two phases coexist. As discussed in the previous section, the two-phase fields are a consequence of the common tangent rule. It is also possible for three phases to be in equilibrium at a particular value of the temperature, although three-phase equilibrium is almost never observed in practice since the temperature cannot be precisely controlled.
Because of the possibility of forming ordered compounds at intermediate compositions, the phase diagrams of most binary systems include a number of distinct phases separated by two-phase regions and are very complicated in their appearance. Whatever its complexity, however, a binary phase diagram is always just a map of the equilibrium phases of the system A+B that contains three pieces of information: (1) the phase or phases present at equilibrium at given temperature and composition (T,x); (2) the compositions of the phases if the point (T,x) lies in a two-phase region; (3) the fractions of the phases at a point in a two-phase region. To establish this and illustrate how phase diagrams are generated from the free energy curves we shall discuss two classic types of binary phase diagram in some detail: the solid solution, or phase diagram of a system that forms solutions at all compositions, and the simple eutectic, or phase diagram of a system that has two distinct phases in the solid state. This discussion is followed by a systematic enumeration of the possible shapes of the phase diagrams of simple systems.
9.6 THE SOLID SOLUTION DIAGRAM
Let a binary system of elements A and B have two phases: a liquid solution (L) that is preferred at high temperature, and a solid solution (α), preferred at low temperature, in which the atoms are distributed over the sites of a particular crystal lattice. A possible phase diagram for the system is shown in Fig. 9.18. It contains a liquid phase at high T, a solid solution phase, α, at all compositions at lower temperature, and a two-phase α+L region that separates the two. (Here, as in the following, we shall ignore the vapor phase that appears at still higher temperature.)
... Fig. 9.18: Phase diagram of a binary system that forms solutions at all compositions.
The most interesting feature of the diagram is the two-phase α + L region. At the limits of the phase diagram, x = 0 and x = 1, the binary system contains only one component. The α → L transition in a one-component system occurs at a particular temperature, the melting point of the pure component in phase α. Hence the two-phase region terminates at the melting points of the two pure components, TA and TB, respectively. Between these limits it opens out into a region in which solid and liquid phases coexist.
9.6.1 The thermodynamics of the solid solution diagram
The two-phase region is a consequence of the difference between the shapes of the free energy curves of the liquid and solid phases. The relevant thermodynamics are illustrated in Fig. 9.19, which shows hypothetical forms of the free energy curves at three temperatures. When T > TB, the higher of the two melting points, the free energy curve of the liquid lies below that of the α phase at all compositions, and the liquid is preferred at all compositions. When T < TA, the lower of the two melting points, the free energy curve of the α phase lies below that of the liquid at all compositions, and α is preferred. When TA < T < TB, however, the two curves intersect, as shown in Fig. 9.19b. The common tangent rule has the consequence that when the composition lies in the range xL < x < xα both α and L are present at equilibrium. The liquid has composition xL, the solid has composition xα, and the fractions of the two phases are given by the lever rule.
... Fig. 9.19: Possible free energy curves of the liquid and solid phases at three temperatures: (a) T > TB, (b) TA < T < TB, (c) T < TA.
... Fig. 9.20: A section of the phase diagram including an isotherm (tie-line) that connects the equilibrium compositions xL and xα.
The free energy relations drawn in Fig. 9.19 occur sequentially as the temperature drops because the entropy of the liquid phase is higher than that of the solid. Since, for any composition,

Δg = -s ΔT    9.20
where s is the molar entropy, the free energy curve of the relatively low-entropy solid phase is displaced downward with respect to that of the liquid as the temperature is lowered. In the example drawn, it first touches the liquid free energy curve at TB, and moves through it as the temperature is lowered to TA, producing a continuous sequence of relations like that drawn in Fig. 9.19b that generate the two-phase region of the phase diagram. At any given temperature between TA and TB the free energy curves overlap, and the extent of the two-phase region is governed by their common tangent. Hence the boundaries of the L+α region in the phase diagram at a temperature T such that TA < T < TB are just the compositions xL and xα that are in equilibrium by the common tangent rule, as shown in Fig. 9.20.
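The common tangent construction can be made concrete for the special case in which both phases are ideal solutions: equating the chemical potentials of each component in the liquid and the solid then gives the tie-line endpoints in closed form. The sketch below assumes this ideal behavior; the melting points and heats of fusion are hypothetical numbers, not values from the text.

```python
from math import exp

R = 8.314  # gas constant, J/(mol K)

def tie_line(T, TA, TB, dHA, dHB):
    """Liquidus and solidus compositions (mole fraction B) at temperature T
    for ideal solutions in both phases.  TA, TB are the melting points of
    pure A and B; dHA, dHB are their molar heats of fusion.
    Equality of chemical potentials gives (1-xS) = a(1-xL) and xS = b xL,
    where a, b = exp(dg_i/RT) and dg_i = dH_i (1 - T/T_i) is the molar
    free energy of fusion of pure i at T."""
    a = exp(dHA * (1 - T / TA) / (R * T))
    b = exp(dHB * (1 - T / TB) / (R * T))
    xL = (a - 1) / (a - b)   # liquidus composition
    xS = b * xL              # solidus composition
    return xL, xS

# Hypothetical system: TA = 1000 K, TB = 1200 K, both heats of fusion 10 kJ/mol
xL, xS = tie_line(1100.0, 1000.0, 1200.0, 1.0e4, 1.0e4)
# The solid is richer in B, the higher-melting component: xL < xS
```

As T approaches TB both compositions approach x = 1, so the two-phase region pinches off at the melting points of the pure components, exactly as the diagram requires.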
9.6.2 Equilibrium information contained in the phase diagram
Once the relation between the binary phase diagram (Fig. 9.20) and the free energy curves (Fig. 9.15) is understood, it becomes clear that three types of information can be found from the binary phase diagram.
First, given a temperature and composition (T,x), the equilibrium phase or phases that appear can be read off the phase diagram by simply identifying the phase field in which the point (T,x) lies. If the point (T,x) is in a one-phase field the equilibrium state is a homogeneous state of that phase (α or L in the above example). If the point (T,x) falls in a two-phase field then the equilibrium state is a two-phase mixture of the phases that label the field (α and L in the above example).
Second, the compositions of the phases can be determined from the phase diagram. If only one phase is present then its composition is the overall composition, x, of the system. If two phases are present their compositions can be found by drawing an isothermal line through the point (T,x). The compositions of the two phases are given by the intersections of that isothermal line with the boundaries of the two-phase region, as illustrated in Fig. 9.20. The isothermal line connecting the boundaries of the two-phase region is called a tie-line. Note that the compositions of the two phases that are in equilibrium depend on the temperature only, and are the same for every composition of the system that falls within the two-phase region at that temperature. If the overall composition of the system is changed at given T so that it remains within the two-phase region, the fractions of the two phases change, but their compositions remain the same.
Third, the fractions of the phases present at equilibrium can be determined from the phase diagram. If the point (T,x) falls in a single-phase region then the system contains that phase alone.
If (T,x) lies in a two-phase region then the fractions can be computed from the equilibrium compositions at that temperature by applying the lever rule. In the present example, for a point within the two-phase α + L region,

fα = (x - xL)/(xα - xL)    9.21

and

fL = 1 - fα = (xα - x)/(xα - xL)    9.22
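As a quick check, Eqs. 9.21 and 9.22 can be coded directly; the tie-line compositions used in the example are hypothetical.

```python
def lever_rule(x, x_L, x_a):
    """Fractions of alpha and liquid for overall composition x on a
    tie-line with end compositions x_L (liquid) and x_a (alpha)."""
    if not min(x_L, x_a) <= x <= max(x_L, x_a):
        raise ValueError("x must lie within the two-phase region")
    f_a = (x - x_L) / (x_a - x_L)  # Eq. 9.21
    f_L = 1.0 - f_a                # Eq. 9.22
    return f_a, f_L

# Hypothetical tie-line: x_L = 0.40, x_alpha = 0.70, overall x = 0.55;
# the overall composition bisects the tie-line, so each fraction is 1/2
f_a, f_L = lever_rule(0.55, 0.40, 0.70)
```

Note the limiting cases: as x approaches xα the system becomes all solid (fα → 1), and as x approaches xL it becomes all liquid.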
9.6.3 Equilibrium phase changes
In addition to showing the phases, compositions and phase fractions that are present, the phase diagram also permits an analysis of the phase changes that occur when a system is cooled or heated slowly enough to preserve equilibrium. The most obvious difference between a one-component system and a binary system is in its solidification behavior. While a one-component system freezes at a particular temperature, a two-component system freezes over a range of temperature through a gradual increase in the fraction of solid as the temperature is lowered through the two-phase region.
... Fig. 9.21: Phase diagram indicating cooling of a sample with composition x = 0.8.
For example, Fig. 9.21 depicts the cooling of a sample of the system diagrammed in Fig. 9.18 at x = 0.8. Following the vertical line downward shows that the system remains liquid until the temperature drops to slightly below TB, at which point it enters the two-phase field. According to the tie-line at the top of the two-phase field, the first solid to form within the two-phase region is rich in B, with x ≈ 0.98. As the temperature is lowered, more and more solid appears, while the composition of the solid adjusts along the α boundary of the two-phase field, becoming progressively less rich in B. At the same time the composition of the residual liquid evolves along the L boundary of the two-phase field and becomes richer in A. When the temperature reaches the bottom of the two-phase field for x = 0.8 the solidification is complete and the system becomes homogeneous in phase α at a composition of x = 0.8.
Note that the two phases that are in equilibrium in the two-phase field each have uniform composition. If the system solidifies in equilibrium then on each increment of cooling two changes occur in the solid phase: the fraction of solid increases, and the composition of the solid evolves to the new equilibrium value. This latter change requires that the composition of the solid that has already formed adjust to the new equilibrium value. The composition change requires diffusion in the solid state. As we shall see, this is a slow process in most solids, so the system must ordinarily be cooled very slowly to maintain equilibrium during solidification. The microstructure that results from equilibrium solidification will ordinarily be a polygranular α phase in which the grains are equiaxed and fairly large, since they remain at temperatures near the melting point for a significant period of time.
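The gradual freezing described above can be traced numerically by applying the lever rule at each temperature. In the sketch below the liquidus and solidus are given deliberately artificial shapes (u² and √u in a reduced temperature u) chosen only so that both run from one melting point to the other with the solid richer in B; real boundaries would come from the free energy curves.

```python
def solid_fraction(u, x0):
    """Equilibrium solid fraction during cooling, by the lever rule.
    u = (T - TA)/(TB - TA) is a reduced temperature between the two
    melting points; x0 is the overall composition (mole fraction B).
    The boundary shapes below are hypothetical illustrations only."""
    xL = u * u        # hypothetical liquidus: runs from (TA, 0) to (TB, 1)
    xS = u ** 0.5     # hypothetical solidus: same endpoints, richer in B
    if x0 <= xL:
        return 0.0    # left of the liquidus: fully liquid
    if x0 >= xS:
        return 1.0    # right of the solidus: fully solid
    return (x0 - xL) / (xS - xL)   # lever rule in the two-phase field

# Cooling a melt of overall composition x0 = 0.8: the solid fraction
# rises continuously from 0 toward 1 as the reduced temperature falls
fractions = [solid_fraction(u, 0.8) for u in (0.95, 0.85, 0.75, 0.65)]
```

This reproduces the qualitative behavior in the text: freezing begins at the liquidus, proceeds gradually through the two-phase field, and is complete when the solidus reaches the overall composition.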
9.7 THE EUTECTIC PHASE DIAGRAM
The second classic type of binary phase diagram applies to a system that has two distinct equilibrium phases in the solid state, one an A-rich terminal solution, α, and the other a B-rich solution, β. There must be A- and B-rich terminal solutions with different structures when the components A and B have different crystal structures in their pure forms. It is also possible to have A- and B-rich terminal solutions that are distinct phases with the same structure. This happens, for example, when the system has a miscibility gap like that shown in Fig. 9.17. Assuming a liquid solution at all compositions at high temperature, the phase diagram of the system appears as shown in Fig. 9.22.
Fig. 9.22: A eutectic phase diagram of a binary system.
The eutectic phase diagram derives its name from the reaction that occurs where the liquid phase field touches the two-phase α+β region. If the system is cooled slowly through this point the reaction is L → α + β, which is called a eutectic reaction.
9.7.1 Thermodynamics of the eutectic phase diagram
The free energy relations that lead to the eutectic phase diagram are shown in Fig. 9.23. The central feature is the eutectic reaction. The thermodynamic reason for this reaction is contained in the behavior of the free energy curves near the eutectic point. Just above the eutectic point all three phases appear in an isothermal section through the phase diagram. The liquid phase has common tangents with both the α and β phases. However, the liquid phase also has higher entropy, so its free energy curve rises with respect to those of the α and β phases when the temperature is lowered. At the eutectic point the free energy curve for the liquid just touches the common tangent between the α and β curves to establish a three-phase equilibrium. Just below the eutectic temperature the α-β common tangent passes below the free energy curve for the liquid phase, which no longer appears in the phase diagram.
For any combination of the temperature and composition (T,x), the phases present at equilibrium, their compositions and their phase fractions can be found from the phase diagram. The label of the phase field in which the point (T,x) falls identifies the phases that are present. If there are two phases, their compositions can be found by drawing an isotherm across the two-phase region that passes through the state (T,x), the tie-line shown in Fig. 9.24. The two compositions are given by the intersections of the tie-line with the boundaries of the two-phase region. The fractions of the two phases are given by the lever rule.
... Fig. 9.23: (a) Free energy curves for the α, β and L phases just above the eutectic temperature. (b) Free energy curves just below the eutectic temperature.
... Fig. 9.24: The compositions of the α and β phases in equilibrium in the state shown by the dot.
It is also possible to have three-phase equilibrium in a system with a eutectic phase diagram, but only when the system is held at precisely the eutectic temperature. The compositions of the three phases in equilibrium at the eutectic temperature are given by the points at which the isotherm at the eutectic temperature touches the α, L and β single-phase fields. However, the lever rule cannot be applied to determine the quantities of the three phases. This is not an important limitation: since it is not physically possible to control the temperature of a system to a precise point, three-phase equilibrium is not an important case in the practical applications of binary systems.
9.7.2 Equilibrium phase changes
Characteristic phase transformations occur in a system that has a eutectic phase diagram when it is cooled slowly enough that equilibrium phase relations are preserved. The behavior of the system and the resulting microstructure depend on where its overall composition falls in the phase diagram. Three distinct cases are indicated in Fig. 9.25.
Fig. 9.25: Compositions leading to three distinct microstructures in a eutectic system.
9.7.3 Precipitation from the α phase
... Fig. 9.26: (a) Equiaxed grain structure of the primary α solid solution. (b) Precipitates of β in grain interiors and on boundaries.
First consider an alloy of composition x1, which is less than the α-phase solubility limit at the eutectic temperature. Let this alloy be cooled slowly enough to maintain equilibrium, beginning from the temperature at the top of the dotted line shown in Fig. 9.25. The system remains liquid until it reaches the temperature at which the dotted line drops into the two-phase α + L field. It then solidifies over a range of temperature as it is cooled through the two-phase field to become a homogeneous solid in the α phase. It remains homogeneous until its temperature drops into the two-phase α + β field. At that point a small amount of B-rich β phase precipitates out of the α. The β phase increases in volume and B-content as the temperature is decreased further.
The probable microstructure of the system can be inferred from the equilibrium phase diagram. If the system solidifies slowly enough to remain close to equilibrium then the microstructure of the primary α solid solution is normally a polygranular aggregate of equiaxed α grains, as shown in Fig. 9.26a. The β phase typically forms as small precipitates either in the interiors or along the boundaries of the α grains (Fig. 9.26b), depending on how rapidly the system is cooled.
9.7.4 The eutectic microstructure
When the system has composition x3, the composition of the eutectic point on the phase diagram in Fig. 9.25, its cooling behavior is qualitatively different. On cooling, the system remains liquid until the eutectic temperature is reached. At this temperature the system freezes completely, just as if it had a single component, but freezes into a two-phase mixture of α and β phases whose compositions and phase fractions are given by the tie-line and lever rule just below the eutectic temperature. If the system is cooled further the compositions and phase fractions adjust according to the tie-lines across the two-phase α+β region.
The eutectic reaction, L → α+β, ordinarily produces a characteristic microstructure, called the eutectic microstructure, which is illustrated in Fig. 9.27. The elementary constituent of the eutectic microstructure is a very fine-scale mixture of the two phases in which thin plates of one phase alternate with thin plates of the other, or aligned rods of one phase sit in a continuous matrix of the other. The plates or rods are ordinarily single crystals, but may be polycrystalline on a fine scale. The eutectic microstructure is made up of grain-like colonies within which the plates or rods have a common orientation.
... Fig. 9.27: Schematic drawing of a eutectic microstructure. The grain-like features are eutectic colonies containing aligned plates or rods.
The eutectic microstructure forms for kinetic reasons. Consider a stacking of parallel plates of α and β, and let the stack grow into the liquid phase along the long axis of the plates, as shown in Fig. 9.28. In order for the A-rich plates of α phase to grow, A atoms must diffuse away from the front of the β plates. If the α and β plates are immediately adjacent to one another the distance the A atoms must travel is very small, and lies entirely in the liquid phase, where the atom mobility is high. The B atoms counterflow through the liquid from the front of the growing α plates to the β plates. Hence the eutectic microstructure grows with relative ease.
Fig. 9.28: Illustration of a growing eutectic colony.
9.7.5 Mixed microstructures in a eutectic system
The composition x2 in Fig. 9.25 lies between the solubility limit of the α phase and the eutectic composition. If a system of this composition is cooled from a temperature at the top of the dotted line in the figure then it begins to solidify, by forming islands of α phase, when the temperature drops into the two-phase region, and solidifies continuously as the temperature is lowered to the eutectic temperature. However, the system is only partly solidified when the temperature reaches the eutectic line; according to the lever rule only about 60% of the system is solid at a temperature incrementally above the eutectic temperature. The residual liquid has precisely the eutectic composition and solidifies by the eutectic reaction, L → α + β. Hence the product ordinarily has a mixed microstructure that includes islands of "proeutectic" α phase that formed during cooling through the two-phase region. These are embedded in a eutectic constituent that formed through solidification of the residual liquid at the eutectic temperature, as shown in Fig. 9.29.
... Fig. 9.29: Probable microstructure of a solidified sample having the composition x2 in Fig. 9.25.
Note that α phase is present both in the proeutectic α that formed during cooling through the two-phase region and in the α-phase plates within the eutectic constituent. According to the lever rule the system is about 85% α just below the eutectic temperature. This total is the sum of ≈ 60% proeutectic α phase and ≈ 25% α contained in the eutectic constituent, which is itself ≈ 60% α and makes up ≈ 40% of the microstructure.
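The phase-fraction bookkeeping in this paragraph is just two applications of the lever rule. The sketch below reproduces it with hypothetical boundary compositions; x_a, x_b, x_e and x0 are invented numbers chosen only to give fractions close to those quoted, not values from an actual diagram.

```python
def lever(x, x1, x2):
    """Fraction of the phase whose boundary composition is x2,
    for overall composition x on a tie-line from x1 to x2."""
    return (x - x1) / (x2 - x1)

# Hypothetical compositions just below the eutectic temperature
x_a, x_b, x_e = 0.10, 0.90, 0.42   # alpha limit, beta limit, eutectic
x0 = 0.228                          # overall alloy composition (like x2)

# Proeutectic alpha formed in the alpha + L field just above the eutectic
f_pro = lever(x0, x_e, x_a)             # -> 0.60
# Fraction of eutectic constituent, and the alpha fraction inside it
f_eut = 1.0 - f_pro                     # -> 0.40
f_a_in_eut = lever(x_e, x_b, x_a)       # -> 0.60
# Total alpha two ways: piecewise sum, and directly from the full tie-line
total_piecewise = f_pro + f_eut * f_a_in_eut   # 0.60 + 0.40*0.60 = 0.84
total_direct = lever(x0, x_b, x_a)             # (0.90-0.228)/0.80 = 0.84
```

The two totals agree, which is the consistency check behind the "≈ 85%" figure in the text: the lever rule applied to the full α+β tie-line must equal the proeutectic fraction plus the α carried inside the eutectic constituent.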
9.7.6 Phase diagrams that include a eutectoid reaction
In strict metallurgical terminology the term "eutectic reaction" is reserved for a reaction of the type L → α+β in which a liquid solidifies to a mixture of two solid phases. There are also many systems whose phase diagrams include points at which a high-temperature solid solution transforms into a mixture of two other solid solutions. The reaction is of the type γ → α+β, where α, β and γ are solid solutions with different structures. An important example occurs in the Fe-C phase diagram that governs the behavior of carbon steels. A section of the Fe-C diagram is shown in Fig. 9.30. The high-temperature phase is the γ phase, which is the FCC phase of iron, and is in this case an FCC solution of carbon in iron. The phase diagram has a eutectic-like shape and undergoes a reaction of the form γ → α + carbide at the bottom of the γ phase field.
(Axes of Fig. 9.30: temperature in ºC, 600 to 1000, versus weight percent carbon, 0 to 2.)
... Fig. 9.30: A section of the Fe-C diagram that includes the eutectoid reaction γ → α + carbide (Fe3C).
For historical reasons a reaction of the type γ → α+β, where all three phases are solid, is called a eutectoid reaction. The shape of the diagram and the behavior of the system are essentially identical to those in the eutectic diagram. The eutectoid reaction yields a eutectic-like microstructure (Fig. 9.28) for precisely the same kinetic reasons that govern the eutectic case. If a system that has a composition below the eutectoid composition is cooled from the γ field, the microstructure contains a proeutectoid constituent as in Fig. 9.29. There is, in fact, no reason to distinguish between the eutectic and eutectoid reactions, and we shall rarely do so in the following.
9.8 COMMON BINARY PHASE DIAGRAMS Many binary systems contain several solid phases, and, hence, have rather complicated phase diagrams. However, most of these diagrams can be simplified and understood by breaking them into parts that involve the equilibrium of only a few phases.
In this section we consider possible binary phase diagrams for systems that contain one, two or three solid phases, and also describe one common example of a phase diagram with two liquid phases. Almost all binary phase diagrams can be divided into segments whose behavior is like that of one of the diagrams listed below.
9.8.1 Solid solution diagrams
The systems that form solid solutions at all compositions (at least at intermediate temperature) have one of three phase diagrams: the simple solution diagram discussed in Section 9.6, or a slight modification of it that has a congruent point either at the top or the bottom of the two-phase (α+L) region. Of course, solid solutions are only possible when the two components have the same crystal structure in the solid state.
... Fig. 9.31: The simplest phase diagram for the solid solution.
The simplest phase diagram for the solid solution is re-drawn in Fig. 9.31. This diagram appears when the free energy curve of the solid solution first cuts the liquid free energy curve at the higher of the two melting points of the pure components, and cuts it last at the lower melting point, so there is no congruent point. Many binary systems have this simple phase diagram, including Ag-Au, Ag-Pd, Au-Pd, Bi-Sb, Nb-Ti, Nb-W, Cd-Mg, Cr-W, Cu-Ni, Cu-Pt, Cu-Pd, Hf-Zr, Mo-Ta, Mo-Ti, Mo-V, Mo-W, Ge-Si, Pd-Rh, Ta-Ti, Ta-V, Ta-Zr, U-Zr, and V-W.
... Fig. 9.32: Solid solution with a high-temperature congruent point.
However, almost as many binary solutions have congruent points in their phase diagrams, which shows that the free energy curves of the liquid and the solid solution touch before they cross at x = 0 or x = 1. If the first contact between the liquid and solid free energy curves on cooling falls at an intermediate composition then the system has an elevated congruent point, as in Fig. 9.32. If the last contact on cooling falls at an intermediate composition the system has a depressed congruent point, as in Fig. 9.33.
... Fig. 9.33: Solid solution with a low-temperature congruent point.
There are very few binary systems with elevated congruent points; the Pb-rich solution in the Pb-Tl system is one of the few examples. On the other hand, depressed congruent points are common. Au-Cu, Au-Ni, Nb-Mo, Nb-Ni, Nb-V, Co-Pd, Cs-K, Fe-Ni, Fe-Pd, Hf-Ta, Mn-Ni, Mn-Fe, Pu-U, Ti-V and Ti-Zr show this behavior, among others. Perhaps the strangest example is the behavior of the Ti-Zr system. The high-temperature solid structures of both components are BCC, and both transform to HCP on cooling. The phase diagram contains two solid solutions, β (BCC) at high temperature and α (HCP) at lower temperature. Both the liquid-β equilibrium and the β-α equilibrium are separated by two-phase regions with depressed congruent points like that shown in Fig. 9.33.
There is a simple thermodynamic reason for the preference for a low-temperature congruent point. The molar free energy is

g = h - Ts    9.23
where h is the molar enthalpy, e + Pv, and s is the molar entropy. The high-temperature phase is the more disordered one, and generally has a higher entropy of mixing. As a consequence its free energy curve tends to have a deeper trough at intermediate composition, so that the liquid and solid free energy curves contact at intermediate composition at a temperature below the melting points of the pure components.
9.8.2 Low-temperature behavior of a solid solution
One of the fundamental laws of thermodynamics is the Third Law, which asserts that the entropy of an equilibrium phase vanishes in the limit T → 0. The Third Law has the consequence that a solid solution cannot be the equilibrium state in the limit of zero temperature. At sufficiently low temperature, the equilibrium state must be a perfectly ordered phase or a simple mixture of perfectly ordered phases. This criterion can be satisfied in two simple ways: the solid solution can decompose into two terminal solutions at low temperature, or the system can rearrange itself into an ordered compound or mixture of ordered compounds. The two possibilities are illustrated by the Bragg-Williams model of the solid solution that is described in Chapter 8. The Bragg-Williams solution decomposes via a miscibility gap if its components prefer like bonds, and orders if they prefer unlike bonds. In many real systems this low-temperature behavior intrudes at temperatures so low that it is never observed; such systems are solid solutions for all practical purposes. However, in other cases complete solubility is lost at moderate temperature through the formation of either a miscibility gap or an ordered phase. We consider the two possibilities in turn.
Phase diagrams containing a miscibility gap
A possible binary phase diagram that contains a miscibility gap is shown in Fig. 9.34. The system freezes into a solid solution (α) at all compositions. However, at lower temperature the solid solution spontaneously decomposes into two solid solutions, α' and α'', that have the same structure but different compositions. The two-phase α' + α'' region within the miscibility gap contains the same information as any other two-phase region in a binary phase diagram. The compositions of the two phases, α' and α'', are determined as a function of temperature by the isothermal tie-lines. The phase fractions are determined from the tie-line by the lever rule.
... Fig. 9.34: The phase diagram of a binary system that contains a miscibility gap in a homogeneous solid solution. The two-phase regions are shown shaded; the horizontal lines are the tie-lines.
A miscibility gap is caused by an instability in the free energy curve that develops as the temperature decreases. The sequence of free energy curves that lead to a miscibility gap like that in Fig. 9.34 was presented in Fig. 9.17. Let Tc be the temperature at the top of the miscibility gap. Well above Tc the α free energy curve is concave and well behaved. As Tc is approached the free energy curve flattens, until just below Tc it develops a small convex region, as shown in Fig. 9.17. The convex region has the consequence that two points on the free energy curve are connected by a common tangent. Hence the system decomposes into two solutions with different compositions, but the same structure. As the temperature decreases further the convex region becomes more pronounced and the miscibility gap broadens. As T → 0 the compositions of the terminal solid solutions approach x = 0 and x = 1 to satisfy the Third Law.
As suggested by the Bragg-Williams model (Chapter 8), a miscibility gap is due to a preference for bonds between atoms of like kind, with the consequence that the energy is lowered when the system segregates into A-rich and B-rich solutions. At higher temperature the energetic preference for decomposition is outweighed by the entropic preference for the solid solution. The binary systems whose components are mutually soluble at intermediate temperature, but become immiscible at lower temperature, include Au-Ni, Au-Pt, Cr-Mo, Cr-W, Cu-Ni, Cu-Rh, Ir-Pa, Ir-Pt and Ta-Zr. Ceramic systems such as NiO-CaO also form solid solutions with low-temperature miscibility gaps. According to the Third Law, the miscibility gap must extend to the pure component lines at T = 0, as drawn in Fig. 9.34. Not all of the diagrams that appear in compilations of binary phase diagrams are drawn this way, since decomposition is kinetically slow and difficult to observe at low temperature.
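These ideas can be made quantitative with the symmetric regular-solution (Bragg-Williams) free energy of mixing, g_mix = Ωx(1-x) + RT[x ln x + (1-x) ln(1-x)]. Because the model is symmetric about x = 1/2, the common tangent inside the gap is horizontal, so the gap boundary solves dg_mix/dx = 0, and the gap closes at Tc = Ω/2R. The interaction parameter Ω used below is a hypothetical value, not data from the text.

```python
from math import log

R = 8.314      # gas constant, J/(mol K)
W = 2.0e4      # hypothetical like-bond interaction parameter Omega, J/mol
Tc = W / (2 * R)   # critical temperature at the top of the gap (~1203 K)

def binodal(T, W):
    """A-rich boundary of the miscibility gap for the symmetric regular
    solution, found by bisection on dg_mix/dx = 0 in the interval (0, 1/2).
    Valid for T < Tc; the B-rich boundary is 1 minus this value."""
    f = lambda x: W * (1 - 2 * x) + R * T * log(x / (1 - x))
    lo, hi = 1e-9, 0.5 - 1e-9   # f(lo) < 0 < f(hi) when T < Tc
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x1 = binodal(600.0, W)    # boundary well below Tc: strongly segregated
x2 = binodal(1100.0, W)   # boundary nearer Tc: gap narrows toward x = 1/2
```

As T rises toward Tc the two boundary compositions converge on x = 1/2 and the gap closes, while as T falls they approach x = 0 and x = 1, consistent with the Third Law argument above.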
Phase diagrams with low-temperature ordered phases
... Fig. 9.35: Phase diagram of a binary system that has an ordered phase (γ) at low temperature. The two-phase regions are shaded with horizontal tie-lines.
The phase diagram of a binary system that forms an ordered phase at low temperature is shown in Fig. 9.35. The single-phase region of the ordered phase is closed at a congruent point at its top (T0) and narrows to a point in the limit T → 0, to satisfy the Third Law. The conditions at the two limiting temperatures have the consequence that the ordered phase field has a shape something like that of an inverted teardrop. At finite temperatures the ordered phase has at least a slight solubility for the species A and B, and is in equilibrium over a range of compositions about its stoichiometric value. The single-phase γ field is bounded by two-phase (α+γ) fields that separate it from the single-phase α field on either side. If only one ordered phase is present, then the two-phase regions that bound it must spread across the phase diagram in the limit T → 0 so that the equilibrium phases at T = 0 are the stoichiometric γ ordered phase and the α phase at x = 0 or x = 1, in agreement with the Third Law. In many systems that order, several ordered phases are present.
The free energy relations that give rise to a phase diagram like that in Fig. 9.35 are illustrated in Fig. 9.36. The free energy curve of the ordered compound lies above that of the α solid solution when T > T0 and passes through it to create a congruent point at T = T0. The free energy curve of the ordered γ phase has a strong minimum at its stoichiometric composition. When T < T0 the free energy curve of the γ phase lies below that of the α solid solution only at compositions near the stoichiometric value. Hence there are common tangents between the γ and α free energy curves on both sides of the γ curve. The γ phase is the equilibrium phase at compositions near the stoichiometric value; the α phase is at equilibrium at compositions that deviate significantly from the stoichiometric value to either side.
... Fig. 9.36: Free energy relations leading to the appearance of an ordered compound: (a) T > T0; (b) T < T0.
Many binary systems whose components have complete or extensive solid solubility at intermediate temperature are known to order into one or more stoichiometric compounds at lower temperature. Examples include Au-Cu, Cu-Pt, Cd-Mg, Co-Pt, Cu-Pd, Fe-Ni, Fe-Pt, Fe-V, Mn-Ni, Ni-Pt, and Ta-V.
9.8.3 Phase diagrams with eutectic or peritectic reactions
Binary systems that have two distinct phases in the solid state often have phase diagrams of the simple eutectic or peritectic (inverted eutectic) form.
The eutectic diagram
The simple eutectic diagram was discussed in some detail in Section 9.7, and is re-drawn in Fig. 9.37. It takes its name from the eutectic reaction,

L → α + β    9.24
which occurs at the minimum point of the liquid phase field. A system that has a eutectic phase diagram is usually one whose components have different crystal structures in the pure form. Since components with different structures cannot form a continuous range of solid solutions, there are always at least two phases in the solid state, and the eutectic diagram is one of the simplest the system can have. Among the systems with simple eutectic diagrams are Ag-Bi, Al-Ge, Al-Si, Al-Sn, Au-Co, Au-Si, Bi-Cu, Bi-Cd, Bi-Sn, Cd-Pb, Cu-Li, In-Zn, Pb-Sb, Pb-Sn, Si-Zn and Sn-Zn. Ceramic systems with simple eutectic diagrams include MgO-CaO, among others. All of these systems have components with different crystal structures, and are sufficiently different chemically that it is plausible that they form no stable compounds.
Fig. 9.37: A binary system with a simple eutectic diagram. The two-phase regions are shown shaded with horizontal tie-lines.
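The horizontal tie-lines that span a two-phase field also fix the relative amounts of the two phases through the lever rule. A minimal sketch, using illustrative compositions that are not taken from any particular system:

```python
def lever_rule(x_alpha, x_beta, x_overall):
    """Return (f_alpha, f_beta), the equilibrium fractions of the two phases,
    from the endpoints of a tie-line.

    x_alpha, x_beta : compositions at the two ends of the tie-line
    x_overall       : overall composition (must lie between the endpoints)
    """
    if not min(x_alpha, x_beta) <= x_overall <= max(x_alpha, x_beta):
        raise ValueError("overall composition is outside the two-phase field")
    f_beta = (x_overall - x_alpha) / (x_beta - x_alpha)
    return 1.0 - f_beta, f_beta

# Illustrative tie-line: alpha at x = 0.10, beta at x = 0.90, overall x = 0.30
f_a, f_b = lever_rule(0.10, 0.90, 0.30)
print(f_a, f_b)   # approximately 0.75 alpha, 0.25 beta
```

The fraction of each phase is proportional to the length of the opposite "lever arm" of the tie-line, which is why a composition close to the α boundary solidifies as mostly α.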
However, there are also systems that have simple eutectic phase diagrams even though their components have the same crystal structure. Examples include Ag-Cu, in which both components are FCC, Cd-Zn, both components HCP, and Na-Rb, both components BCC.
Fig. 9.38: Free energy relations leading to a eutectic diagram for a system whose components have a miscibility gap at high temperature. (a) Liquid phase stable; (b) three phases appear at lower T; (c) two solid phases appear below the eutectic point.

The most plausible interpretation of the eutectic behavior in this case is that the two terminal solutions have a miscibility gap at a temperature so high that the liquid phase is retained to temperatures well below Tc. The relations between the free energy curves that lead to a eutectic diagram in a system whose solid phases have a miscibility gap are diagrammed in Fig. 9.38. Fig. 9.38a diagrams a situation in which a solid phase has decomposed into two solutions with the same structure, α and β, at a temperature at which the liquid is still stable. As the temperature decreases the solid free energy curve drops with respect to the liquid, and leads to a eutectic diagram. Fig. 9.38b shows the situation just above the eutectic point, where all three phases appear. Fig. 9.38c shows the situation just below the eutectic point, where only the α and β solid solutions appear at equilibrium.
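The common-tangent construction invoked in these figures can be made concrete numerically. A minimal sketch, assuming hypothetical parabolic free energy curves of equal curvature, which makes the tangent solvable in closed form (real g(x) curves would require a numerical search):

```python
def g_parabola(x, a, x0, g0):
    """Illustrative free energy curve: a parabola of curvature 2a centred at
    composition x0 with minimum value g0 (a stand-in for a real g(x))."""
    return a * (x - x0) ** 2 + g0

def common_tangent(a, xA, gA, xB, gB):
    """Common tangent between two equal-curvature parabolic free energy curves.
    Tangency requires equal slopes g1'(x1) = g2'(x2) = m, with both tangency
    points lying on one straight line; equal curvatures give m in closed form."""
    m = (gB - gA) / (xB - xA)   # slope of the common tangent
    x1 = xA + m / (2 * a)       # tangency composition on curve 1
    x2 = xB + m / (2 * a)       # tangency composition on curve 2
    return m, x1, x2

# Hypothetical curves: one phase centred at x = 0.10, the other at x = 0.90
# and 2 kJ/mol lower in free energy.
a = 50_000.0                    # curvature parameter, J/mol
m, x1, x2 = common_tangent(a, 0.10, 0.0, 0.90, -2_000.0)
print(round(x1, 3), round(x2, 3))   # 0.075 0.875
# Between x1 and x2 the tangent line lies below both curves, so a two-phase
# mixture is stable there; outside, a single phase has the lower free energy.
```

The same construction, repeated as the curves shift with temperature, traces out the phase boundaries of diagrams like Fig. 9.38.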
The peritectic diagram

The classic peritectic phase diagram is drawn in Fig. 9.39. It is characterized by a peritectic reaction of the form

β + L → α      (9.25)

that appears at the top of the α field.
Fig. 9.39: A simple peritectic phase diagram in a binary system.

The peritectic reaction is, essentially, an inverse eutectic. The classic eutectic reaction occurs when the free energy curve of the liquid cuts through a common tangent to the curves of two solid phases on heating. The peritectic occurs when the free energy curve of a solid phase cuts through a common tangent to the curves of liquid and solid phases on cooling. The relations between the free energy curves just above and just below the peritectic point are illustrated in Fig. 9.40. Just above the peritectic the common tangent to the L and β phases lies below the α free energy curve, as in Fig. 9.40a. At the peritectic the α free energy curve contacts that common tangent, and drops below it as the system is cooled further to create the configuration shown in Fig. 9.40b.

Simple peritectic diagrams are much less common than simple eutectic ones. The thermodynamic reason is apparent from Fig. 9.40. To create a peritectic point the free energy curve of the solid must cut that of the liquid at finite x, that is, at a composition away from the axis. For that to happen the free energy of the solid phase must decrease more quickly than that of the liquid at small x. Since the liquid has the higher entropy, this is only likely to happen when the enthalpy of the solid phase drops rapidly with the solute content, that is, when there is strong preferential bonding between the two components in the α phase. But this is precisely the situation that leads to the formation of stable compounds between the two components. A system that has a simple peritectic diagram is, therefore, likely to be one that almost forms stable compounds.
Fig. 9.40: Free energy relations leading to a peritectic phase diagram. (a) T just above the peritectic temperature; (b) T just below.

Examples of metal systems that have peritectic diagrams are Cu-Nb, Cu-Co, Ni-Mo, Ni-Re, Ni-Ru, Os-Ir, Os-Pd, Re-Rh, W-Pd and W-Pt. Not surprisingly, some of these systems, such as Ni-Mo, form intermetallic compounds at lower temperature. There are also several examples of systems that have peritectic reactions in which the high-temperature solid phase is an intermetallic compound. Examples include Al-Al3Ti, Al-Al3Zr and Al-AlSb.
Fig. 9.41: A peritectic reaction in a system with a high-temperature miscibility gap. (a) The liquid phase is stable at a high temperature below the miscibility gap in the solid; (b) two-phase equilibrium just above the peritectic; (c) two-phase equilibrium below the peritectic temperature.

There is at least one example, Ag-Pt (FCC), in which two components with the same crystal structure have a peritectic phase diagram. As in the case of a eutectic diagram between elements with the same structure, this suggests that the components have a high-temperature miscibility gap, leading to free energy relations like those shown in Fig. 9.41. The condition is that the second solid phase that intersects the liquid curve (β in the case shown) makes its first appearance as a stable phase by cutting the tie-line between the liquid and the α solid solution.

9.8.4 Structural transformations in the solid state

When one of the components of a binary system undergoes a structural transformation on cooling, not only is a new structure introduced into the binary phase diagram, but new two-phase equilibria appear. Since the phases that are connected by the structural transformation are different, they respond differently to the introduction of the solute. The result is a two-phase equilibrium field between them. Figs. 9.42 and 9.43 show the common forms of the binary phase diagram of a system in which one component (A) undergoes a structural transformation (γ → α) as the temperature is lowered. The configurations at the solid-solid transformation are geometrically identical to those at the eutectic or peritectic points of the liquid-solid transformation.
Fig. 9.42: A binary system with a eutectic reaction at the bottom of the liquid phase field and a eutectoid reaction at the bottom of the phase field of the high-temperature (γ) phase. The two-phase fields are shaded with isothermal tie-lines.

If α is the low-temperature phase of a component (A) that also has a high-temperature phase, γ, then the free energy of α falls below that of γ as the temperature is lowered. On the x = 0 axis (where the system has only one component) the two free energies cross at a particular transition temperature. However, since the two phases are distinct they respond differently to the solute, and hence have different free energy curves at finite x. As these curves pass through one another they generate a two-phase region, just as in the liquid-solid case. The shape of the two-phase (α+γ) region depends on where the α free energy curve first contacts the γ curve as the temperature is lowered. If the first contact is between the α and γ curves rather than between the α curve and the γ-β common tangent, then the behavior is just like that near a eutectic point in the liquid-solid case; the high-temperature phase field (γ) extends to a temperature minimum at finite x, as shown in Fig. 9.42. The reaction at the bottom of the γ field is

γ → α + β      (9.26)

This reaction is called a eutectoid reaction since it is eutectic-like, but involves only solid phases. In the writer's opinion the separate terminology is redundant. In a system that has a eutectoid reaction the first contact between α and γ is ordinarily at x = 0, in which case the phase diagram near the reaction looks like that shown in Fig. 9.42. However, it is also possible that the first contact occurs slightly off the x-axis at finite composition. In this case the α field has a maximum at a congruent point between γ and α, while the γ field has a minimum at a eutectic point at slightly higher composition. We shall not illustrate this case.

Eutectoid reactions are common in binary systems that include a component that transforms in the solid state. We discussed the case of the γ → α transformation in Fe-Fe3C in Section 9.5. Other examples include the β → α transformation of Ti in Ti-Cr and Ti-W, the α → β transformation of Mn in Ni-Mn, and the α → β transformation of Th in Th-U and Th-Zr. Eutectoid reactions also occur in the transformations of many intermetallic and oxide compounds.

The second possibility is that, on cooling, the α free energy curve cuts the γ-β common tangent before it contacts the γ free energy curve. Then the situation near the transition temperature is like that illustrated for the peritectic transition in the liquid-solid case. The α phase field extends to a temperature maximum at finite x, and the configuration near the transition has a shape like that drawn in Fig. 9.43. The reaction at the maximum point of the α field is

γ + β → α      (9.27)

and is called a peritectoid reaction.
Fig. 9.43: A binary system with a eutectic reaction at the bottom of the liquid phase field and a peritectoid reaction at the top of the phase field of the low-temperature (α) phase. The two-phase fields are shaded with isothermal tie-lines.

Peritectoid reactions are reasonably common. Examples include the γ → α transition of Fe in Fe-Nb and Fe-Ta, the α → ε transition of Co in Co-Cr and Co-W, and the α → β transition of Mn in Mn-Cr. Peritectoid reactions are also found in a number of intermetallic and oxide compounds.

9.8.5 Systems that form compounds

A substantial fraction of all binary systems form ordered compounds in the solid state. In fact, it is common for several compounds to appear in the phase diagram. To explore the influence of ordered compounds on the shape of the phase diagram we consider systems that contain a single one. Fig. 9.35 illustrates the phase field of a compound that emerges directly from a solid solution. We now consider compounds in systems that contain two terminal solid solutions. Four cases are reasonably common: (1) a compound first appears at a congruent point in the liquid; (2) a compound first appears at a peritectic point in a two-phase region (α+L); (3) a high-temperature compound disappears at a eutectoid; (4) a low-temperature compound first appears at a peritectoid. Finally, we consider the equilibrium phase fields near a structural transformation of an ordered compound.
Compounds that form directly from the liquid

Many binary systems have stable compounds that can be formed directly from the liquid at a congruent point. The simplest phase diagram for a system of this type is shown in Fig. 9.44. The compound essentially divides the phase diagram into two eutectic diagrams between the compound and the terminal solid solutions.

Fig. 9.44: The phase diagram of a system that forms a stable compound at an intermediate composition.
Phase diagrams like that shown in Fig. 9.44 govern a large number of binary systems, including Al-Sb, Al-Ca, Al-Au, As-In, As-Pb, Ca-Mg, Nb-Cr, Cd-Sb, Cd-Te, Cr-Ta, Cr-Zr, Ga-Sb, Hf-V, In-Sb, Mg-Pb, Mg-Si, Mg-Sn, Mo-Pt, Pb-Te, Sn-Te, and Zn-Te, among others. Phase diagrams of this type are particularly common in the III-V and II-VI systems; in these cases the stable compounds are the semiconducting III-V and II-VI compounds. The reaction at the congruent point in Fig. 9.44 is L → γ, where γ is the compound. Compounds that form congruently are particularly easy to make, since they can be obtained by direct solidification (casting or crystal growth) from a liquid of the appropriate composition. Phase diagrams of this type are basic to a number of technologically important processes. Perhaps the most important is the growth of large crystals of III-V and II-VI semiconducting compounds from the melt, which is only possible when the compound has a congruent point with the liquid.
Compounds that form through a peritectic reaction

If a binary system contains a single compound (γ) whose free energy curve is such that its first appearance breaks a solid-liquid (α+L) tie-line, then the compound is derived from a peritectic reaction (α + L → γ) and the simplest phase diagram is like that shown in Fig. 9.45.

Fig. 9.45: The phase diagram of a binary system with an intermediate compound that forms by a peritectic reaction.

Several binary systems have phase diagrams that closely resemble Fig. 9.45, including Bi-Pb, Cd-Sn, Hg-Pb, In-Pb, Hf-W, Mo-Zr, Ru-W and Sn-Tl. The phase diagrams of Mo-Hf and Sb-Sn differ from Fig. 9.45 only in that the β phase at the far end of the phase diagram has a peritectic rather than a eutectic relation to γ. The phase diagrams of many binary systems that form multiple compounds are such that some of these compounds form through peritectic reactions and have local phase relationships like those in the left-hand side of Fig. 9.45.

Because the γ compound in Fig. 9.45 is the product of a peritectic reaction it cannot be cast or grown directly from the melt. Moreover, many of the more useful compounds of this type include elements that diffuse slowly in the solid state, so that it is difficult to make the compound by holding the system at a point within its equilibrium phase field. Technologically important compounds that have this behavior include the A15 superconducting compounds such as Nb3Sn and Nb3Al, high-temperature intermetallic structural materials like Ni3Al, and low-density intermetallics with potential high-temperature structural applications such as the Al-Ti intermetallics. Complex processing techniques such as reaction from a ternary solution, vapor deposition, or powder processing are required to synthesize these compounds.

Finally, note that a phase diagram like that shown in Fig. 9.45 has a eutectic reaction, but the phases that border the eutectic include intermediate compounds (γ in the figure). Nonetheless, a system that has the eutectic composition will solidify into a eutectic microstructure like that illustrated in Fig. 9.27; one or both of the interleaved phases are intermetallics rather than terminal solid solutions.
Compounds that disappear at a eutectoid

Many binary phase diagrams contain ordered compounds that only appear at intermediate temperature. They are stable at high temperature, but eventually disappear if the system is cooled. For this to happen in a simple system that contains only one ordered compound, the common tangent to the free energy curves of the terminal solid solutions must fall beneath the free energy curve of the compound at sufficiently low temperature. This is more likely to happen if the terminal solution is more stable than the compound, and is hence most often observed in systems whose compounds result from a peritectic reaction like that shown in Fig. 9.45. Fig. 9.46 contains a sketch of a simple phase diagram containing a single ordered compound that is confined to intermediate temperature. The top of the phase field of the compound is a peritectic point, α + L → γ. The phase field terminates in a eutectoid reaction, γ → α + β.

Fig. 9.46: Phase diagram of a simple binary system that forms a compound at intermediate temperature.
Several binary systems have phase diagrams that resemble Fig. 9.46 very closely, including Bi-Pb, Cd-Sn and Ru-W. Many other systems contain compounds whose thermal stability is limited by the intrusion of other ordered compounds.
Compounds that form at a peritectoid

If an intermetallic compound first appears in the solid state then it intrudes either into a single-phase region or a two-phase region. In the former case the maximum temperature of the equilibrium field of the compound is a congruent point, as in Fig. 9.35, where the compound forms by a reaction of the type α → γ. In the latter case the maximum temperature is the temperature at which the free energy curve of the compound cuts a two-phase tangent line. The top of the field is a peritectoid point, and the compound forms by a reaction of the type α + β → γ. A simple phase diagram for a binary system with a compound that forms by a peritectoid reaction is shown in Fig. 9.47. Several binary systems have phase diagrams that resemble this one, including Ru-Mo, Ru-Nb and Pd-V. In binary systems that contain several compounds it is common that one or more appear at low temperature through peritectoid reactions.

Fig. 9.47: Phase diagram of a simple system in which a compound appears through a low-temperature peritectoid reaction.
Structural transformation of a compound

Compounds may undergo structural transformations just as pure phases do. In fact, there is a richer set of possibilities for transformations in compounds, since compounds can change in chemical order as well as in basic lattice structure. A compound transforms when there are two separate phases of essentially the same compound (nearly the same stoichiometric composition) whose free energies become equal at some temperature and composition. If the high-temperature phase of the compound were cooled to that temperature and composition it would transform homogeneously to the low-temperature phase. If the two phases are related by a first-order transformation, that is, if they are distinct phases at the transformation point, then they
are represented by different free energy curves, and their first contact on lowering the temperature is at an isolated point. A compound differs from a pure component in that its composition can deviate from stoichiometry in either the positive or the negative sense. At finite temperature the free energy curve of a compound is continuous through its stoichiometric composition (as illustrated, for example, in Fig. 9.25) and its chemical potential is not singular there. This has the consequence that the free energy curves of two phases of essentially the same compound (that is, compounds that have the same stoichiometric composition in the limit T → 0) may first touch one another on cooling at a composition that is off-stoichiometric and possibly outside the equilibrium phase field of the high-temperature phase. The two possibilities are illustrated in Fig. 9.48, which shows free energy curves at a temperature just above that at which a compound, γ, transforms to a second compound, δ, in a system whose phase diagram is like that in Fig. 9.44 at temperatures above that shown. In the left-hand figure the free energy curve of the δ phase contacts that of the γ phase within its region of stability. In the right-hand figure the δ free energy curve contacts the common tangent between the γ phase and the primary β solution.
Fig. 9.48: Possible shapes of the free energy curves near the transformation γ → δ. (a) The δ free energy curve contacts within the γ stability range. (b) The δ free energy curve contacts the γ-β common tangent.
Fig. 9.49: Phase diagram of a binary system in which a high-temperature compound, γ, transforms to a low-temperature compound, δ, at a congruent point, as in Fig. 9.48a.

The situation shown in Fig. 9.48a leads to a phase diagram like that shown in Fig. 9.49. The contact of the γ and δ free energy curves gives rise to a congruent point in the γ phase field at which γ → δ without change of composition. The congruent point is enclosed by two-phase (γ+δ) fields that terminate at eutectic points for the reactions γ → α+δ and γ → β+δ. Structural transformations of compounds that lead to a phase relationship like that drawn in Fig. 9.49 occur in a number of binary systems, including W-C, Ag-Ga, Ag-Li, Au-Zn, Cu-In, Cu-Sn, Mo-C, Mn-Zn, Ni-S, Ni-Sn and Ni-Sb.

When the configuration of the free energy curves near the structural transformation resembles that in Fig. 9.48b, the low-temperature phase, δ, first appears as the product of a peritectic reaction, β + γ → δ. The fact that δ becomes stable means that the δ curve is displaced downward relative to the γ free energy curve as the temperature decreases. The form of the phase diagram near the peritectic can be approximated by translating the δ free energy curve in Fig. 9.48b downwards as T decreases and constructing the successive common tangents. The resulting phase diagram is drawn in Fig. 9.50. The γ and δ stability fields never touch; they are separated by a narrow two-phase region that terminates at a eutectic point where the reaction is γ → α + δ.
Fig. 9.50: Phase diagram for a system in which a compound transforms through a peritectic reaction, as in Fig. 9.48b.

Compound structural transformations of the type that appears in Fig. 9.50 are found in many binary phase diagrams. Among the systems that have reactions of this type are Ag-Cd, Ag-In, Bi-Mg, Co-Cr, Ge-Cu, Cu-Sn, Hf-Ir, Mn-Ni, Mn-Pt, Mn-Zn, Mo-Pt, Ni-V, and Zn-Sb. As this extensive list suggests, the geometry of the transformation in Fig. 9.50 is, in fact, more common than the congruent geometry shown in Fig. 9.49. Its prevalence reflects the narrow width of the equilibrium phase fields of most solid compounds; a small difference in the relative composition dependence of the free energies of the two phases can then shift the first intersection of the two curves out of the stability field of the high-temperature phase.

9.8.6 Mutation lines in binary phase diagrams

The kind of phase transition known as a mutation, or second-order phase transition, was discussed in Section 9.3. In a mutation, one phase simply becomes another. There is no two-phase equilibrium and, hence, there are no two-phase regions associated with mutations. However, in a binary system the critical temperature for a mutation can be a function of composition, and almost always is. Hence the mutation appears as a simple curve in a pseudo-single phase region that contains both of the phases that are related by the mutation. There is also no discontinuity in the boundary of the pseudo-single phase region where the mutation line contacts it. The composition of each phase in equilibrium in a two-phase region is fixed by the temperature. Hence a mutation line is a horizontal isotherm through a two-phase region that gives the temperature at which one phase mutates.
Fig. 9.51: A eutectic system with a mutation in the α terminal solid solution, indicated by the dashed line.
Fig. 9.52: A binary system with an intermediate compound that undergoes a mutation, indicated by the dashed line.

Fig. 9.51 illustrates the appearance of a eutectic phase diagram with a mutation in the α-rich solid solution. The ferromagnetic transitions in Fe, Ni and the rare earths lead to phase relationships like those shown in Fig. 9.51. Fig. 9.52 illustrates the phase relationships in a simple system with an intermediate ordered phase that mutates. Many intermediate compounds undergo ordering reactions that are mutations. The classic example is the β → β' transition in Cu-Zn.

9.8.7 Miscibility gap in the liquid

As a final example we consider a binary system in which a miscibility gap intrudes into the liquid, as it does in many real systems. The simplest system of this type has only the two terminal solid solutions in the solid state. The phase diagram is drawn in Fig. 9.53.
Fig. 9.53: A possible phase diagram for a binary system with a miscibility gap in the liquid. The shaded region is an equilibrium between two liquid phases.

A sequence of free energy relations that lead to a phase diagram like that shown in Fig. 9.53 is drawn in Fig. 9.54. Fig. 9.54a pertains to a temperature just below the melting point of the α phase. The miscibility gap in the liquid is due to the inflection in its free energy curve, which divides it into two stable phases with a common tangent. The three phases α, L1 and L2 appear in the section. Fig. 9.54b is drawn at a lower temperature, at which the liquid phase L1 no longer appears. As shown in the diagram, the free energy curve of the α phase has dropped with respect to that of the liquid, with the consequence that the lowest common tangent connects the α and L2 free energy curves directly. The β free energy curve is everywhere above that of L2. As a consequence there are two phases in the section, α and L2. Fig. 9.54c illustrates the behavior at a still lower temperature, where both the α and β free energy curves are well below that of the liquid. The lowest common tangent in this case connects α and β directly; only these phases appear in an isothermal section through the phase diagram.
Fig. 9.54: Free energy relations at three temperatures in a system with the phase diagram shown in Fig. 9.53: (a) just below the melting point of α; (b) at a T where only α and L2 appear; (c) at a T where only α and β appear.

The binary systems that have phase diagrams resembling Fig. 9.53 include Al-Bi, Al-In, Bi-Zn, Cu-Cr, Cu-Pb, Cu-Tl, Ni-Pb, Pb-Zn and Th-U. These systems contain species that are very different in their chemical behavior, which leads to the miscibility gap in the liquid. A like behavior is seen on the silica-rich side of the SiO2-MgO diagram.

Similar phase relations are found at low temperature in the solid state in a number of systems that form extensive solid solutions, including Al-Zn, Nb-Zr and Hf-Ta. Phase fields like those in Fig. 9.53 then result from a miscibility gap in the solid solution. In the case of Al-Zn the miscibility gap has a bottom because of its interaction with the terminal Zn solid solution. In Nb-Zr and Hf-Ta the bottom of the miscibility gap is due to interference by the low-temperature phase of one of the components; both Hf and Zr have structural transformations at low temperature.
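A miscibility gap of this kind arises in the simplest solution model, the regular (Bragg-Williams) solution of Chapter 8, whenever the interaction parameter Ω is positive; the symmetric gap closes at Tc = Ω/2R. A minimal numerical sketch (the value of Ω is hypothetical):

```python
import numpy as np

R = 8.314           # gas constant, J/(mol K)
omega = 20_000.0    # hypothetical positive interaction parameter, J/mol

def g_mix(x, T):
    """Free energy of mixing of a regular (Bragg-Williams) solution:
       g = omega*x*(1-x) + R*T*[x ln x + (1-x) ln(1-x)]"""
    return omega * x * (1 - x) + R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

# For omega > 0 the symmetric miscibility gap closes at Tc = omega/(2R).
Tc = omega / (2 * R)

x = np.linspace(1e-4, 1 - 1e-4, 2001)

def n_minima(T):
    """Count interior local minima of g_mix(x, T): one above Tc, two
    (the compositions of the coexisting solutions) below it."""
    g = g_mix(x, T)
    interior = (g[1:-1] < g[:-2]) & (g[1:-1] < g[2:])
    return int(np.count_nonzero(interior))

print(round(Tc))           # ~1203 K for this omega
print(n_minima(1.2 * Tc))  # 1 -> single homogeneous solution
print(n_minima(0.8 * Tc))  # 2 -> miscibility gap: the solution unmixes
```

Below Tc the two minima are joined by a common tangent, which is exactly the two-phase (L1+L2 or α'+α'') region that appears in diagrams like Fig. 9.53.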
Chapter 10: Kinetics

Many years ago, on a visit to Korea, I was introduced to a famous palmist. After examining my palm, he noted that I have an exceptionally long "life line."
"Great," I said, "does that mean I can expect a long, long life?"
"Perhaps," he replied. "It is not just the length of the line, but how fast you are moving along it."
10.1 INTRODUCTION

The science of kinetics is concerned with the rate of change. Thermodynamics sets the driving forces that impel changes of state; kinetics determines how quickly those changes happen. When a process can happen in more than one way, the relative kinetics also determine the path taken in a change of state; the system evolves along the path that permits the most rapid rate of change.

The path that is preferred kinetically does not necessarily lead to the preferred equilibrium state. It may, instead, terminate in a metastable state where the system is trapped and cannot evolve further. This fact is central to materials processing, which is the art of manipulating the microstructure to control engineering properties. To achieve a desired microstructure one brings the system into a non-equilibrium state that is chosen so that the desired microstructure is both thermodynamically possible and kinetically achievable. Once the desired microstructure is established, one changes the forces or constraints on the system to "freeze" the microstructure so that it is retained for at least the intended service life of the material.

It is useful to divide changes of state into two types that are based, respectively, on internal and global equilibrium. Internal equilibrium requires that the temperature (T), the pressure (P) and the chemical potentials {µ} have uniform values. If any of these conditions is violated, the material evolves to re-establish it. A local gradient in the temperature causes a flow of heat, a local gradient in the pressure causes a mechanical relaxation that redistributes the volume, and a local gradient in a chemical potential induces a flow of the associated chemical species. These are, ordinarily, continuous changes of state that do not change the basic structure, or phase, of the material.
The stronger condition of thermodynamic equilibrium is the global condition of equilibrium, which asserts that the total entropy of the system and its environment should be as large as possible or, equivalently, that the appropriate thermodynamic potential of the system should be as small as possible. Changes of state that result from violations of the global conditions of equilibrium are called phase transformations. They are discontinuous transitions in the sense that the microstructure is physically different after the transformation has happened. We shall defer the discussion of phase transformations to the following chapter, and consider the kinetics of continuous transitions here. We shall specifically describe the conduction of heat in solids and the diffusion of chemical species through solids. While the microstructural mechanisms of heat and mass flow are very different, they are governed by a differential equation, called the diffusion equation, that has the same form for both.

Heat is transported by particles whose motion is controlled by collisions with the lattice and with one another. Kinetic processes of this sort are called frictional processes. A familiar example is the fall of a solid body through the air; its terminal (steady-state) velocity is determined by the balance between the force of gravity, which tends to accelerate it, and the frictional force of the air, which is due to collisions with air molecules that tend to slow it down. Mass diffusion through solids ordinarily occurs by discrete atom jumps from one equilibrium site to another. In moving from site to site the atom must pass through intermediate positions in which it has a relatively high energy. The energy required to pass through these intermediate configurations is supplied by thermal fluctuations. Kinetic processes of this type are called thermally activated processes; diffusion in the solid state is a prototypic example.

This chapter begins with a discussion of heat transfer as an example of a continuous transition. This example is followed by a brief introduction to the general theory of continuous transitions, called non-equilibrium thermodynamics. The chapter concludes with the analysis of diffusion in solids.
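The common mathematical form referred to above is the one-dimensional diffusion equation, ∂c/∂t = D ∂²c/∂x², with c read as concentration (and D as the diffusivity) for mass flow, or as temperature (and D as the thermal diffusivity) for heat flow. A minimal explicit finite-difference sketch, with illustrative values of D, Δx and Δt that are not taken from the text:

```python
import numpy as np

def diffuse_1d(c0, D, dx, dt, n_steps):
    """Explicit finite-difference solution of the 1-D diffusion equation
         dc/dt = D * d2c/dx2
    with fixed (Dirichlet) values at the two ends.  The explicit scheme is
    stable only when r = D*dt/dx**2 <= 0.5."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable: reduce dt or increase dx"
    c = c0.astype(float).copy()
    for _ in range(n_steps):
        # each interior point relaxes toward the mean of its neighbours
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])
    return c

# Illustrative numbers: a step profile (in concentration, or temperature)
# relaxing toward the steady linear profile between the fixed end values.
n = 51
c0 = np.zeros(n)
c0[: n // 2] = 1.0    # left half initially at c = 1, right half at c = 0
c = diffuse_1d(c0, D=1e-9, dx=1e-6, dt=4e-4, n_steps=5000)
```

The same routine serves for both transport problems; only the interpretation of c and D changes, which is the sense in which heat conduction and solid-state diffusion are "governed by a differential equation that has the same form for both."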
10.2 LOCAL EQUILIBRIUM We ordinarily describe the states of systems that are not in equilibrium by giving the distribution of the thermodynamic forces within them, with the understanding that, if the thermodynamic forces are not uniform, the system will evolve in a direction that makes them so. For example, we describe a material that is heated on one side as having a non-uniform temperature. Strictly speaking, it is incorrect to do this. Temperature is a characteristic of thermodynamic equilibrium, and is not well defined when the conditions of equilibrium are violated. (As an example, those of you who are studying chemical or mechanical engineering may have already encountered the practical problem of measuring the temperature of a flowing fluid, and learned that the temperature you measure may depend on how the measurement is done.) However, provided that the system is not too far from equilibrium we can construct a perfectly useful theory that uses equilibrium properties like the local temperature and entropy and chemical potential. It is worth taking a moment to consider how this can be done.
Materials Science
Fall, 2008
To apply equilibrium thermodynamics to non-equilibrium states we use the assumption of local equilibrium. In most of the continuous transitions that are of interest in materials science, the deviation from equilibrium is imperceptible on the atomic scale. The variation in temperature or composition is significant on a scale of millimeters, or possibly micrometers (microns), but even a micron corresponds to several thousand interatomic spacings. Moreover, the rate of change of thermodynamic quantities is usually very slow compared to the processes that establish equilibrium on the atomic level. The electron cloud around an atom adjusts itself at a natural frequency, called the plasmon frequency, that is about 10^15 cycles per second, and the atom as a whole vibrates with a mean frequency on the order of 10^13 cycles per second. In contrast, sensible changes in the temperature ordinarily require at least a fraction of a second, and significant chemical redistributions require minutes or hours. In the usual situation, an atom comes to an essentially complete equilibrium with its immediate environment long before it senses any macroscopic violation of the conditions of equilibrium. It is, therefore, usually reasonable to assume that any microscopic subvolume of a material is locally in equilibrium at any instant of time.
Fig. 10.1: Macroscopic system divided into differential subvolumes dV(R).

We can construct a non-equilibrium thermodynamics of continuous processes in the following formal way. Let the system have total volume, V, and let it be divided into subvolumes, dV, that are microscopic, but still large on the atomic scale (Fig. 10.1). Each differential volume is instantaneously in equilibrium, so that it has well-defined values of the thermodynamic variables. The position of a particular volume element, dV, can be designated by its vector position, R, in a coordinate system fixed in space. It has the instantaneous temperature T(R), pressure P(R), and chemical potentials {µ(R)}. When the thermodynamic state is non-uniform these are continuous functions of the position variable, R.

When two adjacent differential volumes are not in equilibrium with one another their interaction changes the state of the system. They exchange heat and mass across their fixed boundaries. The process by which this happens can be modeled by replacing the system by an idealized one that remains in equilibrium at all times. Let the differential volume elements be separated by partitions that isolate them from one another, but periodically become transparent so that heat or mass can move across them.
While the partitions are impermeable each differential volume element comes to equilibrium. When the partitions are transparent each element exchanges incremental quantities of heat and mass with its neighbors. The time interval between periods of transparency is taken to be long enough to establish internal equilibrium, but short enough that the system seems to evolve continuously on the time scale that is pertinent to macroscopic changes. Each differential volume element of the ideal system is in a well-defined equilibrium thermodynamic state at any instant of time. However, the system as a whole is in a non-equilibrium state that changes continuously. We use this idealized system to model and predict the behavior of real systems in non-equilibrium situations.

The simplest case of non-equilibrium behavior is the conduction of heat through a material whose pressure and composition are fixed. We consider this case first, and then return to a discussion of the general equations of non-equilibrium thermodynamics.
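The idealized partition construction can be caricatured in a few lines of code. The sketch below is an illustration only: the number of elements, the initial temperatures, and the exchange fraction are arbitrary choices, not quantities from the text.

```python
# A caricature of the partition model described above: a row of volume
# elements, each internally equilibrated, periodically exchanges a small
# increment of heat with its neighbors. All numbers are arbitrary choices.

def exchange_step(T, f=0.2):
    """One period of 'transparent partitions': each pair of neighboring
    elements exchanges heat dq proportional to their temperature difference,
    moving both toward mutual equilibrium (equal heat capacities assumed)."""
    T = T[:]                      # leave the caller's list untouched
    for i in range(len(T) - 1):
        dq = f * (T[i + 1] - T[i])
        T[i] += dq
        T[i + 1] -= dq
    return T

# Elements start at different temperatures; repeated exchanges drive the
# row toward a uniform temperature while conserving the total heat content.
T = [500.0, 400.0, 350.0, 300.0]
for _ in range(500):
    T = exchange_step(T)
assert max(T) - min(T) < 1e-6            # essentially uniform
assert abs(sum(T) - 1550.0) < 1e-6       # heat conserved
```

Each element is always in a well-defined state (a single temperature), yet the row as a whole relaxes continuously toward uniformity, which is exactly the sense in which the idealized system models a non-equilibrium process.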
10.3 THE CONDUCTION OF HEAT

10.3.1 Heat conduction in one dimension: Fourier's Law

Consider a solid whose atoms are immobile, so that atomic diffusion can be neglected, and assume that the temperature varies in the x-direction, T = T(x). This means that two adjacent subvolumes have different temperatures, as shown in Fig. 10.2. According to the Second Law, the temperature difference induces a flow of heat across the boundary, as shown in the figure. The heat flow (δQ) is in the direction of decreasing temperature and vanishes when the temperature difference vanishes. The simplest equation that has these properties is

δQ = − C(ΔT)    10.1
where C is a positive constant.
Fig. 10.2: Heat flow JQ from one differential volume to another in response to a temperature difference, T2 > T1.

The constant, C, in equation 10.1 cannot be a material property, since both the temperature difference, ΔT, and the heat flow, δQ, depend on the specific geometry of the differential volume elements. When the local temperature variation is small the temperature difference can be written
ΔT = (dT/dx) Δx    10.2
where x is a coordinate in the direction that connects the centers of the volume elements, Δx is the distance between the centers, and dT/dx is the temperature gradient in the direction, x. The flow of heat per unit time from one volume element to the other can be written

δQ = JQ dA    10.3
where dA is the area of the interface and JQ is the heat flux perpendicular to the interface, that is, the heat flow across the interface per unit area per unit time. Substituting equations 10.2 and 10.3 into 10.1 yields the relation

JQ = − k (dT/dx)    10.4
Equation 10.4 relates the heat flux to the temperature gradient. The equation holds independent of the geometry of the differential volume elements, and the coefficient, k, is a material property called the thermal conductivity. Equation 10.4 is the one-dimensional form of Fourier's Law of Heat Conduction. According to equation 10.4 the local deviation from thermal equilibrium produces a driving force, the temperature gradient, dT/dx. The response to this force is a thermodynamic flux, JQ, in a direction that would relieve the driving force. The magnitude of the flux induced by a given force is determined by the thermal conductivity, which is a property of the material.

When heat flows in accordance with Fourier's Law it may or may not change the temperature of the volume element that receives it. For the temperature to change there must be an accumulation of energy in the volume element, which only happens if there is a net flux of heat across its boundary. To find the equation that governs the temperature change consider the one-dimensional case shown in Fig. 10.3.
Fig. 10.3: One-dimensional flow of heat through volume elements at temperatures T3 > T2 > T1, with fluxes JQ23 and JQ12.

Heat flows into the central volume element in Fig. 10.3 from the left, and flows out across the interface to the right. The net heat added per unit time, δQT, is

δQT = (JQ23 − JQ12) dA    10.5
where dA is the interface area. If the volume element is small and the flux is continuous then the flux at the 12 interface is related to that at the 23 interface by

JQ12 = JQ23 + (dJQ/dx) dx    10.6
where dx is the length of the volume element. It follows that

δQT = − (dJQ/dx) dV = (d/dx)[k (dT/dx)] dV    10.7
where we have used Fourier's Law (equation 10.4). When the temperature of a differential volume element is changed by dT at constant volume (dV) and composition, the change in energy is equal to the heat added, and is proportional to the change in temperature:

dE = δQ = Cv dT dV    10.8
where Cv is the isometric specific heat. Dividing equation 10.8 by the time differential, dt, substituting equation 10.7, and dividing through by the volume, dV, we obtain the partial differential equation that determines the change in the temperature as a function of position and time, that is, the partial differential equation that determines the function, T(x,t):

∂T/∂t = (1/Cv) ∂/∂x [k (∂T/∂x)]    10.9
When the thermal conductivity is constant this simplifies to

∂T/∂t = (k/Cv) ∂²T/∂x²    10.10
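Equation 10.10 can be integrated numerically. The sketch below uses the simplest explicit finite-difference scheme; the diffusivity, grid, time step, and boundary temperatures are illustrative assumptions, not data from the text.

```python
# A minimal explicit finite-difference integration of eq. 10.10,
# dT/dt = (k/Cv) d2T/dx2, with the ends of the rod held at fixed
# temperatures. All numerical values are assumed for illustration.

def step_heat_1d(T, alpha, dx, dt):
    """Advance the temperature profile one time step with the FTCS scheme:
    T_new[i] = T[i] + r*(T[i-1] - 2*T[i] + T[i+1]), r = alpha*dt/dx**2.
    The two end temperatures are held fixed (isothermal boundaries)."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    return ([T[0]]
            + [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
               for i in range(1, len(T) - 1)]
            + [T[-1]])

alpha = 1.0e-4        # thermal diffusivity k/Cv, m^2/s (assumed)
dx, dt = 0.01, 0.4    # grid spacing (m) and time step (s); r = 0.4
T = [400.0] + [300.0] * 9   # rod at 300 K with one end held at 400 K
for _ in range(2000):
    T = step_heat_1d(T, alpha, dx, dt)
# The profile relaxes toward the linear steady state between 400 K and 300 K.
```

With r = alpha·dt/dx² kept below 1/2 the scheme is stable, and the profile relaxes to the linear steady state expected when heat no longer accumulates anywhere in the rod.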
The partial differential equation 10.10 is an example of the diffusion equation, whose solutions are tabulated in standard texts on differential equations.

10.3.2 Heat conduction in three dimensions

In a more typical case the temperature is a function of position in three-dimensional space, and is described by the function, T(R,t), where

R = x1e1 + x2e2 + x3e3    10.11
is the position vector, and x1, x2, x3 are the coordinates of the point, R, in a Cartesian coordinate system with the unit vectors e1, e2, e3. In the three-dimensional case the local variation in the temperature is described by the gradient,
∇T = (∂T/∂x1) e1 + (∂T/∂x2) e2 + (∂T/∂x3) e3    10.12
which is a vector that has both magnitude and direction. The heat flux that is induced by the temperature gradient is also a vector, JQ. The equation that relates JQ to ∇T (the three-dimensional form of Fourier's Law) is complicated by the fact that the heat flux is not necessarily parallel to the temperature gradient. The component of the heat flux in the direction e1 depends on all three components of the temperature gradient:

JQ1 = − k11 (∂T/∂x1) − k12 (∂T/∂x2) − k13 (∂T/∂x3)    10.13
where k11, k12 and k13 may be different material constants. The other two components of the heat flux obey similar equations. Hence there are a total of nine thermal conductivities of the form kij (i,j = 1,2,3). These nine coefficients obey the symmetry relation kij = kji (for example, k12 = k21), so even the most general material has at most six independent thermal conductivities. Fortunately, equation 10.13 takes a much simpler form in most of the cases of interest to us.

Equations like 10.13 that describe the behavior of materials are called constitutive equations. A constitutive equation must predict a behavior that is compatible with the symmetry of the material; if the direction of the temperature gradient is changed to a symmetrically equivalent direction, as from one direction to another in a cubic crystal, then the direction of the induced flux must change in exactly the same way. A material with an amorphous structure is the same in every direction, and is said to have isotropic symmetry. A material that has a cubic crystal structure is said to have cubic symmetry. It can be shown that the symmetry of an isotropic or cubic material has the consequence that the off-diagonal coefficients, kij (i ≠ j), vanish, while the diagonal components, kij (i = j), are all equal. Hence when the material is isotropic or cubic the heat flux is parallel to the temperature gradient:

JQ = − k ∇T    10.14
where the thermal conductivity, k, has the same value in all directions. In a cubic or isotropic material the heat flux in each of the three coordinate directions obeys an equation of the form 10.4, for example,

JQ1 = − k (∂T/∂x1)    10.15
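The content of equations 10.13-10.15 can be made concrete with a short numerical sketch; the conductivity tensors and the gradient below are invented for illustration, not data for any real crystal.

```python
# Sketch of the constitutive law 10.13, J_i = -sum_j k_ij dT/dx_j, and its
# cubic/isotropic special case 10.14. All values are invented illustrations.

def heat_flux(K, gradT):
    """Heat flux vector from a 3x3 conductivity tensor K and gradient of T."""
    return [-sum(K[i][j] * gradT[j] for j in range(3)) for i in range(3)]

gradT = [100.0, 50.0, 0.0]      # temperature gradient, K/m (assumed)

# General anisotropic crystal: symmetric tensor, k_ij = k_ji; the flux need
# not be parallel to the gradient.
K_aniso = [[20.0, 5.0, 0.0],
           [5.0, 10.0, 0.0],
           [0.0, 0.0, 8.0]]
J_aniso = heat_flux(K_aniso, gradT)      # [-2250.0, -1000.0, -0.0]

# Cubic or isotropic material: k_ij = k on the diagonal, zero elsewhere,
# so the flux reduces to J = -k * gradT (eq. 10.14), antiparallel to gradT.
k = 15.0
K_cubic = [[k if i == j else 0.0 for j in range(3)] for i in range(3)]
J_cubic = heat_flux(K_cubic, gradT)      # [-1500.0, -750.0, -0.0]
```

For the cubic tensor the induced flux is exactly antiparallel to the gradient, which is the content of equation 10.14; the anisotropic tensor rotates the flux away from the gradient direction.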
Most materials of engineering interest are isotropic or cubic to a good approximation. Amorphous solids and glasses are isotropic, materials with cubic crystal structures
are cubic, and polygranular materials that are aggregates of many crystalline grains with random orientations (random polycrystals) are isotropic in the macroscopic sense whatever their crystal structures. Hence the vector form of the Fourier Law of Heat Conduction given in equation 10.14 is usually adequate.

When the material is cubic or isotropic the three-dimensional equation that governs the change in the temperature with time can be derived by considering the total rate of accumulation of heat in a cubic volume element. The heat flow along each of the three coordinate directions obeys an equation of the form 10.15, and the total heat accumulated per unit time is obtained by summing the heat contributed by flow in the three coordinate directions. It follows from equation 10.7 that

δQT = − (∂JQ1/∂x1 + ∂JQ2/∂x2 + ∂JQ3/∂x3) dV = − (∇·JQ) dV    10.16
where the symbol ∇·JQ is the vector notation for the quantity in brackets, and is called the divergence of the vector JQ. Using equation 10.8 and assuming that the thermal conductivity is constant, it follows by a derivation analogous to the one that leads to equation 10.10 that the temperature distribution through the body, T(R,t), is governed by the partial differential equation

∂T/∂t = (k/Cv)(∂²T/∂x1² + ∂²T/∂x2² + ∂²T/∂x3²) = (k/Cv) ∇²T    10.17
where the symbol ∇²T is a vector notation for the quantity in brackets, which is called the Laplacian of T.

10.3.3 Heat sources

The temperature change in a differential volume of the solid may also be affected by the presence of heat sources, where heat is created, or heat sinks, where heat is lost. If there are heat sources or sinks distributed through the volume that produce a net heat, q̇, per unit volume per unit time, then equation 10.16 must be changed to

δQT = (− ∇·JQ + q̇) dV    10.18
so that the evolution of the temperature is governed by the partial differential equation

Cv (∂T/∂t) = k ∇²T + q̇    10.19
The most common heat source is a chemical reaction or first-order phase transition within the volume that produces a latent heat, QL, per mole of product. The latent heat was discussed in Chapter 7. It may be positive or negative; a phase transition to a high-temperature phase is endothermic, so heat is absorbed and QL is negative. The rate at which heat is evolved per unit volume of material, q̇, is the product of the latent heat per mole and the number of moles created per unit volume per unit time, ṅ:

q̇ = QL ṅ    10.20
A second source of heat added to a volume of material is the absorption or emission of radiation. The most common type of radiation is the thermal radiation between bodies of different temperature, which gives rise to a heat flux that is proportional to the fourth power of the temperature. Thermal radiation provides a heat source or sink at the external surface of an opaque body. The net flux to a surface at temperature T from a medium at temperature T0 is

JQ = aσB(T0⁴ − T⁴)    10.21
where a is the absorption coefficient for the material and σB is the Stefan-Boltzmann constant. The absorption coefficient is a material property that depends both on the nature of the material and on the nature of its surface; the more reflective the surface, the lower the value of the absorption coefficient.

There is also a net radiative heat transfer from point to point within a material whose temperature is non-uniform. However, this internal radiative transfer is included in the thermal conductivity. According to equation 10.21, radiant heat transfer in an opaque body gives a contribution to the thermal conductivity that is proportional to T³. This contribution is negligible except in semi-transparent materials at very high temperature.

The absorption of radiation creates a heat source when the radiant photons are absorbed at discrete sites within the body. Examples include x-rays and energetic ions, which penetrate well into a solid before being absorbed, and light photons in a transparent body, which are often absorbed at discrete impurity atom sites (color centers) that are distributed through the body. Radiation can also create an internal heat source when energy is liberated by the radioactive decay of atoms within the material.
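As a numerical illustration of equation 10.21 (the absorption coefficient and the two temperatures below are assumed values, not data from the text):

```python
# Numerical sketch of eq. 10.21. The Stefan-Boltzmann constant is the
# standard value; the absorption coefficient and the two temperatures are
# assumed for illustration.

SIGMA_B = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_flux(a, T_surface, T_medium):
    """Net radiative heat flux (W/m^2) absorbed by a surface at T_surface
    from a medium at T_medium; positive when the surface is heated."""
    return a * SIGMA_B * (T_medium ** 4 - T_surface ** 4)

# A surface at 300 K facing a furnace wall at 1000 K, absorption coeff. 0.8:
J = radiative_flux(0.8, 300.0, 1000.0)     # roughly 4.5e4 W/m^2
# The net flux vanishes when surface and medium are at the same temperature:
assert radiative_flux(0.8, 800.0, 800.0) == 0.0
```

Because of the fourth-power dependence, the hot-side term dominates: here the 1000 K medium contributes about a hundred times more than the 300 K surface radiates back.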
10.4 MECHANISMS OF HEAT CONDUCTION

Heat is transported through a solid by three carriers: lattice vibrations, conduction electrons, and optical photons. Since elementary lattice vibrations can be treated as particles, called phonons, as we discussed in Chapter 8, all three cases can be visualized as heat conduction by energetic particles that move through the solid. In the course of their
motion they occasionally collide with one another or with the crystal lattice itself. These collisions produce an effective friction that opposes their motion. The type of carrier that is mainly responsible for the transport of heat depends on the type of material and the temperature, and is also sensitive to the microstructure. Phonons are the predominant carriers in all materials at very low temperature. Electrons become the predominant carriers in metals at moderate temperature. Photons are relatively unimportant except in insulators at very high temperature.

10.4.1 Heat conduction by a gas of colliding particles

The basic equation that governs thermal conductivity can be derived by considering thermal conduction in a dilute gas of particles that collide with one another. The theory of this process was originally developed to treat the thermal conductivity of a gas of atoms, but also provides a reasonable description of thermal conduction by "gases" of electrons, phonons or photons in solids.

Assume a gas of particles that move at an average speed, v, in directions that are randomly oriented in space. The particles occasionally collide with one another. Assume that they travel an average distance, ⟨l⟩, the mean free path, between collisions, where ⟨l⟩ = vτ and τ is the mean time between collisions. Let the gas have a temperature, T(x), that varies in the x-direction, and assume that the particles come to thermal equilibrium at the local value of the temperature when they collide.
Fig. 10.4: Flux of particles in one dimension. The particles in the region shown that travel in the positive direction cross the shaded surface in unit time.

Now let an imaginary plane be placed perpendicular to the x-axis, as shown in Fig. 10.4. The number of particles that cross the plane in the positive direction per unit time is (1/2)nvx, where vx is the average speed of travel in the x-direction and the factor 1/2 appears because half of the particles are moving in the positive x-direction at any given time. If these particles carry an average energy, ε, then the net flux of energy across the plane in the positive x-direction is

J+ = (1/2) n ε vx = (1/2) EV(T) vx    10.22
where EV(T) (= nε) is the energy of the mobile particles per unit volume, which depends on the temperature. The typical particle that crosses the plane had its most recent collision a distance of the order ⟨lx⟩ away, where ⟨lx⟩ is the mean free path of travel in the x-direction between collisions. Hence

J+(x) = (1/2) EV[T(x − ⟨lx⟩)] vx    10.23
where J+(x) is the flux in the positive direction across a plane at x, and EV[T(x − ⟨lx⟩)] is the energy density at the position, x − ⟨lx⟩, where the previous collisions occurred. The energy density depends on the position, x, through the local value of T at that position. Since particles move in both directions along the x-axis, there is also a flux of energy across the plane in the negative x-direction, equal to

J−(x) = (1/2) EV[T(x + ⟨lx⟩)] vx    10.24
Hence the net flux in the positive x-direction across a plane at x is

J(x) = (1/2) vx {EV[T(x − ⟨lx⟩)] − EV[T(x + ⟨lx⟩)]}    10.25
Let the temperature gradient be constant over the mean free path, ⟨lx⟩, as it is, at least approximately, when ⟨lx⟩ is a microscopic distance and the temperature variation is macroscopic. Then

EV[T(x − ⟨lx⟩)] = EV[T(x)] − (dEV/dT)(dT/dx) ⟨lx⟩ = EV[T(x)] − Cv ⟨lx⟩ (dT/dx)    10.26
where Cv is the isometric specific heat. If we write a similar relation for the energy density at x + ⟨lx⟩, and substitute the two relations into equation 10.25, the result is

J(x) = − Cv vx ⟨lx⟩ (dT/dx)    10.27
which has the form of Fourier's Law for heat conduction, with the thermal conductivity

k = Cv vx ⟨lx⟩ = Cv vx² τ    10.28
where τ is the mean time between collisions. Since the particles are equally likely to move in any direction,
vx² = vy² = vz² = (1/3) v²    10.29
where v is the average particle speed, and

k = (1/3) Cv v² τ = (1/3) Cv v ⟨l⟩    10.30
where ⟨l⟩ is the mean free path. Equation 10.30 is valid whatever the nature of the conducting particles, provided that they travel with velocity, v, and equilibrate by collisions with the mean free path, ⟨l⟩. The electrons, phonons and photons that carry heat through a solid behave in this way.

10.4.2 Heat conduction by mobile electrons

Conduction electrons are the primary carriers of heat in metals at normal temperatures. This may seem surprising, since we found in Chapter 8 that the specific heat is primarily due to lattice vibrations (phonons). However, while the phonon specific heat of a metal at room temperature is roughly an order of magnitude greater than the electron specific heat, electrons travel at very high speeds, ≈ 10^8 cm/sec, while phonons travel at the speed of sound, ≈ 10^5 cm/sec. Hence the electrons make the greater contribution to the thermal conductivity. In a pure metal at room temperature the electron conductivity is ≈ 30 times the phonon conductivity. In a disordered alloy it is roughly 3 times greater.

The mean free path of the electrons is determined by their collisions with the crystal lattice. These collisions transfer energy to the lattice and cause the electron distribution to equilibrate at the local value of the lattice temperature. The electron-lattice collisions are also responsible for the electrical resistance of the metal, and we shall discuss them further when we treat electronic properties. The collisions have two sources: collisions with phonons and collisions with defects, primarily solute or impurity atoms. Both types of collisions are due to the interaction of the electron with a local disturbance of the crystal pattern of the ion cores. If the solid were a perfectly ordered crystal in which all ion cores were sited in their equilibrium positions, the valence electrons would not interact with the lattice at all.
The reason is that the electrons are in quantum states that are determined by the ion configuration. The interaction with the ions is incorporated into the wave function of the electron itself. The similar situation in the free atom is, perhaps, more familiar. The electrons in an atom orbit around the nucleus. Their interaction with the nucleus determines the wave functions that describe their states; they do not collide with the nucleus in any other way as they move about in the ion core.

In a real crystal the perfect pattern of the ion cores is disturbed in two ways. Because of thermal oscillations the ions are not instantaneously situated on their
equilibrium sites, and in the neighborhood of defects or solutes the charge distribution is locally disturbed by the different charge distribution of the solute and the distortion of the positions of the lattice atoms around it. Hence a traveling electron "sees" lattice vibrations and defects as disturbances in the perfectly periodic charge distribution its wave function is designed to expect. Its motion is perturbed by these disturbances. We describe the perturbation by saying that the electron periodically "collides" with them and is "scattered" by them. The mean free path between collisions of conduction electrons and phonons in a pure metal at room temperature is 100-600 Å. The mean free path between collisions with solute atoms is of the order of the mean separation of the solutes, and is, hence, proportional to n^(−1/3), where n is the number of impurity atoms per unit volume.

The collisions between the electrons and the lattice determine the electrical conductivity as well as the thermal conductivity, so we might expect the two to be related. The relation between them is known as the Wiedemann-Franz Law,

k = LσT = LT/ρ    10.31
where L is a constant, called the Lorenz number, σ is the electrical conductivity, and ρ = 1/σ is the electrical resistivity. The Wiedemann-Franz Law can be understood by referring back to equation 10.30. The electrical resistivity, ρ, is inversely proportional to the mean free path, ⟨l⟩, since the motion of an electron through the lattice is resisted by its collisions. The electronic specific heat, Cv, is proportional to the temperature, T, as we found in Chapter 8. Hence the product, Cv⟨l⟩, yields a relation of the form given in equation 10.31.

The resistivity of a typical metal varies linearly with the temperature:

ρ = ρ0 + bT    10.32
where ρ0 is the residual resistivity, due to solutes, impurities and lattice defects, and the linear increase in ρ with T is due to electron collisions with lattice phonons. This simple equation leads to the following rough approximation for the thermal conductivity:

k ≈ LT/(ρ0 + bT)    10.33
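Equations 10.31-10.33 and their two limiting forms can be checked numerically. In the sketch below the Lorenz number is the theoretical value, while ρ0 and b are assumed, copper-like magnitudes, not measured data.

```python
# Numerical sketch of eqs. 10.31-10.33, k ~ L*T/(rho_0 + b*T). The Lorenz
# number is the theoretical value; rho_0 and b are assumed magnitudes for
# a dilute copper-like alloy, not measured data.

L = 2.44e-8      # Lorenz number, W*Ohm/K^2
rho_0 = 1.0e-9   # residual resistivity, Ohm*m (assumed)
b = 6.8e-11      # phonon contribution to resistivity, Ohm*m/K (assumed)

def k_electronic(T):
    """Electronic thermal conductivity (W/m/K) from the Wiedemann-Franz
    law with the linear resistivity of eq. 10.32."""
    return L * T / (rho_0 + b * T)

# High temperature: b*T dominates the resistivity, so k flattens out at L/b.
k_hi = k_electronic(1000.0)
assert abs(k_hi - L / b) / (L / b) < 0.05

# Low temperature: rho ~ rho_0, so k grows linearly, k ~ L*T/rho_0.
k_lo = k_electronic(1.0)
assert abs(k_lo - L * 1.0 / rho_0) / (L * 1.0 / rho_0) < 0.1
```

The crossover between the two regimes occurs where ρ0 and bT are comparable; for the assumed constants this is near 15 K, which is why the linear regime is only seen at very low temperature in nearly pure metals.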
At high temperature the thermal contribution to the resistivity dominates, and

k ≈ L/b    10.34
so the thermal conductivity approaches a constant. At low temperature the residual resistivity, ρ0, dominates the denominator, and
k ≈ LT/ρ0    10.35
so thermal conduction by electrons increases approximately linearly with T. The low-temperature limit, equation 10.35, is only reached in alloys and defective metals with high values of ρ0, since, as we shall see, phonon conductivity becomes dominant at sufficiently low temperatures.

The thermal conductivity of a metallic solid solution (alloy) is always much lower than that of the pure metal because of its higher residual resistivity, ρ0. Highly alloyed materials, such as stainless steels, have relatively low thermal conductivities. A familiar example of the engineering application of this result is in the design of stainless steel skillets, which are sometimes plated with high-conductivity copper to distribute heat and prevent the development of hot spots in the skillet. For the same reason aluminum kitchen foil is made of aluminum that is alloyed as little as possible.

10.4.3 Heat conduction by phonons

Lattice vibrations, or phonons, are the principal carriers of heat in all materials at very low temperature, and dominate the thermal conductivity of insulating materials at all but very high temperatures. When a solid is heated, for example, on one of its surfaces, the local increase in temperature excites lattice vibrations that propagate through the lattice. Since the vibrations are harmonic waves, we can picture them as particles (phonons) that move through the lattice. The phonons experience two distinct types of collisions: they collide with one another, and they collide with defects in the lattice, such as solute atoms.
Phonon-phonon collisions

Phonon-phonon collisions occur because the interatomic forces are slightly anharmonic, as discussed in Chapter 8. Because of this anharmonicity, an atom that is initially given a simple harmonic vibration gradually changes its frequency. We can model this process by saying that lattice phonons excite one another by collisions. As we discussed in Chapter 8, a one-dimensional lattice vibration is described by a wave function of the form

u(x,t) = u0 exp[i(kx − ωt)]    10.36
where u(x,t) is the displacement of an atom at position, x, at time, t, k is the wave vector of the vibrational wave, and ω is its frequency, which depends on k through the dispersion relation. The wave vector, k, lies in the range − π/a ≤ k ≤ π/a that defines the first Brillouin zone, where a is the atom spacing in the x-direction.
The momentum of the lattice wave is given, in quantum mechanics, by the momentum operator, px = − iℏ(∂/∂x), which for the wave 10.36 gives

px = ℏk    10.37
Hence conservation of momentum in phonon-phonon collisions requires that the total wave vector be conserved. If phonons with wave vectors k1 and k2 collide to produce phonons with wave vectors k3 and k4, we must have k1 + k2 = k3 + k4
10.38
so the net direction of propagation of the lattice waves is not changed. However, it is possible to have a phonon-phonon collision that is consistent with equation 10.38 in which one of the product vectors exceeds the value, π/a. We saw in Chapter 8 that a phonon with k ≥ π/a is indistinguishable from one with

k′ = k − 2π/a    10.39
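The zone-folding rule expressed by equations 10.38 and 10.39 can be illustrated with a short sketch for a one-dimensional chain; the wave vectors below are chosen arbitrarily.

```python
# Illustration of eqs. 10.38-10.39 for a one-dimensional chain: a wave
# vector produced in a collision is folded back into the first Brillouin
# zone (-pi/a, pi/a]. The wave vectors chosen are arbitrary.
import math

a = 1.0                   # lattice parameter (arbitrary units)
G = 2 * math.pi / a       # smallest reciprocal lattice vector, 2*pi/a

def fold(k):
    """Map a wave vector into the first Brillouin zone (eq. 10.39)."""
    while k > math.pi / a:
        k -= G
    while k <= -math.pi / a:
        k += G
    return k

# Normal process: the sum k1 + k2 stays inside the zone; phonon momentum
# is fully conserved and the direction of energy transport is unchanged.
assert fold(0.3 * math.pi + 0.2 * math.pi) == 0.3 * math.pi + 0.2 * math.pi

# Umklapp process: the sum exceeds pi/a, so the product phonon is folded
# back by 2*pi/a; the phonon system gives momentum 2*hbar*pi/a to the
# lattice and the direction of propagation is reversed.
k_sum = 0.8 * math.pi + 0.7 * math.pi    # 1.5*pi/a, outside the zone
k_folded = fold(k_sum)                   # folded to -0.5*pi/a
assert k_folded < 0 < k_sum
```

The sign reversal in the second case is the essential point: the folded phonon carries energy in the opposite direction, which is why Umklapp collisions degrade the net heat flux.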
Hence when a collision produces a phonon with k ≥ π/a, it is indistinguishable from one with k′ = k − 2π/a; a large fraction of its momentum is apparently lost. This momentum is not actually lost from the system; it is transferred to the crystal lattice. However, it is lost from the phonon spectrum. Since the net phonon momentum decreases, the net transport of energy by phonons goes down, and the phonon thermal conductivity is diminished. Inelastic phonon collisions of this sort are called Umklapp processes, and they are responsible for the finite value of the phonon thermal conductivity in a solid. A more elaborate analysis shows that an equation of the form 10.30,

k = (1/3) Cv v ⟨l⟩    10.30
applies to the phonon conductivity, k, where Cv is the vibrational specific heat and ⟨l⟩ is the mean free path of the phonons between inelastic collisions.

The mean free path between inelastic collisions is strongly dependent on the temperature. A simple relation can be found from the following one-dimensional argument. Since inelastic collisions require a product phonon with k ≥ π/a, and since momentum is conserved, these collisions must involve at least one phonon with an initial wave vector, k ≥ π/2a. In the Debye model these phonons have energies, E ≥ kΘD/2, where ΘD is the Debye temperature. The mean free path between such collisions should decrease with the density of these high-energy phonons, giving
⟨l⟩ ∝ 1/n(kΘD/2)    10.40
where n(kΘD/2) is the equilibrium density of high-energy phonons. We found in Chapter 8 that the equilibrium density of phonons with energy, ℏω, has a simple form in the high- and low-temperature limits:

n(ℏω) ~ exp(− ℏω/kT)    (ℏω >> kT)
n(ℏω) ~ kT/ℏω    (ℏω << kT)    10.41

When T is above the Debye temperature, ΘD, the specific heat is constant and the phonon mean free path is inversely proportional to the temperature (eq. 10.42). It follows that the phonon conductivity, k, is given by a relation of the form

k = (1/3) Cv v ⟨l⟩ ~ A/T    (T ≥ ΘD)    10.44
where A is a constant.

The ordering transformation can proceed by congruent nucleation at temperatures below T0, but becomes spontaneous at the instability temperature, Ti. First consider transformation at the stoichiometric composition, indicated by x2 in Fig. 11.53. In this case the α solution becomes metastable with respect to congruent nucleation of the γ phase as soon as its temperature falls into the γ region. It remains metastable until the temperature drops below the instability line shown in the figure. If nucleation has not happened when this point is reached the system spontaneously becomes ordered; A-atoms spontaneously move to A-sites and B-atoms to B-sites.

The kinetic diagram for the stoichiometric reaction is like that shown in Fig. 11.54. The nucleated transformation becomes possible when the temperature drops below T0. If the transformation has not happened when the temperature reaches Ti, the instability temperature, then a homogeneous ordering reaction initiates spontaneously.

11.16.4 Ordering at an off-stoichiometric composition

The transformation behavior is qualitatively different when the composition is off-stoichiometric, for example, the composition x1 in Fig. 11.53. To understand the transformation behavior in this case it is useful to consider the free energy curves of the disordered, α, and ordered, γ, phases, which change with temperature roughly as shown in Fig. 11.55. As the temperature is lowered the free energy curve for the γ phase drops relative to that of the α phase. In addition, the composition at which the α phase becomes unstable (the solid curve in Fig. 11.53) decreases. This composition is indicated by the termination of the α free energy curve in Fig. 11.55.
Fig. 11.55: Possible form of the free energy curves of the α and γ phases in Fig. 11.53 as the temperature decreases. The dot marks a disordered solution with composition, x1.

When a solution of composition, x1, is cooled to a temperature just inside the two-phase region the free energy curves appear like those shown in Fig. 11.55a. The system is metastable, but can only transform by nucleating a solute-rich γ phase. If the sample is cooled to a lower temperature at a rate that is fast enough to suppress nucleation of the equilibrium phase, then α becomes metastable with respect to the formation of a non-stoichiometric ordered phase, γ, that has the same composition, as illustrated in Fig. 11.55b. Hence congruent nucleation becomes possible. However, as the temperature drops the composition at which the disordered solution becomes thermodynamically unstable decreases, so the termination of the free energy curve of the α phase moves closer to x1. Finally, at the instability temperature, Ti, an α phase of composition, x1, becomes unstable with respect to spontaneous ordering into the γ phase, as illustrated in Fig. 11.55c. If the solution is cooled to Ti quickly enough to suppress congruent nucleation, the system orders spontaneously at Ti.
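The distinction between the metastable and unstable regimes can be illustrated with a minimal Bragg-Williams-type free energy of the kind introduced in Chapter 8. The functional form and the value chosen for the ordering energy below are illustrative assumptions, not taken from Fig. 11.55.

```python
# A minimal Bragg-Williams-type sketch of the instability at Ti. The free
# energy per atom of a 50-50 solution with long-range order parameter eta
# is modeled as f = -(V/2)*eta^2 + (kB*T/2)*[(1+eta)ln(1+eta)
# + (1-eta)ln(1-eta)]; the form and the value of V are illustrative
# assumptions, not derived from Fig. 11.55.
import math

kB = 1.380649e-23     # Boltzmann constant, J/K
V = kB * 600.0        # ordering energy chosen so that Ti = V/kB = 600 K

def f(eta, T):
    """Free energy per atom (J), up to terms independent of eta."""
    s = (1 + eta) * math.log(1 + eta) + (1 - eta) * math.log(1 - eta)
    return -0.5 * V * eta ** 2 + 0.5 * kB * T * s

def curvature_at_zero(T, h=1e-4):
    """Numerical d2f/deta2 at eta = 0; analytically this is kB*T - V."""
    return (f(h, T) - 2 * f(0.0, T) + f(-h, T)) / h ** 2

# Above Ti the disordered state eta = 0 is a local minimum of f: the
# solution is metastable and can only order by nucleation. Below Ti it is
# a local maximum: ordering proceeds spontaneously.
assert curvature_at_zero(700.0) > 0    # T > Ti: nucleation required
assert curvature_at_zero(500.0) < 0    # T < Ti: spontaneous ordering
```

The sign change of the curvature at η = 0 is the one-parameter analogue of the termination of the α free energy curve: above Ti small ordering fluctuations raise the free energy and decay, while below Ti they lower it and grow.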
Fig. 11.56: Kinetic diagram (ln(τ) vs. undercooling, ΔT) for the ordering of an off-stoichiometric disordered solution.

The kinetic diagram for a transformation at composition, x1, appears as shown in Fig. 11.56. Slow cooling leads to incongruent nucleation of the equilibrium ordered phase, more rapid cooling produces congruent nucleation of an off-stoichiometric ordered phase, and quenching leads to spontaneous ordering.
Note that when an off-stoichiometric solution of composition like x1 transforms through congruent nucleation or spontaneous ordering, the product of the transformation is a non-equilibrium ordered phase. To achieve equilibrium the homogeneous reaction, α → γ, must be followed by a second reaction, γ → α + γ, in which islands of disordered solution reform out of the ordered phase.

11.16.5 Implications for materials processing

Many of the materials that are coming into use in advanced engineering systems are ceramics or intermetallic compounds with complex, ordered structures. These often have multiple ordered structures that are, in practice, connected to one another by congruent ordering reactions. Largely for this reason, ordering reactions and ordering instabilities are becoming increasingly important in the processing of modern materials.

Ordering reactions can also be exploited for the control of microstructure. This is particularly true in solids with very low diffusivities. If diffusion is very slow, as it is in many ceramic and intermetallic phases, then decomposition is easily suppressed and non-stoichiometric ordered phases can be made and retained almost indefinitely. Hence the basic crystal structure of the compound can be controlled by adjusting the cooling rate.

When the diffusivity is higher, it is possible to engineer the microstructure of a two-phase mixture of ordered and disordered phases by controlling the reaction sequence that leads to the equilibrium state. A solution like that shown in Fig. 11.56 can be brought to equilibrium by two very different paths that may lead to very different final microstructures. If the disordered parent solution is cooled slowly, it decomposes by nucleation of the equilibrium ordered phase: the reaction is α → α + γ. If the sample is cooled quickly it orders before decomposing: the reaction is a two-step one in which α → γ → α + γ.
Chapter 12: Environmental Interactions

   In lodgings frail as dew upon the reeds
   I left you, and the four winds tear at me
      - Murasaki Shikibu, "The Tale of Genji"
12.1 INTRODUCTION

In the last few chapters we have considered the changes that occur within a material when we alter its temperature or chemical composition. In the present chapter we consider the changes that are brought about in a material when it reacts with its environment. The important environmental interactions are of three types. The first type includes reactions that change the microstructure of the material near the surface by adding heat or chemical species from the environment. These include the surface hardening reactions that are widely applied to structural materials and the doping reactions that control the properties of semiconductors. The second type includes chemical reactions between the metal and its environment that form compounds at the interface. There are many important reactions of this type. We shall specifically consider the oxidation reaction, since it is both important in engineering and illustrative of the mechanisms that govern interfacial reactions. The third type includes electrochemical reactions, in which the chemical reaction at the interface is assisted by electric currents so that it has an appreciable rate at low temperature. The most important electrochemical reaction is aqueous corrosion, which is responsible for the gradual deterioration of metallic structures that are exposed to water, soil or moist air.
12.2 CHEMICAL CHANGES NEAR THE SURFACE

The first class of environmental interactions we shall consider are those that change the microstructure of the solid immediately beneath its surface. It is often desirable to do this to control the mechanical, chemical or electrical properties of the material near the surface. Rapid thermal treatments are used to change the phase, grain structure or chemical distribution near the interface. Chemical diffusion or ion implantation is used to change the chemical composition near the surface.

12.2.1 Thermal treatment

To control the microstructure of the layer of material just beneath a solid surface it is useful to have a way of heating that particular material without significantly heating the interior. This is difficult because heat is conducted so rapidly in most materials that heating the surface quickly raises the temperature in the interior. The methods that are useful for the thermal treatment of surfaces create intense, local heat
sources in the material immediately beneath the surface. The possible heat sources include electrical currents and locally absorbed beams of light or high-energy electrons. These can cause local heating at the surface that is so rapid that a high temperature is reached before heat can be conducted away. The thermal conductivity of the solid then becomes an advantage, since the hot surface layer is rapidly quenched by heat flow into the bulk, often freezing it in a non-equilibrium microstructure that has desirable properties. The common methods of rapid thermal treatment are induction heating, laser surface processing, and electron beam processing.
Induction hardening

If a high-frequency alternating current is passed through a coil located near the surface of a metal, it induces eddy currents in the metal. These raise the temperature by Joule heating. If the coil is small and close to the surface, the induced current does not penetrate very far into the interior. If the induction current is shut off after a short time, the heated layer cools rapidly as heat flows into the interior.
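The depth of the induction-heated layer is set by the electromagnetic skin depth, δ = √(2ρ/µω), which shrinks as the drive frequency rises. This relation is standard electromagnetics rather than something stated in the notes, and the material constants below are illustrative assumptions, not data from the text:

```python
from math import pi, sqrt

MU0 = 4.0e-7 * pi  # vacuum permeability, H/m

def skin_depth(freq_hz, resistivity, mu_r):
    """Skin depth, delta = sqrt(2*rho / (mu*omega)), in metres."""
    omega = 2.0 * pi * freq_hz
    return sqrt(2.0 * resistivity / (mu_r * MU0 * omega))

# Assumed order-of-magnitude values for a carbon steel below the Curie point
rho, mu_r = 1.7e-7, 100.0   # resistivity (ohm-m), relative permeability
print(skin_depth(1.0e4, rho, mu_r))  # roughly 0.2 mm at 10 kHz
```

Since δ scales as 1/√f, quadrupling the frequency halves the heated depth, which is why induction hardeners use high-frequency coils to keep the hardened case shallow.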
Fig. 12.1: The variation of hardness with depth, x, below the surface for a case-hardened part; the sharp drop in hardness marks the case depth.
Local induction heating is a common method for producing a hard, wear-resistant surface on steel parts, such as gears, shafts and bearing races. A layer of material adjacent to the surface is heated to a temperature above that at which the low-temperature BCC (ferrite) phase reverts to the high-temperature FCC (austenite) phase, and then quenched to form a layer of fresh martensite. The fresh martensite layer is very hard because of its high defect density. The induction hardening of a typical carbon steel produces a hardness profile like that shown in Fig. 12.1. The rapid drop in hardness defines the limit of the martensite layer, and measures what is called the "case depth" of the hardened part. The manufacturing specifications for hardened gears set both the level of surface hardness and the case depth over which some minimum hardness must be maintained.
Laser surface processing

Laser surface processing is an even more efficient method of thermal treatment. In this method the surface is subjected to an intense pulse of coherent light from a high-power laser. The frequency of the light is chosen so that a high fraction of the incident
energy is absorbed within a short distance of the surface. The almost instantaneous absorption of the optical energy of the laser beam by a small volume of solid raises its temperature dramatically. Commonly a layer of material parallel to the surface is actually melted by the beam. However, since the melted layer is thin, it is very rapidly quenched back into the solid state as heat flows into the bulk. A significant advantage of laser surface treatment is the thin layer that is processed; properly designed laser treatments can create treated layers whose thickness is on the order of microns.

Laser surface processing is sometimes used simply to modify the microstructure of the surface layer. Passing a laser beam over the surface of a metal may create a thin surface layer with a highly defective microstructure that has exceptional hardness and, hence, good wear resistance. Laser surface treatment of a semiconductor, such as silicon, can produce a thin amorphous layer that has useful electrical properties.

Lasers are also used to make surface layers with combinations of composition and microstructure that are not easily obtained in other ways. For example, elements such as carbon, phosphorus and boron have very limited solubility in solid steel, but are much more soluble in the liquid. If a thin layer of carbon or phosphorus is deposited onto the surface of a steel and then treated with a laser pulse, the carbon or phosphorus dissolves into the surface layer while it is molten, and is trapped there by rapid solidification. The result is a very highly doped surface layer with very high hardness. It may also be amorphous in structure. Among other potentially useful properties, amorphous surface films on metals often lead to exceptional corrosion resistance. The uniformity of an amorphous coating eliminates the local galvanic couples that promote corrosion of normal metal surfaces.
Electron beam treatments

High-energy electron beams are also used for surface treatments. When an electron beam strikes a metallic conductor, its energy is absorbed in the layer immediately beneath the metal surface, so the beam has nearly the same effect as a laser pulse. Electron beams are advantageous in that they are easier and cheaper to create than intense laser pulses, and provide a more efficient energy transfer to the metal. However, it is difficult to produce electron beams with the short, intense pulses that are possible with lasers, so it is more difficult to confine the heating to the immediate surface region. High-energy electrons have complex interactions with semiconductors; laser processing is usually more controllable and useful for these materials.

12.2.2 Diffusion across the interface

The simplest method for changing the composition near the solid surface is by diffusion from the environment. Diffusion can be induced by heating the solid in a vapor that contains the solute or by plating the solute onto the surface and maintaining a high temperature until it diffuses in. The two processes are roughly equivalent. In the former case the surface concentration is fixed by adsorption of solute from the vapor phase. In the latter, it is fixed by the composition of the deposited surface film.
The concentration profile

The solute concentration at a distance, x, below the surface at a time, t, can be found by solving the diffusion equation (Fick's Second Law; Chapter 9):

   ∂c(x,t)/∂t = D ∂²c(x,t)/∂x²		12.1

where D is the diffusion coefficient for the solute. If the solute concentration at the surface is fixed (c = c0 at x = 0), the initial concentration in the solid is negligible, and the solid is very large compared to the effective diffusion distance, then the solution to equation 12.1 is

   c(x,t) = c0[1 - erf(x/2√(Dt))]		12.2

where erf(β) is a function that is called the error function, and has the value

   erf(β) = (2/√π) ∫0β exp(-u²) du		12.3
The error function is tabulated in most compilations of mathematical functions.

Fig. 12.2: Successive concentration profiles, c(x), for diffusion from the interface; the profile spreads with increasing time.
The concentration profile is plotted for several values of the time in Fig. 12.2. The equation

   x = 2β√(Dt)		12.4

gives the value of x at which the concentration is c = c0[1 - erf(β)] at time t, so the thickness of the layer that has been doped to a given concentration increases with the square root of the time. Since erf(β) ≈ β to within a few percent when β is less than about 0.75, the concentration profile satisfies the simple equation
   c/c0 ≈ 1 - x/(2√(Dt))		12.5

for c/c0 greater than about 0.25. Equation 12.5 can be used to set the temperature and time for a process that is intended to ensure a minimum dopant concentration within a pre-selected depth. At the mean diffusion distance, x̄ = √(2Dt), the solute concentration is approximately 0.3c0.

Surface diffusion has the advantage of being a relatively simple and inexpensive processing step. Its principal disadvantages are that it produces a steep concentration gradient rather than the flat concentration profile that is often desirable, and requires a relatively high temperature so that diffusion is reasonably fast.
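The profile of eq. 12.2 and the mean-diffusion-distance estimate above can be checked directly with the standard-library error function. The diffusivity and time below are assumed, order-of-magnitude values for illustration, not data from the text:

```python
from math import erf, sqrt

def concentration_ratio(x, D, t):
    """c(x,t)/c0 from eq. 12.2: diffusion into a semi-infinite solid
    whose surface is held at concentration c0."""
    return 1.0 - erf(x / (2.0 * sqrt(D * t)))

# Assumed values, roughly the order of carbon diffusing in austenite
D = 1.0e-11   # m^2/s
t = 3600.0    # s (one hour)
x_bar = sqrt(2.0 * D * t)                # mean diffusion distance
print(concentration_ratio(x_bar, D, t))  # ≈ 0.317, i.e. about 0.3*c0
```

Evaluating the profile at x̄ = √(2Dt) reproduces the 0.3c0 figure quoted above, since 1 − erf(1/√2) ≈ 0.317.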
Surface hardening: carburizing and nitriding

A common use of diffusion from the vapor is the hardening of metal surfaces through carburizing and nitriding. The most common method of carburizing a steel part is to pass the part through a furnace that contains an atmosphere from which carbon diffuses into the metal. The carbon hardens the steel in one of two ways, depending on the temperature at which it is added.

If the carburizing temperature is below the temperature at which the steel reverts from the BCC to the FCC phase (the austenitization temperature), then the carbon is added to a microstructure that is more or less fixed. Carbon hardens the steel by an amount that is roughly proportional to the carbon concentration, but is limited by the very low solubility of carbon in the ferritic phase; if the carbon concentration exceeds the solubility limit it precipitates into carbides that form primarily on the grain boundaries and do not increase the hardness much further.

In the usual case carbon is added at high temperature, where the steel has the FCC austenite structure. The FCC phase has a relatively high carbon solubility. On subsequent cooling, the austenite transforms to martensite with carbon trapped at high concentration. This martensite is very hard because of its high density of crystal defects, and forms a hardened surface layer. The hardness of the carburized layer is roughly proportional to the fraction of martensite and varies with distance beneath the surface roughly as shown in Fig. 12.1.

Nitrogen is an alternative to carbon for surface hardening. Nitrogen also has a very low solubility in ferritic steel, but has an advantage over carbon in that excess nitrogen forms hardening precipitates within the ferrite grains, particularly in steels that are alloyed with Mo, Cr or Al. For this reason steels can be nitrided at relatively low temperature, where they retain the ferritic structure.
In addition, nitrogen has a high solubility in the FCC phase of iron (austenite) and is, therefore, particularly useful in hardening high-alloy stainless steels that retain the FCC structure at room temperature.
Semiconductor doping

As we shall discuss below, the electrical properties of semiconductors are controlled by adding small concentrations of electrically active impurities that act as donors or acceptors of electrons. Almost all semiconducting devices are based on semiconductor junctions, which are surfaces across which the nature of the conductivity changes from n-type conduction, based on excess electrons in the conduction band from donor impurities, to p-type conduction, based on the holes in the valence band from acceptor impurities. A simple example is diagrammed in Fig. 12.3. To create these devices it is necessary to introduce controlled distributions of selected impurities.
Fig. 12.3: A semiconductor with an n-p-n junction near its surface. The n- and p-type regions are doped with donor and acceptor impurities, respectively.
A typical method for doing this is diagrammed in Fig. 12.4, where we have used silicon as a specific example. A thin layer of oxide is plated over the silicon surface by reacting with oxygen. Using techniques that will be described in a later chapter, the oxide is etched away over the region that is to be doped. The crystal is then exposed to an atmosphere that contains the dopant impurity, which adsorbs or deposits onto its surface. The sample is held at a temperature high enough to stimulate diffusion into the silicon, but low enough that diffusion through the oxide layer is negligible. The result is a distribution of dopant through the surface region beneath the cut in the oxide layer, as illustrated in Fig. 12.4b. If the cut in the oxide layer is large compared to the mean diffusion distance the concentration profile well inside the cut is the one-dimensional profile given by eq. 12.2. In any case, the diffusion profile varies with distance into the solid roughly as illustrated in Fig. 12.2.
Fig. 12.4: Locally doping a semiconductor. (a) An oxide layer is etched to expose the surface and the dopant is deposited. (b) The dopant is diffused in to create a doped region and the oxide is removed.
12.2.3 Ion implantation

Ion implantation provides a more direct means for controlling the composition beneath the surface. In this technique the dopant species is ionized and accelerated toward the solid surface by an electric field. The kinetic energy of the ions causes them to penetrate
a short distance into the solid before decelerating and embedding. The mean penetration distance is fixed by the kinetic energy, which can be controlled by setting the potential through which the ions are accelerated. Ion implantation ordinarily leads to a composition profile like that shown in Fig. 12.5; the profile has a Gaussian form that is centered about the mean penetration distance, δ.

Ion implantation has three advantages over diffusional doping. First, virtually any species can be implanted by ionization, including those whose low diffusivities or low solubilities make them difficult to add by diffusional doping. Second, it is not necessary to heat the material before implantation, so dopants can be implanted into crystals that would degrade if they were heated to the temperatures required for diffusional doping. Third, ion implantation creates a very different initial composition profile than diffusional doping. If the material is heated slightly to diffuse the ions after implantation, the Gaussian profile of implanted ions spreads to become relatively flat, as illustrated in Fig. 12.5. The result is a much more uniform composition over the doped region. This is a particularly desirable feature in the manufacture of semiconducting devices, which should have relatively constant electrical properties within each separately doped region.
Fig. 12.5: Typical dopant profiles, c(x), produced by ion implantation, and by implantation followed by diffusion.
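The Gaussian form of the as-implanted profile can be sketched numerically. The function below is a minimal model, not taken from the notes: it assumes a Gaussian centered on the mean penetration distance, δ, with a width ("straggle") that is a free parameter, and all of the numbers are hypothetical:

```python
from math import exp, pi, sqrt

def implant_profile(x, dose, depth, straggle):
    """Gaussian approximation to an as-implanted dopant profile.
    x, depth (the mean penetration distance) and straggle share one
    length unit; the profile integrates to the implanted dose."""
    return (dose / (straggle * sqrt(2.0 * pi))
            * exp(-((x - depth) ** 2) / (2.0 * straggle ** 2)))

# Hypothetical numbers, for illustration only
dose, depth, straggle = 1.0e15, 0.10, 0.03   # ions/cm^2; depths in microns
peak = implant_profile(depth, dose, depth, straggle)  # maximum at x = depth
```

A subsequent anneal broadens this Gaussian (its variance grows roughly as 2Dt), which is the flattening toward a uniform doped region illustrated in Fig. 12.5.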
The disadvantages of ion implantation include the relative complexity of the equipment required, which makes it expensive to use ion implantation for the surface treatment of large metal parts, and the physical damage that is done by the ions as they burrow into the solid at high energy, which causes defects that degrade the electrical properties of semiconductors. The latter problem requires that ion implantation processes for semiconductors be carefully designed, by adjusting the kinetic energy of the ions, their implantation rate, and subsequent annealing treatments, so that residual defects are held within tolerable levels.
Page 391
Materials Science
Fall, 2008
12.3 CHEMICAL REACTIONS AT THE SURFACE: OXIDATION

The second class of reactions that occur between a solid and its environment are chemical reactions that cause the formation and growth of new compounds at the interface. The most familiar of these is oxidation. If a typical metal or semiconductor is heated to high temperature in air, an oxide forms on its surface that grows until the material is consumed. We shall focus on the oxidation reaction because of its familiarity, its importance in engineering, and its value as a prototypic example of compound formation at an interface. However, oxidation is only one of many interfacial reactions that are important in the processing and use of engineering materials, and is a particular example of a reaction at a solid-vapor interface. Reactions that occur at solid-solid interfaces have distinct features that should also be appreciated.

12.3.1 Thermodynamics of oxidation

When a metal or elemental semiconductor is exposed to an atmosphere that contains strong oxidizing agents such as oxygen, sulfur or chlorine, the solid and gas react at the interface to form a scale of oxide, sulfide or chloride. If the reactant is oxygen the chemical reaction is

   xM + (y/2)O2 = MxOy		12.6
where M is the solid and MxOy is the stoichiometric formula for the oxide. The free energy change per mole of oxygen consumed is

   Δg = (2/y)µ(MxOy) - (2x/y)µ(M) - µ(O2)		12.7
The chemical potential is usually written in the form

   µ(T,P,c) = µ0(T) + RT ln(a)		12.8
where µ0 is a function of T (we assume atmospheric pressure), and a, which is called the activity of the species, gives its dependence on the composition, c. Eq. 12.8 is, in fact, the definition of the activity. The activity is a simple function of the composition in four limiting cases: (1) if the species is pure (c = 1), then a = 1 and µ0 is its molar Gibbs free energy in the pure state; (2) if the species is the solvent of a dilute solution (c ≈ 1), then a = c and µ0 is the molar free energy in the pure state (this is called Raoult's Law); (3) if the species is a gas in an approximately ideal gas mixture, then µ0 is its free energy in the pure state and a = p, its partial pressure in the gas; (4) if the species is a solute in a dilute liquid or solid solution (c