NON-EQUILIBRIUM WORLDS
Jean-Pierre PETIT, former research director, CNRS (France)
12 January 2013
When the man in the street thinks about the equilibrium of a system, he usually imagines a ball at the bottom of a well, or something similar.
The theory of thermodynamic equilibrium contains something more subtle: dynamic equilibrium. The simplest example is the air we breathe. Its molecules are agitated in all directions, with a mean thermal speed of about 400 m/s. At a tremendous rate, these molecules collide and interact, and each collision changes their velocities. Yet the physicist translates this into statistical stationarity (the term used is "detailed balance"). Imagine a goblin who, at any time and at any point in the room, could measure the molecular velocity along a given direction, with a slight angular uncertainty. At regular intervals, our goblin counts the molecules whose velocity component (an algebraic value) lies between V and V + ΔV. He then plots these counts on a graph and sees a nice Gaussian curve appear, centered on zero, its width set by the thermal speed of about 400 m/s. The faster the molecules, in either direction, the smaller their population.
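The goblin's census can be sketched numerically. A minimal Python sketch, assuming room-temperature air and using the standard equilibrium result that each Cartesian velocity component is Gaussian with standard deviation sqrt(kT/m); the molar mass and temperature below are assumed round values, not figures from the text:

```python
import math
import random
import statistics

# The goblin's census: draw one velocity component for many air molecules.
# At equilibrium each Cartesian component is Gaussian, mean 0, std sqrt(kT/m).
R = 8.314        # J/(mol K), gas constant
M = 0.029        # kg/mol, mean molar mass of air (assumed)
T = 293.0        # K, room temperature (assumed)

sigma = math.sqrt(R * T / M)     # per-component thermal spread, ~290 m/s
random.seed(1)
vx = [random.gauss(0.0, sigma) for _ in range(100_000)]

print(f"mean component velocity : {statistics.fmean(vx):8.1f} m/s")
print(f"component std deviation : {statistics.stdev(vx):8.1f} m/s")
print(f"most probable speed     : {math.sqrt(2 * R * T / M):8.1f} m/s")
```

The most probable speed, sqrt(2kT/m), comes out near 410 m/s, consistent with the roughly 400 m/s quoted above, while the component histogram itself peaks at zero.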
He repeats this operation by pointing his measuring device in any direction of space and, surprise, surprise, gets the same result. The molecular agitation in the room is isotropic. Moreover, nothing can disturb this dynamic equilibrium if the temperature remains constant, because the gas temperature is simply the average kinetic energy of this thermal agitation. The physicist will describe this gas as being in thermodynamic equilibrium.

This state is multifaceted: air molecules do not have spherical symmetry. Diatomic molecules, like oxygen or nitrogen, are peanut-shaped. Those of carbon dioxide or water vapor have other shapes. All these objects, when rotating, can store energy like tiny flywheels. These molecules can also vibrate. The equipartition theorem says that energy must be distributed equally among all these different "modes." During a collision, part of the kinetic energy can be transformed into vibrational or rotational energy of a molecule, and the reverse is also true. All this rests on statistics, and our goblin can count how many molecules are in such and such a state: how many have a given kinetic energy, how many are vibrating in a given way. Back to the air we breathe, this census leads to a stationary state. The medium is then said to be in thermodynamic equilibrium, that is, relaxed.

Imagine a wizard who had the power to stop these molecules, to freeze their rotational or vibrational movements, to modify them at will, creating a new statistical law, deforming this beautiful Gaussian curve, or even introducing anisotropy, for example by doubling the thermal speed in one direction relative to the transverse directions. Finally, he lets the system evolve through new collisions. How many collisions are needed for the system to regain thermodynamic equilibrium? Answer: very few.
The mean free path of a molecule between two collisions, divided by its thermal speed, gives an idea of the relaxation time of a gas: its return time to thermodynamic equilibrium.
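This relaxation scale can be put in numbers with the textbook hard-sphere estimate of the mean free path; the effective molecular diameter below is an assumed typical value for air, not a figure from the text:

```python
import math

# Order-of-magnitude sketch of the relaxation scale in air at room conditions:
# hard-sphere mean free path, lambda = k*T / (sqrt(2) * pi * d^2 * p),
# and the resulting time between collisions.
k = 1.380649e-23     # J/K, Boltzmann constant
T = 293.0            # K, room temperature
p = 101_325.0        # Pa, atmospheric pressure
d = 3.7e-10          # m, effective diameter of an air molecule (assumed)
v_mean = 463.0       # m/s, mean thermal speed of air at 293 K

lam = k * T / (math.sqrt(2.0) * math.pi * d**2 * p)
tau = lam / v_mean
print(f"mean free path          : {lam * 1e9:.0f} nm")
print(f"time between collisions : {tau * 1e12:.0f} ps")
```

The result, tens of nanometers and a tenth of a nanosecond, is why the wizard's perturbed gas relaxes after "very few" collisions.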
Are there non-equilibrium media, where the statistical molecular velocities significantly deviate from this comfortable isotropy and the beauty of Gaussian curves?
Oh yes! It is even the majority case in the universe. A galaxy, an "island universe" composed of several hundred billion stars of more or less comparable mass, can be seen as a gaseous medium in which the "molecules" are... stars. In this particular case we discover a disconcerting world, where the mean free travel time of a star, before any encounter with a neighboring star, is ten thousand times the age of the universe. What do we mean by "encounter"? A collision in which two stars violently crash into each other? Not at all! In the branch of theoretical physics called the kinetic theory of gases, a "collision" is counted whenever the trajectory of a star is significantly deflected as it passes near a neighboring star.
However, calculations show that these events are extremely rare, and our system of several hundred billion stars can be seen as generally collision-free.
For billions of years, the trajectory of our Sun has been regular and almost circular. If the Sun were self-aware, it would completely ignore the presence of its neighbors, since encounters never alter its pace. It feels only a "smooth" gravitational field, and proceeds as if rolling in a basin, without sensing any bump created by the other stars. The consequence appears immediately: place our goblin, now an astronomer, near the Sun in our galaxy, and ask him to build velocity statistics of the neighboring stars in all directions. An obvious fact emerges: dynamically speaking, the medium is strongly anisotropic. There exists a direction in which the agitation speeds of the stars (astronomers call them residual velocities, measured relative to the average galactic rotation, roughly circular and about 230 km/s near the Sun) are practically twice as high as in the transverse directions. In our breathing air we had a spheroidal velocity distribution; here it becomes an ellipsoidal velocity distribution. So far, so good? How does this affect our vision, our understanding of the world? It changes everything, because we are far from being able to handle the theory of such drastically non-equilibrium systems.
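The goblin-astronomer's census can be mimicked with an anisotropic Gaussian draw. The dispersion values below are purely illustrative, chosen only to reproduce the factor-of-two anisotropy described above:

```python
import random
import statistics

# Sketch of the "ellipsoid of velocities": residual stellar velocities drawn
# from an anisotropic Gaussian, with the radial dispersion set to twice the
# transverse one.  The km/s values are illustrative assumptions, not data.
random.seed(2)
sigma_radial, sigma_transverse = 35.0, 17.5   # km/s (assumed)
v_r = [random.gauss(0.0, sigma_radial) for _ in range(50_000)]
v_t = [random.gauss(0.0, sigma_transverse) for _ in range(50_000)]

ratio = statistics.stdev(v_r) / statistics.stdev(v_t)
print(f"radial / transverse dispersion ratio : {ratio:.2f}")
```

Unlike the collisional air of the earlier census, nothing here drives this ratio back toward 1: with no stellar "collisions," the anisotropy persists for the age of the universe.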
Leaving aside the paradoxical status of galaxies tied to the vexing problem of dark matter (the missing mass), pointed out in 1933 by the Swiss astronomer Fritz Zwicky, working in the United States, the fact remains that we cannot produce any realistic model of a self-gravitating system of point masses orbiting in their own gravitational field. Our physics always stays close to a state of thermodynamic equilibrium. Of course, any difference amounts to a deviation from equilibrium: for example, a temperature difference between two gaseous regions, which leads to heat transfer, a transfer of the kinetic energy of thermal agitation. In such a case, if we put our goblin back to work, he would conclude that the medium, dynamically speaking, remains "almost isotropic." This is the case of our atmosphere, even when crossed by the most violent storms.
So then, is it impossible to encounter, to put one's finger on, situations where a gaseous medium, a fluid, is clearly out of equilibrium? Such situations are found when crossing shock waves. These are confined regions: the thickness of a shock wave is on the order of a few mean free paths.
When a gas crosses a shock wave, it abruptly switches from a state close to thermodynamic equilibrium to a "shocked" state, and thermodynamic equilibrium is restored after a few mean free paths.
Forty years ago, we made such an observation in the laboratory where I worked, now dismantled, the Institut de Mécanique des Fluides de Marseille. We had a sort of gas gun called a shock tube. The principle: using an explosion, we triggered a shock wave propagating at several thousand meters per second into a rarefied gas, initially at a pressure of a few millimeters of mercury. The passage of the shock wave recompressed the gas, increasing its density.
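The density jump tracked in these experiments can be estimated from the classical Rankine-Hugoniot relation for a normal shock in a perfect gas, a textbook result not spelled out in the text; a minimal sketch:

```python
# Rankine-Hugoniot density jump across a normal shock in a perfect gas
# (a standard gas-dynamics relation, not a detail given in the text):
#   rho2/rho1 = (gamma + 1) M^2 / ((gamma - 1) M^2 + 2)
def density_ratio(mach: float, gamma: float = 1.4) -> float:
    """Post-shock over pre-shock density for an upstream Mach number."""
    return (gamma + 1.0) * mach**2 / ((gamma - 1.0) * mach**2 + 2.0)

for m in (2.0, 5.0, 10.0):
    print(f"Mach {m:4.1f} -> rho2/rho1 = {density_ratio(m):.2f}")
# For a diatomic gas (gamma = 1.4) the ratio saturates at (gamma+1)/(gamma-1) = 6.
```

The relation holds across the shock regardless of its internal structure, which is exactly why treating the shock as a zero-thickness surface works so well in practice.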
We could easily and precisely follow the increase in density using interferometry. At the time, we also measured the heat flow at the surface of Plexiglas mock-ups. Since the experiments lasted only fractions of a millisecond, our measuring devices had to have a very fast response time. These were thin metallic films, one micrometer thick, vacuum-deposited on the wall and acting as thermistors. We evaluated the heat flow by recording the resistance of these sensors as they heated up.
One day, we placed a sensor directly on the tube wall. We then observed that the heat flow reached the sensor only after a certain delay following the passage of the shock wave, marked by an abrupt density jump. We checked, however, that the thermal lag of the sensor was small enough that this delay could not come from the sensor itself. In fact, we had put our finger on the return toward quasi-thermodynamic equilibrium downstream of the shock wave.
We can compare this to a hammer blow. Not only is the density abruptly increased, but there is also a temperature jump, meaning an increase in the thermal speed of the molecules. Behind this wave, however, isotropy is only restored after several mean-free-path times. Immediately behind the density front, the thermal agitation develops mainly along the direction of propagation, perpendicular to the plane of the front.
When our sensor receives heat, it comes from the impact of air molecules on its surface. But immediately behind the density front, over some distance, the thermal agitation develops parallel to the wall. The gas is well and truly "heated," yet temporarily unable to transfer this heat to the wall. Through collisions, the "ellipsoid of velocities" relaxes into a "spheroid of velocities," and the sensor finally records the heat flow. I seem to remember that, with the experimental setup we had, this heat flow appeared about one centimeter behind the density front.
Thus, shock waves represent areas of very small thickness, where the gaseous medium is strongly out of equilibrium.
How do we deal with this? We make these areas equivalent to surfaces of zero thickness. And this has worked for almost a century.
I am old enough to have known almost the entire history of computers, from the beginning. When I was a student at the École Nationale Supérieure de l'Aéronautique, there was no computer in the building. They were installed in sanctuaries called "calculation centers," to which we had no access. We calculated using slide rules, objects of curiosity to today's generations. In the preparatory classes, we all had our book of logarithm tables, and each exam included a tedious numerical calculation test using these objects, now displayed in museums.
When I left the aeronautics school, hand-operated mechanical calculators (FACIT) were just appearing. To multiply, you turned a crank in one direction; to divide, you turned it in the other.
At the Institute of Fluid Mechanics in 1964, the professors and department heads had electric calculating machines, which filled the silence of the offices with the noise of gears. Computers occupied a place of honor in the calculation centers, like distant gods visible only through a window. These computers, whose power was equivalent to that of a pocket calculator today, were served by priests in white coats. Communication with them was possible only through a thick stack of punched cards, read noisily by a mechanical card reader. We bought "calculation time" by the second, and it was so expensive that the whole era seems almost Neolithic to today's youth.
The invasion of microcomputers has changed everything. Moreover, the explosion of computer power has been so rapid that the Net is now full of images showing vast rooms filled with mysterious black cabinets, managing staggering amounts of data.
Megaflops, gigaflops, petaflops, in abundance! In the 1970s, the entire contents of an Apple II's memory could be read with ease, printed out as a small booklet.
We live in a Promethean world. Can we say that these modern tools increase our mastery of physics? An anecdote comes to mind. In France, I was a pioneer of microcomputing, having managed one of the first centers (based on Apple II) dedicated to this technology. At that time, I was also a sculpture teacher at the École des Beaux-Arts in Aix-en-Provence. One day, I presented a system using a flatbed plotter that could draw any perspective at will. An old professor, frowning, then said: "Don't tell me the computer will replace the artist?"
Paraphrasing, we could imagine a colleague, after visiting a mega data center, claiming: "Don't tell me the computer will replace the brain?"
Despite the unstoppable increase in computing power and the massive multiprocessor systems, we are still far from that. In certain areas, however, these systems have thrown our logarithm tables and slide rules into the trash, among other things. Who still calculates integrals with pen and paper? Who still juggles with differential calculus, apart from pure mathematicians?
Today, we believe that "the computer does everything." We write algorithms, provide data, run calculations until we get results. If we want to draw a building or a beautiful engineering work, it works perfectly. The theory of fluids is also a success.
We can place a surface element of any shape perpendicular to a gaseous flow and compute the swirling flow pattern around it, whatever its form. Does this match experiment? Not always, but qualitatively we understand the phenomenon, and we can, for instance, reliably calculate the aerodynamic drag produced by the swirling of the gas. Similarly, we compute combustion efficiency inside a cylinder, or convection currents in an enclosure. Predictive meteorology is advancing rapidly, providing forecasts a few days ahead, except for highly localized "micro-events," which remain unmanageable. Is this the case in every domain?
There are systems that refuse to be tamed by this modern-day lion tamer called the computer. These are "non-equilibrium" plasmas, champions in every category. They also deviate from fluid theory, despite superficial similarities, because they are subject to long-range interactions through electromagnetic fields, whose effects can only be evaluated by accounting for all the charged particles in the system.
Never mind, you say: it's enough to treat the plasma as an N-body system. Easier said than done! Earlier we discussed galaxies as examples of collisionless systems. Tokamaks are another such example (ITER is a giant tokamak). The gas they contain is extremely rarefied. Before operation begins, the internal pressure in ITER's 840-cubic-meter chamber is a fraction of a millimeter of mercury. Why such low pressure? Because we aim to heat this gas beyond 100 million degrees, and pressure is given by p = nkT, where k is Boltzmann's constant, T the absolute temperature, and n the number of particles per cubic meter. Plasma confinement relies solely on magnetic pressure, which increases with the square of the magnetic field.
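The p = nkT arithmetic can be checked directly. The particle density below is an assumed ITER-like round number, not a figure from the text:

```python
# A quick p = n k T check at fusion temperatures.
# The density n is an assumed round number (about 1e20 per cubic meter),
# not a quoted ITER specification.
k = 1.380649e-23      # J/K, Boltzmann constant
n = 1.0e20            # particles per cubic meter (assumed)
T = 1.5e8             # K, about 150 million degrees

p = n * k * T
print(f"plasma pressure : {p / 101325:.1f} atm")
```

Even at this thin density, a million times sparser than air, 150 million kelvin already yields a pressure of a couple of atmospheres: heat a denser gas and the pressure becomes unconfineable.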
With a field of 5.2 tesla, the magnetic pressure B²/2μ₀ is on the order of a hundred atmospheres. For the plasma to be confined, its pressure must remain far below this value. Because superconducting coils are used, the magnetic field cannot be increased indefinitely; the plasma density inside the reactor chamber therefore remains very low. From these facts emerges a system essentially free of collisions, escaping any reliable macroscopic description. Can we treat it as an N-body problem? Don't even dream of it, neither now nor in the foreseeable future. Local calculations, of the kind possible in neutral-fluid mechanics, are impossible: every region is coupled to every other through electromagnetic fields. Take, for example, the transfer of energy from the plasma core to the walls. Besides conduction-like mechanisms and turbulence, a third mode emerges, called "anomalous transport," which involves... waves.
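The square-law scaling of magnetic pressure is p_mag = B²/(2μ₀); a quick check at the field strength quoted above:

```python
import math

# Magnetic pressure grows with the square of the field: p_mag = B^2 / (2 mu0).
mu0 = 4.0 * math.pi * 1e-7    # T m / A, vacuum permeability
B = 5.2                       # tesla, the field quoted in the text

p_mag = B**2 / (2.0 * mu0)
print(f"magnetic pressure : {p_mag / 101325:.0f} atm")
```

Doubling the field would quadruple this figure, which is why so much engineering effort goes into the superconducting coils.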
In short, and in essence, a tokamak is an absolute nightmare for a theorist.
The plasma itself, with its unruly behavior, is not the only problem. Everything else matters too, among other things the inevitable ablation of particles from the walls. Glider pilots know that the key parameter of their machines is the lift-to-drag ratio, in other words the glide ratio: the number of meters flown per meter of altitude lost. At a given speed, the sailplane's wing generates a certain lift force. At the same speed, there is a drag force with two components. The first is induced drag, a loss of energy due to the vortices at the wingtips.
You cannot avoid it except with an infinite wingspan. That is why gliders have such large spans, often exceeding 20 meters, and aspect ratios, the wingspan divided by the mean chord, greater than 20.

The second source of drag is viscous drag. It can be reduced by making the wing surface as smooth as possible. Careful polishing delays the onset of turbulence near the wing surface; this phenomenon is a fundamental instability of fluids, and the excellence of the polish can only delay its appearance. Conversely, turbulence can be triggered by a disturbance. Watch a stream of smoke rising in calm air (hot gas made visible by its particles): it starts out smooth but becomes intensely turbulent after a short rise, however still the surrounding air. Introducing an obstacle, such as a needle, into this rising flow can trigger irreversible turbulence.

Similarly, even a tiny imperfection on a polished sailplane wing can initiate turbulence, locally increasing air friction by up to a hundredfold and thus raising the total drag. In modern sailplanes, the airflow is kept laminar (non-turbulent, in parallel layers) over more than 60% of the chord. If a mosquito happens to crash into the leading edge, this minute irregularity will trigger turbulence in a wedge opening some 30 degrees downstream. For this reason, competition sailplanes, whose glide ratio exceeds 50, carry an automatic leading-edge cleaning device: like a linear windshield wiper, a small brush travels back and forth along the leading edge before returning to a hidden position.
Extensive efforts have been made to improve the overall glide ratio of airliners, in order to reduce fuel consumption. In the 1960s, the Caravelle, capable of flying between Orly and Dijon, had a glide ratio of 12. Today, even massive aircraft like the Airbus A380 achieve a glide ratio exceeding 20.
That is, when propulsion is lost—four engines idle—starting from 10,000 meters altitude, they can glide over 200 kilometers.
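The glide arithmetic above is simply range = glide ratio × altitude; a trivial check:

```python
# Glide arithmetic: still-air range = glide ratio x altitude.
def glide_range_km(glide_ratio: float, altitude_m: float) -> float:
    """Distance flyable with engines idle from a given altitude, in km."""
    return glide_ratio * altitude_m / 1000.0

print(f"Caravelle, ratio 12, from 10 km : {glide_range_km(12, 10_000):.0f} km")
print(f"A380,      ratio 20, from 10 km : {glide_range_km(20, 10_000):.0f} km")
```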
Returning to plasmas and tokamaks: in these machines, microscopic turbulence can be triggered by minute particles torn from the walls and will spread throughout the reaction chamber. In terms of turbulence, the range is extremely broad, extending from micro-turbulence to large-scale electromagnetic plasma convulsions involving the entire volume.
In conclusion, the engineers do not truly control these machines: they rely on approximate, empirical "engineering laws" of limited reliability, drawn from operational systems. In a domain where non-equilibrium reigns supreme and measurements are extremely difficult, computers offer no real help; experiment is the only guide. Moreover, each extrapolation to a larger scale reveals unforeseen phenomena, such as the vertical displacement of the plasma (VDE, Vertical Displacement Event), which emerged when scaling up from the TFR at Fontenay-aux-Roses to JET at Culham.
The recent failure of the NIF (National Ignition Facility, located in Livermore, California) is a striking example of a major setback at a large, costly facility, despite the use of the world's most powerful computers. This conclusion was drawn by the NIC (National Ignition Campaign) after two years of trials, from 2010 to 2012. The system consists of 192 lasers delivering 500 terawatts (more than a thousand times the power of the entire U.S. electrical grid) in a few nanoseconds onto a spherical target 2 mm in diameter. This target is filled with a deuterium-tritium mixture and placed at the center of a cylindrical chamber 2 cm long and 1 cm in diameter, known as the Hohlraum ("cavity" in German).
The plan is as follows: the beams of half of the lasers enter through one end of the Hohlraum, while those of the other half enter through the opposite end. These ultra-thin UV beams hit the inner walls of the cavity, which are made of gold, and the gold re-emits X radiation. The precisely aimed laser beams create three distinct spots on the inner wall. The re-emitted X radiation then strikes the spherical target: this is what is called indirect drive. The system was designed primarily to mimic the fusion stage of a hydrogen bomb, in which X radiation (generated, in that case, by a fission device) strikes the wall of a shell called the ablator, containing the fusion fuel (lithium deuteride). In the NIF, this fuel is replaced by a deuterium-tritium mixture, which ignites at a lower temperature, around 100 million degrees. The ablator (a thin spherical shell) vaporizes and expands both outward and inward. This inward compression is used to create a "hot spot" at the target's center, in the hope of triggering ignition in an inertial confinement scheme.
All of this was calculated under the direction of John Lindl. In 2007, a paper presented at the Maxwell Prize ceremony described in detail what was supposed to happen. The theorists were so confident that Lindl did not hesitate to announce that ignition would open a vast series of experiments. The test manager shared this confidence and even set a deadline for operational success: October 2012, meant to crown thirty years of theoretical and technological effort.
The result was an immense failure, documented in a report issued by the U.S. Department of Energy (DOE) on July 19, 2012, under the supervision of David H. Crandall.
What must remain from this report—so crucial and detailed—is that despite the excellence of the work, both technologically and in measurement, nothing observed in the experiment bore any relation to the computed data or predictions derived from the world’s most powerful computers.
So much so that some observers began questioning whether these simulations offered any real value for future experiments.
The NIF crisis is clear: it is impossible to increase the number of lasers (neodymium-doped glass) due to cost. It is also impossible to increase their individual power—because, when overloaded beyond a certain energy threshold, they are prone to explode, regardless of homogeneity or glass quality.
To achieve ignition in inertial confinement fusion, the implosion speed must reach at least 370 km/s. Not only is this speed not achieved but, far more seriously, when the ablator shell turns into plasma and pushes the D-T fuel, "the piston mixes with the fuel" because of a well-known instability: the Rayleigh-Taylor instability. To minimize this effect, the ablator would need to be thickened. But then its inertia increases, again making it impossible to reach the required implosion speed.
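The instability invoked here has a classical linear growth rate, gamma = sqrt(A k g), with Atwood number A and wavenumber k. The densities, wavelengths, and acceleration below are illustrative assumptions, not NIF data:

```python
import math

# Classical linear growth rate of the Rayleigh-Taylor instability,
# gamma = sqrt(A * k * g), a textbook estimate (not a figure from the
# DOE report).  All numerical inputs below are illustrative assumptions.
def rt_growth_rate(rho_heavy: float, rho_light: float,
                   wavelength: float, accel: float) -> float:
    """Linear RT growth rate for an interface ripple of given wavelength."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    k = 2.0 * math.pi / wavelength
    return math.sqrt(atwood * k * accel)

# Shorter ripples on the pushing interface grow faster, which is why small
# surface defects blow up into finger-like structures during the implosion.
g_10um = rt_growth_rate(1000.0, 200.0, 10e-6, 1e14)
g_100um = rt_growth_rate(1000.0, 200.0, 100e-6, 1e14)
print(f"growth rate, 10 um ripple : {g_10um:.2e} 1/s")
print(f"growth rate, 100 um ripple: {g_100um:.2e} 1/s")
```

Since the rate scales as the square root of the wavenumber, the finest ripples dominate, and thickening the ablator to tame them costs exactly the implosion speed the scheme needs.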
The computer simulations proved wrong in every domain. As stated in the DOE report, the modeling of the interaction between the lasers and the walls (the X-ray emission from the gold walls) remains unsatisfactory, despite decades of research and hundreds of theses and papers. The same applies to the interaction of the laser beams with the gold plasma formed by ablation of the chamber walls, governed by a phenomenon known as stimulated Raman backscattering. The interaction of the X radiation with the ablator is also poorly simulated. Finally, the calculation codes (LASNEX) completely underestimated the impact of the Rayleigh-Taylor instability: the deformation of the ablator/D-T contact surface, which comes to resemble intestinal villi.
These failures reveal the limits of confidence we can place in highly sophisticated computer simulations when these machines confront truly out-of-equilibrium problems, especially nonlinear ones where multiple poorly modeled mechanisms play a role.
Dr. Jean Pierre Petit