Non-Equilibrium Worlds (Physics)
Jean-Pierre Petit – Former Research Director – CNRS (French National Centre for Scientific Research)
12 January 2013
When the man in the street thinks about the equilibrium of a system, he usually pictures a ball resting at the bottom of a well, or something of the kind.
The theory of thermodynamic equilibrium contains something more subtle: dynamic equilibrium. The simplest example is the air we breathe. Its molecules are shaken in every direction, with a mean thermal speed of about 400 m/s. These molecules collide and interact at a tremendous rate, and the collisions change their velocities. Yet the physicist describes this as a statistically stationary state (the term used is "detailed balance"). Imagine a goblin who, at any time and at any point in the room, can measure molecular velocities along a given direction, up to a slight angular uncertainty. Over each time interval, our goblin counts the molecules whose (algebraic) velocity lies between V and V + ΔV. Plotting these counts, he sees a nice Gaussian curve take shape, peaked near 400 m/s: the faster or slower the molecules, the smaller their population.
He repeats the operation, pointing his measuring device in any direction of space, and, surprise, gets the same result. Molecular agitation in the room is isotropic. Moreover, nothing can disturb this dynamic equilibrium as long as the temperature remains constant, because the temperature of a gas is precisely the mean kinetic energy of this thermal agitation. The physicist will describe such a gas as being in thermodynamic equilibrium. This state is multifaceted: air molecules are not spherically symmetric. Diatomic molecules, such as oxygen or nitrogen, are peanut-shaped. Carbon dioxide or water-vapor molecules have other shapes. All these objects, when rotating, can store energy like tiny flywheels. The molecules can also vibrate. The equipartition of energy states that energy must be shared equally among all these different "modes". During a collision, part of the kinetic energy can be converted into vibrational or rotational energy of a molecule, and the reverse is equally true. All of this is statistics, and our goblin can count how many molecules are in such-and-such a state: with such kinetic energy, in such a vibrational state. Returning to the air we breathe, this census leads to a stationary state. The medium is then said to be in thermodynamic equilibrium, that is, relaxed. Now imagine a wizard with the power to stop these molecules, freeze their rotations or vibrations, modify them at will, creating a new statistical law, distorting that beautiful Gaussian curve, even creating anisotropic situations where, for example, the thermal speed along one direction is twice that along the transverse directions. Finally, he lets the system evolve through further collisions. How many collisions would be needed for the system to return to thermodynamic equilibrium? Answer: very few. The mean free time of a molecule between two collisions gives an idea of the relaxation time of a gas, its return time towards thermodynamic equilibrium.
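The goblin's census is easy to sketch numerically. Below is a minimal simulation, assuming a nitrogen-like gas at room temperature (the gas, the temperature and the sample size are my illustrative choices, not figures from the text): each velocity component is an independent Gaussian, and the spread comes out the same in every direction.

```python
# Illustrative sketch: the goblin's census, simulated for an N2-like gas.
# Each Cartesian velocity component of a Maxwell-Boltzmann gas is an
# independent Gaussian with sigma = sqrt(kT/m); isotropy means the spread
# is the same along x, y and z.
import math
import random

K_B = 1.380649e-23          # Boltzmann constant, J/K
M_N2 = 28 * 1.66054e-27     # mass of an N2 molecule, kg
T = 293.0                   # room temperature, K

def sample_velocities(n, rng):
    """Draw n velocity vectors from the Maxwell-Boltzmann distribution."""
    sigma = math.sqrt(K_B * T / M_N2)
    return [(rng.gauss(0, sigma), rng.gauss(0, sigma), rng.gauss(0, sigma))
            for _ in range(n)]

rng = random.Random(42)
vels = sample_velocities(200_000, rng)

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Spread of each component: identical in all three directions (isotropy).
sx = std([v[0] for v in vels])
sy = std([v[1] for v in vels])
sz = std([v[2] for v in vels])
print(f"sigma_x={sx:.0f}  sigma_y={sy:.0f}  sigma_z={sz:.0f} m/s")

# Mean speed <|v|>: a few hundred m/s, the order of magnitude in the text.
mean_speed = sum(math.sqrt(vx*vx + vy*vy + vz*vz)
                 for vx, vy, vz in vels) / len(vels)
print(f"mean speed = {mean_speed:.0f} m/s")
```

Whether one quotes roughly 400 m/s or somewhat more depends on which average is meant (most probable speed, mean speed, rms speed); they all sit in the same few-hundred-m/s range.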
Do there exist non-equilibrium media, in which the statistics of molecular velocities depart markedly from this comfortable isotropy and from the beauty of those nice Gaussian curves?
Oh yes! It is even the majority case in the universe. A galaxy, that "island universe" made of hundreds of billions of stars of roughly comparable mass, can be viewed as a gaseous medium whose "molecules" are the stars. In this case we discover a disconcerting world in which the mean free time of a star, before any encounter with a neighboring star, is ten thousand times the age of the universe. What do we mean by an encounter? A collision in which two stars smash into each other? Not at all! In the branch of theoretical physics called the kinetic theory of gases, a "collision" occurs whenever a star's trajectory is noticeably deflected as it passes a neighboring star.
Calculation shows, however, that such events are extremely rare, so our system of hundreds of billions of stars can be regarded as essentially collisionless.
For billions of years, the trajectory of our Sun has been regular and nearly circular. If the Sun were self-aware, then, never changing pace because of encounters, it would be quite unaware of having neighbors. It senses only a "smooth" gravitational field. It proceeds at its own pace, as in a basin, feeling none of the bumps created by other stars. A corollary follows at once: place our goblin, now an astronomer, in the vicinity of the Sun in our Galaxy and ask him to build up the velocity statistics of neighboring stars in all directions. An obvious fact now emerges: dynamically speaking, the medium is strongly anisotropic. There is a direction in which the stars' agitation speeds (what astronomers call "residual velocities", measured relative to the mean galactic rotation, roughly circular at about 230 km/s near the Sun) are practically twice as large as in any transverse direction. In the air we breathe, this was a spheroidal velocity distribution; here it becomes an ellipsoidal velocity distribution. So far, so good? How does this affect our vision and our understanding of the world? It changes everything! For we are far from being able to handle the theory of such drastically non-equilibrium systems.
Leaving aside the paradoxical status of galaxies due to that damned effect of dark matter (the missing mass), discovered in 1933 by the Swiss-American Fritz Zwicky, we are in any case unable to produce any self-gravitating model of point masses (orbiting in their own gravitational field). Our physics always stays close to a state of thermodynamic equilibrium. Obviously, any departure from this state is a deviation from equilibrium: for example, a temperature gap between two gaseous regions, which leads to heat transfer, a transfer of the kinetic energy of thermal agitation. In such a case, if we put our goblin back to work, he would conclude that the medium is, dynamically speaking, "almost isotropic". This is the case of our atmosphere, even when crossed by the most violent windstorms.
Is it then impossible to encounter, even to "put one's finger on", situations where a gaseous medium or a fluid is frankly out of equilibrium? Such situations are found when crossing shock waves. These are confined regions: the thickness of a shock wave is of the order of a few mean free paths.
When a gas crosses a shock wave, it switches very abruptly from a state near thermodynamic equilibrium to a "shocked" state, and thermodynamic equilibrium is recovered only a few mean free paths further on.
Forty years ago we reported an observation made in my laboratory, since dismantled, the "Institut de Mécanique des Fluides de Marseille". We had devices known as "shock tubes", a sort of gas gun. The principle: using an explosive, we launched a shock wave propagating at several thousand meters per second into a rarefied gas, initially at a pressure of a few millimeters of mercury. The shock wave recompressed the gas, increasing its density.
The increase in density could be followed easily and precisely by interferometry. At the time we also measured the heat flux at the surface of Plexiglas mock-ups. Since the experiments lasted only fractions of a millisecond, our measuring devices had to have a fast response. Specifically, they were metallic films about a micron thick, vacuum-deposited on the wall, acting as thermistors. We evaluated the heat flux by recording the resistance of these wall sensors as they heated up.
One day we placed a sensor directly on the tube wall. We then observed that the heat flux reached the sensor only after a certain delay following the passage of the shock wave, which shows up as an abrupt jump in density. Yet we made sure the thermal lag of the sensor was small enough that the delay did not come from the instrument itself. We had in fact put our finger on a phenomenon of return towards quasi-thermodynamic equilibrium, downstream of the shock wave.
We can compare this to a hammer blow. Not only does the density jump brutally; we also observe a temperature jump, meaning an increase in the thermal speed of the molecules. But behind this wave, isotropy is recovered only after several mean free paths. Immediately behind the density front, the increased thermal agitation first appears as motion along the direction of propagation of the wave.
When our sensor collects heat, it is from the impact of air molecules on its surface. Yet over some distance immediately behind the density front, the thermal agitation develops parallel to the wall. The gas is indeed "heated", but momentarily unable to transfer that heat to the wall. Through collisions, the "velocity ellipsoid" gradually turns back into a "velocity spheroid", and the sensor finally registers the heat flux it was owed. I believe I remember that, with the experimental setup we had, we recorded this heat flux about one centimeter behind the density front.
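The return to isotropy described here can be caricatured with a toy relaxation model (my own illustrative sketch, not the laboratory's analysis): let the longitudinal and transverse "temperatures" behind the front relax toward their common mean on the collision time scale, and count how many mean free times the anisotropy survives.

```python
# Toy model, illustrative only: behind the shock the agitation starts
# anisotropic, hot along the propagation direction and cold transverse.
# Each "temperature" relaxes toward the energy-conserving mean with the
# collision time tau, a crude stand-in for a BGK-type collision operator.
T_par, T_perp = 2000.0, 500.0   # arbitrary initial temperatures, K
tau = 1.0                        # mean free time between collisions
dt = 0.01                        # time step, in units of tau

t = 0.0
while abs(T_par - T_perp) > 0.01 * (T_par + T_perp) / 2:
    T_mean = (T_par + 2 * T_perp) / 3          # conserved: one long + two transverse modes
    T_par  += dt / tau * (T_mean - T_par)
    T_perp += dt / tau * (T_mean - T_perp)
    t += dt

print(f"anisotropy below 1% after t = {t:.1f} collision times")
```

The anisotropy decays exponentially with the collision time, so it is gone after a handful of mean free times, which is exactly why the heat flux lags the density front by only a small distance.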
Shock waves are thus regions of very small thickness in which the gaseous medium is strongly out of equilibrium.
How do we manage this? We treat these regions as surfaces of zero thickness. And this has worked for over a century.
I am old enough to have lived through almost the whole history of the computer, from the very beginning. When I was a student at the École Nationale Supérieure de l'Aéronautique, there was no computer in the building. Computers were installed in sanctuaries called "calculation centers", which we could not enter. We calculated with slide rules, objects of curiosity to today's generation. In class we each had our book of logarithm tables, and every examination included a tedious numerical-calculation test using these instruments, which are now on display in museums.
When I left Sup'Aéro, mechanical calculators (FACIT) had just appeared, hand-cranked: to multiply, you turned the crank one way; to divide, the other.
By 1964, the professors and department heads had electric machines, whose gear noise broke the silence of the offices at the Institut de Mécanique des Fluides. Computers held the place of honor, like distant gods glimpsed only through the windows of the calculation center. Those machines, with the power of a present-day pocket calculator, were served by priests in white coats. One could communicate with them only through thick stacks of punched cards, noisily read by a mechanical card reader. We bought "calculation time" by the second, at prices that strike today's young people as something from the Stone Age.
The invasion of the microcomputer changed all that. Moreover, computing power has grown so explosively that the Net is now full of pictures of vast rooms packed with mysterious black cabinets, handling jaw-dropping quantities of data.
Megaflops, gigaflops, petaflops, galore! Back in the seventies you could easily read the contents of an Apple II's ROM, which was printed in full as a small booklet.
We live in a Promethean world. Can we say that these modern tools have increased our mastery of physics? An anecdote comes to mind. In France I was a pioneer of microcomputing, running one of the first centers devoted to the technology, built around Apple IIs. At the time I was also a professor of sculpture at the École des Beaux-Arts in Aix-en-Provence. One day I demonstrated a system that used a flatbed plotter to produce accurate perspective drawings at will. An old professor, raising his eyebrows, said: "Don't tell me the computer will replace the artist?"
Paraphrasing, one can imagine a colleague who, after touring a huge data center, exclaims: "Don't tell me the computer will replace the brain?"
Despite the relentless escalation of computing power and the massive use of multicore processors, we are far from that. Yet in certain areas these systems have thrown our logarithm tables and slide rules, among other things, into the scrap heap. Who still computes integrals by hand, with pen and paper? Who still juggles differential calculus, apart from pure mathematicians?
Nowadays we believe that "the computer can do everything". We build algorithms, feed in data, and run until the results come out. For drawing a building or a fine piece of engineering, this works admirably. The theory of fluids has its successes too.
We can place a surface element of arbitrary shape across a gas flow and compute the pattern of vortex flow around it, whatever its geometry. Does this agree with experiment? Not always. Qualitatively we master the phenomenon: we can, for example, compute a reliable value of the drag produced by these gaseous vortices. Likewise we compute the combustion efficiency inside engine cylinders, and the convection in cavities. Predictive meteorology is advancing rapidly and can forecast the weather several days ahead, except for "micro-events", which are highly localized and still out of reach. Is it like this in every field?
Some objects refuse to be tamed by that modern lion-tamer, the computer. These are the plasmas, which hold the title of "non-equilibrium" in every category. They also depart from the theory of fluids, despite some resemblance, because they are subject to action at a distance, that of electromagnetic fields, which can only be evaluated by taking into account all the charged particles making up the system.
Never mind, one might say: just treat the plasma as an N-body system. Easier said than done! The galaxies mentioned earlier are one example of a collisionless world. Tokamaks are another (ITER is a giant tokamak). The gas they contain is extremely rarefied: before start-up, the filling pressure inside ITER's 840 cubic meters will be below a fraction of a millimeter of mercury. Why so low? Because this gas must be heated to over one hundred million degrees, and, as you know, pressure is given by p = nkT, where k is Boltzmann's constant, T the absolute temperature and n the number of particles per cubic meter. Confinement of the plasma relies entirely on magnetic pressure, which grows as the square of the magnetic field.
At a field of 5.2 tesla, the magnetic pressure is about 200 atmospheres. To confine the plasma, its pressure must stay well below that value. Since superconducting coils are used, the field cannot be raised indefinitely, so the plasma density in the reactor chamber is kept very low. From these facts it follows that this is a thoroughly collisionless object, beyond the reach of any reliable macroscopic description. Can we treat it as an N-body problem? Don't even dream of it, now or in the future. Nor can we compute locally, as we do for neutral fluids: every region is coupled to every other through the electromagnetic field. Consider, for example, the transport of energy from the plasma core to the wall. Besides a conduction-like mechanism, and phenomena driven by turbulence, a third mode appears, called "anomalous transport", which operates through... waves.
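These orders of magnitude are easy to check with a few lines of arithmetic. In the sketch below, the plasma density n is an assumed tokamak-like value, not a figure from the text; note also that the conventional definition of magnetic pressure, B²/2μ₀, gives roughly half of the ~200 atm quoted, which matches B²/μ₀, so both are printed.

```python
# Order-of-magnitude check (my own arithmetic, not from the article):
# plasma pressure p = n*k*T versus the magnetic pressure of the field B.
import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A
K_B = 1.380649e-23              # Boltzmann constant, J/K
ATM = 101_325.0                 # one atmosphere, in Pa

B = 5.2                          # magnetic field, tesla
p_mag = B**2 / (2 * MU0)         # conventional magnetic pressure
print(f"B^2/(2 mu0) = {p_mag/ATM:.0f} atm,  B^2/mu0 = {2*p_mag/ATM:.0f} atm")

# A hot but extremely dilute plasma: T ~ 1.5e8 K, with an assumed
# n ~ 1e20 particles/m^3 (a typical tokamak order of magnitude).
n, T = 1e20, 1.5e8
p_plasma = n * K_B * T
print(f"p = nkT = {p_plasma/ATM:.1f} atm")
```

The point survives either convention: at such densities the kinetic pressure nkT stays a couple of atmospheres, far below the magnetic pressure, which is what allows confinement at all.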
In short, the tokamak is a genuine nightmare for theoreticians.
The plasma itself, with its uncontrollable behavior, is not the only factor involved. There is everything else, in particular the inevitable ablation of particles from the walls. Glider pilots know the basic parameter of such machines, the lift-to-drag ratio: the number of meters traveled for every meter of descent (the glide ratio). At a given speed, a glider's wing produces a certain lift. At the same speed it also produces drag, which has two sources. The first is induced drag, the energy lost to wingtip vortices.
Short of an infinite wingspan, this phenomenon cannot be avoided. Hence the very long wings of gliders, usually spanning more than 20 meters, with aspect ratios (the ratio of half-span to mean chord) often greater than 20. The second kind of drag is viscous drag. It is reduced by making the wing surface as smooth as possible: a good polish delays the onset of turbulence near the surface. This phenomenon, however, is at heart a manifestation of fluid instability, and even the finest polish can only delay it; conversely, the slightest disturbance can trigger it. Watch a wisp of smoke rising in still air: calm at first, it turns violently turbulent after climbing barely a centimeter, however quiet the surrounding air. Insert a needle-like obstacle into the rising stream and you trigger irreversible turbulence. The same happens at tiny rough spots on the smooth surface of a glider wing: these small irregularities set off turbulence locally, increasing the air friction roughly a hundredfold and thus raising the total drag appreciably. On modern gliders the flow can be kept laminar (non-turbulent, in parallel layers) over more than 60% of the chord. If a mosquito happens to strike the leading edge, that tiny bump triggers turbulence over a wedge of about 30 degrees downstream. This is why competition gliders, whose glide ratios exceed 50, carry automatically triggered leading-edge cleaners, something like a linear windshield wiper: a brush travels back and forth along the leading edge, then retracts out of sight.
Great effort has gone into raising the overall lift-to-drag ratio of airliners in order to cut fuel consumption. The Caravelle of the 1960s (capable of gliding from Orly to Dijon) had a glide ratio of only 12. Today even the enormous A380 exceeds 20.
In other words, with all four engines out and no propulsion, starting from an altitude of 10,000 meters it can glide more than 200 kilometers.
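The glide-range arithmetic behind these figures reduces to one line; here is a small sketch using the ratios quoted in the text (the 10,000 m starting altitude is the one mentioned above):

```python
# Glide range = altitude * lift-to-drag ratio (still air, idealized).
def glide_range_km(altitude_m: float, l_over_d: float) -> float:
    """Horizontal distance covered while descending altitude_m, in km."""
    return altitude_m * l_over_d / 1000.0

# Figures quoted in the text:
print(glide_range_km(10_000, 12))   # Caravelle-era airliner: 120 km
print(glide_range_km(10_000, 20))   # modern airliner (A380): 200 km
print(glide_range_km(10_000, 50))   # competition glider: 500 km
```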
Back to plasmas and tokamaks. In these machines, micro-turbulence can be triggered by particles ablated from the walls and can quickly spread through the whole chamber. As for the turbulence itself, its scales span an enormous range, from small eddies up to violent electromagnetic oscillations of the plasma involving the entire volume.
The upshot is that engineers can master these machines only by operating them with semi-empirical "engineering scaling laws" of limited reliability. In this realm where non-equilibrium is king and measurement is extremely difficult, the computer is of no help; experiment alone shows the way. And it is by extrapolating from experiment that unforeseen phenomena have come to light, such as vertical displacement events (VDEs), which first appeared in the jump from the French TFR machine at Fontenay-aux-Roses to the British JET machine at Culham.
The recent failure at the National Ignition Facility (NIF) in Livermore, California, is a textbook example of a huge and costly installation, assisted by the world's most powerful computers, meeting a serious setback. This emerged from the report released by the US Department of Energy (DOE) on 19 July 2012, prepared under the supervision of David H. Crandall, which summarized the two-year (2010-2012) campaign of the National Ignition Campaign (NIC).
The system comprises 192 laser beams which, in a few nanoseconds, deliver 500 terawatts (more than a thousand times the power of the US electrical grid) onto a spherical target only 2 millimeters in diameter, filled with a deuterium-tritium mixture and placed at the center of a cylindrical cavity 2 centimeters long and 1 centimeter in diameter (in German, a Hohlraum, "hollow space").
The principle is as follows. Half of the beams enter in cone-shaped sheaves through an opening at one end of the Hohlraum, the other half through the opposite end. These very thin ultraviolet beams strike the inner wall of the gold cavity, and the gold re-radiates X-rays. Precisely aimed, the lasers form three rings of spots on the inner wall. The re-emitted X-rays then bombard the spherical target; this is called indirect drive. The system was designed to mimic the fusion stage of an H-bomb, in which X-rays (produced by a fission device) strike a shell called the ablator enclosing the fusion explosive (lithium deuteride). At the NIF the explosive is replaced by a deuterium-tritium mixture, whose fusion ignites at a lower temperature, around one hundred million degrees. The ablator, a thin spherical shell, is heated, sublimates and explodes both outward and inward. The inward compression is used to create a "hot spot" at the center of the target, in the hope of triggering ignition under inertial confinement.
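The power figures quoted above can be sanity-checked with rough arithmetic. In the sketch below, the pulse length and the mean US grid power are my assumed orders of magnitude, not values taken from the article:

```python
# Back-of-the-envelope check of the NIF figures quoted in the text.
P_laser = 500e12        # peak power on target, watts (500 TW, from the text)
pulse_s = 4e-9          # assumed pulse length: a few nanoseconds
E_joules = P_laser * pulse_s
print(f"energy per shot ~ {E_joules/1e6:.1f} MJ")   # megajoule scale

P_grid = 0.5e12         # assumed mean US grid power, ~0.5 TW
print(f"laser/grid power ratio ~ {P_laser/P_grid:.0f}x")
```

The enormous power is thus an artifact of compressing megajoule-scale energy into nanoseconds; the total energy per shot is modest by industrial standards.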
All of this had been computed under the direction of John Lindl. In 2007, on the occasion of his Maxwell Prize, a paper on this scientist described in detail what was expected to happen. The theoreticians were brimming with confidence; Lindl even asserted that ignition would be the starting point of a whole series of large-scale experiments. The test managers set a firm goal: October 2012 as the deadline for a successful shot, which was to crown thirty years of theoretical and technological effort.
The result was a resounding failure, a conclusion stated plainly in the report released by the DOE on 19 July 2012, prepared under the supervision of David H. Crandall.
The report's central observation is this: although the work was of very high quality, both technically and in its measurements, the experimental results bore no relation to the data and predictions computed on the most powerful machines in the world.
Some observers even began to question whether these simulations were worth any further investment for subsequent experiments.
The NIF's predicament is plain. For reasons of cost, the number of lasers (neodymium-doped glass lasers) cannot be increased. Nor can the power per beam: beyond a certain energy input, whatever the homogeneity or quality of the laser glass, the lasers simply blow up.
For ignition and inertial-confinement fusion to succeed, the implosion speed must reach at least 370 kilometers per second. Not only was this speed not achieved, but worse: when the shell constituting the ablator turns into plasma and pushes on its deuterium-tritium fuel, the "piston" mixes with the fuel through the famous Rayleigh-Taylor instability. To reduce its effect the ablator must be thickened; but then the inertia grows, and the implosion-speed threshold again cannot be reached.
The computer simulations gave wrong answers across the board. As the DOE report notes, the modeling of the laser-wall interaction (the generation of X-rays at the gold wall) is unsatisfactory, despite decades of study and hundreds of papers and theses devoted to the problem. Likewise, the interaction between the beams and the gold plasma produced by sublimation of the cavity's inner wall (governed by "inverse Raman scattering") was not correctly simulated. The interaction between the X-radiation and the ablator was mismodeled as well. Finally, the computational code (LASNEX) completely underestimated the effect of the Rayleigh-Taylor instability, grossly underpredicting the deformation of the ablator-fuel interface, which sprouts finger-like structures reminiscent of intestinal villi.
These failures reveal the limits of the confidence we can place in the output of supercomputers when they tackle head-on problems that are strongly out of equilibrium and, above all, highly nonlinear: many poorly modeled mechanisms then act together and make the results unreliable.
Dr Jean-Pierre Petit