Last week I promised to continue talking about the Private-Sector meeting hosted at NCSA, but as I started collecting my thoughts, I realized that I will probably be talking about this for the next several weeks, and that I needed to provide at least some background and context so that people can appreciate just how dramatic the recent advances in supercomputer-based modeling, simulation, and analysis have been. Broadly speaking, most commercial computational modeling falls into three categories: Computational Fluid Dynamics (CFD), Finite Element Analysis (FEA), and Computational Chemistry. The first two got their start in the mid-1960s, when mainframe computers finally became powerful enough to start addressing simplified versions of interesting problems. Computational chemistry actually got started earlier, but the first efficient ab initio methods didn't appear until the mid-1970s.
One of the first real success stories for CFD was the Boeing 737. Boeing had been studying the idea of a small airplane (fewer than 100 passengers) that would be self-servicing and could fly in and out of airports too small to have terminals equipped with jetways. Since Boeing was late to the game, the engineers decided to reuse the fuselage structure from the 707, which not only saved time, it allowed six-abreast seating (a plus for airline advertising, since the inside didn't look so small). This created a fairly short, wide fuselage, which would have made rear-mounted engines both inefficient and a source of stress on the fuselage. Mounting the engines under the wings solved this problem and, in addition, put their weight close to the center of lift, which reduced stress even further. However, to make the plane self-servicing, the fuselage had to be close enough to the ground to allow for internally stowed boarding stairs, which in turn meant that the engines couldn't be pylon-mounted below the wings. Instead, the engine mounts were made a forward extension of the airfoil, and the tops of the engine nacelles were blended into the wings. The wind tunnel tests were disastrous. The wing area above and to either side of the blended engine mount wasn't providing any lift and was producing ferocious amounts of drag. Extending the wingspan could fix the missing lift, but the drag was bad enough to call into question whether the plane could ever fly profitably.
It was at this point that a water-cooler conversation between the wind tunnel engineers and the CFD modelers sparked the modelers' interest. The extensive wind tunnel data made this a perfect opportunity for the CFD engineers to refine their models and confirm their accuracy. In the end, the answer turned out to be fairly prosaic: the wind tunnel models had been built using the same approach Boeing had used for all of its other jets with pylon-mounted engines, with the engines mocked up as non-breathing solid slugs. This worked fine for engines at the end of long pylons, but not at all for engines that were, in effect, extensions of the leading-edge airfoil. The fix, simply replacing the solid engine mockups with aspirated ones, was diagnosed through CFD modeling and later confirmed by wind tunnel tests. The 737 went on to become the single most popular commercial aircraft ever built.
An analogous story exists for FEA. In the late 1970s, the Ford Motor Company was starting development of the Taurus sedan and, for the first time, decided to use a finite element model to simulate the results of crash tests. The codes had become good enough by that point that they could reproduce the deformation of even fairly complex shapes fold-for-fold, crease-for-crease, and bend-for-bend, as long as the material in each individual part was homogeneous. The idea was not to supplant the USDOT-mandated crash tests, but to figure out, through a process of iterative refinement, how to simplify the design, make it cheaper to manufacture, and still pass the crash tests. The process was so successful that GM immediately instituted a similar program.
So, what's the point? The point is that these watershed events in commercial modeling and simulation were carried out on computers that, at best, were capable of 160 million floating-point operations per second (160 MFLOPS) and had a maximum of 8 megabytes of main memory. Twelve years later, in the early 1990s, the Intel Paragon supercomputer was being delivered with 1,000 times that level of power (160 GFLOPS), and twelve years after that, Cray was delivering Cray XT3 systems with 1,000 times that again (160 TFLOPS).
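For readers who like to see the arithmetic, here is a minimal back-of-the-envelope sketch in Python using only the figures quoted above (a 160 MFLOPS baseline and a thousand-fold jump every twelve years); the variable names are mine and are purely illustrative.

```python
# Scaling implied by the figures in the text:
# 160 MFLOPS (baseline era) -> 160 GFLOPS (early 1990s) -> 160 TFLOPS (mid 2000s)

base_flops = 160e6        # 160 MFLOPS baseline
step_factor = 1000        # 1,000x increase per generation
years_per_step = 12       # roughly twelve years between generations

# Total speedup over two generations (24 years)
total_speedup = step_factor ** 2                      # 1,000,000x

# Sustained year-over-year growth factor implied by 1,000x per 12 years
annual_growth = step_factor ** (1 / years_per_step)   # ~1.78x per year

print(f"Peak after two steps: {base_flops * total_speedup:.3g} FLOPS")  # ~1.6e14 (160 TFLOPS)
print(f"Total speedup: {total_speedup:,}x")
print(f"Implied year-over-year growth: ~{annual_growth:.2f}x")
```

Under those assumptions, the sustained rate works out to roughly 1.78x per year, or a doubling of peak performance about every fourteen to fifteen months, with the two twelve-year steps compounding to the million-fold figure discussed below.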
What can you do with one million times more computing capability than was used to crash-test the Ford Taurus or model the aerodynamics of the Boeing 737? You can do amazing things. Finite Element Modeling is no longer limited to components made from homogeneous materials: components can be laminated or fiber-reinforced, and can even include materials that change phase or otherwise change behavior under stress or shock. In the automotive industry alone, this has led to major advances in crashworthiness and weight reduction.
Computational Fluid Dynamics has moved from incompressible, inviscid (no viscosity, no friction) "panel" modeling to compressible, turbulent, dissipative flow. More importantly, it has started merging with computational chemistry to model processes like combustion, where it follows not only the fluid flow but also the chemical reactions and reaction products. This is allowing the modeling of everything from jet engine combustor cans to diesel engines, both areas where even a one percent improvement in efficiency has major economic consequences.
Tags: HPC, industry, NCSA
