Mathematics and Science



National Science Foundation

Division of Mathematical Sciences
Mathematics and Science
Dr. Margaret Wright Prof. Alexandre Chorin
April 5, 1999

PREFACE
The challenges faced today by science and engineering are so complex that they can be solved only with the help and participation of mathematical scientists. All three approaches to science (observation and experiment, theory, and modeling) are needed to understand the complex phenomena investigated today by scientists and engineers, and each approach requires the mathematical sciences. Observationalists are now producing enormous data sets whose patterns can be mined, and discerned, only with deep statistical and visualization tools. Indeed, new tools need to be fashioned and, at least initially, they will have to be fashioned specifically for the data involved. This will require scientists, engineers, and mathematical scientists to work closely together.
Scientific theory is always expressed in mathematical language. Modeling is done via mathematical formulations and computational algorithms, with observations providing initial data for the model and serving as a check on its accuracy. Modeling is used to predict behavior, and in doing so it either validates a theory or raises new questions about its reasonableness, often suggesting the need for sharper experiments and more focused observations. Thus observation and experiment, theory, and modeling reinforce one another and together lead to our understanding of scientific phenomena. As with data mining, the other approaches succeed only if there is close collaboration between mathematical scientists and researchers in the other disciplines.
Dr. Margaret Wright of Bell Labs and Professor Alexandre Chorin of the University of California, Berkeley (both past and present members of the Advisory Committee for the Directorate for Mathematical and Physical Sciences) volunteered to address the need for this interplay between the mathematical sciences and other sciences and engineering in a report to the Division of Mathematical Sciences. Their report identifies six themes that offer opportunities for interaction between the mathematical sciences and other sciences and engineering, and goes on to give examples where these themes are essential to the research. These examples represent only a few of the many possibilities. Further, the report addresses the need to rethink how we train future scientists, engineers, and mathematical scientists.
The report illustrates that some mathematical scientists, through collaborative efforts in research, will discover new and challenging problems. In turn, these problems will open whole new areas of research of interest and challenge to all mathematical scientists. The fundamental mathematical and statistical development of these new areas will naturally cycle back and provide new and substantial tools for attacking scientific and engineering problems.
The report is exciting reading. The Division of Mathematical Sciences is greatly indebted to Dr. Wright and Professor Chorin for their effort.
Donald J. Lewis
Director (1995-1999)
Division of Mathematical Sciences
National Science Foundation

1 Overview
Mathematics and science[1] have a long and close relationship that is of crucial and growing importance for both. Mathematics is an intrinsic component of science, part of its fabric, its universal language and indispensable source of intellectual tools. Reciprocally, science inspires and stimulates mathematics, posing new questions, engendering new ways of thinking, and ultimately conditioning the value system of mathematics.
Fields such as physics and electrical engineering that have always been mathematical are becoming even more so. Sciences that have not been heavily mathematical in the past---for example, biology, physiology, and medicine---are moving from description and taxonomy to analysis and explanation; many of their problems involve systems that are only partially understood and are therefore inherently uncertain, demanding exploration with new mathematical tools. Outside the traditional spheres of science and engineering, mathematics is being called upon to analyze and solve a widening array of problems in communication, finance, manufacturing, and business. Progress in science, in all its branches, requires close involvement and strengthening of the mathematical enterprise; new science and new mathematics go hand in hand.
The present document cannot be an exhaustive survey of the interactions between mathematics and science. Its purpose is to present examples of scientific advances made possible by a close interaction between science and mathematics, and draw conclusions whose validity should transcend the examples. We have labeled the examples by words that describe their scientific content; we could have chosen to use mathematical categories and reached the very same conclusions. A section labeled “partial differential equations” would have described their roles in combustion, cosmology, finance, hybrid system theory, Internet analysis, materials science, mixing, physiology, iterative control, and moving boundaries; a section on statistics would have described its contributions to the analysis of the massive data sets associated with cosmology, finance, functional MRI, and the Internet; and a section on computation would have conveyed its key role in all areas of science. This alternative would have highlighted the mathematical virtues of generality and abstraction; the approach we have taken emphasizes the ubiquity and centrality of mathematics from the point of view of science.
2 Themes
As Section 3 illustrates, certain themes consistently emerge in the closest relationships between mathematics and science:
• modeling
• complexity and size
• uncertainty
• multiple scales
• computation
• large data sets
[1] For compactness, throughout this document “mathematics” should be interpreted as “the mathematical sciences”, and “science” as “science, engineering, technology, medicine, business, and other applications”.

2.1 Modeling
Mathematical modeling, the process of describing scientific phenomena in a mathematical framework, brings the powerful machinery of mathematics---its ability to generalize, to extract what is common in diverse problems, and to build effective algorithms---to bear on characterization, analysis, and prediction in scientific problems. Mathematical models lead to “virtual experiments” whose real-world analogues would be expensive, dangerous, or even impossible; they obviate the need to actually crash an airplane, spread a deadly virus, or witness the origin of the universe. Mathematical models help to clarify relationships among a system's components as well as their relative significance. Through modeling, speculations about a system are given a form that allows them to be examined qualitatively and quantitatively from many angles; in particular, modeling allows the detection of discrepancies between theory and reality.
2.2 Complexity and Size
Because reality is almost never simple, there is constant demand for more complex models. However, ever more complex models lead eventually---sometimes immediately---to problems that are fundamentally different, not just larger and more complicated. It is impossible to characterize disordered systems with the very same tools that are perfectly adequate for well-behaved systems. Size can be regarded as a manifestation of complexity because substantially larger models seldom behave like expanded versions of smaller models; large chaotic systems cannot be described in the same terms as small-dimensional chaotic systems.
2.3 Uncertainty
Although uncertainty is unavoidable, ignoring it can be justified when one is studying isolated, small-scale, well-understood physical processes. This is not so for large-scale systems with many components, such as the atmosphere and the oceans; for chemical processes where there is no good way to determine reaction paths exactly; and, of course, for biological and medical applications or for systems that rely on human participation. Uncertainty cannot be treated properly using ad hoc rules of thumb, but requires serious mathematical study. Issues that require further analysis include: the correct classification of the various ways in which uncertainty affects mathematical models; the sensitivities to uncertainty of both the models and the methods of analysis; the influence of uncertainty on computing methods; and the interactions between uncertainty in the models themselves and the added uncertainty arising from the limitations of computers.
Uncertainty of outcome is not necessarily directly related to uncertainty in the system or in the model. Very noisy systems can give rise to reliable outcomes, and in such cases it is desirable to know how these outcomes arise and how to predict them. Another extreme can occur with strongly chaotic systems: even if a specific solution of a model can be found, the probability that it will actually be observed may be nil; thus it may be necessary to predict the average outcome of computations or experiments, or the most likely outcome, drawing on as yet untapped resources of statistics.
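To make the point about reliable outcomes from noisy systems concrete, here is a minimal sketch in Python; the process and all parameters are illustrative toys chosen for this note, not drawn from the report.

    # Sketch: an Ornstein-Uhlenbeck-type process is very noisy path by path,
    # yet its ensemble statistics are stable and predictable in advance.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_paths(n_paths=10_000, n_steps=1_000, dt=0.01,
                       theta=1.0, mu=2.0, sigma=1.5, x0=0.0):
        """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW."""
        x = np.full(n_paths, x0)
        for _ in range(n_steps):
            x += theta * (mu - x) * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)
        return x

    final = simulate_paths()
    print(f"spread of individual outcomes: {final.std():.2f}")    # large
    print(f"ensemble average: {final.mean():.3f} (theory: 2.0)")  # reliable

Each individual path is dominated by noise, but the average over many paths converges to a value computable beforehand; predicting such averages, rather than individual trajectories, is precisely the statistical task described above.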

2.4 Multiple Scales
The need to model or compute on multiple scales arises when occurrences on vastly disparate scales (in space, time, or both) contribute simultaneously to an observable outcome. In turbulent combustion, for example, the shape of the vessel is important and so are the very small fluctuations in temperature that control the chemical reactions. Multiple scales are inherent in complex systems, a topic of great importance across science, whenever entities at the micro and macro levels must be considered together.
When it is known in advance that phenomena on different scales are independent, one may rely on a separate model on each scale; but when different scales interact, or when the boundaries between scales become blurred, models are needed that allow interactions between scales without an undue sacrifice of structure or loss of information at any scale. A related complication is that the finiteness of computers limits the range of scales that can be represented in a given calculation; only mathematical analysis can overcome this built-in restriction.
2.5 Computation
Experiment and theory, the two classical elements of the scientific method, have been joined by computation as a third crucial component. Computations that were intractable even a few years ago are performed routinely today, and many people pin their hopes for mastering problem size and complexity on the continuing advent of faster, larger computers. This is a vain hope if the appropriate mathematics is lacking. For more than 40 years, gains in problem-solving power from better mathematical algorithms have been comparable to the growth of raw computing speed, and this pattern is likely to continue. In many situations, especially for multiscale and chaotic problems, fast hardware alone will never be sufficient; methods and theories must be developed that can extract the best possible numerical solutions from whatever computers are available.
It is important to remember that no amount of computing power or storage can overcome uncertainties in equations and data; computed solutions cannot be understood properly unless the right mathematical tools are used. A striking visualization produced over many days of computation is just a pretty picture if there are flaws in the underlying mathematical model or numerical methods, or if there are no good ways to represent, manipulate, and analyze the associated data.
It is also worth noting that computation has come to permeate even the traditional core mathematical areas, which are allotting expanding roles to computation, both numerical and symbolic.
2.6 Large Data Sets
The enormous sets of data that are now being generated in many scientific areas must be displayed, analyzed, and otherwise “mined” to exhibit hidden order and patterns. However, large data sets do not all have similar characteristics, nor are they used in the same way. Their quality ranges from highly accurate to consistently noisy, sometimes with wide variations within the same data set. The definition of an “interesting” pattern is neither the same nor even similar across scientific fields, and may vary within a given field. Structure emerges in the small as well as in the large, often with differing mathematical implications. Large data sets that need to be analyzed in real time---for instance, in guiding surgery or controlling aircraft---pose further challenges.
3 Examples
The examples in this section, described for a general scientific audience, illustrate the scientific and technological progress that can result from genuine, continuing, working relationships between mathematicians and scientists. Certain well-publicized pairings, such as those between modern geometry and gauge field theory, cryptography and number theory, and wavelets and fingerprint analysis, have been intentionally omitted---not to slight their remarkable accomplishments, but rather to demonstrate the breadth and power of connections between mathematics and science over a wide range of disparate, often unexpected, scientific applications.
3.1 Combustion
Combustion, a critical and ubiquitous technology, is the principal source of energy for transportation, for electric power production, and for a variety of industrial processes. Before actually building combustion systems, it is highly desirable to predict their operating characteristics, such as safety, efficiency, and emissions. Mathematicians, in collaboration with scientists and engineers, have played and continue to play a central role in creating the analytical and computational tools used to model combustion systems. Two examples---modeling the chemistry of combustion and engineering-scale simulation---illustrate the ties between mathematics and practical combustion problems.
Modeling the chemistry of combustion. To model combustion it is necessary to understand the detailed chemical mechanisms by which fuel and air react to form combustion products. For a complex hydrocarbon fuel such as gasoline, whose burning involves thousands of distinct chemical species, one must identify the reactions that are most important for the combustion process. The rates of reaction, which are sensitive functions of temperature and pressure, must also be estimated, along with their energetics, e.g., the heats of formation of the various species.
For more than twenty years, mathematicians and chemists have worked together on computational tools that have become critical to the development of reaction mechanisms. The need for robust and accurate numerical solvers in combustion modeling was clearly understood as early as the 1970s. In response to this need, algorithms and software for solving stiff systems of ordinary differential equations were developed and combined into integrated packages for chemically reacting systems, such as the Chemkin package developed at Sandia National Laboratories. Given arbitrarily complex chemical reaction mechanisms specified in a standard format, Chemkin automatically generates an interface to numerical methods that compute various chemically reacting systems. These include spatially homogeneous systems as well as a variety of one-dimensional systems, such as premixed flames, opposed-flow diffusion flames, and detonation waves.
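A minimal sketch of why such stiff solvers matter (this is not Chemkin itself; it is the classic Robertson kinetics system, a standard stiff test problem, solved in Python with an off-the-shelf implicit method):

    # Robertson chemical kinetics: three species, rate constants spanning
    # nine orders of magnitude; this disparity is the source of stiffness.
    # The rate constants are the standard textbook values, not data from
    # any particular combustion mechanism.
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        y1, y2, y3 = y
        return [-0.04 * y1 + 1.0e4 * y2 * y3,
                 0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
                 3.0e7 * y2**2]

    # An implicit (BDF) method takes large steps safely; an explicit solver
    # such as method="RK45" would need a prohibitive number of tiny steps.
    sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                    method="BDF", rtol=1e-8, atol=1e-10)
    print(f"steps taken: {sol.t.size}")
    print(f"final concentrations: {sol.y[:, -1]}")

Packages like Chemkin couple solvers of this kind to mechanisms with thousands of species rather than three.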
The mathematical and numerical analysis embodied in Chemkin has been a key ingredient in designing and evaluating mechanisms, including those in wide laboratory use. The existence of a reliable and generalizable mathematical model facilitates the testing of new ideas in mechanism design, since the effects of modifying a chemical mechanism can be assessed directly. Finally, the mathematical software is not only sufficiently robust to model arbitrarily complex chemical reaction mechanisms, but also accurate enough so that the numerical error is negligible relative to laboratory measurements.
Chemkin represents an amalgam of mathematical analysis, numerical methods, and software development. The history of Chemkin illustrates the fact that in many application areas advanced mathematical ideas are more likely to be used by scientists and engineers if they are embodied in software.
Engineering-scale simulation. The goal in this area is to represent the three-dimensional fluid dynamics and other physical processes as they occur in combustion devices such as internal combustion engines, industrial and utility burners, and gas turbines. Two issues make these simulations particularly challenging. The first is the number and complexity of the physical processes that must be represented, which include fluid dynamics, heat and mass transport, radiative heat transfer, chemical kinetics, turbulence and turbulent combustion, and a variety of multiphase fluid flow phenomena. The second is the enormous range of length and time scales in such systems. The relevant physical processes operate simultaneously on scales ranging from the smallest turbulent fluctuations (10⁻⁶ meters) up to the size of a utility boiler (100 meters).
Mathematicians have consistently been at the forefront in developing innovative methods for modeling engineering combustion problems. Within computational fluid dynamics, a huge field that encompasses numerous applications, many of the mathematical methods have arisen as a direct response to specific difficulties presented by combustion problems. Examples include novel discretization techniques, such as high-order accurate finite-difference methods and vortex methods; adaptive gridding techniques, which estimate the error as a calculation is running and locally increase or decrease the grid density to maintain a uniform level of accuracy; and new methods for problems in complex geometries, such as the overset grid and embedded boundary methods.
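The adaptive-gridding idea can be conveyed by a deliberately simple sketch (a toy 1-D problem in Python; the function and tolerance are illustrative, and real combustion codes operate on multidimensional flow fields):

    import numpy as np

    def f(x):
        return np.tanh(50.0 * (x - 0.5))    # a sharp front near x = 0.5

    def adapt(lo, hi, tol=1e-3, depth=0, max_depth=20):
        """Refine [lo, hi] until the midpoint value agrees with linear
        interpolation through the endpoints to within tol."""
        mid = 0.5 * (lo + hi)
        err = abs(f(mid) - 0.5 * (f(lo) + f(hi)))   # local error estimate
        if err > tol and depth < max_depth:
            # Keep subdividing only where the error estimate demands it.
            return adapt(lo, mid, tol, depth + 1) + adapt(mid, hi, tol, depth + 1)[1:]
        return [lo, hi]

    grid = np.array(adapt(0.0, 1.0))
    print(f"{grid.size} points; smallest spacing {np.diff(grid).min():.1e}")

The grid points cluster automatically near the front, mirroring how production codes concentrate resolution where their running error estimates demand it.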
A major mathematical contribution has been asymptotic analysis that makes possible an understanding of the coupling between different physical processes in these complex systems; insights from asymptotic analysis are used to find stable and accurate representations of these processes in terms of simpler subprocesses. Examples include the use of low-Mach-number asymptotics to eliminate zero-energy acoustic waves while retaining the bulk effects of compression and expansion due to heat release, and front-tracking methods based on a separation-of-scales analysis for thin premixed flames.
Today, packages such as Chemkin are part of the standard toolkit for combustion researchers and engineers. New numerical methods for engineering-scale simulations of combustion systems have been extensively implemented as research codes, and are slowly making their way into production engineering software.
Looking ahead, the requirements of combustion simulation suggest promising directions for mathematics research that will make new science possible. Even with the most powerful computers, it is impossible to represent directly all of the processes involved at all of the relevant length scales. Instead, one needs to introduce sub-grid models that capture the effect on the large scales of all the scales below the resolution limit of the calculation. In the area of chemical reaction mechanisms, this corresponds to the development of reduced mechanisms, i.e., reaction mechanisms with a few tens of species that accurately represent energy release and emissions. The systematic development of reduced mechanisms will involve a variety of mathematical tools, from statistical analysis and optimization to dynamical systems.
For engineering-scale simulations, modeling at the sub-grid scale is a central requirement for future progress. The development of sub-grid models for turbulent combustion is particularly difficult, since chemical reactions are sensitive to small-scale fluctuations in temperature and composition. The effect of these fluctuations must be separated from the larger-scale dynamics representable on the grid. There has been renewed progress in turbulence modeling in recent years, based on ideas from mathematical statistical mechanics, and extension of these ideas to turbulent combustion represents a substantial mathematical challenge; any successes will have enormous practical consequences.
3.2 Cosmology
Cosmology, which once consisted of speculations based on extremely scarce observations, has become a science rich in both data and theory. The relativistic “hot big bang” model for the expanding universe is widely accepted today and supported by a substantial body of evidence; just as significantly, no data are inconsistent with this model. But the standard cosmology leaves unanswered certain key questions about the nature and evolution of the universe, including the quantity and composition of energy and matter, and the origin and nature of the density perturbations that seeded all the structure in the universe. While a promising paradigm for extending the standard cosmology---inflation plus cold dark matter---is being developed and tested, many fundamental cosmological issues remain to be resolved or clarified. (“Inflation” refers to the quantum-mechanical fluctuations occurring during a very early burst of expansion driven by vacuum energy; cold dark matter consists of slowly moving elementary particles left over from the earliest fiery moments of the universe.) Mathematical progress in two broad areas will be essential for cosmology: techniques for dealing with massive data sets and large-scale, nonlinear, multiscale modeling and numerical simulation.
Massive data sets. As cosmology moves toward becoming an exact science, major mathematical challenges arise in coping with, displaying, understanding, and explaining the unprecedented avalanche of high-quality data expected during the next few years. To mention only a few sources, NASA's MAP and the European Space Agency's Planck Surveyor will map the full sky to an angular resolution of 0.1°, allowing determination of the mass distribution in the universe before nonlinear structures formed. The Sloan Digital Sky Survey will obtain the redshifts of a million galaxies over 25% of the northern sky, and the Two-Degree Field Survey will collect 250,000 redshifts in many 2° patches of the southern sky, together covering around 0.1% of the observable universe and mapping structures well beyond the largest presently known size. In addition, experiments at accelerators, nuclear reactors, and large underground detectors are planned or in place to search for neutralinos, explore the entire theoretically favored mass range, and pursue neutrino mass. The quantity, quality, and nature of the data require connections between mathematics and cosmology. Although some generic principles of data analysis have emerged, the various features to be “mined” in cosmological data differ from one another in ways whose definition remains far from precise. The patterns of interest change from application to application, and may even vary when several uses are made of the same data set. In contrast to data from other scientific areas, the cosmological data are likely to be of very high quality; thus it will be important to squeeze every possible insight from each data set.
A further striking feature of cosmological data is the vastness of the scale ranges in almost every dimension. Data will be gathered not only on the scale of galaxies, but also from particle physics; the “hot” part of big bang cosmology implies the need for physics of ever-higher energies and ever-shorter times.
Finally, astronomical data not only arrive at very high speed, but patterns detected in real time may be used to control subsequent data collection adaptively---for example, to concentrate on regions where something interesting is being observed. Careful mathematical analysis will be needed because techniques appropriate for “on the fly” data mining are quite different from those used to examine data at leisure.
Modeling and simulation. The mathematical models in cosmology typically involve highly nonlinear coupled partial differential equations that cannot conceivably be solved analytically---for instance, the equations may model turbulence in nuclear explosions that occur when stars blow themselves apart. Small differences in the mathematical form of these equations can lead to big variations in the predicted phenomena. Cosmological models need to be complex enough to capture all the phenomena reflected in the data, yet amenable to analysis. Important modeling questions arise in the inverse problem, reasoning backwards from observations and images to find the laws that created them. The hope is that, by varying the initial conditions and the parameters embedded in mathematical models, simulations can reveal the fundamental parameters that define the universe, such as the mean density and Einstein's cosmological constant Λ.
Like the associated data, cosmological models contain enormous ranges of scales that pose difficulties for both mathematical analysis and numerical solution. Creating a priori cutoffs that define different scale regimes is a common tactic, but it breaks down as the ends of the scales approach each other---when the noise for a large scale becomes comparable to the signal for the next-smaller scale. Subtle mathematical modeling is essential to separate the phenomena that can be ignored from those that count.
Carefully executed large-scale simulations match observations well, and have become a standard tool in modern astrophysics. Cosmological calculations consume a large portion of the available supercomputer cycles in the United States, and worldwide as well. This is because solving the complex partial differential equations of cosmology over the wide multidimensional range of scales for problems of realistic size is a massive undertaking at the edge of current mathematical and computational capabilities.
To illustrate these points, consider the formation and evolution of galaxy clusters, the largest objects in the universe. For a simulation to be credible, enormous dynamic ranges in size and density are required to resolve individual galaxies within a cluster; the range of mass is perhaps 10⁹, over a time period of 10 billion years. One approach is to begin with a “box” (part of the universe) that is initialized with a large number (say, 10 million) of uniformly distributed particles, and then to follow the motion of each particle as its position and velocity are perturbed following theoretical predictions.
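A toy version of this particle approach can be written in a few lines (Python, a few hundred particles, direct gravity in arbitrary illustrative units; production cosmology codes add expansion terms and use the fast-summation and multipole methods mentioned below to make the force evaluation feasible for millions of particles):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200                                  # 10 million in a real run
    pos = rng.uniform(0.0, 1.0, (n, 3))      # uniformly seeded "box"
    vel = np.zeros((n, 3))
    G, m, soft, dt = 1.0, 1.0 / n, 0.05, 1e-3   # illustrative units

    def accel(p):
        """Direct-sum softened gravity; cost grows as N**2."""
        d = p[np.newaxis, :, :] - p[:, np.newaxis, :]
        inv_r3 = ((d**2).sum(axis=2) + soft**2) ** -1.5
        np.fill_diagonal(inv_r3, 0.0)        # no self-force
        return G * m * (d * inv_r3[:, :, np.newaxis]).sum(axis=1)

    a = accel(pos)
    for _ in range(100):                     # leapfrog (kick-drift-kick)
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos)
        vel += 0.5 * dt * a

    print(f"mean particle speed: {np.linalg.norm(vel, axis=1).mean():.3f}")

Even this toy exhibits the central cost: every step requires an all-pairs force evaluation, which is why fast summation is indispensable at realistic particle counts.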
This approach poses formidable difficulties for numerical methods in addition to those arising from the already-mentioned nonlinearities and ranges of scale: the particles move non-uniformly, model geometries are highly complex, and there is a demand for ever-finer resolution. A fruitful arena for mathematical analysis is the effect on numerical accuracy of decisions about partitioning into scales; here the recent mathematical work on particle methods and on fast summation and multipoles may be of key importance.
Since cosmological calculations will continue to tax the capabilities of the highest-performance available hardware, further mathematical and algorithmic ingenuity is needed to make the implementations of these simulations run efficiently on parallel machines without inordinate specialization for a particular hardware configuration. Taking advantage of new computer architectures without unduly compromising generality is a problem for all applications that strain today's high-performance computers.
3.3 Finance
Modern finance, although not a science in the traditional sense, is intertwined with mathematics, and the connection is not limited to theory---mathematics is a central feature in the day-to-day functioning of the world's financial markets. Mathematics and finance are tightly connected in the two areas of derivative securities and risk management.
Derivative securities. In recent years, business headlines have repeatedly mentioned “derivatives”. A financial derivative is an instrument that derives its value from other, more fundamental instruments, such as stocks, bonds, currencies, and commodities (any one of which is called an underlying). Typical derivatives include options, futures, interest rate swaps, and mortgage-backed securities. The Nobel Prize-winning papers on option pricing containing the famous Black-Scholes partial differential equation were published in 1973, just as the Chicago Board Options Exchange was being established, and within months the Black-Scholes model became a standard tool on the trading floor. Worldwide, the volume of trade in derivatives has since grown to rival the volume of trade in equities. One reason for this phenomenal growth is the existence of reasonably reliable mathematical models to guide pricing and trading.
In theory, derivatives are redundant because they can be synthesized by dynamic trading in the underlying instruments. Trading in derivatives thus rests on the possibility of finding the fair price of a derivative. Under standard assumptions, the unique fair price of an option can be found from the Black-Scholes equation. However, certain key parameters need to be determined before this equation can be used in practical settings.
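For a European call under the standard assumptions (no dividends, constant rate and volatility), the fair price is given by the well-known closed-form solution of the Black-Scholes equation; a minimal sketch in Python, with purely illustrative numbers:

    from math import log, sqrt, exp, erf

    def norm_cdf(x):                         # standard normal CDF
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        """European call: spot S, strike K, time to expiry T in years,
        risk-free rate r, volatility sigma (no dividends)."""
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print(f"call price: {black_scholes_call(100, 105, 0.5, 0.05, 0.20):.4f}")

Every input here is directly observable except sigma, the volatility; estimating it is exactly the problem taken up next.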
One of these parameters, the volatility, has been the subject of intense mathematical and algorithmic attention for almost twenty years. The original Black-Scholes model requires the estimation of a constant volatility derived from a diffusion model of the underlying's price process. Multiple approaches have been devised to calculate this form
