Jacques-Louis Lions, Institut de France
The Future of Scientific Computation
The main problems posed by society at large, which seem to be on their way or are already with us, are:
- Sustainable development of everything: environment, ...
- Global competition: optimization of everything
- Good health for everybody, and forever
- Safety of everything
What can be the role of scientific computation in these kinds of questions?
For sustainable development we need simulations with a very large time horizon. Tools needed include models of the aging of materials: interfaces, corrosion, cracks. (For example, cracks are fundamental to the aging of planes.) For this you have to return to ab initio computation on the molecular level (Schrödinger equations). But then you meet the curse of dimensionality (a very large number of variables; the notion stems from dynamic programming in the early fifties). In connection with simulations with a very large time horizon you find the notion of an ensemble of solutions, a trend that is very important in climatology. Such an ensemble consists of the results of an off-line computation repeated several times with slightly different data. Then you try to extract useful information from this set of results using statistical methods. Another recent development is validated computation of functionals of interest. For example, you have a large climatological model, but you are only interested in a tiny part, let us say the probability of having typhoons. How do you extract only this information without having to compute everything?
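The ensemble idea described above can be sketched in a few lines. The code below is only an illustrative toy (a chaotic logistic map standing in for an expensive simulation, with hypothetical names like `toy_model` and `run_ensemble`), not a climate model: the same off-line computation is repeated with slightly perturbed data, and statistics, including the probability of an event of interest, are extracted from the set of results.

```python
import random
import statistics

def toy_model(initial_value, steps=100):
    """Stand-in for an expensive off-line simulation.  A real climate
    model would integrate PDEs; here a chaotic logistic map suffices,
    since tiny changes in the input also diverge over a long horizon."""
    x = initial_value
    for _ in range(steps):
        x = 3.7 * x * (1.0 - x)
    return x

def run_ensemble(base_value, n_members=200, perturbation=1e-6, seed=0):
    """Repeat the computation with slightly different data, then extract
    useful information from the set of results by statistical methods."""
    rng = random.Random(seed)
    results = [toy_model(base_value + rng.uniform(-perturbation, perturbation))
               for _ in range(n_members)]
    mean = statistics.mean(results)
    spread = statistics.stdev(results)
    # Probability of an "event of interest" (the analogue of a typhoon):
    p_event = sum(r > 0.9 for r in results) / len(results)
    return mean, spread, p_event
```

Note that even a microscopic perturbation (here 10^-6) produces a wide spread after 100 steps, which is exactly why a single deterministic run is not informative over large time horizons.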
Certainly in the industrial world global competition requires optimization of all kinds of things. For example, real-time control of extremely complex phenomena like fusion, turbulence, and combustion, or control of sophisticated processes. Here the curse of dimensionality strikes again. Brute force is impossible (one must think of nonlinear partial differential equations in 50 variables, not unknowns). Tricks used include decomposition of spaces, neural networks, and Monte Carlo (if everything else has failed). Main trends in solution methods are: decomposing a problem into small pieces, and arranging things so that local solutions automatically give a good global performance (this is a general trend in computer science).
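A minimal sketch of why Monte Carlo survives where brute force fails: its error decays like 1/sqrt(N) independently of the dimension, whereas even a crude grid with two points per axis in 50 variables already needs 2^50 evaluations. The function and names below (`mc_integrate`) are illustrative assumptions, not from the original text.

```python
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Monte Carlo estimate of the integral of f over the unit cube
    [0,1]^dim by averaging f at random points.  The statistical error
    shrinks like 1/sqrt(n_samples) whatever dim is, so 50 variables
    cost no more per sample than 5."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# Example: the mean of the sum of 50 uniform coordinates is exactly 25.
estimate = mc_integrate(lambda x: sum(x), dim=50, n_samples=20000)
```

With 20,000 samples the estimate lands close to the exact value 25, while a tensor-product grid of comparable resolution would be astronomically large.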
In the Good Health problem the challenge of challenges is the simulation of living systems: heart, liver, brain, for example to train surgeons. Here you need to have the response to what you do (feedback) in real time, and because there are large deformations (e.g., surgery of a liver), the system is nonlinear as well. Solving such a problem in general is out of reach, but one can do something by linearization, which makes sense if the deformations are small (surgeons also make small incisions). In connection with computation, the genome problem should also be mentioned here. Another fascinating problem is the production of new drugs, because it amounts to the control of Schrödinger equations. Once again these require ab initio computations on the molecular level.
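The linearization argument can be made concrete with a toy 1-D elastic law (a real soft-tissue model would be a nonlinear PDE; the names `nonlinear_response`, `solve_linearized`, and `solve_newton` are hypothetical). Dropping the nonlinear term gives a cheap real-time answer that agrees with the full nonlinear solve precisely while deformations stay small.

```python
def nonlinear_response(u, k=10.0, alpha=50.0):
    """Toy nonlinear elastic law: restoring force k*u + alpha*u**3."""
    return k * u + alpha * u ** 3

def solve_linearized(f, k=10.0):
    """Drop the nonlinear term and solve k*u = f directly.  Sensible
    only while deformations are small -- just as the text notes that
    surgeons also make small incisions."""
    return f / k

def solve_newton(f, k=10.0, alpha=50.0, tol=1e-12):
    """Full nonlinear solve by Newton iteration, for comparison."""
    u = f / k  # start from the linearized answer
    for _ in range(50):
        r = nonlinear_response(u, k, alpha) - f
        u -= r / (k + 3 * alpha * u ** 2)  # Newton step with exact derivative
        if abs(r) < tol:
            break
    return u
```

For a small applied force the two answers are nearly identical; for a large one the linearized answer is badly wrong, which is the regime that is still out of reach in real time.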
For the safety of everything it is clear that an advanced vision of a system is needed (think of a new aeroplane), including maintenance, repairs, and waste, without risk at each step. Less obvious is developing an advanced vision of the safety of a set of systems, such as global transportation (these systems are presently not compatible with sustainable development). Important here is the search for feedbacks, such as anticipative functionals, which are very sensitive to changes occurring in the years to come (coral growth is a good example). Finally, of course, the safety of software is a very important area.