Scientific problems very often lead to the computation of integrals or sums, or to the solution of differential or integral equations. In most cases, exact analytical computation of the integrals is impossible. Similarly, summations may involve so many terms that they cannot be carried out in reasonable time.
We then resort to approximation methods, the best known of which are standard numerical analysis techniques. Efficient in dimension 1, these techniques become irrelevant as the dimension increases. For some problems the number of variables is so large that future speed-ups of computing hardware will probably never be enough.
Consequently, statistical simulation methods, called Monte Carlo methods, look promising, since their convergence speed is independent of the dimension of the mathematical problem.
Since the estimation is statistical, these methods do not provide the exact solution of the problem, but a confidence interval containing the solution with a given probability.
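To illustrate these two points, the dimension-independent O(1/sqrt(n)) convergence rate and the confidence interval output, here is a minimal sketch of crude Monte Carlo integration (a generic textbook example, not taken from the work described on this page; the function and dimension are arbitrary illustrative choices):

```python
import math
import random

def mc_integrate(f, dim, n, seed=0):
    """Crude Monte Carlo estimate of the integral of f over [0,1]^dim,
    together with a 95% confidence interval.  The error decreases as
    O(1/sqrt(n)) regardless of dim, which is the point made above."""
    rng = random.Random(seed)
    total = 0.0
    total_sq = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        y = f(x)
        total += y
        total_sq += y * y
    mean = total / n
    var = (total_sq / n - mean * mean) * n / (n - 1)  # sample variance
    half_width = 1.96 * math.sqrt(var / n)            # 95% CI half-width
    return mean, (mean - half_width, mean + half_width)

# Example: the integral of sum(x_i^2) over [0,1]^10 is exactly 10/3.
est, (lo, hi) = mc_integrate(lambda x: sum(t * t for t in x), dim=10, n=100_000)
```

The returned interval (lo, hi) contains the true value 10/3 with probability roughly 0.95; this is exactly the kind of statistical answer, estimate plus confidence interval, discussed above.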
Monte Carlo techniques have been used for several centuries, even if it is only after the Second World War that they gained the status of a method. Their systematic use, notably by Ulam, Metropolis and von Neumann, occurred at Los Alamos during the preparation of the atomic bomb, where many well-known mathematicians collaborated. The name ``Monte Carlo'' was coined by Metropolis, inspired by Ulam's interest in poker, since Monte Carlo is a famous casino center, and by the method's similarity to games of chance. The work done at Los Alamos consisted in directly simulating neutron dispersion and absorption problems.
Variance reduction techniques were already in use in those first applications.
The Los Alamos results being of course secret, the first publications in the field appeared only in 1949.
Since then, the development of Monte Carlo methods has followed the development of computer science. Indeed, as early as 1945, J. von Neumann conjectured the potential use of computers for stochastic simulation: ``Computers will certainly offer a new approach to mathematical statistics; the computer experiment approach''. Monte Carlo techniques have been frequently used in various domains from the fifties onwards, and sometimes overused, for instance on problems to which they were not suited.
My previous work
A first important part of my work on Monte Carlo methods consists of a combination of Monte Carlo and quasi-Monte Carlo methods (see the web page on quasi-Monte Carlo).
A second part deals with rare event simulation. We were interested in the analysis of complex systems, a major topic from the applications point of view, and more specifically in the simulation of highly reliable Markovian systems. For such systems, quasi-Monte Carlo methods are generally inefficient: we simulate paths of a stochastic process over an unbounded number of time steps, so the dimension is infinite, a case too complicated to be handled by a direct quasi-Monte Carlo method. We have improved the existing methods. The most significant contribution on the topic is nonetheless the introduction of a new concept, the so-called bounded normal approximation, which guarantees a good normal approximation in the central limit theorem whatever the reliability of the system.
We have also shown that the bounded normal approximation property provides a good variance estimation.
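To give a feel for why rare event simulation needs dedicated techniques, here is a toy importance sampling sketch (a standard textbook example, not the Markovian-system estimators discussed above; the distribution, threshold and sample size are arbitrary illustrative choices). Crude Monte Carlo would need on the order of 1/p samples to even observe an event of probability p; sampling from a heavier-tailed proposal and reweighting by the likelihood ratio makes the event frequent while keeping the estimator unbiased:

```python
import math
import random

def is_tail_prob(threshold, rate_q, n, seed=1):
    """Importance-sampling estimate of p = P(X > threshold) for X ~ Exp(1).
    Samples are drawn from the heavier-tailed proposal Exp(rate_q), with
    rate_q < 1, so the rare crossing becomes frequent; each crossing is
    weighted by the likelihood ratio e^{-x} / (rate_q * e^{-rate_q * x})."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rate_q)
        if x > threshold:
            total += math.exp(-x * (1.0 - rate_q)) / rate_q
    return total / n

p_hat = is_tail_prob(threshold=20.0, rate_q=0.05, n=100_000)
exact = math.exp(-20.0)  # about 2e-9: crude MC would need ~1e11 samples
```

With 100,000 samples the relative error here is a few percent, whereas crude Monte Carlo would almost surely return 0.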
Another point is the development of simulation techniques in SPNP, a software package devoted to the analysis of Petri nets. We have developed the simulator and integrated several simulation techniques (importance sampling, importance splitting...).
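The importance splitting idea mentioned above can be sketched on a toy model (a classical gambler's-ruin random walk, assumed here purely for illustration; this is not the SPNP implementation). The rare event "reach a high level before ruin" is decomposed into a product of much more likely one-level crossings, each estimated by its own batch of replicas:

```python
import random

def splitting_estimate(p_up, levels, n_per_stage, seed=2):
    """Fixed-effort importance splitting for a down-biased random walk on
    the integers (up with probability p_up < 1/2, down otherwise),
    estimating the rare probability of reaching `levels` before 0,
    starting at 1.  Stage k estimates the conditional probability of
    climbing from level k to k+1 before ruin; the product over stages
    estimates the rare event probability."""
    rng = random.Random(seed)
    estimate = 1.0
    for start in range(1, levels):
        hits = 0
        for _ in range(n_per_stage):
            s = start
            while 0 < s <= start:       # run until we climb past `start` or hit 0
                s += 1 if rng.random() < p_up else -1
            if s == start + 1:
                hits += 1
        if hits == 0:
            return 0.0                  # all replicas ruined: estimate collapses
        estimate *= hits / n_per_stage
    return estimate

est = splitting_estimate(p_up=0.3, levels=10, n_per_stage=2000)
# exact value: (r - 1) / (r**levels - 1) with r = 0.7/0.3, about 2.8e-4
```

Each one-level crossing has probability around 0.3-0.43, so every stage is easy to estimate, while the product recovers an event crude simulation would rarely see.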
Topics of interest
Rare event simulation.
Simulation of Internet protocols.
Application to the performance evaluation of 3G wireless networks.
My publications in the field (for a full and updated list, see my list of publications)
B. Tuffin, Simulation accélérée par les méthodes de Monte Carlo et Quasi-Monte Carlo : théorie et applications. PhD thesis, Université de Rennes 1, October 1997.
H. Cancela, G. Rubino and B. Tuffin, Fast Monte Carlo Methods for Evaluating Highly Dependable Markovian Systems. Second International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Salzburg, Austria, July 1996.
B. Tuffin, Bounded Normal Approximation in Highly Reliable Markovian Systems.
Journal of Applied Probability, Vol. 36, No. 4, 1999.
B. Tuffin and K.S. Trivedi. Implementation of Importance Splitting Techniques in the Stochastic Petri Net Package. In Haverkort, B.R., and Bohnenkamp,
H.C. and Smith, C.U. (eds), Computer performance evaluation: Modelling
tools and techniques; 11th International Conference; TOOLS 2000, Lecture
Notes in Computer Science, volume 1786, Springer Verlag, pages 216-229,
2000.
B. Tuffin. On Numerical Problems in Simulations of Highly Reliable Markovian Systems. Submitted (also available as IRISA research report PI 1191).
B. Tuffin and K.S. Trivedi. Importance Sampling for the Simulation of Stochastic
Petri nets and Fluid Stochastic Petri nets.
In Proceedings of High Performance
Computing (HPC), pages 228-235, Seattle, WA, USA, April 2001.
H. Cancela, G. Rubino and B. Tuffin. MTTF estimation using Importance Sampling
on Markov models. In Proceedings of the 5th International Conference
on Industrial Logistics, Okinawa, Japan, 2001.
S. Collas, B. Tuffin. Creation of a Dynamic Model Language and Application
of Monte Carlo Methods for the Reliability Analysis of Industrial Complex
Systems. In Proceedings of lambda mu-13, Lyon, March 2002.
H. Cancela, G. Rubino and B. Tuffin. MTTF estimation by Monte Carlo methods
using Markov models. Monte Carlo Methods and Applications: 8(4),
pages 312-341, 2002.