Monte Carlo Methods, where Math and Chance Meet

Can chance play a role in a mathematical calculation?

At first glance, it seems the answer is no. We usually think that a mathematical algorithm is deterministic, so that if you run the calculation again with the same input you should get the same result.

Yet there is a large class of algorithms that use random numbers to calculate their results. For that kind of algorithm, every time you repeat the calculation you get a different result. But what’s the usefulness of these algorithms?

The point is that in applied mathematics the goal is to solve concrete problems. If you can't find the exact theoretical solution, a good enough approximation may be acceptable. For this reason, if the next time you run the algorithm you obtain a different but still good enough result, that's OK (what "good enough" means exactly depends on the particular application).

Algorithms of this kind are called Monte Carlo methods, or Monte Carlo simulations, and they may be loosely defined as algorithms that make use of random number generators.

Approximate Pi through a Monte Carlo Simulation

Let’s see a classic example just to understand the usefulness and the main features of these methods: the calculation of an approximation of $\pi$ through a Monte Carlo simulation.

Suppose you know how to generate random numbers uniformly distributed in the $[0,1]$ interval (all high-level programming languages can do that).

If we draw a pair of values $(x,y)$, both uniformly distributed in $[0,1]$, we can interpret $(x,y)$ as a random point drawn inside the unit square with vertices $(0,0)$, $(0,1)$, $(1,1)$, $(1,0)$.

Generating $n$ such points $(x_1,y_1), \ldots, (x_n,y_n)$, we can check which of them fall inside the circle of unit radius centered at the origin: it is sufficient to check whether the distance from the origin, $\sqrt{x_i^2+y_i^2}$, is less than 1.

The unit square in which random points are drawn and the circular sector of unit radius.

Note: points are uniformly distributed inside the square so the probability of a random point falling inside the circle is equal to the ratio between the area of the circular sector and the area of the square.

The area of the square is 1, while the area of the circular sector is $\pi/4$ (one-fourth of the area of a unit circle), so the probability of a random point falling inside the circle is given by:

$$\displaystyle\frac{\text{Circle Sector Area}}{\text{Square Area}} = \frac{\pi}{4}$$

As a consequence, the ratio of the number of points falling inside the circle to the total number of points will tend toward this value:

$$\displaystyle\frac{\text{Number of points inside the circle}}{\text{Number of total points}} \longrightarrow\frac{\pi}{4}$$

But then we could:

    1. draw a lot of points inside the unit square
    2. count how many of them satisfy $\sqrt{x^2+y^2} < 1$, i.e. how many fall inside the unit circle
    3. calculate the ratio of the number of points inside the circle to the total number of points, and multiply the result by 4

obtaining in this way an approximation of $\pi$.
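The three steps above can be sketched in a few lines of Python (a minimal example of my own; the function name, the seed, and the choice of one million points are illustrative, not part of the original post):

```python
import math
import random

def estimate_pi(n, seed=None):
    """Estimate pi: draw n uniform points in the unit square and
    count how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if math.sqrt(x * x + y * y) < 1:  # point falls inside the circle
            inside += 1
    return 4 * inside / n  # the ratio tends to pi/4, so multiply by 4

print(estimate_pi(1_000_000, seed=42))  # a value close to 3.14
```

With a million points the estimate typically lands within a few thousandths of the true value, but a different seed gives a different (still good enough) result.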

Here is an animated GIF that clarifies what's going on.

You can easily build this simulation in Excel as well; here is an example file: MonteCarloPi.xlsx.

I have to stress that the purpose of this method is purely educational: far more efficient deterministic algorithms exist for calculating $\pi$ to any given precision.

Nevertheless, this basic example is very useful for understanding the logic behind Monte Carlo methods: the final result is random, but it gradually converges to the theoretical value as the number of simulations increases.

It’s possible to demonstrate that the estimation error in a Monte Carlo simulation goes as $1/\sqrt{n}$ where $n$ is the number of simulations. To halve the error you have to quadruple the number of simulations.
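You can see the $1/\sqrt{n}$ behavior empirically. The sketch below (my own illustration; the sample sizes and the 20-run averaging are arbitrary choices to tame the noise) estimates $\pi$ at $n$, $4n$, and $16n$ points and prints the mean absolute error, which should roughly halve at each step:

```python
import math
import random

def estimate_pi(n, rng):
    # count points of the unit square falling inside the quarter circle
    inside = sum(1 for _ in range(n)
                 if math.hypot(rng.random(), rng.random()) < 1)
    return 4 * inside / n

rng = random.Random(0)
mean_errors = {}
for n in (10_000, 40_000, 160_000):
    # average 20 independent runs so the 1/sqrt(n) trend shows through the noise
    errs = [abs(estimate_pi(n, rng) - math.pi) for _ in range(20)]
    mean_errors[n] = sum(errs) / len(errs)
    print(n, round(mean_errors[n], 4))
```

Each quadrupling of $n$ costs four times the work for only twice the accuracy, which is why Monte Carlo methods shine where no faster deterministic alternative exists.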

The Oldest Monte Carlo Simulation

Buffon's needle experiment, designed in the 1700s, could be considered the oldest Monte Carlo simulation.

Suppose you have a sheet on which parallel, equidistant lines are drawn. You also have many sticks, and you randomly throw them onto this sheet. Each stick can fall between two lines or intersect one or more of them (more than one only if the stick is long enough).

If the stick is shorter than the distance between the lines, you can demonstrate that the probability of a stick intersecting a line is given by:

$$\displaystyle P = \frac{2l}{\pi d}$$

where $l$ is the stick length and $d$ is the distance between the parallel lines. Thanks to the fact that the formula contains $\pi$, this experiment can be used to calculate approximations of $\pi$ in a way that is similar to the preceding one.

It's just a matter of throwing the sticks many times to estimate the probability $P$. The values of $l$ and $d$ are known, so you can invert the formula and obtain an estimate of $\pi$.
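This inversion can be sketched as follows (my own illustration: the stick length $l=0.8$, spacing $d=1$, and function name are arbitrary choices; note that sampling the stick's angle uniformly uses $\pi$ itself, so the simulation is illustrative rather than a "pure" way to discover $\pi$):

```python
import math
import random

def buffon_pi(throws, l=0.8, d=1.0, seed=None):
    """Estimate pi by simulating Buffon's needle: sticks of
    length l thrown on lines spaced d apart (requires l <= d)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(throws):
        center = rng.uniform(0, d / 2)       # stick center to nearest line
        theta = rng.uniform(0, math.pi / 2)  # stick angle relative to the lines
        if center <= (l / 2) * math.sin(theta):
            hits += 1                        # the stick crosses a line
    p = hits / throws        # estimated crossing probability P
    return 2 * l / (p * d)   # invert P = 2l / (pi d)

print(buffon_pi(1_000_000, seed=1))  # a value close to 3.14
```

A stick crosses a line exactly when the distance from its center to the nearest line is at most $(l/2)\sin\theta$, which is where the formula for $P$ comes from.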

The IT Revolution

Experiments like Buffon's needle had no real application, because obtaining results with an acceptable degree of approximation required an unreasonably large number of repetitions.

However, around the middle of the last century, the advent of computers drastically changed the scenario. The new technology made it possible to run simulations with sufficient speed to solve practical problems that could not be solved in other ways.

The pioneers of Monte Carlo methods were Stanislaw Ulam and John von Neumann. In 1946 they both worked on the Manhattan Project, aimed at building the atomic bomb, and they used Monte Carlo methods to carry out calculations related to neutron absorption that they couldn't solve with more conventional approaches.

Ulam had the idea while he was recovering from an illness. While playing solitaire, he asked himself what the chance was of successfully finishing it. The rules of solitaire made it difficult to calculate this probability.

He realized it was possible to simulate many different games on a computer, using random deck arrangements, and check how many of them could be finished. In this way you could empirically estimate the probability of winning.

Since the results were part of the secret plans for building the atomic bomb, the project needed a code name. Because chance played a fundamental role in the estimation method, the codename Monte Carlo was chosen, and this is why we still use this terminology today.

Since then, these methods have been used in many fields: weather forecasting, elementary particle physics, astrophysics, molecular chemistry, electronics, fluid dynamics, biology, computer graphics, artificial intelligence, finance, project evaluation… and many others!

Despite their many applications, all Monte Carlo methods fall roughly into three families: numerical integration, optimization, and generation of probability distributions.

In the next post we will see an example for each of these families of Monte Carlo methods.

Stay tuned!

EnricoDeg

I live in Verona, Italy and I teach mathematics and physics at high school trying to make students understand the beauty and usefulness of STEM subjects. Before being a teacher, I worked for 12 years in the financial industry dealing with risk management, stochastic models for derivative pricing, and IT banking applications. My interests include chess, go, guitar, volleyball, trekking, snowboarding and of course reading and writing about mathematics and physics!
