Can chance play a role in a mathematical calculation?

At first glance, it seems the answer is no. We usually think that a mathematical algorithm is deterministic, so that **if you run the calculation again with the same input you should get the same result.**

Yet there is a large class of algorithms that use random numbers to calculate their results. For that kind of algorithm, **every time you repeat the calculation you get a different result**. But what’s the usefulness of these algorithms?

The point is that in the field of applied mathematics the goal is to solve concrete problems. In these cases, **if you can’t find the theoretical solution, finding a good enough solution may be acceptable.** For this reason, if the next time you run the algorithm you obtain a different but still good enough result, that’s OK (what *good enough* means exactly depends on the particular application).

These kinds of algorithms are called **Monte Carlo methods or Monte Carlo simulations**, and they may be loosely defined as all those algorithms that make use of random number generators.

**Approximate Pi through a Monte Carlo Simulation**

Let’s see a classic example just to understand the usefulness and the main features of these methods: the calculation of an approximation of π through a Monte Carlo simulation.

Suppose you know how to generate **random numbers uniformly distributed in the interval [0, 1]** (all high-level programming languages can do that).

If we draw a pair of values (x, y), both uniformly distributed in [0, 1], we can interpret (x, y) as a **random point drawn inside the unit square** with vertices (0, 0), (1, 0), (1, 1), (0, 1).

Generating many such points, we can check which of them fall inside the circle of unit radius centered at the origin: it is sufficient to check whether the distance from the origin is less than 1, that is, whether x² + y² < 1.

**Note:** points are uniformly distributed inside the square so the probability of a random point falling inside the circle is equal to the ratio between the area of the circular sector and the area of the square.

The area of the square is 1, while the circular sector area is π/4 (one-fourth of the area of a circle of unit radius), so the probability of a random point falling inside the circle is given by:

P = π/4

As a consequence, the ratio between the number of points falling inside the circle and the total number of points will tend toward this value:

N(inside) / N(total) → π/4

But then we could:

- draw a lot of points inside the unit square
- count how many of them satisfy x² + y² < 1, namely how many fall inside the unit circle
- calculate the ratio between the number of points inside the circle over the total points, multiply the result by 4

obtaining in this way an approximation of π.
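The three steps above can be sketched in a few lines of Python (a minimal illustration; the function name `estimate_pi` and the seeded generator are my own choices, not from the original post):

```python
import random

def estimate_pi(n_points, seed=None):
    """Estimate pi by sampling random points in the unit square."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()  # uniform in [0, 1]
        if x * x + y * y < 1:              # point falls inside the quarter circle
            inside += 1
    # The fraction inside tends to pi/4, so multiply by 4.
    return 4 * inside / n_points

print(estimate_pi(1_000_000))
```

Running it several times gives slightly different values, all close to 3.14: exactly the “random but good enough” behavior described above.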

Here is an animated GIF that clarifies what’s going on.

You can easily run this simulation in Excel too; here is an example file: MonteCarloPi.xlsx.

I have to underline that the purpose of this method is only educational, because much more efficient deterministic algorithms exist that can calculate the value of π to any given precision.

Nevertheless, this basic example is very useful to understand the logic behind Monte Carlo methods: **the final result is random but gradually converges to the theoretical value as the number of simulations increases.**

It’s possible to demonstrate that the estimation error in a Monte Carlo simulation goes as 1/√N, where N is the number of simulations. To halve the error you have to quadruple the number of simulations.
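You can verify the 1/√N behavior empirically by measuring the root-mean-square error of the π estimator at two sample sizes (a quick sketch; the helper names and the choice of 200 trials are my own):

```python
import math
import random

def estimate_pi(n_points, rng):
    """One Monte Carlo estimate of pi from n_points random points."""
    inside = sum(1 for _ in range(n_points)
                 if rng.random() ** 2 + rng.random() ** 2 < 1)
    return 4 * inside / n_points

def rms_error(n_points, trials, rng):
    """Root-mean-square error of the estimator over repeated runs."""
    sq_errs = [(estimate_pi(n_points, rng) - math.pi) ** 2
               for _ in range(trials)]
    return math.sqrt(sum(sq_errs) / trials)

rng = random.Random(0)
e_small = rms_error(1_000, 200, rng)
e_large = rms_error(4_000, 200, rng)  # 4x the points: error should roughly halve
print(e_small / e_large)
```

The printed ratio comes out close to 2, as the 1/√N law predicts when the number of simulations is quadrupled.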

**The Oldest Monte Carlo Simulation**

**Buffon’s needle**, an experiment designed in the 1700s, could be considered the oldest Monte Carlo simulation.

Suppose you have a sheet on which parallel, equidistant lines are drawn. You also have many sticks, and you randomly throw them onto this sheet. Each stick can fall between two lines or intersect one of them (or more than one, if it is long enough).

If the stick is shorter than the distance between the lines, you can demonstrate that **the probability of a stick intersecting a line is given by:**

P = 2l / (πd)

where l is the stick length and d is the distance between the parallel lines. Thanks to the fact that the formula contains π, this experiment can be used to calculate approximations of π in a way that is similar to the preceding one.

It’s just a matter of throwing the sticks many times to estimate the probability P. The values of l and d are known, so you can invert the formula and obtain an estimate for π: π ≈ 2l / (Pd).
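A throw can be simulated by drawing the distance from the stick’s center to the nearest line and the stick’s angle. Here is a minimal sketch (the function name and parameter defaults are my own; note that sampling the angle uses π itself, so this is purely illustrative):

```python
import math
import random

def buffon_pi(n_throws, stick_len=1.0, line_gap=2.0, seed=0):
    """Estimate pi via Buffon's needle (requires stick_len <= line_gap)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        # Distance from the stick's center to the nearest line.
        x = rng.uniform(0, line_gap / 2)
        # Acute angle between the stick and the lines.
        theta = rng.uniform(0, math.pi / 2)
        # The stick crosses a line if its half-projection reaches it.
        if x <= (stick_len / 2) * math.sin(theta):
            hits += 1
    p = hits / n_throws                    # estimated crossing probability P
    return 2 * stick_len / (p * line_gap)  # invert P = 2l / (pi * d)

print(buffon_pi(1_000_000))
```

With a million simulated throws the estimate lands near 3.14, which is exactly why the historical experiment needed so many repetitions to be of any use.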

**The IT Revolution**

Experiments similar to this one of Buffon’s needle had no real application, because to obtain results with an acceptable degree of approximation, they required an unreasonably large number of repetitions.

However, around the middle of the last century, the advent of computers drastically changed the scenario. The new technology made it possible to **run simulations with sufficient speed to solve practical problems that could not be solved in other ways.**

The pioneers of Monte Carlo methods were Stanislaw Ulam and John von Neumann. In 1946 they both worked on the Manhattan Project, aimed at building the atomic bomb, and they used Monte Carlo methods to carry out calculations related to neutron absorption that they couldn’t solve with more conventional approaches.

Ulam had the idea while he was recovering from an illness. While playing solitaire, he asked himself what the chance was of successfully finishing a game. The rules of solitaire made it difficult to calculate this probability.

He realized it was possible to simulate many different games with a computer using random deck arrangements and check how many times the game of solitaire could be finished. This way **you could empirically calculate the probability of winning that game of solitaire.**

Because the results were part of the secret plans for building the atomic bomb, it was necessary to assign a code name to the project. Because chance played a fundamental role in the estimation method, the codename Monte Carlo was chosen, and this is why even today we use this terminology.

Since then, these methods have been used in many fields: weather forecasting, elementary particle physics, astrophysics, molecular chemistry, electronics, fluid dynamics, biology, computer graphics, artificial intelligence, finance, project evaluation… and many others!

Despite their many applications, all Monte Carlo methods fall roughly into three families: **numerical integration, optimization, generation of probability distributions.**

In the next post we will see an example for each of these families of Monte Carlo methods.

Stay tuned!
