Frequently Asked Questions in Quantitative Finance

# What are Low-Discrepancy Numbers?

Low-discrepancy sequences are sequences of numbers that cover a space without clustering and without gaps, in such a way that adding another number to the sequence also avoids clustering and gaps. They give the appearance of randomness yet are deterministic. They are used for numerically estimating integrals, often in high dimensions. The best-known sequences are due to Faure, Halton, Hammersley, Niederreiter and Sobol'.

## Example

You have an option that pays off the maximum of 20 exchange rates on a specified date. You know all the volatilities and correlations. How can you find the value of this contract? If we assume that each exchange rate follows a lognormal random walk, then this problem can be solved as a 20-dimensional integral. Such a high-dimensional integral must be evaluated by numerical quadrature, and an efficient way to do this is to use low-discrepancy sequences.
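A minimal sketch of such a valuation, using scipy's `qmc.Sobol` generator. All parameters (spot rates, volatilities, rate, maturity) are illustrative, and for simplicity the rates are taken to be uncorrelated; a correlated version would multiply the Gaussian draws by a Cholesky factor of the correlation matrix.

```python
import numpy as np
from scipy.stats import norm, qmc

# Value an option paying the maximum of n lognormal rates at time T,
# by mapping Sobol' points in the unit hypercube to Gaussian draws.
n, T, r = 20, 1.0, 0.05
S0 = np.full(n, 1.0)      # today's rates (hypothetical)
vol = np.full(n, 0.2)     # volatilities (hypothetical)

sampler = qmc.Sobol(d=n, scramble=True, seed=42)
u = sampler.random_base2(m=14)      # 2^14 low-discrepancy points in [0,1]^20
z = norm.ppf(u)                     # map to standard Gaussians

# Risk-neutral lognormal terminal rates, then discounted average payoff.
ST = S0 * np.exp((r - 0.5 * vol**2) * T + vol * np.sqrt(T) * z)
value = np.exp(-r * T) * ST.max(axis=1).mean()
```

Scrambling the Sobol' points randomizes the sequence while preserving its low discrepancy, which also allows error estimation by repeating the calculation with different scramble seeds.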

Some financial problems can be recast as integrations, sometimes in a large number of dimensions. For example, the value of a European option on lognormal random variables can be written as the present value of the risk-neutral expected payoff. The expected payoff is an integral of the product of the payoff function and the probability density function for the underlying(s) at expiration. If there are n underlyings then there is typically an n-dimensional integral to be calculated. If the number of dimensions is small then there are simple efficient algorithms for performing this calculation. In one dimension, for example, divide the domain of integration up into uniform intervals and use the trapezium rule. This means evaluating the integrand at a number of points, and as the number of function evaluations increases so the accuracy of the method improves.
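The one-dimensional recipe just described can be sketched in a few lines; the test integrand here is arbitrary, chosen because its integral is known exactly.

```python
import numpy as np

# Trapezium rule in one dimension: evaluate the integrand at uniformly
# spaced points; the error shrinks like O(h^2) as the spacing h decreases.
def trapezium(f, a, b, n):
    x = np.linspace(a, b, n + 1)   # n uniform intervals, n + 1 points
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
approx = trapezium(lambda x: x**2, 0.0, 1.0, 1000)
```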

Unfortunately, in higher dimensions evaluating the function at uniformly spaced points becomes computationally inefficient.

If the domain of integration is a unit hypercube (and, of course, it can always be transformed into one) then the value of the integral is the same as the average of the function over that domain:

$$\int f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i),$$

where the $x_i$ are uniformly distributed. This suggests that an alternative method of numerical evaluation of the integral is to select the points in the hypercube from a uniform random distribution and then compute their average. If $N$ function evaluations are performed then the method converges like $O(N^{-1/2})$. This is the Monte Carlo method of numerical integration. Although very simple to perform it suffers from problems associated with the inevitable clustering and gapping that will happen with randomly chosen numbers.
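The averaging idea in code, on a test integrand whose integral is known exactly (the sum of the coordinates over the five-dimensional unit hypercube integrates to 5/2):

```python
import numpy as np

# Monte Carlo integration over the unit hypercube: the integral equals the
# expected value of f at a uniform point, so the sample mean over N
# pseudo-random points estimates it with error O(N^{-1/2}).
rng = np.random.default_rng(0)

def mc_integrate(f, dim, n):
    x = rng.random((n, dim))       # pseudo-random points in [0,1]^dim
    return f(x).mean()

# Example: integral of sum(x_i) over [0,1]^5 is 5/2.
est = mc_integrate(lambda x: x.sum(axis=1), dim=5, n=100_000)
```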

Clearly we would like to use a sequence of numbers that do not suffer from the gapping/clustering problem. This is where low-discrepancy sequences come in.

Low-discrepancy numbers exploit the Koksma-Hlawka inequality, which puts a bound on the error in the above averaging method for an arbitrary set of sampling points $x_i$. The Koksma-Hlawka inequality says that if $f(x)$ is of bounded variation $V(f)$ then

$$\left|\frac{1}{N}\sum_{i=1}^{N} f(x_i) - \int f(x)\,dx\right| \le V(f)\,D_N^*(x_1,\dots,x_N),$$

where $D_N^*(x_1,\dots,x_N)$ is the discrepancy of the sequence. (This discrepancy measures the deviation from a uniform distribution. It is calculated by looking at how many of the sampling points can be found in subintervals compared with how many there would be for a uniform distribution, and then taking the worst case.)

Rather than the details, the important point concerning this result is that the bound is a product of one term specific to the function (its variation, which is independent of the set of sampling points) and a term specific to the set of sampling points (and independent of the function being sampled). So once you have found a set of points that is good, of low discrepancy, then it will work for all integrands of bounded variation.
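The point-set term can be measured numerically. The worst-case star discrepancy $D_N^*$ is expensive to compute exactly, so this sketch uses scipy's L2-star variant as a proxy to compare a pseudo-random point set with a Sobol' set of the same size:

```python
import numpy as np
from scipy.stats import qmc

# Discrepancy of pseudo-random vs Sobol' points in the unit square.
# A lower value means the points are spread more uniformly.
rng = np.random.default_rng(1)
n, d = 256, 2

random_pts = rng.random((n, d))
sobol_pts = qmc.Sobol(d=d, scramble=False).random(n)

d_random = qmc.discrepancy(random_pts, method='L2-star')
d_sobol = qmc.discrepancy(sobol_pts, method='L2-star')
```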

The popular low-discrepancy sequences mentioned above have

$$D_N^* \le C\,\frac{(\ln N)^d}{N},$$

where $C$ is a constant and $d$ is the number of dimensions. Therefore convergence of this quasi-Monte Carlo numerical quadrature method is faster than genuinely random Monte Carlo.
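The faster convergence can be seen on a smooth test integrand with a known answer (the product of $2x_i$ over the five-dimensional unit hypercube integrates to exactly 1), comparing the error of a pseudo-random and a Sobol' estimate with the same number of points:

```python
import numpy as np
from scipy.stats import qmc

# Quasi- vs pseudo-Monte Carlo on a test integrand over [0,1]^5 whose
# integral is exactly 1, so the absolute error is directly observable.
def f(x):
    return np.prod(2.0 * x, axis=1)

n, d = 4096, 5
pseudo = np.random.default_rng(7).random((n, d))
sobol = qmc.Sobol(d=d, scramble=True, seed=7).random(n)

err_pseudo = abs(f(pseudo).mean() - 1.0)
err_sobol = abs(f(sobol).mean() - 1.0)
```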

Another advantage of these low-discrepancy sequences is that if you collapse the points onto a lower dimension (for example, let all of the points in a two-dimensional plot fall down onto the horizontal axis) they will not be repeated, they will not fall on top of one another. This means that if there is any particularly strong dependence on one of the variables over the others then the method will still give an accurate answer because it will distribute points nicely over lower dimensions.
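The projection property is easy to check with a self-contained Halton sequence, in which dimension $k$ uses the van der Corput sequence in the $k$-th prime base. Collapsing the two-dimensional points onto one axis never stacks two points on top of each other:

```python
# Radical-inverse (van der Corput) sequence: reflect the base-b digits
# of the integer i about the radix point to get a number in (0, 1).
def van_der_corput(i, base):
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

# Halton sequence: one van der Corput sequence per dimension,
# using coprime bases (here the primes 2 and 3).
def halton(n, bases=(2, 3)):
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n + 1)]

pts = halton(100)
xs = [p[0] for p in pts]   # projection onto the first coordinate: no repeats
```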

Unfortunately, achieving a good implementation of some low-discrepancy sequences remains tricky. Some practitioners prefer to buy off-the-shelf software for generating quasi-random numbers.

## Intuition

Suppose you want to attach a poster to the wall, just along its upper edge, and I am going to give you some pieces of blu-tack or poster putty you can use. Where would you attach the putty? The first piece I give you, you would probably put in the middle. If I gave you another piece you might add it near one end of the top edge. If I give you another piece you might add it near the other end. The fourth piece would be used to start filling gaps between the other pieces. As I give you more and more pieces you put them in the spaces and the poster is held firmer and firmer. The positions of the bits of blu-tack are rather like a low-discrepancy sequence. Note that I don't tell you how much blu-tack you can use, nor can you remove old bits and place them in new places. If you were allowed to put the putty anywhere on the poster, not just along the top edge, then that would be like a two-dimensional low-discrepancy sequence. (There's also another analogy involving a row of urinals in a gentlemen's convenience.)

## References and Further Reading

Barrett, JW, Moore, G & Wilmott, P 1992 Inelegant efficiency. Risk magazine 5 (9) 82-84

Cheyette, O 1990 Pricing options on multiple assets. Adv. Fut. Opt. Res. 4 68-91

Faure, H 1969 Résultat voisin d'un théorème de Landau sur le nombre de points d'un réseau dans une hypersphère. C. R. Acad. Sci. Paris Sér. A 269 383-386

Halton, JH 1960 On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numerische Mathematik 2 84-90

Hammersley, JM & Handscomb, DC 1964 Monte Carlo Methods. Methuen, London

Haselgrove, CB 1961 A method for numerical integration. Mathematics of Computation 15 323-337

Jackel, P 2002 Monte Carlo Methods in Finance. John Wiley & Sons Ltd

Niederreiter, H 1992 Random Number Generation and Quasi-Monte Carlo Methods. SIAM

Ninomiya, S & Tezuka, S 1996 Toward real-time pricing of complex financial derivatives. Applied Mathematical Finance 3 1-20

Paskov, SH 1996 New methodologies for valuing derivatives. In Mathematics of Derivative Securities (eds Pliska, SR & Dempster, M)

Paskov, SH & Traub, JF 1995 Faster valuation of financial derivatives. Journal of Portfolio Management Fall 113-120

Press, WH, Flannery, BP, Teukolsky, SA & Vetterling, WT 1992 Numerical Recipes in C. Cambridge University Press

Sloan, IH & Walsh, L 1990 A computer search of rank two lattice rules for multidimensional quadrature. Mathematics of Computation 54 281-302

Sobol', IM 1967 On the distribution of points in cube and the approximate evaluation of integrals. USSR Comp. Maths and Math. Phys. 7 86-112

Traub, JF & Wozniakowski, H 1994 Breaking intractability. Scientific American January 102-107

Wilmott, P 2006 Paul Wilmott on Quantitative Finance, second edition. John Wiley & Sons Ltd