There's some confusion about subrandom numbers. They are sometimes called pseudo-random numbers or quasi-random numbers, but both pseudorandom and quasirandom numbers are trying to be random; subrandom numbers are not. Subrandom numbers have the same single-point statistics (mean, standard deviation, skewness, etc.) as random numbers but have distinctly non-random correlation.

Often it is best to run Monte Carlo simulations with numbers that aren't fully random. That is, the numbers need to have the same single-point statistics (pdf) as random numbers but be anti-correlated, in order to cover the whole domain as quickly as possible. Taking real random numbers and applying an anti-correlation function doesn't work well, because that tends to destroy the single-point statistics and doesn't cover the whole domain particularly well. Numerical Recipes discusses some methods for producing subrandom numbers, but these are based on binary-tree-type systems. A more detailed description is here.

There is a much simpler alternative. Select an irrational number *α* and construct the sequence

s(i) = s(i−1) + *α* (mod 1)

for integer *i*.

i.e. construct the next number from the last one by adding an irrational number and taking the result modulo 1. This algorithm is not new: it was considered and rejected by Knuth as a way of generating random numbers, which is not surprising, as it does not generate random numbers.
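A minimal sketch of the recurrence in Python (the function and variable names are my own):

```python
import math

def subrandom(alpha, n, seed=0.0):
    """Generate n terms of the additive recurrence s(i) = (s(i-1) + alpha) mod 1."""
    s = seed
    out = []
    for _ in range(n):
        s = (s + alpha) % 1.0
        out.append(s)
    return out

# Example with alpha = 2 - phi, the golden-ratio value discussed below
alpha = 2.0 - (1.0 + math.sqrt(5.0)) / 2.0   # ~0.3819660...
points = subrandom(alpha, 5)
```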

The algorithm generates a sequence of numbers on domain [0,1) that never repeats until round-off error gets in the way. Even then, by truncating the number correctly, it can cover every possibility between 0 and 1 once before repeating. This mimics the behaviour of the uniform distribution found in standard random number generators.

Although all irrational values of *α* work, some cover the domain much better than others. For instance, an irrational number near zero will take a long time to cover the whole domain once. An irrational number near a half will come back to near its starting point in just two iterations. The best irrational numbers are those that create the largest minimum gap on the real number line for all values of *i*.
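One way to score a candidate *α* numerically is the smallest gap between the first *N* points, treating [0, 1) as a circle. This is a sketch of my own and may not be exactly the metric plotted in Figure 1:

```python
import math

def min_gap(alpha, n):
    """Smallest gap between the first n points of the sequence, on the circle [0, 1)."""
    pts = sorted((i * alpha) % 1.0 for i in range(1, n + 1))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + 1.0 - pts[-1])   # wrap-around gap
    return min(gaps)

golden = 2.0 - (1.0 + math.sqrt(5.0)) / 2.0
# A rational value such as 1/4 repeats after four points, so its minimum gap
# collapses to zero; the golden-ratio value keeps every point well separated.
```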

Results are plotted below.

Figure 1. For clarity, this graph only shows the local minima of this function.

The algorithm is symmetric about *α* = 0.5 and gives poor results for *α* less than 0.25. High values on this graph occur at the rational numbers with small denominators (e.g. 1/3, 2/5). These are bad values. The best values are those with the lowest results. The downward spike near *α* = 0.382 gives the best value, which comes from *α* = 2 − Φ for golden ratio Φ. The second-best value is near *α* = 0.414, which comes from *α* = sqrt(2) − 1. Both values can be derived analytically, as the limits of sequences that avoid the rational numbers with small denominators by as large a margin as possible.

What use are subrandom numbers?

Subrandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly. They have an advantage over purely deterministic methods in that deterministic methods only give high accuracy when the number of datapoints is fixed in advance, whereas with subrandom numbers the accuracy improves continually as more datapoints are added.
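As a toy illustration (my own, estimating the integral of x² over [0, 1), which is exactly 1/3): the accuracy keeps improving as points are added, with no pre-set sample count.

```python
import math

def qmc_mean(f, n, alpha):
    """Average f over the first n subrandom points: an estimate of f's integral on [0, 1)."""
    return sum(f((i * alpha) % 1.0) for i in range(1, n + 1)) / n

alpha = 2.0 - (1.0 + math.sqrt(5.0)) / 2.0
estimate = qmc_mean(lambda x: x * x, 10_000, alpha)
error = abs(estimate - 1.0 / 3.0)   # typically far below the ~0.003 of random sampling at n = 10,000
```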

The two applications where subrandom numbers ought to really excel are in numerically finding the characteristic function of a pdf, and in finding the derivative of a deterministic function with a small amount of noise. They should excel in the first because subrandom numbers allow higher-order moments to be calculated to high accuracy very quickly. They should excel in the second because the intervals between adjacent points stay large (whereas intervals between random numbers don't), and because subrandom numbers give continual improvement as the number of points increases, until the error from the small amount of noise starts to take over, allowing the user to find the result with the minimum error.

Other very useful applications that don't involve sorting would be in finding the mean, standard deviation, skewness, kurtosis and higher order moments of a statistical distribution, and in finding the integral and global minima and maxima of difficult deterministic functions. Subrandom numbers should also be used for providing starting points for deterministic algorithms that only work locally, such as Newton-Raphson iteration.

Subrandom numbers can also be combined with sorting and search algorithms. A binary-tree "Quicksort"-style algorithm ought to work exceptionally well, because subrandom numbers flatten the tree far better than random numbers do, and the flatter the tree, the faster the sorting. With a search algorithm, subrandom numbers can be used to find the mode, median, confidence intervals and cumulative distribution of a statistical distribution, and all local minima and all solutions of deterministic functions.
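A quick way to see the flattening effect (again a sketch of my own): drop N points into N equal-width buckets and look at the fullest bucket. Subrandom points keep the maximum load small, which is exactly what keeps a tree built from them shallow.

```python
import math

def max_bucket_load(points, n_buckets):
    """Histogram points from [0, 1) into equal-width buckets; return the fullest count."""
    counts = [0] * n_buckets
    for x in points:
        counts[int(x * n_buckets)] += 1
    return max(counts)

alpha = 2.0 - (1.0 + math.sqrt(5.0)) / 2.0
pts = [(i * alpha) % 1.0 for i in range(1, 1001)]
load = max_bucket_load(pts, 1000)   # at most 3 here; 1000 random points typically give 5 or 6
```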

What about higher dimensions?

You don't want to use 1-D subrandom numbers for n-D problems with n greater than 1 because the correlation will force number pairs into a very limited part of the total domain. Clearly a set of n mutually irrational numbers such as sqrt(2), sqrt(3), sqrt(5), sqrt(7), sqrt(11) etc. will suffice, and this strategy isn't too bad, but it won't be optimal. Unlike the case of 1-D, I haven't found any analytic expression for the best possible set of irrational numbers in n-D. So let's look at numerical results for 2-D.
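A sketch of the n-D version (my own code), using fractional parts of square roots of primes as the mutually irrational set described above:

```python
import math

def subrandom_nd(alphas, n):
    """n-D subrandom sequence: the i-th point is ((i * a) mod 1 for each a in alphas)."""
    return [tuple((i * a) % 1.0 for a in alphas) for i in range(1, n + 1)]

# Fractional parts of sqrt(2), sqrt(3), sqrt(5) for a 3-D problem
alphas = [math.sqrt(p) % 1.0 for p in (2, 3, 5)]
pts = subrandom_nd(alphas, 10_000)
```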

When I try the irrational numbers 0.5545497... and 0.308517..., the distribution of points over the square domain from 0 to 1 for 20, 200, 1000 and 7000 points looks like the following:

Figure 2.

The uniformity of distribution exceeds what is possible with the methods described in "Numerical Recipes". Perhaps this is as good a point as any to mention that the method of generating subrandom numbers described here is fully vectorisable to arbitrarily large vector length.

Figure 3.

Figure 3 is the two-dimensional equivalent of Figure 1. It's in perspective, looking down. The corners, clockwise from upper left, are (0, 0), (0, 0.5), (0.5, 0.5), (0.5, 0). The point (1/3, 1/3) shows up clearly. The peaks are the rational numbers with small denominators, and the troughs are those numbers that give the most uniform spacing when used as a pair of irrational numbers.

Figure 4. A detail from Figure 3, looking straight down from above, covering the range (0.45-0.47), (0.31-0.33). A higher result is better.

The data in Figure 4 is not nearly as detailed as in Figure 1, and there is no analytical solution to guide me here. A good region in which to search for pairs of irrational numbers seems to be near (0.4605, 0.3215).

There are several different ways to numerically construct good sets of points in n-D. One that I tried early and don't recommend is to look at the distances between points in n-D in order to maximise the minimum distance. Because each point is just the previous point shifted by the same vector, the separation between points *i* and *j* depends only on *i* − *j*, so we only need to check the last point against zero. The problem with this approach is that although it gives a good distribution of points in n-D, it can give very poor performance for individual 1-D values.
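The shortcut can be sketched as follows (my own code): because point *j* minus point *k* is just point *j* − *k*, every pairwise distance already appears as the distance from some single point to the origin, reducing the check from O(N²) pairs to O(N) points.

```python
def min_pair_distance(alphas, n):
    """Minimum toroidal distance between any two of the first n points, in O(n):
    the difference between points j and k equals point (j - k), so it suffices
    to measure each point's distance from the origin."""
    best = float("inf")
    for i in range(1, n):
        d2 = 0.0
        for a in alphas:
            c = (i * a) % 1.0
            c = min(c, 1.0 - c)   # wrap-around distance in this coordinate
            d2 += c * c
        best = min(best, d2)
    return best ** 0.5
```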

Another method I tried was to use the stored 1-D results from Figure 1, testing every single α value and pair of α values. This gave very poor performance in higher dimensions. As a result, I can't firmly recommend any values in n-D for n > 1.