# How to generate higher order Sobol points in Matlab and some numerical examples

Higher order digital nets and sequences are quasi-Monte Carlo point sets $\{\boldsymbol{x}_n\}_{n=0}^{N-1},$ where $N=b^m$ for digital nets and $N=\infty$ for digital sequences (for more background information on this topic see Chapters 1-4 and Chapter 8 in the book; several sample chapters of the book are available for download), which satisfy the following property:

$\displaystyle \left|\int_{[0,1]^s} f(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x} - \frac{1}{b^m} \sum_{n=0}^{b^m-1} f(\boldsymbol{x}_n) \right| \le C_{s,\alpha} \frac{(\log N)^{s \alpha}}{N^\alpha}, \hspace{2cm} (1)$

where $\alpha \ge 1$ is the smoothness of the integrand ${}f$ and $C_{s,\alpha} > 0$ is a constant which depends only on ${}s$ and ${}\alpha$ (but not on ${}N$). For instance, if ${}f$ has square integrable partial mixed derivatives up to order ${}2$ in each variable, then we obtain a convergence rate of $\mathcal{O}(N^{-2+\varepsilon}),$ where ${}N$ is the number of quadrature points used in the approximation and $\varepsilon > 0$ can be chosen arbitrarily small. If ${}f$ has ${}3$ such derivatives, then we obtain a convergence rate of $\mathcal{O}(N^{-3+\varepsilon}),$ and so on.

This method was introduced in the papers:

The first paper only deals with periodic functions, whereas the second paper also includes nonperiodic functions.

For a heuristic explanation of how higher order digital nets and sequences work, see the paper:

• J. Dick, On quasi-Monte Carlo rules achieving higher order convergence. In: Proceedings of the MCQMC’08 conference, Montreal, Canada, P. L’Ecuyer and A. Owen (eds.), pp. 73–96, 2009. doi: 10.1007/978-3-642-04107-5_5

In this entry we provide a Matlab program which generates point sets which satisfy the property (1).

Before we do so, we give some further explanations. The construction of the point set is based on Sobol sequences (a particular family of low-discrepancy sequences) and requires several parameters. First, notice that for a Sobol sequence the base ${}b$ is always ${}2.$ The first parameter which needs to be specified in the program is the natural number ${}m$: the number of points will be ${}2^m.$ The second parameter is the dimension ${}s$ (a natural number). The third parameter is the interlacing factor ${}d.$ Again, ${}d$ is a natural number and should be chosen according to the smoothness of the integrand: if the integrand ${}f$ has absolutely integrable partial mixed derivatives up to order $\alpha,$ then choose $d=\alpha.$ Of course, $\alpha$ might not be known. In this case the following happens:

• If $d > \alpha,$ then the method yields a convergence rate of order $\mathcal{O}(N^{-\alpha+\varepsilon}).$
• If $d \le \alpha,$ then the method yields a convergence rate of order $\mathcal{O}(N^{-d+\varepsilon}).$

Notice that choosing ${}d$ too large can be disadvantageous: high numerical precision is required and the constants grow rapidly with ${}d,$ which can outweigh the improved rate of convergence for reasonable sample sizes.
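The interlacing itself is easy to describe: take $d$ consecutive coordinates of the underlying point, write each in base $2$, and merge their digit streams by taking one digit from each coordinate in turn. The following Python helper is purely illustrative (the function name and representation are our own, not part of the Matlab program below):

```python
def interlace_digits(streams, m):
    """Interlace d binary digit streams (each of length m, most significant
    digit first) into one number in [0,1) with m*d binary digits.

    Illustrative sketch only; the name and list-of-digits representation
    are assumptions made for this example."""
    d = len(streams)
    # digit i of stream k becomes digit i*d + k of the interlaced number
    digits = [streams[k][i] for i in range(m) for k in range(d)]
    return sum(b * 2.0 ** (-(pos + 1)) for pos, b in enumerate(digits))

# Interlacing 0.101 and 0.011 (both binary) gives 0.100111 (binary):
# interlace_digits([[1, 0, 1], [0, 1, 1]], 3)  # -> 0.609375
```

With $d=1$ the interlacing does nothing, which is why the construction below reduces to a plain Sobol sequence in that case.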

## Matlab function for generating higher order Sobol points

function [X] = HOSobol(m,s,d)
% Higher order Sobol sequence
% Create a higher order Sobol sequence.
% 2^m number of points
% s   dimension of final point set
% d   interlacing factor
% X   output matrix of higher order Sobol points

N = pow2(m);            % Number of points
P = sobolset(d*s);      % Get Sobol sequence
sobolpoints = net(P,N); % Get net from Sobol sequence with N points

% Interlace the binary digits of the points

W = sobolpoints * N;    % Scale so that each coordinate is an m-bit integer
Z = transpose(W);
Y = zeros(s,N);
for j = 1:s
  for i = 1:m
    for k = 1:d
      Y(j,:) = bitset(Y(j,:), (m*d+1) - k - (i-1)*d, ...
                      bitget(Z((j-1)*d+k,:), (m+1) - i));
    end;
  end;
end;
Y = Y * pow2(-m*d);

X = transpose(Y); % X is the matrix of higher order Sobol points,
                  % where the number of columns equals the dimension
                  % and the number of rows equals the number of points

end
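For readers without Matlab, the same construction can be sketched in Python with `scipy.stats.qmc.Sobol` in place of `sobolset`. This is a sketch, not a drop-in replacement: the function name `ho_sobol` is ours, and SciPy's Sobol direction numbers need not coincide with Matlab's, so the generated points may differ from the Matlab output.

```python
import numpy as np
from scipy.stats import qmc

def ho_sobol(m, s, d):
    """Sketch of the HOSobol construction (function name is ours).

    Takes the first 2^m points of a (d*s)-dimensional unscrambled Sobol
    sequence and interlaces the binary digits of every d consecutive
    coordinates, giving 2^m points in [0,1)^s with m*d digits each."""
    n = 2 ** m
    sob = qmc.Sobol(d * s, scramble=False).random_base2(m)  # shape (n, d*s)
    Z = np.round(sob * n).astype(np.int64)  # each coordinate as an m-bit integer
    X = np.empty((n, s))
    for j in range(s):
        y = np.zeros(n, dtype=np.int64)
        for i in range(m):       # digit position, most significant first
            for k in range(d):   # take one digit from each of the d components
                bit = (Z[:, j * d + k] >> (m - 1 - i)) & 1
                y |= bit << (m * d - 1 - (i * d + k))
        X[:, j] = y * 2.0 ** (-m * d)
    return X
```

For $d=1$ the interlacing step copies the digits unchanged, so `ho_sobol(m, s, 1)` returns exactly the first $2^m$ points of SciPy's Sobol sequence.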

Note Choosing $d=1$ in the program HOSobol simply yields the classical Sobol sequence. $\Box$

Note The precision required for the optimal performance of this point set is $m*d$ bits. Hence if we have $52$ bits available, then we require that $m*d \le 52.$ $\Box$
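The constraint in this note is easy to check programmatically. A small hypothetical helper (the name `max_interlaced_m` is ours):

```python
def max_interlaced_m(d, bits=52):
    """Largest m such that the m*d interlaced digits fit into the given
    number of bits (52 usable bits, as in the note above)."""
    return bits // d

# e.g. with d = 4 the largest admissible m is 52 // 4 = 13
```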

## How to improve higher order digital nets and sequences

Simple examples show that the method achieves the theoretically predicted convergence rate (see below).

The method runs into problems when the dimension increases (see also the curse of dimensionality). One way to improve the implementation above is to replace the Sobol points with Niederreiter-Xing sequences (precomputed Niederreiter-Xing points are available online). For these points the theoretical dependence on the dimension is better.

Improving higher order digital nets and sequences is an ongoing area of research, and a number of papers have been published on this topic recently.

In the following we present some very simple numerical examples.

## Example 1: Dimension ${}1$, Function ${}x$

• Matlab program

mmax=13;
dmax=4;

for d=1:dmax
  z=zeros(1,mmax);
  for m=1:mmax
    x=HOSobol(m,1,d);
    z(m)=abs(2^(-m)*sum(x) - 1/2); % the exact value of the integral is 1/2
  end;
  loglog(2.^(1:mmax),z);
  hold on;
end;

for d=1:dmax
  loglog(2.^(1:mmax),2.^(-d*(1:mmax)),'green');
end;
hold off;

• This program generates the following figure:

Here, the blue lines show the integration error and the green lines are $N^{-d}$ for $d=1,2,3,4.$ One can see that in each case we obtain a convergence rate of $N^{-d},$ as predicted by the theory.

## Example 2: Dimension ${}1$, Function $\sin(2\pi x)$

• Matlab program

mmin=1;
mmax=13;
d=2;
dmin=d;
dmax=d;

for d=dmin:dmax
  z=zeros(1,mmax);
  for m=mmin:mmax
    x=HOSobol(m,1,d);
    z(m)=abs(2^(-m)*sum(sin(2*pi*x))); % the exact value of the integral is 0
  end;
  loglog(2.^(mmin:mmax),z(mmin:mmax));
  hold on;
end;

for d=dmin:dmax
  loglog(2.^(mmin:mmax),2.^(-d*(mmin:mmax)),'green');
end;
hold off;

• Numerical results for $d=1$; comparison of the error with the line $N^{-1}.$ (The Sobol points have some symmetry which yields the small integration error in this case.)

• Numerical results for $d=2$; Comparison of error with line $N^{-2};$

• Numerical results for $d= 3$; Comparison of error with line $N^{-3};$

• Numerical results for $d=4$; Comparison of error with line $N^{-4};$

• In each case one can approximately see the convergence of the error of order $N^{-d}.$

## Example 3: Dimension ${}1$, Function $\frac{5}{2}x^{3/2}$

This function is interesting since its derivatives are absolutely integrable only up to order approximately $2.5,$ and hence we can expect a convergence rate of at most $N^{-2.5}.$ This is also observed in the numerical experiment: increasing ${}d$ beyond ${}3$ does not give any advantage in this case.
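The exact value used in the error computation below is $\int_0^1 \frac{5}{2}x^{3/2}\,\mathrm{d}x = 1.$ As a quick self-contained sanity check (in Python with SciPy's Sobol sequence, i.e. the plain $d=1$ case, not the Matlab program above), one can verify that the error of the quadrature rule indeed shrinks as $N$ grows:

```python
import numpy as np
from scipy.stats import qmc

def sobol_error(m):
    """|QMC estimate - 1| for f(x) = (5/2) x^(3/2) using 2^m plain
    (unscrambled) Sobol points, i.e. the d = 1 case."""
    x = qmc.Sobol(1, scramble=False).random_base2(m).ravel()
    return abs(np.mean(2.5 * x ** 1.5) - 1.0)
```

For instance, `sobol_error(14)` is orders of magnitude smaller than `sobol_error(4)`; with interlacing factor $d \ge 3$ the decay is faster still, up to the $N^{-2.5}$ barrier set by the smoothness of the integrand.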

• Matlab program

s=1;     % Dimension
mmin=1;  % Minimum number of points is 2^(mmin)
mmax=17; % Maximum number of points is 2^(mmax)

dmin=1;  % Minimum d
dmax=3;  % Maximum d

for d=dmin:dmax
  z=zeros(1,mmax);
  for m=mmin:mmax
    x=HOSobol(m,s,d);
    z(m)=abs(2^(-m)*(5/2)*sum(x.^(3/2))-1); % the exact value of the integral is 1
  end;
  loglog(2.^(mmin:mmax),z(mmin:mmax));
  hold on;
end;

for d=dmin:dmax
  loglog(2.^(mmin:mmax),2.^(-d*(mmin:mmax)),'green');
end;
hold off;

• Numerical results for $d=1,2,3$;

For $d=3$ we obtain a convergence rate of approximately $N^{-2.5}.$

• Numerical results for $d=3,4,5$;

The convergence rate does not increase by increasing ${}d$ beyond ${}3.$ We still only get a convergence rate of approximately $N^{-2.5}$ in this case.

## Example 4: Dimension ${}2$, Function $5x^{3/2} y$

• Matlab program

s=2;     % Dimension
mmin=1;  % Minimum number of points is 2^(mmin)
mmax=17; % Maximum number of points is 2^(mmax)

dmin=1;  % Minimum d
dmax=3;  % Maximum d

for d=dmin:dmax
  z=zeros(1,mmax);
  for m=mmin:mmax
    x=HOSobol(m,s,d);
    z(m)=abs(2^(-m)*5*sum(x(:,1).^(3/2).*x(:,2))-1); % the exact value of the integral is 1
  end;
  loglog(2.^(mmin:mmax),z(mmin:mmax));
  hold on;
end;

for d=dmin:dmax
  loglog(2.^(mmin:mmax),2.^(-d*(mmin:mmax)),'green');
end;
hold off;

• Numerical results for $d=1,2,3$;

Again we obtain a convergence rate of at most $N^{-5/2},$ as predicted by the theory.

## Example 5: Dimension ${}5$, Function $32 x_1x_2x_3x_4x_5$

• Matlab program

s=5;     % Dimension
mmin=1;  % Minimum number of points is 2^(mmin)
mmax=23; % Maximum number of points is 2^(mmax)

dmin=2;  % Minimum d
dmax=2;  % Maximum d

for d=dmin:dmax
  z=zeros(1,mmax);
  for m=mmin:mmax
    x=HOSobol(m,s,d);
    z(m)=abs(2^(-m)*sum(32*x(:,1).*x(:,2).*x(:,3).*x(:,4).*x(:,5))-1); % the exact value of the integral is 1
  end;
  loglog(2.^(mmin:mmax),z(mmin:mmax));
  hold on;
end;

for d=dmin:dmax
  loglog(2.^(mmin:mmax),2.^(-d*(mmin:mmax)),'green');
end;
hold off;

• Numerical results for $d=2$;

We obtain the expected rate of convergence of $N^{-2}$ (notice that even though the function is infinitely smooth, we have chosen $d=2$ and hence the rate of convergence can at most only be $N^{-2}$).

## Plots of higher order Sobol points in dimension ${}2$

The following figures show some plots of higher order Sobol sequences in dimension ${}2.$

In Matlab, these plots can be generated for instance using the commands:

m=10; % Change the parameter m here
s=2;  % Change the parameter s here
d=3;  % Change the parameter d here
X = HOSobol(m,s,d);
plot(X(:,1),X(:,2),'.');

• $m=10, s=2,d=1$
• $m=10, s=2,d=2$
• $m=10, s=2,d=3$
• $m=10, s=2,d=4$
• $m=10, s=2,d=5$
• $m=5, s=2, d=2$
• $m=6, s=2, d=2$
• $m=7, s=2, d=2$
• $m=8, s=2, d=2$
• $m=9, s=2, d=2$
• $m=11, s=2, d=2$
• $m=12, s=2, d=2$
• $m=13, s=2, d=2$