Higher order scrambling

Recently I uploaded the paper

This paper deals with a generalization of Owen's scrambling algorithm which improves the convergence rate of the root mean square error for smooth integrands. The bound on the root mean square error is best possible (apart from the power of the \log N factor), and this can also be observed in some simple numerical examples shown in the paper (note that the figures in the paper show the standard deviation, i.e. the root mean square error, and not the variance of the estimator). In this post you can also find Matlab programs which generate the quadrature points introduced in the paper and a program which reproduces the numerical results shown there.

The approach used in this paper is similar to the one used in

  1. J. Dick, Explicit constructions of quasi-Monte Carlo rules for the numerical integration of high dimensional periodic functions. SIAM J. Numer. Anal., 45, 2141–2176, 2007.
  2. J. Dick, Walsh spaces containing smooth functions and quasi-Monte Carlo rules of arbitrary high order. SIAM J. Numer. Anal., 46, 1519–1553, 2008.

There are some differences though. The concept of higher order digital (t,\alpha,\beta, n \times m)-nets, introduced in 1. and 2., cannot be used directly. We shall explain the reason for this below.

The main result

Let f:[0,1]^s\to\mathbb{R} and \alpha \ge 1. We define a norm

\displaystyle V_\alpha(f) = \left(\sum_{u \subseteq \{1,\ldots, s\}} \sum_{\boldsymbol{\alpha}_u \in \{1,\ldots, \alpha\}^{|u|}} \int_{[0,1]^{|u|}} \left| \int_{[0,1]^{s-|u|}} \frac{\partial^{\sum_{i \in u} \alpha_i} f}{\prod_{i \in u} \partial x_i^{\alpha_i}} \,\mathrm{d} \boldsymbol{x}_{\{1,\ldots, s\} \setminus u} \right|^2 \,\mathrm{d} \boldsymbol{x}_u \right)^{1/2}.

For instance, if s = 1, then the norm above becomes

\displaystyle V_\alpha(f) = \left(\left|\int_0^1 f(x) \,\mathrm{d} x\right|^2 + \sum_{r=1}^\alpha \int_0^1 |f^{(r)}(x)|^2 \,\mathrm{d} x\right)^{1/2},

where f^{(r)} denotes the rth derivative of f. We consider integrands for which V_\alpha(f) < \infty.
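For example, for \alpha = 1 and f(x) = x we have \int_0^1 f(x) \,\mathrm{d} x = 1/2 and f^\prime(x) = 1, so that

\displaystyle V_1(f) = \left(\frac{1}{4} + 1\right)^{1/2} = \frac{\sqrt{5}}{2}.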

Let \{\boldsymbol{y}_0,\ldots, \boldsymbol{y}_{b^m-1}\} \subseteq [0,1]^{ds} be an Owen-scrambled digital (t,m, ds)-net over \mathbb{Z}_b (for more information on digital nets see Chapter 4 in the book and for more information on scrambling see Chapter 13 in the book).

Define the digit interlacing function \mathscr{D}_d for vectors (x_1,\ldots, x_d) \in [0,1)^d, where x_i = \xi_{i,1} b^{-1} + \xi_{i,2} b^{-2} + \cdots is the base b expansion of x_i, by

\displaystyle \mathscr{D}_d(x_1,\ldots, x_d) = \sum_{r=1}^\infty \sum_{i=1}^d \xi_{i,r} b^{-(r-1) d - i}.

For \boldsymbol{x} = (x_1,\ldots, x_{ds}) \in [0,1)^{ds} we define \mathscr{D}_d(\boldsymbol{x}) = (\mathscr{D}_d(x_1,\ldots, x_d),\ldots, \mathscr{D}_d(x_{(s-1)d+1},\ldots, x_{ds})). Our algorithm then simply computes

\displaystyle \widehat{I}(f) := \frac{1}{b^m} \sum_{n=0}^{b^m-1} f(\mathscr{D}_d(\boldsymbol{y}_n)) \approx \int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}.

Since digital nets and scrambling are already implemented in Matlab, one only needs to implement the digit interlacing function to obtain an implementation in Matlab. A simple Matlab program which generates the quadrature points is included in this post (see below).
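The implementation below is in Matlab; purely for illustration, here is a short Python sketch of the interlacing function \mathscr{D}_d (the function name and the truncation depth are my choices, not from the paper):

```python
def interlace(xs, b=2, depth=32):
    """Digit-interlace the point xs = (x_1, ..., x_d) in [0,1)^d:
    digit r of coordinate i is moved to position (r-1)*d + i of the
    output, matching the definition of D_d (truncated after `depth`
    digits per coordinate)."""
    d = len(xs)
    # extract the first `depth` base-b digits of each coordinate
    digits = []
    for x in xs:
        row, y = [], x
        for _ in range(depth):
            y *= b
            row.append(int(y))
            y -= int(y)
        digits.append(row)
    # reassemble, placing digit r of coordinate i at position (r-1)*d + i
    z = 0.0
    for r in range(depth):
        for i in range(d):
            z += digits[i][r] * float(b) ** (-(r * d + i + 1))
    return z

# For d = 1 the function is the identity; for d = 2, interlacing
# 0.25 = (0.01)_2 and 0.5 = (0.1)_2 gives (0.0110...)_2 = 0.375.
```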

The main result is now the following.

Let f:[0,1]^s \to\mathbb{R} be such that V_\alpha(f) < \infty. Let \widehat{I}(f) be defined as above. Then the variance of the estimator \widehat{I}(f) satisfies

\displaystyle \mathrm{Var}[\widehat{I}(f)] = \mathcal{O}(N^{-2\min(\alpha,d) - 1} (\log N)^{s\min(d+1,\alpha+1)}),

where N=b^m denotes the number of quadrature points.

It is known that this result is best possible (except for the power of the \log N factor).

If the smoothness \alpha of the integrand is known we can choose d=\alpha and the above result can be slightly improved to

\displaystyle \mathrm{Var}[\widehat{I}(f)] = \mathcal{O}(N^{-2\alpha-1} (\log N)^{\alpha s - 1}).

See also the comment in the MathSciNet report of Erich Novak on Art Owen’s paper on local antithetic sampling with scrambled nets, which can be downloaded here.

Higher order scrambling

The scrambling method proposed here uses Owen’s scrambling together with the interlacing of digits.

Let C_1,\ldots, C_{ds} \in \mathbb{Z}_b^{m\times m} be the generating matrices of a digital (t,m,ds)-net over \mathbb{Z}_b. Then the interlacing of the digits can also be described using the generating matrices. The following picture illustrates how the new first generating matrix is obtained from C_1,\ldots, C_d.
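The row interlacing suggested here can be sketched in a few lines of Python (a hedged sketch; I assume the convention that row (r-1)d + i of the interlaced matrix is row r of C_i, consistent with the digit interlacing function above):

```python
def interlace_matrices(Cs):
    """Interlace the rows of d generating matrices C_1, ..., C_d
    (each given as a list of m rows) into one (d*m) x m matrix:
    row (r-1)*d + i of the result is row r of C_i (1-indexed)."""
    E = []
    for r in range(len(Cs[0])):   # for each row index r
        for C in Cs:              # take row r of C_1, ..., C_d in turn
            E.append(list(C[r]))
    return E
```

Applied to the m x m matrices C_1, \ldots, C_d this yields a dm x m matrix, matching the shape of the generating matrices of a higher order digital net.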

The algorithm in the paper first uses a scrambled digital (t,m,ds)-net and then interlaces the digits of the points. In principle one can of course also use a higher order digital net, use the inverse of the interlacing function \mathscr{D}_d as defined in the manuscript, scramble this point set and then apply the interlacing function \mathscr{D}_d.

The inverse of the interlacing principle can also be applied to the generating matrices of the higher order digital nets. In this case one has generating matrices E_1,\ldots, E_s \in \mathbb{Z}_b^{dm \times m} and then obtains matrices C_1,\ldots, C_{ds} \in \mathbb{Z}_b^{m\times m}. The following picture illustrates how one obtains d matrices C_1,\ldots, C_d from E_1.

The matrices C_1,\ldots, C_{ds} generate a digital (t^\prime,m,ds)-net over \mathbb{Z}_b. However, even if E_1,\ldots, E_s are the generating matrices of a digital (t,d,1, dm \times m,s)-net, not much can be said about the quality parameter t^\prime of the digital net generated by C_1,\ldots, C_{ds}. Hence, if one first applied the inverse interlacing function \mathscr{D}_d^{-1} to a higher order digital net, scrambled this point set and then applied the interlacing function \mathscr{D}_d, one would lose the information about the quality parameter of the higher order digital net. For this reason, the scrambling method of the manuscript uses a scrambled digital (t,m,ds)-net and applies the interlacing function to these sample points directly. (Whether there is a scrambling which can be applied directly to higher order digital nets remains open.)

Matlab function which generates higher order scrambled digital nets based on the Sobol sequence

function [X] = HOscrambleSobol(m,s,d)
% Creates a higher order scrambled net based on the Sobol sequence.
% 2^m number of points
% s   dimension of the final point set
% d   interlacing factor
% X   output point set

N = pow2(m); % Number of points;
P = sobolset(d*s); % Get Sobol sequence in d*s dimensions;
P = scramble(P,'MatousekAffineOwen'); % Scramble Sobol points;
sobolpoints = net(P,N); % Get net from Sobol sequence with N points;

% Create binary representation of digits;

u=52; % Number of digits used in higher order representation;
depth=floor(u/d); % Number of digits used of original point set;

W = sobolpoints * pow2(depth);
Z = floor(transpose(W)); % Integer digit matrix, one row per coordinate;
Y = zeros(s,N);
for j = 1:s
  for i = 1:depth
    for k = 1:d
      % Place digit i of coordinate (j-1)*d+k at position (i-1)*d+k;
      Y(j,:) = bitset(Y(j,:), (depth*d+1) - k - (i-1)*d, ...
                      bitget(Z((j-1)*d+k,:), (depth+1) - i));
    end
  end
end
Y = Y * pow2(-depth*d); % Rescale back to [0,1);

X = transpose(Y); % X is matrix of higher order Sobol points,
% where the number of columns equals the dimension
% and the number of rows equals the number of points;


Numerical Example

We consider the function

\displaystyle f(x,y)= \frac{y \mathrm{e}^{xy}}{\mathrm{e}-2}.
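The normalization is chosen so that the exact value of the integral is 1: \int_0^1 y \mathrm{e}^{xy} \,\mathrm{d} x = \mathrm{e}^y - 1 and \int_0^1 (\mathrm{e}^y - 1) \,\mathrm{d} y = \mathrm{e} - 2. A plain Monte Carlo sanity check in Python (not the method of the paper, just a quick verification of the exact value):

```python
import math
import random

def f(x, y):
    # the test integrand, normalized to integrate to 1 over [0,1]^2
    return y * math.exp(x * y) / (math.e - 2)

random.seed(0)
n = 200000
est = sum(f(random.random(), random.random()) for _ in range(n)) / n
# est should be close to the exact value 1
```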

The following Matlab program estimates the variances using higher order scrambling.

s=2; % Dimension;
mmin=1; % Minimum value of m (2^m points);
mmax=17; % Maximum value of m;
dmin=1; % Minimum interlacing factor;
dmax=3; % Maximum interlacing factor;
rmax=100; % Number of repetitions;
% Notice that we must have mmax * dmax < 53.

z = zeros(1,rmax);
varest = zeros(mmax,dmax); % varest instead of var, which is a builtin;
for d=dmin:dmax
  for m=mmin:mmax
    for r=1:rmax
      x = HOscrambleSobol(m,s,d); % Independently scrambled point set;
      z(r) = sum(x(:,2).*exp(x(:,1).*x(:,2)))/(pow2(m)*(exp(1)-2)); % Evaluation of estimator;
    end
    varest(m,d) = sum((z - sum(z)/rmax).^2)/(rmax*(rmax-1)); % Estimation
    % of the variance;
  end
end

hold on;
for d=dmin:dmax
  plot(mmin:mmax, log2(varest(mmin:mmax,d))); % Plot log-variance against m;
end
hold off;

Further research

The construction of higher order polynomial lattice rules is not based on the construction principle of 1. and 2. Hence the question arises whether higher order scrambled polynomial lattice rules can be constructed for which the root mean square error converges at the optimal rate. An approach similar to the one used in the recent paper on scrambled polynomial lattice rules, see here, might also work in this case.

