Efficient Random Sampling – Parallel, Vectorized, Cache-Efficient, and Online

Author(s):

Peter Sanders, Sebastian Lamm, Lorenz Hübschle-Schneider, Emanuel Schrade, and Carsten Dachsbacher

Source: arXiv:1610.05141

Date: October 2016

We consider the problem of sampling n numbers from the range {1,…,N} without replacement on modern architectures. The main result is a simple divide-and-conquer scheme that makes sequential algorithms more cache-efficient and leads to a parallel algorithm running in expected time O(n/p + log p) on p processors. The amount of communication between the processors is very small and independent of the sample size. We also discuss modifications needed for load balancing, reservoir sampling, online sampling, sampling with replacement, Bernoulli sampling, and vectorization on SIMD units or GPUs.
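
To make the divide-and-conquer idea concrete, here is a minimal sequential sketch in Python. The function name, the base-case threshold, and the use of NumPy's hypergeometric generator are illustrative assumptions rather than the authors' implementation: the range is split in half, the number of samples falling into the left half is drawn from a hypergeometric distribution, and the two halves are handled recursively.

```python
import random
import numpy as np

rng = np.random.default_rng()

def sample_sorted(n, N, lo=1):
    """Return a sorted sample of n distinct integers from {lo, ..., lo+N-1}.

    Divide-and-conquer sketch (names and thresholds are illustrative):
    split the range in half, draw how many samples land in the left half
    from a hypergeometric distribution, and recurse on both halves.
    Small subproblems fall back to Python's built-in sampling.
    """
    if n == 0:
        return []
    if n <= 32 or N <= 1024:                      # base case (threshold chosen arbitrarily)
        return sorted(random.sample(range(lo, lo + N), n))
    N_left = N // 2
    # Number of samples falling into the left half: hypergeometric with
    # N_left "good" items, N - N_left "bad" items, and n draws without replacement.
    n_left = int(rng.hypergeometric(N_left, N - N_left, n))
    left = sample_sorted(n_left, N_left, lo)
    right = sample_sorted(n - n_left, N - N_left, lo + N_left)
    return left + right                            # halves are disjoint, so order is preserved

print(sample_sorted(10, 10**6))
```

The recursion also exposes the parallelism discussed in the abstract: once the split sizes are drawn, the two subproblems are independent and cover disjoint subranges, so they can be assigned to different processors, which is consistent with the claimed O(n/p + log p) running time and the small amount of inter-processor communication.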