
Efficient Random Sampling – Parallel, Vectorized, Cache-Efficient, and Online

Authors:

Peter Sanders, Sebastian Lamm, Lorenz Hübschle-Schneider, Emanuel Schrade, Carsten Dachsbacher

Source:

Technical Report, October 2016, arXiv:1610.05141

Date: 17.10.2016

We consider the problem of sampling n numbers from the range {1, …, N} without replacement on modern architectures. The main result is a simple divide-and-conquer scheme that makes sequential algorithms more cache efficient and leads to a parallel algorithm running in expected time O(n/p + log p) on p processors. The amount of communication between the processors is very small and independent of the sample size. We also discuss modifications needed for load balancing, reservoir sampling, online sampling, sampling with replacement, Bernoulli sampling, and vectorization on SIMD units or GPUs.
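The core idea of such a divide-and-conquer scheme can be sketched as follows: split the range in half, draw how many of the n samples fall into the left half (a hypergeometric random variate), and recurse on both halves until a base case is small enough for a direct sequential sampler. The sketch below is a minimal illustration under these assumptions, not the paper's implementation; the function names, the `base` cutoff, and the simple urn-simulation hypergeometric sampler are all hypothetical choices for readability.

```python
import random

def hypergeometric(n, left, total):
    """How many of n draws without replacement from `total` items land
    among the first `left` items. Simple O(n) urn simulation -- a real
    implementation would use a dedicated hypergeometric sampler."""
    k = 0
    for _ in range(n):
        if random.random() < left / total:
            k += 1
            left -= 1
        total -= 1
    return k

def sample_d_and_c(n, lo, hi, base=1024):
    """Sample n distinct integers from {lo, ..., hi} by recursively
    splitting the range; `base` is a hypothetical cutoff below which
    we fall back to a direct sequential sampler (random.sample)."""
    size = hi - lo + 1
    if n == 0:
        return []
    if n <= base or size <= base:
        return random.sample(range(lo, hi + 1), n)
    mid = lo + size // 2  # split the range into two halves
    # Number of samples falling into the left half follows a
    # hypergeometric distribution; the rest go to the right half.
    n_left = hypergeometric(n, mid - lo, size)
    return (sample_d_and_c(n_left, lo, mid - 1, base)
            + sample_d_and_c(n - n_left, mid, hi, base))
```

In a parallel setting, the two recursive calls are independent once `n_left` is known, which is why the processors need to exchange only a few counters rather than any sample data.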