DFG Priority Programme "Algorithms for BIG DATA"

  • Contact person:

    P. Sanders

Short Description

BIG DATA

Computer systems pervade all parts of human activity and acquire, process, and exchange data at a rapidly increasing pace. As a consequence, we live in a Big Data world where information accumulates at an exponential rate, and the real problem has often shifted from collecting enough data to coping with its relentless growth and abundance. In fact, we frequently observe poor scale-up behavior from algorithms that were designed under models of computation that are no longer realistic for big data.

While it is becoming more and more difficult to build faster processors, the hardware industry keeps increasing the number of processors and cores per board or graphics card, and also invests in improved storage technologies. However, all these investments are in vain if we lack algorithmic methods that can efficiently exploit additional processors or memory features.
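As a toy illustration of this point (not code from the programme itself): standard libraries already expose multi-core parallelism, for example the C++17 parallel algorithms, but exploiting it on huge inputs still requires algorithms designed for that setting. A minimal sketch of a parallel reduction over a large array:

    #include <execution>  // std::execution::par_unseq (C++17)
    #include <iostream>
    #include <numeric>    // std::reduce
    #include <vector>

    int main() {
        // A large input: 100 million doubles, roughly 800 MB.
        std::vector<double> data(100'000'000, 1.0);

        // The parallel-unsequenced execution policy lets the same
        // one-line reduction use all available cores instead of one.
        double sum = std::reduce(std::execution::par_unseq,
                                 data.begin(), data.end(), 0.0);

        std::cout << "sum = " << sum << '\n';
    }

The reduction itself parallelizes trivially; the harder algorithmic questions the programme targets arise when the computation is irregular, as in graph partitioning or text indexing.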

Our Goal

The new priority programme aims to improve this situation by bringing together expertise from different areas. On the one hand, recent hardware developments and technological challenges need to be appropriately captured in better computational models. On the other hand, both common and problem-specific algorithmic challenges arising from big data are to be identified and clustered. Considering both sides, a basic toolbox of improved algorithms and data structures for big data sets is to be derived; we do not only strive for theoretical results but intend to follow the whole algorithm engineering development cycle.

Relevant Subprojects

  • Engineering Algorithms for Partitioning Large Graphs
  • Massive Text Indices
  • 3D+T Terabyte Image Analysis (associated)