The input data instances for our algorithms are mostly long 8-bit
character strings of size up to 4 GBytes. We run our algorithms on
power-of-two prefixes of these data instances. For the 'Bad Case
Data' we construct a new instance for each power of two.
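The prefix scheme above can be sketched as follows. This is a minimal illustration, not part of the original setup; the function name `power_of_two_prefixes` and the small in-memory example string are assumptions, standing in for the multi-gigabyte files used in the experiments.

```python
def power_of_two_prefixes(data: bytes):
    """Yield the prefixes of `data` whose lengths are powers of two."""
    n = 1
    while n <= len(data):
        yield data[:n]
        n *= 2

# Small in-memory example instead of a 4 GByte file:
data = b"abracadabra"  # length 11
sizes = [len(p) for p in power_of_two_prefixes(data)]
# sizes == [1, 2, 4, 8]
```

Each algorithm is then run once per prefix, so doubling the input size at every step gives evenly spaced points on a logarithmic scale.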
- The data consist of a concatenation of human chromosomes
- Behaves mostly like random data in the sense of data compression
- Contains some long runs of equal characters (not yet analysed
genome partitions)
- Works well with discarding algorithms
- The data consist of a large collection of English books and other
texts from 'Project Gutenberg'
- Contains many repetitions of the same data
- Each input instance consists of two copies of the same random string
- The input string consists of a collection of open source code
- The input string consists of the result of a web crawl
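A doubled-random-string instance, as described above, can be generated like this. The function name, the seed, and the use of Python's `random` module are illustrative assumptions; any source of uniform 8-bit characters would do.

```python
import random

def doubled_random_instance(half_length: int, seed: int = 0) -> bytes:
    """Return an instance that is two copies of the same random
    8-bit character string, each of length `half_length`."""
    rng = random.Random(seed)  # seeded for reproducible instances
    half = bytes(rng.randrange(256) for _ in range(half_length))
    return half + half

inst = doubled_random_instance(8)
# The two halves are identical by construction:
# inst[:8] == inst[8:]
```

Such instances are interesting because the first half looks incompressible in isolation, while the whole string is highly repetitive.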