Optimize MySQL for speed when the query is non-predictable

Searching a big table for a record takes time, and the more scattered the sought records are, the longer the response time will be. This article is about how to minimize the response time when the result set is as scattered as it can possibly be, i.e. when the requested records are unrelated to each other and randomly dispersed over the table.

When the table is big, there are three alternative table layouts:

  1. Partition on id, but do not index.
  2. Make an index on id, but do not partition.
  3. Partition on id, and make an index on id.

Since MySQL supports a maximum of 1024 partitions, that is what we will use (the three layouts are sketched below). The table used has about 45 million rows.
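As a minimal sketch, the three layouts could be created roughly as follows. The table and column names (t_partitioned, t_indexed, t_partitioned_indexed, id, payload) are invented here for illustration; the actual benchmark table may have looked different.

  -- Sketch of the three layouts, assuming a simple integer id column.

  -- 1. Partitioned on id, no index: HASH partitioning spreads the rows
  --    over 1024 partitions, so a lookup only has to scan one partition.
  CREATE TABLE t_partitioned (
      id INT NOT NULL,
      payload VARCHAR(255)
  ) PARTITION BY HASH(id) PARTITIONS 1024;

  -- 2. Indexed on id, no partitioning: a b-tree index gives direct lookups.
  CREATE TABLE t_indexed (
      id INT NOT NULL,
      payload VARCHAR(255),
      KEY idx_id (id)
  );

  -- 3. Both partitioned and indexed on id.
  CREATE TABLE t_partitioned_indexed (
      id INT NOT NULL,
      payload VARCHAR(255),
      KEY idx_id (id)
  ) PARTITION BY HASH(id) PARTITIONS 1024;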

We will vary the number of records sought: 1, 10, 100, 1000, 10000, and 20000.

The record ids will be randomly sampled from the set of all record ids.
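As an example of how one such test might look, using the hypothetical table name from the sketch above (the ids shown are made up):

  -- Draw a random sample of 1000 ids. This is done once per run, outside the
  -- timed part, since ORDER BY RAND() over 45 million rows is itself expensive.
  SELECT id FROM t_partitioned_indexed ORDER BY RAND() LIMIT 1000;

  -- The timed query then fetches exactly those records:
  SELECT * FROM t_partitioned_indexed
  WHERE id IN (17, 94211, 387002, 44906153 /* ... the rest of the sample ... */);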
