Looking up records in a big data table takes time, and the more scattered the matching records are, the longer the response time will be. This article is about how to minimize response time when the result set is as scattered as it can be, i.e., when the records are unrelated to each other and randomly dispersed.
When the table is big, there are three alternatives:
Since MySQL supports a maximum of 1024 partitions, that is the number of partitions we will use. The test table has about 45 million rows.
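To make the setup concrete, here is a small sketch of how rows spread across the partitions. It assumes the table is partitioned with `PARTITION BY HASH(recordid) PARTITIONS 1024` (the partitioning function is an assumption; the article does not state it), in which case MySQL places each row in partition `MOD(recordid, 1024)`:

```python
# Assumed layout: PARTITION BY HASH(recordid) PARTITIONS 1024.
# Under HASH partitioning, MySQL assigns a row to MOD(recordid, 1024).
NUM_PARTITIONS = 1024
TOTAL_ROWS = 45_000_000

def partition_for(record_id: int) -> int:
    """Partition number a row lands in under HASH partitioning."""
    return record_id % NUM_PARTITIONS

# With ids spread evenly, each partition holds roughly this many rows:
print(TOTAL_ROWS // NUM_PARTITIONS)  # about 43945 rows per partition
```

Randomly chosen ids therefore land in many different partitions, which is exactly the scattered access pattern the article measures.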
We will vary the number of records sought: 1, 10, 100, 1,000, 10,000, and 20,000.
The record ids will be sampled uniformly at random from the set of all record ids in the table.
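The sampling step can be sketched as follows. This is a minimal illustration, assuming the record ids are contiguous from 1 to 45,000,000 (the article does not specify the id range); `random.sample` draws distinct ids without replacement:

```python
import random

# Assumption: record ids run contiguously from 1 to TOTAL_ROWS.
TOTAL_ROWS = 45_000_000

def sample_record_ids(n, seed=None):
    """Draw n distinct record ids uniformly at random."""
    rng = random.Random(seed)
    return rng.sample(range(1, TOTAL_ROWS + 1), n)

# One test run: pick 10,000 ids to look up.
ids = sample_record_ids(10_000, seed=42)
print(len(ids))  # 10000 distinct ids, randomly dispersed over the table
```

A fixed seed makes each benchmark run repeatable while still giving a randomly dispersed id set.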