From a high-altitude view, that's why splitting a huge database table into smaller partitions is not an automatic performance win. If you have M partitions with N rows each, then a lookup might require O(log M) time to find the right partition and O(log N) time to find a row within that partition. But O(log M + log N) = O(log MN), which is exactly what you would get from a single big table with appropriate indexing.

Of course, in the real world constant factors and implementation details matter, so this is just a heuristic. But it runs contrary to many novice programmers' intuition that a large DB table must automatically be a slow one.
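The log identity is easy to check numerically. A minimal sketch, with made-up sizes M and N, comparing the step count of a two-level lookup (partition index, then row index) against one big balanced index:

```python
import math

# Hypothetical sizes: M partitions of N rows each vs. one table of M*N rows.
M, N = 1024, 4096

# A balanced-index lookup takes roughly log2(element count) steps.
two_level = math.log2(M) + math.log2(N)  # find partition, then row within it
single = math.log2(M * N)                # one index over the whole table

print(two_level, single)  # both 22.0: log M + log N == log(M*N)
```

With M = 2^10 and N = 2^12, both come out to 22 steps, illustrating that partitioning by itself buys no asymptotic advantage for point lookups.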