author    Jeff King <peff@peff.net>           2013-05-01 16:34:50 -0400
committer Junio C Hamano <gitster@pobox.com>  2013-05-02 08:36:50 -0700
commit    9a414486d9f0325c3663210471e4673ed0cd862c (patch)
tree      3e387d349d03763029093aeb935bc6275199b25f /object.c
parent    15999998fbda60552742275570947431b57108ae (diff)
lookup_object: prioritize recently found objects
The lookup_object function is backed by a hash table of all objects we
have seen in the program. We manage collisions with a linear walk over
the colliding entries, checking each with hashcmp(). The main cost of
lookup is in these hashcmp() calls: finding our item in the first slot
is cheaper than finding it in the second slot, which is cheaper than
the third, and so on.

If we assume that there is some locality to the object lookups (e.g.,
if X and Y collide, and we have just looked up X, the next lookup is
more likely to be for X than for Y), then we can improve our average
lookup speed by checking X before Y. This patch does so by swapping a
found item to the front of the collision chain.

The p0001 perf test reveals that this does indeed exploit locality in
the case of "rev-list --all --objects":

    Test                              origin           this tree
    ----------------------------------------------------------------------
    0001.1: rev-list --all           0.40(0.38+0.02)  0.40(0.36+0.03)  +0.0%
    0001.2: rev-list --all --objects 2.24(2.17+0.05)  1.86(1.79+0.05) -17.0%

This is not surprising, as the full object traversal will hit the same
tree entries over and over (e.g., for every commit that doesn't change
"Documentation/", we will have to look up the same sha1 just to find
out that we already processed it).

The reason why this technique works (and does not violate any
properties of the hash table) is subtle and bears some explanation.

Let's imagine we get a lookup for sha1 `X`, and it hashes to bucket
`i` in our table. That stretch of the table may look like:

    index       | i-1 |  i  | i+1 | i+2 |
    -----------------------------------------
    entry   ... |  A  |  B  |  C  |  X  | ...

We start our probe at i, see that B does not match, nor does C, and
finally find X. There may be multiple C's in the middle, but we know
that there are no empty slots (or else we would not find X at all).

We do not know the original index of B; it may be `i`, or it may be
less than i (e.g., if it were `i-1`, it would collide with A and spill
over into the `i` bucket). So it is acceptable for us to move it to
the right within a contiguous stretch of entries (because we will find
it from a linear walk starting anywhere at `i` or before), but never
to the left (if we moved it to `i-1`, we would miss it when starting
our walk at `i`).

We do know the original index of X; it is `i`, so it is safe to place
it anywhere in the contiguous stretch between `i` and where we found
it (`i+2` in this case).

This patch does a pure swap; after finding X in the situation above,
we would end with:

    index       | i-1 |  i  | i+1 | i+2 |
    -----------------------------------------
    entry   ... |  A  |  X  |  C  |  B  | ...

We could instead bump X into the `i` slot, and then shift the whole
contiguous chain down by one, resulting in:

    index       | i-1 |  i  | i+1 | i+2 |
    -----------------------------------------
    entry   ... |  A  |  X  |  B  |  C  | ...

That puts our chain in true most-recently-used order. However,
experiments show that it is not any faster (and in fact, is slightly
slower due to the extra manipulation).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
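For illustration, the rejected shift-down (true most-recently-used)
variant described above might look something like the sketch below. It
is not part of this patch; move_to_front is a hypothetical helper, and
the hash/size parameters stand in for the obj_hash globals in object.c:

	/*
	 * Hypothetical sketch (not from this patch): instead of a pure
	 * swap, bump the found object into slot `first` and shift the
	 * rest of the contiguous chain down by one slot. Every displaced
	 * entry moves to the right, so the invariant discussed above
	 * still holds.
	 */
	static void move_to_front(struct object **hash, unsigned int size,
				  unsigned int first, unsigned int found_at)
	{
		struct object *found = hash[found_at];
		unsigned int i = found_at;

		while (i != first) {
			/* walk backwards, honoring the probe's wrap-around */
			unsigned int prev = i ? i - 1 : size - 1;
			hash[i] = hash[prev];	/* shift one slot to the right */
			i = prev;
		}
		hash[first] = found;	/* found object now heads the chain */
	}

As the message notes, this ordering measured no faster than the pure
swap, which is why the patch below keeps the simpler two-assignment
exchange.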
Diffstat (limited to 'object.c')
-rw-r--r--	object.c	14
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/object.c b/object.c
index 4af3451bf8..3fb1caa29f 100644
--- a/object.c
+++ b/object.c
@@ -71,13 +71,13 @@ static unsigned int hashtable_index(const unsigned char *sha1)
 
 struct object *lookup_object(const unsigned char *sha1)
 {
-	unsigned int i;
+	unsigned int i, first;
 	struct object *obj;
 
 	if (!obj_hash)
 		return NULL;
 
-	i = hashtable_index(sha1);
+	first = i = hashtable_index(sha1);
 	while ((obj = obj_hash[i]) != NULL) {
 		if (!hashcmp(sha1, obj->sha1))
 			break;
@@ -85,6 +85,16 @@ struct object *lookup_object(const unsigned char *sha1)
 		if (i == obj_hash_size)
 			i = 0;
 	}
+	if (obj && i != first) {
+		/*
+		 * Move object to where we started to look for it so
+		 * that we do not need to walk the hash table the next
+		 * time we look for it.
+		 */
+		struct object *tmp = obj_hash[i];
+		obj_hash[i] = obj_hash[first];
+		obj_hash[first] = tmp;
+	}
 	return obj;
 }
 
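As a usage sketch (hypothetical caller, not from this commit; it
assumes obj_hash has been populated through the usual object.c entry
points), the effect of the swap is that a repeated lookup hits the
front of the collision chain directly:

	/*
	 * Hypothetical caller: the second lookup of the same sha1 finds
	 * the object in its first probe slot, because the first lookup
	 * swapped it to the front of its collision chain.
	 */
	void show_lookup_locality(const unsigned char *sha1)
	{
		struct object *obj = lookup_object(sha1);	/* may walk the chain */

		if (obj)
			obj = lookup_object(sha1);	/* now hits slot `first` directly */
	}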