git/repo-settings.c
Latest commit b0afdce5da by Taylor Blau

pack-bitmap.c: use commit boundary during bitmap traversal

When reachability bitmap coverage exists in a repository, Git will use a
different (and hopefully faster) traversal to compute revision walks.

Consider a set of positive and negative tips (which we'll refer to by
their standard bitmap parlance as "wants" and "haves"). In order to
figure out what objects exist between the tips, the existing traversal
in `prepare_bitmap_walk()` does something like:

  1. Consider if we can even compute the set of objects with bitmaps,
     and fall back to the usual traversal if we cannot. For example,
     pathspec limiting traversals can't be computed using bitmaps (since
     they don't know which objects are at which paths). The same is true
     of certain kinds of non-trivial object filters.

  2. If we can compute the traversal with bitmaps, partition the
     (dereferenced) tips into two object lists: "haves" (objects that
     have the UNINTERESTING flag) and "wants" (objects that do not).

  3. Fall back to the ordinary object traversal if either (a) there are
     more than zero haves, none of which are in the bitmapped pack or
     MIDX, or (b) there are no wants.

  4. Construct a reachability bitmap for the "haves" side by walking
     from the revision tips down to any existing bitmaps, OR-ing in any
     bitmaps as they are found.

  5. Then do the same for the "wants" side, stopping at any objects that
     appear in the "haves" bitmap.

  6. Filter the results if any object filter (that can be easily
     computed with bitmaps alone) was given, and then return back to the
     caller.
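
To make the shape of steps (4) and (5) concrete, here is a small,
self-contained model (an illustration of the description above, not
git's actual pack-bitmap.c code): commits are small integers with a
single parent link, "bitmaps" are plain 64-bit sets, and a stored
bitmap on one commit stands in for the on-disk .bitmap data. All of
the names below are invented for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define NCOMMITS 6

    /* Linear history: 5 -> 4 -> 3 -> 2 -> 1 -> 0 (root). */
    static int parent[NCOMMITS] = { -1, 0, 1, 2, 3, 4 };

    /* Stored reachability bitmaps; zero means "no bitmap for this commit". */
    static uint64_t stored[NCOMMITS];

    /*
     * Walk from each tip towards the root, OR-ing in a stored bitmap as
     * soon as one is found (step 4), and stopping early at anything
     * already contained in "seen" (how step 5 stops at the haves).
     */
    static uint64_t walk_to_bitmaps(const int *tips, int ntips, uint64_t seen)
    {
            uint64_t result = 0;
            int i, c;

            for (i = 0; i < ntips; i++) {
                    for (c = tips[i]; c >= 0; c = parent[c]) {
                            if ((result | seen) & (1ull << c))
                                    break;  /* already covered */
                            if (stored[c]) {
                                    result |= stored[c];
                                    break;  /* reuse the stored bitmap */
                            }
                            result |= 1ull << c;
                    }
            }
            return result;
    }

    int main(void)
    {
            int haves[] = { 3 }, wants[] = { 5 };
            uint64_t have_bits, want_bits;

            stored[2] = 0x7; /* commit 2 has a bitmap covering {0, 1, 2} */

            have_bits = walk_to_bitmaps(haves, 1, 0);         /* step (4) */
            want_bits = walk_to_bitmaps(wants, 1, have_bits); /* step (5) */

            printf("objects between the tips: 0x%llx\n",
                   (unsigned long long)(want_bits & ~have_bits));
            return 0;
    }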

When there is good bitmap coverage relative to the traversal tips, this
walk is often significantly faster than an ordinary object traversal
because it can visit far fewer objects.

But in certain cases, it can be significantly *slower* than the usual
object traversal. Why? Because we need to compute complete bitmaps on
either side of the walk. If either one (or both) of the sides require
walking many (or all!) objects before they get to an existing bitmap,
the extra bitmap machinery is mostly or all overhead.

One of the benefits, however, is that even if the walk is slower, bitmap
traversals are guaranteed to provide an *exact* answer. Unlike the
traditional object traversal algorithm, which can over-count the results
by not opening trees for older commits, the bitmap walk builds an exact
reachability bitmap for either side, meaning the results are never
over-counted.

But producing non-exact results is OK for our traversal here (both in
the bitmap case and not), as long as the results are over-counted, not
under.

Relaxing the bitmap traversal to allow it to produce over-counted
results gives us the opportunity to make some significant improvements.
Instead of the above, the new algorithm only has to walk from the
*boundary* down to the nearest bitmap, rather than from each of the
UNINTERESTING tips.

The boundary-based approach still has degenerate cases, but we'll show
in a moment that it is often a significant improvement.

The new algorithm works as follows:

  1. Build a (partial) bitmap of the haves side by first OR-ing any
     bitmap(s) that already exist for UNINTERESTING commits between the
     haves and the boundary.

  2. For each commit along the boundary, add it as a fill-in traversal
     tip (where the traversal terminates once an existing bitmap is
     found), and perform fill-in traversal.

  3. Build up a complete bitmap of the wants side as usual, stopping any
     time we intersect the (partial) haves side.

  4. Return the results.

This is more-or-less equivalent to using the *old* algorithm with this
invocation:

    $ git rev-list --objects --use-bitmap-index $WANTS --not \
        $(git rev-list --objects --boundary $WANTS --not $HAVES |
          perl -lne 'print $1 if /^-(.*)/')
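
Continuing the toy model from earlier (same parent[], stored[] and
walk_to_bitmaps(); again only an illustration of the description, not
git's code), the change on the haves side is that the fill-in traversal
starts from the boundary commits rather than from every have tip:

    /*
     * Steps (1) and (2): seed the haves bitmap with any bitmaps that
     * already exist on the uninteresting side (simplified here to the
     * boundary commits themselves), then fill in from the boundary down
     * to the nearest stored bitmap.
     */
    static uint64_t haves_from_boundary(const int *boundary, int nboundary)
    {
            uint64_t result = 0;
            int i;

            for (i = 0; i < nboundary; i++)
                    result |= stored[boundary[i]];

            return result | walk_to_bitmaps(boundary, nboundary, result);
    }

Steps (3) and (4) then proceed as before: build the wants-side bitmap
with walk_to_bitmaps(wants, nwants, have_bits), stopping at anything
already in the partial haves bitmap, and return the difference.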

The new result performs significantly better in many cases, particularly
when the distance from the boundary commit(s) to an existing bitmap is
shorter than the distance from (all of) the have tips to the nearest
bitmapped commit.

Note that with the old bitmap traversal algorithm, computing this result
can be *slower* than without bitmaps! Under the new algorithm, the
result is computed faster with bitmaps than without (at the cost of
over-counting the true number of objects in a similar fashion as the
non-bitmap traversal):

    # (Computing the number of tagged objects not on any branches
    # without bitmaps).
    $ time git rev-list --count --objects --tags --not --branches
    20

    real	0m1.388s
    user	0m1.092s
    sys	0m0.296s

    # (Computing the same query using the old bitmap traversal).
    $ time git rev-list --count --objects --tags --not --branches --use-bitmap-index
    19

    real	0m22.709s
    user	0m21.628s
    sys	0m1.076s

    # (this commit)
    $ time git.compile rev-list --count --objects --tags --not --branches --use-bitmap-index
    19

    real	0m1.518s
    user	0m1.234s
    sys	0m0.284s

The new algorithm is still slower than not using bitmaps at all, but it
is nearly a 15-fold improvement over the existing traversal.

In a more realistic setting (using my local copy of git.git), I can
observe a similar (if more modest) speed-up:

    $ argv="--count --objects --branches --not --tags"
    hyperfine \
      -n 'no bitmaps' "git.compile rev-list $argv" \
      -n 'existing traversal' "git.compile rev-list --use-bitmap-index $argv" \
      -n 'boundary traversal' "git.compile -c pack.useBitmapBoundaryTraversal=true rev-list --use-bitmap-index $argv"
    Benchmark 1: no bitmaps
      Time (mean ± σ):     124.6 ms ±   2.1 ms    [User: 103.7 ms, System: 20.8 ms]
      Range (min … max):   122.6 ms … 133.1 ms    22 runs

    Benchmark 2: existing traversal
      Time (mean ± σ):     368.6 ms ±   3.0 ms    [User: 325.3 ms, System: 43.1 ms]
      Range (min … max):   365.1 ms … 374.8 ms    10 runs

    Benchmark 3: boundary traversal
      Time (mean ± σ):     167.6 ms ±   0.9 ms    [User: 139.5 ms, System: 27.9 ms]
      Range (min … max):   166.1 ms … 169.2 ms    17 runs

    Summary
      'no bitmaps' ran
        1.34 ± 0.02 times faster than 'boundary traversal'
        2.96 ± 0.05 times faster than 'existing traversal'

Here, the new algorithm is also still slower than not using bitmaps, but
represents a more than 2-fold improvement over the existing traversal in
a more modest example.

Since this algorithm was originally written (nearly a year and a half
ago, at the time of writing), the bitmap lookup table has shipped,
making the new algorithm's results more competitive. A few other future
directions for getting bitmap traversal times below those of not using
bitmaps at all:

  - Decrease the cost to decompress and OR together many bitmaps
    (particularly when enumerating the uninteresting side of the walk).
    Here we could explore more efficient bitmap storage techniques,
    like Roaring+Run, and/or use SIMD instructions to speed up ORing
    them together.

  - Store pseudo-merge bitmaps, which could allow us to OR together
    fewer "summary" bitmaps (which would also help with the above).

Helped-by: Jeff King <peff@peff.net>
Helped-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-05-08 12:05:55 -07:00


#include "git-compat-util.h"
#include "config.h"
#include "repository.h"
#include "midx.h"
#include "compat/fsmonitor/fsm-listen.h"
static void repo_cfg_bool(struct repository *r, const char *key, int *dest,
			  int def)
{
	if (repo_config_get_bool(r, key, dest))
		*dest = def;
}

static void repo_cfg_int(struct repository *r, const char *key, int *dest,
			 int def)
{
	if (repo_config_get_int(r, key, dest))
		*dest = def;
}

void prepare_repo_settings(struct repository *r)
{
	int experimental;
	int value;
	const char *strval;
	int manyfiles;

	if (!r->gitdir)
		BUG("Cannot add settings for uninitialized repository");

	if (r->settings.initialized++)
		return;

	/* Defaults */
	r->settings.index_version = -1;
	r->settings.core_untracked_cache = UNTRACKED_CACHE_KEEP;
	r->settings.fetch_negotiation_algorithm = FETCH_NEGOTIATION_CONSECUTIVE;

	/* Booleans config or default, cascades to other settings */
	repo_cfg_bool(r, "feature.manyfiles", &manyfiles, 0);
	repo_cfg_bool(r, "feature.experimental", &experimental, 0);

	/* Defaults modified by feature.* */
	if (experimental) {
		r->settings.fetch_negotiation_algorithm = FETCH_NEGOTIATION_SKIPPING;
		r->settings.pack_use_bitmap_boundary_traversal = 1;
	}
	if (manyfiles) {
		r->settings.index_version = 4;
		r->settings.index_skip_hash = 1;
		r->settings.core_untracked_cache = UNTRACKED_CACHE_WRITE;
	}

	/* Commit graph config or default, does not cascade (simple) */
	repo_cfg_bool(r, "core.commitgraph", &r->settings.core_commit_graph, 1);
	repo_cfg_int(r, "commitgraph.generationversion", &r->settings.commit_graph_generation_version, 2);
	repo_cfg_bool(r, "commitgraph.readchangedpaths", &r->settings.commit_graph_read_changed_paths, 1);
	repo_cfg_bool(r, "gc.writecommitgraph", &r->settings.gc_write_commit_graph, 1);
	repo_cfg_bool(r, "fetch.writecommitgraph", &r->settings.fetch_write_commit_graph, 0);

	/* Boolean config or default, does not cascade (simple) */
	repo_cfg_bool(r, "pack.usesparse", &r->settings.pack_use_sparse, 1);
	repo_cfg_bool(r, "core.multipackindex", &r->settings.core_multi_pack_index, 1);
	repo_cfg_bool(r, "index.sparse", &r->settings.sparse_index, 0);
	repo_cfg_bool(r, "index.skiphash", &r->settings.index_skip_hash, r->settings.index_skip_hash);
	repo_cfg_bool(r, "pack.readreverseindex", &r->settings.pack_read_reverse_index, 1);
repo_cfg_bool(r, "pack.usebitmapboundarytraversal",
&r->settings.pack_use_bitmap_boundary_traversal,
r->settings.pack_use_bitmap_boundary_traversal);

	/*
	 * The GIT_TEST_MULTI_PACK_INDEX variable is special in that
	 * either it *or* the config sets
	 * r->settings.core_multi_pack_index if true. We don't take
	 * the environment variable if it exists (even if false) over
	 * any config, as in most other cases.
	 */
	if (git_env_bool(GIT_TEST_MULTI_PACK_INDEX, 0))
		r->settings.core_multi_pack_index = 1;

	/*
	 * Non-boolean config
	 */
	if (!repo_config_get_int(r, "index.version", &value))
		r->settings.index_version = value;

	if (!repo_config_get_string_tmp(r, "core.untrackedcache", &strval)) {
		int v = git_parse_maybe_bool(strval);

		/*
		 * If it's set to "keep", or some other non-boolean
		 * value then "v < 0". Then we do nothing and keep it
		 * at the default of UNTRACKED_CACHE_KEEP.
		 */
		if (v >= 0)
			r->settings.core_untracked_cache = v ?
				UNTRACKED_CACHE_WRITE : UNTRACKED_CACHE_REMOVE;
	}

	if (!repo_config_get_string_tmp(r, "fetch.negotiationalgorithm", &strval)) {
		int fetch_default = r->settings.fetch_negotiation_algorithm;

		if (!strcasecmp(strval, "skipping"))
			r->settings.fetch_negotiation_algorithm = FETCH_NEGOTIATION_SKIPPING;
		else if (!strcasecmp(strval, "noop"))
			r->settings.fetch_negotiation_algorithm = FETCH_NEGOTIATION_NOOP;
		else if (!strcasecmp(strval, "consecutive"))
			r->settings.fetch_negotiation_algorithm = FETCH_NEGOTIATION_CONSECUTIVE;
		else if (!strcasecmp(strval, "default"))
			r->settings.fetch_negotiation_algorithm = fetch_default;
		else
			die("unknown fetch negotiation algorithm '%s'", strval);
	}

	/*
	 * This setting guards all index reads to require a full index
	 * over a sparse index. After suitable guards are placed in the
	 * codebase around uses of the index, this setting will be
	 * removed.
	 */
	r->settings.command_requires_full_index = 1;
}