mirror of https://github.com/git/git.git synced 2024-05-20 00:36:07 +02:00

Compare commits


73 Commits

Author SHA1 Message Date
Hudson Worden 72f4909881
Merge 0cbeac6dfb into 0f3415f1f8 2024-05-08 19:55:29 +00:00
Junio C Hamano 0f3415f1f8 The second batch
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-08 10:18:47 -07:00
Junio C Hamano 20ceead5c3 Merge branch 'bb/rgb-12-bit-colors'
The color parsing code learned to handle 12-bit RGB colors, spelled
as "#RGB" (in addition to "#RRGGBB" that is already supported).

* bb/rgb-12-bit-colors:
  color: add support for 12-bit RGB colors
  t/t4026-color: add test coverage for invalid RGB colors
  t/t4026-color: remove an extra double quote character
2024-05-08 10:18:47 -07:00
Junio C Hamano db05f61738 Merge branch 'rs/diff-parseopts-cleanup'
Code clean-up to remove code that is now a noop.

* rs/diff-parseopts-cleanup:
  diff-lib: stop calling diff_setup_done() in do_diff_cache()
2024-05-08 10:18:46 -07:00
Junio C Hamano 97673bdea7 Merge branch 'dk/zsh-git-repo-path-fix'
Command line completion support for zsh (in contrib/) has been
updated to stop exposing internal state to end-user shell
interaction.

* dk/zsh-git-repo-path-fix:
  completion: zsh: stop leaking local cache variable
2024-05-08 10:18:46 -07:00
Junio C Hamano c2b36ab32e Merge branch 'bc/zsh-compatibility'
zsh can pretend to be a normal shell pretty well except for some
glitches that we tickle in some of our scripts. Work them around
so that "vimdiff" and our test suite work well enough with it.

* bc/zsh-compatibility:
  vimdiff: make script and tests work with zsh
  t4046: avoid continue in &&-chain for zsh
2024-05-08 10:18:46 -07:00
Junio C Hamano 80dbfac2aa Merge branch 'rj/add-p-typo-reaction'
When the user responds to a prompt given by "git add -p" with an
unsupported command, the list of available commands was given, which
was too much if the user knew what they wanted to type but merely
made a typo.  Now the user gets a much shorter error message.

* rj/add-p-typo-reaction:
  add-patch: response to unknown command
  add-patch: do not show UI messages on stderr
2024-05-08 10:18:45 -07:00
Junio C Hamano 34f34d63bb Merge branch 'jt/doc-submitting-rerolled-series'
Developer doc update.

* jt/doc-submitting-rerolled-series:
  doc: clarify practices for submitting updated patch versions
2024-05-08 10:18:45 -07:00
Junio C Hamano 2c34e4e747 Merge branch 'rh/complete-symbolic-ref'
Command line completion script (in contrib/) learned to complete
"git symbolic-ref" a bit better (you need to enable plumbing
commands to be completed with GIT_COMPLETION_SHOW_ALL_COMMANDS).

* rh/complete-symbolic-ref:
  completion: add docs on how to add subcommand completions
  completion: improve docs for using __git_complete
  completion: add 'symbolic-ref'
2024-05-08 10:18:45 -07:00
Junio C Hamano f526a4f314 Merge branch 'ps/the-index-is-no-more'
The singleton index_state instance "the_index" has been eliminated
by always instantiating "the_repository" and replacing references
to "the_index" with references to its .index member.

* ps/the-index-is-no-more:
  repository: drop `initialize_the_repository()`
  repository: drop `the_index` variable
  builtin/clone: stop using `the_index`
  repository: initialize index in `repo_init()`
  builtin: stop using `the_index`
  t/helper: stop using `the_index`
2024-05-08 10:18:44 -07:00
Junio C Hamano c5c9acf77d Merge branch 'bc/credential-scheme-enhancement'
The credential helper protocol, together with the HTTP layer, have
been enhanced to support authentication schemes different from
username & password pair, like Bearer and NTLM.

* bc/credential-scheme-enhancement:
  credential: add method for querying capabilities
  credential-cache: implement authtype capability
  t: add credential tests for authtype
  credential: add support for multistage credential rounds
  t5563: refactor for multi-stage authentication
  docs: set a limit on credential line length
  credential: enable state capability
  credential: add an argument to keep state
  http: add support for authtype and credential
  docs: indicate new credential protocol fields
  credential: add a field called "ephemeral"
  credential: gate new fields on capability
  credential: add a field for pre-encoded credentials
  http: use new headers for each object request
  remote-curl: reset headers on new request
  credential: add an authtype field
2024-05-08 10:18:44 -07:00
Junio C Hamano d25ad94df6 Merge branch 'ps/ci-test-with-jgit'
Tests to ensure interoperability between reftable written by jgit
and our code have been added and enabled in CI.

* ps/ci-test-with-jgit:
  t0612: add tests to exercise Git/JGit reftable compatibility
  t0610: fix non-portable variable assignment
  t06xx: always execute backend-specific tests
  ci: install JGit dependency
  ci: make Perforce binaries executable for all users
  ci: merge scripts which install dependencies
  ci: fix setup of custom path for GitLab CI
  ci: merge custom PATH directories
  ci: convert "install-dependencies.sh" to use "/bin/sh"
  ci: drop duplicate package installation for "linux-gcc-default"
  ci: skip sudo when we are already root
  ci: expose distro name in dockerized GitHub jobs
  ci: rename "runs_on_pool" to "distro"
2024-05-08 10:18:44 -07:00
Junio C Hamano 5aec7231c8 Merge branch 'ps/reftable-write-optim'
Code to write out reftable has seen some optimization and
simplification.

* ps/reftable-write-optim:
  reftable/block: reuse compressed array
  reftable/block: reuse zstream when writing log blocks
  reftable/writer: reset `last_key` instead of releasing it
  reftable/writer: unify releasing memory
  reftable/writer: refactorings for `writer_flush_nonempty_block()`
  reftable/writer: refactorings for `writer_add_record()`
  refs/reftable: don't recompute committer ident
  reftable: remove name checks
  refs/reftable: skip duplicate name checks
  refs/reftable: perform explicit D/F check when writing symrefs
  refs/reftable: fix D/F conflict error message on ref copy
2024-05-08 10:18:43 -07:00
Beat Bolli 7b97dfe47b color: add support for 12-bit RGB colors
RGB color parsing currently supports 24-bit values in the form #RRGGBB.

As in Cascading Style Sheets (CSS [1]), also allow specifying an RGB
color using only three digits with #RGB.

In this shortened form, each of the digits is – again, as in CSS –
duplicated to convert the color to 24 bits, e.g. #f1b specifies the same
color as #ff11bb.

In color.h, remove the '0x' prefix in the example to match the actual
syntax.

[1] https://developer.mozilla.org/en-US/docs/Web/CSS/hex-color

Signed-off-by: Beat Bolli <dev+git@drbeat.li>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-02 09:30:38 -07:00
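The digit duplication this commit describes can be sketched in shell; the helper name `expand_rgb` is illustrative, not part of git:

```shell
# Expand a 12-bit #RGB color to its 24-bit #RRGGBB equivalent by
# doubling each hex digit, as in CSS.
expand_rgb () {
    short=${1#\#}                  # drop the leading '#'
    long=""
    while [ -n "$short" ]; do
        c=${short%"${short#?}"}    # first remaining character
        long="$long$c$c"
        short=${short#?}
    done
    echo "#$long"
}

expand_rgb '#f1b'   # -> #ff11bb, matching the example in the message
```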
Beat Bolli d78d692efc t/t4026-color: add test coverage for invalid RGB colors
Make sure that the RGB color parser rejects invalid characters and
invalid lengths.

Signed-off-by: Beat Bolli <dev+git@drbeat.li>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-02 09:30:38 -07:00
Beat Bolli e95af749a2 t/t4026-color: remove an extra double quote character
This is most probably just an editing left-over from cb357221a4 (t4026:
test "normal" color, 2014-11-20) which added this test.

Signed-off-by: Beat Bolli <dev+git@drbeat.li>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-02 09:30:37 -07:00
René Scharfe a5a4cb7b27 diff-lib: stop calling diff_setup_done() in do_diff_cache()
d44e5267ea (diff-lib: plug minor memory leaks in do_diff_cache(),
2020-11-14) added the call to diff_setup_done() to release the memory
of the parseopt member of struct diff_options that repo_init_revisions()
had allocated via repo_diff_setup() and prep_parse_options().

189e97bc4b (diff: remove parseopts member from struct diff_options,
2022-12-01) did away with that allocation; diff_setup_done() doesn't
release any memory anymore.  So stop calling this function on the blank
diffopt member before it is overwritten, as this is no longer necessary.

Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-01 09:44:10 -07:00
D. Ben Knoble 3c20acdf46 completion: zsh: stop leaking local cache variable
Completing commands like "git rebase" in one repository will leak the
local __git_repo_path into the shell's environment so that completing
commands after changing to a different repository will give the old
repository's references (or none at all).

The bug report on the mailing list [1] suggests one simple way to observe
this yourself:

Enter the following commands from some directory:
  mkdir a b b/c
  for d (a b); git -C $d init && git -C $d commit --allow-empty -m init
  cd a
  git branch foo
  pushd ../b/c
  git branch bar

Now type these:
  git rebase <TAB>… # completion for bar available; C-c to abort
  declare -p __git_repo_path # outputs /path/to/b/.git
  popd
  git branch # outputs foo, main
  git rebase <TAB>… # completion candidates are bar, main!

Ideally, the last typed <TAB> should yield foo, main.

Commit beb6ee7163 (completion: extract repository discovery from
__gitdir(), 2017-02-03) anticipated this problem by marking
__git_repo_path as local in __git_main and __gitk_main for Bash
completion but did not give the same mark to _git for Zsh completion.
Thus make __git_repo_path local for Zsh completion, too.

[1]: https://lore.kernel.org/git/CALnO6CBv3+e2WL6n6Mh7ZZHCX2Ni8GpvM4a-bQYxNqjmgZdwdg@mail.gmail.com/

Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-30 15:24:56 -07:00
Rubén Justo 26998ed2a2 add-patch: response to unknown command
When the user gives an unknown command to the "add -p" prompt, the list
of accepted commands with their explanation is given.  This is the same
output they get when they say '?'.

However, the unknown command may be due to a user input error rather
than the user not knowing the valid command.

To reduce the likelihood of user confusion and error repetition, instead
of displaying the list of accepted commands, display a short error
message with the unknown command received, as feedback to the user.

Include a reminder about the current command '?' in the new message, to
guide the user if they want help.

Signed-off-by: Rubén Justo <rjusto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-30 12:02:50 -07:00
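A hypothetical sketch of the shorter feedback described above; the exact wording git prints is not quoted in this message, so this text is illustrative:

```shell
# Instead of dumping the full command list, echo back the unknown
# input and remind the user that '?' shows help.
report_unknown () {
    echo "Unknown command '$1' (type '?' for help)"
}

report_unknown "z"
```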
Rubén Justo 9d225b025d add-patch: do not show UI messages on stderr
There is no need to show some UI messages on stderr, and yet doing so
may produce some undesirable results, such as messages appearing in an
unexpected order.

Let's use stdout for all UI messages, and adjust the tests accordingly.

Signed-off-by: Rubén Justo <rjusto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-30 12:02:39 -07:00
brian m. carlson fedd5c79ff vimdiff: make script and tests work with zsh
When we process the $LAYOUT variable through sed, the result will end
with the character "#".  We then split it at the shell using IFS so that
we can process it a character at a time.

POSIX specifies that only "IFS white space shall be ignored at the
beginning and end of the input".  The hash mark is not a white space
character, so it is not ignored at the beginning and end of the input.

POSIX then specifies that "[e]ach occurrence in the input of an IFS
character that is not IFS white space, along with any adjacent IFS white
space, shall delimit a field, as described previously."  Thus, the final
hash mark delimits a field, and the final field is the empty string.

zsh implements this behavior strictly in compliance with POSIX (and
differently from most other shells), such that we end up with a trailing
empty field.  We don't want this empty field and processing it in the
normal way causes us to fail to parse properly and fail the tests with
"ERROR" entries, so let's just ignore it instead.  This is the behavior
of bash and dash anyway and what was clearly intended, so this is a
reasonable thing to do.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-26 16:31:46 -07:00
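The field-splitting rule quoted above can be demonstrated directly. Under bash and dash the trailing '#' produces no extra field, while a strictly POSIX split, as zsh performs, yields a trailing empty field; this sketch shows the bash/dash behavior:

```shell
layout='a#b#'       # stand-in for the sed output that ends with '#'

oldIFS=$IFS
IFS='#'
set -- $layout      # unquoted on purpose: we want field splitting
IFS=$oldIFS
nfields=$#

echo "fields: $nfields"   # bash/dash: 2; a strict POSIX split yields 3
```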
brian m. carlson 058b8dc2c2 t4046: avoid continue in &&-chain for zsh
zsh has a bug in which the keyword "continue" within an &&-chain is not
effective and the code following it is executed nonetheless.
Fortunately, this bug has been fixed upstream in 12e5db145 ("51608:
Don't execute commands after "continue &&"", 2023-03-29).  However, zsh
releases very infrequently, so it is not present in a stable release
yet.

That, combined with the fact that almost all zsh users get their shell
from their OS vendor, means that it will likely be a long time before
this problem is fixed for most users.  We have other workarounds in
place for FreeBSD ash and dash, so it shouldn't be too difficult to add
one here, either.

Replace the existing code with a test and if-block, which comes only at
the cost of an additional indentation, and leaves the code a little more
idiomatic anyway.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-26 16:31:46 -07:00
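The rewrite described above, in miniature: the &&-chained form in the comment is what the affected zsh mishandles, and the if-block is the portable replacement:

```shell
# Affected form (buggy zsh keeps executing after "continue"):
#     test "$item" = skip && continue

# Portable replacement, at the cost of one indentation level:
for item in keep-1 skip keep-2; do
    if test "$item" = skip; then
        continue
    fi
    echo "$item"
done
```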
Justin Tobler 4cf6e7bf5e doc: clarify practices for submitting updated patch versions
The `SubmittingPatches` documentation briefly mentions that related
patches should be grouped together in their own e-mail thread. Expand on
this to explicitly state that updated versions of a patch series should
also follow this. Also add a link to existing documentation from
`MyFirstContribution` that provides detailed instructions on how to do
this via `git-send-email(1)`.

Signed-off-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-25 14:49:06 -07:00
Roland Hieber 6b7c45e8c9 completion: add docs on how to add subcommand completions
Signed-off-by: Roland Hieber <rhi@pengutronix.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-25 09:23:27 -07:00
Roland Hieber d13a295074 completion: improve docs for using __git_complete
It took me more than a few tries and a close reading of __git_main to
understand that the two paragraphs really only refer to adding
completion functions for executables that are not called through git's
subcommand magic. Improve the docs and be more specific.

Signed-off-by: Roland Hieber <rhi@pengutronix.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-25 09:23:26 -07:00
Roland Hieber cb85fdf4a4 completion: add 'symbolic-ref'
Even though 'symbolic-ref' is only completed when
GIT_COMPLETION_SHOW_ALL_COMMANDS=1 is set, it currently defaults to
completing file names, which is not very helpful. Add a simple
completion function which completes options and refs.

Signed-off-by: Roland Hieber <rhi@pengutronix.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-25 09:23:26 -07:00
Patrick Steinhardt 0c47355790 repository: drop `initialize_the_repository()`
Now that we have dropped `the_index`, `initialize_the_repository()`
doesn't really do a lot anymore except for setting up the pointer for
`the_repository` and then calling `initialize_repository()`. The former
can be replaced by statically initializing the pointer though, which
basically makes this function moot.

Convert callers to instead call `initialize_repository(the_repository)`
and drop `initialize_the_repository()`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-18 12:30:43 -07:00
Patrick Steinhardt 19fa8cd48c repository: drop `the_index` variable
All users of `the_index` have been converted to use either a custom
`struct index_state *` or the index provided by `the_repository`. We can
thus drop the globally-accessible declaration of this variable. In fact,
we can go further than that and drop `the_index` completely now and have
it be allocated dynamically in `initialize_repository()` as all the
other data structures in it are.

This concludes the quest to make Git `the_index` free, which has started
with 4aab5b46f4 (Make read-cache.c "the_index" free., 2007-04-01).

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-18 12:30:42 -07:00
Patrick Steinhardt 9ee6d63bab builtin/clone: stop using `the_index`
Convert git-clone(1) to use `the_repository->index` instead of
`the_index`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-18 12:30:42 -07:00
Patrick Steinhardt 66bce9d00b repository: initialize index in `repo_init()`
When Git starts, one of the first things it will do is to call
`initialize_the_repository()`. This function sets up both the global
`the_repository` and `the_index` variables as required. Part of that
setup is also to set `the_repository.index = &the_index` so that the
index can be accessed via the repository.

When calling `repo_init()` on a repository though we set the complete
struct to all-zeroes, which will also cause us to unset the `index`
pointer. And as we don't re-initialize the index in that function, we
will end up with a `NULL` pointer here.

This has been fine until now because this function is only used to
create a new repository. git-init(1) does not access the index at all
after initializing the repository, whereas git-checkout(1) only uses
`the_index` directly. We are about to remove `the_index` though, which
will uncover this partially-initialized repository structure.

Refactor the code and create a common `initialize_repository()` function
that gets called from `repo_init()` and `initialize_the_repository()`.
This function sets up both the repository and the index as required.
Like this, we can easily special-case when `repo_init()` gets called
with `the_repository`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-18 12:30:42 -07:00
Patrick Steinhardt f59aa5e0a9 builtin: stop using `the_index`
Convert builtins to use `the_repository->index` instead of `the_index`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-18 12:30:42 -07:00
Patrick Steinhardt 319ba14407 t/helper: stop using `the_index`
Convert test-helper tools to use `the_repository->index` instead of
`the_index`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-18 12:30:41 -07:00
brian m. carlson ffff4ac065 credential: add method for querying capabilities
Right now, there's no specific way to determine whether a credential
helper or git credential itself supports a given set of capabilities.
It would be helpful to have such a way, so let's let credential helpers
and git credential take an argument, "capability", which has it list the
capabilities and a version number on standard output.

Specifically choose a format that is slightly different from regular
credential output and assume that no capabilities are supported if a
non-zero exit status occurs or the data deviates from the format.  It is
common for users to write small shell scripts as the argument to
credential.helper, which will almost never be designed to emit
capabilities.  We want callers to gracefully handle this case by
assuming that they are not capable of extended support because that is
almost certainly the case, and specifying the error behavior up front
does this and preserves backwards compatibility in a graceful way.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:08 -07:00
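The graceful-degradation rule described above can be sketched as follows. Note that the output format shown here ("version 0" plus "capability <name>" lines) and the helper names are assumptions for illustration, not the documented wire format:

```shell
# Treat a non-zero exit status or unrecognized output as "no
# capabilities", so old helpers (often tiny shell scripts) keep
# working unchanged.
query_caps () {
    out=$("$@" capability 2>/dev/null) || { echo "none"; return; }
    case $out in
    "version 0"*)
        echo "$out" | sed -n 's/^capability //p'
        ;;
    *)
        echo "none"
        ;;
    esac
}

# Illustrative stand-ins for helpers:
old_helper () { echo "protocol=https"; }                      # predates capabilities
new_helper () { printf 'version 0\ncapability authtype\n'; }  # advertises authtype

query_caps old_helper   # -> none
query_caps new_helper   # -> authtype
```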
brian m. carlson 40220f48b1 credential-cache: implement authtype capability
Now that we have full support in Git for the authtype capability, let's
add support to the cache credential helper.

When parsing data, we always set the initial capabilities because we're
the helper, and we need both the initial and helper capabilities to be
set in order to have the helper capabilities take effect.

When emitting data, always emit the supported capability and make sure
we emit items only if we have them and they're supported by the caller.
Since we may no longer have a username or password, be sure to emit
those conditionally as well so we don't segfault on a NULL pointer.
Similarly, when comparing credentials, consider both the password and
credential fields when we're matching passwords.

Adjust the partial credential detection code so that we can store
credentials missing a username or password as long as they have an
authtype and credential.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:08 -07:00
brian m. carlson 30c0a3036f t: add credential tests for authtype
It's helpful to have some basic tests for credential helpers supporting
the authtype and credential fields.  Let's add some tests for this case
so that we can make sure newly supported helpers work correctly.

Note that we explicitly check that credential helpers can produce
different sets of authtype and credential values based on the username.
While the username is not used in the HTTP protocol with authtype and
credential, it can still be specified in the URL and thus may be part of
the protocol.  Additionally, because it is common for users to have
multiple accounts on one service (say, both personal and professional
accounts), it's very helpful to be able to store different credentials
for different accounts in the same helper, and that doesn't become less
useful if one is using, say, Bearer authentication instead of Basic.
Thus, credential helpers should be expected to support this
functionality as basic functionality, so verify here that they do so.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:08 -07:00
brian m. carlson ac4c7cbfaa credential: add support for multistage credential rounds
Over HTTP, NTLM and Kerberos require two rounds of authentication on the
client side.  It's possible that there are custom authentication schemes
that also implement this same approach.  Since these are tricky schemes
to implement and the HTTP library in use may not always handle them
gracefully on all systems, it would be helpful to allow the credential
helper to implement them instead for increased portability and
robustness.

To allow this to happen, add a boolean flag, continue, that indicates
that instead of failing when we get a 401, we should retry another round
of authentication.  However, this necessitates some changes in our
current credential code so that we can make this work.

Keep the state[] headers between iterations, but only use them to send
to the helper and only consider the new ones we read from the credential
helper to be valid on subsequent iterations.  That avoids us passing
stale data when we finally approve or reject the credential.  Similarly,
clear the multistage and wwwauth[] values appropriately so that we
don't pass stale data or think we're trying a multiround response when
we're not.  Remove the credential values so that we can actually fill a
second time with new responses.

Limit the number of iterations of reauthentication we do to 3.  This
means that if there's a problem, we'll terminate with an error message
instead of retrying indefinitely and not informing the user (and
possibly conducting a DoS on the server).

In our tests, handle creating multiple response output files from our
helper so we can verify that each of the messages sent is correct.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:08 -07:00
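The retry cap described above (three rounds, then fail loudly) can be sketched like this; `auth_round` is an illustrative stand-in for one authentication exchange, not a git function:

```shell
round=0
auth_round () {
    # Stand-in: pretend the second exchange succeeds.
    round=$((round + 1))
    [ "$round" -ge 2 ]
}

attempt=1
while [ "$attempt" -le 3 ]; do
    if auth_round; then
        echo "authenticated on attempt $attempt"
        break
    fi
    attempt=$((attempt + 1))
done
if [ "$attempt" -gt 3 ]; then
    # Bounded, so a misbehaving server cannot make us retry forever.
    echo "authentication failed after 3 rounds" >&2
fi
```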
brian m. carlson 37417b7717 t5563: refactor for multi-stage authentication
Some HTTP authentication schemes, such as NTLM- and Kerberos-based
options, require more than one round trip to authenticate.  Currently,
these can only be supported in libcurl, since Git does not have support
for this in the credential helper protocol.

However, in a future commit, we'll add support for this functionality
into the credential helper protocol and Git itself. Because we don't
really want to implement either NTLM or Kerberos, both of which are
complex protocols, we'll want to test this using a fake credential
authentication scheme.  In order to do so, update t5563 and its backend
to allow us to accept multiple sets of credentials and respond with
different behavior in each case.

Since we can now provide any number of possible status codes, provide a
non-specific reason phrase so we don't have to generate a more specific
one based on the response.  The reason phrase is mandatory according to
the status-line production in RFC 7230, but clients SHOULD ignore it,
and curl does (except to print it).

Each entry in the authorization and challenge fields contains an ID,
which indicates a corresponding credential and response.  If the
response is a 200 status, then we continue to execute git-http-backend.
Otherwise, we print the corresponding status and response.  If no ID is
matched, we use the default response with a status of 401.

Note that there is an implicit order to the parameters.  The ID is
always first and the creds or response value is always last, and
therefore may contain spaces, equals signs, or other arbitrary data.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:08 -07:00
brian m. carlson bd590bde58 docs: set a limit on credential line length
We recently introduced a way for credential helpers to add arbitrary
state as part of the protocol.  Set some limits on line length to avoid
helpers passing extremely large amounts of data.  While Git doesn't have
a fixed parsing length, there are other tools which support this
protocol and it's kind to allow them to use a reasonable fixed-size
buffer for parsing.  In addition, we would like to be moderate in our
memory usage and imposing reasonable limits is helpful for that purpose.

In the event a credential helper is incapable of storing its serialized
state in 64 KiB, it can feel free to serialize it on disk and store a
reference instead.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:07 -07:00
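The limit is straightforward to enforce on the reading side; a sketch, with an illustrative function name:

```shell
max=65536    # 64 KiB, the limit the documentation now allows relying on

# Reject credential-protocol lines longer than the limit; helpers with
# bigger state can spill it to disk and pass a reference instead.
line_ok () {
    [ "${#1}" -le "$max" ]
}

line_ok "username=alice" && echo "accepted"
```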
brian m. carlson 36f7d865e3 credential: enable state capability
Now that we've implemented the state capability, let's send it along by
default when filling credentials so we can make use of it.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:07 -07:00
brian m. carlson 8470c94be3 credential: add an argument to keep state
Until now, our credential code has mostly dealt with usernames and
passwords and we've let libcurl deal with the variant of authentication
to be used.  However, now that we have the credential value, the
credential helper can take control of the authentication, so the value
provided might be something that's generated, such as a Digest hash
value.

In such a case, it would be helpful for a credential helper that gets an
erase or store command to be able to keep track of an identifier for the
original secret that went into the computation.  Furthermore, some types
of authentication, such as NTLM and Kerberos, actually need two round
trips to authenticate, which will require that the credential helper
keep some state.

In order to allow for these use cases and others, allow storing state in
a field called "state[]".  This value is passed back to the credential
helper that created it, which avoids confusion caused by parsing values
from different helpers.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:07 -07:00
brian m. carlson ad9bb6dfe6 http: add support for authtype and credential
Now that we have the credential helper code set up to handle arbitrary
authentications schemes, let's add support for this in the HTTP code,
where we really want to use it.  If we're using this new functionality,
don't set a username and password, and instead set a header wherever
we'd normally do so, including for proxy authentication.

Since we can now handle this case, ask the credential helper to enable
the appropriate capabilities.

Finally, if we're using the authtype value, set "Expect: 100-continue".
Any type of authentication that requires multiple rounds (such as NTLM
or Kerberos) requires a 100 Continue (if we're larger than
http.postBuffer) because otherwise we send the pack data before we're
authenticated, the push gets a 401 response, and we can't rewind the
stream.  We don't know for certain what other custom schemes might
require this, but the HTTP/1.1 standard has required handling it since
1999, the broken HTTP server for which we disabled this (Google's) has
now been fixed for some time, and libcurl has a 1-second fallback in
case the HTTP server is still broken.  In addition, it is
not unreasonable to require compliance with a 25-year old standard to
use new Git features.  For all of these reasons, do so here.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:07 -07:00
brian m. carlson 5af5cc68aa docs: indicate new credential protocol fields
Now that we have new fields (authtype and credential), let's document
them for users and credential helper implementers.

Indicate specifically what common values of authtype are and what values
are allowed.  Note that, while common, digest and NTLM authentication
are insecure because they require unsalted, uniterated password hashes
to be stored.

Tell users that they can continue to use a username and password even if
the new capability is supported.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:07 -07:00
brian m. carlson 2ae6dc686d credential: add a field called "ephemeral"
Now that we have support for a wide variety of types of authentication,
it's important to indicate to other credential helpers whether they
should store credentials, since not every credential helper may
intuitively understand all possible values of the authtype field.  Do so
with a boolean field called "ephemeral", to indicate whether the
credential is expected to be temporary.

For example, in HTTP Digest authentication, the Authorization header
value is based off a nonce.  It isn't useful to store this value
for later use because reusing the credential long term will not result
in successful authentication due to the nonce necessarily differing.

An additional case is potentially short-lived credentials, which may
last only a few hours.  It similarly wouldn't be helpful for other
credential helpers to attempt to provide these much later.

We do still pass the value to "git credential store" or "git credential
erase", since it may be helpful to the original helper to know whether
the operation was successful.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:07 -07:00
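A storage helper honoring the flag might look like this sketch; the "1"/empty spelling of the boolean and the function name are assumptions for illustration:

```shell
# Decide whether to persist a credential based on the "ephemeral"
# attribute read from the protocol input.
handle_store () {
    # $1: value of the "ephemeral" attribute ("1" or empty)
    if [ "$1" = "1" ]; then
        echo "not persisting: credential is ephemeral"
    else
        echo "persisting credential"
    fi
}

handle_store "1"   # e.g. a Digest response built from a nonce
handle_store ""
```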
brian m. carlson ca9ccbf674 credential: gate new fields on capability
We support the new credential and authtype fields, but we lack a way to
indicate to a credential helper that we'd like them to be used.  Without
some sort of indication, the credential helper doesn't know if it should
try to provide us a username and password, or a pre-encoded credential.
For example, the helper might prefer a more restricted Bearer token if
pre-encoded credentials are possible, but might have to fall back to
more general username and password if not.

Let's provide a simple way to indicate whether Git (or, for that matter,
the helper) is capable of understanding the authtype and credential
fields.  We send this capability when we generate a request, and the
other side may reply to indicate to us that it does, too.

For now, don't enable sending capabilities for the HTTP code.  In a
future commit, we'll introduce appropriate handling for that code,
which requires more in-depth work.

The logic for determining whether a capability is supported may seem
complex, but it is not.  At each stage, we emit the capability to the
following stage if all preceding stages have declared it.  Thus, if the
caller to git credential fill didn't declare it, then we won't send it
to the helper, and if fill's caller did send but the helper doesn't
understand it, then we won't send it on in the response.  If we're an
internal user, then we know about all capabilities and will request
them.

For "git credential approve" and "git credential reject", we set the
helper capability before calling the helper, since we assume that the
input we're getting from the external program comes from a previous call
to "git credential fill", and thus we'll send a capability to the
helper if and only if we got one from the standard input, which is the
correct behavior.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:06 -07:00
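The staged forwarding described above can be sketched as a wire exchange
(hostname and token are invented; the `#` lines are annotations, not part of
the protocol):

```
# caller -> "git credential fill" -> helper: the capability was
# declared at every preceding stage, so it is passed through
capability[]=authtype
protocol=https
host=git.example.com

# helper -> git: the helper confirms the capability, so the new
# fields in its reply may be used
capability[]=authtype
authtype=Bearer
credential=token-abc123
```

Had the helper omitted the `capability[]=authtype` line, the reply's new
fields would not be forwarded to the caller.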
brian m. carlson 6a6d6fb12e credential: add a field for pre-encoded credentials
At the moment, our credential code wants to find a username and password
for access, which, for HTTP, it will pass to libcurl to encode and
process.  However, many users want to use authentication schemes that
libcurl doesn't support, such as Bearer authentication.  In these
schemes, the secret is not a username and password pair, but some sort
of token that meets the production for authentication data in the RFC.

In fact, in general, it's useful to allow our credential helper to have
knowledge about what specifically to put in the protocol header.  Thus,
add a field, credential, which contains data that's preencoded to be
suitable for the protocol in question.  If we have such data, we need
neither a username nor a password, so make that adjustment as well.

It is in theory possible to reuse the password field for this.  However,
if we do so, we must know whether the credential helper supports our new
scheme before sending it data, which necessitates some sort of
capability inquiry, because otherwise an uninformed credential helper
would store our preencoded data as a password, which would fail the next
time we attempted to connect to the remote server.  This design is
substantially simpler, and we can hint to the credential helper that we
support this approach with a simple new field instead of needing to
query it first.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:06 -07:00
brian m. carlson d01c76f1cf http: use new headers for each object request
Currently we create one set of headers for all object requests and reuse
it.  However, we'll need to adjust the headers for authentication
purposes in the future, so let's create a new set for each request so
that we can adjust them if the authentication changes.

Note that the cost of allocation here is tiny compared to the fact that
we're making a network call, not to mention probably a full TLS
connection, so this shouldn't have a significant impact on performance.
Moreover, nobody who cares about performance is using the dumb HTTP
protocol anyway, since it often makes huge numbers of requests compared
to the smart protocol.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:06 -07:00
brian m. carlson 90765ea81e remote-curl: reset headers on new request
When we retry a post_rpc request, we currently reuse the same headers as
before.  In the future, we'd like to be able to modify them based on the
result we get back, so let's reset them on each retry so we can avoid
sending potentially duplicate headers if the values change.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:06 -07:00
brian m. carlson 7046f1d572 credential: add an authtype field
When Git makes an HTTP request, it can negotiate the type of
authentication to use with the server provided the authentication scheme
is one of a few well-known types (Basic, Digest, NTLM, or Negotiate).
However, some servers wish to use other types of authentication, such as
the Bearer type from OAuth2.  Since libcurl doesn't natively support
this type, it isn't possible to use it, and the user is forced to
specify the Authorization header using the http.extraheader setting.

However, storing a plaintext token in the repository configuration is
not very secure, especially if a repository can be shared by multiple
parties.  We already have support for many types of secure credential
storage by using credential helpers, so let's teach credential helpers
how to produce credentials for an arbitrary scheme.

If the credential helper specifies an authtype field, then it specifies
an authentication scheme (e.g., Bearer) and the password field specifies
the raw authentication token, with any encoding already specified.  We
reuse the password field for this because some credential helpers store
the metadata without encryption even though the password is encrypted,
and we'd like to avoid insecure storage if an older version of the
credential helper gets ahold of the data.

The username is not used in this case, but it is still preserved for the
purpose of finding the right credential if the user has multiple
accounts.

If the authtype field is not specified, then the password behaves as
normal and it is passed along with the username to libcurl.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-16 22:39:06 -07:00
Patrick Steinhardt 70b81fbf3c t0612: add tests to exercise Git/JGit reftable compatibility
While the reftable format is a recent introduction in Git, JGit already
knows to read and write reftables since 2017. Given the complexity of
the format there is a very real risk of incompatibilities between those
two implementations, which is something that we really want to avoid.

Add some basic tests that verify that reftables written by Git and JGit
can be read by the respective other implementation. For now this test
suite is rather small, only covering basic functionality. But it serves
as a good starting point and can be extended over time.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:51 -07:00
Patrick Steinhardt db1d63bf57 t0610: fix non-portable variable assignment
Older versions of the Dash shell fail to parse `local var=val`
assignments in some cases when `val` is unquoted. Such failures can be
observed e.g. with Ubuntu 20.04 and older, which has a Dash version that
still has this bug.

Such an assignment has been introduced in t0610. The issue wasn't
detected for a while because this test used to only run when the
GIT_TEST_DEFAULT_REF_FORMAT environment variable was set to "reftable".
We have dropped that requirement now though, meaning that it runs
unconditionally, including on jobs which use such older versions of
Ubuntu.

We have worked around such issues in the past, e.g. in ebee5580ca
(parallel-checkout: avoid dash local bug in tests, 2021-06-06), by
quoting the `val` side. Apply the same fix to t0610.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:51 -07:00
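The failure mode and the fix can be sketched in a few lines of shell (the
function and variable names are illustrative, not taken from t0610):

```shell
demo () {
	val="two words"
	# On affected Dash versions the unquoted form
	#     local copy=$val
	# aborts with "local: words: bad variable name", because the
	# expansion is field-split before `local` parses it. Quoting
	# the right-hand side avoids the bug on common /bin/sh
	# implementations:
	local copy="$val"
	echo "$copy"
}
demo
```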
Patrick Steinhardt ca13c3e94a t06xx: always execute backend-specific tests
The tests in t06xx exercise specific ref formats. Next to probing some
basic functionality, these tests also exercise other low-level details
specific to the format. Those tests are only executed though in case
`GIT_TEST_DEFAULT_REF_FORMAT` is set to the ref format of the respective
backend-under-test.

Ideally, we would run the full test matrix for ref formats such that our
complete test suite is executed with every supported format on every
supported platform. This is quite an expensive undertaking though, and
thus we only execute e.g. the "reftable" tests on macOS and Linux. As a
result, we basically have no test coverage for the "reftable" format at
all on other platforms like Windows.

Adapt these tests so that they override `GIT_TEST_DEFAULT_REF_FORMAT`,
which means that they'll always execute. This increases test coverage on
platforms that don't run the full test matrix, which at least gives us
some basic test coverage on those platforms for the "reftable" format.

This of course comes at the cost of running those tests multiple times
on platforms where we do run the full test matrix. But arguably, this is
a good thing because it will also cause us to e.g. run those tests with
the address sanitizer and other non-standard parameters.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:51 -07:00
Patrick Steinhardt 04ba2c7eb3 ci: install JGit dependency
We have some tests in t5310 that use JGit to verify that bitmaps can be
read both by Git and by JGit. We do not execute these tests in our CI
jobs though because we don't make JGit available there. Consequently,
the tests basically bitrot because almost nobody is ever going to have
JGit in their path.

Install JGit to plug this test gap.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt ca44ef3165 ci: make Perforce binaries executable for all users
The Perforce binaries are only made executable for the current user. On
GitLab CI though we execute tests as a different user than "root", and
thus these binaries may not be executable by that test user at all. This
has gone unnoticed so far because those binaries are optional -- in case
they don't exist we simply skip over tests requiring them.

Fix the setup so that we set the executable bits for all users.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt 9cdeb34b96 ci: merge scripts which install dependencies
We have two different scripts which install dependencies, one for
dockerized jobs and one for non-dockerized ones. Naturally, these
scripts have quite some duplication. Furthermore, either of these
scripts is missing some test dependencies that the respective other
script has, thus reducing test coverage.

Merge those two scripts so that there is a single source of truth for
test dependencies.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt 2c5c7639e5 ci: fix setup of custom path for GitLab CI
Part of "install-dependencies.sh" is to install some binaries required
for tests into a custom directory that gets added to the PATH. This
directory is located at "$HOME/path" and thus depends on the current
user that the script executes as.

This creates problems for GitLab CI, which installs dependencies as the
root user, but runs tests as a separate, unprivileged user. As their
respective home directories are different, we will end up using two
different custom path directories. Consequently, the unprivileged user
will not be able to find the binaries that were set up as root user.

Fix this issue by allowing CI to override the custom path, which allows
GitLab to set up a constant value that isn't derived from "$HOME".

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt d1ef3d3b1d ci: merge custom PATH directories
We're downloading various executables required by our tests. Each of
these executables goes into its own directory, which is then appended to
the PATH variable. Consequently, whenever we add a new dependency and
thus a new directory, we would have to adapt to this change in several
places.

Refactor this to instead put all binaries into a single directory.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt 40c60f4c12 ci: convert "install-dependencies.sh" to use "/bin/sh"
We're about to merge the "install-docker-dependencies.sh" script into
"install-dependencies.sh". This will also move our Alpine-based jobs
over to use the latter script. This script uses the Bash shell though,
which is not available by default on Alpine Linux.

Refactor "install-dependencies.sh" to use "/bin/sh" instead of Bash.
This requires us to get rid of the pushd/popd invocations, which are
replaced by some more elaborate commands that download or extract
executables right to where they are needed.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt 21bcb4a602 ci: drop duplicate package installation for "linux-gcc-default"
The "linux-gcc-default" job installs common Ubuntu packages. This is
already done in the distro-specific switch, so we basically duplicate
the effort here.

Drop the duplicate package installations and inline the variable that
contains those common packages.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt 11d3f1aa5f ci: skip sudo when we are already root
Our "install-dependencies.sh" script is executed by non-dockerized jobs
to install dependencies. These jobs don't run with "root" permissions,
but with a separate user. Consequently, we need to use sudo(8) there to
elevate permissions when installing packages.

We're about to merge "install-docker-dependencies.sh" into that script
though, and our Docker containers do run as "root". Using sudo(8) is
thus unnecessary there, even though it would be harmless. On some images
like Alpine Linux though there is no sudo(8) available by default, which
would consequently break the build.

Adapt the script to make "sudo" a no-op when running as "root" user.
This allows us to easily reuse the script for our dockerized jobs.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
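A minimal sketch of that pattern (`setup_sudo` and its uid argument are
illustrative; the real script checks `$(id -u)` directly):

```shell
setup_sudo () {
	if test "$1" = 0
	then
		# Already root: shadow sudo with a pass-through function
		# so the same script works on images without sudo(8).
		sudo () {
			"$@"
		}
	fi
}

setup_sudo 0            # pretend to run as root (uid 0)
sudo echo privileged    # executed directly by the pass-through
```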
Patrick Steinhardt ab2b3aadf3 ci: expose distro name in dockerized GitHub jobs
Expose a distro name in dockerized jobs. This will be used in a
subsequent commit where we merge the installation scripts for dockerized
and non-dockerized jobs.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
Patrick Steinhardt 2d65e5b6a6 ci: rename "runs_on_pool" to "distro"
The "runs_on_pool" environment variable is used by our CI scripts to
distinguish the different kinds of operating systems. It is quite
specific to GitHub Actions though and not really a descriptive name.

Rename the variable to "distro" to clarify its intent.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:49 -07:00
Patrick Steinhardt fa74f32291 reftable/block: reuse compressed array
Similar to the preceding commit, let's reuse the `compressed` array that
we use to store compressed data in. This results in a small reduction in
memory allocations when writing many refs.

Before:

  HEAP SUMMARY:
      in use at exit: 671,931 bytes in 151 blocks
    total heap usage: 22,620,528 allocs, 22,620,377 frees, 1,245,549,984 bytes allocated

After:

  HEAP SUMMARY:
      in use at exit: 671,931 bytes in 151 blocks
    total heap usage: 22,618,257 allocs, 22,618,106 frees, 1,236,351,528 bytes allocated

So while the reduction in allocations isn't really all that big, it's
low-hanging fruit and thus there isn't much of a reason not to pick it.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:42 -07:00
Patrick Steinhardt a155ab2bf4 reftable/block: reuse zstream when writing log blocks
While most reftable blocks are written to disk as-is, blocks for log
records are compressed with zlib. To compress them we use `compress2()`,
which is a simple wrapper around the more complex `zstream` interface
that would require multiple function invocations.

One downside of this interface is that `compress2()` will reallocate
internal state of the `zstream` interface on every single invocation.
Consequently, as we call `compress2()` for every single log block which
we are about to write, this can lead to quite some memory allocation
churn.

Refactor the code so that the block writer reuses a `zstream`. This
significantly reduces the number of bytes allocated when writing many
refs in a single transaction, as demonstrated by the following benchmark
that writes 100k refs in a single transaction.

Before:

  HEAP SUMMARY:
      in use at exit: 671,931 bytes in 151 blocks
    total heap usage: 22,631,887 allocs, 22,631,736 frees, 1,854,670,793 bytes allocated

After:

  HEAP SUMMARY:
      in use at exit: 671,931 bytes in 151 blocks
    total heap usage: 22,620,528 allocs, 22,620,377 frees, 1,245,549,984 bytes allocated

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:42 -07:00
Patrick Steinhardt 8aaeffe3b5 reftable/writer: reset `last_key` instead of releasing it
The reftable writer tracks the last key that it has written so that it
can properly compute the compressed prefix for the next record it is
about to write. This last key must be reset whenever we move on to write
the next block, which is done in `writer_reinit_block_writer()`. We do
this by calling `strbuf_release()` though, which needlessly deallocates
the underlying buffer.

Convert the code to use `strbuf_reset()` instead, which saves one
allocation per block we're about to write. This requires us to also
amend `reftable_writer_free()` to release the buffer's memory now as we
previously seemingly relied on `writer_reinit_block_writer()` to release
the memory for us. Releasing memory here is the right thing to do
anyway.

While at it, convert a callsite where we truncate the buffer by setting
its length to zero to instead use `strbuf_reset()`, too.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:41 -07:00
Patrick Steinhardt 60dd319519 reftable/writer: unify releasing memory
There are two code paths which release memory of the reftable writer:

  - `reftable_writer_close()` releases internal state after it has
    written data.

  - `reftable_writer_free()` releases the block that was written to and
    the writer itself.

Both code paths free different parts of the writer, and consequently the
caller must make sure to call both. And while callers mostly do this
already, this falls apart when a write failure causes the caller to skip
calling `reftable_writer_close()`.

Introduce a new function `reftable_writer_release()` that releases all
internal state and call it from both paths. This way it is fine for the
caller not to call `reftable_writer_close()`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:41 -07:00
Patrick Steinhardt 7e892fec47 reftable/writer: refactorings for `writer_flush_nonempty_block()`
Large parts of the reftable library do not conform to Git's typical code
style. Refactor `writer_flush_nonempty_block()` such that it conforms
better to it and add some documentation that explains some of its more
intricate behaviour.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:41 -07:00
Patrick Steinhardt d0dd119f72 reftable/writer: refactorings for `writer_add_record()`
Large parts of the reftable library do not conform to Git's typical code
style. Refactor `writer_add_record()` such that it conforms better to it
and add some documentation that explains some of its more intricate
behaviour.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:41 -07:00
Patrick Steinhardt 44afd85fbd refs/reftable: don't recompute committer ident
In order to write reflog entries we need to compute the committer's
identity as it gets encoded in the log record itself. The reftable
backend does this via `git_committer_info()` and `split_ident_line()` in
`fill_reftable_log_record()`, which use the Git config as well as
environment variables to figure out the identity.

While most callers would only call `fill_reftable_log_record()` once or
twice, `write_transaction_table()` will call it as many times as there
are queued ref updates. This can be quite a waste of effort when writing
many refs with reflog entries in a single transaction.

Refactor the code to pre-compute the committer information. This results
in a small speedup when writing 100000 refs in a single transaction:

  Benchmark 1: update-ref: create many refs (HEAD~)
    Time (mean ± σ):      2.895 s ±  0.020 s    [User: 1.516 s, System: 1.374 s]
    Range (min … max):    2.868 s …  2.983 s    100 runs

  Benchmark 2: update-ref: create many refs (HEAD)
    Time (mean ± σ):      2.845 s ±  0.017 s    [User: 1.461 s, System: 1.379 s]
    Range (min … max):    2.803 s …  2.913 s    100 runs

  Summary
    update-ref: create many refs (HEAD) ran
      1.02 ± 0.01 times faster than update-ref: create many refs (HEAD~)

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:41 -07:00
Patrick Steinhardt 485c63cf5c reftable: remove name checks
In the preceding commit we have disabled name checks in the "reftable"
backend. These checks were responsible for verifying multiple things
when writing records to the reftable stack:

  - Detecting file/directory conflicts. Starting with the preceding
    commits this is now handled by the reftable backend itself via
    `refs_verify_refname_available()`.

  - Validating refnames. This is handled by `check_refname_format()` in
the generic ref transaction layer.

The code in the reftable library is thus not used anymore and likely to
bitrot over time. Remove it.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 17:01:41 -07:00
Patrick Steinhardt 4af31dc84a refs/reftable: skip duplicate name checks
All the callback functions which write refs in the reftable backend
perform D/F conflict checks via `refs_verify_refname_available()`. But
in reality we perform these D/F conflict checks a second time in the
reftable library via `stack_check_addition()`.

Interestingly, the code in the reftable library is inferior compared to
the generic function:

  - It is slower than `refs_verify_refname_available()`, even though
    this can probably be optimized.

  - It does not provide a proper error message to the caller, and thus
    all the user would see is a generic "file/directory conflict"
    message.

Disable the D/F conflict checks in the reftable library by setting the
`skip_name_check` write option. This results in a non-negligible speedup
when writing many refs. The following benchmark writes 100k refs in a
single transaction:

  Benchmark 1: update-ref: create many refs (HEAD~)
    Time (mean ± σ):      3.241 s ±  0.040 s    [User: 1.854 s, System: 1.381 s]
    Range (min … max):    3.185 s …  3.454 s    100 runs

  Benchmark 2: update-ref: create many refs (HEAD)
    Time (mean ± σ):      2.878 s ±  0.024 s    [User: 1.506 s, System: 1.367 s]
    Range (min … max):    2.838 s …  2.960 s    100 runs

  Summary
    update-ref: create many refs (HEAD~) ran
      1.13 ± 0.02 times faster than update-ref: create many refs (HEAD)

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 16:59:02 -07:00
Patrick Steinhardt 455d61b6d2 refs/reftable: perform explicit D/F check when writing symrefs
We already perform explicit D/F checks in all reftable callbacks which
write refs, except when writing symrefs. For one this leads to an error
message which isn't perfectly actionable because we only tell the user
that there was a D/F conflict, but not which refs conflicted with each
other. But second, once all ref updating callbacks explicitly check for
D/F conflicts, we can disable the D/F checks in the reftable library
itself and thus avoid some duplicated efforts.

Refactor the code that writes symref tables to explicitly call into
`refs_verify_refname_available()` when writing symrefs.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 16:59:01 -07:00
Patrick Steinhardt f57cc987a9 refs/reftable: fix D/F conflict error message on ref copy
The `write_copy_table()` function is shared between the reftable
implementations for renaming and copying refs. The only difference
between those two cases is that the rename will also delete the old
reference, whereas copying won't.

This has resulted in a bug though where we don't properly verify refname
availability. When calling `refs_verify_refname_available()`, we always
add the old ref name to the list of refs to be skipped when computing
availability, which indicates that the name would be available even if
it already exists at the current point in time. This is only the right
thing to do for renames though, not for copies.

The consequence of this bug is quite harmless because the reftable
backend has its own checks for D/F conflicts further down in the call
stack, and thus we refuse the update regardless of the bug. But all the
user gets in this case is an uninformative message that copying the ref
has failed, without any further details.

Fix the bug and only add the old name to the skip-list in case we rename
the ref. Consequently, this error case will now be handled by
`refs_verify_refname_available()`, which knows to provide a proper error
message.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08 16:59:01 -07:00
Hudson Worden 0cbeac6dfb Add --no-verify support for cherry-pick 2023-07-16 14:25:39 -04:00
98 changed files with 2099 additions and 1282 deletions

@@ -303,7 +303,7 @@ jobs:
CC: ${{matrix.vector.cc}}
CC_PACKAGE: ${{matrix.vector.cc_package}}
jobname: ${{matrix.vector.jobname}}
runs_on_pool: ${{matrix.vector.pool}}
distro: ${{matrix.vector.pool}}
runs-on: ${{matrix.vector.pool}}
steps:
- uses: actions/checkout@v4
@@ -342,12 +342,16 @@ jobs:
vector:
- jobname: linux-musl
image: alpine
distro: alpine-latest
- jobname: linux32
image: daald/ubuntu32:xenial
distro: ubuntu32-16.04
- jobname: pedantic
image: fedora
distro: fedora-latest
env:
jobname: ${{matrix.vector.jobname}}
distro: ${{matrix.vector.distro}}
runs-on: ubuntu-latest
container: ${{matrix.vector.image}}
steps:
@@ -355,7 +359,7 @@ jobs:
if: matrix.vector.jobname != 'linux32'
- uses: actions/checkout@v1 # cannot be upgraded because Node.js Actions aren't supported in this container
if: matrix.vector.jobname == 'linux32'
- run: ci/install-docker-dependencies.sh
- run: ci/install-dependencies.sh
- run: ci/run-build-and-tests.sh
- name: print test failures
if: failure() && env.FAILED_TEST_ARTIFACTS != ''

@@ -9,8 +9,10 @@ workflow:
test:linux:
image: $image
variables:
CUSTOM_PATH: "/custom"
before_script:
- ./ci/install-docker-dependencies.sh
- ./ci/install-dependencies.sh
script:
- useradd builder --create-home
- chown -R builder "${CI_PROJECT_DIR}"
@@ -98,7 +100,7 @@ static-analysis:
variables:
jobname: StaticAnalysis
before_script:
- ./ci/install-docker-dependencies.sh
- ./ci/install-dependencies.sh
script:
- ./ci/run-static-analysis.sh
- ./ci/check-directional-formatting.bash

@@ -10,7 +10,22 @@ UI, Workflows & Features
* The "--rfc" option of "git format-patch" learned to take an
optional string value to be used in place of "RFC" to tweak the
"[PATCH]" on the subject header.
(merge ce36894509 jc/format-patch-rfc-more later to maint).
* The credential helper protocol, together with the HTTP layer, have
been enhanced to support authentication schemes different from
username & password pair, like Bearer and NTLM.
* Command line completion script (in contrib/) learned to complete
"git symbolic-ref" a bit better (you need to enable plumbing
commands to be completed with GIT_COMPLETION_SHOW_ALL_COMMANDS).
* When the user responds to a prompt given by "git add -p" with an
unsupported command, a list of available commands was given, which
was too much if the user knew what they wanted to type but merely
made a typo. Now the user gets a much shorter error message.
* The color parsing code learned to handle 12-bit RGB colors, spelled
as "#RGB" (in addition to "#RRGGBB" that is already supported).
Performance, Internal Implementation, Development Support etc.
@@ -23,6 +38,16 @@ Performance, Internal Implementation, Development Support etc.
that are used in fuzzer tests, to make sure at least they build
without bitrot, in Linux CI runs.
* Code to write out reftable has seen some optimization and
simplification.
* Tests to ensure interoperability between reftable written by jgit
and our code have been added and enabled in CI.
* The singleton index_state instance "the_index" has been eliminated
by always instantiating "the_repository" and replacing references
to "the_index" with references to its .index member.
Fixes since v2.45
-----------------
@@ -52,4 +77,16 @@ Fixes since v2.45
errored out. Now it keeps going.
(merge c75662bfc9 js/for-each-repo-keep-going later to maint).
* zsh can pretend to be a normal shell pretty well except for some
glitches that we tickle in some of our scripts. Work them around
so that "vimdiff" and our test suite work well enough with it.
(merge fedd5c79ff bc/zsh-compatibility later to maint).
* Command line completion support for zsh (in contrib/) has been
updated to stop exposing internal state to end-user shell
interaction.
(merge 3c20acdf46 dk/zsh-git-repo-path-fix later to maint).
* Other code cleanup, docfix, build fix, etc.
(merge 4cf6e7bf5e jt/doc-submitting-rerolled-series later to maint).
(merge a5a4cb7b27 rs/diff-parseopts-cleanup later to maint).

@@ -455,10 +455,12 @@ e-mail tools, so that they may comment on specific portions of
your code. For this reason, each patch should be submitted
"inline" in a separate message.
Multiple related patches should be grouped into their own e-mail
thread to help readers find all parts of the series. To that end,
send them as replies to either an additional "cover letter" message
(see below), the first patch, or the respective preceding patch.
All subsequent versions of a patch series and other related patches should be
grouped into their own e-mail thread to help readers find all parts of the
series. To that end, send them as replies to either an additional "cover
letter" message (see below), the first patch, or the respective preceding patch.
Here is a link:MyFirstContribution.html#v2-git-send-email[step-by-step guide] on
how to submit updated versions of a patch series.
If your log message (including your name on the
`Signed-off-by` trailer) is not writable in ASCII, make sure that

@@ -316,7 +316,8 @@ terminals, this is usually not the same as setting to "white black".
Colors may also be given as numbers between 0 and 255; these use ANSI
256-color mode (but note that not all terminals may support this). If
your terminal supports it, you may also specify 24-bit RGB values as
hex, like `#ff0ab3`.
hex, like `#ff0ab3`, or 12-bit RGB values like `#f1b`, which is
equivalent to the 24-bit color `#ff11bb`.
+
The accepted attributes are `bold`, `dim`, `ul`, `blink`, `reverse`,
`italic`, and `strike` (for crossed-out or "strikethrough" letters).

@@ -8,7 +8,7 @@ git-credential - Retrieve and store user credentials
SYNOPSIS
--------
------------------
'git credential' (fill|approve|reject)
'git credential' (fill|approve|reject|capability)
------------------
DESCRIPTION
@ -41,6 +41,9 @@ If the action is `reject`, git-credential will send the description to
any configured credential helpers, which may erase any stored
credentials matching the description.
If the action is `capability`, git-credential will announce any capabilities
it supports to standard output.
If the action is `approve` or `reject`, no output should be emitted.
TYPICAL USE OF GIT CREDENTIAL
@ -111,7 +114,9 @@ attribute per line. Each attribute is specified by a key-value pair,
separated by an `=` (equals) sign, followed by a newline.
The key may contain any bytes except `=`, newline, or NUL. The value may
contain any bytes except newline or NUL.
contain any bytes except newline or NUL. A line, including the trailing
newline, may not exceed 65535 bytes in order to allow implementations to
parse efficiently.
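The key-value stream described above is simple to consume; here is an illustrative sketch of a reader that enforces those rules (this is not git's own implementation):

```python
def parse_credential_attrs(data):
    """Parse a git-credential attribute stream: one 'key=value' per line,
    keys may not contain '=', a line plus its trailing newline may not
    exceed 65535 bytes, and 'key[]' attributes collect multiple values.
    Illustrative sketch only."""
    attrs = {}
    for line in data.split(b"\n"):
        if not line:
            continue  # blank line / end of stream
        if len(line) + 1 > 65535:
            raise ValueError("attribute line exceeds 65535 bytes")
        key, sep, value = line.partition(b"=")
        if not sep:
            raise ValueError("malformed attribute line")
        if key.endswith(b"[]"):
            attrs.setdefault(key.decode(), []).append(value.decode())
        else:
            attrs[key.decode()] = value.decode()
    return attrs
```

For example, `parse_credential_attrs(b"protocol=https\nhost=example.com\n")` yields `{'protocol': 'https', 'host': 'example.com'}`.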
Attributes with keys that end with C-style array brackets `[]` can have
multiple values. Each instance of a multi-valued attribute forms an
@ -178,6 +183,61 @@ empty string.
Components which are missing from the URL (e.g., there is no
username in the example above) will be left unset.
`authtype`::
This indicates that the authentication scheme in question should be used.
Common values for HTTP and HTTPS include `basic`, `bearer`, and `digest`,
although the latter is insecure and should not be used. If `credential`
is used, this may be set to an arbitrary string suitable for the protocol in
question (usually HTTP).
+
This value should not be sent unless the appropriate capability (see below) is
provided on input.
`credential`::
The pre-encoded credential, suitable for the protocol in question (usually
HTTP). If this key is sent, `authtype` is mandatory, and `username` and
`password` are not used. For HTTP, Git concatenates the `authtype` value and
this value with a single space to determine the `Authorization` header.
+
This value should not be sent unless the appropriate capability (see below) is
provided on input.
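For HTTP, the header construction described above is a single-space join of the two values; a trivial sketch (the token below is an invented example):

```python
def authorization_header(authtype, credential):
    """Per the description above, Git forms the Authorization header by
    concatenating the authtype value and the pre-encoded credential
    with a single space. The values used below are invented."""
    return "Authorization: " + authtype + " " + credential

print(authorization_header("Bearer", "dG9rZW4="))
# prints: Authorization: Bearer dG9rZW4=
```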
`ephemeral`::
This boolean value indicates, if true, that the value in the `credential`
field should not be saved by the credential helper because its usefulness is
limited in time. For example, an HTTP Digest `credential` value is computed
using a nonce and reusing it will not result in successful authentication.
This may also be used for situations with short duration (e.g., 24-hour)
credentials. The default value is false.
+
The credential helper will still be invoked with `store` or `erase` so that it
can determine whether the operation was successful.
+
This value should not be sent unless the appropriate capability (see below) is
provided on input.
`state[]`::
This value provides an opaque state that will be passed back to this helper
if it is called again. Each different credential helper may specify this
once. The value should include a prefix unique to the credential helper and
should ignore values that don't match its prefix.
+
This value should not be sent unless the appropriate capability (see below) is
provided on input.
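A helper following the prefix convention above might filter its own `state[]` entries like this (the `myhelper:` prefix is an invented example, not a real convention):

```python
def own_state(attrs, prefix="myhelper:"):
    """Return only the state[] values carrying this helper's unique
    prefix, ignoring entries from other helpers as the text advises.
    'myhelper:' is a hypothetical prefix chosen for illustration."""
    return [s for s in attrs.get("state[]", []) if s.startswith(prefix)]

print(own_state({"state[]": ["myhelper:token=abc", "other:xyz"]}))
# prints: ['myhelper:token=abc']
```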
`continue`::
This is a boolean value, which, if enabled, indicates that this
authentication is a non-final part of a multistage authentication step. This
is common in protocols such as NTLM and Kerberos, where two rounds of client
authentication are required, and setting this flag allows the credential
helper to implement the multistage authentication step. This flag should
only be sent if a further stage is required; that is, if another round of
authentication is expected.
+
This value should not be sent unless the appropriate capability (see below) is
provided on input. This attribute is 'one-way' from a credential helper to
pass information to Git (or other programs invoking `git credential`).
`wwwauth[]`::
When an HTTP response is received by Git that includes one or more
@ -189,7 +249,45 @@ attribute 'wwwauth[]', where the order of the attributes is the same as
they appear in the HTTP response. This attribute is 'one-way' from Git
to pass additional information to credential helpers.
Unrecognised attributes are silently discarded.
`capability[]`::
This signals that Git, or the helper, as appropriate, supports the capability
in question. This can be used to provide better, more specific data as part
of the protocol. A `capability[]` directive must precede any value depending
on it and these directives _should_ be the first item announced in the
protocol.
+
There are two currently supported capabilities. The first is `authtype`, which
indicates that the `authtype`, `credential`, and `ephemeral` values are
understood. The second is `state`, which indicates that the `state[]` and
`continue` values are understood.
+
It is not obligatory to use the additional features just because the capability
is supported, but they should not be provided without the capability.
Unrecognised attributes and capabilities are silently discarded.
[[CAPA-IOFMT]]
CAPABILITY INPUT/OUTPUT FORMAT
------------------------------
For `git credential capability`, the format is slightly different. First, a
`version 0` announcement is made to indicate the current version of the
protocol, and then each capability is announced with a line like `capability
authtype`. Credential helpers may also implement this format, again with the
`capability` argument. Additional lines may be added in the future; callers
should ignore lines which they don't understand.
Because this is a new part of the credential helper protocol, older versions of
Git, as well as some credential helpers, may not support it. If a non-zero
exit status is received, or if the first line doesn't start with the word
`version` and a space, callers should assume that no capabilities are supported.
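A caller probing for capabilities can apply those rules directly; a sketch that assumes the caller has already captured the helper's output and exit status (illustrative, not git's implementation):

```python
def parse_capabilities(output, exit_status):
    """Interpret `git credential capability` output as described above:
    on a non-zero exit, or if the first line does not start with
    'version' and a space, assume no capabilities are supported;
    silently ignore lines we do not understand."""
    if exit_status != 0:
        return set()
    lines = output.splitlines()
    if not lines or not lines[0].startswith("version "):
        return set()
    caps = set()
    for line in lines[1:]:
        if line.startswith("capability "):
            caps.add(line.split(" ", 1)[1])
    return caps

print(parse_capabilities("version 0\ncapability authtype\ncapability state\n", 0))
```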
The intention of this format is to differentiate it from the credential output
in an unambiguous way. It is possible to use very simple credential helpers
(e.g., inline shell scripts) which always produce identical output. Using a
distinct format allows users to continue to use this syntax without having to
worry about correctly implementing capability advertisements or accidentally
confusing callers querying for capabilities.
GIT
---

View File

@ -2665,7 +2665,6 @@ REFTABLE_OBJS += reftable/merged.o
REFTABLE_OBJS += reftable/pq.o
REFTABLE_OBJS += reftable/reader.o
REFTABLE_OBJS += reftable/record.o
REFTABLE_OBJS += reftable/refname.o
REFTABLE_OBJS += reftable/generic.o
REFTABLE_OBJS += reftable/stack.o
REFTABLE_OBJS += reftable/tree.o
@ -2678,7 +2677,6 @@ REFTABLE_TEST_OBJS += reftable/merged_test.o
REFTABLE_TEST_OBJS += reftable/pq_test.o
REFTABLE_TEST_OBJS += reftable/record_test.o
REFTABLE_TEST_OBJS += reftable/readwrite_test.o
REFTABLE_TEST_OBJS += reftable/refname_test.o
REFTABLE_TEST_OBJS += reftable/stack_test.o
REFTABLE_TEST_OBJS += reftable/test_framework.o
REFTABLE_TEST_OBJS += reftable/tree_test.o

View File

@ -293,10 +293,9 @@ static void err(struct add_p_state *s, const char *fmt, ...)
va_list args;
va_start(args, fmt);
fputs(s->s.error_color, stderr);
vfprintf(stderr, fmt, args);
fputs(s->s.reset_color, stderr);
fputc('\n', stderr);
fputs(s->s.error_color, stdout);
vprintf(fmt, args);
puts(s->s.reset_color);
va_end(args);
}
@ -1326,7 +1325,7 @@ static int apply_for_checkout(struct add_p_state *s, struct strbuf *diff,
err(s, _("Nothing was applied.\n"));
} else
/* As a last resort, show the diff to the user */
fwrite(diff->buf, diff->len, 1, stderr);
fwrite(diff->buf, diff->len, 1, stdout);
return 0;
}
@ -1668,7 +1667,7 @@ static int patch_update_file(struct add_p_state *s,
}
} else if (s->answer.buf[0] == 'p') {
rendered_hunk_index = -1;
} else {
} else if (s->answer.buf[0] == '?') {
const char *p = _(help_patch_remainder), *eol = p;
color_fprintf(stdout, s->s.help_color, "%s",
@ -1692,6 +1691,9 @@ static int patch_update_file(struct add_p_state *s,
color_fprintf_ln(stdout, s->s.help_color,
"%.*s", (int)(eol - p), p);
}
} else {
err(s, _("Unknown command '%s' (use '?' for help)"),
s->answer.buf);
}
}
@ -1778,9 +1780,9 @@ int run_add_p(struct repository *r, enum add_p_mode mode,
break;
if (s.file_diff_nr == 0)
fprintf(stderr, _("No changes.\n"));
err(&s, _("No changes."));
else if (binary_count == s.file_diff_nr)
fprintf(stderr, _("Only binary files changed.\n"));
err(&s, _("Only binary files changed."));
add_p_state_clear(&s);
return 0;

View File

@ -3,7 +3,7 @@
*
* Copyright (C) 2006 Linus Torvalds
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "advice.h"
#include "config.h"
@ -40,20 +40,20 @@ static int chmod_pathspec(struct pathspec *pathspec, char flip, int show_only)
{
int i, ret = 0;
for (i = 0; i < the_index.cache_nr; i++) {
struct cache_entry *ce = the_index.cache[i];
for (i = 0; i < the_repository->index->cache_nr; i++) {
struct cache_entry *ce = the_repository->index->cache[i];
int err;
if (!include_sparse &&
(ce_skip_worktree(ce) ||
!path_in_sparse_checkout(ce->name, &the_index)))
!path_in_sparse_checkout(ce->name, the_repository->index)))
continue;
if (pathspec && !ce_path_match(&the_index, ce, pathspec, NULL))
if (pathspec && !ce_path_match(the_repository->index, ce, pathspec, NULL))
continue;
if (!show_only)
err = chmod_index_entry(&the_index, ce, flip);
err = chmod_index_entry(the_repository->index, ce, flip);
else
err = S_ISREG(ce->ce_mode) ? 0 : -1;
@ -68,20 +68,20 @@ static int renormalize_tracked_files(const struct pathspec *pathspec, int flags)
{
int i, retval = 0;
for (i = 0; i < the_index.cache_nr; i++) {
struct cache_entry *ce = the_index.cache[i];
for (i = 0; i < the_repository->index->cache_nr; i++) {
struct cache_entry *ce = the_repository->index->cache[i];
if (!include_sparse &&
(ce_skip_worktree(ce) ||
!path_in_sparse_checkout(ce->name, &the_index)))
!path_in_sparse_checkout(ce->name, the_repository->index)))
continue;
if (ce_stage(ce))
continue; /* do not touch unmerged paths */
if (!S_ISREG(ce->ce_mode) && !S_ISLNK(ce->ce_mode))
continue; /* do not touch non blobs */
if (pathspec && !ce_path_match(&the_index, ce, pathspec, NULL))
if (pathspec && !ce_path_match(the_repository->index, ce, pathspec, NULL))
continue;
retval |= add_file_to_index(&the_index, ce->name,
retval |= add_file_to_index(the_repository->index, ce->name,
flags | ADD_CACHE_RENORMALIZE);
}
@ -100,11 +100,11 @@ static char *prune_directory(struct dir_struct *dir, struct pathspec *pathspec,
i = dir->nr;
while (--i >= 0) {
struct dir_entry *entry = *src++;
if (dir_path_match(&the_index, entry, pathspec, prefix, seen))
if (dir_path_match(the_repository->index, entry, pathspec, prefix, seen))
*dst++ = entry;
}
dir->nr = dst - dir->entries;
add_pathspec_matches_against_index(pathspec, &the_index, seen,
add_pathspec_matches_against_index(pathspec, the_repository->index, seen,
PS_IGNORE_SKIP_WORKTREE);
return seen;
}
@ -119,14 +119,14 @@ static int refresh(int verbose, const struct pathspec *pathspec)
(verbose ? REFRESH_IN_PORCELAIN : REFRESH_QUIET);
seen = xcalloc(pathspec->nr, 1);
refresh_index(&the_index, flags, pathspec, seen,
refresh_index(the_repository->index, flags, pathspec, seen,
_("Unstaged changes after refreshing the index:"));
for (i = 0; i < pathspec->nr; i++) {
if (!seen[i]) {
const char *path = pathspec->items[i].original;
if (matches_skip_worktree(pathspec, i, &skip_worktree_seen) ||
!path_in_sparse_checkout(path, &the_index)) {
!path_in_sparse_checkout(path, the_repository->index)) {
string_list_append(&only_match_skip_worktree,
pathspec->items[i].original);
} else {
@ -338,12 +338,12 @@ static int add_files(struct dir_struct *dir, int flags)
for (i = 0; i < dir->nr; i++) {
if (!include_sparse &&
!path_in_sparse_checkout(dir->entries[i]->name, &the_index)) {
!path_in_sparse_checkout(dir->entries[i]->name, the_repository->index)) {
string_list_append(&matched_sparse_paths,
dir->entries[i]->name);
continue;
}
if (add_file_to_index(&the_index, dir->entries[i]->name, flags)) {
if (add_file_to_index(the_repository->index, dir->entries[i]->name, flags)) {
if (!ignore_add_errors)
die(_("adding files failed"));
exit_status = 1;
@ -461,8 +461,8 @@ int cmd_add(int argc, const char **argv, const char *prefix)
if (repo_read_index_preload(the_repository, &pathspec, 0) < 0)
die(_("index file corrupt"));
die_in_unpopulated_submodule(&the_index, prefix);
die_path_inside_submodule(&the_index, &pathspec);
die_in_unpopulated_submodule(the_repository->index, prefix);
die_path_inside_submodule(the_repository->index, &pathspec);
if (add_new_files) {
int baselen;
@ -474,7 +474,7 @@ int cmd_add(int argc, const char **argv, const char *prefix)
}
/* This picks up the paths that are not tracked */
baselen = fill_directory(&dir, &the_index, &pathspec);
baselen = fill_directory(&dir, the_repository->index, &pathspec);
if (pathspec.nr)
seen = prune_directory(&dir, &pathspec, baselen);
}
@ -491,7 +491,7 @@ int cmd_add(int argc, const char **argv, const char *prefix)
if (!seen)
seen = find_pathspecs_matching_against_index(&pathspec,
&the_index, PS_IGNORE_SKIP_WORKTREE);
the_repository->index, PS_IGNORE_SKIP_WORKTREE);
/*
* file_exists() assumes exact match
@ -527,8 +527,8 @@ int cmd_add(int argc, const char **argv, const char *prefix)
!file_exists(path)) {
if (ignore_missing) {
int dtype = DT_UNKNOWN;
if (is_excluded(&dir, &the_index, path, &dtype))
dir_add_ignored(&dir, &the_index,
if (is_excluded(&dir, the_repository->index, path, &dtype))
dir_add_ignored(&dir, the_repository->index,
path, pathspec.items[i].len);
} else
die(_("pathspec '%s' did not match any files"),
@ -569,7 +569,7 @@ int cmd_add(int argc, const char **argv, const char *prefix)
end_odb_transaction();
finish:
if (write_locked_index(&the_index, &lock_file,
if (write_locked_index(the_repository->index, &lock_file,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
die(_("unable to write new index file"));

View File

@ -3,7 +3,7 @@
*
* Based on git-am.sh by Junio C Hamano.
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "advice.h"
@ -1536,8 +1536,8 @@ static int run_apply(const struct am_state *state, const char *index_file)
if (index_file) {
/* Reload index as apply_all_patches() will have modified it. */
discard_index(&the_index);
read_index_from(&the_index, index_file, get_git_dir());
discard_index(the_repository->index);
read_index_from(the_repository->index, index_file, get_git_dir());
}
return 0;
@ -1579,10 +1579,10 @@ static int fall_back_threeway(const struct am_state *state, const char *index_pa
if (build_fake_ancestor(state, index_path))
return error("could not build fake ancestor");
discard_index(&the_index);
read_index_from(&the_index, index_path, get_git_dir());
discard_index(the_repository->index);
read_index_from(the_repository->index, index_path, get_git_dir());
if (write_index_as_tree(&orig_tree, &the_index, index_path, 0, NULL))
if (write_index_as_tree(&orig_tree, the_repository->index, index_path, 0, NULL))
return error(_("Repository lacks necessary blobs to fall back on 3-way merge."));
say(state, stdout, _("Using index info to reconstruct a base tree..."));
@ -1608,12 +1608,12 @@ static int fall_back_threeway(const struct am_state *state, const char *index_pa
return error(_("Did you hand edit your patch?\n"
"It does not apply to blobs recorded in its index."));
if (write_index_as_tree(&their_tree, &the_index, index_path, 0, NULL))
if (write_index_as_tree(&their_tree, the_repository->index, index_path, 0, NULL))
return error("could not write tree");
say(state, stdout, _("Falling back to patching base and 3-way merge..."));
discard_index(&the_index);
discard_index(the_repository->index);
repo_read_index(the_repository);
/*
@ -1660,7 +1660,7 @@ static void do_commit(const struct am_state *state)
if (!state->no_verify && run_hooks("pre-applypatch"))
exit(1);
if (write_index_as_tree(&tree, &the_index, get_index_file(), 0, NULL))
if (write_index_as_tree(&tree, the_repository->index, get_index_file(), 0, NULL))
die(_("git write-tree failed to write a tree"));
if (!repo_get_oid_commit(the_repository, "HEAD", &parent)) {
@ -1948,7 +1948,7 @@ static void am_resolve(struct am_state *state, int allow_empty)
}
}
if (unmerged_index(&the_index)) {
if (unmerged_index(the_repository->index)) {
printf_ln(_("You still have unmerged paths in your index.\n"
"You should 'git add' each file with resolved conflicts to mark them as such.\n"
"You might run `git rm` on a file to accept \"deleted by them\" for it."));
@ -1987,12 +1987,12 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
repo_hold_locked_index(the_repository, &lock_file, LOCK_DIE_ON_ERROR);
refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL);
refresh_index(the_repository->index, REFRESH_QUIET, NULL, NULL, NULL);
memset(&opts, 0, sizeof(opts));
opts.head_idx = 1;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
opts.update = 1;
opts.merge = 1;
opts.reset = reset ? UNPACK_RESET_PROTECT_UNTRACKED : 0;
@ -2006,7 +2006,7 @@ static int fast_forward_to(struct tree *head, struct tree *remote, int reset)
return -1;
}
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die(_("unable to write new index file"));
return 0;
@ -2029,8 +2029,8 @@ static int merge_tree(struct tree *tree)
memset(&opts, 0, sizeof(opts));
opts.head_idx = 1;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
opts.merge = 1;
opts.fn = oneway_merge;
init_tree_desc(&t[0], &tree->object.oid, tree->buffer, tree->size);
@ -2040,7 +2040,7 @@ static int merge_tree(struct tree *tree)
return -1;
}
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die(_("unable to write new index file"));
return 0;
@ -2068,7 +2068,7 @@ static int clean_index(const struct object_id *head, const struct object_id *rem
if (fast_forward_to(head_tree, head_tree, 1))
return -1;
if (write_index_as_tree(&index, &the_index, get_index_file(), 0, NULL))
if (write_index_as_tree(&index, the_repository->index, get_index_file(), 0, NULL))
return -1;
index_tree = parse_tree_indirect(&index);

View File

@ -3,7 +3,7 @@
*
* Copyright (C) Linus Torvalds, 2005
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "convert.h"
@ -77,7 +77,7 @@ static int filter_object(const char *path, unsigned mode,
struct checkout_metadata meta;
init_checkout_metadata(&meta, NULL, NULL, oid);
if (convert_to_working_tree(&the_index, path, *buf, *size, &strbuf, &meta)) {
if (convert_to_working_tree(the_repository->index, path, *buf, *size, &strbuf, &meta)) {
free(*buf);
*size = strbuf.len;
*buf = strbuf_detach(&strbuf, NULL);

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "attr.h"
@ -71,9 +70,9 @@ static void check_attr(const char *prefix, struct attr_check *check,
prefix_path(prefix, prefix ? strlen(prefix) : 0, file);
if (collect_all) {
git_all_attrs(&the_index, full_path, check);
git_all_attrs(the_repository->index, full_path, check);
} else {
git_check_attr(&the_index, full_path, check);
git_check_attr(the_repository->index, full_path, check);
}
output_attr(check, file);

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "dir.h"
@ -95,21 +94,21 @@ static int check_ignore(struct dir_struct *dir,
PATHSPEC_KEEP_ORDER,
prefix, argv);
die_path_inside_submodule(&the_index, &pathspec);
die_path_inside_submodule(the_repository->index, &pathspec);
/*
* look for pathspecs matching entries in the index, since these
* should not be ignored, in order to be consistent with
* 'git status', 'git add' etc.
*/
seen = find_pathspecs_matching_against_index(&pathspec, &the_index,
seen = find_pathspecs_matching_against_index(&pathspec, the_repository->index,
PS_HEED_SKIP_WORKTREE);
for (i = 0; i < pathspec.nr; i++) {
full_path = pathspec.items[i].match;
pattern = NULL;
if (!seen[i]) {
int dtype = DT_UNKNOWN;
pattern = last_matching_pattern(dir, &the_index,
pattern = last_matching_pattern(dir, the_repository->index,
full_path, &dtype);
if (!verbose && pattern &&
pattern->flags & PATTERN_FLAG_NEGATIVE)

View File

@ -4,7 +4,7 @@
* Copyright (C) 2005 Linus Torvalds
*
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "gettext.h"
@ -69,7 +69,7 @@ static void write_tempfile_record(const char *name, const char *prefix)
static int checkout_file(const char *name, const char *prefix)
{
int namelen = strlen(name);
int pos = index_name_pos(&the_index, name, namelen);
int pos = index_name_pos(the_repository->index, name, namelen);
int has_same_name = 0;
int is_file = 0;
int is_skipped = 1;
@ -79,8 +79,8 @@ static int checkout_file(const char *name, const char *prefix)
if (pos < 0)
pos = -pos - 1;
while (pos < the_index.cache_nr) {
struct cache_entry *ce = the_index.cache[pos];
while (pos < the_repository->index->cache_nr) {
struct cache_entry *ce = the_repository->index->cache[pos];
if (ce_namelen(ce) != namelen ||
memcmp(ce->name, name, namelen))
break;
@ -140,8 +140,8 @@ static int checkout_all(const char *prefix, int prefix_length)
int i, errs = 0;
struct cache_entry *last_ce = NULL;
for (i = 0; i < the_index.cache_nr ; i++) {
struct cache_entry *ce = the_index.cache[i];
for (i = 0; i < the_repository->index->cache_nr ; i++) {
struct cache_entry *ce = the_repository->index->cache[i];
if (S_ISSPARSEDIR(ce->ce_mode)) {
if (!ce_skip_worktree(ce))
@ -154,8 +154,8 @@ static int checkout_all(const char *prefix, int prefix_length)
* first entry inside the expanded sparse directory).
*/
if (ignore_skip_worktree) {
ensure_full_index(&the_index);
ce = the_index.cache[i];
ensure_full_index(the_repository->index);
ce = the_repository->index->cache[i];
}
}
@ -260,7 +260,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
argc = parse_options(argc, argv, prefix, builtin_checkout_index_options,
builtin_checkout_index_usage, 0);
state.istate = &the_index;
state.istate = the_repository->index;
state.force = force;
state.quiet = quiet;
state.not_new = not_new;
@ -280,7 +280,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
*/
if (index_opt && !state.base_dir_len && !to_tempfile) {
state.refresh_cache = 1;
state.istate = &the_index;
state.istate = the_repository->index;
repo_hold_locked_index(the_repository, &lock_file,
LOCK_DIE_ON_ERROR);
}
@ -339,7 +339,7 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
return 1;
if (is_lock_file_locked(&lock_file) &&
write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die("Unable to write new index file");
return 0;
}

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "advice.h"
#include "branch.h"
@ -146,7 +145,7 @@ static int update_some(const struct object_id *oid, struct strbuf *base,
return READ_TREE_RECURSIVE;
len = base->len + strlen(pathname);
ce = make_empty_cache_entry(&the_index, len);
ce = make_empty_cache_entry(the_repository->index, len);
oidcpy(&ce->oid, oid);
memcpy(ce->name, base->buf, base->len);
memcpy(ce->name + base->len, pathname, len - base->len);
@ -159,9 +158,9 @@ static int update_some(const struct object_id *oid, struct strbuf *base,
* entry in place. Whether it is UPTODATE or not, checkout_entry will
* do the right thing.
*/
pos = index_name_pos(&the_index, ce->name, ce->ce_namelen);
pos = index_name_pos(the_repository->index, ce->name, ce->ce_namelen);
if (pos >= 0) {
struct cache_entry *old = the_index.cache[pos];
struct cache_entry *old = the_repository->index->cache[pos];
if (ce->ce_mode == old->ce_mode &&
!ce_intent_to_add(old) &&
oideq(&ce->oid, &old->oid)) {
@ -171,7 +170,7 @@ static int update_some(const struct object_id *oid, struct strbuf *base,
}
}
add_index_entry(&the_index, ce,
add_index_entry(the_repository->index, ce,
ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE);
return 0;
}
@ -190,8 +189,8 @@ static int read_tree_some(struct tree *tree, const struct pathspec *pathspec)
static int skip_same_name(const struct cache_entry *ce, int pos)
{
while (++pos < the_index.cache_nr &&
!strcmp(the_index.cache[pos]->name, ce->name))
while (++pos < the_repository->index->cache_nr &&
!strcmp(the_repository->index->cache[pos]->name, ce->name))
; /* skip */
return pos;
}
@ -199,9 +198,9 @@ static int skip_same_name(const struct cache_entry *ce, int pos)
static int check_stage(int stage, const struct cache_entry *ce, int pos,
int overlay_mode)
{
while (pos < the_index.cache_nr &&
!strcmp(the_index.cache[pos]->name, ce->name)) {
if (ce_stage(the_index.cache[pos]) == stage)
while (pos < the_repository->index->cache_nr &&
!strcmp(the_repository->index->cache[pos]->name, ce->name)) {
if (ce_stage(the_repository->index->cache[pos]) == stage)
return 0;
pos++;
}
@ -218,8 +217,8 @@ static int check_stages(unsigned stages, const struct cache_entry *ce, int pos)
unsigned seen = 0;
const char *name = ce->name;
while (pos < the_index.cache_nr) {
ce = the_index.cache[pos];
while (pos < the_repository->index->cache_nr) {
ce = the_repository->index->cache[pos];
if (strcmp(name, ce->name))
break;
seen |= (1 << ce_stage(ce));
@ -235,10 +234,10 @@ static int checkout_stage(int stage, const struct cache_entry *ce, int pos,
const struct checkout *state, int *nr_checkouts,
int overlay_mode)
{
while (pos < the_index.cache_nr &&
!strcmp(the_index.cache[pos]->name, ce->name)) {
if (ce_stage(the_index.cache[pos]) == stage)
return checkout_entry(the_index.cache[pos], state,
while (pos < the_repository->index->cache_nr &&
!strcmp(the_repository->index->cache[pos]->name, ce->name)) {
if (ce_stage(the_repository->index->cache[pos]) == stage)
return checkout_entry(the_repository->index->cache[pos], state,
NULL, nr_checkouts);
pos++;
}
@ -256,7 +255,7 @@ static int checkout_merged(int pos, const struct checkout *state,
int *nr_checkouts, struct mem_pool *ce_mem_pool,
int conflict_style)
{
struct cache_entry *ce = the_index.cache[pos];
struct cache_entry *ce = the_repository->index->cache[pos];
const char *path = ce->name;
mmfile_t ancestor, ours, theirs;
enum ll_merge_result merge_status;
@ -269,7 +268,7 @@ static int checkout_merged(int pos, const struct checkout *state,
int renormalize = 0;
memset(threeway, 0, sizeof(threeway));
while (pos < the_index.cache_nr) {
while (pos < the_repository->index->cache_nr) {
int stage;
stage = ce_stage(ce);
if (!stage || strcmp(path, ce->name))
@ -278,7 +277,7 @@ static int checkout_merged(int pos, const struct checkout *state,
if (stage == 2)
mode = create_ce_mode(ce->ce_mode);
pos++;
ce = the_index.cache[pos];
ce = the_repository->index->cache[pos];
}
if (is_null_oid(&threeway[1]) || is_null_oid(&threeway[2]))
return error(_("path '%s' does not have necessary versions"), path);
@ -356,7 +355,7 @@ static void mark_ce_for_checkout_overlay(struct cache_entry *ce,
* match_pathspec() for _all_ entries when
* opts->source_tree != NULL.
*/
if (ce_path_match(&the_index, ce, &opts->pathspec, ps_matched))
if (ce_path_match(the_repository->index, ce, &opts->pathspec, ps_matched))
ce->ce_flags |= CE_MATCHED;
}
@ -367,7 +366,7 @@ static void mark_ce_for_checkout_no_overlay(struct cache_entry *ce,
ce->ce_flags &= ~CE_MATCHED;
if (!opts->ignore_skipworktree && ce_skip_worktree(ce))
return;
if (ce_path_match(&the_index, ce, &opts->pathspec, ps_matched)) {
if (ce_path_match(the_repository->index, ce, &opts->pathspec, ps_matched)) {
ce->ce_flags |= CE_MATCHED;
if (opts->source_tree && !(ce->ce_flags & CE_UPDATE))
/*
@ -391,7 +390,7 @@ static int checkout_worktree(const struct checkout_opts *opts,
state.force = 1;
state.refresh_cache = 1;
state.istate = &the_index;
state.istate = the_repository->index;
mem_pool_init(&ce_mem_pool, 0);
get_parallel_checkout_configs(&pc_workers, &pc_threshold);
@ -404,8 +403,8 @@ static int checkout_worktree(const struct checkout_opts *opts,
if (pc_workers > 1)
init_parallel_checkout();
for (pos = 0; pos < the_index.cache_nr; pos++) {
struct cache_entry *ce = the_index.cache[pos];
for (pos = 0; pos < the_repository->index->cache_nr; pos++) {
struct cache_entry *ce = the_repository->index->cache[pos];
if (ce->ce_flags & CE_MATCHED) {
if (!ce_stage(ce)) {
errs |= checkout_entry(ce, &state,
@ -429,7 +428,7 @@ static int checkout_worktree(const struct checkout_opts *opts,
errs |= run_parallel_checkout(&state, pc_workers, pc_threshold,
NULL, NULL);
mem_pool_discard(&ce_mem_pool, should_validate_cache_entries());
remove_marked_cache_entries(&the_index, 1);
remove_marked_cache_entries(the_repository->index, 1);
remove_scheduled_dirs();
errs |= finish_delayed_checkout(&state, opts->show_progress);
@ -571,7 +570,7 @@ static int checkout_paths(const struct checkout_opts *opts,
if (opts->source_tree)
read_tree_some(opts->source_tree, &opts->pathspec);
if (opts->merge)
unmerge_index(&the_index, &opts->pathspec, CE_MATCHED);
unmerge_index(the_repository->index, &opts->pathspec, CE_MATCHED);
ps_matched = xcalloc(opts->pathspec.nr, 1);
@ -579,13 +578,13 @@ static int checkout_paths(const struct checkout_opts *opts,
* Make sure all pathspecs participated in locating the paths
* to be checked out.
*/
for (pos = 0; pos < the_index.cache_nr; pos++)
for (pos = 0; pos < the_repository->index->cache_nr; pos++)
if (opts->overlay_mode)
mark_ce_for_checkout_overlay(the_index.cache[pos],
mark_ce_for_checkout_overlay(the_repository->index->cache[pos],
ps_matched,
opts);
else
mark_ce_for_checkout_no_overlay(the_index.cache[pos],
mark_ce_for_checkout_no_overlay(the_repository->index->cache[pos],
ps_matched,
opts);
@ -596,8 +595,8 @@ static int checkout_paths(const struct checkout_opts *opts,
free(ps_matched);
/* Any unmerged paths? */
for (pos = 0; pos < the_index.cache_nr; pos++) {
const struct cache_entry *ce = the_index.cache[pos];
for (pos = 0; pos < the_repository->index->cache_nr; pos++) {
const struct cache_entry *ce = the_repository->index->cache[pos];
if (ce->ce_flags & CE_MATCHED) {
if (!ce_stage(ce))
continue;
@ -622,7 +621,7 @@ static int checkout_paths(const struct checkout_opts *opts,
if (opts->checkout_worktree)
errs |= checkout_worktree(opts, new_branch_info);
else
remove_marked_cache_entries(&the_index, 1);
remove_marked_cache_entries(the_repository->index, 1);
/*
* Allow updating the index when checking out from the index.
@ -634,7 +633,7 @@ static int checkout_paths(const struct checkout_opts *opts,
checkout_index = opts->checkout_index;
if (checkout_index) {
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die(_("unable to write new index file"));
} else {
/*
@ -703,8 +702,8 @@ static int reset_tree(struct tree *tree, const struct checkout_opts *o,
opts.merge = 1;
opts.fn = oneway_merge;
opts.verbose_update = o->show_progress;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
init_checkout_metadata(&opts.meta, info->refname,
info->commit ? &info->commit->object.oid : null_oid(),
NULL);
@ -756,12 +755,12 @@ static void init_topts(struct unpack_trees_options *topts, int merge,
{
memset(topts, 0, sizeof(*topts));
topts->head_idx = -1;
topts->src_index = &the_index;
topts->dst_index = &the_index;
topts->src_index = the_repository->index;
topts->dst_index = the_repository->index;
setup_unpack_trees_porcelain(topts, "checkout");
topts->initial_checkout = is_index_unborn(&the_index);
topts->initial_checkout = is_index_unborn(the_repository->index);
topts->update = 1;
topts->merge = 1;
topts->quiet = merge && old_commit;
@ -783,7 +782,7 @@ static int merge_working_tree(const struct checkout_opts *opts,
if (repo_read_index_preload(the_repository, NULL, 0) < 0)
return error(_("index file corrupt"));
resolve_undo_clear_index(&the_index);
resolve_undo_clear_index(the_repository->index);
if (opts->new_orphan_branch && opts->orphan_from_empty_tree) {
if (new_branch_info->commit)
BUG("'switch --orphan' should never accept a commit as starting point");
@@ -807,9 +806,9 @@ static int merge_working_tree(const struct checkout_opts *opts,
struct unpack_trees_options topts;
const struct object_id *old_commit_oid;
refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL);
refresh_index(the_repository->index, REFRESH_QUIET, NULL, NULL, NULL);
if (unmerged_index(&the_index)) {
if (unmerged_index(the_repository->index)) {
error(_("you need to resolve your current index first"));
return 1;
}
@@ -919,10 +918,10 @@ static int merge_working_tree(const struct checkout_opts *opts,
}
}
if (!cache_tree_fully_valid(the_index.cache_tree))
cache_tree_update(&the_index, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
if (!cache_tree_fully_valid(the_repository->index->cache_tree))
cache_tree_update(the_repository->index, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die(_("unable to write new index file"));
if (!opts->discard_changes && !opts->quiet && new_branch_info->commit)


@@ -6,7 +6,6 @@
* Based on git-clean.sh by Pavel Roskin
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "config.h"
@@ -714,7 +713,7 @@ static int filter_by_patterns_cmd(void)
for_each_string_list_item(item, &del_list) {
int dtype = DT_UNKNOWN;
if (is_excluded(&dir, &the_index, item->string, &dtype)) {
if (is_excluded(&dir, the_repository->index, item->string, &dtype)) {
*item->string = '\0';
changed++;
}
@@ -1021,7 +1020,7 @@ int cmd_clean(int argc, const char **argv, const char *prefix)
PATHSPEC_PREFER_CWD,
prefix, argv);
fill_directory(&dir, &the_index, &pathspec);
fill_directory(&dir, the_repository->index, &pathspec);
correct_untracked_entries(&dir);
for (i = 0; i < dir.nr; i++) {
@@ -1029,7 +1028,7 @@ int cmd_clean(int argc, const char **argv, const char *prefix)
struct stat st;
const char *rel;
if (!index_name_is_other(&the_index, ent->name, ent->len))
if (!index_name_is_other(the_repository->index, ent->name, ent->len))
continue;
if (lstat(ent->name, &st))


@@ -8,7 +8,6 @@
* Clone a repository into a different directory that does not yet exist.
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "advice.h"
@@ -731,8 +730,8 @@ static int checkout(int submodule_progress, int filter_submodules)
opts.preserve_ignored = 0;
opts.fn = oneway_merge;
opts.verbose_update = (option_verbosity >= 0);
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
init_checkout_metadata(&opts.meta, head, &oid, NULL);
tree = parse_tree_indirect(&oid);
@@ -746,7 +745,7 @@ static int checkout(int submodule_progress, int filter_submodules)
free(head);
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die(_("unable to write new index file"));
err |= run_hooks_l("post-checkout", oid_to_hex(null_oid()),


@@ -5,7 +5,6 @@
* Based on git-commit.sh by Junio C Hamano and Linus Torvalds
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "advice.h"
#include "config.h"
@@ -266,19 +265,19 @@ static int list_paths(struct string_list *list, const char *with_tree,
if (with_tree) {
char *max_prefix = common_prefix(pattern);
overlay_tree_on_index(&the_index, with_tree, max_prefix);
overlay_tree_on_index(the_repository->index, with_tree, max_prefix);
free(max_prefix);
}
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (i = 0; i < the_index.cache_nr; i++) {
const struct cache_entry *ce = the_index.cache[i];
ensure_full_index(the_repository->index);
for (i = 0; i < the_repository->index->cache_nr; i++) {
const struct cache_entry *ce = the_repository->index->cache[i];
struct string_list_item *item;
if (ce->ce_flags & CE_UPDATE)
continue;
if (!ce_path_match(&the_index, ce, pattern, m))
if (!ce_path_match(the_repository->index, ce, pattern, m))
continue;
item = string_list_insert(list, ce->name);
if (ce_skip_worktree(ce))
@@ -302,10 +301,10 @@ static void add_remove_files(struct string_list *list)
continue;
if (!lstat(p->string, &st)) {
if (add_to_index(&the_index, p->string, &st, 0))
if (add_to_index(the_repository->index, p->string, &st, 0))
die(_("updating files failed"));
} else
remove_file_from_index(&the_index, p->string);
remove_file_from_index(the_repository->index, p->string);
}
}
@@ -316,7 +315,7 @@ static void create_base_index(const struct commit *current_head)
struct tree_desc t;
if (!current_head) {
discard_index(&the_index);
discard_index(the_repository->index);
return;
}
@@ -324,8 +323,8 @@ static void create_base_index(const struct commit *current_head)
opts.head_idx = 1;
opts.index_only = 1;
opts.merge = 1;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
opts.fn = oneway_merge;
tree = parse_tree_indirect(&current_head->object.oid);
@@ -344,7 +343,7 @@ static void refresh_cache_or_die(int refresh_flags)
* refresh_flags contains REFRESH_QUIET, so the only errors
* are for unmerged entries.
*/
if (refresh_index(&the_index, refresh_flags | REFRESH_IN_PORCELAIN, NULL, NULL, NULL))
if (refresh_index(the_repository->index, refresh_flags | REFRESH_IN_PORCELAIN, NULL, NULL, NULL))
die_resolve_conflict("commit");
}
@@ -393,7 +392,7 @@ static const char *prepare_index(const char **argv, const char *prefix,
refresh_cache_or_die(refresh_flags);
if (write_locked_index(&the_index, &index_lock, 0))
if (write_locked_index(the_repository->index, &index_lock, 0))
die(_("unable to create temporary index"));
old_repo_index_file = the_repository->index_file;
@@ -412,13 +411,13 @@ static const char *prepare_index(const char **argv, const char *prefix,
unsetenv(INDEX_ENVIRONMENT);
FREE_AND_NULL(old_index_env);
discard_index(&the_index);
read_index_from(&the_index, get_lock_file_path(&index_lock),
discard_index(the_repository->index);
read_index_from(the_repository->index, get_lock_file_path(&index_lock),
get_git_dir());
if (cache_tree_update(&the_index, WRITE_TREE_SILENT) == 0) {
if (cache_tree_update(the_repository->index, WRITE_TREE_SILENT) == 0) {
if (reopen_lock_file(&index_lock) < 0)
die(_("unable to write index file"));
if (write_locked_index(&the_index, &index_lock, 0))
if (write_locked_index(the_repository->index, &index_lock, 0))
die(_("unable to update temporary index"));
} else
warning(_("Failed to update main cache tree"));
@@ -450,8 +449,8 @@ static const char *prepare_index(const char **argv, const char *prefix,
exit(128);
refresh_cache_or_die(refresh_flags);
cache_tree_update(&the_index, WRITE_TREE_SILENT);
if (write_locked_index(&the_index, &index_lock, 0))
cache_tree_update(the_repository->index, WRITE_TREE_SILENT);
if (write_locked_index(the_repository->index, &index_lock, 0))
die(_("unable to write new index file"));
commit_style = COMMIT_NORMAL;
ret = get_lock_file_path(&index_lock);
@@ -472,10 +471,10 @@ static const char *prepare_index(const char **argv, const char *prefix,
repo_hold_locked_index(the_repository, &index_lock,
LOCK_DIE_ON_ERROR);
refresh_cache_or_die(refresh_flags);
if (the_index.cache_changed
|| !cache_tree_fully_valid(the_index.cache_tree))
cache_tree_update(&the_index, WRITE_TREE_SILENT);
if (write_locked_index(&the_index, &index_lock,
if (the_repository->index->cache_changed
|| !cache_tree_fully_valid(the_repository->index->cache_tree))
cache_tree_update(the_repository->index, WRITE_TREE_SILENT);
if (write_locked_index(the_repository->index, &index_lock,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
die(_("unable to write new index file"));
commit_style = COMMIT_AS_IS;
@@ -516,15 +515,15 @@ static const char *prepare_index(const char **argv, const char *prefix,
if (list_paths(&partial, !current_head ? NULL : "HEAD", &pathspec))
exit(1);
discard_index(&the_index);
discard_index(the_repository->index);
if (repo_read_index(the_repository) < 0)
die(_("cannot read the index"));
repo_hold_locked_index(the_repository, &index_lock, LOCK_DIE_ON_ERROR);
add_remove_files(&partial);
refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL);
cache_tree_update(&the_index, WRITE_TREE_SILENT);
if (write_locked_index(&the_index, &index_lock, 0))
refresh_index(the_repository->index, REFRESH_QUIET, NULL, NULL, NULL);
cache_tree_update(the_repository->index, WRITE_TREE_SILENT);
if (write_locked_index(the_repository->index, &index_lock, 0))
die(_("unable to write new index file"));
hold_lock_file_for_update(&false_lock,
@@ -534,14 +533,14 @@ static const char *prepare_index(const char **argv, const char *prefix,
create_base_index(current_head);
add_remove_files(&partial);
refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL);
refresh_index(the_repository->index, REFRESH_QUIET, NULL, NULL, NULL);
if (write_locked_index(&the_index, &false_lock, 0))
if (write_locked_index(the_repository->index, &false_lock, 0))
die(_("unable to write temporary index file"));
discard_index(&the_index);
discard_index(the_repository->index);
ret = get_lock_file_path(&false_lock);
read_index_from(&the_index, ret, get_git_dir());
read_index_from(the_repository->index, ret, get_git_dir());
out:
string_list_clear(&partial, 0);
clear_pathspec(&pathspec);
@@ -999,7 +998,7 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
struct object_id oid;
const char *parent = "HEAD";
if (!the_index.initialized && repo_read_index(the_repository) < 0)
if (!the_repository->index->initialized && repo_read_index(the_repository) < 0)
die(_("Cannot read index"));
if (amend)
@@ -1009,11 +1008,11 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
int i, ita_nr = 0;
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (i = 0; i < the_index.cache_nr; i++)
if (ce_intent_to_add(the_index.cache[i]))
ensure_full_index(the_repository->index);
for (i = 0; i < the_repository->index->cache_nr; i++)
if (ce_intent_to_add(the_repository->index->cache[i]))
ita_nr++;
committable = the_index.cache_nr - ita_nr > 0;
committable = the_repository->index->cache_nr - ita_nr > 0;
} else {
/*
* Unless the user did explicitly request a submodule
@@ -1081,11 +1080,11 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
* and could have updated it. We must do this before we invoke
* the editor and after we invoke run_status above.
*/
discard_index(&the_index);
discard_index(the_repository->index);
}
read_index_from(&the_index, index_file, get_git_dir());
read_index_from(the_repository->index, index_file, get_git_dir());
if (cache_tree_update(&the_index, 0)) {
if (cache_tree_update(the_repository->index, 0)) {
error(_("Error building trees"));
return 0;
}
@@ -1586,7 +1585,7 @@ int cmd_status(int argc, const char **argv, const char *prefix)
status_format != STATUS_FORMAT_PORCELAIN_V2)
progress_flag = REFRESH_PROGRESS;
repo_read_index(the_repository);
refresh_index(&the_index,
refresh_index(the_repository->index,
REFRESH_QUIET|REFRESH_UNMERGED|progress_flag,
&s.pathspec, NULL, NULL);
@@ -1856,7 +1855,7 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
append_merge_tag_headers(parents, &tail);
}
if (commit_tree_extended(sb.buf, sb.len, &the_index.cache_tree->oid,
if (commit_tree_extended(sb.buf, sb.len, &the_repository->index->cache_tree->oid,
parents, &oid, author_ident.buf, NULL,
sign_commit, extra)) {
rollback_index_files();


@@ -115,7 +115,9 @@ static int read_request(FILE *fh, struct credential *c,
return error("client sent bogus timeout line: %s", item.buf);
*timeout = atoi(p);
if (credential_read(c, fh) < 0)
credential_set_all_capabilities(c, CREDENTIAL_OP_INITIAL);
if (credential_read(c, fh, CREDENTIAL_OP_HELPER) < 0)
return -1;
return 0;
}
@@ -131,8 +133,18 @@ static void serve_one_client(FILE *in, FILE *out)
else if (!strcmp(action.buf, "get")) {
struct credential_cache_entry *e = lookup_credential(&c);
if (e) {
fprintf(out, "username=%s\n", e->item.username);
fprintf(out, "password=%s\n", e->item.password);
e->item.capa_authtype.request_initial = 1;
e->item.capa_authtype.request_helper = 1;
fprintf(out, "capability[]=authtype\n");
if (e->item.username)
fprintf(out, "username=%s\n", e->item.username);
if (e->item.password)
fprintf(out, "password=%s\n", e->item.password);
if (credential_has_capability(&c.capa_authtype, CREDENTIAL_OP_HELPER) && e->item.authtype)
fprintf(out, "authtype=%s\n", e->item.authtype);
if (credential_has_capability(&c.capa_authtype, CREDENTIAL_OP_HELPER) && e->item.credential)
fprintf(out, "credential=%s\n", e->item.credential);
if (e->item.password_expiry_utc != TIME_MAX)
fprintf(out, "password_expiry_utc=%"PRItime"\n",
e->item.password_expiry_utc);
@@ -157,8 +169,10 @@ static void serve_one_client(FILE *in, FILE *out)
else if (!strcmp(action.buf, "store")) {
if (timeout < 0)
warning("cache client didn't specify a timeout");
else if (!c.username || !c.password)
else if ((!c.username || !c.password) && (!c.authtype && !c.credential))
warning("cache client gave us a partial credential");
else if (c.ephemeral)
warning("not storing ephemeral credential");
else {
remove_credential(&c, 0);
cache_credential(&c, timeout);


@@ -1,4 +1,5 @@
#include "builtin.h"
#include "credential.h"
#include "gettext.h"
#include "parse-options.h"
#include "path.h"
@@ -127,6 +128,13 @@ static char *get_socket_path(void)
return socket;
}
static void announce_capabilities(void)
{
struct credential c = CREDENTIAL_INIT;
c.capa_authtype.request_initial = 1;
credential_announce_capabilities(&c, stdout);
}
int cmd_credential_cache(int argc, const char **argv, const char *prefix)
{
char *socket_path = NULL;
@@ -163,6 +171,8 @@ int cmd_credential_cache(int argc, const char **argv, const char *prefix)
do_cache(socket_path, op, timeout, FLAG_RELAY);
else if (!strcmp(op, "store"))
do_cache(socket_path, op, timeout, FLAG_RELAY|FLAG_SPAWN);
else if (!strcmp(op, "capability"))
announce_capabilities();
else
; /* ignore unknown operation */


@@ -205,7 +205,7 @@ int cmd_credential_store(int argc, const char **argv, const char *prefix)
if (!fns.nr)
die("unable to set up default path; use --file");
if (credential_read(&c, stdin) < 0)
if (credential_read(&c, stdin, CREDENTIAL_OP_HELPER) < 0)
die("unable to read credential");
if (!strcmp(op, "get"))


@@ -17,15 +17,24 @@ int cmd_credential(int argc, const char **argv, const char *prefix UNUSED)
usage(usage_msg);
op = argv[1];
if (credential_read(&c, stdin) < 0)
if (!strcmp(op, "capability")) {
credential_set_all_capabilities(&c, CREDENTIAL_OP_INITIAL);
credential_announce_capabilities(&c, stdout);
return 0;
}
if (credential_read(&c, stdin, CREDENTIAL_OP_INITIAL) < 0)
die("unable to read credential from stdin");
if (!strcmp(op, "fill")) {
credential_fill(&c);
credential_write(&c, stdout);
credential_fill(&c, 0);
credential_next_state(&c);
credential_write(&c, stdout, CREDENTIAL_OP_RESPONSE);
} else if (!strcmp(op, "approve")) {
credential_set_all_capabilities(&c, CREDENTIAL_OP_HELPER);
credential_approve(&c);
} else if (!strcmp(op, "reject")) {
credential_set_all_capabilities(&c, CREDENTIAL_OP_HELPER);
credential_reject(&c);
} else {
usage(usage_msg);


@@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "environment.h"
@@ -674,7 +673,7 @@ int cmd_describe(int argc, const char **argv, const char *prefix)
prepare_repo_settings(the_repository);
the_repository->settings.command_requires_full_index = 0;
repo_read_index(the_repository);
refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED,
refresh_index(the_repository->index, REFRESH_QUIET|REFRESH_UNMERGED,
NULL, NULL, NULL);
fd = repo_hold_locked_index(the_repository,
&index_lock, 0);


@@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "diff.h"
@@ -206,7 +205,7 @@ int cmd_diff_tree(int argc, const char **argv, const char *prefix)
opt->diffopt.rotate_to_strict = 0;
opt->diffopt.no_free = 1;
if (opt->diffopt.detect_rename) {
if (!the_index.cache)
if (!the_repository->index->cache)
repo_read_index(the_repository);
opt->diffopt.setup |= DIFF_SETUP_USE_SIZE_CACHE;
}


@@ -3,7 +3,7 @@
*
* Copyright (c) 2006 Junio C Hamano
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "ewah/ewok.h"
@@ -239,9 +239,9 @@ static void refresh_index_quietly(void)
fd = repo_hold_locked_index(the_repository, &lock_file, 0);
if (fd < 0)
return;
discard_index(&the_index);
discard_index(the_repository->index);
repo_read_index(the_repository);
refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, NULL, NULL,
refresh_index(the_repository->index, REFRESH_QUIET|REFRESH_UNMERGED, NULL, NULL,
NULL);
repo_update_index_if_able(the_repository, &lock_file);
}


@@ -11,7 +11,7 @@
*
* Copyright (C) 2016 Johannes Schindelin
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "config.h"
@@ -117,7 +117,7 @@ static int use_wt_file(const char *workdir, const char *name,
int fd = open(buf.buf, O_RDONLY);
if (fd >= 0 &&
!index_fd(&the_index, &wt_oid, fd, &st, OBJ_BLOB, name, 0)) {
!index_fd(the_repository->index, &wt_oid, fd, &st, OBJ_BLOB, name, 0)) {
if (is_null_oid(oid)) {
oidcpy(oid, &wt_oid);
use = 1;


@@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "hex.h"
#include "read-cache-ll.h"
@@ -18,11 +17,11 @@ static int merge_entry(int pos, const char *path)
char ownbuf[4][60];
struct child_process cmd = CHILD_PROCESS_INIT;
if (pos >= the_index.cache_nr)
if (pos >= the_repository->index->cache_nr)
die("git merge-index: %s not in the cache", path);
found = 0;
do {
const struct cache_entry *ce = the_index.cache[pos];
const struct cache_entry *ce = the_repository->index->cache[pos];
int stage = ce_stage(ce);
if (strcmp(ce->name, path))
@@ -32,7 +31,7 @@ static int merge_entry(int pos, const char *path)
xsnprintf(ownbuf[stage], sizeof(ownbuf[stage]), "%o", ce->ce_mode);
arguments[stage] = hexbuf[stage];
arguments[stage + 4] = ownbuf[stage];
} while (++pos < the_index.cache_nr);
} while (++pos < the_repository->index->cache_nr);
if (!found)
die("git merge-index: %s not in the cache", path);
@@ -51,7 +50,7 @@ static int merge_entry(int pos, const char *path)
static void merge_one_path(const char *path)
{
int pos = index_name_pos(&the_index, path, strlen(path));
int pos = index_name_pos(the_repository->index, path, strlen(path));
/*
* If it already exists in the cache as stage0, it's
@@ -65,9 +64,9 @@ static void merge_all(void)
{
int i;
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (i = 0; i < the_index.cache_nr; i++) {
const struct cache_entry *ce = the_index.cache[i];
ensure_full_index(the_repository->index);
for (i = 0; i < the_repository->index->cache_nr; i++) {
const struct cache_entry *ce = the_repository->index->cache[i];
if (!ce_stage(ce))
continue;
i += merge_entry(i, ce->name)-1;
@@ -89,7 +88,7 @@ int cmd_merge_index(int argc, const char **argv, const char *prefix UNUSED)
repo_read_index(the_repository);
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
ensure_full_index(the_repository->index);
i = 1;
if (!strcmp(argv[i], "-o")) {


@@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "tree-walk.h"
#include "xdiff-interface.h"
@@ -364,7 +363,7 @@ static void trivial_merge_trees(struct tree_desc t[3], const char *base)
setup_traverse_info(&info, base);
info.fn = threeway_callback;
traverse_trees(&the_index, 3, t, &info);
traverse_trees(the_repository->index, 3, t, &info);
}
static void *get_tree_descriptor(struct repository *r,


@@ -6,7 +6,6 @@
* Based on git-merge.sh by Junio C Hamano.
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "advice.h"
@@ -300,7 +299,7 @@ static int save_state(struct object_id *stash)
int rc = -1;
fd = repo_hold_locked_index(the_repository, &lock_file, 0);
refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL);
refresh_index(the_repository->index, REFRESH_QUIET, NULL, NULL, NULL);
if (0 <= fd)
repo_update_index_if_able(the_repository, &lock_file);
rollback_lock_file(&lock_file);
@@ -372,7 +371,7 @@ static void restore_state(const struct object_id *head,
run_command(&cmd);
refresh_cache:
discard_index(&the_index);
discard_index(the_repository->index);
if (repo_read_index(the_repository) < 0)
die(_("could not read index"));
}
@@ -657,8 +656,8 @@ static int read_tree_trivial(struct object_id *common, struct object_id *head,
memset(&opts, 0, sizeof(opts));
opts.head_idx = 2;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
opts.update = 1;
opts.verbose_update = 1;
opts.trivial_merges_only = 1;
@@ -674,7 +673,7 @@ static int read_tree_trivial(struct object_id *common, struct object_id *head,
if (!trees[nr_trees++])
return -1;
opts.fn = threeway_merge;
cache_tree_free(&the_index.cache_tree);
cache_tree_free(&the_repository->index->cache_tree);
for (i = 0; i < nr_trees; i++) {
parse_tree(trees[i]);
init_tree_desc(t+i, &trees[i]->object.oid,
@@ -687,7 +686,7 @@ static int read_tree_trivial(struct object_id *common, struct object_id *head,
static void write_tree_trivial(struct object_id *oid)
{
if (write_index_as_tree(oid, &the_index, get_index_file(), 0, NULL))
if (write_index_as_tree(oid, the_repository->index, get_index_file(), 0, NULL))
die(_("git write-tree failed to write a tree"));
}
@@ -745,7 +744,7 @@ static int try_merge_strategy(const char *strategy, struct commit_list *common,
rollback_lock_file(&lock);
return 2;
}
if (write_locked_index(&the_index, &lock,
if (write_locked_index(the_repository->index, &lock,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
die(_("unable to write %s"), get_index_file());
return clean ? 0 : 1;
@@ -768,8 +767,8 @@ static int count_unmerged_entries(void)
{
int i, ret = 0;
for (i = 0; i < the_index.cache_nr; i++)
if (ce_stage(the_index.cache[i]))
for (i = 0; i < the_repository->index->cache_nr; i++)
if (ce_stage(the_repository->index->cache[i]))
ret++;
return ret;
@@ -843,9 +842,9 @@ static void prepare_to_commit(struct commit_list *remoteheads)
* the editor and after we invoke run_status above.
*/
if (invoked_hook)
discard_index(&the_index);
discard_index(the_repository->index);
}
read_index_from(&the_index, index_file, get_git_dir());
read_index_from(the_repository->index, index_file, get_git_dir());
strbuf_addbuf(&msg, &merge_msg);
if (squash)
BUG("the control must not reach here under --squash");
@@ -957,7 +956,7 @@ static int suggest_conflicts(void)
* Thus, we will get the cleanup mode which is returned when we _are_
* using an editor.
*/
append_conflicts_hint(&the_index, &msgbuf,
append_conflicts_hint(the_repository->index, &msgbuf,
get_cleanup_mode(cleanup_arg, 1));
fputs(msgbuf.buf, fp);
strbuf_release(&msgbuf);
@@ -1386,7 +1385,7 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
else
die(_("You have not concluded your cherry-pick (CHERRY_PICK_HEAD exists)."));
}
resolve_undo_clear_index(&the_index);
resolve_undo_clear_index(the_repository->index);
if (option_edit < 0)
option_edit = default_edit_option();
@@ -1595,7 +1594,7 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
* We are not doing octopus, not fast-forward, and have
* only one common.
*/
refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL);
refresh_index(the_repository->index, REFRESH_QUIET, NULL, NULL, NULL);
if (allow_trivial && fast_forward != FF_ONLY) {
/*
* Must first ensure that index matches HEAD before
@@ -1784,6 +1783,6 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
}
strbuf_release(&buf);
free(branch_to_free);
discard_index(&the_index);
discard_index(the_repository->index);
return ret;
}


@@ -3,7 +3,7 @@
*
* Copyright (C) 2006 Johannes Schindelin
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "advice.h"
@@ -95,9 +95,9 @@ static void prepare_move_submodule(const char *src, int first,
const char **submodule_gitfile)
{
struct strbuf submodule_dotgit = STRBUF_INIT;
if (!S_ISGITLINK(the_index.cache[first]->ce_mode))
if (!S_ISGITLINK(the_repository->index->cache[first]->ce_mode))
die(_("Directory %s is in index and no submodule?"), src);
if (!is_staging_gitmodules_ok(&the_index))
if (!is_staging_gitmodules_ok(the_repository->index))
die(_("Please stage your changes to .gitmodules or stash them to proceed"));
strbuf_addf(&submodule_dotgit, "%s/.git", src);
*submodule_gitfile = read_gitfile(submodule_dotgit.buf);
@@ -114,13 +114,13 @@ static int index_range_of_same_dir(const char *src, int length,
const char *src_w_slash = add_slash(src);
int first, last, len_w_slash = length + 1;
first = index_name_pos(&the_index, src_w_slash, len_w_slash);
first = index_name_pos(the_repository->index, src_w_slash, len_w_slash);
if (first >= 0)
die(_("%.*s is in index"), len_w_slash, src_w_slash);
first = -1 - first;
for (last = first; last < the_index.cache_nr; last++) {
const char *path = the_index.cache[last]->name;
for (last = first; last < the_repository->index->cache_nr; last++) {
const char *path = the_repository->index->cache[last]->name;
if (strncmp(path, src_w_slash, len_w_slash))
break;
}
@@ -144,14 +144,14 @@ static int empty_dir_has_sparse_contents(const char *name)
const char *with_slash = add_slash(name);
int length = strlen(with_slash);
int pos = index_name_pos(&the_index, with_slash, length);
int pos = index_name_pos(the_repository->index, with_slash, length);
const struct cache_entry *ce;
if (pos < 0) {
pos = -pos - 1;
if (pos >= the_index.cache_nr)
if (pos >= the_repository->index->cache_nr)
goto free_return;
ce = the_index.cache[pos];
ce = the_repository->index->cache[pos];
if (strncmp(with_slash, ce->name, length))
goto free_return;
if (ce_skip_worktree(ce))
@@ -223,7 +223,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
S_ISDIR(st.st_mode)) {
destination = internal_prefix_pathspec(dst_w_slash, argv, argc, DUP_BASENAME);
} else {
if (!path_in_sparse_checkout(dst_w_slash, &the_index) &&
if (!path_in_sparse_checkout(dst_w_slash, the_repository->index) &&
empty_dir_has_sparse_contents(dst_w_slash)) {
destination = internal_prefix_pathspec(dst_w_slash, argv, argc, DUP_BASENAME);
dst_mode = SKIP_WORKTREE_DIR;
@@ -239,7 +239,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
* is deprecated at this point) sparse-checkout. As
* SPARSE here is only considering cone-mode situation.
*/
if (!path_in_cone_mode_sparse_checkout(destination[0], &the_index))
if (!path_in_cone_mode_sparse_checkout(destination[0], the_repository->index))
dst_mode = SPARSE;
}
}
@@ -263,10 +263,10 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
int pos;
const struct cache_entry *ce;
pos = index_name_pos(&the_index, src, length);
pos = index_name_pos(the_repository->index, src, length);
if (pos < 0) {
const char *src_w_slash = add_slash(src);
if (!path_in_sparse_checkout(src_w_slash, &the_index) &&
if (!path_in_sparse_checkout(src_w_slash, the_repository->index) &&
empty_dir_has_sparse_contents(src)) {
modes[i] |= SKIP_WORKTREE_DIR;
goto dir_check;
@@ -276,7 +276,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
bad = _("bad source");
goto act_on_entry;
}
ce = the_index.cache[pos];
ce = the_repository->index->cache[pos];
if (!ce_skip_worktree(ce)) {
bad = _("bad source");
goto act_on_entry;
@@ -286,7 +286,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
goto act_on_entry;
}
/* Check if dst exists in index */
if (index_name_pos(&the_index, dst, strlen(dst)) < 0) {
if (index_name_pos(the_repository->index, dst, strlen(dst)) < 0) {
modes[i] |= SPARSE;
goto act_on_entry;
}
@@ -311,7 +311,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
dir_check:
if (S_ISDIR(st.st_mode)) {
int j, dst_len, n;
int first = index_name_pos(&the_index, src, length), last;
int first = index_name_pos(the_repository->index, src, length), last;
if (first >= 0) {
prepare_move_submodule(src, first,
@@ -339,7 +339,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
dst_len = strlen(dst);
for (j = 0; j < last - first; j++) {
const struct cache_entry *ce = the_index.cache[first + j];
const struct cache_entry *ce = the_repository->index->cache[first + j];
const char *path = ce->name;
source[argc + j] = path;
destination[argc + j] =
@@ -351,7 +351,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
argc += last - first;
goto act_on_entry;
}
if (!(ce = index_file_exists(&the_index, src, length, 0))) {
if (!(ce = index_file_exists(the_repository->index, src, length, 0))) {
bad = _("not under version control");
goto act_on_entry;
}
@@ -387,7 +387,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
if (ignore_sparse &&
(dst_mode & (SKIP_WORKTREE_DIR | SPARSE)) &&
index_entry_exists(&the_index, dst, strlen(dst))) {
index_entry_exists(the_repository->index, dst, strlen(dst))) {
bad = _("destination exists in the index");
if (force) {
if (verbose)
@@ -404,12 +404,12 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
* option as a way to have a successful run.
*/
if (!ignore_sparse &&
!path_in_sparse_checkout(src, &the_index)) {
!path_in_sparse_checkout(src, the_repository->index)) {
string_list_append(&only_match_skip_worktree, src);
skip_sparse = 1;
}
if (!ignore_sparse &&
!path_in_sparse_checkout(dst, &the_index)) {
!path_in_sparse_checkout(dst, the_repository->index)) {
string_list_append(&only_match_skip_worktree, dst);
skip_sparse = 1;
}
@@ -449,7 +449,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
int pos;
int sparse_and_dirty = 0;
struct checkout state = CHECKOUT_INIT;
state.istate = &the_index;
state.istate = the_repository->index;
if (force)
state.force = 1;
@@ -476,14 +476,14 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
if (mode & (WORKING_DIRECTORY | SKIP_WORKTREE_DIR))
continue;
pos = index_name_pos(&the_index, src, strlen(src));
pos = index_name_pos(the_repository->index, src, strlen(src));
assert(pos >= 0);
if (!(mode & SPARSE) && !lstat(src, &st))
sparse_and_dirty = ie_modified(&the_index,
the_index.cache[pos],
sparse_and_dirty = ie_modified(the_repository->index,
the_repository->index->cache[pos],
&st,
0);
rename_index_entry_at(&the_index, pos, dst);
rename_index_entry_at(the_repository->index, pos, dst);
if (ignore_sparse &&
core_apply_sparse_checkout &&
@@ -495,11 +495,11 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
* should be added in a future patch.
*/
if ((mode & SPARSE) &&
path_in_sparse_checkout(dst, &the_index)) {
path_in_sparse_checkout(dst, the_repository->index)) {
/* from out-of-cone to in-cone */
int dst_pos = index_name_pos(&the_index, dst,
int dst_pos = index_name_pos(the_repository->index, dst,
strlen(dst));
struct cache_entry *dst_ce = the_index.cache[dst_pos];
struct cache_entry *dst_ce = the_repository->index->cache[dst_pos];
dst_ce->ce_flags &= ~CE_SKIP_WORKTREE;
@@ -507,11 +507,11 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
die(_("cannot checkout %s"), dst_ce->name);
} else if ((dst_mode & (SKIP_WORKTREE_DIR | SPARSE)) &&
!(mode & SPARSE) &&
!path_in_sparse_checkout(dst, &the_index)) {
!path_in_sparse_checkout(dst, the_repository->index)) {
/* from in-cone to out-of-cone */
int dst_pos = index_name_pos(&the_index, dst,
int dst_pos = index_name_pos(the_repository->index, dst,
strlen(dst));
struct cache_entry *dst_ce = the_index.cache[dst_pos];
struct cache_entry *dst_ce = the_repository->index->cache[dst_pos];
/*
* if src is clean, it will suffice to remove it
@@ -559,9 +559,9 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
advise_on_moving_dirty_path(&dirty_paths);
if (gitmodules_modified)
stage_updated_gitmodules(&the_index);
stage_updated_gitmodules(the_repository->index);
if (write_locked_index(&the_index, &lock_file,
if (write_locked_index(the_repository->index, &lock_file,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
die(_("Unable to write new index file"));

View File

@@ -5,7 +5,7 @@
*
* Fetch one or more remote refs and merge it/them into the current HEAD.
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "advice.h"
#include "config.h"
@@ -1044,7 +1044,7 @@ int cmd_pull(int argc, const char **argv, const char *prefix)
if (opt_autostash == -1)
opt_autostash = config_autostash;
if (is_null_oid(&orig_head) && !is_index_unborn(&the_index))
if (is_null_oid(&orig_head) && !is_index_unborn(the_repository->index))
die(_("Updating an unborn branch with changes added to the index."));
if (!opt_autostash)

View File

@@ -4,7 +4,6 @@
* Copyright (C) Linus Torvalds, 2005
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "gettext.h"
@@ -159,8 +158,8 @@ int cmd_read_tree(int argc, const char **argv, const char *cmd_prefix)
memset(&opts, 0, sizeof(opts));
opts.head_idx = -1;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
git_config(git_read_tree_config, NULL);
@@ -197,7 +196,7 @@ int cmd_read_tree(int argc, const char **argv, const char *cmd_prefix)
die(_("You need to resolve your current index first"));
stage = opts.merge = 1;
}
resolve_undo_clear_index(&the_index);
resolve_undo_clear_index(the_repository->index);
for (i = 0; i < argc; i++) {
const char *arg = argv[i];
@@ -225,7 +224,7 @@ int cmd_read_tree(int argc, const char **argv, const char *cmd_prefix)
setup_work_tree();
if (opts.skip_sparse_checkout)
ensure_full_index(&the_index);
ensure_full_index(the_repository->index);
if (opts.merge) {
switch (stage - 1) {
@@ -237,7 +236,7 @@ int cmd_read_tree(int argc, const char **argv, const char *cmd_prefix)
break;
case 2:
opts.fn = twoway_merge;
opts.initial_checkout = is_index_unborn(&the_index);
opts.initial_checkout = is_index_unborn(the_repository->index);
break;
case 3:
default:
@@ -258,7 +257,7 @@ int cmd_read_tree(int argc, const char **argv, const char *cmd_prefix)
if (nr_trees == 1 && !opts.prefix)
opts.skip_cache_tree_update = 1;
cache_tree_free(&the_index.cache_tree);
cache_tree_free(&the_repository->index->cache_tree);
for (i = 0; i < nr_trees; i++) {
struct tree *tree = trees[i];
if (parse_tree(tree) < 0)
@@ -282,7 +281,7 @@ int cmd_read_tree(int argc, const char **argv, const char *cmd_prefix)
the_repository->index,
trees[0]);
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die("unable to write new index file");
return 0;
}

View File

@@ -4,7 +4,6 @@
* Copyright (c) 2018 Pratik Karki
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "environment.h"
@@ -295,7 +294,7 @@ static int do_interactive_rebase(struct rebase_options *opts, unsigned flags)
if (ret)
error(_("could not generate todo list"));
else {
discard_index(&the_index);
discard_index(the_repository->index);
if (todo_list_parse_insn_buffer(the_repository, todo_list.buf.buf,
&todo_list))
BUG("unusable todo list");

View File

@@ -2,7 +2,6 @@
* "git replay" builtin command
*/
#define USE_THE_INDEX_VARIABLE
#include "git-compat-util.h"
#include "builtin.h"

View File

@@ -7,7 +7,7 @@
*
* Copyright (c) 2005, 2006 Linus Torvalds and Junio C Hamano
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "advice.h"
#include "config.h"
@@ -66,8 +66,8 @@ static int reset_index(const char *ref, const struct object_id *oid, int reset_t
memset(&opts, 0, sizeof(opts));
opts.head_idx = 1;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
opts.fn = oneway_merge;
opts.merge = 1;
init_checkout_metadata(&opts.meta, ref, oid, NULL);
@@ -159,11 +159,11 @@ static void update_index_from_diff(struct diff_queue_struct *q,
struct cache_entry *ce;
if (!is_in_reset_tree && !intent_to_add) {
remove_file_from_index(&the_index, one->path);
remove_file_from_index(the_repository->index, one->path);
continue;
}
ce = make_cache_entry(&the_index, one->mode, &one->oid, one->path,
ce = make_cache_entry(the_repository->index, one->mode, &one->oid, one->path,
0, 0);
/*
@@ -174,9 +174,9 @@ static void update_index_from_diff(struct diff_queue_struct *q,
* if this entry is outside the sparse cone - this is necessary
* to properly construct the reset sparse directory.
*/
pos = index_name_pos(&the_index, one->path, strlen(one->path));
if ((pos >= 0 && ce_skip_worktree(the_index.cache[pos])) ||
(pos < 0 && !path_in_sparse_checkout(one->path, &the_index)))
pos = index_name_pos(the_repository->index, one->path, strlen(one->path));
if ((pos >= 0 && ce_skip_worktree(the_repository->index->cache[pos])) ||
(pos < 0 && !path_in_sparse_checkout(one->path, the_repository->index)))
ce->ce_flags |= CE_SKIP_WORKTREE;
if (!ce)
@@ -186,7 +186,7 @@ static void update_index_from_diff(struct diff_queue_struct *q,
ce->ce_flags |= CE_INTENT_TO_ADD;
set_object_name_for_intent_to_add_entry(ce);
}
add_index_entry(&the_index, ce,
add_index_entry(the_repository->index, ce,
ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE);
}
}
@@ -208,8 +208,8 @@ static int read_from_tree(const struct pathspec *pathspec,
opt.change = diff_change;
opt.add_remove = diff_addremove;
if (pathspec->nr && pathspec_needs_expanded_index(&the_index, pathspec))
ensure_full_index(&the_index);
if (pathspec->nr && pathspec_needs_expanded_index(the_repository->index, pathspec))
ensure_full_index(the_repository->index);
if (do_diff_cache(tree_oid, &opt))
return 1;
@@ -235,7 +235,7 @@ static void set_reflog_message(struct strbuf *sb, const char *action,
static void die_if_unmerged_cache(int reset_type)
{
if (is_merge() || unmerged_index(&the_index))
if (is_merge() || unmerged_index(the_repository->index))
die(_("Cannot do a %s reset in the middle of a merge."),
_(reset_type_names[reset_type]));
@@ -470,12 +470,12 @@ int cmd_reset(int argc, const char **argv, const char *prefix)
update_ref_status = 1;
goto cleanup;
}
the_index.updated_skipworktree = 1;
the_repository->index->updated_skipworktree = 1;
if (!no_refresh && get_git_work_tree()) {
uint64_t t_begin, t_delta_in_ms;
t_begin = getnanotime();
refresh_index(&the_index, flags, NULL, NULL,
refresh_index(the_repository->index, flags, NULL, NULL,
_("Unstaged changes after reset:"));
t_delta_in_ms = (getnanotime() - t_begin) / 1000000;
if (!quiet && advice_enabled(ADVICE_RESET_NO_REFRESH_WARNING) && t_delta_in_ms > REFRESH_INDEX_DELAY_WARNING_IN_MS) {
@@ -501,7 +501,7 @@ int cmd_reset(int argc, const char **argv, const char *prefix)
free(ref);
}
if (write_locked_index(&the_index, &lock, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock, COMMIT_LOCK))
die(_("Could not write new index file."));
}
@@ -516,7 +516,7 @@ int cmd_reset(int argc, const char **argv, const char *prefix)
if (!pathspec.nr)
remove_branch_state(the_repository, 0);
discard_index(&the_index);
discard_index(the_repository->index);
cleanup:
clear_pathspec(&pathspec);

View File

@@ -3,7 +3,7 @@
*
* Copyright (C) Linus Torvalds, 2005
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "config.h"
@@ -1049,8 +1049,8 @@ int cmd_rev_parse(int argc, const char **argv, const char *prefix)
if (!strcmp(arg, "--shared-index-path")) {
if (repo_read_index(the_repository) < 0)
die(_("Could not read the index"));
if (the_index.split_index) {
const struct object_id *oid = &the_index.split_index->base_oid;
if (the_repository->index->split_index) {
const struct object_id *oid = &the_repository->index->split_index->base_oid;
const char *path = git_path("sharedindex.%s", oid_to_hex(oid));
print_path(path, prefix, format, DEFAULT_RELATIVE);
}

View File

@@ -120,6 +120,7 @@ static int run_sequencer(int argc, const char **argv, const char *prefix,
OPT_CLEANUP(&cleanup_arg),
OPT_BOOL('n', "no-commit", &opts->no_commit, N_("don't automatically commit")),
OPT_BOOL('e', "edit", &opts->edit, N_("edit the commit message")),
OPT_BOOL('n', "no-verify", &opts->no_verify, N_("bypass pre-commit and commit-msg hooks")),
OPT_NOOP_NOARG('r', NULL),
OPT_BOOL('s', "signoff", &opts->signoff, N_("add a Signed-off-by trailer")),
OPT_CALLBACK('m', "mainline", opts, N_("parent-number"),

View File

@@ -3,7 +3,7 @@
*
* Copyright (C) Linus Torvalds 2006
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "advice.h"
#include "config.h"
@@ -41,8 +41,8 @@ static int get_ours_cache_pos(const char *path, int pos)
{
int i = -pos - 1;
while ((i < the_index.cache_nr) && !strcmp(the_index.cache[i]->name, path)) {
if (ce_stage(the_index.cache[i]) == 2)
while ((i < the_repository->index->cache_nr) && !strcmp(the_repository->index->cache[i]->name, path)) {
if (ce_stage(the_repository->index->cache[i]) == 2)
return i;
i++;
}
@@ -78,13 +78,13 @@ static void submodules_absorb_gitdir_if_needed(void)
int pos;
const struct cache_entry *ce;
pos = index_name_pos(&the_index, name, strlen(name));
pos = index_name_pos(the_repository->index, name, strlen(name));
if (pos < 0) {
pos = get_ours_cache_pos(name, pos);
if (pos < 0)
continue;
}
ce = the_index.cache[pos];
ce = the_repository->index->cache[pos];
if (!S_ISGITLINK(ce->ce_mode) ||
!file_exists(ce->name) ||
@@ -122,7 +122,7 @@ static int check_local_mod(struct object_id *head, int index_only)
int local_changes = 0;
int staged_changes = 0;
pos = index_name_pos(&the_index, name, strlen(name));
pos = index_name_pos(the_repository->index, name, strlen(name));
if (pos < 0) {
/*
* Skip unmerged entries except for populated submodules
@@ -132,11 +132,11 @@ static int check_local_mod(struct object_id *head, int index_only)
if (pos < 0)
continue;
if (!S_ISGITLINK(the_index.cache[pos]->ce_mode) ||
if (!S_ISGITLINK(the_repository->index->cache[pos]->ce_mode) ||
is_empty_dir(name))
continue;
}
ce = the_index.cache[pos];
ce = the_repository->index->cache[pos];
if (lstat(ce->name, &st) < 0) {
if (!is_missing_file_error(errno))
@@ -173,7 +173,7 @@ static int check_local_mod(struct object_id *head, int index_only)
* Is the index different from the file in the work tree?
* If it's a submodule, is its work tree modified?
*/
if (ie_match_stat(&the_index, ce, &st, 0) ||
if (ie_match_stat(the_repository->index, ce, &st, 0) ||
(S_ISGITLINK(ce->ce_mode) &&
bad_to_remove_submodule(ce->name,
SUBMODULE_REMOVAL_DIE_ON_ERROR |
@@ -301,27 +301,27 @@ int cmd_rm(int argc, const char **argv, const char *prefix)
if (repo_read_index(the_repository) < 0)
die(_("index file corrupt"));
refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, &pathspec, NULL, NULL);
refresh_index(the_repository->index, REFRESH_QUIET|REFRESH_UNMERGED, &pathspec, NULL, NULL);
seen = xcalloc(pathspec.nr, 1);
if (pathspec_needs_expanded_index(&the_index, &pathspec))
ensure_full_index(&the_index);
if (pathspec_needs_expanded_index(the_repository->index, &pathspec))
ensure_full_index(the_repository->index);
for (i = 0; i < the_index.cache_nr; i++) {
const struct cache_entry *ce = the_index.cache[i];
for (i = 0; i < the_repository->index->cache_nr; i++) {
const struct cache_entry *ce = the_repository->index->cache[i];
if (!include_sparse &&
(ce_skip_worktree(ce) ||
!path_in_sparse_checkout(ce->name, &the_index)))
!path_in_sparse_checkout(ce->name, the_repository->index)))
continue;
if (!ce_path_match(&the_index, ce, &pathspec, seen))
if (!ce_path_match(the_repository->index, ce, &pathspec, seen))
continue;
ALLOC_GROW(list.entry, list.nr + 1, list.alloc);
list.entry[list.nr].name = xstrdup(ce->name);
list.entry[list.nr].is_submodule = S_ISGITLINK(ce->ce_mode);
if (list.entry[list.nr++].is_submodule &&
!is_staging_gitmodules_ok(&the_index))
!is_staging_gitmodules_ok(the_repository->index))
die(_("please stage your changes to .gitmodules or stash them to proceed"));
}
@@ -391,7 +391,7 @@ int cmd_rm(int argc, const char **argv, const char *prefix)
if (!quiet)
printf("rm '%s'\n", path);
if (remove_file_from_index(&the_index, path))
if (remove_file_from_index(the_repository->index, path))
die(_("git rm: unable to remove %s"), path);
}
@@ -432,10 +432,10 @@ int cmd_rm(int argc, const char **argv, const char *prefix)
}
strbuf_release(&buf);
if (gitmodules_modified)
stage_updated_gitmodules(&the_index);
stage_updated_gitmodules(the_repository->index);
}
if (write_locked_index(&the_index, &lock_file,
if (write_locked_index(the_repository->index, &lock_file,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
die(_("Unable to write new index file"));

View File

@@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "config.h"
@@ -273,7 +272,7 @@ static int reset_tree(struct object_id *i_tree, int update, int reset)
struct lock_file lock_file = LOCK_INIT;
repo_read_index_preload(the_repository, NULL, 0);
if (refresh_index(&the_index, REFRESH_QUIET, NULL, NULL, NULL))
if (refresh_index(the_repository->index, REFRESH_QUIET, NULL, NULL, NULL))
return -1;
repo_hold_locked_index(the_repository, &lock_file, LOCK_DIE_ON_ERROR);
@@ -287,8 +286,8 @@ static int reset_tree(struct object_id *i_tree, int update, int reset)
init_tree_desc(t, &tree->object.oid, tree->buffer, tree->size);
opts.head_idx = 1;
opts.src_index = &the_index;
opts.dst_index = &the_index;
opts.src_index = the_repository->index;
opts.dst_index = the_repository->index;
opts.merge = 1;
opts.reset = reset ? UNPACK_RESET_PROTECT_UNTRACKED : 0;
opts.update = update;
@@ -299,7 +298,7 @@ static int reset_tree(struct object_id *i_tree, int update, int reset)
if (unpack_trees(nr_trees, t, &opts))
return -1;
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
return error(_("unable to write new index file"));
return 0;
@@ -430,7 +429,7 @@ static void unstage_changes_unless_new(struct object_id *orig_tree)
state.force = 1;
state.quiet = 1;
state.refresh_cache = 1;
state.istate = &the_index;
state.istate = the_repository->index;
/*
* Step 1: get a difference between orig_tree (which corresponding
@@ -454,7 +453,7 @@ static void unstage_changes_unless_new(struct object_id *orig_tree)
/* Look up the path's position in the current index. */
p = diff_queued_diff.queue[i];
pos = index_name_pos(&the_index, p->two->path,
pos = index_name_pos(the_repository->index, p->two->path,
strlen(p->two->path));
/*
@@ -465,10 +464,10 @@ static void unstage_changes_unless_new(struct object_id *orig_tree)
* path, but left it out of the working tree, then clear the
* SKIP_WORKTREE bit and write it to the working tree.
*/
if (pos >= 0 && ce_skip_worktree(the_index.cache[pos])) {
if (pos >= 0 && ce_skip_worktree(the_repository->index->cache[pos])) {
struct stat st;
ce = the_index.cache[pos];
ce = the_repository->index->cache[pos];
if (!lstat(ce->name, &st)) {
/* Conflicting path present; relocate it */
struct strbuf new_path = STRBUF_INIT;
@@ -504,12 +503,12 @@ static void unstage_changes_unless_new(struct object_id *orig_tree)
if (pos < 0)
option = ADD_CACHE_OK_TO_ADD;
ce = make_cache_entry(&the_index,
ce = make_cache_entry(the_repository->index,
p->one->mode,
&p->one->oid,
p->one->path,
0, 0);
add_index_entry(&the_index, ce, option);
add_index_entry(the_repository->index, ce, option);
}
}
diff_flush(&diff_opts);
@@ -518,7 +517,7 @@ static void unstage_changes_unless_new(struct object_id *orig_tree)
* Step 4: write the new index to disk
*/
repo_hold_locked_index(the_repository, &lock, LOCK_DIE_ON_ERROR);
if (write_locked_index(&the_index, &lock,
if (write_locked_index(the_repository->index, &lock,
COMMIT_LOCK | SKIP_IF_UNCHANGED))
die(_("could not write index"));
}
@@ -539,7 +538,7 @@ static int do_apply_stash(const char *prefix, struct stash_info *info,
NULL, NULL, NULL))
return error(_("could not write index"));
if (write_index_as_tree(&c_tree, &the_index, get_index_file(), 0,
if (write_index_as_tree(&c_tree, the_repository->index, get_index_file(), 0,
NULL))
return error(_("cannot apply a stash in the middle of a merge"));
@@ -562,14 +561,14 @@ static int do_apply_stash(const char *prefix, struct stash_info *info,
return error(_("conflicts in index. "
"Try without --index."));
discard_index(&the_index);
discard_index(the_repository->index);
repo_read_index(the_repository);
if (write_index_as_tree(&index_tree, &the_index,
if (write_index_as_tree(&index_tree, the_repository->index,
get_index_file(), 0, NULL))
return error(_("could not save index tree"));
reset_head();
discard_index(&the_index);
discard_index(the_repository->index);
repo_read_index(the_repository);
}
}
@@ -875,8 +874,8 @@ static void diff_include_untracked(const struct stash_info *info, struct diff_op
}
unpack_tree_opt.head_idx = -1;
unpack_tree_opt.src_index = &the_index;
unpack_tree_opt.dst_index = &the_index;
unpack_tree_opt.src_index = the_repository->index;
unpack_tree_opt.dst_index = the_repository->index;
unpack_tree_opt.merge = 1;
unpack_tree_opt.fn = stash_worktree_untracked_merge;
@@ -1395,7 +1394,7 @@ static int do_create_stash(const struct pathspec *ps, struct strbuf *stash_msg_b
strbuf_addf(&commit_tree_label, "index on %s\n", msg.buf);
commit_list_insert(head_commit, &parents);
if (write_index_as_tree(&info->i_tree, &the_index, get_index_file(), 0,
if (write_index_as_tree(&info->i_tree, the_repository->index, get_index_file(), 0,
NULL) ||
commit_tree(commit_tree_label.buf, commit_tree_label.len,
&info->i_tree, parents, &info->i_commit, NULL, NULL)) {
@@ -1540,9 +1539,9 @@ static int do_push_stash(const struct pathspec *ps, const char *stash_msg, int q
char *ps_matched = xcalloc(ps->nr, 1);
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (i = 0; i < the_index.cache_nr; i++)
ce_path_match(&the_index, the_index.cache[i], ps,
ensure_full_index(the_repository->index);
for (i = 0; i < the_repository->index->cache_nr; i++)
ce_path_match(the_repository->index, the_repository->index->cache[i], ps,
ps_matched);
if (report_path_error(ps_matched, ps)) {
@@ -1612,7 +1611,7 @@ static int do_push_stash(const struct pathspec *ps, const char *stash_msg, int q
goto done;
}
}
discard_index(&the_index);
discard_index(the_repository->index);
if (ps->nr) {
struct child_process cp_add = CHILD_PROCESS_INIT;
struct child_process cp_diff = CHILD_PROCESS_INIT;

View File

@@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "abspath.h"
#include "environment.h"
@@ -207,18 +206,18 @@ static int module_list_compute(const char **argv,
if (repo_read_index(the_repository) < 0)
die(_("index file corrupt"));
for (i = 0; i < the_index.cache_nr; i++) {
const struct cache_entry *ce = the_index.cache[i];
for (i = 0; i < the_repository->index->cache_nr; i++) {
const struct cache_entry *ce = the_repository->index->cache[i];
if (!match_pathspec(&the_index, pathspec, ce->name, ce_namelen(ce),
if (!match_pathspec(the_repository->index, pathspec, ce->name, ce_namelen(ce),
0, ps_matched, 1) ||
!S_ISGITLINK(ce->ce_mode))
continue;
ALLOC_GROW(list->entries, list->nr + 1, list->alloc);
list->entries[list->nr++] = ce;
while (i + 1 < the_index.cache_nr &&
!strcmp(ce->name, the_index.cache[i + 1]->name))
while (i + 1 < the_repository->index->cache_nr &&
!strcmp(ce->name, the_repository->index->cache[i + 1]->name))
/*
* Skip entries with the same name in different stages
* to make sure an entry is returned only once.
@@ -907,7 +906,7 @@ static void generate_submodule_summary(struct summary_cb *info,
int fd = open(p->sm_path, O_RDONLY);
if (fd < 0 || fstat(fd, &st) < 0 ||
index_fd(&the_index, &p->oid_dst, fd, &st, OBJ_BLOB,
index_fd(the_repository->index, &p->oid_dst, fd, &st, OBJ_BLOB,
p->sm_path, 0))
error(_("couldn't hash object from '%s'"), p->sm_path);
} else {
@@ -3243,21 +3242,21 @@ static void die_on_index_match(const char *path, int force)
char *ps_matched = xcalloc(ps.nr, 1);
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
ensure_full_index(the_repository->index);
/*
* Since there is only one pathspec, we just need to
* check ps_matched[0] to know if a cache entry matched.
*/
for (i = 0; i < the_index.cache_nr; i++) {
ce_path_match(&the_index, the_index.cache[i], &ps,
for (i = 0; i < the_repository->index->cache_nr; i++) {
ce_path_match(the_repository->index, the_repository->index->cache[i], &ps,
ps_matched);
if (ps_matched[0]) {
if (!force)
die(_("'%s' already exists in the index"),
path);
if (!S_ISGITLINK(the_index.cache[i]->ce_mode))
if (!S_ISGITLINK(the_repository->index->cache[i]->ce_mode))
die(_("'%s' already exists in the index "
"and is not a submodule"), path);
break;

View File

@@ -3,7 +3,7 @@
*
* Copyright (C) Linus Torvalds, 2005
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "bulk-checkin.h"
#include "config.h"
@@ -247,16 +247,16 @@ static int test_if_untracked_cache_is_supported(void)
static int mark_ce_flags(const char *path, int flag, int mark)
{
int namelen = strlen(path);
int pos = index_name_pos(&the_index, path, namelen);
int pos = index_name_pos(the_repository->index, path, namelen);
if (0 <= pos) {
mark_fsmonitor_invalid(&the_index, the_index.cache[pos]);
mark_fsmonitor_invalid(the_repository->index, the_repository->index->cache[pos]);
if (mark)
the_index.cache[pos]->ce_flags |= flag;
the_repository->index->cache[pos]->ce_flags |= flag;
else
the_index.cache[pos]->ce_flags &= ~flag;
the_index.cache[pos]->ce_flags |= CE_UPDATE_IN_BASE;
cache_tree_invalidate_path(&the_index, path);
the_index.cache_changed |= CE_ENTRY_CHANGED;
the_repository->index->cache[pos]->ce_flags &= ~flag;
the_repository->index->cache[pos]->ce_flags |= CE_UPDATE_IN_BASE;
cache_tree_invalidate_path(the_repository->index, path);
the_repository->index->cache_changed |= CE_ENTRY_CHANGED;
return 0;
}
return -1;
@@ -266,7 +266,7 @@ static int remove_one_path(const char *path)
{
if (!allow_remove)
return error("%s: does not exist and --remove not passed", path);
if (remove_file_from_index(&the_index, path))
if (remove_file_from_index(the_repository->index, path))
return error("%s: cannot remove from the index", path);
return 0;
}
@@ -291,24 +291,24 @@ static int add_one_path(const struct cache_entry *old, const char *path, int len
struct cache_entry *ce;
/* Was the old index entry already up-to-date? */
if (old && !ce_stage(old) && !ie_match_stat(&the_index, old, st, 0))
if (old && !ce_stage(old) && !ie_match_stat(the_repository->index, old, st, 0))
return 0;
ce = make_empty_cache_entry(&the_index, len);
ce = make_empty_cache_entry(the_repository->index, len);
memcpy(ce->name, path, len);
ce->ce_flags = create_ce_flags(0);
ce->ce_namelen = len;
fill_stat_cache_info(&the_index, ce, st);
fill_stat_cache_info(the_repository->index, ce, st);
ce->ce_mode = ce_mode_from_stat(old, st->st_mode);
if (index_path(&the_index, &ce->oid, path, st,
if (index_path(the_repository->index, &ce->oid, path, st,
info_only ? 0 : HASH_WRITE_OBJECT)) {
discard_cache_entry(ce);
return -1;
}
option = allow_add ? ADD_CACHE_OK_TO_ADD : 0;
option |= allow_replace ? ADD_CACHE_OK_TO_REPLACE : 0;
if (add_index_entry(&the_index, ce, option)) {
if (add_index_entry(the_repository->index, ce, option)) {
discard_cache_entry(ce);
return error("%s: cannot add to the index - missing --add option?", path);
}
@@ -341,11 +341,11 @@ static int add_one_path(const struct cache_entry *old, const char *path, int len
static int process_directory(const char *path, int len, struct stat *st)
{
struct object_id oid;
int pos = index_name_pos(&the_index, path, len);
int pos = index_name_pos(the_repository->index, path, len);
/* Exact match: file or existing gitlink */
if (pos >= 0) {
const struct cache_entry *ce = the_index.cache[pos];
const struct cache_entry *ce = the_repository->index->cache[pos];
if (S_ISGITLINK(ce->ce_mode)) {
/* Do nothing to the index if there is no HEAD! */
@@ -360,8 +360,8 @@ static int process_directory(const char *path, int len, struct stat *st)
/* Inexact match: is there perhaps a subdirectory match? */
pos = -pos-1;
while (pos < the_index.cache_nr) {
const struct cache_entry *ce = the_index.cache[pos++];
while (pos < the_repository->index->cache_nr) {
const struct cache_entry *ce = the_repository->index->cache[pos++];
if (strncmp(ce->name, path, len))
break;
@@ -391,8 +391,8 @@ static int process_path(const char *path, struct stat *st, int stat_errno)
if (has_symlink_leading_path(path, len))
return error("'%s' is beyond a symbolic link", path);
pos = index_name_pos(&the_index, path, len);
ce = pos < 0 ? NULL : the_index.cache[pos];
pos = index_name_pos(the_repository->index, path, len);
ce = pos < 0 ? NULL : the_repository->index->cache[pos];
if (ce && ce_skip_worktree(ce)) {
/*
* working directory version is assumed "good"
@@ -400,7 +400,7 @@ static int process_path(const char *path, struct stat *st, int stat_errno)
* On the other hand, removing it from index should work
*/
if (!ignore_skip_worktree_entries && allow_remove &&
remove_file_from_index(&the_index, path))
remove_file_from_index(the_repository->index, path))
return error("%s: cannot remove from the index", path);
return 0;
}
@@ -428,7 +428,7 @@ static int add_cacheinfo(unsigned int mode, const struct object_id *oid,
return error("Invalid path '%s'", path);
len = strlen(path);
ce = make_empty_cache_entry(&the_index, len);
ce = make_empty_cache_entry(the_repository->index, len);
oidcpy(&ce->oid, oid);
memcpy(ce->name, path, len);
@@ -439,7 +439,7 @@ static int add_cacheinfo(unsigned int mode, const struct object_id *oid,
ce->ce_flags |= CE_VALID;
option = allow_add ? ADD_CACHE_OK_TO_ADD : 0;
option |= allow_replace ? ADD_CACHE_OK_TO_REPLACE : 0;
if (add_index_entry(&the_index, ce, option))
if (add_index_entry(the_repository->index, ce, option))
return error("%s: cannot add to the index - missing --add option?",
path);
report("add '%s'", path);
@@ -451,11 +451,11 @@ static void chmod_path(char flip, const char *path)
int pos;
struct cache_entry *ce;
pos = index_name_pos(&the_index, path, strlen(path));
pos = index_name_pos(the_repository->index, path, strlen(path));
if (pos < 0)
goto fail;
ce = the_index.cache[pos];
if (chmod_index_entry(&the_index, ce, flip) < 0)
ce = the_repository->index->cache[pos];
if (chmod_index_entry(the_repository->index, ce, flip) < 0)
goto fail;
report("chmod %cx '%s'", flip, path);
@@ -498,7 +498,7 @@ static void update_one(const char *path)
}
if (force_remove) {
if (remove_file_from_index(&the_index, path))
if (remove_file_from_index(the_repository->index, path))
die("git update-index: unable to remove %s", path);
report("remove '%s'", path);
return;
@@ -581,7 +581,7 @@ static void read_index_info(int nul_term_line)
if (!mode) {
/* mode == 0 means there is no such path -- remove */
if (remove_file_from_index(&the_index, path_name))
if (remove_file_from_index(the_repository->index, path_name))
die("git update-index: unable to remove %s",
ptr);
}
@@ -622,12 +622,12 @@ static struct cache_entry *read_one_ent(const char *which,
error("%s: not in %s branch.", path, which);
return NULL;
}
if (!the_index.sparse_index && mode == S_IFDIR) {
if (!the_repository->index->sparse_index && mode == S_IFDIR) {
if (which)
error("%s: not a blob in %s branch.", path, which);
return NULL;
}
ce = make_empty_cache_entry(&the_index, namelen);
ce = make_empty_cache_entry(the_repository->index, namelen);
oidcpy(&ce->oid, &oid);
memcpy(ce->name, path, namelen);
@@ -642,12 +642,12 @@ static int unresolve_one(const char *path)
struct string_list_item *item;
int res = 0;
if (!the_index.resolve_undo)
if (!the_repository->index->resolve_undo)
return res;
item = string_list_lookup(the_index.resolve_undo, path);
item = string_list_lookup(the_repository->index->resolve_undo, path);
if (!item)
return res; /* no resolve-undo record for the path */
res = unmerge_index_entry(&the_index, path, item->util, 0);
res = unmerge_index_entry(the_repository->index, path, item->util, 0);
FREE_AND_NULL(item->util);
return res;
}
@@ -688,13 +688,13 @@ static int do_reupdate(const char **paths,
*/
has_head = 0;
redo:
for (pos = 0; pos < the_index.cache_nr; pos++) {
const struct cache_entry *ce = the_index.cache[pos];
for (pos = 0; pos < the_repository->index->cache_nr; pos++) {
const struct cache_entry *ce = the_repository->index->cache[pos];
struct cache_entry *old = NULL;
int save_nr;
char *path;
if (ce_stage(ce) || !ce_path_match(&the_index, ce, &pathspec, NULL))
if (ce_stage(ce) || !ce_path_match(the_repository->index, ce, &pathspec, NULL))
continue;
if (has_head)
old = read_one_ent(NULL, &head_oid,
@@ -710,7 +710,7 @@ static int do_reupdate(const char **paths,
* to process each path individually
*/
if (S_ISSPARSEDIR(ce->ce_mode)) {
ensure_full_index(&the_index);
ensure_full_index(the_repository->index);
goto redo;
}
@@ -718,12 +718,12 @@ static int do_reupdate(const char **paths,
* path anymore, in which case, under 'allow_remove',
* or worse yet 'allow_replace', active_nr may decrease.
*/
save_nr = the_index.cache_nr;
save_nr = the_repository->index->cache_nr;
path = xstrdup(ce->name);
update_one(path);
free(path);
discard_cache_entry(old);
if (save_nr != the_index.cache_nr)
if (save_nr != the_repository->index->cache_nr)
goto redo;
}
clear_pathspec(&pathspec);
@@ -739,9 +739,9 @@ static int refresh(struct refresh_params *o, unsigned int flag)
{
setup_work_tree();
repo_read_index(the_repository);
*o->has_errors |= refresh_index(&the_index, o->flags | flag, NULL,
*o->has_errors |= refresh_index(the_repository->index, o->flags | flag, NULL,
NULL, NULL);
if (has_racy_timestamp(&the_index)) {
if (has_racy_timestamp(the_repository->index)) {
/*
* Even if nothing else has changed, updating the file
* increases the chance that racy timestamps become
@@ -750,7 +750,7 @@ static int refresh(struct refresh_params *o, unsigned int flag)
* refresh_index() as these are no actual errors.
* cmd_status() does the same.
*/
the_index.cache_changed |= SOMETHING_CHANGED;
the_repository->index->cache_changed |= SOMETHING_CHANGED;
}
return 0;
}
@@ -787,7 +787,7 @@ static int resolve_undo_clear_callback(const struct option *opt UNUSED,
{
BUG_ON_OPT_NEG(unset);
BUG_ON_OPT_ARG(arg);
resolve_undo_clear_index(&the_index);
resolve_undo_clear_index(the_repository->index);
return 0;
}
@@ -888,7 +888,7 @@ static enum parse_opt_result unresolve_callback(
*has_errors = do_unresolve(ctx->argc, ctx->argv,
prefix, prefix ? strlen(prefix) : 0);
if (*has_errors)
the_index.cache_changed = 0;
the_repository->index->cache_changed = 0;
ctx->argv += ctx->argc - 1;
ctx->argc = 1;
@@ -909,7 +909,7 @@ static enum parse_opt_result reupdate_callback(
setup_work_tree();
*has_errors = do_reupdate(ctx->argv + 1, prefix);
if (*has_errors)
the_index.cache_changed = 0;
the_repository->index->cache_changed = 0;
ctx->argv += ctx->argc - 1;
ctx->argc = 1;
@@ -1056,7 +1056,7 @@ int cmd_update_index(int argc, const char **argv, const char *prefix)
if (entries < 0)
die("cache corrupted");
the_index.updated_skipworktree = 1;
the_repository->index->updated_skipworktree = 1;
/*
* Custom copy of parse_options() because we want to handle
@@ -1111,18 +1111,18 @@ int cmd_update_index(int argc, const char **argv, const char *prefix)
getline_fn = nul_term_line ? strbuf_getline_nul : strbuf_getline_lf;
if (preferred_index_format) {
if (preferred_index_format < 0) {
printf(_("%d\n"), the_index.version);
printf(_("%d\n"), the_repository->index->version);
} else if (preferred_index_format < INDEX_FORMAT_LB ||
INDEX_FORMAT_UB < preferred_index_format) {
die("index-version %d not in range: %d..%d",
preferred_index_format,
INDEX_FORMAT_LB, INDEX_FORMAT_UB);
} else {
if (the_index.version != preferred_index_format)
the_index.cache_changed |= SOMETHING_CHANGED;
if (the_repository->index->version != preferred_index_format)
the_repository->index->cache_changed |= SOMETHING_CHANGED;
report(_("index-version: was %d, set to %d"),
the_index.version, preferred_index_format);
the_index.version = preferred_index_format;
the_repository->index->version, preferred_index_format);
the_repository->index->version = preferred_index_format;
}
}
@@ -1159,16 +1159,16 @@ int cmd_update_index(int argc, const char **argv, const char *prefix)
warning(_("core.splitIndex is set to false; "
"remove or change it, if you really want to "
"enable split index"));
if (the_index.split_index)
the_index.cache_changed |= SPLIT_INDEX_ORDERED;
if (the_repository->index->split_index)
the_repository->index->cache_changed |= SPLIT_INDEX_ORDERED;
else
add_split_index(&the_index);
add_split_index(the_repository->index);
} else if (!split_index) {
if (git_config_get_split_index() == 1)
warning(_("core.splitIndex is set to true; "
"remove or change it, if you really want to "
"disable split index"));
remove_split_index(&the_index);
remove_split_index(the_repository->index);
}
prepare_repo_settings(r);
@@ -1180,7 +1180,7 @@ int cmd_update_index(int argc, const char **argv, const char *prefix)
warning(_("core.untrackedCache is set to true; "
"remove or change it, if you really want to "
"disable the untracked cache"));
remove_untracked_cache(&the_index);
remove_untracked_cache(the_repository->index);
report(_("Untracked cache disabled"));
break;
case UC_TEST:
@@ -1192,7 +1192,7 @@ int cmd_update_index(int argc, const char **argv, const char *prefix)
warning(_("core.untrackedCache is set to false; "
"remove or change it, if you really want to "
"enable the untracked cache"));
add_untracked_cache(&the_index);
add_untracked_cache(the_repository->index);
report(_("Untracked cache enabled for '%s'"), get_git_work_tree());
break;
default:
@@ -1222,7 +1222,7 @@ int cmd_update_index(int argc, const char **argv, const char *prefix)
"set it if you really want to "
"enable fsmonitor"));
}
add_fsmonitor(&the_index);
add_fsmonitor(the_repository->index);
report(_("fsmonitor enabled"));
} else if (!fsmonitor) {
enum fsmonitor_mode fsm_mode = fsm_settings__get_mode(r);
@@ -1230,17 +1230,17 @@ int cmd_update_index(int argc, const char **argv, const char *prefix)
warning(_("core.fsmonitor is set; "
"remove it if you really want to "
"disable fsmonitor"));
remove_fsmonitor(&the_index);
remove_fsmonitor(the_repository->index);
report(_("fsmonitor disabled"));
}
if (the_index.cache_changed || force_write) {
if (the_repository->index->cache_changed || force_write) {
if (newfd < 0) {
if (refresh_args.flags & REFRESH_QUIET)
exit(128);
unable_to_lock_die(get_index_file(), lock_error);
}
if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &lock_file, COMMIT_LOCK))
die("Unable to write new index file");
}

View File

@@ -3,7 +3,7 @@
*
* Copyright (C) Linus Torvalds, 2005
*/
#define USE_THE_INDEX_VARIABLE
#include "builtin.h"
#include "config.h"
#include "environment.h"
@@ -44,8 +44,8 @@ int cmd_write_tree(int argc, const char **argv, const char *cmd_prefix)
prepare_repo_settings(the_repository);
the_repository->settings.command_requires_full_index = 0;
ret = write_index_as_tree(&oid, &the_index, get_index_file(), flags,
tree_prefix);
ret = write_index_as_tree(&oid, the_repository->index, get_index_file(),
flags, tree_prefix);
switch (ret) {
case 0:
printf("%s\n", oid_to_hex(&oid));

View File

@@ -1,34 +1,69 @@
#!/usr/bin/env bash
#!/bin/sh
#
# Install dependencies required to build and test Git on Linux and macOS
#
. ${0%/*}/lib.sh
begin_group "Install dependencies"
P4WHENCE=https://cdist2.perforce.com/perforce/r21.2
LFSWHENCE=https://github.com/github/git-lfs/releases/download/v$LINUX_GIT_LFS_VERSION
UBUNTU_COMMON_PKGS="make libssl-dev libcurl4-openssl-dev libexpat-dev
tcl tk gettext zlib1g-dev perl-modules liberror-perl libauthen-sasl-perl
libemail-valid-perl libio-socket-ssl-perl libnet-smtp-ssl-perl"
JGITWHENCE=https://repo.eclipse.org/content/groups/releases//org/eclipse/jgit/org.eclipse.jgit.pgm/6.8.0.202311291450-r/org.eclipse.jgit.pgm-6.8.0.202311291450-r.sh
case "$runs_on_pool" in
# Make sudo a no-op and execute the command directly when running as root.
# While using sudo would be fine on most platforms when we are root already,
# some platforms like e.g. Alpine Linux do not have sudo available by default
# and would thus break.
if test "$(id -u)" -eq 0
then
sudo () {
"$@"
}
fi
case "$distro" in
alpine-*)
apk add --update shadow sudo build-base curl-dev openssl-dev expat-dev gettext \
pcre2-dev python3 musl-libintl perl-utils ncurses \
apache2 apache2-http2 apache2-proxy apache2-ssl apache2-webdav apr-util-dbd_sqlite3 \
bash cvs gnupg perl-cgi perl-dbd-sqlite >/dev/null
;;
fedora-*)
dnf -yq update >/dev/null &&
dnf -yq install make gcc findutils diffutils perl python3 gettext zlib-devel expat-devel openssl-devel curl-devel pcre2-devel >/dev/null
;;
ubuntu-*)
# Required so that apt doesn't wait for user input on certain packages.
export DEBIAN_FRONTEND=noninteractive
sudo apt-get -q update
sudo apt-get -q -y install language-pack-is libsvn-perl apache2 \
$UBUNTU_COMMON_PKGS $CC_PACKAGE $PYTHON_PACKAGE
mkdir --parents "$P4_PATH"
pushd "$P4_PATH"
wget --quiet "$P4WHENCE/bin.linux26x86_64/p4d"
wget --quiet "$P4WHENCE/bin.linux26x86_64/p4"
chmod u+x p4d
chmod u+x p4
popd
mkdir --parents "$GIT_LFS_PATH"
pushd "$GIT_LFS_PATH"
wget --quiet "$LFSWHENCE/git-lfs-linux-amd64-$LINUX_GIT_LFS_VERSION.tar.gz"
tar --extract --gunzip --file "git-lfs-linux-amd64-$LINUX_GIT_LFS_VERSION.tar.gz"
cp git-lfs-$LINUX_GIT_LFS_VERSION/git-lfs .
popd
sudo apt-get -q -y install \
language-pack-is libsvn-perl apache2 cvs cvsps git gnupg subversion \
make libssl-dev libcurl4-openssl-dev libexpat-dev wget sudo default-jre \
tcl tk gettext zlib1g-dev perl-modules liberror-perl libauthen-sasl-perl \
libemail-valid-perl libio-socket-ssl-perl libnet-smtp-ssl-perl libdbd-sqlite3-perl libcgi-pm-perl \
${CC_PACKAGE:-${CC:-gcc}} $PYTHON_PACKAGE
mkdir --parents "$CUSTOM_PATH"
wget --quiet --directory-prefix="$CUSTOM_PATH" \
"$P4WHENCE/bin.linux26x86_64/p4d" "$P4WHENCE/bin.linux26x86_64/p4"
chmod a+x "$CUSTOM_PATH/p4d" "$CUSTOM_PATH/p4"
wget --quiet "$LFSWHENCE/git-lfs-linux-amd64-$LINUX_GIT_LFS_VERSION.tar.gz"
tar -xzf "git-lfs-linux-amd64-$LINUX_GIT_LFS_VERSION.tar.gz" \
-C "$CUSTOM_PATH" --strip-components=1 "git-lfs-$LINUX_GIT_LFS_VERSION/git-lfs"
rm "git-lfs-linux-amd64-$LINUX_GIT_LFS_VERSION.tar.gz"
wget --quiet "$JGITWHENCE" --output-document="$CUSTOM_PATH/jgit"
chmod a+x "$CUSTOM_PATH/jgit"
;;
ubuntu32-*)
sudo linux32 --32bit i386 sh -c '
apt update >/dev/null &&
apt install -y build-essential libcurl4-openssl-dev \
libssl-dev libexpat-dev gettext python >/dev/null
'
;;
macos-*)
export HOMEBREW_NO_AUTO_UPDATE=1 HOMEBREW_NO_INSTALL_CLEANUP=1
@@ -38,12 +73,11 @@ macos-*)
brew install $BREW_INSTALL_PACKAGES
brew link --force gettext
mkdir -p "$P4_PATH"
pushd "$P4_PATH"
wget -q "$P4WHENCE/bin.macosx1015x86_64/helix-core-server.tgz" &&
tar -xf helix-core-server.tgz &&
sudo xattr -d com.apple.quarantine p4 p4d 2>/dev/null || true
popd
mkdir -p "$CUSTOM_PATH"
wget -q "$P4WHENCE/bin.macosx1015x86_64/helix-core-server.tgz" &&
tar -xf helix-core-server.tgz -C "$CUSTOM_PATH" p4 p4d &&
sudo xattr -d com.apple.quarantine "$CUSTOM_PATH/p4" "$CUSTOM_PATH/p4d" 2>/dev/null || true
rm helix-core-server.tgz
if test -n "$CC_PACKAGE"
then
@@ -72,10 +106,6 @@ Documentation)
test -n "$ALREADY_HAVE_ASCIIDOCTOR" ||
sudo gem install --version 1.5.8 asciidoctor
;;
linux-gcc-default)
sudo apt-get -q update
sudo apt-get -q -y install $UBUNTU_COMMON_PKGS
;;
esac
if type p4d >/dev/null 2>&1 && type p4 >/dev/null 2>&1
@@ -87,6 +117,7 @@ then
else
echo >&2 "WARNING: perforce wasn't installed, see above for clues why"
fi
if type git-lfs >/dev/null 2>&1
then
echo "$(tput setaf 6)Git-LFS Version$(tput sgr0)"
@@ -94,3 +125,13 @@ then
else
echo >&2 "WARNING: git-lfs wasn't installed, see above for clues why"
fi
if type jgit >/dev/null 2>&1
then
echo "$(tput setaf 6)JGit Version$(tput sgr0)"
jgit version
else
echo >&2 "WARNING: JGit wasn't installed, see above for clues why"
fi
end_group "Install dependencies"

View File

@@ -1,46 +0,0 @@
#!/bin/sh
#
# Install dependencies required to build and test Git inside container
#
. ${0%/*}/lib.sh
begin_group "Install dependencies"
case "$jobname" in
linux32)
linux32 --32bit i386 sh -c '
apt update >/dev/null &&
apt install -y build-essential libcurl4-openssl-dev \
libssl-dev libexpat-dev gettext python >/dev/null
'
;;
linux-musl)
apk add --update shadow sudo build-base curl-dev openssl-dev expat-dev gettext \
pcre2-dev python3 musl-libintl perl-utils ncurses \
apache2 apache2-http2 apache2-proxy apache2-ssl apache2-webdav apr-util-dbd_sqlite3 \
bash cvs gnupg perl-cgi perl-dbd-sqlite >/dev/null
;;
linux-*|StaticAnalysis)
# Required so that apt doesn't wait for user input on certain packages.
export DEBIAN_FRONTEND=noninteractive
apt update -q &&
apt install -q -y sudo git make language-pack-is libsvn-perl apache2 libssl-dev \
libcurl4-openssl-dev libexpat-dev tcl tk gettext zlib1g-dev \
perl-modules liberror-perl libauthen-sasl-perl libemail-valid-perl \
libdbd-sqlite3-perl libio-socket-ssl-perl libnet-smtp-ssl-perl ${CC_PACKAGE:-${CC:-gcc}} \
apache2 cvs cvsps gnupg libcgi-pm-perl subversion
if test "$jobname" = StaticAnalysis
then
apt install -q -y coccinelle
fi
;;
pedantic)
dnf -yq update >/dev/null &&
dnf -yq install make gcc findutils diffutils perl python3 gettext zlib-devel expat-devel openssl-devel curl-devel pcre2-devel >/dev/null
;;
esac
end_group "Install dependencies"

View File

@@ -279,7 +279,7 @@ then
cache_dir="$HOME/none"
runs_on_pool=$(echo "$CI_JOB_IMAGE" | tr : -)
distro=$(echo "$CI_JOB_IMAGE" | tr : -)
JOBS=$(nproc)
else
echo "Could not identify CI type" >&2
@@ -318,7 +318,7 @@ export DEFAULT_TEST_TARGET=prove
export GIT_TEST_CLONE_2GB=true
export SKIP_DASHED_BUILT_INS=YesPlease
case "$runs_on_pool" in
case "$distro" in
ubuntu-*)
if test "$jobname" = "linux-gcc-default"
then
@@ -340,10 +340,6 @@ ubuntu-*)
# image.
# Keep that in mind when you encounter a broken OS X build!
export LINUX_GIT_LFS_VERSION="1.5.2"
P4_PATH="$HOME/custom/p4"
GIT_LFS_PATH="$HOME/custom/git-lfs"
export PATH="$GIT_LFS_PATH:$P4_PATH:$PATH"
;;
macos-*)
MAKEFLAGS="$MAKEFLAGS PYTHON_PATH=$(which python3)"
@@ -351,12 +347,12 @@ macos-*)
then
MAKEFLAGS="$MAKEFLAGS APPLE_COMMON_CRYPTO_SHA1=Yes"
fi
P4_PATH="$HOME/custom/p4"
export PATH="$P4_PATH:$PATH"
;;
esac
CUSTOM_PATH="${CUSTOM_PATH:-$HOME/path}"
export PATH="$CUSTOM_PATH:$PATH"
case "$jobname" in
linux32)
CC=gcc

color.c
View File

@@ -64,12 +64,16 @@ static int match_word(const char *word, int len, const char *match)
return !strncasecmp(word, match, len) && !match[len];
}
static int get_hex_color(const char *in, unsigned char *out)
static int get_hex_color(const char **inp, int width, unsigned char *out)
{
const char *in = *inp;
unsigned int val;
val = (hexval(in[0]) << 4) | hexval(in[1]);
assert(width == 1 || width == 2);
val = (hexval(in[0]) << 4) | hexval(in[width - 1]);
if (val & ~0xff)
return -1;
*inp += width;
*out = val;
return 0;
}
@@ -135,11 +139,14 @@ static int parse_color(struct color *out, const char *name, int len)
return 0;
}
/* Try a 24-bit RGB value */
if (len == 7 && name[0] == '#') {
if (!get_hex_color(name + 1, &out->red) &&
!get_hex_color(name + 3, &out->green) &&
!get_hex_color(name + 5, &out->blue)) {
/* Try a 24- or 12-bit RGB value prefixed with '#' */
if ((len == 7 || len == 4) && name[0] == '#') {
int width_per_color = (len == 7) ? 2 : 1;
const char *color = name + 1;
if (!get_hex_color(&color, width_per_color, &out->red) &&
!get_hex_color(&color, width_per_color, &out->green) &&
!get_hex_color(&color, width_per_color, &out->blue)) {
out->type = COLOR_RGB;
return 0;
}

View File

@@ -112,7 +112,8 @@ int want_color_fd(int fd, int var);
* Translate a Git color from 'value' into a string that the terminal can
* interpret and store it into 'dst'. The Git color values are of the form
* "foreground [background] [attr]" where fore- and background can be a color
* name ("red"), a RGB code (#0xFF0000) or a 256-color-mode from the terminal.
* name ("red"), a RGB code (#FF0000 or #F00) or a 256-color-mode from the
* terminal.
*/
int color_parse(const char *value, char *dst);
int color_parse_mem(const char *value, int len, char *dst);

View File

@@ -48,7 +48,7 @@ int main(int argc, const char **argv)
setlocale(LC_CTYPE, "");
git_setup_gettext();
initialize_the_repository();
initialize_repository(the_repository);
attr_start();

View File

@@ -31,15 +31,29 @@
# Note that "git" is optional --- '!f() { : commit; ...}; f' would complete
# just like the 'git commit' command.
#
# If you have a command that is not part of git, but you would still
# like completion, you can use __git_complete:
# To add completion for git subcommands that are implemented in external
# scripts, define a function of the form '_git_${subcommand}' while replacing
# all dashes with underscores, and the main git completion will make use of it.
# For example, to add completion for 'git do-stuff' (which could e.g. live
# in /usr/bin/git-do-stuff), name the completion function '_git_do_stuff'.
# See _git_show, _git_bisect etc. below for more examples.
#
# If you have a shell command that is not part of git (and is not called as a
# git subcommand), but you would still like git-style completion for it, use
# __git_complete. For example, to use the same completion as for 'git log' also
# for the 'gl' command:
#
# __git_complete gl git_log
#
# Or if it's a main command (i.e. git or gitk):
# Or if the 'gk' command should be completed the same as 'gitk':
#
# __git_complete gk gitk
#
# The second parameter of __git_complete gives the completion function; it is
# resolved as a function named "$2", or "__$2_main", or "_$2" in that order.
# In the examples above, the actual functions used for completion will be
# _git_log and __gitk_main.
#
# Compatible with bash 3.2.57.
#
# You can set the following environment variables to influence the behavior of
@@ -3581,6 +3595,17 @@ _git_svn ()
fi
}
_git_symbolic_ref () {
case "$cur" in
--*)
__gitcomp_builtin symbolic-ref
return
;;
esac
__git_complete_refs
}
_git_tag ()
{
local i c="$__git_cmd_idx" f=0

View File

@@ -272,6 +272,7 @@ _git ()
{
local _ret=1
local cur cword prev
local __git_repo_path
cur=${words[CURRENT]}
prev=${words[CURRENT-1]}

View File

@@ -25,13 +25,64 @@ void credential_clear(struct credential *c)
free(c->path);
free(c->username);
free(c->password);
free(c->credential);
free(c->oauth_refresh_token);
free(c->authtype);
string_list_clear(&c->helpers, 0);
strvec_clear(&c->wwwauth_headers);
strvec_clear(&c->state_headers);
strvec_clear(&c->state_headers_to_send);
credential_init(c);
}
void credential_next_state(struct credential *c)
{
strvec_clear(&c->state_headers_to_send);
SWAP(c->state_headers, c->state_headers_to_send);
}
void credential_clear_secrets(struct credential *c)
{
FREE_AND_NULL(c->password);
FREE_AND_NULL(c->credential);
}
static void credential_set_capability(struct credential_capability *capa,
enum credential_op_type op_type)
{
switch (op_type) {
case CREDENTIAL_OP_INITIAL:
capa->request_initial = 1;
break;
case CREDENTIAL_OP_HELPER:
capa->request_helper = 1;
break;
case CREDENTIAL_OP_RESPONSE:
capa->response = 1;
break;
}
}
void credential_set_all_capabilities(struct credential *c,
enum credential_op_type op_type)
{
credential_set_capability(&c->capa_authtype, op_type);
credential_set_capability(&c->capa_state, op_type);
}
static void announce_one(struct credential_capability *cc, const char *name, FILE *fp) {
if (cc->request_initial)
fprintf(fp, "capability %s\n", name);
}
void credential_announce_capabilities(struct credential *c, FILE *fp) {
fprintf(fp, "version 0\n");
announce_one(&c->capa_authtype, "authtype", fp);
announce_one(&c->capa_state, "state", fp);
}
int credential_match(const struct credential *want,
const struct credential *have, int match_password)
{
@@ -40,7 +91,8 @@ int credential_match(const struct credential *want,
CHECK(host) &&
CHECK(path) &&
CHECK(username) &&
(!match_password || CHECK(password));
(!match_password || CHECK(password)) &&
(!match_password || CHECK(credential));
#undef CHECK
}
@@ -208,7 +260,26 @@ static void credential_getpass(struct credential *c)
PROMPT_ASKPASS);
}
int credential_read(struct credential *c, FILE *fp)
int credential_has_capability(const struct credential_capability *capa,
enum credential_op_type op_type)
{
/*
* We're checking here if each previous step indicated that we had the
* capability. If it did, then we want to pass it along; conversely, if
* it did not, we don't want to report that to our caller.
*/
switch (op_type) {
case CREDENTIAL_OP_HELPER:
return capa->request_initial;
case CREDENTIAL_OP_RESPONSE:
return capa->request_initial && capa->request_helper;
default:
return 0;
}
}
int credential_read(struct credential *c, FILE *fp,
enum credential_op_type op_type)
{
struct strbuf line = STRBUF_INIT;
@@ -233,6 +304,9 @@ int credential_read(struct credential *c, FILE *fp)
} else if (!strcmp(key, "password")) {
free(c->password);
c->password = xstrdup(value);
} else if (!strcmp(key, "credential")) {
free(c->credential);
c->credential = xstrdup(value);
} else if (!strcmp(key, "protocol")) {
free(c->protocol);
c->protocol = xstrdup(value);
@@ -242,8 +316,19 @@ int credential_read(struct credential *c, FILE *fp)
} else if (!strcmp(key, "path")) {
free(c->path);
c->path = xstrdup(value);
} else if (!strcmp(key, "ephemeral")) {
c->ephemeral = !!git_config_bool("ephemeral", value);
} else if (!strcmp(key, "wwwauth[]")) {
strvec_push(&c->wwwauth_headers, value);
} else if (!strcmp(key, "state[]")) {
strvec_push(&c->state_headers, value);
} else if (!strcmp(key, "capability[]")) {
if (!strcmp(value, "authtype"))
credential_set_capability(&c->capa_authtype, op_type);
else if (!strcmp(value, "state"))
credential_set_capability(&c->capa_state, op_type);
} else if (!strcmp(key, "continue")) {
c->multistage = !!git_config_bool("continue", value);
} else if (!strcmp(key, "password_expiry_utc")) {
errno = 0;
c->password_expiry_utc = parse_timestamp(value, NULL, 10);
@@ -252,6 +337,9 @@ int credential_read(struct credential *c, FILE *fp)
} else if (!strcmp(key, "oauth_refresh_token")) {
free(c->oauth_refresh_token);
c->oauth_refresh_token = xstrdup(value);
} else if (!strcmp(key, "authtype")) {
free(c->authtype);
c->authtype = xstrdup(value);
} else if (!strcmp(key, "url")) {
credential_from_url(c, value);
} else if (!strcmp(key, "quit")) {
@@ -280,8 +368,20 @@ static void credential_write_item(FILE *fp, const char *key, const char *value,
fprintf(fp, "%s=%s\n", key, value);
}
void credential_write(const struct credential *c, FILE *fp)
void credential_write(const struct credential *c, FILE *fp,
enum credential_op_type op_type)
{
if (credential_has_capability(&c->capa_authtype, op_type))
credential_write_item(fp, "capability[]", "authtype", 0);
if (credential_has_capability(&c->capa_state, op_type))
credential_write_item(fp, "capability[]", "state", 0);
if (credential_has_capability(&c->capa_authtype, op_type)) {
credential_write_item(fp, "authtype", c->authtype, 0);
credential_write_item(fp, "credential", c->credential, 0);
if (c->ephemeral)
credential_write_item(fp, "ephemeral", "1", 0);
}
credential_write_item(fp, "protocol", c->protocol, 1);
credential_write_item(fp, "host", c->host, 1);
credential_write_item(fp, "path", c->path, 0);
@@ -295,6 +395,12 @@ void credential_write(const struct credential *c, FILE *fp)
}
for (size_t i = 0; i < c->wwwauth_headers.nr; i++)
credential_write_item(fp, "wwwauth[]", c->wwwauth_headers.v[i], 0);
if (credential_has_capability(&c->capa_state, op_type)) {
if (c->multistage)
credential_write_item(fp, "continue", "1", 0);
for (size_t i = 0; i < c->state_headers_to_send.nr; i++)
credential_write_item(fp, "state[]", c->state_headers_to_send.v[i], 0);
}
}
static int run_credential_helper(struct credential *c,
@@ -317,14 +423,14 @@ static int run_credential_helper(struct credential *c,
fp = xfdopen(helper.in, "w");
sigchain_push(SIGPIPE, SIG_IGN);
credential_write(c, fp);
credential_write(c, fp, want_output ? CREDENTIAL_OP_HELPER : CREDENTIAL_OP_RESPONSE);
fclose(fp);
sigchain_pop(SIGPIPE);
if (want_output) {
int r;
fp = xfdopen(helper.out, "r");
r = credential_read(c, fp);
r = credential_read(c, fp, CREDENTIAL_OP_HELPER);
fclose(fp);
if (r < 0) {
finish_command(&helper);
@@ -357,14 +463,19 @@ static int credential_do(struct credential *c, const char *helper,
return r;
}
void credential_fill(struct credential *c)
void credential_fill(struct credential *c, int all_capabilities)
{
int i;
if (c->username && c->password)
if ((c->username && c->password) || c->credential)
return;
credential_next_state(c);
c->multistage = 0;
credential_apply_config(c);
if (all_capabilities)
credential_set_all_capabilities(c, CREDENTIAL_OP_INITIAL);
for (i = 0; i < c->helpers.nr; i++) {
credential_do(c, c->helpers.items[i].string, "get");
@@ -374,15 +485,17 @@ void credential_fill(struct credential *c)
/* Reset expiry to maintain consistency */
c->password_expiry_utc = TIME_MAX;
}
if (c->username && c->password)
if ((c->username && c->password) || c->credential) {
strvec_clear(&c->wwwauth_headers);
return;
}
if (c->quit)
die("credential helper '%s' told us to quit",
c->helpers.items[i].string);
}
credential_getpass(c);
if (!c->username && !c->password)
if (!c->username && !c->password && !c->credential)
die("unable to get password from user");
}
@@ -392,9 +505,11 @@ void credential_approve(struct credential *c)
if (c->approved)
return;
if (!c->username || !c->password || c->password_expiry_utc < time(NULL))
if (((!c->username || !c->password) && !c->credential) || c->password_expiry_utc < time(NULL))
return;
credential_next_state(c);
credential_apply_config(c);
for (i = 0; i < c->helpers.nr; i++)
@@ -406,6 +521,8 @@ void credential_reject(struct credential *c)
{
int i;
credential_next_state(c);
credential_apply_config(c);
for (i = 0; i < c->helpers.nr; i++)
@@ -413,6 +530,7 @@ void credential_reject(struct credential *c)
FREE_AND_NULL(c->username);
FREE_AND_NULL(c->password);
FREE_AND_NULL(c->credential);
FREE_AND_NULL(c->oauth_refresh_token);
c->password_expiry_utc = TIME_MAX;
c->approved = 0;

View File

@@ -93,6 +93,27 @@
* -----------------------------------------------------------------------
*/
/*
* These values define the kind of operation we're performing and the
* capabilities at each stage. The first is either an external request (via git
* credential fill) or an internal request (e.g., via the HTTP code). The
* second is the call to the credential helper, and the third is the response
* we're providing.
*
* At each stage, we will emit the capability only if the previous stage
* supported it.
*/
enum credential_op_type {
CREDENTIAL_OP_INITIAL = 1,
CREDENTIAL_OP_HELPER = 2,
CREDENTIAL_OP_RESPONSE = 3,
};
struct credential_capability {
unsigned request_initial:1,
request_helper:1,
response:1;
};
/**
* This struct represents a single username/password combination
@@ -123,6 +144,16 @@ struct credential {
*/
struct strvec wwwauth_headers;
/**
* A `strvec` of state headers received from credential helpers.
*/
struct strvec state_headers;
/**
* A `strvec` of state headers to send to credential helpers.
*/
struct strvec state_headers_to_send;
/**
* Internal use only. Keeps track of if we previously matched against a
* WWW-Authenticate header line in order to re-fold future continuation
@@ -131,24 +162,38 @@ struct credential {
unsigned header_is_last_match:1;
unsigned approved:1,
ephemeral:1,
configured:1,
multistage:1,
quit:1,
use_http_path:1,
username_from_proto:1;
struct credential_capability capa_authtype;
struct credential_capability capa_state;
char *username;
char *password;
char *credential;
char *protocol;
char *host;
char *path;
char *oauth_refresh_token;
timestamp_t password_expiry_utc;
/**
* The authorization scheme to use. If this is NULL, libcurl is free to
* negotiate any scheme it likes.
*/
char *authtype;
};
#define CREDENTIAL_INIT { \
.helpers = STRING_LIST_INIT_DUP, \
.password_expiry_utc = TIME_MAX, \
.wwwauth_headers = STRVEC_INIT, \
.state_headers = STRVEC_INIT, \
.state_headers_to_send = STRVEC_INIT, \
}
/* Initialize a credential structure, setting all fields to empty. */
@@ -167,8 +212,11 @@ void credential_clear(struct credential *);
* returns, the username and password fields of the credential are
* guaranteed to be non-NULL. If an error occurs, the function will
* die().
*
* If all_capabilities is set, this is an internal user that is prepared
* to deal with all known capabilities, and we should advertise that fact.
*/
void credential_fill(struct credential *);
void credential_fill(struct credential *, int all_capabilities);
/**
* Inform the credential subsystem that the provided credentials
@@ -191,8 +239,46 @@ void credential_approve(struct credential *);
*/
void credential_reject(struct credential *);
int credential_read(struct credential *, FILE *);
void credential_write(const struct credential *, FILE *);
/**
* Enable all of the supported credential flags in this credential.
*/
void credential_set_all_capabilities(struct credential *c,
enum credential_op_type op_type);
/**
* Clear the secrets in this credential, but leave other data intact.
*
* This is useful for resetting credentials in preparation for a subsequent
* stage of filling.
*/
void credential_clear_secrets(struct credential *c);
/**
* Print a list of supported capabilities and version numbers to standard
* output.
*/
void credential_announce_capabilities(struct credential *c, FILE *fp);
/**
* Prepares the credential for the next iteration of the helper protocol by
* updating the state headers to send with the ones read by the last iteration
* of the protocol.
*
* Except for internal callers, this should be called exactly once between
* reading credentials with `credential_fill` and writing them.
*/
void credential_next_state(struct credential *c);
/**
* Return true if the capability is enabled for an operation of op_type.
*/
int credential_has_capability(const struct credential_capability *capa,
enum credential_op_type op_type);
int credential_read(struct credential *, FILE *,
enum credential_op_type);
void credential_write(const struct credential *, FILE *,
enum credential_op_type);
/*
* Parse a url into a credential struct, replacing any existing contents.

View File

@@ -660,7 +660,6 @@ int do_diff_cache(const struct object_id *tree_oid, struct diff_options *opt)
repo_init_revisions(opt->repo, &revs, NULL);
copy_pathspec(&revs.prune_data, &opt->pathspec);
diff_setup_done(&revs.diffopt);
revs.diffopt = *opt;
if (diff_cache(&revs, tree_oid, NULL, 1))

http.c
View File

@@ -128,7 +128,6 @@ static unsigned long empty_auth_useless =
| CURLAUTH_DIGEST;
static struct curl_slist *pragma_header;
static struct curl_slist *no_pragma_header;
static struct string_list extra_http_headers = STRING_LIST_INIT_DUP;
static struct curl_slist *host_resolutions;
@@ -299,6 +298,11 @@ size_t fwrite_null(char *ptr UNUSED, size_t eltsize UNUSED, size_t nmemb,
return nmemb;
}
static struct curl_slist *object_request_headers(void)
{
return curl_slist_append(http_copy_default_headers(), "Pragma:");
}
static void closedown_active_slot(struct active_request_slot *slot)
{
active_requests--;
@@ -557,18 +561,34 @@ static int curl_empty_auth_enabled(void)
return 0;
}
struct curl_slist *http_append_auth_header(const struct credential *c,
struct curl_slist *headers)
{
if (c->authtype && c->credential) {
struct strbuf auth = STRBUF_INIT;
strbuf_addf(&auth, "Authorization: %s %s",
c->authtype, c->credential);
headers = curl_slist_append(headers, auth.buf);
strbuf_release(&auth);
}
return headers;
}
static void init_curl_http_auth(CURL *result)
{
if (!http_auth.username || !*http_auth.username) {
if ((!http_auth.username || !*http_auth.username) &&
(!http_auth.credential || !*http_auth.credential)) {
if (curl_empty_auth_enabled())
curl_easy_setopt(result, CURLOPT_USERPWD, ":");
return;
}
credential_fill(&http_auth);
credential_fill(&http_auth, 1);
curl_easy_setopt(result, CURLOPT_USERNAME, http_auth.username);
curl_easy_setopt(result, CURLOPT_PASSWORD, http_auth.password);
if (http_auth.password) {
curl_easy_setopt(result, CURLOPT_USERNAME, http_auth.username);
curl_easy_setopt(result, CURLOPT_PASSWORD, http_auth.password);
}
}
/* *var must be free-able */
@@ -582,17 +602,22 @@ static void var_override(const char **var, char *value)
static void set_proxyauth_name_password(CURL *result)
{
if (proxy_auth.password) {
curl_easy_setopt(result, CURLOPT_PROXYUSERNAME,
proxy_auth.username);
curl_easy_setopt(result, CURLOPT_PROXYPASSWORD,
proxy_auth.password);
} else if (proxy_auth.authtype && proxy_auth.credential) {
curl_easy_setopt(result, CURLOPT_PROXYHEADER,
http_append_auth_header(&proxy_auth, NULL));
}
}
static void init_curl_proxy_auth(CURL *result)
{
if (proxy_auth.username) {
if (!proxy_auth.password)
credential_fill(&proxy_auth);
if (!proxy_auth.password && !proxy_auth.credential)
credential_fill(&proxy_auth, 1);
set_proxyauth_name_password(result);
}
@@ -626,7 +651,7 @@ static int has_cert_password(void)
cert_auth.host = xstrdup("");
cert_auth.username = xstrdup("");
cert_auth.path = xstrdup(ssl_cert);
credential_fill(&cert_auth);
credential_fill(&cert_auth, 0);
}
return 1;
}
@@ -641,7 +666,7 @@ static int has_proxy_cert_password(void)
proxy_cert_auth.host = xstrdup("");
proxy_cert_auth.username = xstrdup("");
proxy_cert_auth.path = xstrdup(http_proxy_ssl_cert);
credential_fill(&proxy_cert_auth);
credential_fill(&proxy_cert_auth, 0);
}
return 1;
}
@@ -1275,8 +1300,6 @@ void http_init(struct remote *remote, const char *url, int proactive_auth)
pragma_header = curl_slist_append(http_copy_default_headers(),
"Pragma: no-cache");
no_pragma_header = curl_slist_append(http_copy_default_headers(),
"Pragma:");
{
char *http_max_requests = getenv("GIT_HTTP_MAX_REQUESTS");
@@ -1360,9 +1383,6 @@ void http_cleanup(void)
curl_slist_free_all(pragma_header);
pragma_header = NULL;
curl_slist_free_all(no_pragma_header);
no_pragma_header = NULL;
curl_slist_free_all(host_resolutions);
host_resolutions = NULL;
@@ -1470,7 +1490,7 @@ struct active_request_slot *get_active_slot(void)
curl_easy_setopt(slot->curl, CURLOPT_IPRESOLVE, git_curl_ipresolve);
curl_easy_setopt(slot->curl, CURLOPT_HTTPAUTH, http_auth_methods);
if (http_auth.password || curl_empty_auth_enabled())
if (http_auth.password || http_auth.credential || curl_empty_auth_enabled())
init_curl_http_auth(slot->curl);
return slot;
@@ -1759,7 +1779,12 @@ static int handle_curl_result(struct slot_results *results)
} else if (missing_target(results))
return HTTP_MISSING_TARGET;
else if (results->http_code == 401) {
if (http_auth.username && http_auth.password) {
if ((http_auth.username && http_auth.password) ||
(http_auth.authtype && http_auth.credential)) {
if (http_auth.multistage) {
credential_clear_secrets(&http_auth);
return HTTP_REAUTH;
}
credential_reject(&http_auth);
return HTTP_NOAUTH;
} else {
@@ -2067,11 +2092,15 @@ static int http_request(const char *url,
/* Add additional headers here */
if (options && options->extra_headers) {
const struct string_list_item *item;
for_each_string_list_item(item, options->extra_headers) {
headers = curl_slist_append(headers, item->string);
if (options && options->extra_headers) {
for_each_string_list_item(item, options->extra_headers) {
headers = curl_slist_append(headers, item->string);
}
}
}
headers = http_append_auth_header(&http_auth, headers);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, headers);
curl_easy_setopt(slot->curl, CURLOPT_ENCODING, "");
@@ -2153,6 +2182,7 @@ static int http_request_reauth(const char *url,
void *result, int target,
struct http_get_options *options)
{
int i = 3;
int ret = http_request(url, result, target, options);
if (ret != HTTP_OK && ret != HTTP_REAUTH)
@@ -2166,35 +2196,35 @@ static int http_request_reauth(const char *url,
}
}
if (ret != HTTP_REAUTH)
return ret;
while (ret == HTTP_REAUTH && --i) {
/*
* The previous request may have put cruft into our output stream; we
* should clear it out before making our next request.
*/
switch (target) {
case HTTP_REQUEST_STRBUF:
strbuf_reset(result);
break;
case HTTP_REQUEST_FILE:
if (fflush(result)) {
error_errno("unable to flush a file");
return HTTP_START_FAILED;
}
rewind(result);
if (ftruncate(fileno(result), 0) < 0) {
error_errno("unable to truncate a file");
return HTTP_START_FAILED;
}
break;
default:
BUG("Unknown http_request target");
}
/*
* The previous request may have put cruft into our output stream; we
* should clear it out before making our next request.
*/
switch (target) {
case HTTP_REQUEST_STRBUF:
strbuf_reset(result);
break;
case HTTP_REQUEST_FILE:
if (fflush(result)) {
error_errno("unable to flush a file");
return HTTP_START_FAILED;
}
rewind(result);
if (ftruncate(fileno(result), 0) < 0) {
error_errno("unable to truncate a file");
return HTTP_START_FAILED;
}
break;
default:
BUG("Unknown http_request target");
credential_fill(&http_auth, 1);
ret = http_request(url, result, target, options);
}
credential_fill(&http_auth);
return http_request(url, result, target, options);
return ret;
}
int http_get_strbuf(const char *url,
@@ -2371,6 +2401,7 @@ void release_http_pack_request(struct http_pack_request *preq)
}
preq->slot = NULL;
strbuf_release(&preq->tmpfile);
curl_slist_free_all(preq->headers);
free(preq->url);
free(preq);
}
@@ -2455,11 +2486,11 @@ struct http_pack_request *new_direct_http_pack_request(
}
preq->slot = get_active_slot();
preq->headers = object_request_headers();
curl_easy_setopt(preq->slot->curl, CURLOPT_WRITEDATA, preq->packfile);
curl_easy_setopt(preq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
curl_easy_setopt(preq->slot->curl, CURLOPT_URL, preq->url);
curl_easy_setopt(preq->slot->curl, CURLOPT_HTTPHEADER,
no_pragma_header);
curl_easy_setopt(preq->slot->curl, CURLOPT_HTTPHEADER, preq->headers);
/*
* If there is data present from a previous transfer attempt,
@@ -2625,13 +2656,14 @@ struct http_object_request *new_http_object_request(const char *base_url,
}
freq->slot = get_active_slot();
freq->headers = object_request_headers();
curl_easy_setopt(freq->slot->curl, CURLOPT_WRITEDATA, freq);
curl_easy_setopt(freq->slot->curl, CURLOPT_FAILONERROR, 0);
curl_easy_setopt(freq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
curl_easy_setopt(freq->slot->curl, CURLOPT_ERRORBUFFER, freq->errorstr);
curl_easy_setopt(freq->slot->curl, CURLOPT_URL, freq->url);
curl_easy_setopt(freq->slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
curl_easy_setopt(freq->slot->curl, CURLOPT_HTTPHEADER, freq->headers);
/*
* If we have successfully processed data from a previous fetch
@@ -2719,5 +2751,6 @@ void release_http_object_request(struct http_object_request *freq)
release_active_slot(freq->slot);
freq->slot = NULL;
}
curl_slist_free_all(freq->headers);
strbuf_release(&freq->tmpfile);
}

http.h

@@ -175,6 +175,9 @@ int http_get_file(const char *url, const char *filename,
int http_fetch_ref(const char *base, struct ref *ref);
struct curl_slist *http_append_auth_header(const struct credential *c,
struct curl_slist *headers);
/* Helpers for fetching packs */
int http_get_info_packs(const char *base_url,
struct packed_git **packs_head);
@@ -196,6 +199,7 @@ struct http_pack_request {
FILE *packfile;
struct strbuf tmpfile;
struct active_request_slot *slot;
struct curl_slist *headers;
};
struct http_pack_request *new_http_pack_request(
@@ -229,6 +233,7 @@ struct http_object_request {
int zret;
int rename;
struct active_request_slot *slot;
struct curl_slist *headers;
};
struct http_object_request *new_http_object_request(


@@ -917,7 +917,7 @@ static void server_fill_credential(struct imap_server_conf *srvc, struct credent
cred->username = xstrdup_or_null(srvc->user);
cred->password = xstrdup_or_null(srvc->pass);
credential_fill(cred);
credential_fill(cred, 1);
if (!srvc->user)
srvc->user = xstrdup(cred->username);


@@ -72,7 +72,6 @@ gen_cmd_aux () {
nested=0
nested_min=100
# Step 1:
#
# Increase/decrease "start"/"end" indices respectively to get rid of
@@ -87,7 +86,7 @@ gen_cmd_aux () {
IFS=#
for c in $(echo "$LAYOUT" | sed 's:.:&#:g')
do
if test "$c" = " "
if test -z "$c" || test "$c" = " "
then
continue
fi


@@ -11,7 +11,8 @@ int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
struct commit_graph *g;
initialize_the_repository();
initialize_repository(the_repository);
/*
* Initialize the_repository with commit-graph settings that would
* normally be read from the repository's gitdir. We want to avoid


@@ -172,32 +172,30 @@ static int should_write_log(struct ref_store *refs, const char *refname)
}
}
static void fill_reftable_log_record(struct reftable_log_record *log)
static void fill_reftable_log_record(struct reftable_log_record *log, const struct ident_split *split)
{
const char *info = git_committer_info(0);
struct ident_split split = {0};
const char *tz_begin;
int sign = 1;
if (split_ident_line(&split, info, strlen(info)))
BUG("failed splitting committer info");
reftable_log_record_release(log);
log->value_type = REFTABLE_LOG_UPDATE;
log->value.update.name =
xstrndup(split.name_begin, split.name_end - split.name_begin);
xstrndup(split->name_begin, split->name_end - split->name_begin);
log->value.update.email =
xstrndup(split.mail_begin, split.mail_end - split.mail_begin);
log->value.update.time = atol(split.date_begin);
if (*split.tz_begin == '-') {
xstrndup(split->mail_begin, split->mail_end - split->mail_begin);
log->value.update.time = atol(split->date_begin);
tz_begin = split->tz_begin;
if (*tz_begin == '-') {
sign = -1;
split.tz_begin++;
tz_begin++;
}
if (*split.tz_begin == '+') {
if (*tz_begin == '+') {
sign = 1;
split.tz_begin++;
tz_begin++;
}
log->value.update.tz_offset = sign * atoi(split.tz_begin);
log->value.update.tz_offset = sign * atoi(tz_begin);
}
static int read_ref_without_reload(struct reftable_stack *stack,
@@ -1021,9 +1019,15 @@ static int write_transaction_table(struct reftable_writer *writer, void *cb_data
reftable_stack_merged_table(arg->stack);
uint64_t ts = reftable_stack_next_update_index(arg->stack);
struct reftable_log_record *logs = NULL;
struct ident_split committer_ident = {0};
size_t logs_nr = 0, logs_alloc = 0, i;
const char *committer_info;
int ret = 0;
committer_info = git_committer_info(0);
if (split_ident_line(&committer_ident, committer_info, strlen(committer_info)))
BUG("failed splitting committer info");
QSORT(arg->updates, arg->updates_nr, transaction_update_cmp);
reftable_writer_set_limits(writer, ts, ts);
@@ -1089,7 +1093,7 @@ static int write_transaction_table(struct reftable_writer *writer, void *cb_data
log = &logs[logs_nr++];
memset(log, 0, sizeof(*log));
fill_reftable_log_record(log);
fill_reftable_log_record(log, &committer_ident);
log->update_index = ts;
log->refname = xstrdup(u->refname);
memcpy(log->value.update.new_hash, u->new_oid.hash, GIT_MAX_RAWSZ);
@@ -1227,6 +1231,7 @@ static int reftable_be_pack_refs(struct ref_store *ref_store,
struct write_create_symref_arg {
struct reftable_ref_store *refs;
struct reftable_stack *stack;
struct strbuf *err;
const char *refname;
const char *target;
const char *logmsg;
@@ -1242,13 +1247,20 @@ static int write_create_symref_table(struct reftable_writer *writer, void *cb_da
.value.symref = (char *)create->target,
.update_index = ts,
};
struct ident_split committer_ident = {0};
struct reftable_log_record log = {0};
struct object_id new_oid;
struct object_id old_oid;
const char *committer_info;
int ret;
reftable_writer_set_limits(writer, ts, ts);
ret = refs_verify_refname_available(&create->refs->base, create->refname,
NULL, NULL, create->err);
if (ret < 0)
return ret;
ret = reftable_writer_add_ref(writer, &ref);
if (ret)
return ret;
@@ -1267,7 +1279,11 @@ static int write_create_symref_table(struct reftable_writer *writer, void *cb_da
!should_write_log(&create->refs->base, create->refname))
return 0;
fill_reftable_log_record(&log);
committer_info = git_committer_info(0);
if (split_ident_line(&committer_ident, committer_info, strlen(committer_info)))
BUG("failed splitting committer info");
fill_reftable_log_record(&log, &committer_ident);
log.refname = xstrdup(create->refname);
log.update_index = ts;
log.value.update.message = xstrndup(create->logmsg,
@@ -1290,12 +1306,14 @@ static int reftable_be_create_symref(struct ref_store *ref_store,
struct reftable_ref_store *refs =
reftable_be_downcast(ref_store, REF_STORE_WRITE, "create_symref");
struct reftable_stack *stack = stack_for(refs, refname, &refname);
struct strbuf err = STRBUF_INIT;
struct write_create_symref_arg arg = {
.refs = refs,
.stack = stack,
.refname = refname,
.target = target,
.logmsg = logmsg,
.err = &err,
};
int ret;
@@ -1311,9 +1329,15 @@ static int reftable_be_create_symref(struct ref_store *ref_store,
done:
assert(ret != REFTABLE_API_ERROR);
if (ret)
error("unable to write symref for %s: %s", refname,
reftable_error_str(ret));
if (ret) {
if (err.len)
error("%s", err.buf);
else
error("unable to write symref for %s: %s", refname,
reftable_error_str(ret));
}
strbuf_release(&err);
return ret;
}
@@ -1335,10 +1359,16 @@ static int write_copy_table(struct reftable_writer *writer, void *cb_data)
struct reftable_log_record old_log = {0}, *logs = NULL;
struct reftable_iterator it = {0};
struct string_list skip = STRING_LIST_INIT_NODUP;
struct ident_split committer_ident = {0};
struct strbuf errbuf = STRBUF_INIT;
size_t logs_nr = 0, logs_alloc = 0, i;
const char *committer_info;
int ret;
committer_info = git_committer_info(0);
if (split_ident_line(&committer_ident, committer_info, strlen(committer_info)))
BUG("failed splitting committer info");
if (reftable_stack_read_ref(arg->stack, arg->oldname, &old_ref)) {
ret = error(_("refname %s not found"), arg->oldname);
goto done;
@@ -1361,7 +1391,8 @@ static int write_copy_table(struct reftable_writer *writer, void *cb_data)
/*
* Verify that the new refname is available.
*/
string_list_insert(&skip, arg->oldname);
if (arg->delete_old)
string_list_insert(&skip, arg->oldname);
ret = refs_verify_refname_available(&arg->refs->base, arg->newname,
NULL, &skip, &errbuf);
if (ret < 0) {
@@ -1412,7 +1443,7 @@ static int write_copy_table(struct reftable_writer *writer, void *cb_data)
ALLOC_GROW(logs, logs_nr + 1, logs_alloc);
memset(&logs[logs_nr], 0, sizeof(logs[logs_nr]));
fill_reftable_log_record(&logs[logs_nr]);
fill_reftable_log_record(&logs[logs_nr], &committer_ident);
logs[logs_nr].refname = (char *)arg->newname;
logs[logs_nr].update_index = deletion_ts;
logs[logs_nr].value.update.message =
@@ -1444,7 +1475,7 @@ static int write_copy_table(struct reftable_writer *writer, void *cb_data)
*/
ALLOC_GROW(logs, logs_nr + 1, logs_alloc);
memset(&logs[logs_nr], 0, sizeof(logs[logs_nr]));
fill_reftable_log_record(&logs[logs_nr]);
fill_reftable_log_record(&logs[logs_nr], &committer_ident);
logs[logs_nr].refname = (char *)arg->newname;
logs[logs_nr].update_index = creation_ts;
logs[logs_nr].value.update.message =


@@ -76,6 +76,10 @@ void block_writer_init(struct block_writer *bw, uint8_t typ, uint8_t *buf,
bw->entries = 0;
bw->restart_len = 0;
bw->last_key.len = 0;
if (!bw->zstream) {
REFTABLE_CALLOC_ARRAY(bw->zstream, 1);
deflateInit(bw->zstream, 9);
}
}
uint8_t block_writer_type(struct block_writer *bw)
@@ -139,39 +143,52 @@ int block_writer_finish(struct block_writer *w)
w->next += 2;
put_be24(w->buf + 1 + w->header_off, w->next);
/*
* Log records are stored zlib-compressed. Note that the compression
* also spans over the restart points we have just written.
*/
if (block_writer_type(w) == BLOCK_TYPE_LOG) {
int block_header_skip = 4 + w->header_off;
uLongf src_len = w->next - block_header_skip;
uLongf dest_cap = src_len * 1.001 + 12;
uint8_t *compressed;
uLongf src_len = w->next - block_header_skip, compressed_len;
int ret;
REFTABLE_ALLOC_ARRAY(compressed, dest_cap);
ret = deflateReset(w->zstream);
if (ret != Z_OK)
return REFTABLE_ZLIB_ERROR;
while (1) {
uLongf out_dest_len = dest_cap;
int zresult = compress2(compressed, &out_dest_len,
w->buf + block_header_skip,
src_len, 9);
if (zresult == Z_BUF_ERROR && dest_cap < LONG_MAX) {
dest_cap *= 2;
compressed =
reftable_realloc(compressed, dest_cap);
if (compressed)
continue;
}
/*
* Precompute the upper bound of how many bytes the compressed
* data may end up with. Combined with `Z_FINISH`, `deflate()`
* is guaranteed to return `Z_STREAM_END`.
*/
compressed_len = deflateBound(w->zstream, src_len);
REFTABLE_ALLOC_GROW(w->compressed, compressed_len, w->compressed_cap);
if (Z_OK != zresult) {
reftable_free(compressed);
return REFTABLE_ZLIB_ERROR;
}
w->zstream->next_out = w->compressed;
w->zstream->avail_out = compressed_len;
w->zstream->next_in = w->buf + block_header_skip;
w->zstream->avail_in = src_len;
memcpy(w->buf + block_header_skip, compressed,
out_dest_len);
w->next = out_dest_len + block_header_skip;
reftable_free(compressed);
break;
}
/*
* We want to perform all decompression in a single step, which
* is why we can pass Z_FINISH here. As we have precomputed the
* deflated buffer's size via `deflateBound()` this function is
* guaranteed to succeed according to the zlib documentation.
*/
ret = deflate(w->zstream, Z_FINISH);
if (ret != Z_STREAM_END)
return REFTABLE_ZLIB_ERROR;
/*
* Overwrite the uncompressed data we have already written and
* adjust the `next` pointer to point right after the
* compressed data.
*/
memcpy(w->buf + block_header_skip, w->compressed,
w->zstream->total_out);
w->next = w->zstream->total_out + block_header_skip;
}
return w->next;
}
@@ -514,7 +531,10 @@ int block_iter_seek_key(struct block_iter *it, const struct block_reader *br,
void block_writer_release(struct block_writer *bw)
{
deflateEnd(bw->zstream);
FREE_AND_NULL(bw->zstream);
FREE_AND_NULL(bw->restarts);
FREE_AND_NULL(bw->compressed);
strbuf_release(&bw->last_key);
/* the block is not owned. */
}


@@ -18,6 +18,10 @@ license that can be found in the LICENSE file or at
* allocation overhead.
*/
struct block_writer {
z_stream *zstream;
unsigned char *compressed;
size_t compressed_cap;
uint8_t *buf;
uint32_t block_size;


@@ -27,8 +27,6 @@ const char *reftable_error_str(int err)
return "misuse of the reftable API";
case REFTABLE_ZLIB_ERROR:
return "zlib failure";
case REFTABLE_NAME_CONFLICT:
return "file/directory conflict";
case REFTABLE_EMPTY_TABLE_ERROR:
return "wrote empty table";
case REFTABLE_REFNAME_ERROR:


@@ -1,206 +0,0 @@
/*
Copyright 2020 Google LLC
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file or at
https://developers.google.com/open-source/licenses/bsd
*/
#include "system.h"
#include "reftable-error.h"
#include "basics.h"
#include "refname.h"
#include "reftable-iterator.h"
struct refname_needle_lesseq_args {
char **haystack;
const char *needle;
};
static int refname_needle_lesseq(size_t k, void *_args)
{
struct refname_needle_lesseq_args *args = _args;
return strcmp(args->needle, args->haystack[k]) <= 0;
}
static int modification_has_ref(struct modification *mod, const char *name)
{
struct reftable_ref_record ref = { NULL };
int err = 0;
if (mod->add_len > 0) {
struct refname_needle_lesseq_args args = {
.haystack = mod->add,
.needle = name,
};
size_t idx = binsearch(mod->add_len, refname_needle_lesseq, &args);
if (idx < mod->add_len && !strcmp(mod->add[idx], name))
return 0;
}
if (mod->del_len > 0) {
struct refname_needle_lesseq_args args = {
.haystack = mod->del,
.needle = name,
};
size_t idx = binsearch(mod->del_len, refname_needle_lesseq, &args);
if (idx < mod->del_len && !strcmp(mod->del[idx], name))
return 1;
}
err = reftable_table_read_ref(&mod->tab, name, &ref);
reftable_ref_record_release(&ref);
return err;
}
static void modification_release(struct modification *mod)
{
/* don't delete the strings themselves; they're owned by ref records.
*/
FREE_AND_NULL(mod->add);
FREE_AND_NULL(mod->del);
mod->add_len = 0;
mod->del_len = 0;
}
static int modification_has_ref_with_prefix(struct modification *mod,
const char *prefix)
{
struct reftable_iterator it = { NULL };
struct reftable_ref_record ref = { NULL };
int err = 0;
if (mod->add_len > 0) {
struct refname_needle_lesseq_args args = {
.haystack = mod->add,
.needle = prefix,
};
size_t idx = binsearch(mod->add_len, refname_needle_lesseq, &args);
if (idx < mod->add_len &&
!strncmp(prefix, mod->add[idx], strlen(prefix)))
goto done;
}
err = reftable_table_seek_ref(&mod->tab, &it, prefix);
if (err)
goto done;
while (1) {
err = reftable_iterator_next_ref(&it, &ref);
if (err)
goto done;
if (mod->del_len > 0) {
struct refname_needle_lesseq_args args = {
.haystack = mod->del,
.needle = ref.refname,
};
size_t idx = binsearch(mod->del_len, refname_needle_lesseq, &args);
if (idx < mod->del_len &&
!strcmp(ref.refname, mod->del[idx]))
continue;
}
if (strncmp(ref.refname, prefix, strlen(prefix))) {
err = 1;
goto done;
}
err = 0;
goto done;
}
done:
reftable_ref_record_release(&ref);
reftable_iterator_destroy(&it);
return err;
}
static int validate_refname(const char *name)
{
while (1) {
char *next = strchr(name, '/');
if (!*name) {
return REFTABLE_REFNAME_ERROR;
}
if (!next) {
return 0;
}
if (next - name == 0 || (next - name == 1 && *name == '.') ||
(next - name == 2 && name[0] == '.' && name[1] == '.'))
return REFTABLE_REFNAME_ERROR;
name = next + 1;
}
return 0;
}
int validate_ref_record_addition(struct reftable_table tab,
struct reftable_ref_record *recs, size_t sz)
{
struct modification mod = {
.tab = tab,
.add = reftable_calloc(sz, sizeof(*mod.add)),
.del = reftable_calloc(sz, sizeof(*mod.del)),
};
int i = 0;
int err = 0;
for (; i < sz; i++) {
if (reftable_ref_record_is_deletion(&recs[i])) {
mod.del[mod.del_len++] = recs[i].refname;
} else {
mod.add[mod.add_len++] = recs[i].refname;
}
}
err = modification_validate(&mod);
modification_release(&mod);
return err;
}
static void strbuf_trim_component(struct strbuf *sl)
{
while (sl->len > 0) {
int is_slash = (sl->buf[sl->len - 1] == '/');
strbuf_setlen(sl, sl->len - 1);
if (is_slash)
break;
}
}
int modification_validate(struct modification *mod)
{
struct strbuf slashed = STRBUF_INIT;
int err = 0;
int i = 0;
for (; i < mod->add_len; i++) {
err = validate_refname(mod->add[i]);
if (err)
goto done;
strbuf_reset(&slashed);
strbuf_addstr(&slashed, mod->add[i]);
strbuf_addstr(&slashed, "/");
err = modification_has_ref_with_prefix(mod, slashed.buf);
if (err == 0) {
err = REFTABLE_NAME_CONFLICT;
goto done;
}
if (err < 0)
goto done;
strbuf_reset(&slashed);
strbuf_addstr(&slashed, mod->add[i]);
while (slashed.len) {
strbuf_trim_component(&slashed);
err = modification_has_ref(mod, slashed.buf);
if (err == 0) {
err = REFTABLE_NAME_CONFLICT;
goto done;
}
if (err < 0)
goto done;
}
}
err = 0;
done:
strbuf_release(&slashed);
return err;
}


@@ -1,29 +0,0 @@
/*
Copyright 2020 Google LLC
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file or at
https://developers.google.com/open-source/licenses/bsd
*/
#ifndef REFNAME_H
#define REFNAME_H
#include "reftable-record.h"
#include "reftable-generic.h"
struct modification {
struct reftable_table tab;
char **add;
size_t add_len;
char **del;
size_t del_len;
};
int validate_ref_record_addition(struct reftable_table tab,
struct reftable_ref_record *recs, size_t sz);
int modification_validate(struct modification *mod);
#endif


@@ -1,101 +0,0 @@
/*
Copyright 2020 Google LLC
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file or at
https://developers.google.com/open-source/licenses/bsd
*/
#include "basics.h"
#include "block.h"
#include "blocksource.h"
#include "reader.h"
#include "record.h"
#include "refname.h"
#include "reftable-error.h"
#include "reftable-writer.h"
#include "system.h"
#include "test_framework.h"
#include "reftable-tests.h"
struct testcase {
char *add;
char *del;
int error_code;
};
static void test_conflict(void)
{
struct reftable_write_options opts = { 0 };
struct strbuf buf = STRBUF_INIT;
struct reftable_writer *w =
reftable_new_writer(&strbuf_add_void, &noop_flush, &buf, &opts);
struct reftable_ref_record rec = {
.refname = "a/b",
.value_type = REFTABLE_REF_SYMREF,
.value.symref = "destination", /* make sure it's not a symref.
*/
.update_index = 1,
};
int err;
int i;
struct reftable_block_source source = { NULL };
struct reftable_reader *rd = NULL;
struct reftable_table tab = { NULL };
struct testcase cases[] = {
{ "a/b/c", NULL, REFTABLE_NAME_CONFLICT },
{ "b", NULL, 0 },
{ "a", NULL, REFTABLE_NAME_CONFLICT },
{ "a", "a/b", 0 },
{ "p/", NULL, REFTABLE_REFNAME_ERROR },
{ "p//q", NULL, REFTABLE_REFNAME_ERROR },
{ "p/./q", NULL, REFTABLE_REFNAME_ERROR },
{ "p/../q", NULL, REFTABLE_REFNAME_ERROR },
{ "a/b/c", "a/b", 0 },
{ NULL, "a//b", 0 },
};
reftable_writer_set_limits(w, 1, 1);
err = reftable_writer_add_ref(w, &rec);
EXPECT_ERR(err);
err = reftable_writer_close(w);
EXPECT_ERR(err);
reftable_writer_free(w);
block_source_from_strbuf(&source, &buf);
err = reftable_new_reader(&rd, &source, "filename");
EXPECT_ERR(err);
reftable_table_from_reader(&tab, rd);
for (i = 0; i < ARRAY_SIZE(cases); i++) {
struct modification mod = {
.tab = tab,
};
if (cases[i].add) {
mod.add = &cases[i].add;
mod.add_len = 1;
}
if (cases[i].del) {
mod.del = &cases[i].del;
mod.del_len = 1;
}
err = modification_validate(&mod);
EXPECT(err == cases[i].error_code);
}
reftable_reader_free(rd);
strbuf_release(&buf);
}
int refname_test_main(int argc, const char *argv[])
{
RUN_TEST(test_conflict);
return 0;
}


@@ -48,9 +48,6 @@ enum reftable_error {
/* Wrote a table without blocks. */
REFTABLE_EMPTY_TABLE_ERROR = -8,
/* Dir/file conflict. */
REFTABLE_NAME_CONFLICT = -9,
/* Invalid ref name. */
REFTABLE_REFNAME_ERROR = -10,


@@ -14,7 +14,6 @@ int block_test_main(int argc, const char **argv);
int merged_test_main(int argc, const char **argv);
int pq_test_main(int argc, const char **argv);
int record_test_main(int argc, const char **argv);
int refname_test_main(int argc, const char **argv);
int readwrite_test_main(int argc, const char **argv);
int stack_test_main(int argc, const char **argv);
int tree_test_main(int argc, const char **argv);


@@ -38,10 +38,6 @@ struct reftable_write_options {
/* Default mode for creating files. If unset, use 0666 (+umask) */
unsigned int default_permissions;
/* boolean: do not check ref names for validity or dir/file conflicts.
*/
unsigned skip_name_check : 1;
/* boolean: copy log messages exactly. If unset, check that the message
* is a single line, and add '\n' if missing.
*/


@@ -12,8 +12,8 @@ license that can be found in the LICENSE file or at
#include "system.h"
#include "merged.h"
#include "reader.h"
#include "refname.h"
#include "reftable-error.h"
#include "reftable-generic.h"
#include "reftable-record.h"
#include "reftable-merged.h"
#include "writer.h"
@@ -27,8 +27,6 @@ static int stack_write_compact(struct reftable_stack *st,
struct reftable_writer *wr,
size_t first, size_t last,
struct reftable_log_expiry_config *config);
static int stack_check_addition(struct reftable_stack *st,
const char *new_tab_name);
static void reftable_addition_close(struct reftable_addition *add);
static int reftable_stack_reload_maybe_reuse(struct reftable_stack *st,
int reuse_open);
@@ -787,10 +785,6 @@ int reftable_addition_add(struct reftable_addition *add,
goto done;
}
err = stack_check_addition(add->stack, get_tempfile_path(tab_file));
if (err < 0)
goto done;
if (wr->min_update_index < add->next_update_index) {
err = REFTABLE_API_ERROR;
goto done;
@@ -1355,65 +1349,6 @@ int reftable_stack_read_log(struct reftable_stack *st, const char *refname,
return err;
}
static int stack_check_addition(struct reftable_stack *st,
const char *new_tab_name)
{
int err = 0;
struct reftable_block_source src = { NULL };
struct reftable_reader *rd = NULL;
struct reftable_table tab = { NULL };
struct reftable_ref_record *refs = NULL;
struct reftable_iterator it = { NULL };
int cap = 0;
int len = 0;
int i = 0;
if (st->config.skip_name_check)
return 0;
err = reftable_block_source_from_file(&src, new_tab_name);
if (err < 0)
goto done;
err = reftable_new_reader(&rd, &src, new_tab_name);
if (err < 0)
goto done;
err = reftable_reader_seek_ref(rd, &it, "");
if (err > 0) {
err = 0;
goto done;
}
if (err < 0)
goto done;
while (1) {
struct reftable_ref_record ref = { NULL };
err = reftable_iterator_next_ref(&it, &ref);
if (err > 0)
break;
if (err < 0)
goto done;
REFTABLE_ALLOC_GROW(refs, len + 1, cap);
refs[len++] = ref;
}
reftable_table_from_merged_table(&tab, reftable_stack_merged_table(st));
err = validate_ref_record_addition(tab, refs, len);
done:
for (i = 0; i < len; i++) {
reftable_ref_record_release(&refs[i]);
}
free(refs);
reftable_iterator_destroy(&it);
reftable_reader_free(rd);
return err;
}
static int is_table_name(const char *s)
{
const char *dot = strrchr(s, '.');


@@ -396,44 +396,6 @@ static void test_reftable_stack_auto_compaction_fails_gracefully(void)
clear_dir(dir);
}
static void test_reftable_stack_validate_refname(void)
{
struct reftable_write_options cfg = { 0 };
struct reftable_stack *st = NULL;
int err;
char *dir = get_tmp_dir(__LINE__);
int i;
struct reftable_ref_record ref = {
.refname = "a/b",
.update_index = 1,
.value_type = REFTABLE_REF_SYMREF,
.value.symref = "master",
};
char *additions[] = { "a", "a/b/c" };
err = reftable_new_stack(&st, dir, cfg);
EXPECT_ERR(err);
err = reftable_stack_add(st, &write_test_ref, &ref);
EXPECT_ERR(err);
for (i = 0; i < ARRAY_SIZE(additions); i++) {
struct reftable_ref_record ref = {
.refname = additions[i],
.update_index = 1,
.value_type = REFTABLE_REF_SYMREF,
.value.symref = "master",
};
err = reftable_stack_add(st, &write_test_ref, &ref);
EXPECT(err == REFTABLE_NAME_CONFLICT);
}
reftable_stack_destroy(st);
clear_dir(dir);
}
static int write_error(struct reftable_writer *wr, void *arg)
{
return *((int *)arg);
@@ -1105,7 +1067,6 @@ int stack_test_main(int argc, const char *argv[])
RUN_TEST(test_reftable_stack_auto_compaction_fails_gracefully);
RUN_TEST(test_reftable_stack_update_index_check);
RUN_TEST(test_reftable_stack_uptodate);
RUN_TEST(test_reftable_stack_validate_refname);
RUN_TEST(test_suggest_compaction_segment);
RUN_TEST(test_suggest_compaction_segment_nothing);
return 0;


@@ -109,7 +109,7 @@ static void writer_reinit_block_writer(struct reftable_writer *w, uint8_t typ)
block_start = header_size(writer_version(w));
}
strbuf_release(&w->last_key);
strbuf_reset(&w->last_key);
block_writer_init(&w->block_writer_data, typ, w->block,
w->opts.block_size, block_start,
hash_size(w->opts.hash_id));
@@ -149,11 +149,21 @@ void reftable_writer_set_limits(struct reftable_writer *w, uint64_t min,
w->max_update_index = max;
}
static void writer_release(struct reftable_writer *w)
{
if (w) {
reftable_free(w->block);
w->block = NULL;
block_writer_release(&w->block_writer_data);
w->block_writer = NULL;
writer_clear_index(w);
strbuf_release(&w->last_key);
}
}
void reftable_writer_free(struct reftable_writer *w)
{
if (!w)
return;
reftable_free(w->block);
writer_release(w);
reftable_free(w);
}
@@ -209,7 +219,8 @@ static int writer_add_record(struct reftable_writer *w,
struct reftable_record *rec)
{
struct strbuf key = STRBUF_INIT;
int err = -1;
int err;
reftable_record_key(rec, &key);
if (strbuf_cmp(&w->last_key, &key) >= 0) {
err = REFTABLE_API_ERROR;
@@ -218,27 +229,42 @@ static int writer_add_record(struct reftable_writer *w,
strbuf_reset(&w->last_key);
strbuf_addbuf(&w->last_key, &key);
if (!w->block_writer) {
if (!w->block_writer)
writer_reinit_block_writer(w, reftable_record_type(rec));
}
assert(block_writer_type(w->block_writer) == reftable_record_type(rec));
if (block_writer_type(w->block_writer) != reftable_record_type(rec))
BUG("record of type %d added to writer of type %d",
reftable_record_type(rec), block_writer_type(w->block_writer));
if (block_writer_add(w->block_writer, rec) == 0) {
/*
* Try to add the record to the writer. If this succeeds then we're
* done. Otherwise the block writer may have hit the block size limit
* and needs to be flushed.
*/
if (!block_writer_add(w->block_writer, rec)) {
err = 0;
goto done;
}
/*
* The current block is full, so we need to flush and reinitialize the
* writer to start writing the next block.
*/
err = writer_flush_block(w);
if (err < 0) {
if (err < 0)
goto done;
}
writer_reinit_block_writer(w, reftable_record_type(rec));
/*
* Try to add the record to the writer again. If this still fails then
* the record does not fit into the block size.
*
* TODO: it would be great to have `block_writer_add()` return proper
* error codes so that we don't have to second-guess the failure
* mode here.
*/
err = block_writer_add(w->block_writer, rec);
if (err == -1) {
/* we are writing into memory, so an error can only mean it
* doesn't fit. */
if (err) {
err = REFTABLE_ENTRY_TOO_BIG_ERROR;
goto done;
}
@@ -452,7 +478,7 @@ static int writer_finish_section(struct reftable_writer *w)
bstats->max_index_level = max_level;
/* Reinit lastKey, as the next section can start with any key. */
w->last_key.len = 0;
strbuf_reset(&w->last_key);
return 0;
}
@@ -627,74 +653,87 @@ int reftable_writer_close(struct reftable_writer *w)
}
done:
/* free up memory. */
block_writer_release(&w->block_writer_data);
writer_clear_index(w);
strbuf_release(&w->last_key);
writer_release(w);
return err;
}
static void writer_clear_index(struct reftable_writer *w)
{
for (size_t i = 0; i < w->index_len; i++)
for (size_t i = 0; w->index && i < w->index_len; i++)
strbuf_release(&w->index[i].last_key);
FREE_AND_NULL(w->index);
w->index_len = 0;
w->index_cap = 0;
}
static const int debug = 0;
static int writer_flush_nonempty_block(struct reftable_writer *w)
{
struct reftable_index_record index_record = {
.last_key = STRBUF_INIT,
};
uint8_t typ = block_writer_type(w->block_writer);
struct reftable_block_stats *bstats =
writer_reftable_block_stats(w, typ);
uint64_t block_typ_off = (bstats->blocks == 0) ? w->next : 0;
int raw_bytes = block_writer_finish(w->block_writer);
int padding = 0;
int err = 0;
struct reftable_index_record ir = { .last_key = STRBUF_INIT };
struct reftable_block_stats *bstats;
int raw_bytes, padding = 0, err;
uint64_t block_typ_off;
/*
* Finish the current block. This will cause the block writer to emit
* restart points and potentially compress records in case we are
* writing a log block.
*
* Note that this is still happening in memory.
*/
raw_bytes = block_writer_finish(w->block_writer);
if (raw_bytes < 0)
return raw_bytes;
if (!w->opts.unpadded && typ != BLOCK_TYPE_LOG) {
/*
* By default, all records except for log records are padded to the
* block size.
*/
if (!w->opts.unpadded && typ != BLOCK_TYPE_LOG)
padding = w->opts.block_size - raw_bytes;
}
if (block_typ_off > 0) {
bstats = writer_reftable_block_stats(w, typ);
block_typ_off = (bstats->blocks == 0) ? w->next : 0;
if (block_typ_off > 0)
bstats->offset = block_typ_off;
}
bstats->entries += w->block_writer->entries;
bstats->restarts += w->block_writer->restart_len;
bstats->blocks++;
w->stats.blocks++;
if (debug) {
fprintf(stderr, "block %c off %" PRIu64 " sz %d (%d)\n", typ,
w->next, raw_bytes,
get_be24(w->block + w->block_writer->header_off + 1));
}
if (w->next == 0) {
/*
* If this is the first block we're writing to the table then we need
* to also write the reftable header.
*/
if (!w->next)
writer_write_header(w, w->block);
}
err = padded_write(w, w->block, raw_bytes, padding);
if (err < 0)
return err;
/*
* Add an index record for every block that we're writing. If we end up
* having more than a threshold of index records we will end up writing
* an index section in `writer_finish_section()`. Each index record
* contains the last record key of the block it is indexing as well as
* the offset of that block.
*
* Note that this also applies when flushing index blocks, in which
* case we will end up with a multi-level index.
*/
REFTABLE_ALLOC_GROW(w->index, w->index_len + 1, w->index_cap);
ir.offset = w->next;
strbuf_reset(&ir.last_key);
strbuf_addbuf(&ir.last_key, &w->block_writer->last_key);
w->index[w->index_len] = ir;
index_record.offset = w->next;
strbuf_reset(&index_record.last_key);
strbuf_addbuf(&index_record.last_key, &w->block_writer->last_key);
w->index[w->index_len] = index_record;
w->index_len++;
w->next += padding + raw_bytes;
w->block_writer = NULL;
return 0;
}

View File

@ -889,7 +889,7 @@ static curl_off_t xcurl_off_t(size_t len)
static int post_rpc(struct rpc_state *rpc, int stateless_connect, int flush_received)
{
struct active_request_slot *slot;
struct curl_slist *headers = http_copy_default_headers();
struct curl_slist *headers = NULL;
int use_gzip = rpc->gzip_request;
char *gzip_body = NULL;
size_t gzip_size = 0;
@ -922,20 +922,24 @@ static int post_rpc(struct rpc_state *rpc, int stateless_connect, int flush_rece
do {
err = probe_rpc(rpc, &results);
if (err == HTTP_REAUTH)
credential_fill(&http_auth);
credential_fill(&http_auth, 0);
} while (err == HTTP_REAUTH);
if (err != HTTP_OK)
return -1;
if (results.auth_avail & CURLAUTH_GSSNEGOTIATE)
if (results.auth_avail & CURLAUTH_GSSNEGOTIATE || http_auth.authtype)
needs_100_continue = 1;
}
retry:
headers = http_copy_default_headers();
headers = curl_slist_append(headers, rpc->hdr_content_type);
headers = curl_slist_append(headers, rpc->hdr_accept);
headers = curl_slist_append(headers, needs_100_continue ?
"Expect: 100-continue" : "Expect:");
headers = http_append_auth_header(&http_auth, headers);
/* Add Accept-Language header */
if (rpc->hdr_accept_language)
headers = curl_slist_append(headers, rpc->hdr_accept_language);
@ -944,7 +948,6 @@ static int post_rpc(struct rpc_state *rpc, int stateless_connect, int flush_rece
if (rpc->protocol_header)
headers = curl_slist_append(headers, rpc->protocol_header);
retry:
slot = get_active_slot();
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 0);
@ -1041,7 +1044,8 @@ static int post_rpc(struct rpc_state *rpc, int stateless_connect, int flush_rece
rpc->any_written = 0;
err = run_slot(slot, NULL);
if (err == HTTP_REAUTH && !large_request) {
credential_fill(&http_auth);
credential_fill(&http_auth, 0);
curl_slist_free_all(headers);
goto retry;
}
if (err != HTTP_OK)

View File

@ -1,8 +1,3 @@
/*
* not really _using_ the compat macros, just make sure the_index
* declaration matches the definition in this file.
*/
#define USE_THE_INDEX_VARIABLE
#include "git-compat-util.h"
#include "abspath.h"
#include "repository.h"
@ -22,21 +17,35 @@
/* The main repository */
static struct repository the_repo;
struct repository *the_repository;
struct index_state the_index;
struct repository *the_repository = &the_repo;
void initialize_the_repository(void)
void initialize_repository(struct repository *repo)
{
the_repository = &the_repo;
repo->objects = raw_object_store_new();
repo->remote_state = remote_state_new();
repo->parsed_objects = parsed_object_pool_new();
ALLOC_ARRAY(repo->index, 1);
index_state_init(repo->index, repo);
the_repo.index = &the_index;
the_repo.objects = raw_object_store_new();
the_repo.remote_state = remote_state_new();
the_repo.parsed_objects = parsed_object_pool_new();
index_state_init(&the_index, the_repository);
repo_set_hash_algo(&the_repo, GIT_HASH_SHA1);
/*
* Unfortunately, we need to keep this hack around for the time being:
*
* - Not setting up the hash algorithm for `the_repository` leads to
* crashes because `the_hash_algo` is a macro that expands to
* `the_repository->hash_algo`. So if Git commands try to access
* `the_hash_algo` without a Git directory we crash.
*
* - Setting up the hash algorithm to be SHA1 by default breaks other
* commands when running with SHA256.
*
* This is another case in point for why having global state is a bad idea.
* Eventually, we should remove this hack and stop setting the hash
* algorithm in this function altogether. Instead, it should only ever
* be set via our repository setup procedures. But that requires more
* work.
*/
if (repo == the_repository)
repo_set_hash_algo(repo, GIT_HASH_SHA1);
}
static void expand_base_dir(char **out, const char *in,
@ -188,9 +197,7 @@ int repo_init(struct repository *repo,
struct repository_format format = REPOSITORY_FORMAT_INIT;
memset(repo, 0, sizeof(*repo));
repo->objects = raw_object_store_new();
repo->parsed_objects = parsed_object_pool_new();
repo->remote_state = remote_state_new();
initialize_repository(repo);
if (repo_init_gitdir(repo, gitdir))
goto error;
@ -307,8 +314,7 @@ void repo_clear(struct repository *repo)
if (repo->index) {
discard_index(repo->index);
if (repo->index != &the_index)
FREE_AND_NULL(repo->index);
FREE_AND_NULL(repo->index);
}
if (repo->promisor_remote_config) {

View File

@ -187,9 +187,6 @@ struct repository {
};
extern struct repository *the_repository;
#ifdef USE_THE_INDEX_VARIABLE
extern struct index_state the_index;
#endif
/*
* Define a custom repository layout. Any field can be NULL, which
@ -210,7 +207,7 @@ void repo_set_worktree(struct repository *repo, const char *path);
void repo_set_hash_algo(struct repository *repo, int algo);
void repo_set_compat_hash_algo(struct repository *repo, int compat_algo);
void repo_set_ref_storage_format(struct repository *repo, unsigned int format);
void initialize_the_repository(void);
void initialize_repository(struct repository *repo);
RESULT_MUST_BE_USED
int repo_init(struct repository *r, const char *gitdir, const char *worktree);

View File

@ -5133,6 +5133,10 @@ static int continue_single_pick(struct repository *r, struct replay_opts *opts)
*/
strvec_pushl(&cmd.args, "--no-edit", "--cleanup=strip", NULL);
if (opts->no_verify) {
strvec_pushl(&cmd.args, "--no-verify", NULL);
}
return run_command(&cmd);
}

View File

@ -56,6 +56,7 @@ struct replay_opts {
int committer_date_is_author_date;
int ignore_date;
int commit_use_reference;
int no_verify;
int mainline;

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "gettext.h"
#include "hex.h"
@ -38,29 +37,29 @@ int cmd__cache_tree(int argc, const char **argv)
if (repo_read_index(the_repository) < 0)
die(_("unable to read index file"));
oidcpy(&oid, &the_index.cache_tree->oid);
oidcpy(&oid, &the_repository->index->cache_tree->oid);
tree = parse_tree_indirect(&oid);
if (!tree)
die(_("not a tree object: %s"), oid_to_hex(&oid));
if (empty) {
/* clear the cache tree & allocate a new one */
cache_tree_free(&the_index.cache_tree);
the_index.cache_tree = cache_tree();
cache_tree_free(&the_repository->index->cache_tree);
the_repository->index->cache_tree = cache_tree();
} else if (invalidate_qty) {
/* invalidate the specified number of unique paths */
float f_interval = (float)the_index.cache_nr / invalidate_qty;
float f_interval = (float)the_repository->index->cache_nr / invalidate_qty;
int interval = f_interval < 1.0 ? 1 : (int)f_interval;
for (i = 0; i < invalidate_qty && i * interval < the_index.cache_nr; i++)
cache_tree_invalidate_path(&the_index, the_index.cache[i * interval]->name);
for (i = 0; i < invalidate_qty && i * interval < the_repository->index->cache_nr; i++)
cache_tree_invalidate_path(the_repository->index, the_repository->index->cache[i * interval]->name);
}
if (argc != 1)
usage_with_options(test_cache_tree_usage, options);
else if (!strcmp(argv[0], "prime"))
prime_cache_tree(the_repository, &the_index, tree);
prime_cache_tree(the_repository, the_repository->index, tree);
else if (!strcmp(argv[0], "update"))
cache_tree_update(&the_index, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
cache_tree_update(the_repository->index, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
/* use "control" subcommand to specify no-op */
else if (!!strcmp(argv[0], "control"))
die(_("Unhandled subcommand '%s'"), argv[0]);

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "hash.h"
#include "hex.h"
@ -68,10 +67,10 @@ int cmd__dump_cache_tree(int ac UNUSED, const char **av UNUSED)
setup_git_directory();
if (repo_read_index(the_repository) < 0)
die("unable to read index file");
istate = the_index;
istate = *the_repository->index;
istate.cache_tree = another;
cache_tree_update(&istate, WRITE_TREE_DRY_RUN);
ret = dump_cache_tree(the_index.cache_tree, another, "");
ret = dump_cache_tree(the_repository->index->cache_tree, another, "");
cache_tree_free(&another);
return ret;

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "hex.h"
#include "read-cache-ll.h"
@ -19,16 +18,16 @@ int cmd__dump_split_index(int ac UNUSED, const char **av)
setup_git_directory();
do_read_index(&the_index, av[1], 1);
printf("own %s\n", oid_to_hex(&the_index.oid));
si = the_index.split_index;
do_read_index(the_repository->index, av[1], 1);
printf("own %s\n", oid_to_hex(&the_repository->index->oid));
si = the_repository->index->split_index;
if (!si) {
printf("not a split index\n");
return 0;
}
printf("base %s\n", oid_to_hex(&si->base_oid));
for (i = 0; i < the_index.cache_nr; i++) {
struct cache_entry *ce = the_index.cache[i];
for (i = 0; i < the_repository->index->cache_nr; i++) {
struct cache_entry *ce = the_repository->index->cache[i];
printf("%06o %s %d\t%s\n", ce->ce_mode,
oid_to_hex(&ce->oid), ce_stage(ce), ce->name);
}

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "dir.h"
#include "hex.h"
@ -56,7 +55,7 @@ int cmd__dump_untracked_cache(int ac UNUSED, const char **av UNUSED)
setup_git_directory();
if (repo_read_index(the_repository) < 0)
die("unable to read index file");
uc = the_index.untracked;
uc = the_repository->index->untracked;
if (!uc) {
printf("no untracked cache\n");
return 0;

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "environment.h"
#include "name-hash.h"
@ -40,22 +39,22 @@ static void dump_run(void)
repo_read_index(the_repository);
if (single) {
test_lazy_init_name_hash(&the_index, 0);
test_lazy_init_name_hash(the_repository->index, 0);
} else {
int nr_threads_used = test_lazy_init_name_hash(&the_index, 1);
int nr_threads_used = test_lazy_init_name_hash(the_repository->index, 1);
if (!nr_threads_used)
die("non-threaded code path used");
}
hashmap_for_each_entry(&the_index.dir_hash, &iter_dir, dir,
hashmap_for_each_entry(&the_repository->index->dir_hash, &iter_dir, dir,
ent /* member name */)
printf("dir %08x %7d %s\n", dir->ent.hash, dir->nr, dir->name);
hashmap_for_each_entry(&the_index.name_hash, &iter_cache, ce,
hashmap_for_each_entry(&the_repository->index->name_hash, &iter_cache, ce,
ent /* member name */)
printf("name %08x %s\n", ce->ent.hash, ce->name);
discard_index(&the_index);
discard_index(the_repository->index);
}
/*
@ -74,7 +73,7 @@ static uint64_t time_runs(int try_threaded)
t0 = getnanotime();
repo_read_index(the_repository);
t1 = getnanotime();
nr_threads_used = test_lazy_init_name_hash(&the_index, try_threaded);
nr_threads_used = test_lazy_init_name_hash(the_repository->index, try_threaded);
t2 = getnanotime();
sum += (t2 - t1);
@ -86,16 +85,16 @@ static uint64_t time_runs(int try_threaded)
printf("%f %f %d multi %d\n",
((double)(t1 - t0))/1000000000,
((double)(t2 - t1))/1000000000,
the_index.cache_nr,
the_repository->index->cache_nr,
nr_threads_used);
else
printf("%f %f %d single\n",
((double)(t1 - t0))/1000000000,
((double)(t2 - t1))/1000000000,
the_index.cache_nr);
the_repository->index->cache_nr);
fflush(stdout);
discard_index(&the_index);
discard_index(the_repository->index);
}
avg = sum / count;
@ -120,8 +119,8 @@ static void analyze_run(void)
int nr;
repo_read_index(the_repository);
cache_nr_limit = the_index.cache_nr;
discard_index(&the_index);
cache_nr_limit = the_repository->index->cache_nr;
discard_index(the_repository->index);
nr = analyze;
while (1) {
@ -135,22 +134,22 @@ static void analyze_run(void)
for (i = 0; i < count; i++) {
repo_read_index(the_repository);
the_index.cache_nr = nr; /* cheap truncate of index */
the_repository->index->cache_nr = nr; /* cheap truncate of index */
t1s = getnanotime();
test_lazy_init_name_hash(&the_index, 0);
test_lazy_init_name_hash(the_repository->index, 0);
t2s = getnanotime();
sum_single += (t2s - t1s);
the_index.cache_nr = cache_nr_limit;
discard_index(&the_index);
the_repository->index->cache_nr = cache_nr_limit;
discard_index(the_repository->index);
repo_read_index(the_repository);
the_index.cache_nr = nr; /* cheap truncate of index */
the_repository->index->cache_nr = nr; /* cheap truncate of index */
t1m = getnanotime();
nr_threads_used = test_lazy_init_name_hash(&the_index, 1);
nr_threads_used = test_lazy_init_name_hash(the_repository->index, 1);
t2m = getnanotime();
sum_multi += (t2m - t1m);
the_index.cache_nr = cache_nr_limit;
discard_index(&the_index);
the_repository->index->cache_nr = cache_nr_limit;
discard_index(the_repository->index);
if (!nr_threads_used)
printf(" [size %8d] [single %f] non-threaded code path used\n",

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "config.h"
#include "read-cache-ll.h"
@ -10,7 +9,7 @@ int cmd__read_cache(int argc, const char **argv)
int i, cnt = 1;
const char *name = NULL;
initialize_the_repository();
initialize_repository(the_repository);
if (argc > 1 && skip_prefix(argv[1], "--print-and-refresh=", &name)) {
argc--;
@ -27,16 +26,16 @@ int cmd__read_cache(int argc, const char **argv)
if (name) {
int pos;
refresh_index(&the_index, REFRESH_QUIET,
refresh_index(the_repository->index, REFRESH_QUIET,
NULL, NULL, NULL);
pos = index_name_pos(&the_index, name, strlen(name));
pos = index_name_pos(the_repository->index, name, strlen(name));
if (pos < 0)
die("%s not in index", name);
printf("%s is%s up to date\n", name,
ce_uptodate(the_index.cache[pos]) ? "" : " not");
ce_uptodate(the_repository->index->cache[pos]) ? "" : " not");
write_file(name, "%d\n", i);
}
discard_index(&the_index);
discard_index(the_repository->index);
}
return 0;
}

View File

@ -13,7 +13,6 @@ int cmd__reftable(int argc, const char **argv)
readwrite_test_main(argc, argv);
merged_test_main(argc, argv);
stack_test_main(argc, argv);
refname_test_main(argc, argv);
return 0;
}

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "lockfile.h"
#include "read-cache-ll.h"
@ -15,9 +14,9 @@ int cmd__scrap_cache_tree(int ac UNUSED, const char **av UNUSED)
repo_hold_locked_index(the_repository, &index_lock, LOCK_DIE_ON_ERROR);
if (repo_read_index(the_repository) < 0)
die("unable to read index file");
cache_tree_free(&the_index.cache_tree);
the_index.cache_tree = NULL;
if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
cache_tree_free(&the_repository->index->cache_tree);
the_repository->index->cache_tree = NULL;
if (write_locked_index(the_repository->index, &index_lock, COMMIT_LOCK))
die("unable to write index file");
return 0;
}

View File

@ -1,4 +1,3 @@
#define USE_THE_INDEX_VARIABLE
#include "test-tool.h"
#include "lockfile.h"
#include "read-cache-ll.h"
@ -16,7 +15,7 @@ int cmd__write_cache(int argc, const char **argv)
for (i = 0; i < cnt; i++) {
repo_hold_locked_index(the_repository, &index_lock,
LOCK_DIE_ON_ERROR);
if (write_locked_index(&the_index, &index_lock, COMMIT_LOCK))
if (write_locked_index(the_repository->index, &index_lock, COMMIT_LOCK))
die("unable to write index file");
}

View File

@ -538,6 +538,129 @@ helper_test_oauth_refresh_token() {
'
}
helper_test_authtype() {
HELPER=$1
test_expect_success "helper ($HELPER) stores authtype and credential" '
check approve $HELPER <<-\EOF
capability[]=authtype
authtype=Bearer
credential=random-token
protocol=https
host=git.example.com
EOF
'
test_expect_success "helper ($HELPER) gets authtype and credential" '
check fill $HELPER <<-\EOF
capability[]=authtype
protocol=https
host=git.example.com
--
capability[]=authtype
authtype=Bearer
credential=random-token
protocol=https
host=git.example.com
--
EOF
'
test_expect_success "helper ($HELPER) stores authtype and credential with username" '
check approve $HELPER <<-\EOF
capability[]=authtype
authtype=Bearer
credential=other-token
protocol=https
host=git.example.com
username=foobar
EOF
'
test_expect_success "helper ($HELPER) gets authtype and credential with username" '
check fill $HELPER <<-\EOF
capability[]=authtype
protocol=https
host=git.example.com
username=foobar
--
capability[]=authtype
authtype=Bearer
credential=other-token
protocol=https
host=git.example.com
username=foobar
--
EOF
'
test_expect_success "helper ($HELPER) does not get authtype and credential with different username" '
check fill $HELPER <<-\EOF
capability[]=authtype
protocol=https
host=git.example.com
username=barbaz
--
protocol=https
host=git.example.com
username=barbaz
password=askpass-password
--
askpass: Password for '\''https://barbaz@git.example.com'\'':
EOF
'
test_expect_success "helper ($HELPER) does not store ephemeral authtype and credential" '
check approve $HELPER <<-\EOF &&
capability[]=authtype
authtype=Bearer
credential=git2-token
protocol=https
host=git2.example.com
ephemeral=1
EOF
check fill $HELPER <<-\EOF
capability[]=authtype
protocol=https
host=git2.example.com
--
protocol=https
host=git2.example.com
username=askpass-username
password=askpass-password
--
askpass: Username for '\''https://git2.example.com'\'':
askpass: Password for '\''https://askpass-username@git2.example.com'\'':
EOF
'
test_expect_success "helper ($HELPER) does not store ephemeral username and password" '
check approve $HELPER <<-\EOF &&
capability[]=authtype
protocol=https
host=git2.example.com
user=barbaz
password=secret
ephemeral=1
EOF
check fill $HELPER <<-\EOF
capability[]=authtype
protocol=https
host=git2.example.com
--
protocol=https
host=git2.example.com
username=askpass-username
password=askpass-password
--
askpass: Username for '\''https://git2.example.com'\'':
askpass: Password for '\''https://askpass-username@git2.example.com'\'':
EOF
'
}
write_script askpass <<\EOF
echo >&2 askpass: $*
what=$(echo $1 | cut -d" " -f1 | tr A-Z a-z | tr -cd a-z)

View File

@ -19,21 +19,30 @@ CHALLENGE_FILE=custom-auth.challenge
#
if test -n "$HTTP_AUTHORIZATION" && \
grep -Fqsx "${HTTP_AUTHORIZATION}" "$VALID_CREDS_FILE"
grep -Fqs "creds=${HTTP_AUTHORIZATION}" "$VALID_CREDS_FILE"
then
idno=$(grep -F "creds=${HTTP_AUTHORIZATION}" "$VALID_CREDS_FILE" | sed -e 's/^id=\([a-z0-9-][a-z0-9-]*\) .*$/\1/')
status=$(sed -ne "s/^id=$idno.*status=\\([0-9][0-9][0-9]\\).*\$/\\1/p" "$CHALLENGE_FILE" | head -n1)
# Note that although git-http-backend returns a status line, it
# does so using a CGI 'Status' header. Because this script is a
# No Parsed Headers (NPH) script, we must return a real HTTP
# status line.
# This is only a test script, so we don't bother to check for
# the actual status from git-http-backend and always return 200.
echo 'HTTP/1.1 200 OK'
exec "$GIT_EXEC_PATH"/git-http-backend
echo "HTTP/1.1 $status Nonspecific Reason Phrase"
if test "$status" -eq 200
then
exec "$GIT_EXEC_PATH"/git-http-backend
else
sed -ne "s/^id=$idno.*response=//p" "$CHALLENGE_FILE"
echo
exit
fi
fi
echo 'HTTP/1.1 401 Authorization Required'
if test -f "$CHALLENGE_FILE"
then
cat "$CHALLENGE_FILE"
sed -ne 's/^id=default.*response=//p' "$CHALLENGE_FILE"
fi
echo
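
The CGI-versus-NPH distinction called out in the comments above can be shown side by side. This is an illustrative sketch, not part of the test library:

```shell
#!/bin/sh
# A regular CGI script reports its status via a 'Status' header and
# relies on the web server to synthesize the HTTP status line:
cgi_response () {
	printf 'Status: 401 Authorization Required\r\n'
	printf 'Content-Type: text/plain\r\n\r\n'
}

# An NPH (No Parsed Headers) script talks to the client directly, so
# it must emit a real HTTP status line itself, as the script above does:
nph_response () {
	printf 'HTTP/1.1 401 Authorization Required\r\n'
	printf 'Content-Type: text/plain\r\n\r\n'
}

cgi_response | head -n 1
nph_response | head -n 1
```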

View File

@ -12,7 +12,13 @@ test_expect_success 'setup helper scripts' '
IFS==
while read key value; do
echo >&2 "$whoami: $key=$value"
eval "$key=$value"
if test -z "${key%%*\[\]}"
then
key=${key%%\[\]}
eval "$key=\"\$$key $value\""
else
eval "$key=$value"
fi
done
IFS=$OIFS
EOF
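
The parameter-expansion trick added to the dump helper above — keys ending in "[]" are accumulated rather than overwritten — works like this standalone sketch (illustrative values, not part of the test suite):

```shell
#!/bin/sh
# Append to a variable when the key carries a "[]" suffix (multi-valued
# credential attributes such as capability[]), otherwise assign it.
append_or_set () {
	key=$1 value=$2
	if test -z "${key%%*\[\]}"
	then
		# "${key%%*\[\]}" is empty only when the key ends in "[]";
		# strip that suffix and append to the accumulated value
		key=${key%%\[\]}
		eval "$key=\"\$$key $value\""
	else
		eval "$key=$value"
	fi
}

append_or_set "capability[]" authtype
append_or_set "capability[]" state
append_or_set protocol http

echo "capability=$capability"
echo "protocol=$protocol"
```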
@ -35,6 +41,30 @@ test_expect_success 'setup helper scripts' '
test -z "$pass" || echo password=$pass
EOF
write_script git-credential-verbatim-cred <<-\EOF &&
authtype=$1; shift
credential=$1; shift
. ./dump
echo capability[]=authtype
echo capability[]=state
test -z "${capability##*authtype*}" || exit 0
test -z "$authtype" || echo authtype=$authtype
test -z "$credential" || echo credential=$credential
test -z "${capability##*state*}" || exit 0
echo state[]=verbatim-cred:foo
EOF
write_script git-credential-verbatim-ephemeral <<-\EOF &&
authtype=$1; shift
credential=$1; shift
. ./dump
echo capability[]=authtype
test -z "${capability##*authtype*}" || exit 0
test -z "$authtype" || echo authtype=$authtype
test -z "$credential" || echo credential=$credential
echo "ephemeral=1"
EOF
write_script git-credential-verbatim-with-expiry <<-\EOF &&
user=$1; shift
pass=$1; shift
@ -64,6 +94,67 @@ test_expect_success 'credential_fill invokes helper' '
EOF
'
test_expect_success 'credential_fill invokes helper with credential' '
check fill "verbatim-cred Bearer token" <<-\EOF
capability[]=authtype
protocol=http
host=example.com
--
capability[]=authtype
authtype=Bearer
credential=token
protocol=http
host=example.com
--
verbatim-cred: get
verbatim-cred: capability[]=authtype
verbatim-cred: protocol=http
verbatim-cred: host=example.com
EOF
'
test_expect_success 'credential_fill invokes helper with ephemeral credential' '
check fill "verbatim-ephemeral Bearer token" <<-\EOF
capability[]=authtype
protocol=http
host=example.com
--
capability[]=authtype
authtype=Bearer
credential=token
ephemeral=1
protocol=http
host=example.com
--
verbatim-ephemeral: get
verbatim-ephemeral: capability[]=authtype
verbatim-ephemeral: protocol=http
verbatim-ephemeral: host=example.com
EOF
'
test_expect_success 'credential_fill invokes helper with credential and state' '
check fill "verbatim-cred Bearer token" <<-\EOF
capability[]=authtype
capability[]=state
protocol=http
host=example.com
--
capability[]=authtype
capability[]=state
authtype=Bearer
credential=token
protocol=http
host=example.com
state[]=verbatim-cred:foo
--
verbatim-cred: get
verbatim-cred: capability[]=authtype
verbatim-cred: capability[]=state
verbatim-cred: protocol=http
verbatim-cred: host=example.com
EOF
'
test_expect_success 'credential_fill invokes multiple helpers' '
check fill useless "verbatim foo bar" <<-\EOF
protocol=http
@ -83,6 +174,45 @@ test_expect_success 'credential_fill invokes multiple helpers' '
EOF
'
test_expect_success 'credential_fill response does not get capabilities when helpers are incapable' '
check fill useless "verbatim foo bar" <<-\EOF
capability[]=authtype
capability[]=state
protocol=http
host=example.com
--
protocol=http
host=example.com
username=foo
password=bar
--
useless: get
useless: capability[]=authtype
useless: capability[]=state
useless: protocol=http
useless: host=example.com
verbatim: get
verbatim: capability[]=authtype
verbatim: capability[]=state
verbatim: protocol=http
verbatim: host=example.com
EOF
'
test_expect_success 'credential_fill response does not get capabilities when caller is incapable' '
check fill "verbatim-cred Bearer token" <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
--
verbatim-cred: get
verbatim-cred: protocol=http
verbatim-cred: host=example.com
EOF
'
test_expect_success 'credential_fill stops when we get a full response' '
check fill "verbatim one two" "verbatim three four" <<-\EOF
protocol=http
@ -99,6 +229,25 @@ test_expect_success 'credential_fill stops when we get a full response' '
EOF
'
test_expect_success 'credential_fill thinks a credential is a full response' '
check fill "verbatim-cred Bearer token" "verbatim three four" <<-\EOF
capability[]=authtype
protocol=http
host=example.com
--
capability[]=authtype
authtype=Bearer
credential=token
protocol=http
host=example.com
--
verbatim-cred: get
verbatim-cred: capability[]=authtype
verbatim-cred: protocol=http
verbatim-cred: host=example.com
EOF
'
test_expect_success 'credential_fill continues through partial response' '
check fill "verbatim one \"\"" "verbatim two three" <<-\EOF
protocol=http
@ -175,6 +324,20 @@ test_expect_success 'credential_fill passes along metadata' '
EOF
'
test_expect_success 'credential_fill produces no credential without capability' '
check fill "verbatim-cred Bearer token" <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
--
verbatim-cred: get
verbatim-cred: protocol=http
verbatim-cred: host=example.com
EOF
'
test_expect_success 'credential_approve calls all helpers' '
check approve useless "verbatim one two" <<-\EOF
protocol=http

View File

@ -39,6 +39,7 @@ test_atexit 'git credential-cache exit'
helper_test cache
helper_test_password_expiry_utc cache
helper_test_oauth_refresh_token cache
helper_test_authtype cache
test_expect_success 'socket defaults to ~/.cache/git/credential/socket' '
test_when_finished "

View File

@ -4,16 +4,12 @@ test_description='Test reffiles backend'
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME
GIT_TEST_DEFAULT_REF_FORMAT=files
export GIT_TEST_DEFAULT_REF_FORMAT
TEST_PASSES_SANITIZE_LEAK=true
. ./test-lib.sh
if ! test_have_prereq REFFILES
then
skip_all='skipping reffiles specific tests'
test_done
fi
test_expect_success 'setup' '
git commit --allow-empty -m Initial &&
C=$(git rev-parse HEAD) &&

View File

@ -9,18 +9,15 @@ test_description='git pack-refs should not change the branch semantic
This test runs git pack-refs and git show-ref and checks that the branch
semantic is still the same.
'
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME
GIT_TEST_DEFAULT_REF_FORMAT=files
export GIT_TEST_DEFAULT_REF_FORMAT
TEST_PASSES_SANITIZE_LEAK=true
. ./test-lib.sh
if ! test_have_prereq REFFILES
then
skip_all='skipping reffiles specific tests'
test_done
fi
test_expect_success 'enable reflogs' '
git config core.logallrefupdates true
'

View File

@ -4,17 +4,14 @@
#
test_description='reftable basics'
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME
GIT_TEST_DEFAULT_REF_FORMAT=reftable
export GIT_TEST_DEFAULT_REF_FORMAT
. ./test-lib.sh
if ! test_have_prereq REFTABLE
then
skip_all='skipping reftable tests; set GIT_TEST_DEFAULT_REF_FORMAT=reftable'
test_done
fi
INVALID_OID=$(test_oid 001)
test_expect_success 'init: creates basic reftable structures' '
@ -81,8 +78,8 @@ test_expect_success 'init: reinitializing reftable with files backend fails' '
'
test_expect_perms () {
local perms="$1"
local file="$2"
local perms="$1" &&
local file="$2" &&
local actual="$(ls -l "$file")" &&
case "$actual" in
@ -286,7 +283,7 @@ test_expect_success 'ref transaction: creating symbolic ref fails with F/D confl
git init repo &&
test_commit -C repo A &&
cat >expect <<-EOF &&
error: unable to write symref for refs/heads: file/directory conflict
error: ${SQ}refs/heads/main${SQ} exists; cannot create ${SQ}refs/heads${SQ}
EOF
test_must_fail git -C repo symbolic-ref refs/heads refs/heads/foo 2>err &&
test_cmp expect err
@ -854,6 +851,39 @@ test_expect_success 'reflog: updates via HEAD update HEAD reflog' '
)
'
test_expect_success 'branch: copying branch with D/F conflict' '
test_when_finished "rm -rf repo" &&
git init repo &&
(
cd repo &&
test_commit A &&
git branch branch &&
cat >expect <<-EOF &&
error: ${SQ}refs/heads/branch${SQ} exists; cannot create ${SQ}refs/heads/branch/moved${SQ}
fatal: branch copy failed
EOF
test_must_fail git branch -c branch branch/moved 2>err &&
test_cmp expect err
)
'
test_expect_success 'branch: moving branch with D/F conflict' '
test_when_finished "rm -rf repo" &&
git init repo &&
(
cd repo &&
test_commit A &&
git branch branch &&
git branch conflict &&
cat >expect <<-EOF &&
error: ${SQ}refs/heads/conflict${SQ} exists; cannot create ${SQ}refs/heads/conflict/moved${SQ}
fatal: branch rename failed
EOF
test_must_fail git branch -m branch conflict/moved 2>err &&
test_cmp expect err
)
'
test_expect_success 'worktree: adding worktree creates separate stack' '
test_when_finished "rm -rf repo worktree" &&
git init repo &&

View File

@ -0,0 +1,132 @@
#!/bin/sh
test_description='reftables are compatible with JGit'
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME
GIT_TEST_DEFAULT_REF_FORMAT=reftable
export GIT_TEST_DEFAULT_REF_FORMAT
# JGit does not support the 'link' DIRC extension.
GIT_TEST_SPLIT_INDEX=0
export GIT_TEST_SPLIT_INDEX
. ./test-lib.sh
if ! test_have_prereq JGIT
then
skip_all='skipping reftable JGit tests; JGit is not present in PATH'
test_done
fi
if ! test_have_prereq SHA1
then
skip_all='skipping reftable JGit tests; JGit does not support SHA256 reftables'
test_done
fi
test_commit_jgit () {
touch "$1" &&
jgit add "$1" &&
jgit commit -m "$1"
}
test_same_refs () {
git show-ref --head >cgit.actual &&
jgit show-ref >jgit-tabs.actual &&
tr "\t" " " <jgit-tabs.actual >jgit.actual &&
test_cmp cgit.actual jgit.actual
}
test_same_ref () {
git rev-parse "$1" >cgit.actual &&
jgit rev-parse "$1" >jgit.actual &&
test_cmp cgit.actual jgit.actual
}
test_same_reflog () {
git reflog "$*" >cgit.actual &&
jgit reflog "$*" >jgit-newline.actual &&
sed '/^$/d' <jgit-newline.actual >jgit.actual &&
test_cmp cgit.actual jgit.actual
}
test_expect_success 'CGit repository can be read by JGit' '
test_when_finished "rm -rf repo" &&
git init repo &&
(
cd repo &&
test_commit A &&
test_same_refs &&
test_same_ref HEAD &&
test_same_reflog HEAD
)
'
test_expect_success 'JGit repository can be read by CGit' '
test_when_finished "rm -rf repo" &&
jgit init repo &&
(
cd repo &&
touch file &&
jgit add file &&
jgit commit -m "initial commit" &&
# Note that we must convert the ref storage after we have
# written the default branch. Otherwise JGit will end up with
# no HEAD at all.
jgit convert-ref-storage --format=reftable &&
test_same_refs &&
test_same_ref HEAD &&
# Interestingly, JGit cannot read its own reflog here. CGit can
# though.
printf "%s HEAD@{0}: commit (initial): initial commit" "$(git rev-parse --short HEAD)" >expect &&
git reflog HEAD >actual &&
test_cmp expect actual
)
'
test_expect_success 'mixed writes from JGit and CGit' '
test_when_finished "rm -rf repo" &&
git init repo &&
(
cd repo &&
test_commit A &&
test_commit_jgit B &&
test_commit C &&
test_commit_jgit D &&
test_same_refs &&
test_same_ref HEAD &&
test_same_reflog HEAD
)
'
test_expect_success 'JGit can read multi-level index' '
test_when_finished "rm -rf repo" &&
git init repo &&
(
cd repo &&
test_commit A &&
awk "
BEGIN {
print \"start\";
for (i = 0; i < 10000; i++)
printf \"create refs/heads/branch-%d HEAD\n\", i;
print \"commit\";
}
" >input &&
git update-ref --stdin <input &&
test_same_refs &&
test_same_ref refs/heads/branch-1 &&
test_same_ref refs/heads/branch-5738 &&
test_same_ref refs/heads/branch-9999
)
'
test_done

View File

@ -8,6 +8,8 @@ TEST_PASSES_SANITIZE_LEAK=true
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-terminal.sh
SP=" "
diff_cmp () {
for x
do
@ -45,17 +47,30 @@ test_expect_success 'warn about add.interactive.useBuiltin' '
cat >expect <<-\EOF &&
warning: the add.interactive.useBuiltin setting has been removed!
See its entry in '\''git help config'\'' for details.
No changes.
EOF
echo "No changes." >expect.out &&
for v in = =true =false
do
git -c "add.interactive.useBuiltin$v" add -p >out 2>actual &&
test_must_be_empty out &&
test_cmp expect.out out &&
test_cmp expect actual || return 1
done
'
test_expect_success 'unknown command' '
test_when_finished "git reset --hard; rm -f command" &&
echo W >command &&
git add -N command &&
git diff command >expect &&
cat >>expect <<-EOF &&
(1/1) Stage addition [y,n,q,a,d,e,p,?]? Unknown command ${SQ}W${SQ} (use ${SQ}?${SQ} for help)
(1/1) Stage addition [y,n,q,a,d,e,p,?]?$SP
EOF
git add -p -- command <command >actual 2>&1 &&
test_cmp expect actual
'
test_expect_success 'setup (initial)' '
echo content >file &&
git add file &&
@ -232,7 +247,6 @@ test_expect_success 'setup file' '
'
test_expect_success 'setup patch' '
SP=" " &&
NULL="" &&
cat >patch <<-EOF
@@ -1,4 +1,4 @@
@ -335,13 +349,13 @@ test_expect_success 'different prompts for mode change/deleted' '
test_expect_success 'correct message when there is nothing to do' '
git reset --hard &&
git add -p 2>err &&
test_grep "No changes" err &&
git add -p >out &&
test_grep "No changes" out &&
printf "\\0123" >binary &&
git add binary &&
printf "\\0abc" >binary &&
git add -p 2>err &&
test_grep "Only binary files changed" err
git add -p >out &&
test_grep "Only binary files changed" out
'
test_expect_success 'setup again' '

View File

@ -96,8 +96,8 @@ test_expect_success '256 colors' '
color "254 bold 255" "[1;38;5;254;48;5;255m"
'
test_expect_success '24-bit colors' '
color "#ff00ff black" "[38;2;255;0;255;40m"
test_expect_success 'RGB colors' '
color "#ff00ff #0f0" "[38;2;255;0;255;48;2;0;255;0m"
'
test_expect_success '"default" foreground' '
@@ -112,7 +112,7 @@ test_expect_success '"default" can be combined with attributes' '
color "default default no-reverse bold" "[1;27;39;49m"
'
test_expect_success '"normal" yields no color at all"' '
test_expect_success '"normal" yields no color at all' '
color "normal black" "[40m"
'
@@ -140,6 +140,26 @@ test_expect_success 'extra character after attribute' '
invalid_color "dimX"
'
test_expect_success 'non-hex character in RGB color' '
invalid_color "#x23456" &&
invalid_color "#1x3456" &&
invalid_color "#12x456" &&
invalid_color "#123x56" &&
invalid_color "#1234x6" &&
invalid_color "#12345x" &&
invalid_color "#x23" &&
invalid_color "#1x3" &&
invalid_color "#12x"
'
test_expect_success 'wrong number of letters in RGB color' '
invalid_color "#1" &&
invalid_color "#23" &&
invalid_color "#789a" &&
invalid_color "#bcdef" &&
invalid_color "#1234567"
'
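The tests above exercise the new 12-bit "#RGB" form alongside the existing 24-bit "#RRGGBB" form. A minimal sketch of the expansion those tests imply (the function name is hypothetical, not part of Git): each hex nibble is duplicated, so "#0f0" behaves like "#00ff00".

```shell
# Hypothetical sketch of 12-bit RGB expansion: duplicate each nibble
# of "#RGB" to obtain the equivalent 24-bit "#RRGGBB" form.
expand_rgb12 () {
	r=$(echo "$1" | cut -c2) &&
	g=$(echo "$1" | cut -c3) &&
	b=$(echo "$1" | cut -c4) &&
	echo "#$r$r$g$g$b$b"
}
expand_rgb12 "#0f0"
```

This matches the escape sequences the tests check, e.g. "#0f0" producing the same `48;2;0;255;0` background as "#00ff00" would.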
test_expect_success 'unknown color slots are ignored (diff)' '
git config color.diff.nosuchslotwilleverbedefined white &&
git diff --color


@@ -20,13 +20,15 @@ test_expect_success setup '
for t in o x
do
path="$b$o$t" &&
case "$path" in ooo) continue ;; esac &&
paths="$paths$path " &&
p=" $path" &&
case "$b" in x) echo "$m1$p" ;; esac &&
case "$o" in x) echo "$m2$p" ;; esac &&
case "$t" in x) echo "$m3$p" ;; esac ||
return 1
if test "$path" != ooo
then
paths="$paths$path " &&
p=" $path" &&
case "$b" in x) echo "$m1$p" ;; esac &&
case "$o" in x) echo "$m2$p" ;; esac &&
case "$t" in x) echo "$m3$p" ;; esac ||
return 1
fi
done
done
done >ls-files-s.expect &&
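The hunk above replaces `continue` inside an &&-chain (which zsh handles differently from other shells) with a plain `if` guard. A standalone sketch of the same pattern, using made-up item names:

```shell
# Skip one item with an `if` guard rather than `continue` in an
# &&-chain; the guarded body keeps its && error propagation.
paths=
for path in oo ooo ox
do
	if test "$path" != ooo
	then
		paths="$paths$path " ||
		return 1
	fi
done
echo "$paths"
```

With the guard, the loop accumulates "oo ox " while the "ooo" case is skipped, and no `continue` ever runs as the last command of an &&-chain.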


@@ -21,9 +21,17 @@ test_expect_success 'setup_credential_helper' '
CREDENTIAL_HELPER="$TRASH_DIRECTORY/bin/git-credential-test-helper" &&
write_script "$CREDENTIAL_HELPER" <<-\EOF
cmd=$1
teefile=$cmd-query.cred
teefile=$cmd-query-temp.cred
catfile=$cmd-reply.cred
sed -n -e "/^$/q" -e "p" >>$teefile
state=$(sed -ne "s/^state\[\]=helper://p" "$teefile")
if test -z "$state"
then
mv "$teefile" "$cmd-query.cred"
else
mv "$teefile" "$cmd-query-$state.cred"
catfile="$cmd-reply-$state.cred"
fi
if test "$cmd" = "get"
then
cat $catfile
@@ -32,13 +40,15 @@ test_expect_success 'setup_credential_helper' '
'
set_credential_reply () {
cat >"$TRASH_DIRECTORY/$1-reply.cred"
local suffix="$(test -n "$2" && echo "-$2")"
cat >"$TRASH_DIRECTORY/$1-reply$suffix.cred"
}
expect_credential_query () {
cat >"$TRASH_DIRECTORY/$1-expect.cred" &&
test_cmp "$TRASH_DIRECTORY/$1-expect.cred" \
"$TRASH_DIRECTORY/$1-query.cred"
local suffix="$(test -n "$2" && echo "-$2")"
cat >"$TRASH_DIRECTORY/$1-expect$suffix.cred" &&
test_cmp "$TRASH_DIRECTORY/$1-expect$suffix.cred" \
"$TRASH_DIRECTORY/$1-query$suffix.cred"
}
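The helper and the wrappers above key their query/reply files off the `state[]=helper:` token in the credential request. A minimal sketch of that extraction (the sample request is made up; the sed expression mirrors the helper above):

```shell
# Pull the helper-scoped state token out of a key=value credential
# request; an empty result means the first (stateless) round.
req="protocol=http
host=example.com
state[]=helper:foobar"
state=$(printf "%s\n" "$req" | sed -ne "s/^state\[\]=helper://p")
echo "$state"
```

A non-empty token selects the per-round files (e.g. `get-query-foobar.cred`), which is how the multistage tests later in this file distinguish the first and second exchanges.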
per_test_cleanup () {
@@ -63,17 +73,20 @@ test_expect_success 'access using basic auth' '
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
WWW-Authenticate: Basic realm="example.com"
id=1 status=200
id=default response=WWW-Authenticate: Basic realm="example.com"
EOF
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=Basic realm="example.com"
@@ -87,6 +100,45 @@ test_expect_success 'access using basic auth' '
EOF
'
test_expect_success 'access using basic auth via authtype' '
test_when_finished "per_test_cleanup" &&
set_credential_reply get <<-EOF &&
capability[]=authtype
authtype=Basic
credential=YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
id=1 status=200
id=default response=WWW-Authenticate: Basic realm="example.com"
EOF
test_config_global credential.helper test-helper &&
GIT_CURL_VERBOSE=1 git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=Basic realm="example.com"
EOF
expect_credential_query store <<-EOF
capability[]=authtype
authtype=Basic
credential=YWxpY2U6c2VjcmV0LXBhc3N3ZA==
protocol=http
host=$HTTPD_DEST
EOF
'
test_expect_success 'access using basic auth invalid credentials' '
test_when_finished "per_test_cleanup" &&
@@ -97,17 +149,20 @@ test_expect_success 'access using basic auth invalid credentials' '
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
WWW-Authenticate: Basic realm="example.com"
id=1 status=200
id=default response=WWW-Authenticate: Basic realm="example.com"
EOF
test_config_global credential.helper test-helper &&
test_must_fail git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=Basic realm="example.com"
@@ -132,19 +187,22 @@ test_expect_success 'access using basic auth with extra challenges' '
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
WWW-Authenticate: FooBar param1="value1" param2="value2"
WWW-Authenticate: Bearer authorize_uri="id.example.com" p=1 q=0
WWW-Authenticate: Basic realm="example.com"
id=1 status=200
id=default response=WWW-Authenticate: FooBar param1="value1" param2="value2"
id=default response=WWW-Authenticate: Bearer authorize_uri="id.example.com" p=1 q=0
id=default response=WWW-Authenticate: Basic realm="example.com"
EOF
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=FooBar param1="value1" param2="value2"
@@ -170,19 +228,22 @@ test_expect_success 'access using basic auth mixed-case wwwauth header name' '
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
www-authenticate: foobar param1="value1" param2="value2"
WWW-AUTHENTICATE: BEARER authorize_uri="id.example.com" p=1 q=0
WwW-aUtHeNtIcAtE: baSiC realm="example.com"
id=1 status=200
id=default response=www-authenticate: foobar param1="value1" param2="value2"
id=default response=WWW-AUTHENTICATE: BEARER authorize_uri="id.example.com" p=1 q=0
id=default response=WwW-aUtHeNtIcAtE: baSiC realm="example.com"
EOF
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=foobar param1="value1" param2="value2"
@@ -208,24 +269,27 @@ test_expect_success 'access using basic auth with wwwauth header continuations'
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
# Note that leading and trailing whitespace is important to correctly
# simulate a continuation/folded header.
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
WWW-Authenticate: FooBar param1="value1"
param2="value2"
WWW-Authenticate: Bearer authorize_uri="id.example.com"
p=1
q=0
WWW-Authenticate: Basic realm="example.com"
id=1 status=200
id=default response=WWW-Authenticate: FooBar param1="value1"
id=default response= param2="value2"
id=default response=WWW-Authenticate: Bearer authorize_uri="id.example.com"
id=default response= p=1
id=default response= q=0
id=default response=WWW-Authenticate: Basic realm="example.com"
EOF
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=FooBar param1="value1" param2="value2"
@@ -251,26 +315,29 @@ test_expect_success 'access using basic auth with wwwauth header empty continuat
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
CHALLENGE="$HTTPD_ROOT_PATH/custom-auth.challenge" &&
# Note that leading and trailing whitespace is important to correctly
# simulate a continuation/folded header.
printf "WWW-Authenticate: FooBar param1=\"value1\"\r\n" >"$CHALLENGE" &&
printf " \r\n" >>"$CHALLENGE" &&
printf " param2=\"value2\"\r\n" >>"$CHALLENGE" &&
printf "WWW-Authenticate: Bearer authorize_uri=\"id.example.com\"\r\n" >>"$CHALLENGE" &&
printf " p=1\r\n" >>"$CHALLENGE" &&
printf " \r\n" >>"$CHALLENGE" &&
printf " q=0\r\n" >>"$CHALLENGE" &&
printf "WWW-Authenticate: Basic realm=\"example.com\"\r\n" >>"$CHALLENGE" &&
printf "id=1 status=200\n" >"$CHALLENGE" &&
printf "id=default response=WWW-Authenticate: FooBar param1=\"value1\"\r\n" >>"$CHALLENGE" &&
printf "id=default response= \r\n" >>"$CHALLENGE" &&
printf "id=default response= param2=\"value2\"\r\n" >>"$CHALLENGE" &&
printf "id=default response=WWW-Authenticate: Bearer authorize_uri=\"id.example.com\"\r\n" >>"$CHALLENGE" &&
printf "id=default response= p=1\r\n" >>"$CHALLENGE" &&
printf "id=default response= \r\n" >>"$CHALLENGE" &&
printf "id=default response= q=0\r\n" >>"$CHALLENGE" &&
printf "id=default response=WWW-Authenticate: Basic realm=\"example.com\"\r\n" >>"$CHALLENGE" &&
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=FooBar param1="value1" param2="value2"
@@ -296,22 +363,25 @@ test_expect_success 'access using basic auth with wwwauth header mixed line-endi
# Basic base64(alice:secret-passwd)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
id=1 creds=Basic YWxpY2U6c2VjcmV0LXBhc3N3ZA==
EOF
CHALLENGE="$HTTPD_ROOT_PATH/custom-auth.challenge" &&
# Note that leading and trailing whitespace is important to correctly
# simulate a continuation/folded header.
printf "WWW-Authenticate: FooBar param1=\"value1\"\r\n" >"$CHALLENGE" &&
printf " \r\n" >>"$CHALLENGE" &&
printf "\tparam2=\"value2\"\r\n" >>"$CHALLENGE" &&
printf "WWW-Authenticate: Basic realm=\"example.com\"" >>"$CHALLENGE" &&
printf "id=1 status=200\n" >"$CHALLENGE" &&
printf "id=default response=WWW-Authenticate: FooBar param1=\"value1\"\r\n" >>"$CHALLENGE" &&
printf "id=default response= \r\n" >>"$CHALLENGE" &&
printf "id=default response=\tparam2=\"value2\"\r\n" >>"$CHALLENGE" &&
printf "id=default response=WWW-Authenticate: Basic realm=\"example.com\"" >>"$CHALLENGE" &&
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=FooBar param1="value1" param2="value2"
@@ -326,4 +396,166 @@ test_expect_success 'access using basic auth with wwwauth header mixed line-endi
EOF
'
test_expect_success 'access using bearer auth' '
test_when_finished "per_test_cleanup" &&
set_credential_reply get <<-EOF &&
capability[]=authtype
authtype=Bearer
credential=YS1naXQtdG9rZW4=
EOF
# Basic base64(a-git-token)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
id=1 creds=Bearer YS1naXQtdG9rZW4=
EOF
CHALLENGE="$HTTPD_ROOT_PATH/custom-auth.challenge" &&
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
id=1 status=200
id=default response=WWW-Authenticate: FooBar param1="value1" param2="value2"
id=default response=WWW-Authenticate: Bearer authorize_uri="id.example.com" p=1 q=0
id=default response=WWW-Authenticate: Basic realm="example.com"
EOF
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=FooBar param1="value1" param2="value2"
wwwauth[]=Bearer authorize_uri="id.example.com" p=1 q=0
wwwauth[]=Basic realm="example.com"
EOF
expect_credential_query store <<-EOF
capability[]=authtype
authtype=Bearer
credential=YS1naXQtdG9rZW4=
protocol=http
host=$HTTPD_DEST
EOF
'
test_expect_success 'access using bearer auth with invalid credentials' '
test_when_finished "per_test_cleanup" &&
set_credential_reply get <<-EOF &&
capability[]=authtype
authtype=Bearer
credential=incorrect-token
EOF
# Basic base64(a-git-token)
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
id=1 creds=Bearer YS1naXQtdG9rZW4=
EOF
CHALLENGE="$HTTPD_ROOT_PATH/custom-auth.challenge" &&
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
id=1 status=200
id=default response=WWW-Authenticate: FooBar param1="value1" param2="value2"
id=default response=WWW-Authenticate: Bearer authorize_uri="id.example.com" p=1 q=0
id=default response=WWW-Authenticate: Basic realm="example.com"
EOF
test_config_global credential.helper test-helper &&
test_must_fail git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=FooBar param1="value1" param2="value2"
wwwauth[]=Bearer authorize_uri="id.example.com" p=1 q=0
wwwauth[]=Basic realm="example.com"
EOF
expect_credential_query erase <<-EOF
capability[]=authtype
authtype=Bearer
credential=incorrect-token
protocol=http
host=$HTTPD_DEST
wwwauth[]=FooBar param1="value1" param2="value2"
wwwauth[]=Bearer authorize_uri="id.example.com" p=1 q=0
wwwauth[]=Basic realm="example.com"
EOF
'
test_expect_success 'access using three-legged auth' '
test_when_finished "per_test_cleanup" &&
set_credential_reply get <<-EOF &&
capability[]=authtype
capability[]=state
authtype=Multistage
credential=YS1naXQtdG9rZW4=
state[]=helper:foobar
continue=1
EOF
set_credential_reply get foobar <<-EOF &&
capability[]=authtype
capability[]=state
authtype=Multistage
credential=YW5vdGhlci10b2tlbg==
state[]=helper:bazquux
EOF
cat >"$HTTPD_ROOT_PATH/custom-auth.valid" <<-EOF &&
id=1 creds=Multistage YS1naXQtdG9rZW4=
id=2 creds=Multistage YW5vdGhlci10b2tlbg==
EOF
CHALLENGE="$HTTPD_ROOT_PATH/custom-auth.challenge" &&
cat >"$HTTPD_ROOT_PATH/custom-auth.challenge" <<-EOF &&
id=1 status=401 response=WWW-Authenticate: Multistage challenge="456"
id=1 status=401 response=WWW-Authenticate: Bearer authorize_uri="id.example.com" p=1 q=0
id=2 status=200
id=default response=WWW-Authenticate: Multistage challenge="123"
id=default response=WWW-Authenticate: Bearer authorize_uri="id.example.com" p=1 q=0
EOF
test_config_global credential.helper test-helper &&
git ls-remote "$HTTPD_URL/custom_auth/repo.git" &&
expect_credential_query get <<-EOF &&
capability[]=authtype
capability[]=state
protocol=http
host=$HTTPD_DEST
wwwauth[]=Multistage challenge="123"
wwwauth[]=Bearer authorize_uri="id.example.com" p=1 q=0
EOF
expect_credential_query get foobar <<-EOF &&
capability[]=authtype
capability[]=state
authtype=Multistage
protocol=http
host=$HTTPD_DEST
wwwauth[]=Multistage challenge="456"
wwwauth[]=Bearer authorize_uri="id.example.com" p=1 q=0
state[]=helper:foobar
EOF
expect_credential_query store bazquux <<-EOF
capability[]=authtype
capability[]=state
authtype=Multistage
credential=YW5vdGhlci10b2tlbg==
protocol=http
host=$HTTPD_DEST
state[]=helper:bazquux
EOF
'
test_done


@@ -2518,6 +2518,29 @@ test_expect_success 'complete tree filename with metacharacters' '
EOF
'
test_expect_success 'symbolic-ref completes builtin options' '
test_completion "git symbolic-ref --d" <<-\EOF
--delete Z
EOF
'
test_expect_success 'symbolic-ref completes short ref names' '
test_completion "git symbolic-ref foo m" <<-\EOF
main Z
mybranch Z
mytag Z
EOF
'
test_expect_success 'symbolic-ref completes full ref names' '
test_completion "git symbolic-ref foo refs/" <<-\EOF
refs/heads/main Z
refs/heads/mybranch Z
refs/tags/mytag Z
refs/tags/A Z
EOF
'
test_expect_success PERL 'send-email' '
test_completion "git send-email --cov" <<-\EOF &&
--cover-from-description=Z