mirror of https://github.com/git/git.git synced 2024-05-26 22:16:15 +02:00
Commit Graph

407 Commits

Author SHA1 Message Date
Junio C Hamano 9ddd68973a Merge branch 'et/spell-poll-infinite-with-minus-one-only'
We used to pass -1000 to poll(2), expecting it to also mean "no
timeout", which should be spelled as -1.

* et/spell-poll-infinite-with-minus-one-only:
  upload-pack: keep poll(2)'s timeout to -1
2014-09-11 10:33:29 -07:00
Edward Thomson 6c71f8b0d3 upload-pack: keep poll(2)'s timeout to -1
Keep poll's timeout at -1 when uploadpack.keepalive = 0, instead of
setting it to -1000, since some pedantic old systems (e.g. HP-UX) and
the gnulib compat/poll will treat only -1 as the valid value for
an infinite timeout.
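
The fix boils down to something like this (a sketch; the variable names
are illustrative, not necessarily those used in upload-pack.c):

  /* uploadpack.keepalive = 0 means "never time out" */
  int timeout_ms = keepalive > 0 ? 1000 * keepalive : -1;	/* -1, not -1000 */
  ret = poll(pfd, pollsize, timeout_ms);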

Signed-off-by: Edward Thomson <ethomson@microsoft.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-22 11:19:47 -07:00
René Scharfe d318027932 run-command: introduce CHILD_PROCESS_INIT
Most struct child_process variables are cleared using memset first after
declaration.  Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead.  That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
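
A minimal illustration of the difference (sketch):

  /* before: clear the structure by hand */
  struct child_process cmd;
  memset(&cmd, 0, sizeof(cmd));

  /* after: initialize it statically */
  struct child_process cmd = CHILD_PROCESS_INIT;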

Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20 09:53:37 -07:00
Junio C Hamano b407d40933 Merge branch 'nd/log-show-linear-break'
Attempts to show where a single-strand-of-pearls break in "git log"
output.

* nd/log-show-linear-break:
  log: add --show-linear-break to help see non-linear history
  object.h: centralize object flag allocation
2014-04-03 12:38:11 -07:00
Nguyễn Thái Ngọc Duy 208acbfb82 object.h: centralize object flag allocation
While the field "flags" is mainly used by the revision walker, it is
also used in many other places. Centralize the whole flag allocation to
one place for a better overview (and easier to move flags if we have
to).

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-03-25 15:09:24 -07:00
Junio C Hamano 1ddb4d7e5e Merge branch 'nd/upload-pack-shallow'
Serving objects from a shallow repository needs to write a
temporary file to be used, but the serving upload-pack may not have
write access to the repository, which is meant to be read-only.

Instead feed these temporary shallow bounds from the standard input
of pack-objects so that we do not have to use a temporary file.

* nd/upload-pack-shallow:
  upload-pack: send shallow info over stdin to pack-objects
2014-03-21 12:49:08 -07:00
Junio C Hamano 3e14384b12 Merge branch 'jk/shallow-update-fix'
Serving objects from a shallow repository needs to write a
new file to hold the temporary shallow boundaries but it was not
cleaned when we exit due to die() or a signal.

* jk/shallow-update-fix:
  shallow: verify shallow file after taking lock
  shallow: automatically clean up shallow tempfiles
  shallow: use stat_validity to check for up-to-date file
2014-03-21 12:33:29 -07:00
Nguyễn Thái Ngọc Duy b790e0f67c upload-pack: send shallow info over stdin to pack-objects
Before cdab485 (upload-pack: delegate rev walking in shallow fetch to
pack-objects - 2013-08-16) upload-pack does not write to the source
repository. cdab485 starts to write $GIT_DIR/shallow_XXXXXX if it's a
shallow fetch, so the source repo must be writable.

git:// servers do not need write access to repos and usually don't
have it, which means cdab485 breaks shallow clone over git://

Instead of using a temporary file as the media for shallow points, we
can send them over stdin to pack-objects as well. Prepend shallow
SHA-1 with --shallow so pack-objects knows what is what.
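
A rough sketch of what upload-pack would feed to pack-objects' standard
input under this scheme (po_in, shallow_nr and shallow_sha1 are
hypothetical names):

  /* one "--shallow <sha1>" line per shallow point, before the revision args */
  for (i = 0; i < shallow_nr; i++)
	fprintf(po_in, "--shallow %s\n", sha1_to_hex(shallow_sha1[i]));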

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-03-11 13:32:10 -07:00
Jeff King 0179c945fc shallow: automatically clean up shallow tempfiles
We sometimes write tempfiles of the form "shallow_XXXXXX"
during fetch/push operations with shallow repositories.
Under normal circumstances, we clean up the result when we
are done. However, we do not take steps to clean up after
ourselves when we exit due to die() or signal death.

This patch teaches the tempfile creation code to register
handlers to clean up after ourselves. To handle this, we
change the ownership semantics of the filename returned by
setup_temporary_shallow. It now keeps a copy of the filename
itself, and returns only a const pointer to it.
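
The idea, in a minimal standalone sketch (plain atexit() shown here; the
real code also has to cover signal death, presumably via git's sigchain
helpers):

  static char temp_shallow[PATH_MAX];

  static void remove_temp_shallow(void)
  {
	if (*temp_shallow)
		unlink(temp_shallow);	/* best effort; ignore errors */
  }

  /* registered once, right after the tempfile is created */
  atexit(remove_temp_shallow);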

We can also do away with explicit tempfile removal in the
callers. They all exit not long after finishing with the
file, so they can rely on the auto-cleanup, simplifying the
code.

Note that we keep things simple and maintain only a single
filename to be cleaned. This is sufficient for the current
caller, but we future-proof it with a die("BUG").

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-27 12:07:13 -08:00
Michael Haggerty afc711b8e1 rename read_replace_refs to check_replace_refs
The semantics of this flag was changed in commit

    e1111cef23 inline lookup_replace_object() calls

but wasn't renamed at the time to minimize code churn.  Rename it now,
and add a comment explaining its use.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-20 14:16:55 -08:00
Junio C Hamano 92251b1b5b Merge branch 'nd/shallow-clone'
Fetching from a shallow-cloned repository used to be forbidden,
primarily because the codepaths involved were not carefully vetted
and we did not bother supporting such usage. This attempts to allow
object transfer out of a shallow-cloned repository in a controlled
way (i.e. the receiver becomes a shallow repository with truncated
history).

* nd/shallow-clone: (31 commits)
  t5537: fix incorrect expectation in test case 10
  shallow: remove unused code
  send-pack.c: mark a file-local function static
  git-clone.txt: remove shallow clone limitations
  prune: clean .git/shallow after pruning objects
  clone: use git protocol for cloning shallow repo locally
  send-pack: support pushing from a shallow clone via http
  receive-pack: support pushing to a shallow clone via http
  smart-http: support shallow fetch/clone
  remote-curl: pass ref SHA-1 to fetch-pack as well
  send-pack: support pushing to a shallow clone
  receive-pack: allow pushes that update .git/shallow
  connected.c: add new variant that runs with --shallow-file
  add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
  receive/send-pack: support pushing from a shallow clone
  receive-pack: reorder some code in unpack()
  fetch: add --update-shallow to accept refs that update .git/shallow
  upload-pack: make sure deepening preserves shallow roots
  fetch: support fetching from a shallow repository
  clone: support remote shallow repository
  ...
2014-01-17 12:21:20 -08:00
Junio C Hamano ad70448576 Merge branch 'cc/starts-n-ends-with'
Remove a few duplicate implementations of prefix/suffix comparison
functions, and rename them to starts_with and ends_with.

* cc/starts-n-ends-with:
  replace {pre,suf}fixcmp() with {starts,ends}_with()
  strbuf: introduce starts_with() and ends_with()
  builtin/remote: remove postfixcmp() and use suffixcmp() instead
  environment: normalize use of prefixcmp() by removing " != 0"
2013-12-17 12:02:44 -08:00
Nguyễn Thái Ngọc Duy 16094885ca smart-http: support shallow fetch/clone
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10 16:14:18 -08:00
Nguyễn Thái Ngọc Duy 79d3a236c5 upload-pack: make sure deepening preserves shallow roots
When "fetch --depth=N" is used with an N that exceeds the longest
chain of history in the source repo, we usually just send an
"unshallow" line to the client so full history is obtained.

When the source repo is shallow we need to make sure to "unshallow"
the current shallow point _and_ "shallow" again when the commit
reaches its shallow bottom in the source repo.

This should fix both cases: large <N> and --unshallow.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10 16:14:17 -08:00
Nguyễn Thái Ngọc Duy 1a30f5a2f2 shallow.c: extend setup_*_shallow() to accept extra shallow commits
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10 16:14:16 -08:00
Nguyễn Thái Ngọc Duy ad491366de make the sender advertise shallow commits to the receiver
If either receive-pack or upload-pack is called on a shallow
repository, shallow commits (*) will be sent after the ref
advertisement (but before the packet flush), so that the receiver has
the full "shape" of the sender's commit graph. This will be needed for
the receiver to update its .git/shallow if necessary.
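
Concretely, the sender would emit one pkt-line per shallow commit right
after the ref advertisement, along these lines (an illustrative sketch;
"graft" stands for one recorded shallow entry):

  packet_write(fd, "shallow %s\n", sha1_to_hex(graft->sha1));
  ...
  packet_flush(fd);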

This breaks the protocol for all clients trying to push to a shallow
repo, or fetch from one, which is basically the same end result as
today's "is_repository_shallow() && die()" in receive-pack and
upload-pack. New clients will be made aware of shallow upstream and
can make use of this information.

The sender must send all shallow commits that are sent in the
following pack. It may send more shallow commits than necessary.

upload-pack for example may choose to advertise no shallow commits if
it knows in advance that the pack it's going to send contains no
shallow commits. But upload-pack is the server, so we choose the
cheaper way, send full .git/shallow and let the client deal with it.

Smart HTTP is not affected by this patch. Shallow support on
smart-http comes later separately.

(*) A shallow commit is a commit that terminates the revision
    walker. It is usually put in .git/shallow in order to keep the
    revision walker from going out of bounds because there is no
    guarantee that objects behind this commit are available.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10 16:14:16 -08:00
Christian Couder 5955654823 replace {pre,suf}fixcmp() with {starts,ends}_with()
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of the prefixcmp() and suffixcmp() with new API
functions.

The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series.
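
Note that the conversion flips the sense of the return value, since
prefixcmp() returned 0 on a match while starts_with() returns non-zero;
for example:

  if (!prefixcmp(refname, "refs/heads/"))	/* old */
  if (starts_with(refname, "refs/heads/"))	/* new */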

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05 14:13:21 -08:00
Junio C Hamano 5bb62059f2 Merge branch 'jk/robustify-parse-commit'
* jk/robustify-parse-commit:
  checkout: do not die when leaving broken detached HEAD
  use parse_commit_or_die instead of custom message
  use parse_commit_or_die instead of segfaulting
  assume parse_commit checks for NULL commit
  assume parse_commit checks commit->object.parsed
  log_tree_diff: die when we fail to parse a commit
2013-12-05 12:54:01 -08:00
Junio C Hamano 33da0c9c3c Merge branch 'maint'
Hotfix for recent regression while talking to upload-pack
in a repository with many symbolic refs.

* maint:
  Revert "upload-pack: send non-HEAD symbolic refs"
2013-11-18 12:25:28 -08:00
Junio C Hamano d007dbf7d6 Revert "upload-pack: send non-HEAD symbolic refs"
This reverts commit 5e7dcad771cb873e278a0571b46910d7c32e2f6c; there
may be unbounded number of symbolic refs in the repository, but the
capability header line in the on-wire protocol has a rather low
length limit.
2013-11-18 10:15:45 -08:00
Junio C Hamano 9196a2f8bd Merge branch 'jc/upload-pack-send-symref' into maint
One long-standing flaw in the pack transfer protocol used by "git
clone" was that there was no way to tell the other end which branch
"HEAD" points at, and the receiving end needed to guess.  A new
capability has been defined in the pack protocol to convey this
information so that cloning from a repository with more than one
branch pointing at the same commit where HEAD is now
reliably sets the initial branch in the resulting repository.

* jc/upload-pack-send-symref:
  t5570: Update for clone-progress-to-stderr branch
  t5570: Update for symref capability
  clone: test the new HEAD detection logic
  connect: annotate refs with their symref information in get_remote_head()
  connect.c: make parse_feature_value() static
  upload-pack: send non-HEAD symbolic refs
  upload-pack: send symbolic ref information as capability
  upload-pack.c: do not pass confusing cb_data to mark_our_ref()
  t5505: fix "set-head --auto with ambiguous HEAD" test
2013-11-08 11:38:00 -08:00
Junio C Hamano 9907d1359c Merge branch 'jc/upload-pack-send-symref'
One long-standing flaw in the pack transfer protocol used by "git
clone" was that there was no way to tell the other end which branch
"HEAD" points at, and the receiving end needed to guess.  A new
capability has been defined in the pack protocol to convey this
information so that cloning from a repository with more than one
branch pointing at the same commit where HEAD is now
reliably sets the initial branch in the resulting repository.

* jc/upload-pack-send-symref:
  t5570: Update for clone-progress-to-stderr branch
  t5570: Update for symref capability
  clone: test the new HEAD detection logic
  connect: annotate refs with their symref information in get_remote_head()
  connect.c: make parse_feature_value() static
  upload-pack: send non-HEAD symbolic refs
  upload-pack: send symbolic ref information as capability
  upload-pack.c: do not pass confusing cb_data to mark_our_ref()
  t5505: fix "set-head --auto with ambiguous HEAD" test
2013-10-30 12:10:06 -07:00
Jeff King 367068e0dd use parse_commit_or_die instead of custom message
Many calls to parse_commit detect errors and die. In some
cases, the custom error messages are more useful than what
parse_commit_or_die could produce, because they give some
context, like which ref the commit came from. Some, however,
just say "invalid commit". Let's convert the latter to use
parse_commit_or_die; its message is slightly more informative,
and it makes the error more consistent throughout git.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-24 15:43:51 -07:00
Junio C Hamano 6ba0d9551a Merge branch 'nd/fetch-into-shallow' into maint
When there is no sufficient overlap between old and new history
during a "git fetch" into a shallow repository, objects that the
sending side knows the receiving end has were unnecessarily sent.

* nd/fetch-into-shallow:
  Add testcase for needless objects during a shallow fetch
  list-objects: mark more commits as edges in mark_edges_uninteresting
  list-objects: reduce one argument in mark_edges_uninteresting
  upload-pack: delegate rev walking in shallow fetch to pack-objects
  shallow: add setup_temporary_shallow()
  shallow: only add shallow graft points to new shallow file
  move setup_alternate_shallow and write_shallow_commits to shallow.c
2013-10-23 13:32:17 -07:00
Junio C Hamano 9432c6aaa5 Merge branch 'jk/upload-pack-keepalive' into maint
* jk/upload-pack-keepalive:
  upload-pack: bump keepalive default to 5 seconds
  upload-pack: send keepalive packets during pack computation
2013-10-17 15:46:01 -07:00
Junio C Hamano 7b8315bb59 Merge branch 'jk/upload-pack-keepalive'
When running "fetch -q", a long silence while the sender side
computes the set of objects to send can be mistaken by proxies as a
dropped connection.  The server side has been taught to send small
empty messages to keep the connection alive.

* jk/upload-pack-keepalive:
  upload-pack: bump keepalive default to 5 seconds
  upload-pack: send keepalive packets during pack computation
2013-09-20 12:39:05 -07:00
Junio C Hamano 238504b014 Merge branch 'nd/fetch-into-shallow'
When there is no sufficient overlap between old and new history
during a fetch into a shallow repository, we unnecessarily sent
objects the sending side knows the receiving end has.

* nd/fetch-into-shallow:
  Add testcase for needless objects during a shallow fetch
  list-objects: mark more commits as edges in mark_edges_uninteresting
  list-objects: reduce one argument in mark_edges_uninteresting
  upload-pack: delegate rev walking in shallow fetch to pack-objects
  shallow: add setup_temporary_shallow()
  shallow: only add shallow graft points to new shallow file
  move setup_alternate_shallow and write_shallow_commits to shallow.c
2013-09-20 12:25:32 -07:00
Junio C Hamano 5e7dcad771 upload-pack: send non-HEAD symbolic refs
With the same mechanism used to tell the other end where "HEAD"
points, we can tell it the target of other symbolic refs as
well.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-09-17 21:51:58 -07:00
Junio C Hamano 7171d8c15f upload-pack: send symbolic ref information as capability
One long-standing flaw in the pack transfer protocol was that there
was no way to tell the other end which branch "HEAD" points at.
With a capability "symref=HEAD:refs/heads/master", let the sender
tell the receiver what symbolic ref points at what ref.

This capability can be repeated more than once to represent symbolic
refs other than HEAD, such as "refs/remotes/origin/HEAD".

Add an infrastructure to collect symbolic refs, format them as extra
capabilities and put it on the wire.  For now, just send information
on the "HEAD" and nothing else.
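
As a sketch, the extra capability could be assembled along these lines
(symref_info and the literal target are illustrative) and appended to
the capability list that goes out with the first advertised ref:

  strbuf_addf(&symref_info, " symref=%s:%s", "HEAD", "refs/heads/master");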

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-09-17 21:50:26 -07:00
Junio C Hamano a4d695de0d upload-pack.c: do not pass confusing cb_data to mark_our_ref()
The callee does not use cb_data, and the caller is an intermediate
function in a callchain that later wants to use the cb_data for its
own use.  Clarify the code by breaking the dataflow explicitly by
not passing cb_data down to mark_our_ref().

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-09-17 21:50:02 -07:00
Jeff King 115dedd722 upload-pack: bump keepalive default to 5 seconds
There is no reason not to turn on keepalives by default.
They take very little bandwidth, and significantly less than
the progress reporting they are replacing. And in the case
that progress reporting is on, we should never need to send
a keepalive anyway, as we will constantly be showing
progress and resetting the keepalive timer.

We do not necessarily know what the client's idea of a
reasonable timeout is, so let's keep this on the low side of
5 seconds. That is high enough that we will always prefer
our normal 1-second progress reports to sending a keepalive
packet, but low enough that no sane client should consider
the connection hung.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-09-09 11:15:17 -07:00
Jeff King 05e95155a1 upload-pack: send keepalive packets during pack computation
When upload-pack has started pack-objects, there may be a quiet
period while pack-objects prepares the pack (i.e., counting objects
and delta compression). Normally we would see (and send to the
client) progress information, but if "--quiet" is in effect,
pack-objects will produce nothing at all until the pack data is
ready. On a large repository, this can take tens of seconds (or even
minutes if the system is loaded or the repository is badly packed).
Clients or intermediate proxies can sometimes give up in this
situation, assuming that the server or connection has hung.

This patch introduces a "keepalive" option; if upload-pack sees no
data from pack-objects for a certain number of seconds, it will send
an empty sideband data packet to let the other side know that we are
still working on it.
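
A sketch of the timeout branch of the polling loop (names illustrative;
the 5-byte payload is an otherwise empty sideband-1 pkt-line):

  ret = poll(pfd, pollsize, 1000 * keepalive);
  if (!ret) {
	static const char buf[] = "0005\1";	/* empty band-1 packet */
	write_or_die(1, buf, 5);		/* "still alive" */
  }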

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-09-09 11:14:37 -07:00
Nguyễn Thái Ngọc Duy cdab485853 upload-pack: delegate rev walking in shallow fetch to pack-objects
upload-pack has a special revision walking code for shallow
recipients. It works almost like the similar code in pack-objects
except:

1. in upload-pack, graft points could be added for deepening;

2. also when the repository is deepened, the shallow point will be
   moved further away from the tip, but the old shallow point will be
   marked as edge to produce more efficient packs. See 6523078 (make
   shallow repository deepening more network efficient - 2009-09-03).

Pass the file to pack-objects via --shallow-file. This will override
$GIT_DIR/shallow and give pack-objects the exact repository shape
that upload-pack has.
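
Roughly, the pack-objects invocation grows a pair of arguments, along
these lines (sketched with argv_array; names are illustrative):

  if (shallow_file) {
	argv_array_push(&argv, "--shallow-file");
	argv_array_push(&argv, shallow_file);
  }
  argv_array_push(&argv, "pack-objects");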

Mark edge commits by revision command arguments. Even if old shallow
points are passed as "--not" revisions as in this patch, they will not
be picked up by mark_edges_uninteresting() because this function looks
up to parents for edges, while in this case the edge is the children,
in the opposite direction. This will be fixed in a later patch when
all given uninteresting commits are marked as edges.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-08-28 11:52:11 -07:00
Junio C Hamano 47a5918536 cache.h: move remote/connect API out of it
The definition of "struct ref" in "cache.h", a header file so
central to the system, always confused me.  This structure is not
about the local ref used by sha1-name API to name local objects.

It is what refspecs are expanded into, after finding out what refs
the other side has, to define what refs are updated after object
transfer succeeds to what values.  It belongs to "remote.h" together
with "struct refspec".

While we are at it, also move the types and functions related to the
Git transport connection to a new header file connect.h

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-08 14:34:24 -07:00
Michael Heemskerk af04fa2a78 upload-pack: ignore 'shallow' lines with unknown obj-ids
When the client sends a 'shallow' line for an object that the server does
not have, the server currently dies with the error: "did not find object
for shallow <obj-id>".  The client may have truncated the history at
the commit by fetching shallowly from a different server, or the commit
may have been garbage collected by the server. In either case, this
unknown commit is not relevant for calculating the pack that is to be
sent and can be safely ignored, and it is not used when recomputing where
the updated history of the client is cauterised.

The documentation in technical/pack-protocol.txt has been updated to
remove the restriction that "Clients MUST NOT mention an obj-id which it
does not know exists on the server". This requirement is not realistic
because clients cannot know whether an object has been garbage collected
by the server.

Signed-off-by: Michael Heemskerk <mheemskerk@atlassian.com>
Reviewed-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-28 22:33:53 -07:00
Junio C Hamano e013bdab0f Merge branch 'jk/pkt-line-cleanup'
Clean up pkt-line API, implementation and its callers to make them
more robust.

* jk/pkt-line-cleanup:
  do not use GIT_TRACE_PACKET=3 in tests
  remote-curl: always parse incoming refs
  remote-curl: move ref-parsing code up in file
  remote-curl: pass buffer straight to get_remote_heads
  teach get_remote_heads to read from a memory buffer
  pkt-line: share buffer/descriptor reading implementation
  pkt-line: provide a LARGE_PACKET_MAX static buffer
  pkt-line: move LARGE_PACKET_MAX definition from sideband
  pkt-line: teach packet_read_line to chomp newlines
  pkt-line: provide a generic reading function with options
  pkt-line: drop safe_write function
  pkt-line: move a misplaced comment
  write_or_die: raise SIGPIPE when we get EPIPE
  upload-archive: use argv_array to store client arguments
  upload-archive: do not copy repo name
  send-pack: prefer prefixcmp over memcmp in receive_status
  fetch-pack: fix out-of-bounds buffer offset in get_ack
  upload-pack: remove packet debugging harness
  upload-pack: do not add duplicate objects to shallow list
  upload-pack: use get_sha1_hex to parse "shallow" lines
2013-04-01 08:59:37 -07:00
Junio C Hamano 62bd0c0105 Merge branch 'jk/peel-ref'
Recent optimization broke shallow clones.

* jk/peel-ref:
  upload-pack: load non-tip "want" objects from disk
  upload-pack: make sure "want" objects are parsed
  upload-pack: drop lookup-before-parse optimization
2013-03-25 14:01:03 -07:00
Junio C Hamano e4e1c54990 Merge branch 'jc/fetch-raw-sha1'
Allows requests to fetch objects at any tip of refs (including
hidden ones).  It seems that there may be use cases even outside
Gerrit (e.g. $gmane/215701).

* jc/fetch-raw-sha1:
  fetch: fetch objects by their exact SHA-1 object names
  upload-pack: optionally allow fetching from the tips of hidden refs
  fetch: use struct ref to represent refs to be fetched
  parse_fetch_refspec(): clarify the codeflow a bit
2013-03-21 14:02:27 -07:00
Jeff King f59de5d1ff upload-pack: load non-tip "want" objects from disk
It is a long-time security feature that upload-pack will not
serve any "want" lines that do not correspond to the tip of
one of our refs. Traditionally, this was enforced by
checking the objects in the in-memory hash; they should have
been loaded and received the OUR_REF flag during the
advertisement.

The stateless-rpc mode, however, has a race condition here:
one process advertises, and another receives the want lines,
so the refs may have changed in the interim.  To address
this, commit 051e400 added a new verification mode; if the
object is not OUR_REF, we set a "has_non_tip" flag, and then
later verify that the requested objects are reachable from
our current tips.

However, we still die immediately when the object is not in
our in-memory hash, and at this point we should only have
loaded our tip objects. So the check_non_tip code path does
not ever actually trigger, as any non-tip objects would
have already caused us to die.

We can fix that by using parse_object instead of
lookup_object, which will load the object from disk if it
has not already been loaded.

We still need to check that parse_object does not return
NULL, though, as it is possible we do not have the object
at all. A more appropriate error message would be "no such
object" rather than "not our ref"; however, we do not want
to leak information about what objects are or are not in
the object database, so we continue to use the same "not
our ref" message that would be produced by an unreachable
object.
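
The fix is roughly (a sketch; sha1 is the raw object name parsed from
the "want" line):

  struct object *o = parse_object(sha1);	/* falls back to the object db */
  if (!o)
	die("git upload-pack: not our ref %s", sha1_to_hex(sha1));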

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-03-16 22:19:29 -07:00
Jeff King 06f15bf1f3 upload-pack: make sure "want" objects are parsed
When upload-pack receives a "want" line from the client, it
adds it to an object array. We call lookup_object to find
the actual object, which will only check for objects already
in memory. This works because we are expecting to find
objects that we already loaded during the ref advertisement.

We use the resulting object structs for a variety of
purposes. Some of them care only about the object flags, but
others care about the type of the object (e.g.,
ok_to_give_up), or even feed them to the revision parser
(when --depth is used), which assumes that objects it
receives are fully parsed.

Once upon a time, this was OK; any object we loaded into
memory would also have been parsed. But since 435c833
(upload-pack: use peel_ref for ref advertisements,
2012-10-04), we try to avoid parsing objects during the ref
advertisement. This means that lookup_object may return an
object with a type of OBJ_NONE. The resulting mess depends
on the exact set of objects, but can include the revision
parser barfing, or the shallow code sending the wrong set of
objects.

This patch teaches upload-pack to parse each "want" object
as we receive it. We do not replace the lookup_object call
with parse_object, as the current code is careful not to let
just any object appear on a "want" line, but rather only one
we have previously advertised (whereas parse_object would
actually load any arbitrary object from disk).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-03-16 22:16:56 -07:00
Jeff King a6eec12638 upload-pack: drop lookup-before-parse optimization
When we receive a "have" line from the client, we want to
load the object pointed to by the sha1. However, we are
careful to do:

  o = lookup_object(sha1);
  if (!o || !o->parsed)
	  o = parse_object(sha1);

to avoid loading the object from disk if we have already
seen it.  However, since ccdc603 (parse_object: try internal
cache before reading object db), parse_object already does
this optimization internally. We can just call parse_object
directly.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-03-16 22:16:45 -07:00
Jeff King 74543a0423 pkt-line: provide a LARGE_PACKET_MAX static buffer
Most of the callers of packet_read_line just read into a
static 1000-byte buffer (callers which handle arbitrary
binary data already use LARGE_PACKET_MAX). This works fine
in practice, because:

  1. The only variable-sized data in these lines is a ref
     name, and refs tend to be a lot shorter than 1000
     characters.

  2. When sending ref lines, git-core always limits itself
     to 1000 byte packets.

However, the only limit given in the protocol specification
in Documentation/technical/protocol-common.txt is
LARGE_PACKET_MAX; the 1000 byte limit is mentioned only in
pack-protocol.txt, and then only describing what we write,
not as a specific limit for readers.

This patch lets us bump the 1000-byte limit to
LARGE_PACKET_MAX. Even though git-core will never write a
packet where this makes a difference, there are two good
reasons to do this:

  1. Other git implementations may have followed
     protocol-common.txt and used a larger maximum size. We
     don't bump into it in practice because it would involve
     very long ref names.

  2. We may want to increase the 1000-byte limit one day.
     Since packets are transferred before any capabilities,
     it's difficult to do this in a backwards-compatible
     way. But if we bump the size of buffer the readers can
     handle, eventually older versions of git will be
     obsolete enough that we can justify bumping the
     writers, as well. We don't have plans to do this
     anytime soon, but there is no reason not to start the
     clock ticking now.

Just bumping all of the reading bufs to LARGE_PACKET_MAX
would waste memory. Instead, since most readers just read
into a temporary buffer anyway, let's provide a single
static buffer that all callers can use. We can further wrap
this detail away by having the packet_read_line wrapper just
use the buffer transparently and return a pointer to the
static storage.  That covers most of the cases, and the
remaining ones already read into their own LARGE_PACKET_MAX
buffers.
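
After this change a typical line-oriented caller can be as simple as
(illustrative; the second argument receives the length and may be NULL):

  char *line = packet_read_line(fd, NULL);	/* points into the shared static buffer */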

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:22 -08:00
Jeff King 819b929d33 pkt-line: teach packet_read_line to chomp newlines
The packets sent during ref negotiation are all terminated
by newline; even though the code to chomp these newlines is
short, we end up doing it in a lot of places.

This patch teaches packet_read_line to auto-chomp the
trailing newline; this lets us get rid of a lot of inline
chomping code.

As a result, some call-sites which are not reading
line-oriented data (e.g., when reading chunks of packfiles
alongside sideband) transition away from packet_read_line to
the generic packet_read interface. This patch converts all
of the existing callsites.

Since the function signature of packet_read_line does not
change (but its behavior does), there is a possibility of
new callsites being introduced in later commits, silently
introducing an incompatibility.  However, since a later
patch in this series will change the signature, such a
commit would have to be merged directly into this commit,
not to the tip of the series; we can therefore ignore the
issue.

This is an internal cleanup and should produce no change of
behavior in the normal case. However, there is one corner
case to note. Callers of packet_read_line have never been
able to tell the difference between a flush packet ("0000")
and an empty packet ("0004"), as both cause packet_read_line
to return a length of 0. Readers treat them identically,
even though Documentation/technical/protocol-common.txt says
we must not; it also says that implementations should not
send an empty pkt-line.

By stripping out the newline before the result gets to the
caller, we will now treat the newline-only packet ("0005\n")
the same as an empty packet, which in turn gets treated like
a flush packet. In practice this doesn't matter, as neither
empty nor newline-only packets are part of git's protocols
(at least not for the line-oriented bits, and readers who
are not expecting line-oriented packets will be calling
packet_read directly, anyway). But even if we do decide to
care about the distinction later, it is orthogonal to this
patch.  The right place to tighten would be to stop treating
empty packets as flush packets, and this change does not
make doing so any harder.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:21 -08:00
Jeff King cdf4fb8e33 pkt-line: drop safe_write function
This is just write_or_die by another name. The one
distinction is that write_or_die will treat EPIPE specially
by suppressing error messages. That's fine, as we die by
SIGPIPE anyway (and in the off chance that it is disabled,
write_or_die will simulate it).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:21 -08:00
Jeff King 97a83fa839 upload-pack: remove packet debugging harness
If you set the GIT_DEBUG_SEND_PACK environment variable,
upload-pack will dump lines it receives in the receive_needs
phase to a descriptor. This debugging harness is a strict
subset of what GIT_TRACE_PACKET can do. Let's just drop it
in favor of that.

A few tests used GIT_DEBUG_SEND_PACK to confirm which
objects get sent; we have to adapt them to the new output
format.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:21 -08:00
Jeff King e58e57e49e upload-pack: do not add duplicate objects to shallow list
When the client tells us it has a shallow object via
"shallow <sha1>", we make sure we have the object, mark it
with a flag, then add it to a dynamic array of shallow
objects. This means that a client can get us to allocate
arbitrary amounts of memory just by flooding us with shallow
lines (whether they have the objects or not). You can
demonstrate it easily with:

  yes '0035shallow e83c5163316f89bfbde7d9ab23ca2e25604af290' |
  git-upload-pack git.git

We already protect against duplicates in want lines by
checking if our flag is already set; let's do the same thing
here. Note that a client can still get us to allocate some
amount of memory by marking every object in the repo as
"shallow" (or "want"). But this at least bounds it with the
number of objects in the repository, which is not under the
control of an upload-pack client.
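
The guard is essentially the same one used for "want" lines (sketch; the
flag and array names follow upload-pack's existing ones):

  if (!(object->flags & CLIENT_SHALLOW)) {
	object->flags |= CLIENT_SHALLOW;
	add_object_array(object, NULL, &shallows);
  }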

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:21 -08:00
Jeff King b7b021701c upload-pack: use get_sha1_hex to parse "shallow" lines
When we receive a line like "shallow <sha1>" from the
client, we feed the <sha1> part to get_sha1. This is a
mistake, as the argument on a shallow line is defined by
Documentation/technical/pack-protocol.txt to contain an
"obj-id".  This is never defined in the BNF, but it is clear
from the text and from the other uses that it is meant to be
a hex sha1, not an arbitrary identifier (and that is what
fetch-pack has always sent).

We should be using get_sha1_hex instead, which doesn't allow
the client to request arbitrary junk like "HEAD@{yesterday}".
Because this is just marking shallow objects, the client
couldn't actually do anything interesting (like fetching
objects from unreachable reflog entries), but we should keep
our parsing tight to be on the safe side.
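
In other words, the parsing becomes something like (sketch; line points
at the packet text "shallow <obj-id>"):

  unsigned char sha1[20];
  if (get_sha1_hex(line + 8, sha1))	/* 40 hex digits after "shallow " */
	die("invalid shallow line: %s", line);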

Because get_sha1 is for the most part a superset of
get_sha1_hex, in theory the only behavior change should be
disallowing non-hex object references. However, there is
one interesting exception: get_sha1 will only parse
a 40-character hex sha1 if the string has exactly 40
characters, whereas get_sha1_hex will just eat the first 40
characters, leaving the rest. That means that current
versions of git-upload-pack will not accept a "shallow"
packet that has a trailing newline, even though the protocol
documentation is clear that newlines are allowed (even
encouraged) in non-binary parts of the protocol.

This never mattered in practice, though, because fetch-pack,
contrary to the protocol documentation, does not include a
newline in its shallow lines. JGit follows its lead (though
it correctly is strict on the parsing end about wanting a
hex object id).

We do not adjust fetch-pack to send newlines here, as it
would break communication with older versions of git (and
there is no actual benefit to doing so, except for
consistency with other parts of the protocol).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:20 -08:00
Junio C Hamano ce735bf7fd Merge branch 'jc/hidden-refs'
Allow the server side to redact the refs/ namespace it shows to the
client.

Will merge to 'master'.

* jc/hidden-refs:
  upload/receive-pack: allow hiding ref hierarchies
  upload-pack: simplify request validation
  upload-pack: share more code
2013-02-17 15:25:57 -08:00
Junio C Hamano 390eb36b0a upload-pack: optionally allow fetching from the tips of hidden refs
With the uploadpack.allowtipsha1inwant configuration option set, future
versions of "git fetch" that allow an exact object name (likely to
have been obtained out of band) on the LHS of the fetch refspec can
make a request with a "want" line that names an object that may not
have been advertised due to transfer.hiderefs configuration.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-07 13:56:52 -08:00
Junio C Hamano daebaa7813 upload/receive-pack: allow hiding ref hierarchies
A repository may have refs that are only used for its internal
bookkeeping purposes that should not be exposed to the others that
come over the network.

Teach upload-pack to omit some refs from its initial advertisement
by paying attention to the uploadpack.hiderefs multi-valued
configuration variable.  Do the same to receive-pack via the
receive.hiderefs variable.  As a convenient short-hand, allow using
transfer.hiderefs to set the value to both of these variables.
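
In the advertisement code paths this boils down to a check of roughly
this shape (a sketch, inside the per-ref callback):

  if (ref_is_hidden(refname))
	return 0;	/* act as if the ref did not exist */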

Any ref that is under the hierarchies listed on the value of these
variable is excluded from responses to requests made by "ls-remote",
"fetch", etc. (for upload-pack) and "push" (for receive-pack).

Because these hidden refs do not count as OUR_REF, an attempt to
fetch objects at the tip of them will be rejected, and because these
refs do not get advertised, "git push :" will not treat local branches
that share their names as "matching" ones to be sent.

An attempt to update/delete these hidden refs with an explicit
refspec, e.g. "git push origin :refs/hidden/22", is rejected.  This
is not a new restriction.  To the pusher, it would appear that there
is no such ref, so its push request will conclude with "Now that I
sent you all the data, it is time for you to update the refs.  I saw
that the ref did not exist when I started pushing, and I want the
result to point at this commit".  The receiving end will apply the
compare-and-swap rule to this request and rejects the push with
"Well, your update request conflicts with somebody else; I see there
is such a ref.", which is the right thing to do. Otherwise a push to
a hidden ref will always be "the last one wins", which is not a good
default.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-07 13:48:47 -08:00
Junio C Hamano 2532d891a4 Merge branch 'nd/fetch-depth-is-broken'
"git fetch --depth" was broken in at least three ways.  The
resulting history was deeper than specified by one commit, it was
unclear how to wipe the shallowness of the repository with the
command, and documentation was misleading.

* nd/fetch-depth-is-broken:
  fetch: elaborate --depth action
  upload-pack: fix off-by-one depth calculation in shallow clone
  fetch: add --unshallow for turning shallow repo into complete one
2013-02-01 12:39:24 -08:00
Junio C Hamano 3f1da57fff upload-pack: simplify request validation
A long time ago, we used to punt on a large (read: asking for more
than 256 refs) fetch request and instead sent a full pack, because
we couldn't fit many refs on the command line of rev-list we run
internally to enumerate the objects to be sent.  To fix this,
565ebbf (upload-pack: tighten request validation., 2005-10-24),
added a check to count the number of refs in the request and matched
with the number of refs we advertised, and changed the invocation of
rev-list to pass "--all" to it, still keeping us under the command
line argument limit.

However, these days we feed the list of objects requested and the
list of objects the other end is known to have via standard input,
so there is no longer a valid reason to special case a full clone
request.  Remove the code associated with "create_full_pack" to
simplify the logic.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-28 21:05:51 -08:00
Junio C Hamano cbbe50db76 upload-pack: share more code
We mark the objects pointed at our refs with "OUR_REF" flag in two
functions (mark_our_ref() and send_ref()), but we can just use the
former as a helper for the latter.

Update the way mark_our_ref() prepares in-core object to use
lookup_unknown_object() to delay reading the actual object data,
just like we did in 435c833 (upload-pack: use peel_ref for ref
advertisements, 2012-10-04).

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-18 15:48:49 -08:00
Junio C Hamano e43171a4a7 Merge branch 'nd/upload-pack-shallow-must-be-commit'
A minor consistency check patch that does not have much relevance
to the real world.

* nd/upload-pack-shallow-must-be-commit:
  upload-pack: only accept commits from "shallow" line
2013-01-14 08:15:44 -08:00
Nguyễn Thái Ngọc Duy 4dcb167fc3 fetch: add --unshallow for turning shallow repo into complete one
The user can do --depth=2147483647 (*) for restoring a complete repo
now. But it's hard to remember. Any other numbers larger than the
longest commit chain in the repository would also do, but some
guessing may be involved. Make easy-to-remember --unshallow an alias
for --depth=2147483647.

Make upload-pack recognize this special number as infinite depth. The
effect is essentially the same as before, except that upload-pack is
more efficient because it does not have to traverse to the bottom
anymore.

The chance of a user actually wanting exactly 2147483647 commits
depth, not infinite, on a repository with a history that long, is
probably too small to consider. The client can learn to add or
subtract one commit to avoid the special treatment when that actually
happens.

(*) This is the largest positive number a 32-bit signed integer can
    contain. JGit and older C Git store depth as "int" so both are OK
    with this number. Dulwich does not support shallow clone.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-11 09:09:30 -08:00
Nguyễn Thái Ngọc Duy 6293ded348 upload-pack: only accept commits from "shallow" line
We only allow cuts at commits, not arbitrary objects. upload-pack will
fail eventually in register_shallow if a non-commit is given with a
generic error "Object %s is a %s, not a commit". Check it early and
give a more accurate error.

This should never show up in an ordinary session. It's for buggy
clients, or when the user manually edits .git/shallow.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-08 09:28:00 -08:00
Jeff King 435c833237 upload-pack: use peel_ref for ref advertisements
When upload-pack advertises refs, we attempt to peel tags
and advertise the peeled version. We currently hand-roll the
tag dereferencing, and use as many optimizations as we can
to avoid loading non-tag objects into memory.

Not only has peel_ref recently learned these optimizations,
but it also contains an even more important one: it
has access to the "peeled" data from the pack-refs file.
That means we can avoid not only loading annotated tags
entirely, but also avoid doing any kind of object lookup at
all.

This cut the CPU time to advertise refs by 50% in the
linux-2.6 repo, as measured by:

  echo 0000 | git-upload-pack . >/dev/null

best-of-five, warm cache, objects and refs fully packed:

  [before]             [after]
  real    0m0.026s     real    0m0.013s
  user    0m0.024s     user    0m0.008s
  sys     0m0.000s     sys     0m0.000s

Those numbers are irrelevantly small compared to an actual
fetch. Here's a larger repo (400K refs, of which 12K are
unique, and of which only 107 are unique annotated tags):

  [before]             [after]
  real    0m0.704s     real    0m0.596s
  user    0m0.600s     user    0m0.496s
  sys     0m0.096s     sys     0m0.092s

This shows only a 15% speedup (mostly because it has fewer
actual tags to parse), but a larger absolute value (100ms,
which isn't a lot compared to a real fetch, but this
advertisement happens on every fetch, even if the client is
just finding out they are completely up to date).

In truly pathological cases, where you have a large number
of unique annotated tags, it can make an even bigger
difference. Here are the numbers for a linux-2.6 repository
that has had every seventh commit tagged (so about 50K
tags):

  [before]             [after]
  real    0m0.443s     real    0m0.097s
  user    0m0.416s     user    0m0.080s
  sys     0m0.024s     sys     0m0.012s

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-04 20:34:29 -07:00
Jeff King ff5effdf45 include agent identifier in capability string
Instead of having the client advertise a particular version
number in the git protocol, we have managed extensions and
backwards compatibility by having clients and servers
advertise capabilities that they support. This is far more
robust than having each side consult a table of
known versions, and provides sufficient information for the
protocol interaction to complete.

However, it does not allow servers to keep statistics on
which client versions are being used. This information is
not necessary to complete the network request (the
capabilities provide enough information for that), but it
may be helpful to conduct a general survey of client
versions in use.

We already send the client version in the user-agent header
for http requests; adding it here allows us to gather
similar statistics for non-http requests.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-03 13:03:34 -07:00
Junio C Hamano d1afa8baa2 Merge branch 'jk/parse-object-cached'
* jk/parse-object-cached:
  upload-pack: avoid parsing tag destinations
  upload-pack: avoid parsing objects during ref advertisement
  parse_object: try internal cache before reading object db
2012-01-29 13:18:55 -08:00
Junio C Hamano f47182c852 server_supports(): parse feature list more carefully
We have been carefully choosing feature names used in the protocol
extensions so that the vocabulary does not contain a word that is a
substring of another word, so it is not a real problem, but we have
recently added the "quiet" feature word, which would mean we cannot later
add some other word that begins with "quiet" (e.g. "quiet-push"), which is awkward.

Let's make sure that we will eventually be able to do so by teaching the
clients and servers that feature words consist of non-whitespace
characters. This parser also allows us to later add features with
parameters, e.g. "feature=1.5" (parameter values need to be quoted for
whitespace, but we will worry about the details when we do introduce them).

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-08 14:26:28 -08:00
Jeff King 90108a2441 upload-pack: avoid parsing tag destinations
When upload-pack advertises refs, it dereferences any tags
it sees, and shows the resulting sha1 to the client. It does
this by calling deref_tag. That function must load and parse
each tag object to find the sha1 of the tagged object.
However, it also ends up parsing the tagged object itself,
which is not strictly necessary for upload-pack's use.

Each tag produces two object loads (assuming it is not a
recursive tag), when it could get away with only a single
one. Dropping the second load halves the effort we spend.

The downside is that we are no longer verifying the
resulting object by loading it. In particular:

  1. We never cross-check the "type" field given in the tag
     object with the type of the pointed-to object.  If the
     tag says it points to a tag but doesn't, then we will
     keep peeling and realize the error.  If the tag says it
     points to a non-tag but actually points to a tag, we
     will stop peeling and just advertise the pointed-to
     tag.

  2. If we are missing the pointed-to object, we will not
     realize (because we never even look it up in the object
     db).

However, both of these are errors in the object database,
and both will be detected if a client actually requests the
broken objects in question. So we are simply pushing the
verification away from the advertising stage, and down to
the actual fetching stage.

On my test repo with 120K refs, this drops the time to
advertise the refs from ~3.2s to ~2.0s.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-06 13:28:57 -08:00
Jeff King 926f1dd954 upload-pack: avoid parsing objects during ref advertisement
When we advertise a ref, the first thing we do is parse the
pointed-to object. This gives us two things:

  1. a "struct object" we can use to store flags

  2. the type of the object, so we know whether we need to
     dereference it as a tag

Instead, we can just use lookup_unknown_object to get an
object struct, and then fill in just the type field using
sha1_object_info (which, in the case of packed files, can
find the information without actually inflating the object
data).
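
A sketch of the cheaper lookup (real code would report a missing object
more gracefully):

  struct object *o = lookup_unknown_object(sha1);
  if (o->type == OBJ_NONE) {
	int type = sha1_object_info(sha1, NULL);	/* type only, no inflation */
	if (type < 0)
		die("object %s not found", sha1_to_hex(sha1));
	o->type = type;
  }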

This can save time if you have a large number of refs, and
the client isn't actually going to request those refs (e.g.,
because most of them are already up-to-date).

The downside is that we are no longer verifying objects that
we advertise by fully parsing them (however, we do still
know we actually have them, because sha1_object_info must
find them to get the type). While we might fail to detect a
corrupt object here, if the client actually fetches the
object, we will parse (and verify) it then.

On a repository with 120K refs, the advertisement portion of
upload-pack goes from ~3.4s to 3.2s (the failure to speed up
more is largely due to the fact that most of these refs are
tags, which need to be dereferenced to find the tag destination
anyway).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-06 13:28:55 -08:00
Ævar Arnfjörð Bjarmason 5e9637c629 i18n: add infrastructure for translating Git with gettext
Change the skeleton implementation of i18n in Git to one that can show
localized strings to users for our C, Shell and Perl programs using
either GNU libintl or the Solaris gettext implementation.

This new internationalization support is enabled by default. If
gettext isn't available, or if Git is compiled with
NO_GETTEXT=YesPlease, Git falls back on its current behavior of
showing interface messages in English. When using the autoconf script
we'll auto-detect if the gettext libraries are installed and act
appropriately.
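
For C code this follows the usual gettext convention: user-visible
strings are wrapped in _(), which degrades to an identity macro under
NO_GETTEXT (illustrative example; "dir" is a hypothetical variable):

  printf(_("Initialized empty Git repository in %s\n"), dir);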

This change is somewhat large because as well as adding a C, Shell and
Perl i18n interface we're adding a lot of tests for them, and for
those tests to work we need a skeleton PO file to actually test
translations. A minimal Icelandic translation is included for this
purpose. Icelandic includes multi-byte characters which makes it easy
to test various edge cases, and it's a language I happen to
understand.

The rest of the commit message goes into detail about various
sub-parts of this commit.

= Installation

Gettext .mo files will be installed and looked for in the standard
$(prefix)/share/locale path. GIT_TEXTDOMAINDIR can also be set to
override that, but that's only intended to be used to test Git itself.

= Perl

Perl code that's to be localized should use the new Git::I18N
module. It imports a __ function into the caller's package by default.

Instead of using the high level Locale::TextDomain interface I've
opted to use the low-level (equivalent to the C interface)
Locale::Messages module, which Locale::TextDomain itself uses.

Locale::TextDomain does a lot of redundant work we don't need, and
some of it would potentially introduce bugs. It tries to set the
$TEXTDOMAIN based on the package of the caller, and has its own
hardcoded paths where it'll search for messages.

I found it easier just to completely avoid it rather than try to
circumvent its behavior. In any case, this is an issue wholly
internal to Git::I18N. Its guts can be changed later if that's deemed
necessary.

See <AANLkTilYD_NyIZMyj9dHtVk-ylVBfvyxpCC7982LWnVd@mail.gmail.com> for
a further elaboration on this topic.

= Shell

Shell code that's to be localized should use the git-sh-i18n
library. It's basically just a wrapper for the system's gettext.sh.

If gettext.sh isn't available we'll fall back on gettext(1) if it's
available. The latter is available without the former on Solaris,
which has its own non-GNU gettext implementation. We also need to
emulate eval_gettext() there.

If neither is present we'll use a dumb printf(1) fall-through
wrapper.

= About libcharset.h and langinfo.h

We use libcharset to query the character set of the current locale if
it's available. I.e. we'll use it instead of nl_langinfo if
HAVE_LIBCHARSET_H is set.

The GNU gettext manual recommends using langinfo.h's
nl_langinfo(CODESET) to acquire the current character set, but on
systems that have libcharset.h's locale_charset() using the latter is
either saner, or the only option on those systems.

GNU and Solaris have a nl_langinfo(CODESET), FreeBSD can use either,
but MinGW and some others need to use libcharset.h's locale_charset()
instead.

= Credits

This patch is based on work by Jeff Epler <jepler@unpythonic.net> who
did the initial Makefile / C work, and a lot of comments from the Git
mailing list, including Jonathan Nieder, Jakub Narebski, Johannes
Sixt, Erik Faye-Lund, Peter Krefting, Junio C Hamano, Thomas Rast and
others.

[jc: squashed a small Makefile fix from Ramsay]

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-05 20:46:55 -08:00
Junio C Hamano 2e2e7e9dd0 Merge branch 'jc/fetch-verify'
* jc/fetch-verify:
  fetch: verify we have everything we need before updating our ref
  rev-list --verify-object
  list-objects: pass callback data to show_objects()
2011-10-05 12:36:20 -07:00
Junio C Hamano f817f2fbb5 Merge branch 'jc/traverse-commit-list'
* jc/traverse-commit-list:
  revision.c: update show_object_with_name() without using malloc()
  revision.c: add show_object_with_name() helper function
  rev-list: fix finish_object() call
2011-10-05 12:36:19 -07:00
Junio C Hamano 4947367267 list-objects: pass callback data to show_objects()
The traverse_commit_list() API takes two callback functions, one to show
commit objects, and the other to show other kinds of objects. Even though
the former has a callback data parameter, so that the callback does not
have to rely on global state, the latter does not.

Give the show_objects() callback the same callback data parameter.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-01 15:46:12 -07:00
Junio C Hamano b7fcd00715 Sync with 1.7.6.1
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-24 12:18:02 -07:00
Brian Harring 2a74532412 get_indexed_object can return NULL if nothing is in that slot; check for it
This fixes a segfault introduced by 051e400; via it, no longer able to
trigger the http/smartserv race.

Signed-off-by: Brian Harring <ferringb@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-24 10:50:33 -07:00
Junio C Hamano 91f175165a revision.c: add show_object_with_name() helper function
There are two copies of traverse_commit_list callback that show the object
name followed by pathname the object was found, to produce output similar
to "rev-list --objects".

Unify them.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-22 11:34:55 -07:00
Junio C Hamano 2f5cb6aa1e Merge branch 'jc/maint-smart-http-race-upload-pack'
* jc/maint-smart-http-race-upload-pack:
  helping smart-http/stateless-rpc fetch race
2011-08-17 17:35:58 -07:00
Junio C Hamano 051e4005a3 helping smart-http/stateless-rpc fetch race
A request to fetch from a client over smart HTTP protocol is served in
multiple steps. In the first round, the server side shows the set of refs
it has and their values, and the client picks from them and sends "I want
to fetch the history leading to these commits".

When the server tries to respond to this second request, its refs may have
progressed by a push from elsewhere. By design, we do not allow fetching
objects that are not at the tip of an advertised ref, and the server
rejects such a request. The client needs to try again, which is not ideal
especially for a busy server.

Teach upload-pack (which is the workhorse driven by git-daemon and smart
http server interface) that it is OK for a smart-http client to ask for
commits that are not at the tip of any advertised ref, as long as they are
reachable from advertised refs.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-08 15:33:28 -07:00
Josh Triplett 6b01ecfe22 ref namespaces: Support remote repositories via upload-pack and receive-pack
Change upload-pack and receive-pack to use the namespace-prefixed refs
when working with the repository, and use the unprefixed refs when
talking to the client, maintaining the masquerade.  This allows
clone, pull, fetch, and push to work with a suitably configured
GIT_NAMESPACE.

receive-pack advertises refs outside the current namespace as .have refs
(as it currently does for refs in alternates), so that the client can
use them to minimize data transfer but will otherwise ignore them.

With appropriate configuration, this also allows http-backend to expose
namespaces as multiple repositories with different paths.  This only
requires setting GIT_NAMESPACE, which http-backend passes through to
upload-pack and receive-pack.

Signed-off-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Jamey Sharp <jamey@minilop.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-11 09:35:38 -07:00
Junio C Hamano 982f6c90ee Merge branch 'jk/maint-upload-pack-shallow'
* jk/maint-upload-pack-shallow:
  upload-pack: start pack-objects before async rev-list
2011-04-27 11:36:42 -07:00
Jeff King b961219779 upload-pack: start pack-objects before async rev-list
In a pthread-enabled version of upload-pack, there's a race condition
that can cause a deadlock on the fflush(NULL) we call from run-command.

What happens is this:

  1. Upload-pack is informed we are doing a shallow clone.

  2. We call start_async() to spawn a thread that will generate rev-list
     results to feed to pack-objects. It gets a file descriptor to a
     pipe which will eventually hook to pack-objects.

  3. The rev-list thread uses fdopen to create a new output stream
     around the fd we gave it, called pack_pipe.

  4. The thread writes results to pack_pipe. Outside of our control,
     libc is doing locking on the stream. We keep writing until the OS
     pipe buffer is full, and then we block in write(), still holding
     the lock.

  5. The main thread now uses start_command to spawn pack-objects.
     Before forking, it calls fflush(NULL) to flush every stdio output
     buffer. It blocks trying to get the lock on pack_pipe.

And we have a deadlock. The thread will block until somebody starts
reading from the pipe. But nobody will read from the pipe until we
finish flushing to the pipe.

To fix this, we swap the start order: we start the
pack-objects reader first, and then the rev-list writer
after. Thus the problematic fflush(NULL) happens before we
even open the new file descriptor (and even if it didn't,
flushing should no longer block, as the reader at the end of
the pipe is now active).
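
For illustration only (not the literal patch), a rough sketch of the new
ordering; the run-command structures and field names approximate git's
internal API of that era, and pack_objects_argv is a made-up placeholder:

    struct child_process pack_objects;
    struct async rev_list;

    memset(&pack_objects, 0, sizeof(pack_objects));
    pack_objects.argv = pack_objects_argv;
    pack_objects.git_cmd = 1;
    pack_objects.in = -1;              /* ask for a pipe to its stdin */

    /* 1. start the reader first; the fflush(NULL) in start_command()
     *    now happens before the writer's stdio stream even exists */
    if (start_command(&pack_objects))
            die("git upload-pack: unable to fork git-pack-objects");

    /* 2. only then start the async rev-list writer, feeding that pipe */
    memset(&rev_list, 0, sizeof(rev_list));
    rev_list.proc = do_rev_list;
    rev_list.out = pack_objects.in;
    if (start_async(&rev_list))
            die("git upload-pack: unable to fork git-rev-list");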

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-04-06 14:38:47 -07:00
Junio C Hamano 2eee1393f3 Merge branches 'sp/maint-fetch-pack-stop-early' and 'sp/maint-upload-pack-stop-early'
* sp/maint-fetch-pack-stop-early:
  enable "no-done" extension only when fetching over smart-http

* sp/maint-upload-pack-stop-early:
  enable "no-done" extension only when serving over smart-http
2011-03-29 14:09:02 -07:00
Junio C Hamano 4e10cf9a17 Revert two "no-done" reverts
Last night I had to make these two emergency reverts, but now that we
have a better understanding of which part of the topic was broken, let's
get rid of the reverts and fix it correctly.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-29 12:29:10 -07:00
Junio C Hamano cf2ad8e641 enable "no-done" extension only when serving over smart-http
Do not advertise no-done capability when upload-pack is not serving over
smart-http, as there is no way for this server to know when it should stop
reading in-flight data from the client, even though it is necessary to
drain all the in-flight data in order to unblock the client.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
2011-03-29 12:21:28 -07:00
Junio C Hamano 4793b7e86d Revert "upload-pack: Implement no-done capability"
This reverts 3e63b21 (upload-pack: Implement no-done capability,
2011-03-14).  Together with 761ecf0 (fetch-pack: Implement no-done
capability, 2011-03-14) it seems to make the fetch-pack process out of
sync and makes it keep talking long after upload-pack stopped listening to
it, terminating the process with SIGPIPE.
2011-03-28 23:33:51 -07:00
Junio C Hamano f59bf09678 Merge branch 'sp/maint-upload-pack-stop-early'
* sp/maint-upload-pack-stop-early:
  upload-pack: Implement no-done capability
  upload-pack: More aggressively send 'ACK %s ready'
2011-03-22 21:38:06 -07:00
Shawn O. Pearce 3e63b21ace upload-pack: Implement no-done capability
If the client requests both multi_ack_detailed and no-done then
upload-pack is free to immediately send a PACK following its first
'ACK %s ready' message.  The upload-pack response actually winds
up being:

  ACK %s common
  ... (maybe more) ...
  ACK %s ready
  NAK
  ACK %s
  PACK.... the pack stream ....

For smart HTTP connections this saves one HTTP RPC, reducing
the overall latency for a trivial fetch.  For git:// and ssh://
a no-done option slightly reduces latency by removing one
server->client->server round-trip at the end of the common
ancestor negotiation.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-15 12:14:35 -07:00
Shawn O. Pearce 49bee717f7 upload-pack: More aggressively send 'ACK %s ready'
If a client is merely following the remote (and has not made any
new commits itself), all "have %s" lines sent by the client will be
common to the server.  As all lines are common upload-pack never
calls ok_to_give_up() and does not compute if it has a good cut
point in the commit graph.

Without this computation the following client is going to send all
tagged commits, as these were determined to be COMMON_REF during the
initial advertisement, but the client does not parse their history
to transitively pass the COMMON flag and empty its queue of commits.

For git.git with 339 commit tags, it takes clients 11 rounds of
negotiation to fully send all tagged commits and exhaust its queue
of things to send as common.  This is pretty slow for a client that
has not done any local development activity.

Force computing ok_to_give_up() and send "ACK %s ready" at the end
of the current round if this round only contained common objects
and ok_to_give_up() was therefore not called.  This may allow the
client to break early, avoiding transmission of the COMMON_REFs.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-14 17:27:25 -07:00
Jeff King bbc30f9963 add packet tracing debug code
This shows a trace of all packets coming in or out of a given
program. This can help with debugging object negotiation or
other protocol issues.

To keep the code changes simple, we operate at the lowest
level, meaning we don't necessarily understand what's in the
packets. The one exception is a packet starting with "PACK",
which causes us to skip that packet and turn off tracing
(since the gigantic pack data will not be interesting to
read, at least not in the trace format).
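
For illustration only (not the code added by this patch), a simplified
sketch of that special case:

    #include <stdio.h>
    #include <string.h>

    static int trace_enabled = 1;

    static void trace_packet(const char *prefix, const char *buf, size_t len)
    {
            if (!trace_enabled)
                    return;
            if (len >= 4 && !memcmp(buf, "PACK", 4)) {
                    trace_enabled = 0;      /* pack data follows: stop tracing */
                    fprintf(stderr, "%s: PACK ...\n", prefix);
                    return;
            }
            fprintf(stderr, "%s: %.*s\n", prefix, (int)len, buf);
    }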

We show both written and read packets. In the local case,
this may mean you will see packets twice (written by the
sender and read by the receiver). However, for cases where
the other end is remote, this allows you to see the full
conversation.

Packet tracing can be enabled with GIT_TRACE_PACKET=<foo>,
where <foo> takes the same arguments as GIT_TRACE.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-08 12:12:04 -08:00
Thiago Farina 47e44ed1dc commit: Add commit_list prefix in two function names.
Add commit_list prefix to insert_by_date function and to sort_by_date,
so it's clear that these functions refer to commit_list structure.

Signed-off-by: Thiago Farina <tfransosi@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-11-29 14:01:52 -08:00
Štěpán Němec 62b4698e55 Use angles for placeholders consistently
Signed-off-by: Štěpán Němec <stepnem@gmail.com>
Acked-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-10-08 12:29:52 -07:00
Thiago Farina 3cd474599f object.h: Add OBJECT_ARRAY_INIT macro and make use of it.
Signed-off-by: Thiago Farina <tfransosi@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-08-29 22:42:49 -07:00
Elijah Newren 9f9aa76130 upload-pack: Improve error message when bad ref requested
When printing an error message saying a ref was requested that we do not
have, only print that ref, rather than the ref and everything sent to us
on the same packet line (e.g. protocol support specifications).

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-08-02 15:31:59 -07:00
Nguyễn Thái Ngọc Duy 1e39d7deea upload-pack: remove unused "create_full_pack" code in do_rev_list
A bit of history in chronological order, the newest at bottom:

- 80ccaa7 (upload-pack: Move the revision walker into a separate function.)
   do_rev_list was introduced with create_full_pack argument

- 21edd3f (upload-pack: Run rev-list in an asynchronous function.)
   do_rev_list was now called by start_async, create_full_pack was
   passed by rev_list.data

- f0cea83 (Shift object enumeration out of upload-pack)
   rev_list.data was now zero permanently. Creating full pack was
   done by passing --all to pack-objects

- ae6a560 (run-command: support custom fd-set in async)
   rev_list.data = 0 was found to be redundant and was removed.

Get rid of the code as well, for less headache while reading do_rev_list.

[jc: noticed by Elijah Newren]

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-07-28 13:50:11 -07:00
Erik Faye-Lund ae6a5609c0 run-command: support custom fd-set in async
This patch adds the possibility to supply a set of non-0 file
descriptors for async process communication instead of the
default-created pipe.

Additionally, we now support bi-directional communication with the
async procedure, by giving the async function both read and write
file descriptors.

To retain compatibility and a similar "API feel" with start_command,
we require start_async callers to set .out = -1 to get a readable
file descriptor.  If either of .in or .out is 0, we supply no file
descriptor to the async process.

[sp: Note: Erik started this patch, and a huge bulk of it is
     his work.  All bugs were introduced later by Shawn.]

Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-02-05 20:57:22 -08:00
Junio C Hamano 4cb51a65a4 Sync with 1.6.5.6
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-12-10 16:20:59 -08:00
Junio C Hamano 1456b043fc Remove post-upload-hook
This hook runs after "git fetch" in the repository the objects are
fetched from as the user who fetched, and has security implications.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-12-10 12:21:40 -08:00
Junio C Hamano ef3a4fd670 Merge branch 'np/maint-sideband-favor-status' into maint
* np/maint-sideband-favor-status:
  give priority to progress messages
2009-12-03 13:50:24 -08:00
Junio C Hamano 905bf7742c Merge branch 'sp/smart-http'
* sp/smart-http: (37 commits)
  http-backend: Let gcc check the format of more printf-type functions.
  http-backend: Fix access beyond end of string.
  http-backend: Fix bad treatment of uintmax_t in Content-Length
  t5551-http-fetch: Work around broken Accept header in libcurl
  t5551-http-fetch: Work around some libcurl versions
  http-backend: Protect GIT_PROJECT_ROOT from /../ requests
  Git-aware CGI to provide dumb HTTP transport
  http-backend: Test configuration options
  http-backend: Use http.getanyfile to disable dumb HTTP serving
  test smart http fetch and push
  http tests: use /dumb/ URL prefix
  set httpd port before sourcing lib-httpd
  t5540-http-push: remove redundant fetches
  Smart HTTP fetch: gzip requests
  Smart fetch over HTTP: client side
  Smart push over HTTP: client side
  Discover refs via smart HTTP server when available
  http-backend: more explicit LocationMatch
  http-backend: add example for gitweb on same URL
  http-backend: use mod_alias instead of mod_rewrite
  ...

Conflicts:
	.gitignore
	remote-curl.c
2009-11-20 23:51:23 -08:00
Junio C Hamano e36e6c00cd Merge branch 'np/maint-sideband-favor-status'
* np/maint-sideband-favor-status:
  give priority to progress messages
2009-11-17 22:03:20 -08:00
Nicolas Pitre 6b59f51b31 give priority to progress messages
In theory it is possible for sideband channel #2 to be delayed if
pack data is quick to come up for sideband channel #1.  And because
data for channel #2 is read only 128 bytes at a time while pack data
is read 8192 bytes at a time, it is possible for many pack blocks to
be sent to the client before the progress message fifo is emptied,
making the situation even worse.  This would result in totally garbled
progress display on the client's console as local progress gets mixed
with partial remote progress lines.

Let's prevent such situations by giving transmission priority to
progress messages over pack data at all times.
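
For illustration only (not upload-pack.c verbatim), a skeleton of that
ordering; the two relay callbacks are hypothetical stand-ins for the
sideband writers and are assumed to consume the data they are handed:

    #include <poll.h>

    static void relay_with_progress_priority(int progress_fd, int packdata_fd,
                                             void (*relay_progress)(int fd),
                                             void (*relay_pack)(int fd))
    {
            struct pollfd pfd[2];

            pfd[0].fd = progress_fd;  pfd[0].events = POLLIN;  /* channel #2 */
            pfd[1].fd = packdata_fd;  pfd[1].events = POLLIN;  /* channel #1 */

            for (;;) {
                    if (poll(pfd, 2, -1) < 0)
                            return;
                    if (pfd[0].revents & POLLIN) {
                            relay_progress(progress_fd);
                            continue;       /* re-poll: progress always wins */
                    }
                    if (pfd[1].revents & POLLIN)
                            relay_pack(packdata_fd);
                    else
                            return;         /* EOF or error on either stream */
            }
    }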

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-13 14:39:25 -08:00
Shawn O. Pearce 42526b478e Add stateless RPC options to upload-pack, receive-pack
When --stateless-rpc is passed as a command line parameter to
upload-pack or receive-pack the programs now assume they may
perform only a single read-write cycle with stdin and stdout.
This fits with the HTTP POST request processing model where a
program may read the request, write a response, and must exit.

When --advertise-refs is passed as a command line parameter only
the initial ref advertisement is output, and the program exits
immediately.  This fits with the HTTP GET request model, where
no request content is received but a response must be produced.

HTTP headers and/or environment are not processed here, but
instead are assumed to be handled by the program invoking
either service backend.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04 17:58:14 -08:00
Shawn O. Pearce 78affc49de Add multi_ack_detailed capability to fetch-pack/upload-pack
When multi_ack_detailed is enabled the ACK continue messages returned
by the remote upload-pack are broken out to describe the different
states within the peer.  This permits the client to better understand
the server's in-memory state.

The fetch-pack/upload-pack protocol now looks like:

NAK
---------------------------------
  Always sent in response to "done" if there was no common base
  selected from the "have" lines (or no have lines were sent).

  * no multi_ack or multi_ack_detailed:

    Sent when the client has sent a pkt-line flush ("0000") and
    the server has not yet found a common base object.

  * either multi_ack or multi_ack_detailed:

    Always sent in response to a pkt-line flush.

ACK %s
-----------------------------------
  * no multi_ack or multi_ack_detailed:

    Sent in response to "have" when the object exists on the remote
    side and is therefore an object in common between the peers.
    The argument is the SHA-1 of the common object.

  * either multi_ack or multi_ack_detailed:

    Sent in response to "done" if there are common objects.
    The argument is the last SHA-1 determined to be common.

ACK %s continue
-----------------------------------
  * multi_ack only:

    Sent in response to "have".

    The remote side wants the client to consider this object as
    common, and immediately stop transmitting additional "have"
    lines for objects that are reachable from it.  The reason
    the client should stop is not given, but is one of the two
    cases below available under multi_ack_detailed.

ACK %s common
-----------------------------------
  * multi_ack_detailed only:

    Sent in response to "have".  Both sides have this object.
    Like with "ACK %s continue" above the client should stop
    sending have lines reachable for objects from the argument.

ACK %s ready
-----------------------------------
  * multi_ack_detailed only:

    Sent in response to "have".

    The client should stop transmitting objects which are reachable
    from the argument, and send "done" soon to get the objects.

    If the remote side has the specified object, it should
    first send an "ACK %s common" message prior to sending
    "ACK %s ready".

    Clients may still submit additional "have" lines if there are
    more side branches for the client to explore that might be added
    to the common set and reduce the number of objects to transfer.
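
For illustration, a hypothetical client-side helper classifying these
replies (not the actual fetch-pack code):

    #include <string.h>

    enum ack_type { NAK, ACK, ACK_CONTINUE, ACK_COMMON, ACK_READY };

    static enum ack_type classify_reply(const char *line)
    {
            if (!strcmp(line, "NAK"))
                    return NAK;
            if (!strncmp(line, "ACK ", 4) && strlen(line) >= 44) {
                    const char *rest = line + 44;   /* past "ACK " + 40-hex SHA-1 */
                    if (!strcmp(rest, " continue"))
                            return ACK_CONTINUE;
                    if (!strcmp(rest, " common"))
                            return ACK_COMMON;
                    if (!strcmp(rest, " ready"))
                            return ACK_READY;
            }
            return ACK;     /* plain "ACK %s" (or anything unrecognized) */
    }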

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30 19:20:54 -07:00
Jim Meyering 41698375ad don't dereference NULL upon fdopen failure
There were several unchecked use of fdopen(); replace them with xfdopen()
that checks and dies.

Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-13 01:32:20 -07:00
Jim Meyering 2b7ca830c6 use write_str_in_full helper to avoid literal string lengths
In 2d14d65 (Use a clearer style to issue commands to remote helpers,
2009-09-03) I happened to notice two changes like this:

-	write_in_full(helper->in, "list\n", 5);
+
+	strbuf_addstr(&buf, "list\n");
+	write_in_full(helper->in, buf.buf, buf.len);
+	strbuf_reset(&buf);

IMHO, it would be better to define a new function,

    static inline ssize_t write_str_in_full(int fd, const char *str)
    {
           return write_in_full(fd, str, strlen(str));
    }

and then use it like this:

-       strbuf_addstr(&buf, "list\n");
-       write_in_full(helper->in, buf.buf, buf.len);
-       strbuf_reset(&buf);
+       write_str_in_full(helper->in, "list\n");

Thus not requiring the added allocation, and still avoiding
the maintenance risk of literal string lengths.
These days, compilers are good enough that strlen("literal")
imposes no run-time cost.

Transformed via this:

    perl -pi -e \
        's/write_in_full\((.*?), (".*?"), \d+\)/write_str_in_full($1, $2)/'\
      $(git grep -l 'write_in_full.*"')

Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-13 01:31:10 -07:00
Junio C Hamano e4d1afbcf2 Merge branch 'jc/upload-pack-hook'
* jc/upload-pack-hook:
  upload-pack: feed "kind [clone|fetch]" to post-upload-pack hook
  upload-pack: add a trigger for post-upload-pack hook
2009-09-07 15:24:47 -07:00
Junio C Hamano 8e4384fd44 Merge branch 'np/maint-1.6.3-deepen'
* np/maint-1.6.3-deepen:
  pack-objects: free preferred base memory after usage
  make shallow repository deepening more network efficient
2009-09-07 15:23:50 -07:00
Nicolas Pitre 6523078b96 make shallow repository deepening more network efficient
First of all, I can't find any reason why thin pack generation is
explicitly disabled when dealing with a shallow repository.  The
possible delta base objects are collected from the edge commits which
are always obtained through history walking with the same shallow refs
as the client, Therefore the client is always going to have those base
objects available. So let's remove that restriction.

Then we can make shallow repository deepening much more efficient by
using the remote's unshallowed commits as edge commits to get preferred
base objects for thin pack generation.  On git.git, this makes the data
transfer for the deepening of a shallow repository from depth 1 to depth 2
around 134 KB instead of 3.68 MB.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-05 22:25:26 -07:00
Brian Gianforcaro eeefa7c90e Style fixes, add a space after if/for/while.
The majority of code in core git appears to use a single
space after if/for/while. This is an attempt to bring more
code to this standard. These are entirely cosmetic changes.

Signed-off-by: Brian Gianforcaro <b.gianfo@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-31 23:26:28 -07:00
Junio C Hamano 11cae066b2 upload-pack: feed "kind [clone|fetch]" to post-upload-pack hook
A request to clone the repository does not give any "have" but asks for
all the refs we offer with "want".  When a request does not ask to clone
the repository fully, but asks to fetch some refs into an empty
repository, it will not give any "have" but its "want" won't ask for all
the refs we offer.

If we suppose (and I would say this is a rather big if) that it makes
sense to distinguish these two cases, a hook cannot reliably do this
alone.  The hook can detect the lack of "have" and a bunch of "want", but there
is no direct way to tell if the other end asked for all refs we offered,
or merely most of them.

Between the time we talked with the other end and the time the hook got
called, we may have acquired more refs or lost some refs in the repository
by concurrent operations.  Given that we plan to introduce selective
advertisement of refs with a protocol extension, it would become even more
difficult for hooks to guess between these two cases.

This adds "kind [clone|fetch]" to hook's input, as a stable interface to
allow the hooks to tell these cases apart.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-28 22:39:24 -07:00
Junio C Hamano a8563ec851 upload-pack: add a trigger for post-upload-pack hook
After upload-pack successfully finishes its operation, post-upload-pack
hook can be called for logging purposes.

The hook is passed various pieces of information, one per line, from its
standard input.  Currently the following items can be fed to the hook, but
more types of information may be added in the future:

    want SHA-1::
        40-byte hexadecimal object name the client asked to include in the
        resulting pack.  Can occur one or more times in the input.

    have SHA-1::
        40-byte hexadecimal object name the client asked to exclude from
        the resulting pack, claiming to have them already.  Can occur zero
        or more times in the input.

    time float::
        Number of seconds spent for creating the packfile.

    size decimal::
        Size of the resulting packfile in bytes.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-28 22:39:17 -07:00
Junio C Hamano f00ecbe42b Merge branch 'cc/replace'
* cc/replace:
  t6050: check pushing something based on a replaced commit
  Documentation: add documentation for "git replace"
  Add git-replace to .gitignore
  builtin-replace: use "usage_msg_opt" to give better error messages
  parse-options: add new function "usage_msg_opt"
  builtin-replace: teach "git replace" to actually replace
  Add new "git replace" command
  environment: add global variable to disable replacement
  mktag: call "check_sha1_signature" with the replacement sha1
  replace_object: add a test case
  object: call "check_sha1_signature" with the replacement sha1
  sha1_file: add a "read_sha1_file_repl" function
  replace_object: add mechanism to replace objects found in "refs/replace/"
  refs: add a "for_each_replace_ref" function
2009-08-21 18:47:53 -07:00
Junio C Hamano 7d1b509812 Merge branch 'ne/futz-upload-pack'
* ne/futz-upload-pack:
  Shift object enumeration out of upload-pack

Conflicts:
	upload-pack.c
2009-08-05 12:38:29 -07:00
Johannes Sixt 9462e3f59c upload-pack: squelch progress indicator if client cannot see it
upload-pack runs pack-objects, which generates progress indicator output
on its stderr. If the client requests a sideband, this indicator is sent
to the client; but if it did not, then the progress is written to
upload-pack's own stderr.

If upload-pack is itself run from git-daemon (and if the client did not
request a sideband) the progress indicator never reaches the client and it
need not be generated in the first place. With this patch the progress
indicator is suppressed in this situation.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-18 11:38:43 -07:00
Nick Edelen f0cea83f63 Shift object enumeration out of upload-pack
Offload object enumeration in upload-pack to pack-objects, but fall
back on internal revision walker for shallow interaction.   Aside from
architecturally making more sense, this also leaves the door open for
pack-objects to employ a revision cache mechanism.  Test t5530 updated
in order to explicitly check both enumeration methods.

Signed-off-by: Nick Edelen <sirnot@gmail.com>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-09 23:49:31 -07:00
Christian Couder dae556bdb1 environment: add global variable to disable replacement
This new "read_replace_refs" global variable is set to 1 by
default, so that replace refs are used by default. But
reachability traversal and packing commands ("cmd_fsck",
"cmd_prune", "cmd_pack_objects", "upload_pack",
"cmd_unpack_objects") set it to 0, as they must work with the
original DAG.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-05-31 17:02:59 -07:00
Junio C Hamano 7d71be242d Merge branch 'lt/pack-object-memuse' into maint
* lt/pack-object-memuse:
  show_object(): push path_name() call further down
  process_{tree,blob}: show objects without buffering
2009-05-03 15:02:40 -07:00
Junio C Hamano 9824a388e5 Merge branch 'lt/pack-object-memuse'
* lt/pack-object-memuse:
  show_object(): push path_name() call further down
  process_{tree,blob}: show objects without buffering

Conflicts:
	builtin-pack-objects.c
	builtin-rev-list.c
	list-objects.c
	list-objects.h
	upload-pack.c
2009-04-18 14:46:17 -07:00
Linus Torvalds cf2ab916af show_object(): push path_name() call further down
In particular, pushing the "path_name()" call _into_ the show() function
would seem to allow

 - more clarity into who "owns" the name (ie now when we free the name in
   the show_object callback, it's because we generated it ourselves by
   calling path_name())

 - not calling path_name() at all, either because we don't care about the
   name in the first place, or because we are actually happy walking the
   linked list of "struct name_path *" and the last component.

Now, I didn't do that latter optimization, because it would require some
more coding, but especially looking at "builtin-pack-objects.c", we really
don't even want the whole pathname, we really would be better off with the
list of path components.

Why? We use that name for two things:
 - add_preferred_base_object(), which actually _wants_ to traverse the
   path, and now does it by looking for '/' characters!
 - for 'name_hash()', which only cares about the last 16 characters of a
   name, so again, generating the full name seems to be just unnecessary
   work.

Anyway, so I didn't look any closer at those things, but it did convince
me that the "show_object()" calling convention was crazy, and we're
actually better off doing _less_ in list-objects.c, and giving people
access to the internal data structures so that they can decide whether
they want to generate a path-name or not.

This patch does that, and then for people who did use the name (even if
they might do something more clever in the future), it just does the
straightforward "name = path_name(path, component); .. free(name);" thing.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-12 17:28:31 -07:00
Linus Torvalds 8d2dfc49b1 process_{tree,blob}: show objects without buffering
Here's a less trivial thing, and slightly more dubious one.

I was looking at that "struct object_array objects", and wondering why we
do that. I have honestly totally forgotten. Why not just call the "show()"
function as we encounter the objects? Rather than add the objects to the
object_array, and then at the very end going through the array and doing a
'show' on all, just do things more incrementally.

Now, there are possible downsides to this:

 - the "buffer using object_array" _can_ in theory result in at least
   better I-cache usage (two tight loops rather than one more spread out
   one). I don't think this is a real issue, but in theory..

 - this _does_ change the order of the objects printed. Instead of doing a
   "process_tree(revs, commit->tree, &objects, NULL, "");" in the loop
   over the commits (which puts all the root trees _first_ in the object
   list, this patch just adds them to the list of pending objects, and
   then we'll traverse them in that order (and thus show each root tree
   object together with the objects we discover under it)

   I _think_ the new ordering actually makes more sense, but the object
   ordering is actually a subtle thing when it comes to packing
   efficiency, so any change in order is going to have implications for
   packing. Good or bad, I dunno.

 - There may be some reason why we did it that odd way with the object
   array, that I have simply forgotten.

Anyway, now that we don't buffer up the objects before showing them
that may actually result in lower memory usage during that whole
traverse_commit_list() phase.

This is seriously not very deeply tested. It makes sense to me, it seems
to pass all the tests, it looks ok, but...

Does anybody remember why we did that "object_array" thing? It used to be
an "object_list" a long long time ago, but got changed into the array due
to better memory usage patterns (those linked lists of objects are
horrible from a memory allocation standpoint). But I wonder why we didn't
do this back then. Maybe there's a reason for it.

Or maybe there _used_ to be a reason, and no longer is.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-12 17:28:31 -07:00
Christian Couder 11c211fa06 list-objects: add "void *data" parameter to show functions
The goal of this patch is to get rid of the "static struct rev_info
revs" static variable in "builtin-rev-list.c".

To do that, we need to pass the revs to the "show_commit" function
in "builtin-rev-list.c" and this in turn means that the
"traverse_commit_list" function in "list-objects.c" must be passed
functions pointers to functions with 2 parameters instead of one.

So we have to change all the callers and all the functions passed
to "traverse_commit_list".

Anyway this makes the code more clean and more generic, so it
should be a good thing in the long run.
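
For illustration, a hypothetical caller of the resulting shape, where
per-call state travels through the void pointer instead of a file-scope
static:

    struct list_objects_state {
            struct rev_info revs;
            unsigned long commits_shown;
    };

    static void my_show_commit(struct commit *commit, void *data)
    {
            struct list_objects_state *st = data;
            st->commits_shown++;
            /* ... use st->revs here instead of a static struct rev_info ... */
    }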

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-07 22:12:38 -07:00
Benjamin Kramer fd13b21f52 Move local variables to narrower scopes
These weren't used outside and can be safely moved

Signed-off-by: Benjamin Kramer <benny.kra@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-07 20:52:23 -08:00
Jeff King 05ac6b34e2 improve missing repository error message
Certain remote commands, when asked to do something in a
particular directory that was not actually a git repository,
would say "unable to chdir or not a git archive". The
"chdir" bit is an unnecessary detail, and the term "git
archive" is much less common these days than "git repository".

So let's switch them all to:

  fatal: '%s' does not appear to be a git repository

Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-04 20:37:21 -08:00
Alexander Potashev 34263de026 Replace deprecated dashed git commands in usage
Signed-off-by: Alexander Potashev <aspotashev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-04 15:08:49 -08:00
Steffen Prohaska 2fb3f6db96 Add calls to git_extract_argv0_path() in programs that call git_config_*
Programs that use git_config need to find the global configuration.
When runtime prefix computation is enabled, this requires that
git_extract_argv0_path() is called early in the program's main().

This commit adds the necessary calls.

Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-01-26 00:26:05 -08:00
Junio C Hamano 7e44c93558 'git foo' program identifies itself without dash in die() messages
This is a mechanical conversion of all '*.c' files with:

	s/((?:die|error|warning)\("git)-(\S+:)/$1 $2/;

The result was manually inspected and no false positive was found.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-08-31 09:39:19 -07:00
Johannes Sixt e1464ca7bb Record the command invocation path early
We will need the command invocation path in system_path(). This path was
passed to setup_path(), but  system_path() can be called earlier, for
example via:

    main
      commit_pager_choice
        setup_pager
          git_config
            git_etc_gitconfig
              system_path

Therefore, we introduce git_set_argv0_path() and call it as soon as
possible.

Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-07-25 17:41:13 -07:00
Johannes Sixt 618ebe9ff9 Windows: Implement asynchronous functions as threads.
In upload-pack we must explicitly close the output channel of rev-list.
(On Unix, the channel is closed automatically because process that runs
rev-list terminates.)

Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
2008-06-26 08:45:08 +02:00
Shawn O. Pearce 348e390b17 Teach fetch-pack/upload-pack about --include-tag
The new protocol extension "include-tag" allows the client side
of the connection (fetch-pack) to request that the server side of the
native git protocol (upload-pack / pack-objects) use --include-tag
as it prepares the packfile, thus ensuring that an annotated tag object
will be included in the resulting packfile if the object it refers to
was also included into the packfile.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-03-04 23:28:14 -08:00
Shawn O. Pearce 49aaddd102 Teach upload-pack to log the received need lines to an fd
To facilitate testing and verification of the requests sent by
git-fetch to the remote side we permit logging the received packet
lines to the file descriptor specified in GIT_DEBUG_SEND_PACK has
been set.  Special start and end lines are included to indicate
the start and end of each connection.

  $ GIT_DEBUG_SEND_PACK=3 git fetch 3>UPLOAD_LOG
  $ cat UPLOAD_LOG
  #S
  want 8e10cf4e007ad7e003463c30c34b1050b039db78 multi_ack side-band-64k thin-pack ofs-delta
  want ddfa4a33562179aca1ace2bcc662244a17d0b503
  #E
  #S
  want 3253df4d1cf6fb138b52b1938473bcfec1483223 multi_ack side-band-64k thin-pack ofs-delta
  #E

From the above trace the first connection opened by git-fetch was to
download two refs (with values 8e and dd) and the second connection
was opened to automatically follow an annotated tag (32).

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-03-03 00:05:45 -08:00
Junio C Hamano eadbcd498a Merge branch 'mk/maint-parse-careful'
* mk/maint-parse-careful:
  receive-pack: use strict mode for unpacking objects
  index-pack: introduce checking mode
  unpack-objects: prevent writing of inconsistent objects
  unpack-object: cache for non written objects
  add common fsck error printing function
  builtin-fsck: move common object checking code to fsck.c
  builtin-fsck: reports missing parent commits
  Remove unused object-ref code
  builtin-fsck: move away from object-refs to fsck_walk
  add generic, type aware object chain walker

Conflicts:

	Makefile
	builtin-fsck.c
2008-03-02 15:11:07 -08:00
Martin Koegler 7914053ba9 Remove unused object-ref code
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-25 23:57:35 -08:00
Junio C Hamano ee4f06c0a6 Merge branch 'mk/maint-parse-careful'
* mk/maint-parse-careful:
  peel_onion: handle NULL
  check return value from parse_commit() in various functions
  parse_commit: don't fail, if object is NULL
  revision.c: handle tag->tagged == NULL
  reachable.c::process_tree/blob: check for NULL
  process_tag: handle tag->tagged == NULL
  check results of parse_commit in merge_bases
  list-objects.c::process_tree/blob: check for NULL
  reachable.c::add_one_tree: handle NULL from lookup_tree
  mark_blob/tree_uninteresting: check for NULL
  get_sha1_oneline: check return value of parse_object
  read_object_with_reference: don't read beyond the buffer
2008-02-18 20:56:01 -08:00
Martin Koegler dec38c8165 check return value from parse_commit() in various functions
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-18 20:49:13 -08:00
Martin Koegler 3d51e1b5b8 check return code of prepare_revision_walk
A failure in prepare_revision_walk can be caused by
a not parseable object.

Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-17 23:51:12 -08:00
Martin Koegler affeef12fb deref_tag: handle return value NULL
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-17 23:46:55 -08:00
Johannes Sixt 04b330551e upload-pack: Initialize the exec-path.
Since git-upload-pack has to spawn git-pack-objects, it has to make sure
that the latter can be found in the PATH. Without this patch an attempt
to clone or pull via ssh from a server fails if the git tools are not in
the standard PATH on the server even though git clone or git pull were
invoked with --upload-pack=/path/to/git-upload-pack.

Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-13 12:04:01 -08:00
Johannes Sixt 4c324c0050 upload-pack: Use finish_{command,async}() instead of waitpid().
upload-pack spawns two processes, rev-list and pack-objects, and carefully
monitors their status so that it can report failure to the remote end.
This change removes the complicated procedures on the grounds of the
following observations:

- If everything is OK, rev-list closes its output pipe end, upon which
  pack-objects (which reads from the pipe) sees EOF and terminates itself,
  closing its output (and error) pipes. upload-pack reads from both until
  it sees EOF in both. It collects the exit codes of the child processes
  (which indicate success) and terminates successfully.

- If rev-list sees an error, it closes its output and terminates with
  failure. pack-objects sees EOF in its input and terminates successfully.
  Again upload-pack reads its inputs until EOF. When it now collects
  the exit codes of its child processes, it notices the failure of rev-list
  and signals failure to the remote end.

- If pack-objects sees an error, it terminates with failure. Since this
  breaks the pipe to rev-list, rev-list is killed with SIGPIPE.
  upload-pack reads its input until EOF, then collects the exit codes of
  the child processes, notices their failures, and signals failure to the
  remote end.

- If upload-pack itself dies unexpectedly, pack-objects is killed with
  SIGPIPE, and subsequently also rev-list.

The upshot of this is that precise monitoring of child processes is not
required because both terminate if either one of them dies unexpectedly.
This allows us to use finish_command() and finish_async() instead of
an explicit waitpid(2) call.

The change is smaller than it looks because most of it only reduces the
indentation of a large part of the inner loop.

Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-11-05 22:47:28 -08:00
Johannes Sixt 21edd3f197 upload-pack: Run rev-list in an asynchronous function.
This gets rid of an explicit fork().

Since upload-pack has to coordinate two processes (rev-list and
pack-objects), we cannot use the normal finish_async(), but have to monitor
the process explicitly. Hence, there are no changes on this front.

Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2007-10-21 01:30:42 -04:00
Johannes Sixt 80ccaa78a8 upload-pack: Move the revision walker into a separate function.
This allows us later to use start_async() with this function, and at
the same time is a nice cleanup that makes a long function
(create_pack_file()) shorter.

Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2007-10-21 01:30:41 -04:00
Johannes Sixt cc41fa8da9 upload-pack: Use start_command() to run pack-objects in create_pack_file().
This gets rid of an explicit fork/exec.

Since upload-pack has to coordinate two processes (rev-list and
pack-objects), we cannot use the normal finish_command(), but have to
monitor the processes explicitly. Hence, the waitpid() call remains.

Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
2007-10-21 01:30:40 -04:00
Junio C Hamano 16befb8b7f Even more missing static
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-06-08 02:54:57 -07:00
Junio C Hamano a6080a0a44 War on whitespace
This uses "git-apply --whitespace=strip" to fix whitespace errors that have
crept in to our source files over time.  There are a few files that need
to have trailing whitespaces (most notably, test vectors).  The results
still passes the test, and build result in Documentation/ area is unchanged.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-06-07 00:04:01 -07:00
H. Peter Anvin 465b3518a9 git-upload-pack: make sure we close unused pipe ends
Right now, we don't close the read end of the pipe when git-upload-pack
runs git-pack-objects, so we hang forever (why don't we get SIGALRM?)
instead of dying with SIGPIPE if the latter dies, which seems to be the
norm if the client disconnects.

Thanks to Johannes Schindelin <Johannes.Schindelin@gmx.de> for
pointing out where this close() needed to go.

This patch has been tested on kernel.org for several weeks and appears
to resolve the problem of git-upload-pack processes hanging around
forever.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-27 17:05:12 -07:00
Junio C Hamano 3ddad98b74 Merge branch 'js/fetch-progress' (early part)
* 'js/fetch-progress' (early part):
  Fixup no-progress for fetch & clone
  fetch & clone: do not output progress when not on a tty

Conflicts:

	git-fetch.sh
2007-03-04 17:31:21 -08:00
Johannes Schindelin b0e908977e Fixup no-progress for fetch & clone
The intent of the commit 'fetch & clone: do not output progress when
not on a tty' was to make fetching and cloning less chatty when
output was not redirected (such as in a cron job).

However, there was a serious thinko in that commit. It assumed that
the client _and_ the server got this update at the same time. But
this is obviously not the case, and therefore upload-pack died on
seeing the option "--no-progress".

This patch fixes that issue by making it a protocol option. So, until
your server is updated, you still see the progress, but once the
server has this patch, it will be quiet.

A minor issue was also fixed: when cloning, the checkout did not
heed no_progress.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-24 00:26:18 -08:00
Junio C Hamano 599065a3bb prefixcmp(): fix-up mechanical conversion.
Previous step converted use of strncmp() with literal string
mechanically even when the result is only used as a boolean:

    if (!strncmp("foo", arg, 3)) ==> if (!(-prefixcmp(arg, "foo")))

This step manually cleans them up to read:

    if (!prefixcmp(arg, "foo"))
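
For reference, prefixcmp() boils down to a strncmp() with the prefix
length computed for you; roughly (the actual git-compat-util.h definition
may be spelled differently):

    #include <string.h>

    static inline int prefixcmp(const char *str, const char *prefix)
    {
            return strncmp(str, prefix, strlen(prefix));
    }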

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-20 22:03:15 -08:00
Junio C Hamano cc44c7655f Mechanical conversion to use prefixcmp()
This mechanically converts strncmp() to use prefixcmp(), but only when
the parameters match specific patterns, so that they can be verified
easily.  Leftover from this will be fixed in a separate step, including
idiotic conversions like

    if (!strncmp("foo", arg, 3))

  =>

    if (!(-prefixcmp(arg, "foo")))

This was done by using this script in px.perl

   #!/usr/bin/perl -i.bak -p
   if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
           s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
   }
   if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
           s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
   }

and running:

   $ git grep -l strncmp -- '*.c' | xargs perl px.perl

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-20 22:03:15 -08:00
Johannes Schindelin 83a5ad6126 fetch & clone: do not output progress when not on a tty
This adds the option "--no-progress" to fetch-pack and upload-pack,
and makes fetch and clone pass this option when stdout is not a tty.

While at documenting that option, also document --strict and --timeout
options for upload-pack.

Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-19 19:20:05 -08:00
Junio C Hamano a0022eebf3 shallow repository: disable unsupported operations for now.
We currently do not support fetching/cloning from a shallow repository
nor pushing into one.  Make sure these are not attempted so that we
do not have to worry about corrupting repositories needlessly.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-21 22:23:58 -08:00
Andy Whitcroft 93822c2239 short i/o: fix calls to write to use xwrite or write_in_full
We have a number of badly checked write() calls.  Often we are
expecting write() to write exactly the size we requested or fail;
this fails to handle interrupts or short writes.  Switch to using
the new write_in_full().  Otherwise we at a minimum need to check
for EINTR and EAGAIN; where this is appropriate, use xwrite().
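
For illustration, an approximate sketch of the write_in_full() semantics
described here (the real helper may differ in detail):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t write_in_full(int fd, const void *buf, size_t count)
    {
            const char *p = buf;
            size_t total = 0;

            while (total < count) {
                    ssize_t written = write(fd, p + total, count - total);
                    if (written < 0) {
                            if (errno == EINTR || errno == EAGAIN)
                                    continue;       /* retry interrupted writes */
                            return -1;
                    }
                    if (!written) {
                            errno = ENOSPC;         /* no progress possible */
                            return -1;
                    }
                    total += written;
            }
            return total;
    }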

Note, the changes to config handling are much larger and handled
in the next patch in the sequence.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-08 15:44:47 -08:00
Andy Whitcroft 93d26e4cb9 short i/o: fix calls to read to use xread or read_in_full
We have a number of badly checked read() calls.  Often we are
expecting read() to read exactly the size we requested or fail; this
fails to handle interrupts or short reads.  Add a read_in_full()
providing those semantics.  Otherwise we at a minimum need to check
for EINTR and EAGAIN; where this is appropriate, use xread().

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-08 15:44:47 -08:00
Junio C Hamano 37818d7db0 Merge branch 'master' into js/shallow
This is to adjust to:

  count-objects -v: show number of packs as well.

which will break a test in this series.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-27 02:43:46 -08:00
Junio C Hamano 85023577a8 simplify inclusion of system header files.
This is a mechanical clean-up of the way *.c files include
system header files.

 (1) sources under compat/, platform sha-1 implementations, and
     xdelta code are exempt from the following rules;

 (2) the first #include must be "git-compat-util.h" or one of
     our own header file that includes it first (e.g. config.h,
     builtin.h, pkt-line.h);

 (3) system headers that are included in "git-compat-util.h"
     need not be included in individual C source files.

 (4) "git-compat-util.h" does not have to include subsystem
     specific header files (e.g. expat.h).

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-20 09:51:35 -08:00
Alexandre Julliard 1f2de76981 upload-pack: Check for NOT_SHALLOW flag before sending a shallow to the client.
A commit may have been put on the shallow list, and then reached from
another branch and marked NOT_SHALLOW without being removed from the
list.

Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-24 15:42:50 -08:00
Junio C Hamano 176d45cb20 shallow clone: unparse and reparse an unshallowed commit
Otherwise we would not read the real parents from the commit
object.
2006-11-24 15:42:49 -08:00
Junio C Hamano c6702f4b95 Why didn't we mark want_obj as ~UNINTERESTING in the old code?
Is this something we would want to do regardless of shallow clone?
2006-11-24 15:42:49 -08:00
Johannes Schindelin f53514bc2d allow deepening of a shallow repository
Now, by saying "git fetch --depth <n> <repo>" you can deepen
a shallow repository.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-24 15:42:49 -08:00
Johannes Schindelin 016e6ccbe0 allow cloning a repository "shallowly"
By specifying a depth, you can now clone a repository such that
all fetched ancestor-chains' length is at most "depth". For example,
if the upstream repository has only 2 branches ("A" and "B"), which
are linear, and you specify depth 3, you will get A, A~1, A~2, A~3,
B, B~1, B~2, and B~3. The ends are automatically made shallow
commits.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-24 15:42:49 -08:00
Johannes Schindelin ed09aef06f support fetching into a shallow repository
A shallow commit is a commit which has parents, which in turn are
"grafted away", i.e. the commit appears as if it were a root.

Since these shallow commits should not be edited by the user, but
only by core git, they are recorded in the file $GIT_DIR/shallow.

A repository containing shallow commits is called shallow.

The advantage of a shallow repository is that even if the upstream
contains lots of history, your local (shallow) repository need not
occupy much disk space.

The disadvantage is that you might miss a merge base when pulling
some remote branch.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-24 15:42:49 -08:00
Johannes Schindelin 9b8dc263e1 upload-pack: no longer call rev-list
It is trivial to do now, and it is needed for the upcoming shallow
clone stuff.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-24 15:42:49 -08:00
Junio C Hamano 3fbe2d54d7 Merge branch 'jc/upload-pack'
* jc/upload-pack:
  upload-pack: stop the other side when they have more roots than we do.
2006-11-24 02:34:27 -08:00
Junio C Hamano 58a1e0e83b Merge branch 'lj/refs'
* lj/refs: (63 commits)
  Fix show-ref usagestring
  t3200: git-branch testsuite update
  sha1_name.c: avoid compilation warnings.
  Make git-branch a builtin
  ref-log: fix D/F conflict coming from deleted refs.
  git-revert with conflicts to behave as git-merge with conflicts
  core.logallrefupdates thinko-fix
  git-pack-refs --all
  core.logallrefupdates create new log file only for branch heads.
  Remove bashism from t3210-pack-refs.sh
  ref-log: allow ref@{count} syntax.
  pack-refs: call fflush before fsync.
  pack-refs: use lockfile as everybody else does.
  git-fetch: do not look into $GIT_DIR/refs to see if a tag exists.
  lock_ref_sha1_basic does not remove empty directories on BSD
  Do not create tag leading directories since git update-ref does it.
  Check that a tag exists using show-ref instead of looking for the ref file.
  Use git-update-ref to delete a tag instead of rm()ing the ref file.
  Fix refs.c;:repack_without_ref() clean-up path
  Clean up "git-branch.sh" and add remove recursive dir test cases.
  ...
2006-11-01 08:48:50 -08:00
Nicolas Pitre e4fe4b8ef7 let the GIT native protocol use offsets to delta base when possible
There is no reason not to always do this when both ends agree.
Therefore a client that can accept offsets to delta base always sends
the "ofs-delta" flag.  The server will stream a pack with or without
offset to delta base depending on whether that flag is provided or not
with no additional cost.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-27 00:12:00 -07:00
Junio C Hamano 8da1977554 Tell between packed, unpacked and symbolic refs.
This adds a "int *flag" parameter to resolve_ref() and makes
for_each_ref() family to call callback function with an extra
"int flag" parameter.  They are used to give two bits of
information (REF_ISSYMREF and REF_ISPACKED) about the ref.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-20 22:02:01 -07:00
Junio C Hamano cb5d709ff8 Add callback data to for_each_ref() family.
This is a long overdue fix to the API for for_each_ref() family
of functions.  It allows the callers to specify a callback data
pointer, so that the caller does not have to use static
variables to communicate with the callback function.

The updated for_each_ref() family takes a function of type

	int (*fn)(const char *, const unsigned char *, void *)

and a void pointer as parameters, and calls the function with
the name of the ref and its SHA-1 with the caller-supplied void
pointer as parameters.

The commit updates two callers, builtin-name-rev.c and
builtin-pack-refs.c as an example.
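
For illustration, a hypothetical caller using the new parameter:

    struct ref_count {
            int n;
    };

    static int count_one_ref(const char *refname, const unsigned char *sha1,
                             void *cb_data)
    {
            struct ref_count *c = cb_data;
            c->n++;
            return 0;
    }

    static int count_refs(void)
    {
            struct ref_count count = { 0 };
            for_each_ref(count_one_ref, &count);
            return count.n;
    }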

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-20 21:47:42 -07:00
Junio C Hamano 7f0f4fa437 Merge part of branch 'jc/upload-pack' 2006-09-17 18:42:33 -07:00
Junio C Hamano d47f3db75c Prepare larger packet buffer for upload-pack protocol.
The original side-band support added to the upload-pack protocol used the
default 1000-byte packet length.  The pkt-line format allows up to 64k, so
prepare the receiver for the maximum size, and have the uploader and
downloader negotiate if larger packet length is allowed.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-10 16:27:08 -07:00
Junio C Hamano 958c24b1b8 Move sideband server side support into reusable form.
The server side support; this is just the very low level, and the
caller needs to know which band it wants to send things out.

Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from b786552b67878c7780c50def4c069d46dc54efbe commit)
2006-09-10 13:36:50 -07:00
Shawn Pearce e702496e43 Convert memcpy(a,b,20) to hashcpy(a,b).
This abstracts away the size of the hash values when copying them
from memory location to memory location, much as the introduction
of hashcmp abstracted away hash value comparison.
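
In other words, roughly (the exact definition lives in the header next to
hashcmp):

    #include <string.h>

    static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
    {
            memcpy(sha_dst, sha_src, 20);
    }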

A few call sites were using char* rather than unsigned char*, so
I added the cast rather than opening hashcpy up to take void*.  This is a
reasonable tradeoff as most call sites already use unsigned char*
and the existing hashcmp is also declared to be unsigned char*.

[jc: Split the patch into the "master" part, to be followed by a
 patch for merge-recursive.c, which is not in "master" yet.

 Fixed the cast in the latter hunk to combine-diff.c which was
 wrong in the original.

 Also converted ones left-over in combine-diff.c, diff-lib.c and
 upload-pack.c ]

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-23 13:53:10 -07:00
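The helpers referred to above boil down to thin wrappers over memcpy/memcmp with the hash size fixed in one place; a sketch for the SHA-1 era:

    #include <string.h>

    /* Sketch of the abstraction: the 20-byte SHA-1 size lives in one place. */
    static inline void hashcpy(unsigned char *dst, const unsigned char *src)
    {
        memcpy(dst, src, 20);
    }

    static inline int hashcmp(const unsigned char *a, const unsigned char *b)
    {
        return memcmp(a, b, 20);
    }
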
David Rientjes 96f1e58f52 remove unnecessary initializations
[jc: I needed to hand merge the changes to the updated codebase,
 so the result needs to be checked.]

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-15 21:22:20 -07:00
David Rientjes 59076eba6e Make upload_pack void and remove conditional return.
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-14 18:59:03 -07:00
Junio C Hamano 937a515a15 upload-pack: stop the other side when they have more roots than we do.
When the downloader has more roots than we do, we will see many
"have"s that lead to roots we do not have and let the other
side walk all the way to those roots.  Tell them to stop sending the
branch's ancestors by sending a fake "ACK" when we already have a
common ancestor for the wanted refs.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-12 22:51:53 -07:00
Junio C Hamano c04c4e5708 upload-pack: minor clean-up in multi-ack logic
No changes to what it does, but separating the codepaths clearly
with an if ... else if ... chain makes it easier to follow.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-12 22:21:43 -07:00
Junio C Hamano 182a8dabd5 Merge branch 'jc/upload-pack' 2006-08-12 22:16:51 -07:00
Matthias Lederhofer 0d516adab8 upload-pack: fix timeout in create_pack_file
Signed-off-by: Matthias Lederhofer <matled@gmx.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-07-24 00:56:16 -07:00
Junio C Hamano 5cd060b56c Merge branch 'lt/unitype'
* lt/unitype:
  builtin-prune.c: forgot TYPE => OBJ changes.
  Remove TYPE_* constant macros and use object_type enums consistently.
2006-07-14 15:39:19 -07:00
Matthias Lederhofer 5f490ce03c upload-pack: ignore write errors to stderr
Signed-off-by: Matthias Lederhofer <matled@gmx.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-07-13 21:50:22 -07:00
Linus Torvalds 1974632c66 Remove TYPE_* constant macros and use object_type enums consistently.
This updates the type-enumeration constants introduced to reduce
the memory footprint of "struct object" to match the type bits
already used in the packfile format, by removing the former
(i.e. TYPE_* constant macros) and using the latter (i.e. enum
object_type) throughout the code for consistency.

Eventually we can stop passing around the "type strings"
entirely, and this will help - no confusion between two different
integer enumerations.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-07-12 23:18:03 -07:00
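The enum in question mirrors the numeric type codes already used by the packfile format; roughly (values shown for illustration, delta codes omitted):

    /* Sketch: object types shared between "struct object" and the
     * packfile format, so no translation between two enumerations
     * is needed. */
    enum object_type {
        OBJ_NONE = 0,
        OBJ_COMMIT = 1,
        OBJ_TREE = 2,
        OBJ_BLOB = 3,
        OBJ_TAG = 4
        /* the pack format additionally uses codes for delta objects */
    };
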
Junio C Hamano b1e9fff7e7 upload-pack: lift MAX_NEEDS and MAX_HAS limitation
By using the object_array data structure, lift the old limitation of
MAX_HAS/MAX_NEEDS.  While we are at it, rename the variables
that hold the objects we use to compute the common ancestor to match
the messages used at the protocol level.  What the other end has
and we also have are "have"s, and what the other end asks for are
"want"s.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-07-09 00:57:09 -07:00
Junio C Hamano 6ece0d3030 upload-pack: use object pointer not copy of sha1 to keep track of has/needs.
This does not change what happens in the command in any way, but
is to prepare for further work.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-07-09 00:57:09 -07:00
Junio C Hamano e3a5629813 upload-pack.c: <sys/poll.h> includes <ctype.h> on OpenBSD 3.8
Merlyn reports that <sys/poll.h> on OpenBSD 3.8 includes <ctype.h>
and having our custom ctype (done in git-compat-util.h which is
included via cache.h) makes upload-pack.c uncompilable.  Try to
work around it by including the system headers first.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-30 17:25:20 -07:00
Timo Hirvonen 554fe20d80 Make some strings const
Signed-off-by: Timo Hirvonen <tihirvon@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 03:24:37 -07:00
Junio C Hamano 583b7ea31b upload-pack/fetch-pack: support side-band communication
This implements a protocol extension between fetch-pack and
upload-pack to allow stderr stream from upload-pack (primarily
used for the progress bar display) to be passed back.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-21 02:50:32 -07:00
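The side-band extension multiplexes streams inside ordinary pkt-lines by prefixing each payload with a one-byte band number; a hedged sketch (band numbering as in the later protocol documentation: 1 = pack data, 2 = progress, 3 = error; the helper name is an assumption):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Sketch: send one side-band packet.  The payload is preceded by a
     * single band byte inside the pkt-line so the receiver can
     * demultiplex the streams. */
    static int send_sideband_packet(int fd, int band,
                                    const char *data, size_t len)
    {
        char head[5];

        if (len + 5 > 0xffff)
            return -1;  /* would not fit in one pkt-line */
        snprintf(head, sizeof(head), "%04x", (unsigned int)(len + 5));
        head[4] = (char)band;
        if (write(fd, head, 5) != 5 ||
            write(fd, data, len) != (ssize_t)len)
            return -1;
        return 0;
    }
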
Junio C Hamano 363b7817e0 upload-pack: prepare for sideband message support.
This does not implement sideband for propagating the status to
the downloader yet, but adds code to capture the standard error
output from the pack-objects process in preparation for sending
it off to the client when the protocol extension allows us to do
so.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-21 02:34:14 -07:00
Junio C Hamano b1c71b7281 upload-pack: avoid sending an incomplete pack upon failure
When the repository on the remote side is corrupted, rev-list
spawned from upload-pack would die with error, but pack-objects
that reads from the rev-list happily created a packfile that can
be unpacked by the downloader.  When this happens, the resulting
packfile is not corrupted and unpacks cleanly, but the list of
the objects contained in it is not what the protocol exchange
computed.

This update makes upload-pack monitor its subprocesses and, when
either of them dies with an error, send incomplete pack data to
the downloader to cause it to fail.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-21 02:34:14 -07:00
Linus Torvalds 885a86abe2 Shrink "struct object" a bit
This shrinks "struct object" by a small amount, by getting rid of the
"struct type *" pointer and replacing it with a 3-bit bitfield instead.

In addition, we merge the bitfields and the "flags" field, which
incidentally should also remove a useless 4-byte padding from the object
when in 64-bit mode.

Now, our "struct object" is still too damn large, but it's now less
obviously bloated, and of the remaining fields, only the "util" (which is
not used by most things) is clearly something that should be eventually
discarded.

This shrinks the "git-rev-list --all" memory use by about 2.5% on the
kernel archive (and, perhaps more importantly, on the larger mozilla
archive). That may not sound like much, but I suspect it's more on a
64-bit platform.

There are other remaining inefficiencies (the parent lists, for example,
probably have horrible malloc overhead), but this was pretty obvious.

Most of the patch is just changing the comparison of the "type" pointer
from one of the constant string pointers to the appropriate new TYPE_xxx
small integer constant.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-17 18:49:18 -07:00
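A hedged sketch of the layout change being described; the field names and widths are illustrative only (the struct is named object_sketch precisely because it is not the real definition):

    /* Sketch: the type is stored as a small bitfield packed with the
     * flag bits, instead of a pointer to a type string. */
    struct object_sketch {
        unsigned parsed : 1;
        unsigned used : 1;
        unsigned type : 3;   /* an object_type value fits in 3 bits */
        unsigned flags : 27;
        unsigned char sha1[20];
    };
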
Junio C Hamano 9201c70742 Const tightening.
Mark Wooding noticed there was a type mismatch warning in git.c; this
patch does things slightly differently (mostly tightening const) and
was what I was holding onto, waiting for the setup-revisions change
to be merged into the master branch.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-05 02:47:29 -08:00
Junio C Hamano f0b0af1b04 Merge branches 'jc/rev-list' and 'jc/pack-thin'
* jc/rev-list:
  rev-list --objects: use full pathname to help hashing.
  rev-list --objects-edge: remove duplicated edge commit output.
  rev-list --objects-edge

* jc/pack-thin:
  pack-objects: hash basename and direname a bit differently.
  pack-objects: allow "thin" packs to exceed depth limits
  pack-objects: use full pathname to help hashing with "thin" pack.
  pack-objects: thin pack micro-optimization.
  Use thin pack transfer in "git fetch".
  Add git-push --thin.
  send-pack --thin: use "thin pack" delta transfer.
  Thin pack - create packfile with missing delta base.

Conflicts:

	pack-objects.c (taking "next")
	send-pack.c (taking "next")
2006-02-24 21:55:23 -08:00
Junio C Hamano b19696c2e7 Use thin pack transfer in "git fetch".
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-20 00:38:39 -08:00
Carl Worth b5b16990f8 Prevent git-upload-pack segfault if object cannot be found
Signed-off-by: Carl Worth <cworth@cworth.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-17 16:20:51 -08:00
Michal Ostrowski 77cb17e940 Exec git programs without using PATH.
The git suite may not be in PATH (and thus programs such as
git-send-pack could not exec git-rev-list).  Thus there is a need for
logic that will locate these programs.  Modifying PATH is not
desirable as it results in behavior differing from the user's
intentions, as we may end up prepending "/usr/bin" to PATH.

- git C programs will use exec*_git_cmd() APIs to exec sub-commands.
- exec*_git_cmd() will execute a git program by searching for it in
  the following directories:
	1. --exec-path (as used by "git")
	2. The GIT_EXEC_PATH environment variable.
	3. $(gitexecdir) as set in Makefile (default value $(bindir)).
- git wrapper will modify PATH as before to enable shell scripts to
  invoke "git-foo" commands.

Ideally, shell scripts should use the git wrapper to become independent
of PATH, and then modifying PATH will not be necessary.

[jc: with minor updates after a brief review.]

Signed-off-by: Michal Ostrowski <mostrows@watson.ibm.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-01-13 16:49:01 -08:00
Andreas Ericsson 8d63013291 Server-side support for user-relative paths.
This patch basically just removes the redundant code from
{receive,upload}-pack.c in favour of the library code in path.c.

Signed-off-by: Andreas Ericsson <ae@op5.se>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-11-19 20:50:38 -08:00
Junio C Hamano 9534f40bc4 Be careful when dereferencing tags.
One caller of deref_tag() was not careful enough to make sure
what deref_tag() returned was not NULL (i.e. we found a tag
object that points at an object we do not have).  Fix it, and
warn about refs that point at such an incomplete tag where
needed.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-11-02 16:50:58 -08:00
Johannes Schindelin 1f5881bb5f fix multi_ack.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-28 22:57:01 -07:00
Johannes Schindelin 1bd8c8f00b git-upload-pack: Support the multi_ack protocol
This implements three things (trying very hard to be backwards
compatible):

It sends the "multi_ack" capability via the mechanism proposed by
Sergey Vlasov.

When the client sends "multi_ack" with at least one "want", multi_ack
is enabled.

When multi_ack is enabled, "continue" is appended to each "ACK" until
either the server can not store more refs, or "done" is received.

In contrast to the original protocol, as long as "continue" is sent,
flushes are answered by a "NAK" (not just until an "ACK" was sent),
and if "continue" was sent at least once, the last message is an
"ACK" without "continue".

Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-28 22:57:00 -07:00
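A rough sketch of the server-side decision described above, with function and parameter names invented for illustration (this is not the code of the commit):

    #include <stdio.h>

    /* Sketch: answer one "have" under multi_ack.  As long as the server
     * can still record common objects it keeps answering
     * "ACK <sha1> continue"; once it cannot, or once "done" has been
     * received, a plain "ACK" closes the negotiation. */
    static void ack_have(int multi_ack, int can_store_more, const char *hex)
    {
        if (multi_ack && can_store_more)
            printf("ACK %s continue\n", hex);
        else
            printf("ACK %s\n", hex);
    }
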
Johannes Schindelin f0243f26f6 git-upload-pack: More efficient usage of the has_sha1 array
This patch is based on Junio's proposal. It marks parents of common revs
so that they do not clutter up the has_sha1 array.

Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-28 22:56:59 -07:00
Johannes Schindelin b5c367f75c Fix cloning (memory corruption)
upload-pack would set create_full_pack=1 if nr_has==0, but would ask later
if nr_needs<MAX_NEEDS. If that proves true, it would ignore create_full_pack,
and arguments would be written into unreserved memory.

Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-26 11:52:19 -07:00
Junio C Hamano 565ebbf79f upload-pack: tighten request validation.
This makes sure that what the other end asks for is among what we
offered to give them.  Otherwise we would end up running
git-rev-list with 20-byte nonsense, only to find it either die
(because the object was not found) or waste time (because we
ended up serving that phony 'client').

Also avoid wasting the needs_sha1 pool on recording duplicates, and
detect cloning requests better.

[this used to be on top of Johannes fetch-pack enhancements,
 which we are rewinding it for further testing for now, so
 the commit is rebased.]

Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-25 23:53:28 -07:00
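As a hedged illustration of the kind of check described, with names and data structures that are hypothetical rather than taken from the commit:

    #include <string.h>

    /* Sketch: accept a "want <sha1-hex>" only if it matches something we
     * advertised.  'advertised' is a hypothetical array of hex object
     * names that were offered in the ref advertisement. */
    static int is_advertised(const char *want_hex,
                             const char *const *advertised, size_t nr)
    {
        size_t i;

        for (i = 0; i < nr; i++)
            if (!strcmp(want_hex, advertised[i]))
                return 1;
        return 0;  /* reject: never feed unverified names to rev-list */
    }
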
Junio C Hamano af2d3aa4d8 Revert recent fetch-pack/upload-pack updates.
Let's have it simmer a bit longer in the proposed updates branch
and shake the problems out.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-25 14:55:24 -07:00
Junio C Hamano 7efc8e4350 upload-pack: fix thinko in common-commit finder code.
The code to check if we have the object the other side has was bogus
(my fault).

Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-24 15:13:38 -07:00
Johannes Schindelin 0f8fdc3958 git-upload-pack: Support sending multiple ACK messages
The current fetch/upload protocol works like this:

- client sends revs it wants to have via "want" messages
- client sends a flush message (message with len 0)
- client sends revs it has via "have" messages
- after one window (32 revs), a flush is sent
- after each subsequent window, a flush is sent, and an ACK/NAK is received.
        (NAK means that the server does not have any of the transmitted revs;
         ACK also sends the sha1 of the rev the server has)
 - when the first ACK is received, client sends "done", and does not expect
        any further messages

One special case, though:

- if no ACK is received (only NAK's), and client runs out of revs to send,
        "done" is sent, and server sends just one more "NAK"

A smarter scheme, which actually has a chance to detect more than one
common rev, would be to send more than just one ACK. This patch implements
the server side of the following extension to the protocol:

- client sends at least one "want" message with "multi_ack" appended, like

        "want 1234567890123456789012345678901234567890 multi_ack"

- if the server understands that extension, it will send ACK messages for all
        revs it has, not just the first one

- server appends "continue" to the ACK messages like

        "ACK 1234567890123456789012345678901234567890 continue"

        until it has MAX_HAS-1 revs. In this manner, client knows when to
        stop sending revs by checking for the substring "continue" (and
        further knows that server understands multi_ack)

In this manner, the protocol stays backwards compatible, since enabling
the extension requires both that the client send "want ... multi_ack" and
that the server answer with "ACK ... continue".

Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-24 15:13:37 -07:00
Johannes Schindelin 794f9fe7db git-upload-pack: More efficient usage of the has_sha1 array
This patch is based on Junio's proposal. It marks parents of common revs
so that they do not clutter up the has_sha1 array.

Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-24 15:13:36 -07:00
Junio C Hamano 6b32884a09 upload-pack: Increase MAX_HAS.
A later round will further improve fetch-pack not to send useless "have"s,
but in the meantime, increase it to help upload-pack find more common
commits, as discussed on the list.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-22 02:28:27 -07:00
H. Peter Anvin 960deccb26 git-daemon: timeout, eliminate double DWIM
It turns out that not only did git-daemon do DWIM, but git-upload-pack
does as well.  This is bad; security checks have to be performed *after*
canonicalization, not before.

Additionally, the current git-daemon can be trivially DoSed by spewing
SYNs at the target port.

This patch adds a --strict option to git-upload-pack to disable all
DWIM, a --timeout option to git-daemon and git-upload-pack, and an
--init-timeout option to git-daemon (which is typically set to a much
lower value, since the initial request should come immediately from the
client.)

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-19 14:27:01 -07:00
Junio C Hamano f6b42a81fd Show peeled onion from upload-pack and server-info.
This updates git-ls-remote to show SHA1 names of objects that are
referred by tags, in the "ref^{}" notation.

This would make git-findtags (without -t flag) almost trivial.

    git-peek-remote . |
    sed -ne "s:^$target	"'refs/tags/\(.*\)^{}$:\1:p'

Also Pasky could do:

    git-ls-remote --tags $remote |
    sed -ne 's:\(	refs/tags/.*\)^{}$:\1:p'

to find out what object each of the remote tags refers to, and
if he has one locally, run "git-fetch $remote tag $tagname" to
automatically catch up with the upstream tags.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-15 11:23:41 -07:00
Junio C Hamano e091eb9325 upload-pack: Do not choke on too many heads request.
Cloning from a repository with more than 256 refs (heads and tags
included) will choke, because upload-pack has a built-in limit of
feeding not more than MAX_NEEDS (currently 256) heads to the underlying
git-rev-list.  This is a problem when cloning a repository with many
tags, like http://www.linux-mips.org/pub/scm/linux.git, which has 290+
tags.

This commit introduces a new flag, --all, to git-rev-list, to include
all refs in the repository.  The updated upload-pack detects requests that
ask for more than MAX_NEEDS refs, and sends everything back instead.

We may probably want to tweak the definitions of MAX_NEEDS and
MAX_HAS, but that is a separate topic.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2005-10-05 14:49:54 -07:00
Jason Riedy e72a7d45dc Replace unsetenv() and setenv() with older putenv().
Solaris 8 doesn't have the newer unsetenv() and setenv()
functions, so replace them with putenv().  The one use of
unsetenv() in fsck-cache.c now sets GIT_ALTERNATE_OBJECT_DIRECTORIES
to the empty string.  Every place that var
is used, NULLs are also replaced with empty strings, so
it's ok.

Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
2005-08-23 20:41:12 -07:00
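A minimal sketch of the substitution described; the environment variable name comes from the message above, while the wrapper function is illustrative:

    #include <stdlib.h>

    /* Sketch: on platforms without unsetenv()/setenv(), "unsetting" the
     * variable is approximated by pointing it at the empty string, which
     * the consumers treat the same as an unset variable. */
    static void clear_alt_odb(void)
    {
        putenv("GIT_ALTERNATE_OBJECT_DIRECTORIES=");
    }
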
Linus Torvalds 113b94751d Make "upload-pack" match git-fetch-pack usage
Do the default "try xyz.git xyz fails" thing for the directory we get
passed in.
2005-07-08 16:22:22 -07:00
Linus Torvalds 319aae2756 Increase the number of possible heads requested from git-upload-pack
Now that git-clone-pack exists, we actually have somebody requesting
more than just a single head in a pack.  So allow the Jeffs of this
world to clone things with tens of heads.
2005-07-05 17:19:20 -07:00
Linus Torvalds 723c31fea2 Add "git_path()" and "head_ref()" helper functions.
"git_path()" returns a static pathname pointer into the git directory
using a printf-like format specifier.

"head_ref()" works like "for_each_ref()", except for just the HEAD.
2005-07-05 11:31:32 -07:00
Linus Torvalds 75bfc6c232 Make git-fetch-pack actually do all the unpacking etc.
It returns the result SHA1 on stdout, so you can do

	remote=$(git-fetch-pack host:dir branchname)

and it will unpack the objects and "remote" will be the SHA1 name of the
branch on the other side.  You can then save that off, or merge it, or
whatever.
2005-07-04 16:35:13 -07:00
Linus Torvalds fb9040cc83 Make git-fetch-pack and git-upload-pack negotiate needs/haves fully
Now the only piece missing is actually generating the pack-file.
2005-07-04 15:29:17 -07:00
Linus Torvalds def88e9afb Commit first cut at "git-fetch-pack"
It's meant to be used by "git fetch" for the local and ssh case.

It doesn't actually do the fetching now, but it does discover the common
commit point.
2005-07-04 13:26:53 -07:00