# The default target of this Makefile is...
all::
# Import tree-wide shared Makefile behavior and libraries
include shared.mak
# == Makefile defines ==
#
# These defines change the behavior of the Makefile itself, but have
# no impact on what it builds:
#
# Define V=1 to have a more verbose compile.
#
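# For example, to see the full commands being run instead of the
# abbreviated "CC foo.o" style output (an illustrative invocation):
#
# make V=1 all
#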
# == Portability and optional library defines ==
#
# These defines indicate what Git can expect from the OS, what
# libraries are available etc. Much of this is auto-detected in
# config.mak.uname, or in configure.ac when using the optional "make
# configure && ./configure" (see INSTALL).
#
# Define SHELL_PATH to a POSIX shell if your /bin/sh is broken.
#
# Define SANE_TOOL_PATH to a colon-separated list of paths to prepend
# to PATH if your tools in /usr/bin are broken.
#
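# For example, on a hypothetical Solaris-like system whose POSIX tools
# live under /usr/xpg4/bin (the paths here are illustrative):
#
# make SHELL_PATH=/usr/xpg4/bin/sh SANE_TOOL_PATH=/usr/xpg4/bin
#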
# Define SOCKLEN_T to a suitable type (such as 'size_t') if your
# system headers do not define a socklen_t type.
#
# Define INLINE to a suitable substitute (such as '__inline' or '') if git
# fails to compile with errors about undefined inline functions or similar.
#
# Define SNPRINTF_RETURNS_BOGUS if you are on a system whose snprintf()
# or vsnprintf() returns -1 instead of the number of characters that
# would have been written to the final string if enough space had been
# available.
#
# Define FREAD_READS_DIRECTORIES if you are on a system which succeeds
# when attempting to read from an fopen'ed directory (or even to fopen
# it at all).
#
# Define OPEN_RETURNS_EINTR if your open() system call may return EINTR
# when a signal is received (as opposed to restarting).
#
# Define NO_OPENSSL environment variable if you do not have OpenSSL.
#
# Define HAVE_ALLOCA_H if you have working alloca(3) defined in that header.
#
# Define HAVE_PATHS_H if you have paths.h and want to use the default PATH
# it specifies.
#
# Define NO_D_TYPE_IN_DIRENT if your platform defines DT_UNKNOWN but lacks
# d_type in struct dirent (Cygwin 1.5, fixed in Cygwin 1.7).
#
# Define HAVE_STRINGS_H if you have strings.h and need it for strcasecmp.
#
# Define NO_STRCASESTR if you don't have strcasestr.
#
# Define NO_MEMMEM if you don't have memmem.
#
# Define NO_GETPAGESIZE if you don't have getpagesize.
#
# Define NO_STRLCPY if you don't have strlcpy.
#
# Define NO_STRTOUMAX if you don't have both strtoimax and strtoumax in the
# C library. If your compiler also does not support long long or does not have
# strtoull, define NO_STRTOULL.
#
# Define NO_SETENV if you don't have setenv in the C library.
#
# Define NO_UNSETENV if you don't have unsetenv in the C library.
#
# Define NO_MKDTEMP if you don't have mkdtemp in the C library.
#
# Define MKDIR_WO_TRAILING_SLASH if your mkdir() can't deal with trailing slash.
#
# Define NO_GECOS_IN_PWENT if you don't have pw_gecos in struct passwd
# in the C library.
#
# Define NO_LIBGEN_H if you don't have libgen.h.
#
# Define NEEDS_LIBGEN if your libgen needs -lgen when linking
#
# Define NO_SYS_SELECT_H if you don't have sys/select.h.
#
# Define NO_SYMLINK_HEAD if you never want .git/HEAD to be a symbolic link.
# Enable it on Windows. By default, symrefs are still used.
#
# Define NO_SVN_TESTS if you want to skip time-consuming SVN interoperability
# tests. These tests take up a significant amount of the total test time
# but are not needed unless you plan to talk to SVN repos.
#
# Define NO_FINK if you are building on Darwin/Mac OS X, have Fink
# installed in /sw, but don't want GIT to link against any libraries
# installed there. If defined you may specify your own (or Fink's)
# include directories and library directories by defining CFLAGS
# and LDFLAGS appropriately.
#
# Define NO_DARWIN_PORTS if you are building on Darwin/Mac OS X,
# have DarwinPorts installed in /opt/local, but don't want GIT to
# link against any libraries installed there. If defined you may
# specify your own (or DarwinPort's) include directories and
# library directories by defining CFLAGS and LDFLAGS appropriately.
#
# Define NO_APPLE_COMMON_CRYPTO if you are building on Darwin/Mac OS X
# and do not want to use Apple's CommonCrypto library. This allows you
# to provide your own OpenSSL library, for example from MacPorts.
#
# Define NEEDS_CRYPTO_WITH_SSL if you need -lcrypto when using -lssl (Darwin).
#
# Define NEEDS_SSL_WITH_CRYPTO if you need -lssl when using -lcrypto (Darwin).
#
# Define NEEDS_LIBICONV if linking with libc is not enough (Darwin).
#
# Define NEEDS_LIBINTL_BEFORE_LIBICONV if you need libintl before libiconv.
#
# Define NO_INTPTR_T if you don't have intptr_t or uintptr_t.
#
# Define NO_UINTMAX_T if you don't have uintmax_t.
#
# Define NEEDS_SOCKET if linking with libc is not enough (SunOS,
# Patrick Mauritz).
#
# Define NEEDS_RESOLV if linking with -lnsl and/or -lsocket is not enough.
# Notably on Solaris hstrerror resides in libresolv and on Solaris 7
# inet_ntop and inet_pton additionally reside there.
#
# Define NO_MMAP if you want to avoid mmap.
#
# Define MMAP_PREVENTS_DELETE if a file that is currently mmapped cannot be
# deleted or cannot be replaced using rename().
#
# Define NO_POLL_H if you don't have poll.h.
#
# Define NO_SYS_POLL_H if you don't have sys/poll.h.
#
# Define NO_POLL if you do not have or don't want to use poll().
# This also implies NO_POLL_H and NO_SYS_POLL_H.
#
# Define NEEDS_SYS_PARAM_H if you need to include sys/param.h to compile,
# *PLEASE* REPORT to git@vger.kernel.org if your platform needs this;
# we want to know more about the issue.
#
# Define NO_PTHREADS if you do not have or do not want to use Pthreads.
#
# Define NO_PREAD if you have a problem with pread() system call (e.g.
# cygwin1.dll before v1.5.22).
#
# Define NO_SETITIMER if you don't have setitimer()
#
# Define NO_STRUCT_ITIMERVAL if you don't have struct itimerval
# This also implies NO_SETITIMER
#
# Define NO_FAST_WORKING_DIRECTORY if accessing objects in pack files is
# generally faster on your platform than accessing the working directory.
#
# Define NO_TRUSTABLE_FILEMODE if your filesystem may claim to support
# the executable mode bit, but doesn't really do so.
#
# Define CSPRNG_METHOD to "arc4random" if your system has arc4random and
# arc4random_buf, "libbsd" if your system has those functions from libbsd,
# "getrandom" if your system has getrandom, "getentropy" if your system has
# getentropy, "rtlgenrandom" for RtlGenRandom (Windows only), or "openssl" if
# you want to use the OpenSSL CSPRNG. You may set multiple options separated
# by spaces, in which case a suitable option will be chosen. If unset or set
# to anything else, this defaults to using "/dev/urandom".
#
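# For example, to prefer arc4random, then getrandom, then OpenSSL
# (an illustrative config.mak line):
#
# CSPRNG_METHOD = arc4random getrandom openssl
#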
# Define NEEDS_MODE_TRANSLATION if your OS strays from the typical file type
# bits in mode values (e.g. z/OS defines I_SFMT to 0xFF000000 as opposed to the
# usual 0xF000).
#
# Define NO_IPV6 if you lack IPv6 support and getaddrinfo().
#
# Define NO_UNIX_SOCKETS if your system does not offer unix sockets.
#
# Define NO_SOCKADDR_STORAGE if your platform does not have struct
# sockaddr_storage.
#
# Define NO_ICONV if your libc does not properly support iconv.
#
# Define OLD_ICONV if your library has an old iconv(), where the second
# (input buffer pointer) parameter is declared with type (const char **).
#
# Define ICONV_OMITS_BOM if your iconv implementation does not write a
# byte-order mark (BOM) when writing UTF-16 or UTF-32 and always writes in
# big-endian format.
#
# Define NO_DEFLATE_BOUND if your zlib does not have deflateBound.
#
# Define NO_NORETURN if using buggy versions of gcc 4.6+ and profile feedback,
# as the compiler can crash (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=49299)
#
# Define USE_NSEC below if you want git to care about sub-second file mtimes
# and ctimes. Note that you need recent glibc (at least 2.2.4) for this. On
# Linux, kernel 2.6.11 or newer is required for reliable sub-second file times
# on file systems with exactly 1 ns or 1 s resolution. If you intend to use Git
# on other file systems (e.g. CEPH, CIFS, NTFS, UDF), don't enable USE_NSEC. See
# Documentation/technical/racy-git.txt for details.
#
# Define USE_ST_TIMESPEC if your "struct stat" uses "st_ctimespec" instead of
# "st_ctim"
#
# Define NO_NSEC if your "struct stat" does not have "st_ctim.tv_nsec"
# available. This automatically turns USE_NSEC off.
#
# Define USE_STDEV below if you want git to care about the underlying device
# change being considered an inode change from the update-index perspective.
#
# Define NO_ST_BLOCKS_IN_STRUCT_STAT if your platform does not have the
# st_blocks field that counts the on-disk footprint in 512-byte blocks.
#
# Define USE_ASCIIDOCTOR to use Asciidoctor instead of AsciiDoc to build the
# documentation.
#
# Define ASCIIDOCTOR_EXTENSIONS_LAB to point to the location of the Asciidoctor
# Extensions Lab if you have it available.
#
# Define PERL_PATH to the path of your Perl binary (usually /usr/bin/perl).
#
# Define NO_PERL if you do not want Perl scripts or libraries at all.
#
# Define NO_PERL_CPAN_FALLBACKS if you do not want to install bundled
# copies of CPAN modules that serve as a fallback in case the modules
# are not available on the system. This option is intended for
# distributions that want to use their packaged versions of Perl
# modules, instead of the fallbacks shipped with Git.
#
# Define NO_GITWEB if you do not want to build or install
# 'gitweb'. Note that defining NO_PERL currently also prevents
# installing gitweb, but does not affect whether it is built in the
# gitweb/ directory.
#
# Define PYTHON_PATH to the path of your Python binary (often /usr/bin/python
# but /usr/bin/python2.7 or /usr/bin/python3 on some platforms).
#
# Define NO_PYTHON if you do not want Python scripts or libraries at all.
#
# Define NO_TCLTK if you do not want Tcl/Tk GUI.
#
# The TCL_PATH variable governs the location of the Tcl interpreter
# used to optimize git-gui for your system. Only used if NO_TCLTK
# is not set. Defaults to the bare 'tclsh'.
#
# The TCLTK_PATH variable governs the location of the Tcl/Tk interpreter.
# If not set it defaults to the bare 'wish'. If it is set to the empty
# string then NO_TCLTK will be forced (this is used by configure script).
#
# Define INTERNAL_QSORT to use Git's implementation of qsort(), which
# is a simplified version of the merge sort used in glibc. This is
# recommended if Git triggers O(n^2) behavior in your platform's qsort().
#
# Define HAVE_ISO_QSORT_S if your platform provides a qsort_s() that's
# compatible with the one described in C11 Annex K.
#
# Define UNRELIABLE_FSTAT if your system's fstat(), on a file that has
# not yet been closed, does not return the same information that
# lstat() would return for the same file after it was closed.
#
# Define OBJECT_CREATION_USES_RENAMES if your operating system has problems
# when hardlinking a file to another name and unlinking the original file right
# away (some NTFS drivers seem to zero the contents in that scenario).
#
# Define INSTALL_SYMLINKS if you prefer to have everything that can be
# symlinked between bin/ and libexec/ use relative symlinks between
# the two. This option overrides NO_CROSS_DIRECTORY_HARDLINKS and
# NO_INSTALL_HARDLINKS, which in some cases symlink by indirection
# within the same directory; INSTALL_SYMLINKS instead always symlinks
# directly to the final target.
#
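# An illustrative symlinked install (the prefix here is hypothetical):
#
# make prefix=/opt/git INSTALL_SYMLINKS=YesPlease install
#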
# Define NO_CROSS_DIRECTORY_HARDLINKS if you plan to distribute the installed
# programs as a tar, where bin/ and libexec/ might be on different file systems.
#
# Define NO_INSTALL_HARDLINKS if you prefer to use either symbolic links or
# copies to install built-in git commands e.g. git-cat-file.
#
# Define SKIP_DASHED_BUILT_INS if you do not need the dashed versions of the
# built-ins to be linked/copied at all.
#
# Define USE_NED_ALLOCATOR if you want to replace the platform's default
# memory allocator with the nedmalloc allocator written by Niall Douglas.
#
# Define OVERRIDE_STRDUP to override the libc version of strdup(3).
# This is necessary when using a custom allocator in order to avoid
# crashes due to allocation and free working on different 'heaps'.
# It's defined automatically if USE_NED_ALLOCATOR is set.
#
# Define NO_REGEX if your C library lacks regex support with REG_STARTEND
# feature.
#
# Define USE_ENHANCED_BASIC_REGULAR_EXPRESSIONS if your C library provides
# the flag REG_ENHANCED and you'd like to use it to enable enhanced basic
# regular expressions.
#
# Define HAVE_DEV_TTY if your system can open /dev/tty to interact with the
# user.
#
# Define JSMIN to point to a JavaScript minifier that functions as
# a filter to have gitweb.js minified.
#
# Define CSSMIN to point to a CSS minifier in order to generate a minified
# version of gitweb.css.
#
# Define DEFAULT_PAGER to a sensible pager command (defaults to "less") if
# you want to use something different. The value will be interpreted by the
# shell at runtime when it is used.
#
# Define DEFAULT_EDITOR to a sensible editor command (defaults to "vi") if you
# want to use something different. The value will be interpreted by the shell
# if necessary when it is used. Examples:
#
# DEFAULT_EDITOR='~/bin/vi',
# DEFAULT_EDITOR='$GIT_FALLBACK_EDITOR',
# DEFAULT_EDITOR='"C:\Program Files\Vim\gvim.exe" --nofork'
#
# Define COMPUTE_HEADER_DEPENDENCIES to "yes" if you want dependencies on
# header files to be automatically computed, to avoid rebuilding objects when
# an unrelated header file changes. Define it to "no" to use the hard-coded
# dependency rules. The default is "auto", which means to use computed header
# dependencies if your compiler is detected to support it.
#
# Define NATIVE_CRLF if your platform uses CRLF for line endings.
#
# Define GIT_USER_AGENT if you want to change how git identifies itself during
# network interactions. The default is "git/$(GIT_VERSION)".
#
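# For example (the agent string here is made up):
#
# make GIT_USER_AGENT="git/custom-build" all
#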
# Define DEFAULT_HELP_FORMAT to "man", "info" or "html"
# (defaults to "man") if you want to have a different default when
# "git help" is called without a parameter specifying the format.
#
# Define GIT_TEST_INDEX_VERSION to 2, 3 or 4 to run the test suite
# with a different index file format version. If it isn't set, the
# index file format used is index-v[23].
#
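# For example, to run the test suite against index format v4
# (an illustrative invocation):
#
# make GIT_TEST_INDEX_VERSION=4 test
#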
# Define GIT_TEST_UTF8_LOCALE to the preferred UTF-8 locale for testing.
# If it isn't set, the test suite falls back to $LC_ALL or $LANG, or
# uses the first UTF-8 locale returned by "locale -a".
#
# Define HAVE_CLOCK_GETTIME if your platform has clock_gettime.
#
# Define HAVE_CLOCK_MONOTONIC if your platform has CLOCK_MONOTONIC.
#
# Define HAVE_SYNC_FILE_RANGE if your platform has sync_file_range.
#
# Define NEEDS_LIBRT if your platform requires linking with librt (glibc version
# before 2.17) for clock_gettime and CLOCK_MONOTONIC.
#
# Define HAVE_BSD_SYSCTL if your platform has a BSD-compatible sysctl function.
#
# Define HAVE_GETDELIM if your system has the getdelim() function.
#
# Define FILENO_IS_A_MACRO if fileno() is a macro, not a real function.
#
# Define NEED_ACCESS_ROOT_HANDLER if access() under root may succeed for X_OK
# even if execution permission isn't granted to any user.
#
# Define PAGER_ENV to SP-separated VAR=VAL pairs to define
# default environment variables to be passed when a pager is spawned, e.g.
#
# PAGER_ENV = LESS=FRX LV=-c
#
# to say "export LESS=FRX (and LV=-c) if the environment variable
# LESS (and LV) is not set, respectively".
#
# Define TEST_SHELL_PATH if you want to use a shell besides SHELL_PATH for
# running the test scripts (e.g., bash has better support for "set -x"
# tracing).
#
# When cross-compiling, define HOST_CPU as the canonical name of the CPU on
# which the built Git will run (for instance "x86_64").
#
# Define RUNTIME_PREFIX to configure Git to resolve its ancillary tooling and
# support files relative to the location of the runtime binary, rather than
# hard-coding them into the binary. Git installations built with RUNTIME_PREFIX
# can be moved to arbitrary filesystem locations. RUNTIME_PREFIX also causes
# Perl scripts to use a modified entry point header allowing them to resolve
# support files at runtime.
#
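# An illustrative relocatable build (the prefix is hypothetical; your
# platform must also provide one of the runtime path knobs below):
#
# make RUNTIME_PREFIX=YesPlease prefix=/opt/portable-git install
#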
# When using RUNTIME_PREFIX, define HAVE_BSD_KERN_PROC_SYSCTL if your platform
# supports the KERN_PROC BSD sysctl function.
#
# When using RUNTIME_PREFIX, define PROCFS_EXECUTABLE_PATH if your platform
# mounts a "procfs" filesystem capable of resolving the path of the current
# executable. If defined, this must be the canonical path for the "procfs"
# current executable path.
#
# When using RUNTIME_PREFIX, define HAVE_NS_GET_EXECUTABLE_PATH if your platform
# supports calling _NSGetExecutablePath to retrieve the path of the running
# executable.
#
# When using RUNTIME_PREFIX, define HAVE_WPGMPTR if your platform offers
# the global variable _wpgmptr containing the absolute path of the current
# executable (this is the case on Windows).
#
# INSTALL_STRIP can be set to "-s" to strip binaries during installation,
# if your $(INSTALL) command supports the option.
#
# Define GENERATE_COMPILATION_DATABASE to "yes" to generate JSON compilation
# database entries during compilation if your compiler supports it, using the
# `-MJ` flag. The JSON entries will be placed in the `compile_commands/`
# directory, and the JSON compilation database 'compile_commands.json' will be
# created at the root of the repository.
#
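# For example, to generate compile_commands.json for use with clangd
# (assuming your compiler, e.g. a recent Clang, supports -MJ):
#
# make GENERATE_COMPILATION_DATABASE=yes
#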
# If your platform supports a built-in fsmonitor backend, set
# FSMONITOR_DAEMON_BACKEND to the "<name>" of the corresponding
# `compat/fsmonitor/fsm-listen-<name>.c` and
# `compat/fsmonitor/fsm-health-<name>.c` files
# that implement the `fsm_listen__*()` and `fsm_health__*()` routines.
#
# If your platform has OS-specific ways to tell if a repo is incompatible with
# fsmonitor (whether the hook or IPC daemon version), set FSMONITOR_OS_SETTINGS
# to the "<name>" of the corresponding `compat/fsmonitor/fsm-settings-<name>.c`
# that implements the `fsm_os_settings__*()` routines.
#
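# For example, a platform shipping compat/fsmonitor/fsm-listen-darwin.c,
# fsm-health-darwin.c and fsm-settings-darwin.c would set (illustrative
# config.mak.uname lines):
#
# FSMONITOR_DAEMON_BACKEND = darwin
# FSMONITOR_OS_SETTINGS = darwin
#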
# Define LINK_FUZZ_PROGRAMS if you want `make all` to also build the fuzz test
# programs in oss-fuzz/.
#
# === Optional library: libintl ===
#
# Define NO_GETTEXT if you don't want Git output to be translated.
# A translated Git requires GNU libintl or another gettext implementation,
# plus libintl-perl at runtime.
#
# Define USE_GETTEXT_SCHEME and set it to 'fallthrough' if you don't trust
# the installed gettext translation of the shell scripts' output.
#
# Define HAVE_LIBCHARSET_H if you haven't set NO_GETTEXT and you can't
# trust langinfo.h's nl_langinfo(CODESET) function to return the
# current character set. GNU and Solaris have nl_langinfo(CODESET),
# FreeBSD can use either, but MinGW and some others need to use
# libcharset.h's locale_charset() instead.
#
# Define CHARSET_LIB to the library you need to link with in order to
# use the locale_charset() function. On some platforms this needs to
# be set to -lcharset, on others to -liconv.
#
# Define LIBC_CONTAINS_LIBINTL if your gettext implementation doesn't
# need -lintl when linking.
#
# Define NO_MSGFMT_EXTENDED_OPTIONS if your implementation of msgfmt
# doesn't support GNU extensions like --check and --statistics.
#
# === Optional library: libexpat ===
#
# Define NO_EXPAT if you do not have expat installed. git-http-push is
# not built, and you cannot push using http:// and https:// transports (dumb).
#
# Define EXPATDIR=/foo/bar if your expat header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
#
# Define EXPAT_NEEDS_XMLPARSE_H if you have an old version of expat (e.g.,
# 1.1 or 1.2) that provides xmlparse.h instead of expat.h.
#
# === Optional library: libcurl ===
#
# Define NO_CURL if you do not have libcurl installed. git-http-fetch and
# git-http-push are not built, and you cannot use http:// and https://
# transports (neither smart nor dumb).
#
# Define CURLDIR=/foo/bar if your curl header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
#
# Define CURL_CONFIG to curl's configuration program that prints information
# about the library (e.g., its version number). The default is 'curl-config'.
#
# Define CURL_LDFLAGS to specify flags that you need to link when using libcurl,
# if you do not want to rely on the libraries provided by CURL_CONFIG. The
# default value is a result of `curl-config --libs`. An example value for
# CURL_LDFLAGS is as follows:
#
# CURL_LDFLAGS=-lcurl
#
# === Optional library: libpcre2 ===
#
# Define USE_LIBPCRE if you have and want to use libpcre. Various
# commands such as log and grep offer runtime options to use
# Perl-compatible regular expressions instead of standard or extended
# POSIX regular expressions.
#
# Only libpcre version 2 is supported. USE_LIBPCRE2 is a synonym for
# USE_LIBPCRE, support for the old USE_LIBPCRE1 has been removed.
#
# Define LIBPCREDIR=/foo/bar if your PCRE header and library files are
# in /foo/bar/include and /foo/bar/lib directories.
#
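# For example, to build against a libpcre2 installed under a
# hypothetical non-standard prefix:
#
# make USE_LIBPCRE=YesPlease LIBPCREDIR=/opt/pcre2
#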
# == SHA-1 and SHA-256 defines ==
#
# === SHA-1 backend ===
#
# ==== Security ====
#
# Due to the SHAttered (https://shattered.io) attack vector on SHA-1
# it's strongly recommended to use the sha1collisiondetection
# counter-cryptanalysis library for SHA-1 hashing.
#
# If you know that you can trust the repository contents, or if
# potential SHA-1 attacks are otherwise mitigated, the other backends
# listed in "SHA-1 implementations" are faster than
# sha1collisiondetection.
#
# ==== Default SHA-1 backend ====
#
# If no *_SHA1 backend is picked, the first supported one listed in
# "SHA-1 implementations" will be picked.
#
# ==== Options common to all SHA-1 implementations ====
#
# Define SHA1_MAX_BLOCK_SIZE to limit the amount of data that will be hashed
# in one call to the platform's SHA1_Update(). e.g. APPLE_COMMON_CRYPTO
# wants 'SHA1_MAX_BLOCK_SIZE=1024L*1024L*1024L' defined.
#
# ==== SHA-1 implementations ====
#
# Define OPENSSL_SHA1 to link to the SHA-1 routines from the OpenSSL
# library.
#
# Define BLK_SHA1 to make use of optimized C SHA-1 routines bundled
# with git (in the block-sha1/ directory).
#
# Define APPLE_COMMON_CRYPTO_SHA1 to use Apple's CommonCrypto for
# SHA-1.
#
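# For example, to pick OpenSSL's SHA-1 routines instead of the default
# (see the security caveats above; an illustrative config.mak line):
#
# OPENSSL_SHA1 = YesPlease
#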
# If you don't enable any of the *_SHA1 settings in this section, Git
# will default to its built-in sha1collisiondetection library, which is
# a collision-detecting SHA-1 implementation. This is slower, but may
# detect attempted collision attacks.
#
# ==== Options for the sha1collisiondetection library ====
#
# Define DC_SHA1_EXTERNAL if you want to build/link git with an
# external SHA-1 collision-detection library. Without this option
# (i.e. the default behavior), git is built with its own built-in
# code (or submodule).
#
# Define DC_SHA1_SUBMODULE to use the
# sha1collisiondetection shipped as a submodule instead of the
# non-submodule copy in sha1dc/. This is an experimental option used
# by the git project to migrate to using sha1collisiondetection as a
# submodule.
#
# === SHA-256 backend ===
#
# ==== Security ====
#
# Unlike SHA-1 the SHA-256 algorithm does not suffer from any known
# vulnerabilities, so any implementation will do.
#
# ==== SHA-256 implementations ====
#
# Define OPENSSL_SHA256 to use the SHA-256 routines in OpenSSL.
#
# Define NETTLE_SHA256 to use the SHA-256 routines in libnettle.
#
# Define GCRYPT_SHA256 to use the SHA-256 routines in libgcrypt.
#
# If you don't enable any of the *_SHA256 settings in this section,
# Git will default to its built-in sha256 implementation.
#
# == DEVELOPER defines ==
#
# Define DEVELOPER to enable more compiler warnings. Compiler version
# and family are auto-detected, but can be overridden by defining
# COMPILER_FEATURES (see config.mak.dev). You can still set
# CFLAGS="..." in combination with DEVELOPER, whether that's
# for tweaking something unrelated (e.g. the optimization level), or for
# selectively overriding something that DEVELOPER or one of the DEVOPTS
# (see just below) brings in.
#
# When DEVELOPER is set, DEVOPTS can be used to control compiler
# options. This variable contains keywords separated by
# whitespace. The following keywords are recognized:
#
# no-error:
#
# suppresses the -Werror that implicitly comes with
# DEVELOPER=1. Useful for getting the full set of errors
# without immediately dying, or for logging them.
#
# extra-all:
#
# The DEVELOPER mode enables -Wextra with a few exceptions. By
# setting this flag the exceptions are removed, and all of
# -Wextra is used.
#
# no-pedantic:
#
# Disable -pedantic compilation.
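#
# For example, to get all of -Wextra without -Werror, combined with an
# unoptimized build for easier debugging (an illustrative invocation):
#
# make DEVELOPER=1 DEVOPTS="no-error extra-all" CFLAGS="-O0 -g"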
GIT-VERSION-FILE: FORCE
@$(SHELL_PATH) ./GIT-VERSION-GEN
-include GIT-VERSION-FILE
# Set our default configuration.
#
# Among the variables below, these:
# gitexecdir
# template_dir
# sysconfdir
# can be specified as a relative path some/where/else;
# this is interpreted as relative to $(prefix) and "git" built with
# RUNTIME_PREFIX flag will figure out (at runtime) where they are
# based on the path to the executable.
# Additionally, the following will be treated as relative by "git" if they
# begin with "$(prefix)/":
# mandir
# infodir
# htmldir
# localedir
# perllibdir
# This can help installing the suite in a relocatable way.
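#
# For example, an illustrative installation under /opt/git instead of
# the default $(HOME):
#
# make prefix=/opt/git all install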
prefix = $(HOME)
bindir = $(prefix)/bin
mandir = $(prefix)/share/man
infodir = $(prefix)/share/info
gitexecdir = libexec/git-core
mergetoolsdir = $(gitexecdir)/mergetools
sharedir = $(prefix)/share
gitwebdir = $(sharedir)/gitweb
gitwebstaticdir = $(gitwebdir)/static
perllibdir = $(sharedir)/perl5
localedir = $(sharedir)/locale
template_dir = share/git-core/templates
htmldir = $(prefix)/share/doc/git-doc
ETC_GITCONFIG = $(sysconfdir)/gitconfig
ETC_GITATTRIBUTES = $(sysconfdir)/gitattributes
lib = lib
# DESTDIR =
pathsep = :
bindir_relative = $(patsubst $(prefix)/%,%,$(bindir))
mandir_relative = $(patsubst $(prefix)/%,%,$(mandir))
infodir_relative = $(patsubst $(prefix)/%,%,$(infodir))
gitexecdir_relative = $(patsubst $(prefix)/%,%,$(gitexecdir))
localedir_relative = $(patsubst $(prefix)/%,%,$(localedir))
htmldir_relative = $(patsubst $(prefix)/%,%,$(htmldir))
perllibdir_relative = $(patsubst $(prefix)/%,%,$(perllibdir))
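# A worked example of the pattern above (assuming the defaults): with
# prefix = $(HOME) = /home/user and bindir = /home/user/bin,
# $(patsubst $(prefix)/%,%,$(bindir)) strips the leading "$(prefix)/"
# and yields bindir_relative = bin; a bindir outside $(prefix) would
# be passed through unchanged.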
export prefix bindir sharedir sysconfdir perllibdir localedir
# Set our default programs
CC = cc
AR = ar
RM = rm -f
DIFF = diff
TAR = tar
FIND = find
INSTALL = install
TCL_PATH = tclsh
TCLTK_PATH = wish
XGETTEXT = xgettext
MSGCAT = msgcat
MSGFMT = msgfmt
MSGMERGE = msgmerge
CURL_CONFIG = curl-config
GCOV = gcov
STRIP = strip
SPATCH = spatch
export TCL_PATH TCLTK_PATH
# Set our default LIBS variables
PTHREAD_LIBS = -lpthread
# Guard against environment variables
BUILTIN_OBJS =
BUILT_INS =
COMPAT_CFLAGS =
COMPAT_OBJS =
XDIFF_OBJS =
GENERATED_H =
EXTRA_CPPFLAGS =
FUZZ_OBJS =
FUZZ_PROGRAMS =
GIT_OBJS =
LIB_OBJS =
SCALAR_OBJS =
OBJECTS =
OTHER_PROGRAMS =
PROGRAM_OBJS =
PROGRAMS =
EXCLUDED_PROGRAMS =
SCRIPT_PERL =
SCRIPT_PYTHON =
SCRIPT_SH =
SCRIPT_LIB =
TEST_BUILTINS_OBJS =
TEST_OBJS =
TEST_PROGRAMS_NEED_X =
THIRD_PARTY_SOURCES =
UNIT_TEST_PROGRAMS =
UNIT_TEST_DIR = t/unit-tests
UNIT_TEST_BIN = $(UNIT_TEST_DIR)/bin
# Having this variable in your environment would break pipelines because
# you cause "cd" to echo its destination to stdout. It can also take
# scripts to unexpected places. If you like CDPATH, define it for your
# interactive shell sessions without exporting it.
unexport CDPATH
SCRIPT_SH += git-difftool--helper.sh
SCRIPT_SH += git-filter-branch.sh
SCRIPT_SH += git-merge-octopus.sh
SCRIPT_SH += git-merge-one-file.sh
SCRIPT_SH += git-merge-resolve.sh
SCRIPT_SH += git-mergetool.sh
SCRIPT_SH += git-quiltimport.sh
SCRIPT_SH += git-request-pull.sh
SCRIPT_SH += git-submodule.sh
SCRIPT_SH += git-web--browse.sh
SCRIPT_LIB += git-mergetool--lib
SCRIPT_LIB += git-sh-i18n
SCRIPT_LIB += git-sh-setup
SCRIPT_PERL += git-archimport.perl
SCRIPT_PERL += git-cvsexportcommit.perl
SCRIPT_PERL += git-cvsimport.perl
SCRIPT_PERL += git-cvsserver.perl
SCRIPT_PERL += git-send-email.perl
SCRIPT_PERL += git-svn.perl
SCRIPT_PYTHON += git-p4.py
# Generated files for scripts
SCRIPT_SH_GEN = $(patsubst %.sh,%,$(SCRIPT_SH))
SCRIPT_PERL_GEN = $(patsubst %.perl,%,$(SCRIPT_PERL))
SCRIPT_PYTHON_GEN = $(patsubst %.py,%,$(SCRIPT_PYTHON))
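# E.g. $(patsubst %.sh,%,$(SCRIPT_SH)) maps git-mergetool.sh to
# git-mergetool, and the .py rule maps git-p4.py to git-p4; these are
# the names the generated scripts are built and installed under.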
# Individual rules to allow e.g.
# "make -C ../.. SCRIPT_PERL=contrib/foo/bar.perl build-perl-script"
# from subdirectories like contrib/*/
.PHONY: build-perl-script build-sh-script build-python-script
build-perl-script: $(SCRIPT_PERL_GEN)
build-sh-script: $(SCRIPT_SH_GEN)
build-python-script: $(SCRIPT_PYTHON_GEN)
.PHONY: install-perl-script install-sh-script install-python-script
install-sh-script: $(SCRIPT_SH_GEN)
$(INSTALL) $^ '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
install-perl-script: $(SCRIPT_PERL_GEN)
$(INSTALL) $^ '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
install-python-script: $(SCRIPT_PYTHON_GEN)
$(INSTALL) $^ '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
.PHONY: clean-perl-script clean-sh-script clean-python-script
clean-sh-script:
$(RM) $(SCRIPT_SH_GEN)
clean-perl-script:
$(RM) $(SCRIPT_PERL_GEN)
clean-python-script:
$(RM) $(SCRIPT_PYTHON_GEN)
SCRIPTS = $(SCRIPT_SH_GEN) \
$(SCRIPT_PERL_GEN) \
$(SCRIPT_PYTHON_GEN) \
git-instaweb
ETAGS_TARGET = TAGS
# Empty...
EXTRA_PROGRAMS =
# ... and all the rest that could be moved out of bindir to gitexecdir
PROGRAMS += $(EXTRA_PROGRAMS)
PROGRAM_OBJS += daemon.o
PROGRAM_OBJS += http-backend.o
PROGRAM_OBJS += imap-send.o
PROGRAM_OBJS += sh-i18n--envsubst.o
PROGRAM_OBJS += shell.o
.PHONY: program-objs
program-objs: $(PROGRAM_OBJS)
# Binary suffix, set to .exe for Windows builds
X =
PROGRAMS += $(patsubst %.o,git-%$X,$(PROGRAM_OBJS))
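# E.g. daemon.o above expands to git-daemon$X, i.e. "git-daemon" on
# most platforms and "git-daemon.exe" on Windows where X = .exe.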
TEST_BUILTINS_OBJS += test-advise.o
TEST_BUILTINS_OBJS += test-bitmap.o
TEST_BUILTINS_OBJS += test-bloom.o
TEST_BUILTINS_OBJS += test-bundle-uri.o
TEST_BUILTINS_OBJS += test-cache-tree.o
TEST_BUILTINS_OBJS += test-chmtime.o
TEST_BUILTINS_OBJS += test-config.o
TEST_BUILTINS_OBJS += test-crontab.o
TEST_BUILTINS_OBJS += test-csprng.o
TEST_BUILTINS_OBJS += test-date.o
TEST_BUILTINS_OBJS += test-delete-gpgsig.o
TEST_BUILTINS_OBJS += test-delta.o
TEST_BUILTINS_OBJS += test-dir-iterator.o
TEST_BUILTINS_OBJS += test-drop-caches.o
TEST_BUILTINS_OBJS += test-dump-cache-tree.o
TEST_BUILTINS_OBJS += test-dump-fsmonitor.o
TEST_BUILTINS_OBJS += test-dump-split-index.o
TEST_BUILTINS_OBJS += test-dump-untracked-cache.o
TEST_BUILTINS_OBJS += test-env-helper.o
TEST_BUILTINS_OBJS += test-example-decorate.o
TEST_BUILTINS_OBJS += test-find-pack.o
TEST_BUILTINS_OBJS += test-fsmonitor-client.o
TEST_BUILTINS_OBJS += test-genrandom.o
TEST_BUILTINS_OBJS += test-genzeros.o
TEST_BUILTINS_OBJS += test-getcwd.o
TEST_BUILTINS_OBJS += test-hash-speed.o
TEST_BUILTINS_OBJS += test-hash.o
TEST_BUILTINS_OBJS += test-hashmap.o
TEST_BUILTINS_OBJS += test-hexdump.o
TEST_BUILTINS_OBJS += test-json-writer.o
TEST_BUILTINS_OBJS += test-lazy-init-name-hash.o
TEST_BUILTINS_OBJS += test-match-trees.o
TEST_BUILTINS_OBJS += test-mergesort.o
TEST_BUILTINS_OBJS += test-mktemp.o
TEST_BUILTINS_OBJS += test-oid-array.o
TEST_BUILTINS_OBJS += test-oidmap.o
TEST_BUILTINS_OBJS += test-oidtree.o
TEST_BUILTINS_OBJS += test-online-cpus.o
TEST_BUILTINS_OBJS += test-pack-mtimes.o
TEST_BUILTINS_OBJS += test-parse-options.o
TEST_BUILTINS_OBJS += test-parse-pathspec-file.o
TEST_BUILTINS_OBJS += test-partial-clone.o
TEST_BUILTINS_OBJS += test-path-utils.o
TEST_BUILTINS_OBJS += test-pcre2-config.o
TEST_BUILTINS_OBJS += test-pkt-line.o
TEST_BUILTINS_OBJS += test-proc-receive.o
TEST_BUILTINS_OBJS += test-progress.o
TEST_BUILTINS_OBJS += test-reach.o
TEST_BUILTINS_OBJS += test-read-cache.o
TEST_BUILTINS_OBJS += test-read-graph.o
TEST_BUILTINS_OBJS += test-read-midx.o
TEST_BUILTINS_OBJS += test-ref-store.o
TEST_BUILTINS_OBJS += test-reftable.o
TEST_BUILTINS_OBJS += test-regex.o
TEST_BUILTINS_OBJS += test-rot13-filter.o
TEST_BUILTINS_OBJS += test-repository.o
TEST_BUILTINS_OBJS += test-revision-walking.o
TEST_BUILTINS_OBJS += test-run-command.o
TEST_BUILTINS_OBJS += test-scrap-cache-tree.o
TEST_BUILTINS_OBJS += test-serve-v2.o
TEST_BUILTINS_OBJS += test-sha1.o
TEST_BUILTINS_OBJS += test-sha256.o
TEST_BUILTINS_OBJS += test-sigchain.o
TEST_BUILTINS_OBJS += test-simple-ipc.o
TEST_BUILTINS_OBJS += test-strcmp-offset.o
TEST_BUILTINS_OBJS += test-string-list.o
TEST_BUILTINS_OBJS += test-submodule-config.o
TEST_BUILTINS_OBJS += test-submodule-nested-repo-config.o
TEST_BUILTINS_OBJS += test-submodule.o
TEST_BUILTINS_OBJS += test-subprocess.o
TEST_BUILTINS_OBJS += test-trace2.o
TEST_BUILTINS_OBJS += test-truncate.o
TEST_BUILTINS_OBJS += test-urlmatch-normalization.o
TEST_BUILTINS_OBJS += test-userdiff.o
TEST_BUILTINS_OBJS += test-wildmatch.o
TEST_BUILTINS_OBJS += test-windows-named-pipe.o
TEST_BUILTINS_OBJS += test-write-cache.o
TEST_BUILTINS_OBJS += test-xml-encode.o
# Do not add more tests here unless they have extra dependencies. Add
# them in TEST_BUILTINS_OBJS above.
TEST_PROGRAMS_NEED_X += test-fake-ssh
TEST_PROGRAMS_NEED_X += test-tool
TEST_PROGRAMS = $(patsubst %,t/helper/%$X,$(TEST_PROGRAMS_NEED_X))
# List built-in command $C whose implementation cmd_$C() is not in
# builtin/$C.o but is linked in as part of some other command.
BUILT_INS += $(patsubst builtin/%.o,git-%$X,$(BUILTIN_OBJS))
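# E.g. a builtin/log.o entry in BUILTIN_OBJS yields git-log$X here;
# the hardcoded additions below cover commands (such as git-show)
# whose cmd_*() implementation lives in another builtin's object file.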
BUILT_INS += git-cherry$X
BUILT_INS += git-cherry-pick$X
BUILT_INS += git-format-patch$X
BUILT_INS += git-fsck-objects$X
BUILT_INS += git-init$X
BUILT_INS += git-maintenance$X
BUILT_INS += git-merge-subtree$X
BUILT_INS += git-restore$X
BUILT_INS += git-show$X
BUILT_INS += git-stage$X
BUILT_INS += git-status$X
BUILT_INS += git-switch$X
BUILT_INS += git-version$X
BUILT_INS += git-whatchanged$X
# what 'all' will build but not install in gitexecdir
OTHER_PROGRAMS += git$X
OTHER_PROGRAMS += scalar$X
# what test wrappers are needed and 'install' will install, in bindir
BINDIR_PROGRAMS_NEED_X += git
BINDIR_PROGRAMS_NEED_X += scalar
BINDIR_PROGRAMS_NEED_X += git-receive-pack
BINDIR_PROGRAMS_NEED_X += git-shell
BINDIR_PROGRAMS_NEED_X += git-upload-archive
BINDIR_PROGRAMS_NEED_X += git-upload-pack
BINDIR_PROGRAMS_NO_X += git-cvsserver
# Set paths to tools early so that they can be used for version tests.
ifndef SHELL_PATH
SHELL_PATH = /bin/sh
endif
ifndef PERL_PATH
PERL_PATH = /usr/bin/perl
endif
ifndef PYTHON_PATH
PYTHON_PATH = /usr/bin/python
endif
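# A minimal sketch of overriding these tool paths on a platform where
# the defaults are unsuitable (illustrative paths):
#
#	make SHELL_PATH=/usr/xpg4/bin/sh PERL_PATH=/usr/local/bin/perl \
#		PYTHON_PATH=/usr/bin/python3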
export PERL_PATH
export PYTHON_PATH
TEST_SHELL_PATH = $(SHELL_PATH)
LIB_FILE = libgit.a
XDIFF_LIB = xdiff/lib.a
REFTABLE_LIB = reftable/libreftable.a
REFTABLE_TEST_LIB = reftable/libreftable_test.a
GENERATED_H += command-list.h
GENERATED_H += config-list.h
GENERATED_H += hook-list.h
.PHONY: generated-hdrs
generated-hdrs: $(GENERATED_H)
## Exhaustive lists of our source files, either dynamically generated,
## or hardcoded.
SOURCES_CMD = ( \
git ls-files --deduplicate \
'*.[hcS]' \
'*.sh' \
':!*[tp][0-9][0-9][0-9][0-9]*' \
':!contrib' \
2>/dev/null || \
Makefile: use `git ls-files` to list header files, if possible In d85b0dff72 (Makefile: use `find` to determine static header dependencies, 2014-08-25), we switched from a static list of header files to a dynamically-generated one, asking `find` to enumerate them. Back in those days, we did not use `$(LIB_H)` by default, and many a `make` implementation seems smart enough not to run that `find` command in that case, so it was deemed okay to run `find` for special targets requiring this macro. However, as of ebb7baf02f (Makefile: add a hdr-check target, 2018-09-19), $(LIB_H) is part of a global rule and therefore must be expanded. Meaning: this `find` command has to be run upon every `make` invocation. In the presence of many a worktree, this can tax the developers' patience quite a bit. Even in the absence of worktrees or other untracked files and directories, the cost of I/O to generate that list of header files is simply a lot larger than a simple `git ls-files` call. Therefore, just like in 335339758c (Makefile: ask "ls-files" to list source files if available, 2011-10-18), we now prefer to use `git ls-files` to enumerate the header files to enumerating them via `find`, falling back to the latter if the former failed (which would be the case e.g. in a worktree that was extracted from a source .tar file rather than from a clone of Git's sources). This has one notable consequence: we no longer include `command-list.h` in `LIB_H`, as it is a generated file, not a tracked one, but that is easily worked around. Of the three sites that use `LIB_H`, two (`LOCALIZED_C` and `CHK_HDRS`) already handle generated headers separately. In the third, the computed-dependency fallback, we can just add in a reference to $(GENERATED_H). Likewise, we no longer include not-yet-tracked header files in `LIB_H`. Given the speed improvements, these consequences seem a comparably small price to pay. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Acked-by: Ramsay Jones <ramsay@ramsayjones.plus.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-04 14:47:06 +01:00
$(FIND) . \
\( -name .git -type d -prune \) \
-o \( -name '[tp][0-9][0-9][0-9][0-9]*' -prune \) \
-o \( -name contrib -type d -prune \) \
-o \( -name build -type d -prune \) \
-o \( -name .build -type d -prune \) \
-o \( -name 'trash*' -type d -prune \) \
-o \( -name '*.[hcS]' -type f -print \) \
-o \( -name '*.sh' -type f -print \) \
| sed -e 's|^\./||' \
)
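
# $(SOURCES_CMD) above emits the project's C, header, assembler and
# shell sources, one path per line.  The lists below carve that output
# up: generated headers are filtered out, then the remainder is split
# by suffix for compilation, header dependencies and the coccinelle
# static analysis targets.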
FOUND_SOURCE_FILES := $(filter-out $(GENERATED_H),$(shell $(SOURCES_CMD)))
FOUND_C_SOURCES = $(filter %.c,$(FOUND_SOURCE_FILES))
FOUND_H_SOURCES = $(filter %.h,$(FOUND_SOURCE_FILES))
COCCI_SOURCES = $(filter-out $(THIRD_PARTY_SOURCES),$(FOUND_C_SOURCES))
LIB_H = $(FOUND_H_SOURCES)
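
# LIB_OBJS lists the objects that are linked into libgit.a, one object
# per line and (mostly) in alphabetical order, so new entries are easy
# to place.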
LIB_OBJS += abspath.o
LIB_OBJS += add-interactive.o
LIB_OBJS += add-patch.o
LIB_OBJS += advice.o
LIB_OBJS += alias.o
LIB_OBJS += alloc.o
LIB_OBJS += apply.o
LIB_OBJS += archive-tar.o
LIB_OBJS += archive-zip.o
LIB_OBJS += archive.o
LIB_OBJS += attr.o
LIB_OBJS += base85.o
LIB_OBJS += bisect.o
LIB_OBJS += blame.o
LIB_OBJS += blob.o
LIB_OBJS += bloom.o
LIB_OBJS += branch.o
LIB_OBJS += bulk-checkin.o
LIB_OBJS += bundle-uri.o
LIB_OBJS += bundle.o
LIB_OBJS += cache-tree.o
LIB_OBJS += cbtree.o
LIB_OBJS += chdir-notify.o
LIB_OBJS += checkout.o
LIB_OBJS += chunk-format.o
LIB_OBJS += color.o
LIB_OBJS += column.o
LIB_OBJS += combine-diff.o
LIB_OBJS += commit-graph.o
LIB_OBJS += commit-reach.o
LIB_OBJS += commit.o
LIB_OBJS += compat/nonblock.o
LIB_OBJS += compat/obstack.o
LIB_OBJS += compat/terminal.o
LIB_OBJS += compat/zlib-uncompress2.o
LIB_OBJS += config.o
LIB_OBJS += connect.o
LIB_OBJS += connected.o
LIB_OBJS += convert.o
LIB_OBJS += copy.o
LIB_OBJS += credential.o
LIB_OBJS += csum-file.o
LIB_OBJS += ctype.o
LIB_OBJS += date.o
LIB_OBJS += decorate.o
LIB_OBJS += delta-islands.o
LIB_OBJS += diagnose.o
LIB_OBJS += diff-delta.o
LIB_OBJS += diff-merges.o
LIB_OBJS += diff-lib.o
LIB_OBJS += diff-no-index.o
LIB_OBJS += diff.o
LIB_OBJS += diffcore-break.o
LIB_OBJS += diffcore-delta.o
LIB_OBJS += diffcore-order.o
LIB_OBJS += diffcore-pickaxe.o
LIB_OBJS += diffcore-rename.o
LIB_OBJS += diffcore-rotate.o
LIB_OBJS += dir-iterator.o
LIB_OBJS += dir.o
LIB_OBJS += editor.o
LIB_OBJS += entry.o
LIB_OBJS += environment.o
LIB_OBJS += ewah/bitmap.o
LIB_OBJS += ewah/ewah_bitmap.o
LIB_OBJS += ewah/ewah_io.o
LIB_OBJS += ewah/ewah_rlw.o
LIB_OBJS += exec-cmd.o
LIB_OBJS += fetch-negotiator.o
LIB_OBJS += fetch-pack.o
LIB_OBJS += fmt-merge-msg.o
LIB_OBJS += fsck.o
LIB_OBJS += fsmonitor.o
LIB_OBJS += fsmonitor-ipc.o
LIB_OBJS += fsmonitor-settings.o
LIB_OBJS += gettext.o
LIB_OBJS += git-zlib.o
LIB_OBJS += gpg-interface.o
LIB_OBJS += graph.o
LIB_OBJS += grep.o
LIB_OBJS += hash-lookup.o
LIB_OBJS += hashmap.o
LIB_OBJS += help.o
LIB_OBJS += hex.o
LIB_OBJS += hex-ll.o
LIB_OBJS += hook.o
LIB_OBJS += ident.o
LIB_OBJS += json-writer.o
LIB_OBJS += kwset.o
LIB_OBJS += levenshtein.o
LIB_OBJS += line-log.o
LIB_OBJS += line-range.o
LIB_OBJS += linear-assignment.o
LIB_OBJS += list-objects-filter-options.o
LIB_OBJS += list-objects-filter.o
LIB_OBJS += list-objects.o
LIB_OBJS += lockfile.o
LIB_OBJS += log-tree.o
LIB_OBJS += loose.o
LIB_OBJS += ls-refs.o
LIB_OBJS += mailinfo.o
LIB_OBJS += mailmap.o
LIB_OBJS += match-trees.o
LIB_OBJS += mem-pool.o
LIB_OBJS += merge-blobs.o
LIB_OBJS += merge-ll.o
LIB_OBJS += merge-ort.o
LIB_OBJS += merge-ort-wrappers.o
LIB_OBJS += merge-recursive.o
LIB_OBJS += merge.o
LIB_OBJS += midx.o
LIB_OBJS += midx-write.o
LIB_OBJS += name-hash.o
LIB_OBJS += negotiator/default.o
LIB_OBJS += negotiator/noop.o
LIB_OBJS += negotiator/skipping.o
LIB_OBJS += notes-cache.o
LIB_OBJS += notes-merge.o
LIB_OBJS += notes-utils.o
LIB_OBJS += notes.o
LIB_OBJS += object-file-convert.o
LIB_OBJS += object-file.o
LIB_OBJS += object-name.o
LIB_OBJS += object.o
LIB_OBJS += oid-array.o
LIB_OBJS += oidmap.o
LIB_OBJS += oidset.o
LIB_OBJS += oidtree.o
LIB_OBJS += pack-bitmap-write.o
LIB_OBJS += pack-bitmap.o
LIB_OBJS += pack-check.o
LIB_OBJS += pack-mtimes.o
LIB_OBJS += pack-objects.o
LIB_OBJS += pack-revindex.o
LIB_OBJS += pack-write.o
LIB_OBJS += packfile.o
LIB_OBJS += pager.o
LIB_OBJS += parallel-checkout.o
LIB_OBJS += parse.o
LIB_OBJS += parse-options-cb.o
LIB_OBJS += parse-options.o
LIB_OBJS += patch-delta.o
LIB_OBJS += patch-ids.o
LIB_OBJS += path.o
LIB_OBJS += pathspec.o
LIB_OBJS += pkt-line.o
LIB_OBJS += preload-index.o
LIB_OBJS += pretty.o
LIB_OBJS += prio-queue.o
LIB_OBJS += progress.o
LIB_OBJS += promisor-remote.o
LIB_OBJS += prompt.o
LIB_OBJS += protocol.o
LIB_OBJS += protocol-caps.o
LIB_OBJS += prune-packed.o
LIB_OBJS += quote.o
LIB_OBJS += range-diff.o
LIB_OBJS += reachable.o
LIB_OBJS += read-cache.o
LIB_OBJS += rebase-interactive.o
LIB_OBJS += rebase.o
LIB_OBJS += ref-filter.o
LIB_OBJS += reflog-walk.o
LIB_OBJS += reflog.o
LIB_OBJS += refs.o
LIB_OBJS += refs/debug.o
LIB_OBJS += refs/files-backend.o
LIB_OBJS += refs/reftable-backend.o
LIB_OBJS += refs/iterator.o
LIB_OBJS += refs/packed-backend.o
LIB_OBJS += refs/ref-cache.o
LIB_OBJS += refspec.o
LIB_OBJS += remote.o
LIB_OBJS += replace-object.o
LIB_OBJS += repo-settings.o
LIB_OBJS += repository.o
LIB_OBJS += rerere.o
LIB_OBJS += reset.o
LIB_OBJS += resolve-undo.o
LIB_OBJS += revision.o
LIB_OBJS += run-command.o
LIB_OBJS += send-pack.o
LIB_OBJS += sequencer.o
LIB_OBJS += serve.o
LIB_OBJS += server-info.o
LIB_OBJS += setup.o
LIB_OBJS += shallow.o
LIB_OBJS += sideband.o
LIB_OBJS += sigchain.o
LIB_OBJS += sparse-index.o
LIB_OBJS += split-index.o
LIB_OBJS += stable-qsort.o
LIB_OBJS += statinfo.o
LIB_OBJS += strbuf.o
LIB_OBJS += streaming.o
LIB_OBJS += string-list.o
LIB_OBJS += strmap.o
LIB_OBJS += strvec.o
LIB_OBJS += sub-process.o
LIB_OBJS += submodule-config.o
LIB_OBJS += submodule.o
LIB_OBJS += symlinks.o
LIB_OBJS += tag.o
LIB_OBJS += tempfile.o
LIB_OBJS += thread-utils.o
LIB_OBJS += tmp-objdir.o
LIB_OBJS += trace.o
LIB_OBJS += trace2.o
LIB_OBJS += trace2/tr2_cfg.o
LIB_OBJS += trace2/tr2_cmd_name.o
LIB_OBJS += trace2/tr2_ctr.o
LIB_OBJS += trace2/tr2_dst.o
LIB_OBJS += trace2/tr2_sid.o
LIB_OBJS += trace2/tr2_sysenv.o
LIB_OBJS += trace2/tr2_tbuf.o
LIB_OBJS += trace2/tr2_tgt_event.o
LIB_OBJS += trace2/tr2_tgt_normal.o
LIB_OBJS += trace2/tr2_tgt_perf.o
LIB_OBJS += trace2/tr2_tls.o
LIB_OBJS += trace2/tr2_tmr.o
LIB_OBJS += trailer.o
LIB_OBJS += transport-helper.o
LIB_OBJS += transport.o
LIB_OBJS += tree-diff.o
LIB_OBJS += tree-walk.o
LIB_OBJS += tree.o
LIB_OBJS += unpack-trees.o
LIB_OBJS += upload-pack.o
LIB_OBJS += url.o
LIB_OBJS += urlmatch.o
LIB_OBJS += usage.o
LIB_OBJS += userdiff.o
LIB_OBJS += utf8.o
LIB_OBJS += varint.o
LIB_OBJS += version.o
LIB_OBJS += versioncmp.o
LIB_OBJS += walker.o
LIB_OBJS += wildmatch.o
LIB_OBJS += worktree.o
LIB_OBJS += wrapper.o
LIB_OBJS += write-or-die.o
LIB_OBJS += ws.o
LIB_OBJS += wt-status.o
LIB_OBJS += xdiff-interface.o
BUILTIN_OBJS += builtin/add.o
BUILTIN_OBJS += builtin/am.o
BUILTIN_OBJS += builtin/annotate.o
BUILTIN_OBJS += builtin/apply.o
BUILTIN_OBJS += builtin/archive.o
BUILTIN_OBJS += builtin/bisect.o
BUILTIN_OBJS += builtin/blame.o
BUILTIN_OBJS += builtin/branch.o
BUILTIN_OBJS += builtin/bugreport.o
BUILTIN_OBJS += builtin/bundle.o
BUILTIN_OBJS += builtin/cat-file.o
BUILTIN_OBJS += builtin/check-attr.o
BUILTIN_OBJS += builtin/check-ignore.o
BUILTIN_OBJS += builtin/check-mailmap.o
BUILTIN_OBJS += builtin/check-ref-format.o
BUILTIN_OBJS += builtin/checkout--worker.o
BUILTIN_OBJS += builtin/checkout-index.o
BUILTIN_OBJS += builtin/checkout.o
BUILTIN_OBJS += builtin/clean.o
BUILTIN_OBJS += builtin/clone.o
BUILTIN_OBJS += builtin/column.o
BUILTIN_OBJS += builtin/commit-graph.o
BUILTIN_OBJS += builtin/commit-tree.o
BUILTIN_OBJS += builtin/commit.o
BUILTIN_OBJS += builtin/config.o
BUILTIN_OBJS += builtin/count-objects.o
BUILTIN_OBJS += builtin/credential-cache--daemon.o
BUILTIN_OBJS += builtin/credential-cache.o
BUILTIN_OBJS += builtin/credential-store.o
BUILTIN_OBJS += builtin/credential.o
BUILTIN_OBJS += builtin/describe.o
BUILTIN_OBJS += builtin/diagnose.o
BUILTIN_OBJS += builtin/diff-files.o
BUILTIN_OBJS += builtin/diff-index.o
BUILTIN_OBJS += builtin/diff-tree.o
BUILTIN_OBJS += builtin/diff.o
BUILTIN_OBJS += builtin/difftool.o
BUILTIN_OBJS += builtin/fast-export.o
BUILTIN_OBJS += builtin/fast-import.o
BUILTIN_OBJS += builtin/fetch-pack.o
BUILTIN_OBJS += builtin/fetch.o
BUILTIN_OBJS += builtin/fmt-merge-msg.o
BUILTIN_OBJS += builtin/for-each-ref.o
BUILTIN_OBJS += builtin/for-each-repo.o
BUILTIN_OBJS += builtin/fsck.o
BUILTIN_OBJS += builtin/fsmonitor--daemon.o
BUILTIN_OBJS += builtin/gc.o
BUILTIN_OBJS += builtin/get-tar-commit-id.o
BUILTIN_OBJS += builtin/grep.o
BUILTIN_OBJS += builtin/hash-object.o
BUILTIN_OBJS += builtin/help.o
BUILTIN_OBJS += builtin/hook.o
BUILTIN_OBJS += builtin/index-pack.o
BUILTIN_OBJS += builtin/init-db.o
BUILTIN_OBJS += builtin/interpret-trailers.o
BUILTIN_OBJS += builtin/log.o
BUILTIN_OBJS += builtin/ls-files.o
BUILTIN_OBJS += builtin/ls-remote.o
BUILTIN_OBJS += builtin/ls-tree.o
BUILTIN_OBJS += builtin/mailinfo.o
BUILTIN_OBJS += builtin/mailsplit.o
BUILTIN_OBJS += builtin/merge-base.o
BUILTIN_OBJS += builtin/merge-file.o
BUILTIN_OBJS += builtin/merge-index.o
BUILTIN_OBJS += builtin/merge-ours.o
BUILTIN_OBJS += builtin/merge-recursive.o
BUILTIN_OBJS += builtin/merge-tree.o
BUILTIN_OBJS += builtin/merge.o
BUILTIN_OBJS += builtin/mktag.o
BUILTIN_OBJS += builtin/mktree.o
BUILTIN_OBJS += builtin/multi-pack-index.o
BUILTIN_OBJS += builtin/mv.o
BUILTIN_OBJS += builtin/name-rev.o
BUILTIN_OBJS += builtin/notes.o
BUILTIN_OBJS += builtin/pack-objects.o
BUILTIN_OBJS += builtin/pack-redundant.o
BUILTIN_OBJS += builtin/pack-refs.o
BUILTIN_OBJS += builtin/patch-id.o
BUILTIN_OBJS += builtin/prune-packed.o
BUILTIN_OBJS += builtin/prune.o
BUILTIN_OBJS += builtin/pull.o
BUILTIN_OBJS += builtin/push.o
BUILTIN_OBJS += builtin/range-diff.o
BUILTIN_OBJS += builtin/read-tree.o
BUILTIN_OBJS += builtin/rebase.o
BUILTIN_OBJS += builtin/receive-pack.o
BUILTIN_OBJS += builtin/reflog.o
BUILTIN_OBJS += builtin/remote-ext.o
BUILTIN_OBJS += builtin/remote-fd.o
BUILTIN_OBJS += builtin/remote.o
BUILTIN_OBJS += builtin/repack.o
BUILTIN_OBJS += builtin/replace.o
BUILTIN_OBJS += builtin/replay.o
BUILTIN_OBJS += builtin/rerere.o
BUILTIN_OBJS += builtin/reset.o
BUILTIN_OBJS += builtin/rev-list.o
BUILTIN_OBJS += builtin/rev-parse.o
BUILTIN_OBJS += builtin/revert.o
BUILTIN_OBJS += builtin/rm.o
BUILTIN_OBJS += builtin/send-pack.o
BUILTIN_OBJS += builtin/shortlog.o
BUILTIN_OBJS += builtin/show-branch.o
BUILTIN_OBJS += builtin/show-index.o
BUILTIN_OBJS += builtin/show-ref.o
BUILTIN_OBJS += builtin/sparse-checkout.o
BUILTIN_OBJS += builtin/stash.o
BUILTIN_OBJS += builtin/stripspace.o
BUILTIN_OBJS += builtin/submodule--helper.o
BUILTIN_OBJS += builtin/symbolic-ref.o
BUILTIN_OBJS += builtin/tag.o
BUILTIN_OBJS += builtin/unpack-file.o
BUILTIN_OBJS += builtin/unpack-objects.o
BUILTIN_OBJS += builtin/update-index.o
BUILTIN_OBJS += builtin/update-ref.o
BUILTIN_OBJS += builtin/update-server-info.o
BUILTIN_OBJS += builtin/upload-archive.o
BUILTIN_OBJS += builtin/upload-pack.o
BUILTIN_OBJS += builtin/var.o
BUILTIN_OBJS += builtin/verify-commit.o
BUILTIN_OBJS += builtin/verify-pack.o
BUILTIN_OBJS += builtin/verify-tag.o
BUILTIN_OBJS += builtin/worktree.o
BUILTIN_OBJS += builtin/write-tree.o
# THIRD_PARTY_SOURCES is a list of patterns compatible with the
# $(filter) and $(filter-out) family of functions. They specify source
# files taken from some third-party source where we want to be less
# strict about issues such as coding style, so that we don't diverge
# from upstream unnecessarily (which makes merging in future changes
# easier); see the example after the list below.
THIRD_PARTY_SOURCES += compat/inet_ntop.c
THIRD_PARTY_SOURCES += compat/inet_pton.c
THIRD_PARTY_SOURCES += compat/nedmalloc/%
THIRD_PARTY_SOURCES += compat/obstack.%
THIRD_PARTY_SOURCES += compat/poll/%
THIRD_PARTY_SOURCES += compat/regex/%
THIRD_PARTY_SOURCES += sha1collisiondetection/%
THIRD_PARTY_SOURCES += sha1dc/%
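#
# For example, a consumer can exclude them along these lines (an
# illustrative sketch; the coccicheck machinery further down does
# something similar with its source list):
#
# COCCI_SOURCES = $(filter-out $(THIRD_PARTY_SOURCES),$(FOUND_C_SOURCES))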
UNIT_TEST_PROGRAMS += t-basic
UNIT_TEST_PROGRAMS += t-mem-pool
UNIT_TEST_PROGRAMS += t-strbuf
UNIT_TEST_PROGRAMS += t-ctype
UNIT_TEST_PROGRAMS += t-prio-queue
UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_BIN)/%$X,$(UNIT_TEST_PROGRAMS))
UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
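# The unit tests are built as part of the default "all" target (to
# avoid bitrot); "make build-unit-tests" builds just them, and they
# can be run via "make unit-tests" or directly, e.g.:
#
# ./t/unit-tests/bin/t-strbuf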
# xdiff and reftable libs may in turn depend on what is in libgit.a
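# ($(LIB_FILE) is deliberately listed twice: once up front, and once
# more at the end of the link line to satisfy that mutual dependency.)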
GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
EXTLIBS =
GIT_USER_AGENT = git/$(GIT_VERSION)
ifeq ($(wildcard sha1collisiondetection/lib/sha1.h),sha1collisiondetection/lib/sha1.h)
DC_SHA1_SUBMODULE = auto
endif
# Set CFLAGS, LDFLAGS and other *FLAGS variables. These might be
# tweaked by config.* below as well as on the command line, both of
# which will override these defaults.
# Older versions of GCC may require adding "-std=gnu99" at the end.
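#
# E.g. for a debugging-friendly developer build (illustrative):
#
# make DEVELOPER=1 CFLAGS="-O0 -g"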
CFLAGS = -g -O2 -Wall
LDFLAGS =
CC_LD_DYNPATH = -Wl,-rpath,
BASIC_CFLAGS = -I.
BASIC_LDFLAGS =
# library flags
ARFLAGS = rcs
PTHREAD_CFLAGS =
# For the 'sparse' target
SPARSE_FLAGS ?= -std=gnu99
SP_EXTRA_FLAGS = -Wno-universal-initializer
# For informing GIT-BUILD-OPTIONS of the SANITIZE=leak,address targets
SANITIZE_LEAK =
SANITIZE_ADDRESS =
# For the 'coccicheck' target
SPATCH_INCLUDE_FLAGS = --all-includes
SPATCH_FLAGS =
SPATCH_TEST_FLAGS =
# If *.o files are present, have "coccicheck" depend on them. With
# COMPUTE_HEADER_DEPENDENCIES this speeds up the common case of only
# needing to re-generate coccicheck results for the users of a given
# API when it changes, rather than for all files in the project. If
# COMPUTE_HEADER_DEPENDENCIES=no this will be unset too.
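#
# To ignore existing *.o files and their dependency information, unset
# it on the command line, e.g.:
#
# make coccicheck SPATCH_USE_O_DEPENDENCIES=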
SPATCH_USE_O_DEPENDENCIES = YesPlease
# Set SPATCH_CONCAT_COCCI to concatenate the contrib/coccinelle/*.cocci
# files into a single contrib/coccinelle/ALL.cocci before running
# "coccicheck".
#
# Pros:
#
# - Speeds up a one-shot run of "make coccicheck", as we won't have to
# parse *.[ch] files N times for the N *.cocci rules
#
# Cons:
#
# - Will make incremental development of *.cocci slower, as
# e.g. changing strbuf.cocci will re-run all *.cocci.
#
# - Makes error and performance analysis harder, as rules will be
# applied from a monolithic ALL.cocci, rather than
# e.g. strbuf.cocci. To work around this either undefine this, or
# generate a specific patch, e.g. this will always use strbuf.cocci,
# not ALL.cocci:
#
# make contrib/coccinelle/strbuf.cocci.patch
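#
# or unset the variable for the run:
#
# make coccicheck SPATCH_CONCAT_COCCI=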
SPATCH_CONCAT_COCCI = YesPlease
# Rebuild 'coccicheck' if $(SPATCH), its flags etc. change
TRACK_SPATCH_DEFINES =
TRACK_SPATCH_DEFINES += $(SPATCH)
TRACK_SPATCH_DEFINES += $(SPATCH_INCLUDE_FLAGS)
TRACK_SPATCH_DEFINES += $(SPATCH_FLAGS)
TRACK_SPATCH_DEFINES += $(SPATCH_TEST_FLAGS)
GIT-SPATCH-DEFINES: FORCE
@FLAGS='$(TRACK_SPATCH_DEFINES)'; \
if test x"$$FLAGS" != x"`cat GIT-SPATCH-DEFINES 2>/dev/null`" ; then \
echo >&2 " * new spatch flags"; \
echo "$$FLAGS" >GIT-SPATCH-DEFINES; \
fi
include config.mak.uname
-include config.mak.autogen
-include config.mak
ifdef DEVELOPER
include config.mak.dev
endif
# what 'all' will build and 'install' will install in gitexecdir,
# excluding programs for built-in commands
ALL_PROGRAMS = $(PROGRAMS) $(SCRIPTS)
ALL_COMMANDS_TO_INSTALL = $(ALL_PROGRAMS)
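# SKIP_DASHED_BUILT_INS, if set (e.g. "make SKIP_DASHED_BUILT_INS=YesPlease"),
# leaves the dashed forms of the built-ins out of the installation.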
ifeq (,$(SKIP_DASHED_BUILT_INS))
ALL_COMMANDS_TO_INSTALL += $(BUILT_INS)
else
# git-upload-pack, git-receive-pack and git-upload-archive are special: they
# are _expected_ to be present in the `bin/` directory in their dashed form.
ALL_COMMANDS_TO_INSTALL += git-receive-pack$(X)
ALL_COMMANDS_TO_INSTALL += git-upload-archive$(X)
ALL_COMMANDS_TO_INSTALL += git-upload-pack$(X)
endif
ALL_CFLAGS = $(DEVELOPER_CFLAGS) $(CPPFLAGS) $(CFLAGS)
ALL_LDFLAGS = $(LDFLAGS)
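# SANITIZE is a comma-separated list of sanitizers, e.g. (illustrative):
#
# make SANITIZE=leak
# make SANITIZE=address,undefined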
ifdef SANITIZE
SANITIZERS := $(foreach flag,$(subst $(comma),$(space),$(SANITIZE)),$(flag))
BASIC_CFLAGS += -fsanitize=$(SANITIZE) -fno-sanitize-recover=$(SANITIZE)
BASIC_CFLAGS += -fno-omit-frame-pointer
ifneq ($(filter undefined,$(SANITIZERS)),)
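# sha1dc's default unaligned loads would be flagged by the 'undefined'
# sanitizer; force aligned access instead.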
BASIC_CFLAGS += -DSHA1DC_FORCE_ALIGNED_ACCESS
endif
ifneq ($(filter leak,$(SANITIZERS)),)
BASIC_CFLAGS += -DSUPPRESS_ANNOTATED_LEAKS
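# Optimized code can keep the only reference to an allocation in a
# register that a NORETURN path clobbers, causing LSan false
# positives, so compile at -O0 when leak-checking.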
BASIC_CFLAGS += -O0
SANITIZE_LEAK = YesCompiledWithIt
endif
ifneq ($(filter address,$(SANITIZERS)),)
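# libasan interposes its own regexec(), which ignores REG_STARTEND;
# use the compat regex implementation instead.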
NO_REGEX = NeededForASAN
SANITIZE_ADDRESS = YesCompiledWithIt
endif
endif
ifndef sysconfdir
ifeq ($(prefix),/usr)
sysconfdir = /etc
else
sysconfdir = etc
endif
endif
ifndef COMPUTE_HEADER_DEPENDENCIES
COMPUTE_HEADER_DEPENDENCIES = auto
endif
ifeq ($(COMPUTE_HEADER_DEPENDENCIES),auto)
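# Probe whether $(CC) accepts the dependency-generation flags by
# compiling an empty translation unit; an exit status of 0 means yes.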
dep_check = $(shell $(CC) $(ALL_CFLAGS) \
-Wno-pedantic \
-c -MF /dev/null -MQ /dev/null -MMD -MP \
-x c /dev/null -o /dev/null 2>&1; \
echo $$?)
ifeq ($(dep_check),0)
override COMPUTE_HEADER_DEPENDENCIES = yes
else
override COMPUTE_HEADER_DEPENDENCIES = no
endif
endif
ifeq ($(COMPUTE_HEADER_DEPENDENCIES),yes)
USE_COMPUTED_HEADER_DEPENDENCIES = YesPlease
else
ifneq ($(COMPUTE_HEADER_DEPENDENCIES),no)
$(error please set COMPUTE_HEADER_DEPENDENCIES to yes, no, or auto \
(not "$(COMPUTE_HEADER_DEPENDENCIES)"))
endif
endif
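# The probe above just test-compiles an empty translation unit with the
# dependency-generating flags (-MF/-MQ/-MMD/-MP) and checks the exit code.
# To skip the probe entirely, set the knob explicitly, e.g.:
#
#     make COMPUTE_HEADER_DEPENDENCIES=yes    # or =no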
ifndef GENERATE_COMPILATION_DATABASE
GENERATE_COMPILATION_DATABASE = no
endif
ifeq ($(GENERATE_COMPILATION_DATABASE),yes)
compdb_check = $(shell $(CC) $(ALL_CFLAGS) \
-Wno-pedantic \
-c -MJ /dev/null \
-x c /dev/null -o /dev/null 2>&1; \
echo $$?)
ifneq ($(compdb_check),0)
override GENERATE_COMPILATION_DATABASE = no
$(warning GENERATE_COMPILATION_DATABASE is set to "yes", but your compiler does not \
support generating compilation database entries)
endif
else
ifneq ($(GENERATE_COMPILATION_DATABASE),no)
$(error please set GENERATE_COMPILATION_DATABASE to "yes" or "no" \
(not "$(GENERATE_COMPILATION_DATABASE)"))
endif
endif
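# Example usage, assuming a clang-compatible compiler that understands -MJ:
#
#     make GENERATE_COMPILATION_DATABASE=yes
#
# This emits one JSON fragment per compiled file into compile_commands/,
# which is then concatenated into compile_commands.json for LibClang-based
# tools such as clangd.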
ifdef SANE_TOOL_PATH
SANE_TOOL_PATH_SQ = $(subst ','\'',$(SANE_TOOL_PATH))
BROKEN_PATH_FIX = 's|^\# @@BROKEN_PATH_FIX@@$$|git_broken_path_fix "$(SANE_TOOL_PATH_SQ)"|'
PATH := $(SANE_TOOL_PATH):${PATH}
else
BROKEN_PATH_FIX = '/^\# @@BROKEN_PATH_FIX@@$$/d'
endif
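# SANE_TOOL_PATH normally comes from config.mak.uname on platforms whose
# /usr/bin tools are broken; e.g. on Solaris something like
#
#     SANE_TOOL_PATH = /usr/xpg6/bin:/usr/xpg4/bin
#
# puts the POSIX-conforming tools first, both on PATH here and, via the
# @@BROKEN_PATH_FIX@@ sed substitution above, in the generated scripts.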
ifeq (,$(HOST_CPU))
BASIC_CFLAGS += -DGIT_HOST_CPU="\"$(firstword $(subst -, ,$(uname_M)))\""
else
BASIC_CFLAGS += -DGIT_HOST_CPU="\"$(HOST_CPU)\""
endif
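# GIT_HOST_CPU defaults to the first dash-separated component of "uname -m"
# and is what "git version --build-options" later reports; override it with
# e.g. "make HOST_CPU=x86_64" when cross-compiling.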
ifneq (,$(INLINE))
BASIC_CFLAGS += -Dinline=$(INLINE)
endif
ifneq (,$(SOCKLEN_T))
BASIC_CFLAGS += -Dsocklen_t=$(SOCKLEN_T)
endif
ifeq ($(uname_S),Darwin)
ifndef NO_FINK
ifeq ($(shell test -d /sw/lib && echo y),y)
BASIC_CFLAGS += -I/sw/include
BASIC_LDFLAGS += -L/sw/lib
endif
endif
ifndef NO_DARWIN_PORTS
ifeq ($(shell test -d /opt/local/lib && echo y),y)
BASIC_CFLAGS += -I/opt/local/include
BASIC_LDFLAGS += -L/opt/local/lib
endif
endif
ifndef NO_APPLE_COMMON_CRYPTO
NO_OPENSSL = YesPlease
APPLE_COMMON_CRYPTO = YesPlease
COMPAT_CFLAGS += -DAPPLE_COMMON_CRYPTO
endif
PTHREAD_LIBS =
endif
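# On Darwin, the tests above pick up Fink (/sw) and MacPorts (/opt/local)
# installations automatically when those directories exist; to skip both
# probes and keep OpenSSL instead of Apple's CommonCrypto, e.g.:
#
#     make NO_FINK=YesPlease NO_DARWIN_PORTS=YesPlease NO_APPLE_COMMON_CRYPTO=YesPlease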
ifdef NO_LIBGEN_H
COMPAT_CFLAGS += -DNO_LIBGEN_H
COMPAT_OBJS += compat/basename.o
endif
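# (compat/basename.o is assumed here to carry the basename()/dirname()
# replacements used when <libgen.h> is unavailable.)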
ifdef USE_LIBPCRE1
$(error The USE_LIBPCRE1 build option has been removed, use version 2 with USE_LIBPCRE)
endif
USE_LIBPCRE2 ?= $(USE_LIBPCRE)
ifneq (,$(USE_LIBPCRE2))
BASIC_CFLAGS += -DUSE_LIBPCRE2
EXTLIBS += -lpcre2-8
endif
ifdef LIBPCREDIR
BASIC_CFLAGS += -I$(LIBPCREDIR)/include
EXTLIBS += $(call libpath_template,$(LIBPCREDIR)/$(lib))
endif
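# A typical invocation, assuming PCRE2 lives under a non-standard prefix
# (the path below is only an example):
#
#     make USE_LIBPCRE=YesPlease LIBPCREDIR=/opt/pcre2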
ifdef HAVE_ALLOCA_H
BASIC_CFLAGS += -DHAVE_ALLOCA_H
endif
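# (HAVE_ALLOCA_H selects the fast on-stack xalloca() wrapper; without it,
# xalloca() degrades gracefully to plain xmalloc()/free().)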
IMAP_SEND_BUILDDEPS =
IMAP_SEND_LDFLAGS =
ifdef NO_CURL
BASIC_CFLAGS += -DNO_CURL
REMOTE_CURL_PRIMARY =
REMOTE_CURL_ALIASES =
REMOTE_CURL_NAMES =
EXCLUDED_PROGRAMS += git-http-fetch git-http-push
else
ifdef CURLDIR
# Try "-Wl,-rpath=$(CURLDIR)/$(lib)" in such a case.
CURL_CFLAGS = -I$(CURLDIR)/include
CURL_LIBCURL = $(call libpath_template,$(CURLDIR)/$(lib))
else
CURL_CFLAGS =
CURL_LIBCURL =
endif
ifndef CURL_LDFLAGS
CURL_LDFLAGS = $(eval CURL_LDFLAGS := $$(shell $$(CURL_CONFIG) --libs))$(CURL_LDFLAGS)
endif
CURL_LIBCURL += $(CURL_LDFLAGS)
ifndef CURL_CFLAGS
CURL_CFLAGS = $(eval CURL_CFLAGS := $$(shell $$(CURL_CONFIG) --cflags))$(CURL_CFLAGS)
endif
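# The two conditionals above use a lazy, self-overwriting idiom: on first
# expansion, $(eval ...) redefines the variable as a simply-expanded (:=)
# copy of the $(CURL_CONFIG) output, so curl-config is run at most once
# per make invocation rather than once per reference.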
BASIC_CFLAGS += $(CURL_CFLAGS)
REMOTE_CURL_PRIMARY = git-remote-http$X
REMOTE_CURL_ALIASES = git-remote-https$X git-remote-ftp$X git-remote-ftps$X
REMOTE_CURL_NAMES = $(REMOTE_CURL_PRIMARY) $(REMOTE_CURL_ALIASES)
PROGRAM_OBJS += http-fetch.o
PROGRAMS += $(REMOTE_CURL_NAMES)
ifndef NO_EXPAT
PROGRAM_OBJS += http-push.o
endif
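# curl-config --vernum prints the version as six hex digits (7.34.0 is
# 072200); the sed expression pads what appears to be five-digit output
# from ancient 7.11/7.12 releases, so the echo/sort pipeline below is
# effectively a "libcurl >= 7.34.0" check.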
curl_check := $(shell (echo 072200; $(CURL_CONFIG) --vernum | sed -e '/^70[BC]/s/^/0/') 2>/dev/null | sort -r | sed -ne 2p)
ifeq "$(curl_check)" "072200"
USE_CURL_FOR_IMAP_SEND = YesPlease
endif
ifdef USE_CURL_FOR_IMAP_SEND
BASIC_CFLAGS += -DUSE_CURL_FOR_IMAP_SEND
IMAP_SEND_BUILDDEPS = http.o
IMAP_SEND_LDFLAGS += $(CURL_LIBCURL)
endif
ifndef NO_EXPAT
ifdef EXPATDIR
BASIC_CFLAGS += -I$(EXPATDIR)/include
EXPAT_LIBEXPAT = $(call libpath_template,$(EXPATDIR)/$(lib)) -lexpat
else
EXPAT_LIBEXPAT = -lexpat
endif
ifdef EXPAT_NEEDS_XMLPARSE_H
BASIC_CFLAGS += -DEXPAT_NEEDS_XMLPARSE_H
endif
endif
endif
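# To build without any HTTP(S)/FTP(S) support, e.g.:
#
#     make NO_CURL=YesPlease
#
# which, per the section above, drops the git-remote-http family of
# helpers together with git-http-fetch and git-http-push.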
IMAP_SEND_LDFLAGS += $(OPENSSL_LINK) $(OPENSSL_LIBSSL) $(LIB_4_CRYPTO)
ifdef ZLIB_PATH
BASIC_CFLAGS += -I$(ZLIB_PATH)/include
EXTLIBS += $(call libpath_template,$(ZLIB_PATH)/$(lib))
endif
EXTLIBS += -lz
ifndef NO_OPENSSL
OPENSSL_LIBSSL = -lssl
ifdef OPENSSLDIR
BASIC_CFLAGS += -I$(OPENSSLDIR)/include
OPENSSL_LINK = $(call libpath_template,$(OPENSSLDIR)/$(lib))
else
OPENSSL_LINK =
endif
ifdef NEEDS_CRYPTO_WITH_SSL
OPENSSL_LIBSSL += -lcrypto
endif
else
BASIC_CFLAGS += -DNO_OPENSSL
OPENSSL_LIBSSL =
endif
ifdef NO_OPENSSL
LIB_4_CRYPTO =
else
ifdef NEEDS_SSL_WITH_CRYPTO
LIB_4_CRYPTO = $(OPENSSL_LINK) -lcrypto -lssl
else
LIB_4_CRYPTO = $(OPENSSL_LINK) -lcrypto
endif
ifdef APPLE_COMMON_CRYPTO
LIB_4_CRYPTO += -framework Security -framework CoreFoundation
endif
endif
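# Two easily confused knobs meet here: NEEDS_CRYPTO_WITH_SSL appends
# -lcrypto whenever -lssl is linked, while NEEDS_SSL_WITH_CRYPTO adds
# -lssl alongside -lcrypto for users (such as imap-send) that primarily
# need the crypto library.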
ifndef NO_ICONV
ifdef NEEDS_LIBICONV
ifdef ICONVDIR
BASIC_CFLAGS += -I$(ICONVDIR)/include
ICONV_LINK = $(call libpath_template,$(ICONVDIR)/$(lib))
else
ICONV_LINK =
endif
ifdef NEEDS_LIBINTL_BEFORE_LIBICONV
ICONV_LINK += -lintl
endif
EXTLIBS += $(ICONV_LINK) -liconv
endif
endif
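# An illustrative invocation (not a default) for a system whose libc
# lacks iconv, assuming a hypothetical installation under /usr/local:
#
#   make NEEDS_LIBICONV=YesPlease ICONVDIR=/usr/local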
ifdef ICONV_OMITS_BOM
BASIC_CFLAGS += -DICONV_OMITS_BOM
endif
ifdef NEEDS_LIBGEN
EXTLIBS += -lgen
endif
ifndef NO_GETTEXT
ifndef LIBC_CONTAINS_LIBINTL
EXTLIBS += -lintl
endif
endif
ifdef NEEDS_SOCKET
EXTLIBS += -lsocket
endif
ifdef NEEDS_NSL
EXTLIBS += -lnsl
endif
ifdef NEEDS_RESOLV
EXTLIBS += -lresolv
endif
ifdef NO_D_TYPE_IN_DIRENT
BASIC_CFLAGS += -DNO_D_TYPE_IN_DIRENT
endif
ifdef NO_GECOS_IN_PWENT
BASIC_CFLAGS += -DNO_GECOS_IN_PWENT
endif
ifdef NO_ST_BLOCKS_IN_STRUCT_STAT
BASIC_CFLAGS += -DNO_ST_BLOCKS_IN_STRUCT_STAT
endif
ifdef USE_NSEC
BASIC_CFLAGS += -DUSE_NSEC
endif
ifdef USE_ST_TIMESPEC
BASIC_CFLAGS += -DUSE_ST_TIMESPEC
endif
ifdef NO_NORETURN
BASIC_CFLAGS += -DNO_NORETURN
endif
ifdef NO_NSEC
BASIC_CFLAGS += -DNO_NSEC
endif
ifdef SNPRINTF_RETURNS_BOGUS
COMPAT_CFLAGS += -DSNPRINTF_RETURNS_BOGUS
COMPAT_OBJS += compat/snprintf.o
endif
ifdef FREAD_READS_DIRECTORIES
COMPAT_CFLAGS += -DFREAD_READS_DIRECTORIES
COMPAT_OBJS += compat/fopen.o
endif
ifdef OPEN_RETURNS_EINTR
COMPAT_CFLAGS += -DOPEN_RETURNS_EINTR
COMPAT_OBJS += compat/open.o
endif
ifdef NO_SYMLINK_HEAD
BASIC_CFLAGS += -DNO_SYMLINK_HEAD
endif
ifdef NO_GETTEXT
BASIC_CFLAGS += -DNO_GETTEXT
USE_GETTEXT_SCHEME ?= fallthrough
endif
ifdef NO_POLL
NO_POLL_H = YesPlease
NO_SYS_POLL_H = YesPlease
COMPAT_CFLAGS += -DNO_POLL -Icompat/poll
COMPAT_OBJS += compat/poll/poll.o
endif
ifdef NO_STRCASESTR
COMPAT_CFLAGS += -DNO_STRCASESTR
COMPAT_OBJS += compat/strcasestr.o
endif
ifdef NO_STRLCPY
COMPAT_CFLAGS += -DNO_STRLCPY
COMPAT_OBJS += compat/strlcpy.o
endif
ifdef NO_STRTOUMAX
COMPAT_CFLAGS += -DNO_STRTOUMAX
COMPAT_OBJS += compat/strtoumax.o compat/strtoimax.o
endif
ifdef NO_STRTOULL
COMPAT_CFLAGS += -DNO_STRTOULL
endif
ifdef NO_SETENV
COMPAT_CFLAGS += -DNO_SETENV
COMPAT_OBJS += compat/setenv.o
endif
ifdef NO_MKDTEMP
COMPAT_CFLAGS += -DNO_MKDTEMP
COMPAT_OBJS += compat/mkdtemp.o
endif
ifdef MKDIR_WO_TRAILING_SLASH
COMPAT_CFLAGS += -DMKDIR_WO_TRAILING_SLASH
COMPAT_OBJS += compat/mkdir.o
endif
ifdef NO_UNSETENV
COMPAT_CFLAGS += -DNO_UNSETENV
COMPAT_OBJS += compat/unsetenv.o
endif
ifdef NO_SYS_SELECT_H
BASIC_CFLAGS += -DNO_SYS_SELECT_H
endif
ifdef NO_POLL_H
BASIC_CFLAGS += -DNO_POLL_H
endif
ifdef NO_SYS_POLL_H
BASIC_CFLAGS += -DNO_SYS_POLL_H
endif
ifdef NEEDS_SYS_PARAM_H
BASIC_CFLAGS += -DNEEDS_SYS_PARAM_H
endif
ifdef NO_INTTYPES_H
BASIC_CFLAGS += -DNO_INTTYPES_H
endif
ifdef NO_INITGROUPS
BASIC_CFLAGS += -DNO_INITGROUPS
endif
ifdef NO_MMAP
COMPAT_CFLAGS += -DNO_MMAP
COMPAT_OBJS += compat/mmap.o
else
ifdef USE_WIN32_MMAP
COMPAT_CFLAGS += -DUSE_WIN32_MMAP
COMPAT_OBJS += compat/win32mmap.o
endif
endif
ifdef MMAP_PREVENTS_DELETE
BASIC_CFLAGS += -DMMAP_PREVENTS_DELETE
endif
ifdef OBJECT_CREATION_USES_RENAMES
COMPAT_CFLAGS += -DOBJECT_CREATION_MODE=1
endif
ifdef NO_STRUCT_ITIMERVAL
COMPAT_CFLAGS += -DNO_STRUCT_ITIMERVAL
NO_SETITIMER = YesPlease
endif
ifdef NO_SETITIMER
COMPAT_CFLAGS += -DNO_SETITIMER
endif
ifdef NO_PREAD
COMPAT_CFLAGS += -DNO_PREAD
COMPAT_OBJS += compat/pread.o
endif
ifdef NO_FAST_WORKING_DIRECTORY
BASIC_CFLAGS += -DNO_FAST_WORKING_DIRECTORY
endif
ifdef NO_TRUSTABLE_FILEMODE
BASIC_CFLAGS += -DNO_TRUSTABLE_FILEMODE
endif
ifdef NEEDS_MODE_TRANSLATION
COMPAT_CFLAGS += -DNEEDS_MODE_TRANSLATION
COMPAT_OBJS += compat/stat.o
endif
ifdef NO_IPV6
BASIC_CFLAGS += -DNO_IPV6
endif
ifdef NO_INTPTR_T
COMPAT_CFLAGS += -DNO_INTPTR_T
endif
ifdef NO_UINTMAX_T
BASIC_CFLAGS += -Duintmax_t=uint32_t
endif
ifdef NO_SOCKADDR_STORAGE
ifdef NO_IPV6
BASIC_CFLAGS += -Dsockaddr_storage=sockaddr_in
else
BASIC_CFLAGS += -Dsockaddr_storage=sockaddr_in6
endif
endif
ifdef NO_INET_NTOP
LIB_OBJS += compat/inet_ntop.o
BASIC_CFLAGS += -DNO_INET_NTOP
endif
ifdef NO_INET_PTON
LIB_OBJS += compat/inet_pton.o
BASIC_CFLAGS += -DNO_INET_PTON
endif
ifdef NO_UNIX_SOCKETS
BASIC_CFLAGS += -DNO_UNIX_SOCKETS
else
LIB_OBJS += unix-socket.o
LIB_OBJS += unix-stream-server.o
endif
# Simple IPC requires threads and platform-specific IPC support.
# Only platforms that have both should include these source files
# in the build.
#
# On Windows-based systems, Simple IPC requires threads and Windows
# Named Pipes. These are always available, so Simple IPC support
# is optional.
#
# On Unix-based systems, Simple IPC requires pthreads and Unix
# domain sockets. So support is only enabled when both are present.
#
ifdef USE_WIN32_IPC
BASIC_CFLAGS += -DSUPPORTS_SIMPLE_IPC
LIB_OBJS += compat/simple-ipc/ipc-shared.o
LIB_OBJS += compat/simple-ipc/ipc-win32.o
else
ifndef NO_PTHREADS
ifndef NO_UNIX_SOCKETS
BASIC_CFLAGS += -DSUPPORTS_SIMPLE_IPC
LIB_OBJS += compat/simple-ipc/ipc-shared.o
LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
endif
endif
endif
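# To sketch how the knobs above interact on a Unix-based system: Simple
# IPC is compiled in only while both prerequisites are available, so
# setting either "make NO_UNIX_SOCKETS=YesPlease" or NO_PTHREADS (both
# illustrative, not defaults) drops it from the build again.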
ifdef NO_ICONV
BASIC_CFLAGS += -DNO_ICONV
endif
ifdef OLD_ICONV
BASIC_CFLAGS += -DOLD_ICONV
endif
ifdef NO_DEFLATE_BOUND
BASIC_CFLAGS += -DNO_DEFLATE_BOUND
endif
ifdef NO_POSIX_GOODIES
BASIC_CFLAGS += -DNO_POSIX_GOODIES
endif
ifdef APPLE_COMMON_CRYPTO_SHA1
# Apple CommonCrypto requires chunking
SHA1_MAX_BLOCK_SIZE = 1024L*1024L*1024L
endif
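# (The limit set above is consumed by the SHA1_MAX_BLOCK_SIZE block
# further down, which adds compat/sha1-chunked.o and passes the value
# to the compiler, so each hashing call here is capped at 1 GiB.)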
ifdef PPC_SHA1
$(error the PPC_SHA1 flag has been removed along with the PowerPC-specific SHA-1 implementation.)
endif
ifdef OPENSSL_SHA1
EXTLIBS += $(LIB_4_CRYPTO)
BASIC_CFLAGS += -DSHA1_OPENSSL
else
ifdef BLK_SHA1
LIB_OBJS += block-sha1/sha1.o
BASIC_CFLAGS += -DSHA1_BLK
else
ifdef APPLE_COMMON_CRYPTO_SHA1
COMPAT_CFLAGS += -DCOMMON_DIGEST_FOR_OPENSSL
BASIC_CFLAGS += -DSHA1_APPLE
else
BASIC_CFLAGS += -DSHA1_DC
LIB_OBJS += sha1dc_git.o
ifdef DC_SHA1_EXTERNAL
ifdef DC_SHA1_SUBMODULE
ifneq ($(DC_SHA1_SUBMODULE),auto)
$(error Only set DC_SHA1_EXTERNAL or DC_SHA1_SUBMODULE, not both)
endif
endif
BASIC_CFLAGS += -DDC_SHA1_EXTERNAL
EXTLIBS += -lsha1detectcoll
else
ifdef DC_SHA1_SUBMODULE
LIB_OBJS += sha1collisiondetection/lib/sha1.o
LIB_OBJS += sha1collisiondetection/lib/ubc_check.o
BASIC_CFLAGS += -DDC_SHA1_SUBMODULE
else
LIB_OBJS += sha1dc/sha1.o
LIB_OBJS += sha1dc/ubc_check.o
endif
BASIC_CFLAGS += \
-DSHA1DC_NO_STANDARD_INCLUDES \
-DSHA1DC_INIT_SAFE_HASH_DEFAULT=0 \
-DSHA1DC_CUSTOM_INCLUDE_SHA1_C="\"git-compat-util.h\"" \
-DSHA1DC_CUSTOM_INCLUDE_UBC_CHECK_C="\"git-compat-util.h\""
endif
endif
endif
endif
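# For illustration, the SHA-1 cascade above defaults to the bundled
# collision-detecting sha1dc; any one of these example invocations
# selects an alternative implementation instead:
#
#   make OPENSSL_SHA1=YesPlease
#   make BLK_SHA1=YesPlease
#   make APPLE_COMMON_CRYPTO_SHA1=YesPlease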
ifdef OPENSSL_SHA256
EXTLIBS += $(LIB_4_CRYPTO)
BASIC_CFLAGS += -DSHA256_OPENSSL
else
ifdef NETTLE_SHA256
BASIC_CFLAGS += -DSHA256_NETTLE
EXTLIBS += -lnettle
else
ifdef GCRYPT_SHA256
BASIC_CFLAGS += -DSHA256_GCRYPT
EXTLIBS += -lgcrypt
else
LIB_OBJS += sha256/block/sha256.o
BASIC_CFLAGS += -DSHA256_BLK
endif
endif
endif
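# Likewise for the SHA-256 cascade above, where the bundled block
# implementation is the fallback; for example:
#
#   make OPENSSL_SHA256=YesPlease
#   make NETTLE_SHA256=YesPlease
#   make GCRYPT_SHA256=YesPlease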
ifdef SHA1_MAX_BLOCK_SIZE
LIB_OBJS += compat/sha1-chunked.o
BASIC_CFLAGS += -DSHA1_MAX_BLOCK_SIZE="$(SHA1_MAX_BLOCK_SIZE)"
endif
ifdef NO_HSTRERROR
COMPAT_CFLAGS += -DNO_HSTRERROR
COMPAT_OBJS += compat/hstrerror.o
endif
ifdef NO_MEMMEM
COMPAT_CFLAGS += -DNO_MEMMEM
COMPAT_OBJS += compat/memmem.o
endif
ifdef NO_GETPAGESIZE
COMPAT_CFLAGS += -DNO_GETPAGESIZE
endif
ifdef INTERNAL_QSORT
COMPAT_CFLAGS += -DINTERNAL_QSORT
endif
ifdef HAVE_ISO_QSORT_S
COMPAT_CFLAGS += -DHAVE_ISO_QSORT_S
else
COMPAT_OBJS += compat/qsort_s.o
endif
ifdef RUNTIME_PREFIX
COMPAT_CFLAGS += -DRUNTIME_PREFIX
endif
ifdef NO_PTHREADS
BASIC_CFLAGS += -DNO_PTHREADS
else
BASIC_CFLAGS += $(PTHREAD_CFLAGS)
EXTLIBS += $(PTHREAD_LIBS)
endif
ifdef HAVE_PATHS_H
BASIC_CFLAGS += -DHAVE_PATHS_H
endif
ifdef HAVE_LIBCHARSET_H
BASIC_CFLAGS += -DHAVE_LIBCHARSET_H
EXTLIBS += $(CHARSET_LIB)
endif
ifdef HAVE_STRINGS_H
BASIC_CFLAGS += -DHAVE_STRINGS_H
endif
ifdef HAVE_DEV_TTY
BASIC_CFLAGS += -DHAVE_DEV_TTY
endif
ifdef DIR_HAS_BSD_GROUP_SEMANTICS
COMPAT_CFLAGS += -DDIR_HAS_BSD_GROUP_SEMANTICS
endif
ifdef UNRELIABLE_FSTAT
BASIC_CFLAGS += -DUNRELIABLE_FSTAT
endif
ifdef NO_REGEX
COMPAT_CFLAGS += -Icompat/regex
COMPAT_OBJS += compat/regex/regex.o
else
ifdef USE_ENHANCED_BASIC_REGULAR_EXPRESSIONS
COMPAT_CFLAGS += -DUSE_ENHANCED_BASIC_REGULAR_EXPRESSIONS
COMPAT_OBJS += compat/regcomp_enhanced.o
endif
endif
ifdef NATIVE_CRLF
BASIC_CFLAGS += -DNATIVE_CRLF
endif
ifdef USE_NED_ALLOCATOR
COMPAT_CFLAGS += -Icompat/nedmalloc
COMPAT_OBJS += compat/nedmalloc/nedmalloc.o
OVERRIDE_STRDUP = YesPlease
endif
ifdef OVERRIDE_STRDUP
COMPAT_CFLAGS += -DOVERRIDE_STRDUP
COMPAT_OBJS += compat/strdup.o
endif
ifdef GIT_TEST_CMP_USE_COPIED_CONTEXT
export GIT_TEST_CMP_USE_COPIED_CONTEXT
endif
ifndef NO_MSGFMT_EXTENDED_OPTIONS
MSGFMT += --check
endif
ifdef HAVE_CLOCK_GETTIME
BASIC_CFLAGS += -DHAVE_CLOCK_GETTIME
endif
ifdef HAVE_CLOCK_MONOTONIC
BASIC_CFLAGS += -DHAVE_CLOCK_MONOTONIC
endif
ifdef HAVE_SYNC_FILE_RANGE
BASIC_CFLAGS += -DHAVE_SYNC_FILE_RANGE
endif
ifdef NEEDS_LIBRT
EXTLIBS += -lrt
endif
ifdef HAVE_BSD_SYSCTL
BASIC_CFLAGS += -DHAVE_BSD_SYSCTL
endif
ifdef HAVE_BSD_KERN_PROC_SYSCTL
BASIC_CFLAGS += -DHAVE_BSD_KERN_PROC_SYSCTL
endif
ifdef HAVE_GETDELIM
BASIC_CFLAGS += -DHAVE_GETDELIM
endif
ifneq ($(findstring arc4random,$(CSPRNG_METHOD)),)
BASIC_CFLAGS += -DHAVE_ARC4RANDOM
endif
ifneq ($(findstring libbsd,$(CSPRNG_METHOD)),)
BASIC_CFLAGS += -DHAVE_ARC4RANDOM_LIBBSD
EXTLIBS += -lbsd
endif
ifneq ($(findstring getrandom,$(CSPRNG_METHOD)),)
BASIC_CFLAGS += -DHAVE_GETRANDOM
endif
ifneq ($(findstring getentropy,$(CSPRNG_METHOD)),)
BASIC_CFLAGS += -DHAVE_GETENTROPY
endif
ifneq ($(findstring rtlgenrandom,$(CSPRNG_METHOD)),)
BASIC_CFLAGS += -DHAVE_RTLGENRANDOM
endif
ifneq ($(findstring openssl,$(CSPRNG_METHOD)),)
BASIC_CFLAGS += -DHAVE_OPENSSL_CSPRNG
EXTLIBS += -lcrypto -lssl
endif
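# CSPRNG_METHOD above is a space-separated list of backends, each matched
# with $(findstring ...), so more than one may be enabled at once. A
# hedged example invocation (which backends actually work depends on your
# platform and installed libraries):
#
#     make CSPRNG_METHOD="arc4random getrandom openssl"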
ifneq ($(PROCFS_EXECUTABLE_PATH),)
procfs_executable_path_SQ = $(subst ','\'',$(PROCFS_EXECUTABLE_PATH))
BASIC_CFLAGS += '-DPROCFS_EXECUTABLE_PATH="$(procfs_executable_path_SQ)"'
endif
ifndef HAVE_PLATFORM_PROCINFO
COMPAT_OBJS += compat/stub/procinfo.o
endif
ifdef HAVE_NS_GET_EXECUTABLE_PATH
BASIC_CFLAGS += -DHAVE_NS_GET_EXECUTABLE_PATH
endif
ifdef HAVE_WPGMPTR
BASIC_CFLAGS += -DHAVE_WPGMPTR
endif
ifdef FILENO_IS_A_MACRO
COMPAT_CFLAGS += -DFILENO_IS_A_MACRO
COMPAT_OBJS += compat/fileno.o
endif
ifdef NEED_ACCESS_ROOT_HANDLER
COMPAT_CFLAGS += -DNEED_ACCESS_ROOT_HANDLER
COMPAT_OBJS += compat/access.o
endif
ifdef FSMONITOR_DAEMON_BACKEND
COMPAT_CFLAGS += -DHAVE_FSMONITOR_DAEMON_BACKEND
COMPAT_OBJS += compat/fsmonitor/fsm-listen-$(FSMONITOR_DAEMON_BACKEND).o
COMPAT_OBJS += compat/fsmonitor/fsm-health-$(FSMONITOR_DAEMON_BACKEND).o
COMPAT_OBJS += compat/fsmonitor/fsm-ipc-$(FSMONITOR_DAEMON_BACKEND).o
endif
ifdef FSMONITOR_OS_SETTINGS
COMPAT_CFLAGS += -DHAVE_FSMONITOR_OS_SETTINGS
COMPAT_OBJS += compat/fsmonitor/fsm-settings-$(FSMONITOR_OS_SETTINGS).o
COMPAT_OBJS += compat/fsmonitor/fsm-path-utils-$(FSMONITOR_OS_SETTINGS).o
endif
ifeq ($(TCLTK_PATH),)
NO_TCLTK = NoThanks
endif
ifeq ($(PERL_PATH),)
NO_PERL = NoThanks
endif
ifeq ($(PYTHON_PATH),)
NO_PYTHON = NoThanks
endif
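# A note on the PAGER_ENV default below: LESS=FRX makes less(1) quit if
# the output fits on one screen (F), pass raw color escape sequences
# through (R), and skip termcap init/deinit (X); LV=-c lets lv(1) emit
# ANSI color sequences.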
ifndef PAGER_ENV
PAGER_ENV = LESS=FRX LV=-c
endif
ifdef NO_INSTALL_HARDLINKS
export NO_INSTALL_HARDLINKS
endif
### profile feedback build
#
# Can adjust this to be a global directory if you want to do extended
# data gathering
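#
# For example, a two-pass profile-directed build that uses the test
# suite as its sample workload:
#
#     make PROFILE=BUILD all
#     sudo make PROFILE=BUILD install
#
# Alternatively, build instrumented binaries, use them for a while, then
# rebuild optimized ones:
#
#     make PROFILE=GEN PROFILE_DIR=/var/cache/profile all
#     make PROFILE=GEN install
#     (... use git ...)
#     make PROFILE=USE PROFILE_DIR=/var/cache/profile all
#     make PROFILE=USE install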
PROFILE_DIR := $(CURDIR)
ifeq ("$(PROFILE)","GEN")
BASIC_CFLAGS += -fprofile-generate=$(PROFILE_DIR) -DNO_NORETURN=1
EXTLIBS += -lgcov
export CCACHE_DISABLE = t
V = 1
else
ifneq ("$(PROFILE)","")
BASIC_CFLAGS += -fprofile-use=$(PROFILE_DIR) -fprofile-correction -DNO_NORETURN=1
export CCACHE_DISABLE = t
V = 1
endif
endif
# Shell quote (do not use $(call) to accommodate ancient setups):
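# For example, a value such as  it's here  becomes  it'\''s here , which
# can then be interpolated safely inside a single-quoted shell word.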
ETC_GITCONFIG_SQ = $(subst ','\'',$(ETC_GITCONFIG))
ETC_GITATTRIBUTES_SQ = $(subst ','\'',$(ETC_GITATTRIBUTES))
DESTDIR_SQ = $(subst ','\'',$(DESTDIR))
NO_GETTEXT_SQ = $(subst ','\'',$(NO_GETTEXT))
bindir_SQ = $(subst ','\'',$(bindir))
bindir_relative_SQ = $(subst ','\'',$(bindir_relative))
mandir_SQ = $(subst ','\'',$(mandir))
mandir_relative_SQ = $(subst ','\'',$(mandir_relative))
infodir_relative_SQ = $(subst ','\'',$(infodir_relative))
perllibdir_SQ = $(subst ','\'',$(perllibdir))
localedir_SQ = $(subst ','\'',$(localedir))
localedir_relative_SQ = $(subst ','\'',$(localedir_relative))
gitexecdir_SQ = $(subst ','\'',$(gitexecdir))
gitexecdir_relative_SQ = $(subst ','\'',$(gitexecdir_relative))
template_dir_SQ = $(subst ','\'',$(template_dir))
htmldir_relative_SQ = $(subst ','\'',$(htmldir_relative))
prefix_SQ = $(subst ','\'',$(prefix))
perllibdir_relative_SQ = $(subst ','\'',$(perllibdir_relative))
gitwebdir_SQ = $(subst ','\'',$(gitwebdir))
gitwebstaticdir_SQ = $(subst ','\'',$(gitwebstaticdir))
SHELL_PATH_SQ = $(subst ','\'',$(SHELL_PATH))
TEST_SHELL_PATH_SQ = $(subst ','\'',$(TEST_SHELL_PATH))
PERL_PATH_SQ = $(subst ','\'',$(PERL_PATH))
PYTHON_PATH_SQ = $(subst ','\'',$(PYTHON_PATH))
TCLTK_PATH_SQ = $(subst ','\'',$(TCLTK_PATH))
DIFF_SQ = $(subst ','\'',$(DIFF))
PERLLIB_EXTRA_SQ = $(subst ','\'',$(PERLLIB_EXTRA))
# RUNTIME_PREFIX's resolution logic requires resource paths to be expressed
# relative to each other and share an installation path.
#
# This is a dependency in:
# - Git's binary RUNTIME_PREFIX logic (see "exec_cmd.c").
# - The runtime prefix Perl header (see
# "perl/header_templates/runtime_prefix.template.pl").
ifdef RUNTIME_PREFIX
ifneq ($(filter /%,$(firstword $(gitexecdir_relative))),)
$(error RUNTIME_PREFIX requires a relative gitexecdir, not: $(gitexecdir))
endif
ifneq ($(filter /%,$(firstword $(localedir_relative))),)
$(error RUNTIME_PREFIX requires a relative localedir, not: $(localedir))
endif
ifndef NO_PERL
ifneq ($(filter /%,$(firstword $(perllibdir_relative))),)
$(error RUNTIME_PREFIX requires a relative perllibdir, not: $(perllibdir))
endif
endif
endif
# We must filter out any object files from $(GITLIBS),
# as it is typically used like:
#
# foo: foo.o $(GITLIBS)
# $(CC) $(filter %.o,$^) $(LIBS)
#
# where we use it as a dependency. Since we also pull object files
# from the dependency list, that would make each entry appear twice.
LIBS = $(filter-out %.o, $(GITLIBS)) $(EXTLIBS)
BASIC_CFLAGS += $(COMPAT_CFLAGS)
LIB_OBJS += $(COMPAT_OBJS)
# Quote for C
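# For example, a DEFAULT_EDITOR of  c:\editor  first becomes the C string
# literal  "c:\\editor"  (backslashes doubled, double quotes escaped),
# which is then single-quote escaped once more for the shell.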
ifdef DEFAULT_EDITOR
DEFAULT_EDITOR_CQ = "$(subst ",\",$(subst \,\\,$(DEFAULT_EDITOR)))"
DEFAULT_EDITOR_CQ_SQ = $(subst ','\'',$(DEFAULT_EDITOR_CQ))
BASIC_CFLAGS += -DDEFAULT_EDITOR='$(DEFAULT_EDITOR_CQ_SQ)'
endif
ifdef DEFAULT_PAGER
DEFAULT_PAGER_CQ = "$(subst ",\",$(subst \,\\,$(DEFAULT_PAGER)))"
DEFAULT_PAGER_CQ_SQ = $(subst ','\'',$(DEFAULT_PAGER_CQ))
BASIC_CFLAGS += -DDEFAULT_PAGER='$(DEFAULT_PAGER_CQ_SQ)'
endif
ifdef SHELL_PATH
SHELL_PATH_CQ = "$(subst ",\",$(subst \,\\,$(SHELL_PATH)))"
SHELL_PATH_CQ_SQ = $(subst ','\'',$(SHELL_PATH_CQ))
BASIC_CFLAGS += -DSHELL_PATH='$(SHELL_PATH_CQ_SQ)'
endif
GIT_USER_AGENT_SQ = $(subst ','\'',$(GIT_USER_AGENT))
GIT_USER_AGENT_CQ = "$(subst ",\",$(subst \,\\,$(GIT_USER_AGENT)))"
GIT_USER_AGENT_CQ_SQ = $(subst ','\'',$(GIT_USER_AGENT_CQ))
GIT-USER-AGENT: FORCE
@if test x'$(GIT_USER_AGENT_SQ)' != x"`cat GIT-USER-AGENT 2>/dev/null`"; then \
echo '$(GIT_USER_AGENT_SQ)' >GIT-USER-AGENT; \
fi
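# The FORCE rule above rewrites GIT-USER-AGENT only when the value has
# actually changed, so targets that depend on it are rebuilt when the
# user agent changes but not on every invocation of make.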
ifdef DEFAULT_HELP_FORMAT
BASIC_CFLAGS += -DDEFAULT_HELP_FORMAT='"$(DEFAULT_HELP_FORMAT)"'
endif
ALL_CFLAGS += $(BASIC_CFLAGS)
ALL_LDFLAGS += $(BASIC_LDFLAGS)
export DIFF TAR INSTALL DESTDIR SHELL_PATH
### Build rules
SHELL = $(SHELL_PATH)
all:: shell_compatibility_test
ifeq "$(PROFILE)" "BUILD"
all:: profile
endif
profile:: profile-clean
$(MAKE) PROFILE=GEN all
$(MAKE) PROFILE=GEN -j1 test
@if test -n "$$GIT_PERF_REPO" || test -d .git; then \
$(MAKE) PROFILE=GEN -j1 perf; \
else \
echo "Skipping profile of perf tests..."; \
fi
$(MAKE) PROFILE=USE all
profile-fast: profile-clean
$(MAKE) PROFILE=GEN all
$(MAKE) PROFILE=GEN -j1 perf
$(MAKE) PROFILE=USE all
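# When an executable suffix $X is in effect (e.g. ".exe" on Windows), the
# loop below removes stale suffix-less leftovers from an earlier build,
# skipping names that are directories or already the same file as the
# suffixed binary.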
all:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) GIT-BUILD-OPTIONS
ifneq (,$X)
$(QUIET_BUILT_IN)$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_COMMANDS_TO_INSTALL) $(OTHER_PROGRAMS))), if test ! -d '$p' && test ! '$p' -ef '$p$X'; then $(RM) '$p'; fi;)
endif
all::
ifndef NO_TCLTK
$(QUIET_SUBDIR0)git-gui $(QUIET_SUBDIR1) gitexecdir='$(gitexec_instdir_SQ)' all
$(QUIET_SUBDIR0)gitk-git $(QUIET_SUBDIR1) all
endif
$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
# If you add a new fuzzer, please also make sure to run it in
# ci/run-build-and-minimal-fuzzers.sh so that it keeps linking and
# running in the future (registration is illustrated below the list).
FUZZ_OBJS += oss-fuzz/dummy-cmd-main.o
FUZZ_OBJS += oss-fuzz/fuzz-commit-graph.o
FUZZ_OBJS += oss-fuzz/fuzz-config.o
FUZZ_OBJS += oss-fuzz/fuzz-date.o
FUZZ_OBJS += oss-fuzz/fuzz-pack-headers.o
FUZZ_OBJS += oss-fuzz/fuzz-pack-idx.o
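# A new fuzzer would be registered the same way; e.g. a hypothetical
# "fuzz-foo" living in oss-fuzz/fuzz-foo.c would need:
#
#   FUZZ_OBJS += oss-fuzz/fuzz-foo.o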
.PHONY: fuzz-objs
fuzz-objs: $(FUZZ_OBJS)
# Always build fuzz objects even if not testing, to prevent bit-rot.
all:: $(FUZZ_OBJS)
FUZZ_PROGRAMS += $(patsubst %.o,%,$(filter-out %dummy-cmd-main.o,$(FUZZ_OBJS)))
# Build fuzz programs when possible, even without the necessary fuzzing support,
# to prevent bit-rot.
ifdef LINK_FUZZ_PROGRAMS
all:: $(FUZZ_PROGRAMS)
endif
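# Sanity check for the configured shell: '$(:)' is POSIX command
# substitution of the no-op command, so a pre-POSIX /bin/sh that cannot
# parse $(...) fails here, and the target name doubles as the error
# message telling the user what to fix.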
please_set_SHELL_PATH_to_a_more_modern_shell:
@$$(:)
shell_compatibility_test: please_set_SHELL_PATH_to_a_more_modern_shell
strip: $(PROGRAMS) git$X
$(STRIP) $(STRIP_OPTS) $^
### Target-specific flags and dependencies
# The generic compilation pattern rule and automatically
# computed header dependencies (falling back to a dependency on
# LIB_H) are enough to describe how most targets should be built,
# but some targets are special enough to need something a little
# different.
#
# - When a source file "foo.c" #includes a generated header file,
# we need to list that dependency for the "foo.o" target.
#
# We also list it for other targets that are built from foo.c,
# like "foo.sp" and "foo.s", even though that is easy to forget
# because the generated header is usually already present after a
# regular build attempt.
#
# - Some code depends on configuration kept in makefile
# variables. The target-specific variable EXTRA_CPPFLAGS can
# be used to convey that information to the C preprocessor
# using -D options.
#
# The "foo.o" target should have a corresponding dependency on
# a file that changes when the value of the makefile variable
# changes. For example, targets making use of the
# $(GIT_VERSION) variable depend on GIT-VERSION-FILE.
#
# Technically the ".sp" and ".s" targets do not need this
# dependency because they are force-built, but they get the
# same dependency for consistency. This way, you do not have to
# know how each target is implemented. And it means the
# dependencies here will not need to change if the force-build
# details change some day.
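# As an illustration only (a hypothetical "foo.c" that includes a
# generated header and embeds a configurable path), the two conventions
# above combine like this:
#
#   foo.sp foo.s foo.o: some-generated-header.h GIT-FOO-SETTING
#   foo.sp foo.s foo.o: EXTRA_CPPFLAGS = -DFOO_PATH='"$(foo_path_SQ)"'
#
# where GIT-FOO-SETTING stands in for a file that changes whenever the
# relevant makefile variable does, as GIT-VERSION-FILE does for
# $(GIT_VERSION).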
git.sp git.s git.o: GIT-PREFIX
git.sp git.s git.o: EXTRA_CPPFLAGS = \
'-DGIT_HTML_PATH="$(htmldir_relative_SQ)"' \
'-DGIT_MAN_PATH="$(mandir_relative_SQ)"' \
'-DGIT_INFO_PATH="$(infodir_relative_SQ)"'
git$X: git.o GIT-LDFLAGS $(BUILTIN_OBJS) $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
$(filter %.o,$^) $(LIBS)
help.sp help.s help.o: command-list.h
builtin/bugreport.sp builtin/bugreport.s builtin/bugreport.o: hook-list.h
builtin/help.sp builtin/help.s builtin/help.o: config-list.h GIT-PREFIX
builtin/help.sp builtin/help.s builtin/help.o: EXTRA_CPPFLAGS = \
'-DGIT_HTML_PATH="$(htmldir_relative_SQ)"' \
'-DGIT_MAN_PATH="$(mandir_relative_SQ)"' \
'-DGIT_INFO_PATH="$(infodir_relative_SQ)"'
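# PAGER_ENV has to survive two levels of quoting: PAGER_ENV_CQ wraps it
# as a C string literal (escaping backslashes and double quotes), and
# PAGER_ENV_CQ_SQ additionally escapes single quotes so the result can
# be passed on the shell command line inside a -D option.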
PAGER_ENV_SQ = $(subst ','\'',$(PAGER_ENV))
PAGER_ENV_CQ = "$(subst ",\",$(subst \,\\,$(PAGER_ENV)))"
PAGER_ENV_CQ_SQ = $(subst ','\'',$(PAGER_ENV_CQ))
pager.sp pager.s pager.o: EXTRA_CPPFLAGS = \
-DPAGER_ENV='$(PAGER_ENV_CQ_SQ)'
version.sp version.s version.o: GIT-VERSION-FILE GIT-USER-AGENT
version.sp version.s version.o: EXTRA_CPPFLAGS = \
'-DGIT_VERSION="$(GIT_VERSION)"' \
'-DGIT_USER_AGENT=$(GIT_USER_AGENT_CQ_SQ)' \
'-DGIT_BUILT_FROM_COMMIT="$(shell \
GIT_CEILING_DIRECTORIES="$(CURDIR)/.." \
git rev-parse -q --verify HEAD 2>/dev/null)"'
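# Each built-in command is created as a hard link to the main git
# binary, falling back to a symbolic link and finally to a plain copy
# where the filesystem does not support better.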
$(BUILT_INS): git$X
$(QUIET_BUILT_IN)$(RM) $@ && \
ln $< $@ 2>/dev/null || \
ln -s $< $@ 2>/dev/null || \
cp $< $@
config-list.h: generate-configlist.sh
config-list.h: Documentation/*config.txt Documentation/config/*.txt
$(QUIET_GEN)$(SHELL_PATH) ./generate-configlist.sh >$@
command-list.h: generate-cmdlist.sh command-list.txt
command-list.h: $(wildcard Documentation/git*.txt)
$(QUIET_GEN)$(SHELL_PATH) ./generate-cmdlist.sh \
$(patsubst %,--exclude-program %,$(EXCLUDED_PROGRAMS)) \
command-list.txt >$@
hook-list.h: generate-hooklist.sh Documentation/githooks.txt
$(QUIET_GEN)$(SHELL_PATH) ./generate-hooklist.sh >$@
SCRIPT_DEFINES = $(SHELL_PATH_SQ):$(DIFF_SQ):\
$(localedir_SQ):$(USE_GETTEXT_SCHEME):$(SANE_TOOL_PATH_SQ):\
$(gitwebdir_SQ):$(PERL_PATH_SQ):$(PAGER_ENV):\
$(perllibdir_SQ)
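# GIT-SCRIPT-DEFINES records the script-related build parameters; it is
# rewritten (invalidating every generated script that depends on it)
# only when $(SCRIPT_DEFINES) differs from the recorded value.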
GIT-SCRIPT-DEFINES: FORCE
@FLAGS='$(SCRIPT_DEFINES)'; \
if test x"$$FLAGS" != x"`cat $@ 2>/dev/null`" ; then \
echo >&2 " * new script parameters"; \
echo "$$FLAGS" >$@; \
fi
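# cmd_munge_script rewrites the shebang line and the @@PLACEHOLDER@@
# tokens of a *.sh source with the configured paths and settings,
# writing the result to a temporary "$@+" file for the caller to
# finalize.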
define cmd_munge_script
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@SHELL_PATH@|$(SHELL_PATH_SQ)|' \
-e 's|@@DIFF@@|$(DIFF_SQ)|' \
-e 's|@@LOCALEDIR@@|$(localedir_SQ)|g' \
-e 's/@@USE_GETTEXT_SCHEME@@/$(USE_GETTEXT_SCHEME)/g' \
-e $(BROKEN_PATH_FIX) \
-e 's|@@GITWEBDIR@@|$(gitwebdir_SQ)|g' \
-e 's|@@PERL@@|$(PERL_PATH_SQ)|g' \
-e 's|@@PAGER_ENV@@|$(PAGER_ENV_SQ)|g' \
$@.sh >$@+
endef
$(SCRIPT_SH_GEN) : % : %.sh GIT-SCRIPT-DEFINES
$(QUIET_GEN)$(cmd_munge_script) && \
chmod +x $@+ && \
mv $@+ $@
$(SCRIPT_LIB) : % : %.sh GIT-SCRIPT-DEFINES
$(QUIET_GEN)$(cmd_munge_script) && \
mv $@+ $@
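# For the Windows resource file, tr splits $(GIT_VERSION) at dots and
# letters, the appended zeros pad short version strings, and the first
# four words are paired with -DMAJOR=/-DMINOR=/-DMICRO=/-DPATCHLEVEL=.
# E.g. a version of "2.45.1" yields -DMAJOR=2 -DMINOR=45 -DMICRO=1
# -DPATCHLEVEL=0.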
git.res: git.rc GIT-VERSION-FILE GIT-PREFIX
$(QUIET_RC)$(RC) \
$(join -DMAJOR= -DMINOR= -DMICRO= -DPATCHLEVEL=, $(wordlist 1, 4, \
$(shell echo $(GIT_VERSION) 0 0 0 0 | tr '.a-zA-Z-' ' '))) \
-DGIT_VERSION="\\\"$(GIT_VERSION)\\\"" -i $< -o $@
# This makes sure we depend on the NO_PERL setting itself.
$(SCRIPT_PERL_GEN): GIT-BUILD-OPTIONS
# Used for substitution in Perl modules. Disabled when using RUNTIME_PREFIX,
# since the locale directory is then injected at runtime by the Perl header.
perl_localedir_SQ = $(localedir_SQ)
ifndef NO_PERL
PERL_HEADER_TEMPLATE = perl/header_templates/fixed_prefix.template.pl
PERL_DEFINES =
PERL_DEFINES += $(PERL_PATH_SQ)
PERL_DEFINES += $(PERLLIB_EXTRA_SQ)
PERL_DEFINES += $(perllibdir_SQ)
PERL_DEFINES += $(RUNTIME_PREFIX)
PERL_DEFINES += $(NO_PERL_CPAN_FALLBACKS)
PERL_DEFINES += $(NO_GETTEXT)
# Support Perl runtime prefix. In this mode, a different header is installed
# into Perl scripts.
ifdef RUNTIME_PREFIX
PERL_HEADER_TEMPLATE = perl/header_templates/runtime_prefix.template.pl
# Don't export a fixed $(localedir) path; it will be resolved by the Perl header
# at runtime.
perl_localedir_SQ =
endif
PERL_DEFINES += $(gitexecdir) $(perllibdir) $(localedir)
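# Generate each Perl script from its *.perl source: rewrite the shebang
# on line 1, splice in the generated GIT-PERL-HEADER right after it,
# and substitute @@GIT_VERSION@@ throughout.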
$(SCRIPT_PERL_GEN): % : %.perl GIT-PERL-DEFINES GIT-PERL-HEADER GIT-VERSION-FILE
$(QUIET_GEN) \
sed -e '1{' \
-e ' s|#!.*perl|#!$(PERL_PATH_SQ)|' \
-e ' r GIT-PERL-HEADER' \
-e ' G' \
-e '}' \
-e 's/@@GIT_VERSION@@/$(GIT_VERSION)/g' \
$< >$@+ && \
chmod +x $@+ && \
mv $@+ $@
PERL_DEFINES := $(subst $(space),:,$(PERL_DEFINES))
GIT-PERL-DEFINES: FORCE
@FLAGS='$(PERL_DEFINES)'; \
if test x"$$FLAGS" != x"`cat $@ 2>/dev/null`" ; then \
echo >&2 " * new perl-specific parameters"; \
echo "$$FLAGS" >$@; \
fi
GIT-PERL-HEADER: $(PERL_HEADER_TEMPLATE) GIT-PERL-DEFINES Makefile
$(QUIET_GEN) \
INSTLIBDIR='$(perllibdir_SQ)' && \
INSTLIBDIR_EXTRA='$(PERLLIB_EXTRA_SQ)' && \
INSTLIBDIR="$$INSTLIBDIR$${INSTLIBDIR_EXTRA:+:$$INSTLIBDIR_EXTRA}" && \
sed -e 's=@@PATHSEP@@=$(pathsep)=g' \
-e "s=@@INSTLIBDIR@@=$$INSTLIBDIR=g" \
-e 's=@@PERLLIBDIR_REL@@=$(perllibdir_relative_SQ)=g' \
-e 's=@@GITEXECDIR_REL@@=$(gitexecdir_relative_SQ)=g' \
-e 's=@@LOCALEDIR_REL@@=$(localedir_relative_SQ)=g' \
$< >$@+ && \
mv $@+ $@
.PHONY: perllibdir
perllibdir:
@echo '$(perllibdir_SQ)'
git-instaweb: git-instaweb.sh GIT-SCRIPT-DEFINES
$(QUIET_GEN)$(cmd_munge_script) && \
chmod +x $@+ && \
mv $@+ $@
else # NO_PERL
$(SCRIPT_PERL_GEN) git-instaweb: % : unimplemented.sh
$(QUIET_GEN) \
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@REASON@@|NO_PERL=$(NO_PERL)|g' \
unimplemented.sh >$@+ && \
chmod +x $@+ && \
mv $@+ $@
endif # NO_PERL
# This makes sure we depend on the NO_PYTHON setting itself.
$(SCRIPT_PYTHON_GEN): GIT-BUILD-OPTIONS
ifndef NO_PYTHON
$(SCRIPT_PYTHON_GEN): GIT-CFLAGS GIT-PREFIX GIT-PYTHON-VARS
$(SCRIPT_PYTHON_GEN): % : %.py
$(QUIET_GEN) \
sed -e '1s|#!.*python|#!$(PYTHON_PATH_SQ)|' \
$< >$@+ && \
chmod +x $@+ && \
mv $@+ $@
else # NO_PYTHON
$(SCRIPT_PYTHON_GEN): % : unimplemented.sh
$(QUIET_GEN) \
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@REASON@@|NO_PYTHON=$(NO_PYTHON)|g' \
unimplemented.sh >$@+ && \
chmod +x $@+ && \
mv $@+ $@
endif # NO_PYTHON
CONFIGURE_RECIPE = sed -e 's/@@GIT_VERSION@@/$(GIT_VERSION)/g' \
configure.ac >configure.ac+ && \
autoconf -o configure configure.ac+ && \
$(RM) configure.ac+
configure: configure.ac GIT-VERSION-FILE
$(QUIET_GEN)$(CONFIGURE_RECIPE)
ifdef AUTOCONFIGURED
# We avoid depending on 'configure' here, because it gets rebuilt
# every time GIT-VERSION-FILE is modified, only to update the embedded
# version number string, which config.status does not care about. We
# do want to recheck when the platform/environment detection logic
# changes, hence this depends on configure.ac.
config.status: configure.ac
$(QUIET_GEN)$(CONFIGURE_RECIPE) && \
if test -f config.status; then \
./config.status --recheck; \
else \
./configure; \
fi
reconfigure config.mak.autogen: config.status
$(QUIET_GEN)./config.status
.PHONY: reconfigure # This is a convenience target.
endif
XDIFF_OBJS += xdiff/xdiffi.o
XDIFF_OBJS += xdiff/xemit.o
XDIFF_OBJS += xdiff/xhistogram.o
XDIFF_OBJS += xdiff/xmerge.o
XDIFF_OBJS += xdiff/xpatience.o
XDIFF_OBJS += xdiff/xprepare.o
XDIFF_OBJS += xdiff/xutils.o
.PHONY: xdiff-objs
xdiff-objs: $(XDIFF_OBJS)
REFTABLE_OBJS += reftable/basics.o
REFTABLE_OBJS += reftable/error.o
REFTABLE_OBJS += reftable/block.o
REFTABLE_OBJS += reftable/blocksource.o
REFTABLE_OBJS += reftable/iter.o
REFTABLE_OBJS += reftable/publicbasics.o
REFTABLE_OBJS += reftable/merged.o
REFTABLE_OBJS += reftable/pq.o
REFTABLE_OBJS += reftable/reader.o
REFTABLE_OBJS += reftable/record.o
REFTABLE_OBJS += reftable/refname.o
REFTABLE_OBJS += reftable/generic.o
REFTABLE_OBJS += reftable/stack.o
REFTABLE_OBJS += reftable/tree.o
REFTABLE_OBJS += reftable/writer.o
REFTABLE_TEST_OBJS += reftable/basics_test.o
REFTABLE_TEST_OBJS += reftable/block_test.o
REFTABLE_TEST_OBJS += reftable/dump.o
REFTABLE_TEST_OBJS += reftable/merged_test.o
REFTABLE_TEST_OBJS += reftable/pq_test.o
REFTABLE_TEST_OBJS += reftable/record_test.o
REFTABLE_TEST_OBJS += reftable/readwrite_test.o
REFTABLE_TEST_OBJS += reftable/refname_test.o
REFTABLE_TEST_OBJS += reftable/stack_test.o
REFTABLE_TEST_OBJS += reftable/test_framework.o
REFTABLE_TEST_OBJS += reftable/tree_test.o
TEST_OBJS := $(patsubst %$X,%.o,$(TEST_PROGRAMS)) $(patsubst %,t/helper/%,$(TEST_BUILTINS_OBJS))
.PHONY: test-objs
test-objs: $(TEST_OBJS)
GIT_OBJS += $(LIB_OBJS)
GIT_OBJS += $(BUILTIN_OBJS)
GIT_OBJS += common-main.o
GIT_OBJS += git.o
.PHONY: git-objs
git-objs: $(GIT_OBJS)
SCALAR_OBJS += scalar.o
.PHONY: scalar-objs
scalar-objs: $(SCALAR_OBJS)
OBJECTS += $(GIT_OBJS)
OBJECTS += $(SCALAR_OBJS)
OBJECTS += $(PROGRAM_OBJS)
OBJECTS += $(TEST_OBJS)
OBJECTS += $(XDIFF_OBJS)
OBJECTS += $(FUZZ_OBJS)
OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
OBJECTS += $(UNIT_TEST_OBJS)
ifndef NO_CURL
OBJECTS += http.o http-walker.o remote-curl.o
endif
.PHONY: objects
objects: $(OBJECTS)
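# Map every object to its dependency file (e.g. builtin/add.o is
# tracked in builtin/.depend/add.o.d) and collect the set of .depend/
# directories that have to exist.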
dep_files := $(foreach f,$(OBJECTS),$(dir $f).depend/$(notdir $f).d)
dep_dirs := $(addsuffix .depend,$(sort $(dir $(OBJECTS))))
ifeq ($(COMPUTE_HEADER_DEPENDENCIES),yes)
$(dep_dirs):
@mkdir -p $@
missing_dep_dirs := $(filter-out $(wildcard $(dep_dirs)),$(dep_dirs))
dep_file = $(dir $@).depend/$(notdir $@).d
dep_args = -MF $(dep_file) -MQ $@ -MMD -MP
endif
ifneq ($(COMPUTE_HEADER_DEPENDENCIES),yes)
missing_dep_dirs =
dep_args =
endif
compdb_dir = compile_commands
ifeq ($(GENERATE_COMPILATION_DATABASE),yes)
missing_compdb_dir = $(compdb_dir)
$(missing_compdb_dir):
@mkdir -p $@
compdb_file = $(compdb_dir)/$(subst /,-,$@.json)
compdb_args = -MJ $(compdb_file)
else
missing_compdb_dir =
compdb_args =
endif
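# The central compilation rule: every object is built from its .c file,
# emitting header dependencies ($(dep_args)) and, when enabled, a
# compilation-database fragment ($(compdb_args)) as side effects.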
$(OBJECTS): %.o: %.c GIT-CFLAGS $(missing_dep_dirs) $(missing_compdb_dir)
$(QUIET_CC)$(CC) -o $*.o -c $(dep_args) $(compdb_args) $(ALL_CFLAGS) $(EXTRA_CPPFLAGS) $<
%.s: %.c GIT-CFLAGS FORCE
$(QUIET_CC)$(CC) -o $@ -S $(ALL_CFLAGS) $(EXTRA_CPPFLAGS) $<
ifdef USE_COMPUTED_HEADER_DEPENDENCIES
# Take advantage of gcc's on-the-fly dependency generation
# See <https://gcc.gnu.org/gcc-3.0/features.html>.
dep_files_present := $(wildcard $(dep_files))
ifneq ($(dep_files_present),)
include $(dep_files_present)
endif
else
$(OBJECTS): $(LIB_H) $(GENERATED_H)
endif
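# Stitch the per-object JSON fragments written via -MJ into a single
# compile_commands.json: sed prepends the opening '[' and turns the
# final fragment's trailing comma into the closing ']'.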
ifeq ($(GENERATE_COMPILATION_DATABASE),yes)
all:: compile_commands.json
compile_commands.json:
$(QUIET_GEN)sed -e '1s/^/[/' -e '$$s/,$$/]/' $(compdb_dir)/*.o.json > $@+
@if test -s $@+; then mv $@+ $@; else $(RM) $@+; fi
endif
exec-cmd.sp exec-cmd.s exec-cmd.o: GIT-PREFIX
exec-cmd.sp exec-cmd.s exec-cmd.o: EXTRA_CPPFLAGS = \
'-DGIT_EXEC_PATH="$(gitexecdir_SQ)"' \
'-DGIT_LOCALE_PATH="$(localedir_relative_SQ)"' \
'-DBINDIR="$(bindir_relative_SQ)"' \
'-DFALLBACK_RUNTIME_PREFIX="$(prefix_SQ)"'
setup.sp setup.s setup.o: GIT-PREFIX
setup.sp setup.s setup.o: EXTRA_CPPFLAGS = \
-DDEFAULT_GIT_TEMPLATE_DIR='"$(template_dir_SQ)"'
config.sp config.s config.o: GIT-PREFIX
config.sp config.s config.o: EXTRA_CPPFLAGS = \
-DETC_GITCONFIG='"$(ETC_GITCONFIG_SQ)"'
attr.sp attr.s attr.o: GIT-PREFIX
attr.sp attr.s attr.o: EXTRA_CPPFLAGS = \
-DETC_GITATTRIBUTES='"$(ETC_GITATTRIBUTES_SQ)"'
gettext.sp gettext.s gettext.o: GIT-PREFIX
gettext.sp gettext.s gettext.o: EXTRA_CPPFLAGS = \
-DGIT_LOCALE_PATH='"$(localedir_relative_SQ)"'
http-push.sp http.sp http-walker.sp remote-curl.sp imap-send.sp: SP_EXTRA_FLAGS += \
-DCURL_DISABLE_TYPECHECK
pack-revindex.sp: SP_EXTRA_FLAGS += -Wno-memcpy-max-count
ifdef NO_EXPAT
http-walker.sp http-walker.s http-walker.o: EXTRA_CPPFLAGS = -DNO_EXPAT
endif
ifdef NO_REGEX
compat/regex/regex.sp compat/regex/regex.o: EXTRA_CPPFLAGS = \
-DGAWK -DNO_MBSUPPORT
endif
ifdef USE_NED_ALLOCATOR
compat/nedmalloc/nedmalloc.sp compat/nedmalloc/nedmalloc.o: EXTRA_CPPFLAGS = \
-DNDEBUG -DREPLACE_SYSTEM_ALLOCATOR
compat/nedmalloc/nedmalloc.sp: SP_EXTRA_FLAGS += -Wno-non-pointer-null
endif
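# headless-git is a Windows-only helper built from compat/win32/headless.c;
# -mwindows links it for the GUI subsystem so that (as the name suggests)
# it can run without opening a console window.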
headless-git.o: compat/win32/headless.c GIT-CFLAGS
$(QUIET_CC)$(CC) $(ALL_CFLAGS) $(COMPAT_CFLAGS) \
-fno-stack-protector -o $@ -c -Wall -Wwrite-strings $<
headless-git$X: headless-git.o git.res GIT-LDFLAGS
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) $(ALL_LDFLAGS) -mwindows -o $@ $< git.res
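# Pattern rule for most built helpers: git-foo$X is linked from foo.o
# plus the Git libraries. $(filter %.o,$^) keeps non-object
# prerequisites such as GIT-LDFLAGS out of the link command; the
# libraries come back in via $(LIBS).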
git-%$X: %.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) $(LIBS)
git-imap-send$X: imap-send.o $(IMAP_SEND_BUILDDEPS) GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(IMAP_SEND_LDFLAGS) $(LIBS)
git-http-fetch$X: http.o http-walker.o http-fetch.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(CURL_LIBCURL) $(LIBS)
git-http-push$X: http.o http-push.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(CURL_LIBCURL) $(EXPAT_LIBEXPAT) $(LIBS)
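# Each remote-curl alias is the same program as the primary: prefer a
# hard link, fall back to a symlink, and finally to a plain copy on
# filesystems that support neither.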
$(REMOTE_CURL_ALIASES): $(REMOTE_CURL_PRIMARY)
$(QUIET_LNCP)$(RM) $@ && \
ln $< $@ 2>/dev/null || \
ln -s $< $@ 2>/dev/null || \
cp $< $@
$(REMOTE_CURL_PRIMARY): remote-curl.o http.o http-walker.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(CURL_LIBCURL) $(EXPAT_LIBEXPAT) $(LIBS)
scalar$X: scalar.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
$(filter %.o,$^) $(LIBS)
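# Remove each archive before running $(AR): the "c" in ARFLAGS merely
# suppresses the creation diagnostic and does not truncate an existing
# archive, so stale objects from since-renamed source files would
# otherwise linger and trigger duplicate-symbol link errors.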
$(LIB_FILE): $(LIB_OBJS)
$(QUIET_AR)$(RM) $@ && $(AR) $(ARFLAGS) $@ $^
$(XDIFF_LIB): $(XDIFF_OBJS)
$(QUIET_AR)$(RM) $@ && $(AR) $(ARFLAGS) $@ $^
$(REFTABLE_LIB): $(REFTABLE_OBJS)
$(QUIET_AR)$(RM) $@ && $(AR) $(ARFLAGS) $@ $^
$(REFTABLE_TEST_LIB): $(REFTABLE_TEST_OBJS)
$(QUIET_AR)$(RM) $@ && $(AR) $(ARFLAGS) $@ $^
export DEFAULT_EDITOR DEFAULT_PAGER
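# Record the list of excluded programs for the Documentation build,
# rewriting the tracking file only when the list actually changes so
# that documentation targets are not rebuilt needlessly.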
Documentation/GIT-EXCLUDED-PROGRAMS: FORCE
@EXCLUDED='EXCLUDED_PROGRAMS := $(EXCLUDED_PROGRAMS)'; \
if test x"$$EXCLUDED" != \
x"`cat Documentation/GIT-EXCLUDED-PROGRAMS 2>/dev/null`" ; then \
echo >&2 " * new documentation flags"; \
echo "$$EXCLUDED" >Documentation/GIT-EXCLUDED-PROGRAMS; \
fi
.PHONY: doc man man-perl html info pdf
doc: man-perl
$(MAKE) -C Documentation all
man: man-perl
$(MAKE) -C Documentation man
man-perl: perl/build/man/man3/Git.3pm
html:
$(MAKE) -C Documentation html
info:
$(MAKE) -C Documentation info
pdf:
$(MAKE) -C Documentation pdf
XGETTEXT_FLAGS = \
--force-po \
--add-comments=TRANSLATORS: \
--msgid-bugs-address="Git Mailing List <git@vger.kernel.org>" \
--package-name=Git
XGETTEXT_FLAGS_C = $(XGETTEXT_FLAGS) --language=C \
--keyword=_ --keyword=N_ --keyword="Q_:1,2"
XGETTEXT_FLAGS_SH = $(XGETTEXT_FLAGS) --language=Shell \
--keyword=gettextln --keyword=eval_gettextln
XGETTEXT_FLAGS_PERL = $(XGETTEXT_FLAGS) --language=Perl \
--keyword=__ --keyword=N__ --keyword="__n:1,2"
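# The --keyword flags name the marking functions whose arguments
# xgettext extracts: e.g. _() and N_() in C, gettextln in shell, and
# __() in Perl; the ":1,2" suffix on Q_ and __n marks arguments one
# and two as the singular and plural message forms.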
MSGMERGE_FLAGS = --add-location --backup=off --update
LOCALIZED_C = $(sort $(FOUND_C_SOURCES) $(FOUND_H_SOURCES) $(GENERATED_H))
LOCALIZED_SH = $(sort $(SCRIPT_SH) git-sh-setup.sh)
LOCALIZED_PERL = $(sort $(SCRIPT_PERL))
ifdef XGETTEXT_INCLUDE_TESTS
LOCALIZED_C += t/t0200/test.c
LOCALIZED_SH += t/t0200/test.sh
LOCALIZED_PERL += t/t0200/test.perl
endif
Makefile: have "make pot" not "reset --hard" Before commit fc0fd5b23b (Makefile: help gettext tools to cope with our custom PRItime format, 2017-07-20), we'd consider source files as-is with gettext, but because we need to understand PRItime in the same way that gettext itself understands PRIuMAX, we'd first check if we had a clean checkout, then munge all of the processed files in-place with "sed", generate "po/git.pot", and then finally "reset --hard" to undo our changes. By generating "pot" snippets in ".build/pot/po" for each source file and rewriting certain source files with PRItime macros to temporary files in ".build/pot/po", we can avoid running "make pot" by altering files in place and doing a "reset --hard" afterwards. This speed of "make pot" is slower than before on an initial run, because we run "xgettext" many times (once per source file), but it can be boosted by parallelization. It is *much* faster for incremental runs, and will allow us to implement related targets in subsequent commits. When the "pot" target was originally added in cd5513a7168 (i18n: Makefile: "pot" target to extract messages marked for translation, 2011-02-22) it behaved like a "normal" target. I.e. we'd skip the re-generation of the po/git.pot if nothing had to be done. Then after po/git.pot was checked in in dce37b66fb0 (l10n: initial git.pot for 1.7.10 upcoming release, 2012-02-13) the target was broken until 1f31963e921 (i18n: treat "make pot" as an explicitly-invoked target, 2014-08-22) when it was made to depend on "FORCE". I.e. the Makefile's dependency resolution inherently can't handle incremental building when the target file may be updated by git (or something else external to "make"). But this case no longer applies, so FORCE is no longer needed. That out of the way, the main logic change here is getting rid of the "reset --hard": We'll generate intermediate ".build/pot/po/%.po" files from "%", which is handy to see at a glance what strings (if any) in a given file are marked for translation: $ make .build/pot/po/pretty.c.po [...] $ cat .build/pot/po/pretty.c.po #: pretty.c:1051 msgid "unable to parse --pretty format" msgstr "" $ For these C source files which contain the PRItime macros, we will create temporary munged "*.c" files in a tree in ".build/pot/po" corresponding to our source tree, and have "xgettext" consider those. The rule needs to be careful to "(cd .build/pot/po && ...)", because otherwise the comments in the po/git.pot file wouldn't refer to the correct source locations (they'd be prefixed with ".build/pot/po"). These temporary munged "*.c” files will be removed immediately after the corresponding po files are generated, because some development tools cannot ignore the duplicate source files in the ".build" directory according to the ".gitignore" file, and that may cause trouble. The output of the generated po/git.pot file is changed in one minor way: Because we're using msgcat(1) instead of xgettext(1) to concatenate the output we'll now disambiguate where "TRANSLATORS" comments come from, in cases where a message is the same in N files, and either only one has a "TRANSLATORS" comment, or they're different. E.g. for the "Your edited hunk[...]" message we'll now apply this change (comment content elided): +#. #-#-#-#-# add-patch.c.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] +#. #-#-#-#-# git-add--interactive.perl.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] 
#: add-patch.c:1253 git-add--interactive.perl:1244 msgid "" "Your edited hunk does not apply. Edit again (saying \"no\" discards!) [y/n]? " msgstr "" There are six such changes, and they all make the context more understandable, as msgcat(1) is better at handling these edge cases than xgettext(1)'s previously used "--join-existing" flag. But filenames in the above disambiguation lines of extracted-comments have an extra ".po" extension compared to the filenames at the file locations. While we could rename the intermediate ".build/pot/po/%.po" files without the ".po" extension to use more intuitive filenames in the disambiguation lines of extracted-comments, but that will confuse developer tools with lots of invalid C or other source files in ".build/pot/po" directory. The addition of "--omit-header" option for xgettext makes the "pot" snippets in ".build/pot/po/*.po" smaller. But as we'll see in a subsequent commit this header behavior has been hiding an encoding-related bug from us, so let's carry it forward instead of re-generating it with xgettext(1). The "po/git.pot" file should have a header entry, because a proper header entry will increase the speed of creating a new po file using msginit and set a proper "POT-Creation-Date:" field in the header entry of a "po/XX.po" file. We use xgettext to generate a separate header file at ".build/pot/git.header" from "/dev/null", and use this header to assemble "po/git.pot". Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-26 16:50:28 +02:00
## We generate intermediate .build/pot/po/%.po files containing an
## extract of the translatable strings we find in each file in the source
## tree. We will assemble them using msgcat to create the final
## "po/git.pot" file.
LOCALIZED_ALL_GEN_PO =
LOCALIZED_C_GEN_PO = $(LOCALIZED_C:%=.build/pot/po/%.po)
LOCALIZED_ALL_GEN_PO += $(LOCALIZED_C_GEN_PO)
LOCALIZED_SH_GEN_PO = $(LOCALIZED_SH:%=.build/pot/po/%.po)
LOCALIZED_ALL_GEN_PO += $(LOCALIZED_SH_GEN_PO)
LOCALIZED_PERL_GEN_PO = $(LOCALIZED_PERL:%=.build/pot/po/%.po)
LOCALIZED_ALL_GEN_PO += $(LOCALIZED_PERL_GEN_PO)
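## A single snippet can be regenerated and inspected on its own, which
## is handy for seeing at a glance which strings a given file marks for
## translation; illustrative output:
##
##	$ make .build/pot/po/pretty.c.po
##	$ cat .build/pot/po/pretty.c.po
##	#: pretty.c:1051
##	msgid "unable to parse --pretty format"
##	msgstr ""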
## Gettext tools cannot work with our own custom PRItime type, so
## we replace PRItime with PRIuMAX. We need to update this to
## PRIdMAX if we switch to a signed type later.
Makefile: have "make pot" not "reset --hard" Before commit fc0fd5b23b (Makefile: help gettext tools to cope with our custom PRItime format, 2017-07-20), we'd consider source files as-is with gettext, but because we need to understand PRItime in the same way that gettext itself understands PRIuMAX, we'd first check if we had a clean checkout, then munge all of the processed files in-place with "sed", generate "po/git.pot", and then finally "reset --hard" to undo our changes. By generating "pot" snippets in ".build/pot/po" for each source file and rewriting certain source files with PRItime macros to temporary files in ".build/pot/po", we can avoid running "make pot" by altering files in place and doing a "reset --hard" afterwards. This speed of "make pot" is slower than before on an initial run, because we run "xgettext" many times (once per source file), but it can be boosted by parallelization. It is *much* faster for incremental runs, and will allow us to implement related targets in subsequent commits. When the "pot" target was originally added in cd5513a7168 (i18n: Makefile: "pot" target to extract messages marked for translation, 2011-02-22) it behaved like a "normal" target. I.e. we'd skip the re-generation of the po/git.pot if nothing had to be done. Then after po/git.pot was checked in in dce37b66fb0 (l10n: initial git.pot for 1.7.10 upcoming release, 2012-02-13) the target was broken until 1f31963e921 (i18n: treat "make pot" as an explicitly-invoked target, 2014-08-22) when it was made to depend on "FORCE". I.e. the Makefile's dependency resolution inherently can't handle incremental building when the target file may be updated by git (or something else external to "make"). But this case no longer applies, so FORCE is no longer needed. That out of the way, the main logic change here is getting rid of the "reset --hard": We'll generate intermediate ".build/pot/po/%.po" files from "%", which is handy to see at a glance what strings (if any) in a given file are marked for translation: $ make .build/pot/po/pretty.c.po [...] $ cat .build/pot/po/pretty.c.po #: pretty.c:1051 msgid "unable to parse --pretty format" msgstr "" $ For these C source files which contain the PRItime macros, we will create temporary munged "*.c" files in a tree in ".build/pot/po" corresponding to our source tree, and have "xgettext" consider those. The rule needs to be careful to "(cd .build/pot/po && ...)", because otherwise the comments in the po/git.pot file wouldn't refer to the correct source locations (they'd be prefixed with ".build/pot/po"). These temporary munged "*.c” files will be removed immediately after the corresponding po files are generated, because some development tools cannot ignore the duplicate source files in the ".build" directory according to the ".gitignore" file, and that may cause trouble. The output of the generated po/git.pot file is changed in one minor way: Because we're using msgcat(1) instead of xgettext(1) to concatenate the output we'll now disambiguate where "TRANSLATORS" comments come from, in cases where a message is the same in N files, and either only one has a "TRANSLATORS" comment, or they're different. E.g. for the "Your edited hunk[...]" message we'll now apply this change (comment content elided): +#. #-#-#-#-# add-patch.c.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] +#. #-#-#-#-# git-add--interactive.perl.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] 
#: add-patch.c:1253 git-add--interactive.perl:1244 msgid "" "Your edited hunk does not apply. Edit again (saying \"no\" discards!) [y/n]? " msgstr "" There are six such changes, and they all make the context more understandable, as msgcat(1) is better at handling these edge cases than xgettext(1)'s previously used "--join-existing" flag. But filenames in the above disambiguation lines of extracted-comments have an extra ".po" extension compared to the filenames at the file locations. While we could rename the intermediate ".build/pot/po/%.po" files without the ".po" extension to use more intuitive filenames in the disambiguation lines of extracted-comments, but that will confuse developer tools with lots of invalid C or other source files in ".build/pot/po" directory. The addition of "--omit-header" option for xgettext makes the "pot" snippets in ".build/pot/po/*.po" smaller. But as we'll see in a subsequent commit this header behavior has been hiding an encoding-related bug from us, so let's carry it forward instead of re-generating it with xgettext(1). The "po/git.pot" file should have a header entry, because a proper header entry will increase the speed of creating a new po file using msginit and set a proper "POT-Creation-Date:" field in the header entry of a "po/XX.po" file. We use xgettext to generate a separate header file at ".build/pot/git.header" from "/dev/null", and use this header to assemble "po/git.pot". Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-26 16:50:28 +02:00
$(LOCALIZED_C_GEN_PO): .build/pot/po/%.po: %
$(call mkdir_p_parent_template)
$(QUIET_XGETTEXT) \
if grep -q PRItime $<; then \
(\
sed -e 's|PRItime|PRIuMAX|g' <$< \
>.build/pot/po/$< && \
cd .build/pot/po && \
$(XGETTEXT) --omit-header \
-o $(@:.build/pot/po/%=%) \
$(XGETTEXT_FLAGS_C) $< && \
rm $<; \
); \
else \
$(XGETTEXT) --omit-header \
-o $@ $(XGETTEXT_FLAGS_C) $<; \
fi
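## For example, a (hypothetical) source line such as
##	die(_("timestamp too large: %"PRItime), t);
## is extracted from a munged copy under .build/pot/po/ with PRIuMAX
## substituted in; the copy is removed again once its snippet is written.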
Makefile: have "make pot" not "reset --hard" Before commit fc0fd5b23b (Makefile: help gettext tools to cope with our custom PRItime format, 2017-07-20), we'd consider source files as-is with gettext, but because we need to understand PRItime in the same way that gettext itself understands PRIuMAX, we'd first check if we had a clean checkout, then munge all of the processed files in-place with "sed", generate "po/git.pot", and then finally "reset --hard" to undo our changes. By generating "pot" snippets in ".build/pot/po" for each source file and rewriting certain source files with PRItime macros to temporary files in ".build/pot/po", we can avoid running "make pot" by altering files in place and doing a "reset --hard" afterwards. This speed of "make pot" is slower than before on an initial run, because we run "xgettext" many times (once per source file), but it can be boosted by parallelization. It is *much* faster for incremental runs, and will allow us to implement related targets in subsequent commits. When the "pot" target was originally added in cd5513a7168 (i18n: Makefile: "pot" target to extract messages marked for translation, 2011-02-22) it behaved like a "normal" target. I.e. we'd skip the re-generation of the po/git.pot if nothing had to be done. Then after po/git.pot was checked in in dce37b66fb0 (l10n: initial git.pot for 1.7.10 upcoming release, 2012-02-13) the target was broken until 1f31963e921 (i18n: treat "make pot" as an explicitly-invoked target, 2014-08-22) when it was made to depend on "FORCE". I.e. the Makefile's dependency resolution inherently can't handle incremental building when the target file may be updated by git (or something else external to "make"). But this case no longer applies, so FORCE is no longer needed. That out of the way, the main logic change here is getting rid of the "reset --hard": We'll generate intermediate ".build/pot/po/%.po" files from "%", which is handy to see at a glance what strings (if any) in a given file are marked for translation: $ make .build/pot/po/pretty.c.po [...] $ cat .build/pot/po/pretty.c.po #: pretty.c:1051 msgid "unable to parse --pretty format" msgstr "" $ For these C source files which contain the PRItime macros, we will create temporary munged "*.c" files in a tree in ".build/pot/po" corresponding to our source tree, and have "xgettext" consider those. The rule needs to be careful to "(cd .build/pot/po && ...)", because otherwise the comments in the po/git.pot file wouldn't refer to the correct source locations (they'd be prefixed with ".build/pot/po"). These temporary munged "*.c” files will be removed immediately after the corresponding po files are generated, because some development tools cannot ignore the duplicate source files in the ".build" directory according to the ".gitignore" file, and that may cause trouble. The output of the generated po/git.pot file is changed in one minor way: Because we're using msgcat(1) instead of xgettext(1) to concatenate the output we'll now disambiguate where "TRANSLATORS" comments come from, in cases where a message is the same in N files, and either only one has a "TRANSLATORS" comment, or they're different. E.g. for the "Your edited hunk[...]" message we'll now apply this change (comment content elided): +#. #-#-#-#-# add-patch.c.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] +#. #-#-#-#-# git-add--interactive.perl.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] 
#: add-patch.c:1253 git-add--interactive.perl:1244 msgid "" "Your edited hunk does not apply. Edit again (saying \"no\" discards!) [y/n]? " msgstr "" There are six such changes, and they all make the context more understandable, as msgcat(1) is better at handling these edge cases than xgettext(1)'s previously used "--join-existing" flag. But filenames in the above disambiguation lines of extracted-comments have an extra ".po" extension compared to the filenames at the file locations. While we could rename the intermediate ".build/pot/po/%.po" files without the ".po" extension to use more intuitive filenames in the disambiguation lines of extracted-comments, but that will confuse developer tools with lots of invalid C or other source files in ".build/pot/po" directory. The addition of "--omit-header" option for xgettext makes the "pot" snippets in ".build/pot/po/*.po" smaller. But as we'll see in a subsequent commit this header behavior has been hiding an encoding-related bug from us, so let's carry it forward instead of re-generating it with xgettext(1). The "po/git.pot" file should have a header entry, because a proper header entry will increase the speed of creating a new po file using msginit and set a proper "POT-Creation-Date:" field in the header entry of a "po/XX.po" file. We use xgettext to generate a separate header file at ".build/pot/git.header" from "/dev/null", and use this header to assemble "po/git.pot". Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-26 16:50:28 +02:00
$(LOCALIZED_SH_GEN_PO): .build/pot/po/%.po: %
$(call mkdir_p_parent_template)
$(QUIET_XGETTEXT)$(XGETTEXT) --omit-header \
-o$@ $(XGETTEXT_FLAGS_SH) $<
Makefile: have "make pot" not "reset --hard" Before commit fc0fd5b23b (Makefile: help gettext tools to cope with our custom PRItime format, 2017-07-20), we'd consider source files as-is with gettext, but because we need to understand PRItime in the same way that gettext itself understands PRIuMAX, we'd first check if we had a clean checkout, then munge all of the processed files in-place with "sed", generate "po/git.pot", and then finally "reset --hard" to undo our changes. By generating "pot" snippets in ".build/pot/po" for each source file and rewriting certain source files with PRItime macros to temporary files in ".build/pot/po", we can avoid running "make pot" by altering files in place and doing a "reset --hard" afterwards. This speed of "make pot" is slower than before on an initial run, because we run "xgettext" many times (once per source file), but it can be boosted by parallelization. It is *much* faster for incremental runs, and will allow us to implement related targets in subsequent commits. When the "pot" target was originally added in cd5513a7168 (i18n: Makefile: "pot" target to extract messages marked for translation, 2011-02-22) it behaved like a "normal" target. I.e. we'd skip the re-generation of the po/git.pot if nothing had to be done. Then after po/git.pot was checked in in dce37b66fb0 (l10n: initial git.pot for 1.7.10 upcoming release, 2012-02-13) the target was broken until 1f31963e921 (i18n: treat "make pot" as an explicitly-invoked target, 2014-08-22) when it was made to depend on "FORCE". I.e. the Makefile's dependency resolution inherently can't handle incremental building when the target file may be updated by git (or something else external to "make"). But this case no longer applies, so FORCE is no longer needed. That out of the way, the main logic change here is getting rid of the "reset --hard": We'll generate intermediate ".build/pot/po/%.po" files from "%", which is handy to see at a glance what strings (if any) in a given file are marked for translation: $ make .build/pot/po/pretty.c.po [...] $ cat .build/pot/po/pretty.c.po #: pretty.c:1051 msgid "unable to parse --pretty format" msgstr "" $ For these C source files which contain the PRItime macros, we will create temporary munged "*.c" files in a tree in ".build/pot/po" corresponding to our source tree, and have "xgettext" consider those. The rule needs to be careful to "(cd .build/pot/po && ...)", because otherwise the comments in the po/git.pot file wouldn't refer to the correct source locations (they'd be prefixed with ".build/pot/po"). These temporary munged "*.c” files will be removed immediately after the corresponding po files are generated, because some development tools cannot ignore the duplicate source files in the ".build" directory according to the ".gitignore" file, and that may cause trouble. The output of the generated po/git.pot file is changed in one minor way: Because we're using msgcat(1) instead of xgettext(1) to concatenate the output we'll now disambiguate where "TRANSLATORS" comments come from, in cases where a message is the same in N files, and either only one has a "TRANSLATORS" comment, or they're different. E.g. for the "Your edited hunk[...]" message we'll now apply this change (comment content elided): +#. #-#-#-#-# add-patch.c.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] +#. #-#-#-#-# git-add--interactive.perl.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] 
#: add-patch.c:1253 git-add--interactive.perl:1244 msgid "" "Your edited hunk does not apply. Edit again (saying \"no\" discards!) [y/n]? " msgstr "" There are six such changes, and they all make the context more understandable, as msgcat(1) is better at handling these edge cases than xgettext(1)'s previously used "--join-existing" flag. But filenames in the above disambiguation lines of extracted-comments have an extra ".po" extension compared to the filenames at the file locations. While we could rename the intermediate ".build/pot/po/%.po" files without the ".po" extension to use more intuitive filenames in the disambiguation lines of extracted-comments, but that will confuse developer tools with lots of invalid C or other source files in ".build/pot/po" directory. The addition of "--omit-header" option for xgettext makes the "pot" snippets in ".build/pot/po/*.po" smaller. But as we'll see in a subsequent commit this header behavior has been hiding an encoding-related bug from us, so let's carry it forward instead of re-generating it with xgettext(1). The "po/git.pot" file should have a header entry, because a proper header entry will increase the speed of creating a new po file using msginit and set a proper "POT-Creation-Date:" field in the header entry of a "po/XX.po" file. We use xgettext to generate a separate header file at ".build/pot/git.header" from "/dev/null", and use this header to assemble "po/git.pot". Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-26 16:50:28 +02:00
$(LOCALIZED_PERL_GEN_PO): .build/pot/po/%.po: %
$(call mkdir_p_parent_template)
$(QUIET_XGETTEXT)$(XGETTEXT) --omit-header \
-o$@ $(XGETTEXT_FLAGS_PERL) $<
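## Build a POT header entry by running xgettext over /dev/null, then
## patch in the charset, last-translator and language-team fields. The
## per-file snippets above are generated with --omit-header, so this
## single header is what msgcat prepends when assembling the template.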
define gen_pot_header
$(XGETTEXT) $(XGETTEXT_FLAGS_C) \
-o - /dev/null | \
sed -e 's|charset=CHARSET|charset=UTF-8|' \
-e 's|\(Last-Translator: \)FULL NAME <.*>|\1make by the Makefile|' \
-e 's|\(Language-Team: \)LANGUAGE <.*>|\1Git Mailing List <git@vger.kernel.org>|' \
>$@ && \
echo '"Plural-Forms: nplurals=INTEGER; plural=EXPRESSION;\\n"' >>$@
endef
Makefile: have "make pot" not "reset --hard" Before commit fc0fd5b23b (Makefile: help gettext tools to cope with our custom PRItime format, 2017-07-20), we'd consider source files as-is with gettext, but because we need to understand PRItime in the same way that gettext itself understands PRIuMAX, we'd first check if we had a clean checkout, then munge all of the processed files in-place with "sed", generate "po/git.pot", and then finally "reset --hard" to undo our changes. By generating "pot" snippets in ".build/pot/po" for each source file and rewriting certain source files with PRItime macros to temporary files in ".build/pot/po", we can avoid running "make pot" by altering files in place and doing a "reset --hard" afterwards. This speed of "make pot" is slower than before on an initial run, because we run "xgettext" many times (once per source file), but it can be boosted by parallelization. It is *much* faster for incremental runs, and will allow us to implement related targets in subsequent commits. When the "pot" target was originally added in cd5513a7168 (i18n: Makefile: "pot" target to extract messages marked for translation, 2011-02-22) it behaved like a "normal" target. I.e. we'd skip the re-generation of the po/git.pot if nothing had to be done. Then after po/git.pot was checked in in dce37b66fb0 (l10n: initial git.pot for 1.7.10 upcoming release, 2012-02-13) the target was broken until 1f31963e921 (i18n: treat "make pot" as an explicitly-invoked target, 2014-08-22) when it was made to depend on "FORCE". I.e. the Makefile's dependency resolution inherently can't handle incremental building when the target file may be updated by git (or something else external to "make"). But this case no longer applies, so FORCE is no longer needed. That out of the way, the main logic change here is getting rid of the "reset --hard": We'll generate intermediate ".build/pot/po/%.po" files from "%", which is handy to see at a glance what strings (if any) in a given file are marked for translation: $ make .build/pot/po/pretty.c.po [...] $ cat .build/pot/po/pretty.c.po #: pretty.c:1051 msgid "unable to parse --pretty format" msgstr "" $ For these C source files which contain the PRItime macros, we will create temporary munged "*.c" files in a tree in ".build/pot/po" corresponding to our source tree, and have "xgettext" consider those. The rule needs to be careful to "(cd .build/pot/po && ...)", because otherwise the comments in the po/git.pot file wouldn't refer to the correct source locations (they'd be prefixed with ".build/pot/po"). These temporary munged "*.c” files will be removed immediately after the corresponding po files are generated, because some development tools cannot ignore the duplicate source files in the ".build" directory according to the ".gitignore" file, and that may cause trouble. The output of the generated po/git.pot file is changed in one minor way: Because we're using msgcat(1) instead of xgettext(1) to concatenate the output we'll now disambiguate where "TRANSLATORS" comments come from, in cases where a message is the same in N files, and either only one has a "TRANSLATORS" comment, or they're different. E.g. for the "Your edited hunk[...]" message we'll now apply this change (comment content elided): +#. #-#-#-#-# add-patch.c.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] +#. #-#-#-#-# git-add--interactive.perl.po #-#-#-#-# #. TRANSLATORS: do not translate [y/n] [...] 
#: add-patch.c:1253 git-add--interactive.perl:1244 msgid "" "Your edited hunk does not apply. Edit again (saying \"no\" discards!) [y/n]? " msgstr "" There are six such changes, and they all make the context more understandable, as msgcat(1) is better at handling these edge cases than xgettext(1)'s previously used "--join-existing" flag. But filenames in the above disambiguation lines of extracted-comments have an extra ".po" extension compared to the filenames at the file locations. While we could rename the intermediate ".build/pot/po/%.po" files without the ".po" extension to use more intuitive filenames in the disambiguation lines of extracted-comments, but that will confuse developer tools with lots of invalid C or other source files in ".build/pot/po" directory. The addition of "--omit-header" option for xgettext makes the "pot" snippets in ".build/pot/po/*.po" smaller. But as we'll see in a subsequent commit this header behavior has been hiding an encoding-related bug from us, so let's carry it forward instead of re-generating it with xgettext(1). The "po/git.pot" file should have a header entry, because a proper header entry will increase the speed of creating a new po file using msginit and set a proper "POT-Creation-Date:" field in the header entry of a "po/XX.po" file. We use xgettext to generate a separate header file at ".build/pot/git.header" from "/dev/null", and use this header to assemble "po/git.pot". Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-26 16:50:28 +02:00
.build/pot/git.header: $(LOCALIZED_ALL_GEN_PO)
$(call mkdir_p_parent_template)
$(QUIET_GEN)$(gen_pot_header)
po/git.pot: .build/pot/git.header $(LOCALIZED_ALL_GEN_PO)
$(QUIET_GEN)$(MSGCAT) $^ >$@
.PHONY: pot
pot: po/git.pot
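## Example: "make pot" refreshes po/git.pot. Because each source file
## has its own snippet dependency, an incremental run only re-extracts
## the files that changed.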
define check_po_file_envvar
$(if $(PO_FILE), \
$(if $(filter po/%.po,$(PO_FILE)), , \
$(error PO_FILE should match pattern: "po/%.po")), \
$(error PO_FILE is not defined))
endef
.PHONY: po-update
po-update: po/git.pot
$(check_po_file_envvar)
@if test ! -e $(PO_FILE); then \
echo >&2 "error: $(PO_FILE) does not exist"; \
echo >&2 'To create an initial po file, use: "make po-init PO_FILE=po/XX.po"'; \
exit 1; \
fi
$(QUIET_MSGMERGE)$(MSGMERGE) $(MSGMERGE_FLAGS) $(PO_FILE) po/git.pot
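## Example: refresh an existing translation against the current
## template (the file must already exist):
##
##	$ make po-update PO_FILE=po/XX.po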
.PHONY: check-pot
check-pot: $(LOCALIZED_ALL_GEN_PO)
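## "make check-pot" extracts the same per-file snippets as "make pot"
## but leaves po/git.pot itself untouched; it lets CI fail early on
## problems such as non-ASCII characters in source messages.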
### TODO FIXME: Translating everything in these files is a bad
### heuristic for "core", as we'll translate obscure error() messages
### along with commonly seen i18n messages. A better heuristic would
### be to e.g. use spatch to first remove error/die/warning
### etc. messages.
LOCALIZED_C_CORE =
LOCALIZED_C_CORE += builtin/checkout.c
LOCALIZED_C_CORE += builtin/clone.c
LOCALIZED_C_CORE += builtin/index-pack.c
LOCALIZED_C_CORE += builtin/push.c
LOCALIZED_C_CORE += builtin/reset.c
LOCALIZED_C_CORE += remote.c
LOCALIZED_C_CORE += wt-status.c
LOCALIZED_C_CORE_GEN_PO = $(LOCALIZED_C_CORE:%=.build/pot/po/%.po)
.build/pot/git-core.header: $(LOCALIZED_C_CORE_GEN_PO)
$(call mkdir_p_parent_template)
$(QUIET_GEN)$(gen_pot_header)
po/git-core.pot: .build/pot/git-core.header $(LOCALIZED_C_CORE_GEN_PO)
$(QUIET_GEN)$(MSGCAT) $^ >$@
.PHONY: po-init
po-init: po/git-core.pot
$(check_po_file_envvar)
@if test -e $(PO_FILE); then \
echo >&2 "error: $(PO_FILE) exists already"; \
exit 1; \
fi
$(QUIET_MSGINIT)msginit \
--input=$< \
--output=$(PO_FILE) \
--no-translator \
--locale=$(PO_FILE:po/%.po=%)
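## Example: bootstrap a new translation from the smaller "core"
## template (the file must not exist yet):
##
##	$ make po-init PO_FILE=po/XX.po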
## po/*.po files & their rules
ifdef NO_GETTEXT
POFILES :=
MOFILES :=
else
POFILES := $(wildcard po/*.po)
MOFILES := $(patsubst po/%.po,po/build/locale/%/LC_MESSAGES/git.mo,$(POFILES))
all:: $(MOFILES)
endif
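# Pattern rule: compile each po/XX.po catalog into the corresponding
# po/build/locale/XX/LC_MESSAGES/git.mo (e.g. po/de.po becomes
# po/build/locale/de/LC_MESSAGES/git.mo), creating parent directories
# as needed.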
po/build/locale/%/LC_MESSAGES/git.mo: po/%.po
$(call mkdir_p_parent_template)
$(QUIET_MSGFMT)$(MSGFMT) -o $@ $<
LIB_PERL := $(wildcard perl/Git.pm perl/Git/*.pm perl/Git/*/*.pm perl/Git/*/*/*.pm)
LIB_PERL_GEN := $(patsubst perl/%.pm,perl/build/lib/%.pm,$(LIB_PERL))
LIB_CPAN := $(wildcard perl/FromCPAN/*.pm perl/FromCPAN/*/*.pm)
LIB_CPAN_GEN := $(patsubst perl/%.pm,perl/build/lib/%.pm,$(LIB_CPAN))
ifndef NO_PERL
all:: $(LIB_PERL_GEN)
ifndef NO_PERL_CPAN_FALLBACKS
all:: $(LIB_CPAN_GEN)
endif
NO_PERL_CPAN_FALLBACKS_SQ = $(subst ','\'',$(NO_PERL_CPAN_FALLBACKS))
endif
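# Generate perl/build/lib/*.pm from perl/*.pm by substituting the
# build-time configuration for the @@PLACEHOLDER@@ tokens; the rule
# also depends on GIT-PERL-DEFINES so the generated files are redone
# whenever those settings change.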
perl/build/lib/%.pm: perl/%.pm GIT-PERL-DEFINES
$(call mkdir_p_parent_template)
$(QUIET_GEN) \
sed -e 's|@@LOCALEDIR@@|$(perl_localedir_SQ)|g' \
-e 's|@@NO_GETTEXT@@|$(NO_GETTEXT_SQ)|g' \
-e 's|@@NO_PERL_CPAN_FALLBACKS@@|$(NO_PERL_CPAN_FALLBACKS_SQ)|g' \
< $< > $@
perl/build/man/man3/Git.3pm: perl/Git.pm
$(call mkdir_p_parent_template)
$(QUIET_GEN)pod2man $< $@
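# The tags rules below build into a temporary "$@+" file and only then
# rename it over the target, so an interrupted or failed run never
# leaves a truncated tags file behind.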
$(ETAGS_TARGET): $(FOUND_SOURCE_FILES)
$(QUIET_GEN)$(RM) $@+ && \
echo $(FOUND_SOURCE_FILES) | xargs etags -a -o $@+ && \
mv $@+ $@
tags: $(FOUND_SOURCE_FILES)
$(QUIET_GEN)$(RM) $@+ && \
echo $(FOUND_SOURCE_FILES) | xargs ctags -a -o $@+ && \
mv $@+ $@
cscope.out: $(FOUND_SOURCE_FILES)
$(QUIET_GEN)$(RM) $@+ && \
echo $(FOUND_SOURCE_FILES) | xargs cscope -f$@+ -b && \
mv $@+ $@
.PHONY: cscope
cscope: cscope.out
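# Typical usage: run "make cscope", then browse with e.g. "cscope -d"
# so cscope does not try to rebuild the cross-reference itself.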
### Detect prefix changes
TRACK_PREFIX = $(bindir_SQ):$(gitexecdir_SQ):$(template_dir_SQ):$(prefix_SQ):\
$(localedir_SQ)
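# GIT-PREFIX (like GIT-CFLAGS, GIT-LDFLAGS and GIT-PYTHON-VARS below)
# uses the FORCE prerequisite so its recipe runs on every invocation,
# but the file is only rewritten when the tracked values actually
# change; targets that depend on it are thus rebuilt only when needed.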
GIT-PREFIX: FORCE
@FLAGS='$(TRACK_PREFIX)'; \
if test x"$$FLAGS" != x"`cat GIT-PREFIX 2>/dev/null`" ; then \
echo >&2 " * new prefix flags"; \
echo "$$FLAGS" >GIT-PREFIX; \
fi
TRACK_CFLAGS = $(CC):$(subst ','\'',$(ALL_CFLAGS)):$(USE_GETTEXT_SCHEME)
GIT-CFLAGS: FORCE
@FLAGS='$(TRACK_CFLAGS)'; \
if test x"$$FLAGS" != x"`cat GIT-CFLAGS 2>/dev/null`" ; then \
echo >&2 " * new build flags"; \
echo "$$FLAGS" >GIT-CFLAGS; \
fi
TRACK_LDFLAGS = $(subst ','\'',$(ALL_LDFLAGS))
GIT-LDFLAGS: FORCE
@FLAGS='$(TRACK_LDFLAGS)'; \
if test x"$$FLAGS" != x"`cat GIT-LDFLAGS 2>/dev/null`" ; then \
echo >&2 " * new link flags"; \
echo "$$FLAGS" >GIT-LDFLAGS; \
fi
# We need to apply sq twice, once to protect from the shell
# that runs GIT-BUILD-OPTIONS, and then again to protect it
# and the first level quoting from the shell that runs "echo".
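# E.g. with DIFF=diff the recipe runs "echo DIFF=\''diff'\'", which
# writes the line "DIFF='diff'" into GIT-BUILD-OPTIONS; a single quote
# embedded in the value survives both the "echo" shell and the shell
# that later sources this file.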
GIT-BUILD-OPTIONS: FORCE
@echo SHELL_PATH=\''$(subst ','\'',$(SHELL_PATH_SQ))'\' >$@+
@echo TEST_SHELL_PATH=\''$(subst ','\'',$(TEST_SHELL_PATH_SQ))'\' >>$@+
@echo PERL_PATH=\''$(subst ','\'',$(PERL_PATH_SQ))'\' >>$@+
@echo DIFF=\''$(subst ','\'',$(subst ','\'',$(DIFF)))'\' >>$@+
@echo PYTHON_PATH=\''$(subst ','\'',$(PYTHON_PATH_SQ))'\' >>$@+
@echo TAR=\''$(subst ','\'',$(subst ','\'',$(TAR)))'\' >>$@+
@echo NO_CURL=\''$(subst ','\'',$(subst ','\'',$(NO_CURL)))'\' >>$@+
@echo NO_EXPAT=\''$(subst ','\'',$(subst ','\'',$(NO_EXPAT)))'\' >>$@+
@echo USE_LIBPCRE2=\''$(subst ','\'',$(subst ','\'',$(USE_LIBPCRE2)))'\' >>$@+
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@+
@echo NO_PTHREADS=\''$(subst ','\'',$(subst ','\'',$(NO_PTHREADS)))'\' >>$@+
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@+
@echo NO_REGEX=\''$(subst ','\'',$(subst ','\'',$(NO_REGEX)))'\' >>$@+
@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@+
@echo PAGER_ENV=\''$(subst ','\'',$(subst ','\'',$(PAGER_ENV)))'\' >>$@+
@echo SANITIZE_LEAK=\''$(subst ','\'',$(subst ','\'',$(SANITIZE_LEAK)))'\' >>$@+
@echo SANITIZE_ADDRESS=\''$(subst ','\'',$(subst ','\'',$(SANITIZE_ADDRESS)))'\' >>$@+
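# X is the executable suffix (".exe" on Windows, empty elsewhere); the
# test suite needs it to invoke the built binaries by their full name.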
@echo X=\'$(X)\' >>$@+
ifdef FSMONITOR_DAEMON_BACKEND
@echo FSMONITOR_DAEMON_BACKEND=\''$(subst ','\'',$(subst ','\'',$(FSMONITOR_DAEMON_BACKEND)))'\' >>$@+
endif
ifdef FSMONITOR_OS_SETTINGS
@echo FSMONITOR_OS_SETTINGS=\''$(subst ','\'',$(subst ','\'',$(FSMONITOR_OS_SETTINGS)))'\' >>$@+
endif
ifdef TEST_OUTPUT_DIRECTORY
@echo TEST_OUTPUT_DIRECTORY=\''$(subst ','\'',$(subst ','\'',$(TEST_OUTPUT_DIRECTORY)))'\' >>$@+
endif
ifdef GIT_TEST_OPTS
@echo GIT_TEST_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_OPTS)))'\' >>$@+
endif
ifdef GIT_TEST_CMP
@echo GIT_TEST_CMP=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_CMP)))'\' >>$@+
endif
ifdef GIT_TEST_CMP_USE_COPIED_CONTEXT
@echo GIT_TEST_CMP_USE_COPIED_CONTEXT=YesPlease >>$@+
endif
ifdef GIT_TEST_UTF8_LOCALE
@echo GIT_TEST_UTF8_LOCALE=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_UTF8_LOCALE)))'\' >>$@+
endif
@echo NO_GETTEXT=\''$(subst ','\'',$(subst ','\'',$(NO_GETTEXT)))'\' >>$@+
ifdef GIT_PERF_REPEAT_COUNT
@echo GIT_PERF_REPEAT_COUNT=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_REPEAT_COUNT)))'\' >>$@+
endif
ifdef GIT_PERF_REPO
@echo GIT_PERF_REPO=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_REPO)))'\' >>$@+
endif
ifdef GIT_PERF_LARGE_REPO
@echo GIT_PERF_LARGE_REPO=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_LARGE_REPO)))'\' >>$@+
endif
ifdef GIT_PERF_MAKE_OPTS
@echo GIT_PERF_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_MAKE_OPTS)))'\' >>$@+
endif
ifdef GIT_PERF_MAKE_COMMAND
@echo GIT_PERF_MAKE_COMMAND=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_MAKE_COMMAND)))'\' >>$@+
endif
ifdef GIT_INTEROP_MAKE_OPTS
@echo GIT_INTEROP_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_INTEROP_MAKE_OPTS)))'\' >>$@+
endif
ifdef GIT_TEST_INDEX_VERSION
@echo GIT_TEST_INDEX_VERSION=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_INDEX_VERSION)))'\' >>$@+
endif
ifdef GIT_TEST_PERL_FATAL_WARNINGS
@echo GIT_TEST_PERL_FATAL_WARNINGS=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_PERL_FATAL_WARNINGS)))'\' >>$@+
endif
ifdef RUNTIME_PREFIX
@echo RUNTIME_PREFIX=\'true\' >>$@+
else
@echo RUNTIME_PREFIX=\'false\' >>$@+
endif
@if cmp $@+ $@ >/dev/null 2>&1; then $(RM) $@+; else mv $@+ $@; fi
@if test -f GIT-BUILD-DIR; then rm GIT-BUILD-DIR; fi
### Detect Python interpreter path changes
ifndef NO_PYTHON
TRACK_PYTHON = $(subst ','\'',-DPYTHON_PATH='$(PYTHON_PATH_SQ)')
GIT-PYTHON-VARS: FORCE
@VARS='$(TRACK_PYTHON)'; \
if test x"$$VARS" != x"`cat $@ 2>/dev/null`" ; then \
echo >&2 " * new Python interpreter location"; \
echo "$$VARS" >$@; \
fi
endif
test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
bin-wrappers/%: wrap-for-bin.sh
Makefiles: add and use wildcard "mkdir -p" template Add a template to do the "mkdir -p" of $(@D) (the parent dir of $@) for us, and use it for the "make lint-docs" targets I added in 8650c6298c1 (doc lint: make "lint-docs" non-.PHONY, 2021-10-15). As seen in 4c64fb5aad9 (Documentation/Makefile: fix lint-docs mkdir dependency, 2021-10-26) maintaining these manual lists of parent directory dependencies is fragile, in addition to being obviously verbose. I used this pattern at the time because I couldn't find another method than "order-only" prerequisites to avoid doing a "mkdir -p $(@D)" for every file being created, which as noted in [1] would be significantly slower. But as it turns out we can use this neat trick of only doing a "mkdir -p" if the $(wildcard) macro tells us the path doesn't exist. A re-run of a performance test similar to that noted downthread of [1] in [2] shows that this is faster, in addition to being less verbose and more reliable (this uses my "git-hyperfine" thin wrapper for "hyperfine"[3]): $ git -c hyperfine.hook.setup= hyperfine -L rev HEAD~1,HEAD~0 -s 'make -C Documentation lint-docs' -p 'rm -rf Documentation/.build' 'make -C Documentation -j1 lint-docs' Benchmark 1: make -C Documentation -j1 lint-docs' in 'HEAD~1 Time (mean ± σ): 2.914 s ± 0.062 s [User: 2.449 s, System: 0.489 s] Range (min … max): 2.834 s … 3.020 s 10 runs Benchmark 2: make -C Documentation -j1 lint-docs' in 'HEAD~0 Time (mean ± σ): 2.315 s ± 0.062 s [User: 1.950 s, System: 0.386 s] Range (min … max): 2.229 s … 2.397 s 10 runs Summary 'make -C Documentation -j1 lint-docs' in 'HEAD~0' ran 1.26 ± 0.04 times faster than 'make -C Documentation -j1 lint-docs' in 'HEAD~1' So let's use that pattern both for the "lint-docs" target, and a few miscellaneous other targets. This method of creating parent directories is explicitly racy in that we don't know if we're going to say always create a "foo" followed by a "foo/bar" under parallelism, or skip the "foo" because we created "foo/bar" first. In this case it doesn't matter for anything except that we aren't guaranteed to get the same number of rules firing when running make in parallel. 1. https://lore.kernel.org/git/211028.861r45y3pt.gmgdl@evledraar.gmail.com/ 2. https://lore.kernel.org/git/211028.86o879vvtp.gmgdl@evledraar.gmail.com/ 3. https://gitlab.com/avar/git-hyperfine/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-03-03 17:04:19 +01:00
$(call mkdir_p_parent_template)
$(QUIET_GEN)sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@BUILD_DIR@@|$(shell pwd)|' \
-e 's|@@PROG@@|$(patsubst test-%,t/helper/test-%,$(@F))$(if $(filter-out $(BINDIR_PROGRAMS_NO_X),$(@F)),$(X),)|' < $< > $@ && \
chmod +x $@
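# Illustrative effect of the sed above for one wrapper,
# "bin-wrappers/test-tool": in the copied wrap-for-bin.sh template
#
#	@@BUILD_DIR@@ becomes the output of $(shell pwd)
#	@@PROG@@      becomes t/helper/test-tool$(X)
#
# and programs listed in $(BINDIR_PROGRAMS_NO_X) get no $(X) suffix.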
# GNU make supports exporting all variables by "export" without parameters.
# However, the environment gets quite big, and some programs have problems
# with that.
export NO_SVN_TESTS
export TEST_NO_MALLOC_CHECK
### Testing rules
test: all
$(MAKE) -C t/ all
Introduce a performance testing framework This introduces a performance testing framework under t/perf/. It tries to be as close to the test-lib.sh infrastructure as possible, and thus should be easy to get used to for git developers. The following points were considered for the implementation: 1. You usually want to compare arbitrary revisions/build trees against each other. They may not have the performance test under consideration, or even the perf-lib.sh infrastructure. To cope with this, the 'run' script lets you specify arbitrary build dirs and revisions. It even automatically builds the revisions if it doesn't have them at hand yet. 2. Usually you would not want to run all tests. It would take too long anyway. The 'run' script lets you specify which tests to run; or you can also do it manually. There is a Makefile for discoverability and 'make clean', but it is not meant for real-world use. 3. Creating test repos from scratch in every test is extremely time-consuming, and shipping or downloading such large/weird repos is out of the question. We leave this decision to the user. Two different sizes of test repos can be configured, and the scripts just copy one or more of those (using hardlinks for the object store). By default it tries to use the build tree's git.git repository. This is fairly fast and versatile. Using a copy instead of a clone preserves many properties that the user may want to test for, such as lots of loose objects, unpacked refs, etc. Signed-off-by: Thomas Rast <trast@student.ethz.ch> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-02-17 11:25:09 +01:00
perf: all
$(MAKE) -C t/perf/ all
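# Illustrative invocations (the per-script targets come from t/Makefile):
#
#	make test			# run the whole test suite
#	make -C t t0000-basic.sh	# run a single test script
#	make perf			# run the performance tests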
.PHONY: test perf
.PRECIOUS: $(TEST_OBJS)
t/helper/test-tool$X: $(patsubst %,t/helper/%,$(TEST_BUILTINS_OBJS))
t/helper/test-%$X: t/helper/test-%.o GIT-LDFLAGS $(GITLIBS) $(REFTABLE_TEST_LIB)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) $(filter %.a,$^) $(LIBS)
check-sha1:: t/helper/test-tool$X
t/helper/test-sha1.sh
SP_OBJ = $(patsubst %.o,%.sp,$(OBJECTS))
$(SP_OBJ): %.sp: %.c %.o
$(QUIET_SP)cgcc -no-compile $(ALL_CFLAGS) $(EXTRA_CPPFLAGS) \
Makefile: make the "sparse" target non-.PHONY Change the "sparse" target and its *.sp dependencies to be non-.PHONY. Before this change "make sparse" would take ~5s to re-run all the *.c files through "cgcc", after it it'll create an empty *.sp file sitting alongside the *.c file, only if the *.c file or its dependencies are newer than the *.sp is the *.sp re-made. We ensure that the recursive dependencies are correct by depending on the *.o file, which in turn will have correct dependencies by either depending on all header files, or under "COMPUTE_HEADER_DEPENDENCIES=yes" the headers it needs. This means that a plain "make sparse" is much slower, as we'll now need to make the *.o files just to create the *.sp files, but incrementally creating the *.sp files is *much* faster and less verbose, it thus becomes viable to run "sparse" along with "all" as e.g. "git rebase --exec 'make all sparse'". On my box with -j8 "make sparse" was fast before, or around 5 seconds, now it only takes that long the first time, and the common case is <100ms, or however long it takes GNU make to stat the *.sp file and see that all the corresponding *.c file and its dependencies are older. See 0bcd9ae85d7 (sparse: Fix errors due to missing target-specific variables, 2011-04-21) for the modern implementation of the sparse target being changed here. It is critical that we use -Wsparse-error here, otherwise the error would only show up once, but we'd successfully create the empty *.sp file, and running a second time wouldn't show the error. I'm therefore not putting it into SPARSE_FLAGS or SP_EXTRA_FLAGS, it's not optional, the Makefile logic won't behave properly without it. Appending to $@ without a move is OK here because we're using the .DELETE_ON_ERROR Makefile feature. See 7b76d6bf221 (Makefile: add and use the ".DELETE_ON_ERROR" flag, 2021-06-29). GNU make ensures that on error this file will be removed. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-23 02:07:16 +02:00
-Wsparse-error \
$(SPARSE_FLAGS) $(SP_EXTRA_FLAGS) $< && \
>$@
.PHONY: sparse
sparse: $(SP_OBJ)
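# Per the "sparse" commit message above, this target is incremental: an
# up-to-date *.sp file next to each *.c file records that cgcc already
# passed on it. Illustrative invocations:
#
#	make sparse		# first run checks every *.c file
#	touch grep.c
#	make sparse		# re-runs cgcc on grep.c only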
EXCEPT_HDRS := $(GENERATED_H) unicode-width.h compat/% xdiff/%
ifndef OPENSSL_SHA1
EXCEPT_HDRS += sha1/openssl.h
endif
ifndef OPENSSL_SHA256
EXCEPT_HDRS += sha256/openssl.h
endif
ifndef NETTLE_SHA256
EXCEPT_HDRS += sha256/nettle.h
endif
ifndef GCRYPT_SHA256
EXCEPT_HDRS += sha256/gcrypt.h
endif
CHK_HDRS = $(filter-out $(EXCEPT_HDRS),$(LIB_H))
HCO = $(patsubst %.h,%.hco,$(CHK_HDRS))
HCC = $(HCO:hco=hcc)
%.hcc: %.h
@echo '#include "git-compat-util.h"' >$@
@echo '#include "$<"' >>$@
$(HCO): %.hco: %.hcc FORCE
$(QUIET_HDR)$(CC) $(ALL_CFLAGS) -o /dev/null -c -xc $<
.PHONY: hdr-check $(HCO)
hdr-check: $(HCO)
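# E.g. (illustrative) for strbuf.h the rules above generate a strbuf.hcc
# containing just:
#
#	#include "git-compat-util.h"
#	#include "strbuf.h"
#
# and compile it to /dev/null, proving the header builds on its own.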
.PHONY: style
style:
git clang-format --style file --diff --extensions c,h
.PHONY: check
check: $(GENERATED_H)
@if sparse; \
then \
echo >&2 "Use 'make sparse' instead"; \
$(MAKE) --no-print-directory sparse; \
else \
echo >&2 "Did you mean 'make test'?"; \
exit 1; \
fi
cocci: run against a generated ALL.cocci The preceding commits to make the "coccicheck" target incremental made it slower in some cases. As an optimization let's not have the many=many mapping of <*.cocci>=<*.[ch]>, but instead concat the <*.cocci> into an ALL.cocci, and then run one-to-many ALL.cocci=<*.[ch]>. A "make coccicheck" is now around 2x as fast as it was on "master", and around 1.5x as fast as the preceding change to make the run incremental: $ git hyperfine -L rev origin/master,HEAD~,HEAD -p 'make clean' 'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' -r 3 Benchmark 1: make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'origin/master Time (mean ± σ): 4.258 s ± 0.015 s [User: 27.432 s, System: 1.532 s] Range (min … max): 4.241 s … 4.268 s 3 runs Benchmark 2: make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD~ Time (mean ± σ): 5.365 s ± 0.079 s [User: 36.899 s, System: 1.810 s] Range (min … max): 5.281 s … 5.436 s 3 runs Benchmark 3: make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD Time (mean ± σ): 2.725 s ± 0.063 s [User: 14.796 s, System: 0.233 s] Range (min … max): 2.667 s … 2.792 s 3 runs Summary 'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD' ran 1.56 ± 0.04 times faster than 'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'origin/master' 1.97 ± 0.05 times faster than 'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD~' This can be turned off with SPATCH_CONCAT_COCCI, but as the beneficiaries of "SPATCH_CONCAT_COCCI=" would mainly be those developing the *.cocci rules themselves, let's leave this optimization on by default. For more information see my "Optimizing *.cocci rules by concat'ing them" (<220901.8635dbjfko.gmgdl@evledraar.gmail.com>) on the cocci@inria.fr mailing list. This potentially changes the results of our *.cocci rules, but as noted in that discussion it should be safe for our use. We don't name rules, or if we do their names don't conflict across our *.cocci files. To the extent that we'd have any inter-dependencies between rules this doesn't make that worse, as we'd have them now if we ran "make coccicheck", applied the results, and would then have (due to hypothetical interdependencies) suggested changes on the subsequent "make coccicheck". Our "coccicheck-test" target makes use of the ALL.cocci when running tests, e.g. when testing unused.{c,out} we test it against ALL.cocci, not unused.cocci. We thus assert (to the extent that we have test coverage) that this concatenation doesn't change the expected results of running these rules. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:54 +01:00
COCCI_GEN_ALL = .build/contrib/coccinelle/ALL.cocci
COCCI_GLOB = $(wildcard contrib/coccinelle/*.cocci)
COCCI_RULES_TRACKED = $(COCCI_GLOB:%=.build/%)
COCCI_RULES_TRACKED_NO_PENDING = $(filter-out %.pending.cocci,$(COCCI_RULES_TRACKED))
COCCI_RULES =
COCCI_RULES += $(COCCI_GEN_ALL)
COCCI_RULES += $(COCCI_RULES_TRACKED)
COCCI_NAMES =
COCCI_NAMES += $(COCCI_RULES:.build/contrib/coccinelle/%.cocci=%)
COCCICHECK_PENDING = $(filter %.pending.cocci,$(COCCI_RULES))
COCCICHECK = $(filter-out $(COCCICHECK_PENDING),$(COCCI_RULES))
COCCICHECK_PATCHES = $(COCCICHECK:%=%.patch)
COCCICHECK_PATCHES_PENDING = $(COCCICHECK_PENDING:%=%.patch)
COCCICHECK_PATCHES_INTREE = $(COCCICHECK_PATCHES:.build/%=%)
COCCICHECK_PATCHES_PENDING_INTREE = $(COCCICHECK_PATCHES_PENDING:.build/%=%)
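# Illustrative expansion of the above for one tracked rule,
# contrib/coccinelle/free.cocci:
#
#	COCCI_RULES_TRACKED       -> .build/contrib/coccinelle/free.cocci
#	COCCICHECK_PATCHES        -> .build/contrib/coccinelle/free.cocci.patch
#	COCCICHECK_PATCHES_INTREE -> contrib/coccinelle/free.cocci.patch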
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
# It's expensive to compute the many=many rules below, only eval them
# on $(MAKECMDGOALS) that match these $(COCCI_RULES)
COCCI_RULES_GLOB =
COCCI_RULES_GLOB += cocci%
COCCI_RULES_GLOB += .build/contrib/coccinelle/%
COCCI_RULES_GLOB += $(COCCICHECK_PATCHES)
COCCI_RULES_GLOB += $(COCCICHECK_PATCHES_PENDING)
COCCI_RULES_GLOB += $(COCCICHECK_PATCHES_INTREE)
COCCI_RULES_GLOB += $(COCCICHECK_PATCHES_PENDING_INTREE)
COCCI_GOALS = $(filter $(COCCI_RULES_GLOB),$(MAKECMDGOALS))
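# E.g. "make coccicheck" matches the "cocci%" glob and
# "make contrib/coccinelle/free.cocci.patch" matches
# $(COCCICHECK_PATCHES_INTREE), while unrelated goals such as "make all"
# leave $(COCCI_GOALS) empty, so the expensive many=many rules are never
# evaluated for them.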
cocci: add a "coccicheck-test" target and test *.cocci rules Add a "coccicheck-test" target to test our *.cocci rules, and as a demonstration add tests for the rules added in 39ea59a2570 (remove unnecessary NULL check before free(3), 2016-10-08) and 1b83d1251ed (coccinelle: add a rule to make "expression" code use FREE_AND_NULL(), 2017-06-15). I considered making use of the "spatch --test" option, and the choice of a "tests" over a "t" directory is to make these tests compatible with such a future change. Unfortunately "spatch --test" doesn't return meaningful exit codes, AFAICT you need to "grep" its output to see if the *.res is what you expect. There's "--test-okfailed", but I didn't find a way to sensibly integrate those (it relies on some in-between status files, but doesn't help with the status codes). Instead let's use a "--sp-file" pattern similar to the main "coccicheck" rule, with the difference that we use and compare the two *.res files with cmp(1). The --very-quiet and --no-show-diff options ensure that we don't need to pipe stdout and stderr somewhere. Unlike the "%.cocci.patch" rule we're not using the diff. The "cmp || git diff" is optimistically giving us better output on failure, but even if we only have POSIX cmp and no system git installed we'll still fail with the "cmp", just with an error message that isn't as friendly. The "2>/dev/null" is in case we don't have a "git" installed. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-05 15:46:57 +02:00
COCCI_TEST_RES = $(wildcard contrib/coccinelle/tests/*.res)
$(COCCI_RULES_TRACKED): .build/% : %
$(call mkdir_p_parent_template)
$(QUIET_CP)cp $< $@
.build/contrib/coccinelle/FOUND_H_SOURCES: $(FOUND_H_SOURCES)
$(call mkdir_p_parent_template)
$(QUIET_GEN) >$@
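# The stamp file above is intentionally empty: it exists only so that
# cocci rules have something depending on $(FOUND_H_SOURCES) to fall
# back on when a *.c file's *.o (and thus its computed header
# dependencies) is absent.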
$(COCCI_GEN_ALL): $(COCCI_RULES_TRACKED_NO_PENDING)
$(call mkdir_p_parent_template)
$(QUIET_SPATCH_CAT)cat $^ >$@
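# Per the "cocci: run against a generated ALL.cocci" message above, the
# concatenation is an optimization that can be turned off, mainly when
# developing the *.cocci rules themselves, e.g.:
#
#	make coccicheck SPATCH_CONCAT_COCCI=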
cocci: optimistically use COMPUTE_HEADER_DEPENDENCIES Improve the incremental rebuilding support of "coccicheck" by piggy-backing on the computed dependency information of the corresponding *.o file, rather than rebuilding all <RULE>/<FILE> pairs if either their corresponding file changes, or if any header changes. This in effect uses the same method that the "sparse" target was made to use in c234e8a0ecf (Makefile: make the "sparse" target non-.PHONY, 2021-09-23), except that the dependency on the *.o file isn't a hard one, we check with $(wildcard) if the *.o file exists, and if so we'll depend on it. This means that the common case of: make make coccicheck Will benefit from incremental rebuilding, now changing e.g. a header will only re-run "spatch" on those those *.c files that make use of it: By depending on the *.o we piggy-back on COMPUTE_HEADER_DEPENDENCIES. See c234e8a0ecf (Makefile: make the "sparse" target non-.PHONY, 2021-09-23) for prior art of doing that for the *.sp files. E.g.: make contrib/coccinelle/free.cocci.patch make -W column.h contrib/coccinelle/free.cocci.patch Will take around 15 seconds for the second command on my 8 core box if I didn't run "make" beforehand to create the *.o files. But around 2 seconds if I did and we have those "*.o" files. Notes about the approach of piggy-backing on *.o for dependencies: * It *is* a trade-off since we'll pay the extra cost of running the C compiler, but we're probably doing that anyway. The compiler is much faster than "spatch", so even though we need to re-compile the *.o to create the dependency info for the *.c for "spatch" it's faster (especially if using "ccache"). * There *are* use-cases where some would like to have *.o files around, but to have the "make coccicheck" ignore them. See: https://lore.kernel.org/git/20220826104312.GJ1735@szeder.dev/ For those users a: make make coccicheck SPATCH_USE_O_DEPENDENCIES= Will avoid considering the *.o files. * If that *.o file doesn't exist we'll depend on an intermediate file of ours which in turn depends on $(FOUND_H_SOURCES). This covers both an initial build, or where "coccicheck" is run without running "all" beforehand, and because we run "coccicheck" on e.g. files in compat/* that we don't know how to build unless the requisite flag was provided to the Makefile. Most of the runtime of "incremental" runs is now spent on various compat/* files, i.e. we conditionally add files to COMPAT_OBJS, and therefore conflate whether we *can* compile an object and generate dependency information for it with whether we'd like to link it into our binary. Before this change the distinction didn't matter, but now one way to make this even faster on incremental builds would be to peel those concerns apart so that we can see that e.g. compat/mmap.c doesn't depend on column.h. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:51 +01:00
ifeq ($(COMPUTE_HEADER_DEPENDENCIES),no)
SPATCH_USE_O_DEPENDENCIES =
endif
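# Per the COMPUTE_HEADER_DEPENDENCIES commit message above, users who
# want *.o files around but ignored by "coccicheck" can opt out, e.g.:
#
#	make
#	make coccicheck SPATCH_USE_O_DEPENDENCIES=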
define cocci-rule
## Rule for .build/$(1).patch/$(2); Params:
Makefile: don't create a ".build/.build/" for cocci, fix output Fix a couple of issues in the recently merged 0f3c55d4c2b (Merge branch 'ab/coccicheck-incremental' into next, 2022-11-08): In copying over the "contrib/coccinelle/" rules to ".build/contrib/coccinelle/" we inadvertently ended up with a ".build/.build/contrib/coccinelle/" as well. We'd generate the per-file patches in the former, and keep the rule and overall result in the latter. E.g. running: make contrib/coccinelle/free.cocci.patch COCCI_SOURCES="attr.c grep.c" Would, per "tree -a .build" yield the following result: .build ├── .build │   └── contrib │   └── coccinelle │   └── free.cocci.patch │   ├── attr.c │   ├── attr.c.log │   ├── grep.c │   └── grep.c.log └── contrib └── coccinelle ├── FOUND_H_SOURCES ├── free.cocci └── free.cocci.patch Now we'll instead generate all of our files in ".build/contrib/coccinelle/". Fixing this required renaming the directory where we keep our per-file patches, as we'd otherwise conflict with the result. Now the per-file patch directory is named e.g. "free.cocci.d". And the end result will now be: .build └── contrib └── coccinelle ├── FOUND_H_SOURCES ├── free.cocci ├── free.cocci.d │   ├── attr.c.patch │   ├── attr.c.patch.log │   ├── grep.c.patch │   └── grep.c.patch.log └── free.cocci.patch The per-file patches now have a ".patch" file suffix, which fixes another issue reported against 0f3c55d4c2b: The summary output was confusing. Before for the "make" command above we'd emit: [...] MKDIR -p .build/contrib/coccinelle CP contrib/coccinelle/free.cocci .build/contrib/coccinelle/free.cocci GEN .build/contrib/coccinelle/FOUND_H_SOURCES MKDIR -p .build/.build/contrib/coccinelle/free.cocci.patch SPATCH .build/.build/contrib/coccinelle/free.cocci.patch/grep.c SPATCH .build/.build/contrib/coccinelle/free.cocci.patch/attr.c SPATCH CAT $^ >.build/contrib/coccinelle/free.cocci.patch CP .build/contrib/coccinelle/free.cocci.patch contrib/coccinelle/free.cocci.patch But now we'll instead emit (identical output at the start omitted): [...] MKDIR -p .build/contrib/coccinelle/free.cocci.d SPATCH grep.c >.build/contrib/coccinelle/free.cocci.d/grep.c.patch SPATCH attr.c >.build/contrib/coccinelle/free.cocci.d/attr.c.patch SPATCH CAT .build/contrib/coccinelle/free.cocci.d/**.patch >.build/contrib/coccinelle/free.cocci.patch CP .build/contrib/coccinelle/free.cocci.patch contrib/coccinelle/free.cocci.patch I.e. we have an "SPATCH" line that makes it clear that we're running against the "{attr,grep}.c" file. The "SPATCH CAT" is then altered to correspond to it, showing that we're concatenating the "free.cocci.d/**.patch" files into one generated "free.cocci.patch" at the end. Reported-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-10 17:14:18 +01:00
# $(1) = e.g. ".build/contrib/coccinelle/free.cocci"
# $(2) = e.g. "grep.c"
# $(3) = e.g. "grep.o"
Makefile: don't create a ".build/.build/" for cocci, fix output Fix a couple of issues in the recently merged 0f3c55d4c2b (Merge branch 'ab/coccicheck-incremental' into next, 2022-11-08): In copying over the "contrib/coccinelle/" rules to ".build/contrib/coccinelle/" we inadvertently ended up with a ".build/.build/contrib/coccinelle/" as well. We'd generate the per-file patches in the former, and keep the rule and overall result in the latter. E.g. running: make contrib/coccinelle/free.cocci.patch COCCI_SOURCES="attr.c grep.c" Would, per "tree -a .build" yield the following result: .build ├── .build │   └── contrib │   └── coccinelle │   └── free.cocci.patch │   ├── attr.c │   ├── attr.c.log │   ├── grep.c │   └── grep.c.log └── contrib └── coccinelle ├── FOUND_H_SOURCES ├── free.cocci └── free.cocci.patch Now we'll instead generate all of our files in ".build/contrib/coccinelle/". Fixing this required renaming the directory where we keep our per-file patches, as we'd otherwise conflict with the result. Now the per-file patch directory is named e.g. "free.cocci.d". And the end result will now be: .build └── contrib └── coccinelle ├── FOUND_H_SOURCES ├── free.cocci ├── free.cocci.d │   ├── attr.c.patch │   ├── attr.c.patch.log │   ├── grep.c.patch │   └── grep.c.patch.log └── free.cocci.patch The per-file patches now have a ".patch" file suffix, which fixes another issue reported against 0f3c55d4c2b: The summary output was confusing. Before for the "make" command above we'd emit: [...] MKDIR -p .build/contrib/coccinelle CP contrib/coccinelle/free.cocci .build/contrib/coccinelle/free.cocci GEN .build/contrib/coccinelle/FOUND_H_SOURCES MKDIR -p .build/.build/contrib/coccinelle/free.cocci.patch SPATCH .build/.build/contrib/coccinelle/free.cocci.patch/grep.c SPATCH .build/.build/contrib/coccinelle/free.cocci.patch/attr.c SPATCH CAT $^ >.build/contrib/coccinelle/free.cocci.patch CP .build/contrib/coccinelle/free.cocci.patch contrib/coccinelle/free.cocci.patch But now we'll instead emit (identical output at the start omitted): [...] MKDIR -p .build/contrib/coccinelle/free.cocci.d SPATCH grep.c >.build/contrib/coccinelle/free.cocci.d/grep.c.patch SPATCH attr.c >.build/contrib/coccinelle/free.cocci.d/attr.c.patch SPATCH CAT .build/contrib/coccinelle/free.cocci.d/**.patch >.build/contrib/coccinelle/free.cocci.patch CP .build/contrib/coccinelle/free.cocci.patch contrib/coccinelle/free.cocci.patch I.e. we have an "SPATCH" line that makes it clear that we're running against the "{attr,grep}.c" file. The "SPATCH CAT" is then altered to correspond to it, showing that we're concatenating the "free.cocci.d/**.patch" files into one generated "free.cocci.patch" at the end. Reported-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-10 17:14:18 +01:00
COCCI_$(1:.build/contrib/coccinelle/%.cocci=%) += $(1).d/$(2).patch
$(1).d/$(2).patch: GIT-SPATCH-DEFINES
$(1).d/$(2).patch: $(if $(and $(SPATCH_USE_O_DEPENDENCIES),$(wildcard $(3))),$(3),.build/contrib/coccinelle/FOUND_H_SOURCES)
$(1).d/$(2).patch: $(1)
$(1).d/$(2).patch: $(1).d/%.patch : %
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
$$(call mkdir_p_parent_template)
$$(QUIET_SPATCH)if ! $$(SPATCH) $$(SPATCH_FLAGS) \
$$(SPATCH_INCLUDE_FLAGS) \
--sp-file $(1) --patch . $$< \
>$$@ 2>$$@.log; \
then \
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
echo "ERROR when applying '$(1)' to '$$<'; '$$@.log' follows:"; \
cat $$@.log; \
exit 1; \
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
fi
endef
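# Illustrative expansion of the template above (a sketch, not part of
# the build; "free.cocci", "grep.c" and "grep.o" are example values):
#
#	$(call cocci-rule,.build/contrib/coccinelle/free.cocci,grep.c,grep.o)
#
# roughly evaluates to:
#
#	COCCI_free += .build/contrib/coccinelle/free.cocci.d/grep.c.patch
#	.build/contrib/coccinelle/free.cocci.d/grep.c.patch: GIT-SPATCH-DEFINES
#	.build/contrib/coccinelle/free.cocci.d/grep.c.patch: grep.o
#	.build/contrib/coccinelle/free.cocci.d/grep.c.patch: .build/contrib/coccinelle/free.cocci
#	.build/contrib/coccinelle/free.cocci.d/grep.c.patch: .build/contrib/coccinelle/free.cocci.d/%.patch: %
#
# (the "grep.o" prerequisite applies only when SPATCH_USE_O_DEPENDENCIES
# is set and grep.o exists; otherwise FOUND_H_SOURCES is used instead).
# The recipe then runs spatch(1) with "--sp-file ... --patch ." on
# grep.c and stores the resulting (possibly empty) diff as grep.c.patch.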
define cocci-matrix
$(foreach s,$(COCCI_SOURCES),$(call cocci-rule,$(c),$(s),$(s:%.c=%.o)))
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
endef
ifdef COCCI_GOALS
$(eval $(foreach c,$(COCCI_RULES),$(call cocci-matrix,$(c))))
endif
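# A sketch of what the $(eval) above generates, assuming hypothetical
# values of COCCI_RULES = free.cocci swap.cocci (with the .build/
# prefix) and COCCI_SOURCES = attr.c grep.c: cocci-matrix stamps out
# one cocci-rule per <rule>/<file> pair, i.e. four independent targets:
#
#	.build/contrib/coccinelle/free.cocci.d/attr.c.patch
#	.build/contrib/coccinelle/free.cocci.d/grep.c.patch
#	.build/contrib/coccinelle/swap.cocci.d/attr.c.patch
#	.build/contrib/coccinelle/swap.cocci.d/grep.c.patch
#
# which "make -jN" is then free to build in parallel.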
define spatch-rule
.build/contrib/coccinelle/$(1).cocci.patch: $$(COCCI_$(1))
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
$$(QUIET_SPATCH_CAT)cat $$^ >$$@ && \
if test -s $$@; \
then \
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
	echo '    ' SPATCH result: $$@; \
fi
contrib/coccinelle/$(1).cocci.patch: .build/contrib/coccinelle/$(1).cocci.patch
$$(QUIET_CP)cp $$< $$@
cocci: make "coccicheck" rule incremental Optimize the very slow "coccicheck" target to take advantage of incremental rebuilding, and fix outstanding dependency problems with the existing rule. The rule is now faster both on the initial run as we can make better use of GNU make's parallelism than the old ad-hoc combination of make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the "--jobs" argument to "spatch(1)". It also makes us *much* faster when incrementally building, it's now viable to "make coccicheck" as topic branches are merged down. The rule didn't use FORCE (or its equivalents) before, so a: make coccicheck make coccicheck Would report nothing to do on the second iteration. But all of our patch output depended on all $(COCCI_SOURCES) files, therefore e.g.: make -W grep.c coccicheck Would do a full re-run, i.e. a a change in a single file would force us to do a full re-run. The reason for this (not the initial rationale, but my analysis) is: * Since we create a single "*.cocci.patch+" we don't know where to pick up where we left off, or how to incrementally merge e.g. a "grep.c" change with an existing *.cocci.patch. * We've been carrying forward the dependency on the *.c files since 63f0a758a06 (add coccicheck make target, 2016-09-15) the rule was initially added as a sort of poor man's dependency discovery. As we don't include other *.c files depending on other *.c files has always been broken, as could be trivially demonstrated e.g. with: make coccicheck make -W strbuf.h coccicheck However, depending on the corresponding *.c files has been doing something, namely that *if* an API change modified both *.c and *.h files we'd catch the change to the *.h we care about via the *.c being changed. For API changes that happened only via *.h files we'd do the wrong thing before this change, but e.g. for function additions (not "static inline" ones) catch the *.h change by proxy. Now we'll instead: * Create a <RULE>/<FILE> pair in the .build directory, E.g. for swap.cocci and grep.c we'll create .build/contrib/coccinelle/swap.cocci.patch/grep.c. That file is the diff we'll apply for that <RULE>-<FILE> combination, if there's no changes to me made (the common case) it'll be an empty file. * Our generated *.patch file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat $^" of all of all of the <RULE>/<FILE> files for a given <RULE>. In the case discussed above of "grep.c" being changed we'll do the full "cat" every time, so they resulting *.cocci.patch will always be correct and up-to-date, even if it's "incrementally updated". See 1cc0425a27c (Makefile: have "make pot" not "reset --hard", 2022-05-26) for another recent rule that used that technique. As before we'll: * End up generating a contrib/coccinelle/swap.cocci.patch, if we "fail" by creating a non-empty patch we'll still exit with a zero exit code. Arguably we should move to a more Makefile-native way of doing this, i.e. fail early, and if we want all of the "failed" changes we can use "make -k", but as the current "ci/run-static-analysis.sh" expects us to behave this way let's keep the existing behavior of exhaustively discovering all cocci changes, and only failing if spatch itself errors out. Further implementation details & notes: * Before this change running "make coccicheck" would by default end up pegging just one CPU at the very end for a while, usually as we'd finish whichever *.cocci rule was the most expensive. 
This could be mitigated by combining "make -jN" with SPATCH_BATCH_SIZE, see 960154b9c17 (coccicheck: optionally batch spatch invocations, 2019-05-06). There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes things worse, but a from-scratch "make coccicheck" with the default of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference) is faster (~3m36s v.s. ~3m56s) with this approach, as we can feed the CPU more work in a less staggered way. * Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases where the default of 1 yields parallelism under "make coccicheck", but then running e.g.: make -W contrib/coccinelle/swap.cocci coccicheck I.e. before that would use only one CPU core, until the user remembered to adjust "SPATCH_BATCH_SIZE" differently than the setting that makes sense when doing a non-incremental run of "make coccicheck". * Before the "make coccicheck" rule would have to clean "contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and "*.log" files there. Now those are created in .build/contrib/coccinelle/, which is covered by the "cocciclean" rule already. Outstanding issues & future work: * We could get rid of "--all-includes" in favor of manually specifying a list of includes to give to "spatch(1)". As noted upthread of [1] a naïve removal of "--all-includes" will result in broken *.cocci patches, but if we know the exhaustive list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to re-scan for them, we could grab the headers to include from the .depend.d/<file>.o.d and supply them with the "--include" option to spatch(1).q 1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 23:35:50 +01:00
endef
ifdef COCCI_GOALS
$(eval $(foreach n,$(COCCI_NAMES),$(call spatch-rule,$(n))))
endif
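# A sketch of the generated aggregation rules, assuming a hypothetical
# "free" entry in COCCI_NAMES: $(call spatch-rule,free) roughly
# evaluates to:
#
#	.build/contrib/coccinelle/free.cocci.patch: $(COCCI_free)
#		(concatenate the per-file free.cocci.d/*.patch results into
#		the target, announcing a "SPATCH result" only if non-empty)
#	contrib/coccinelle/free.cocci.patch: .build/contrib/coccinelle/free.cocci.patch
#		(copy the aggregated patch back into contrib/coccinelle/)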
cocci: add a "coccicheck-test" target and test *.cocci rules Add a "coccicheck-test" target to test our *.cocci rules, and as a demonstration add tests for the rules added in 39ea59a2570 (remove unnecessary NULL check before free(3), 2016-10-08) and 1b83d1251ed (coccinelle: add a rule to make "expression" code use FREE_AND_NULL(), 2017-06-15). I considered making use of the "spatch --test" option, and the choice of a "tests" over a "t" directory is to make these tests compatible with such a future change. Unfortunately "spatch --test" doesn't return meaningful exit codes, AFAICT you need to "grep" its output to see if the *.res is what you expect. There's "--test-okfailed", but I didn't find a way to sensibly integrate those (it relies on some in-between status files, but doesn't help with the status codes). Instead let's use a "--sp-file" pattern similar to the main "coccicheck" rule, with the difference that we use and compare the two *.res files with cmp(1). The --very-quiet and --no-show-diff options ensure that we don't need to pipe stdout and stderr somewhere. Unlike the "%.cocci.patch" rule we're not using the diff. The "cmp || git diff" is optimistically giving us better output on failure, but even if we only have POSIX cmp and no system git installed we'll still fail with the "cmp", just with an error message that isn't as friendly. The "2>/dev/null" is in case we don't have a "git" installed. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-05 15:46:57 +02:00
COCCI_TEST_RES_GEN = $(addprefix .build/,$(COCCI_TEST_RES))
$(COCCI_TEST_RES_GEN): GIT-SPATCH-DEFINES
cocci: add a "coccicheck-test" target and test *.cocci rules Add a "coccicheck-test" target to test our *.cocci rules, and as a demonstration add tests for the rules added in 39ea59a2570 (remove unnecessary NULL check before free(3), 2016-10-08) and 1b83d1251ed (coccinelle: add a rule to make "expression" code use FREE_AND_NULL(), 2017-06-15). I considered making use of the "spatch --test" option, and the choice of a "tests" over a "t" directory is to make these tests compatible with such a future change. Unfortunately "spatch --test" doesn't return meaningful exit codes, AFAICT you need to "grep" its output to see if the *.res is what you expect. There's "--test-okfailed", but I didn't find a way to sensibly integrate those (it relies on some in-between status files, but doesn't help with the status codes). Instead let's use a "--sp-file" pattern similar to the main "coccicheck" rule, with the difference that we use and compare the two *.res files with cmp(1). The --very-quiet and --no-show-diff options ensure that we don't need to pipe stdout and stderr somewhere. Unlike the "%.cocci.patch" rule we're not using the diff. The "cmp || git diff" is optimistically giving us better output on failure, but even if we only have POSIX cmp and no system git installed we'll still fail with the "cmp", just with an error message that isn't as friendly. The "2>/dev/null" is in case we don't have a "git" installed. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-05 15:46:57 +02:00
$(COCCI_TEST_RES_GEN): .build/%.res : %.c
$(COCCI_TEST_RES_GEN): .build/%.res : %.res
ifdef SPATCH_CONCAT_COCCI
$(COCCI_TEST_RES_GEN): .build/contrib/coccinelle/tests/%.res : $(COCCI_GEN_ALL)
else
cocci: add a "coccicheck-test" target and test *.cocci rules Add a "coccicheck-test" target to test our *.cocci rules, and as a demonstration add tests for the rules added in 39ea59a2570 (remove unnecessary NULL check before free(3), 2016-10-08) and 1b83d1251ed (coccinelle: add a rule to make "expression" code use FREE_AND_NULL(), 2017-06-15). I considered making use of the "spatch --test" option, and the choice of a "tests" over a "t" directory is to make these tests compatible with such a future change. Unfortunately "spatch --test" doesn't return meaningful exit codes, AFAICT you need to "grep" its output to see if the *.res is what you expect. There's "--test-okfailed", but I didn't find a way to sensibly integrate those (it relies on some in-between status files, but doesn't help with the status codes). Instead let's use a "--sp-file" pattern similar to the main "coccicheck" rule, with the difference that we use and compare the two *.res files with cmp(1). The --very-quiet and --no-show-diff options ensure that we don't need to pipe stdout and stderr somewhere. Unlike the "%.cocci.patch" rule we're not using the diff. The "cmp || git diff" is optimistically giving us better output on failure, but even if we only have POSIX cmp and no system git installed we'll still fail with the "cmp", just with an error message that isn't as friendly. The "2>/dev/null" is in case we don't have a "git" installed. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-05 15:46:57 +02:00
$(COCCI_TEST_RES_GEN): .build/contrib/coccinelle/tests/%.res : contrib/coccinelle/%.cocci
endif
$(call mkdir_p_parent_template)
$(QUIET_SPATCH_TEST)$(SPATCH) $(SPATCH_TEST_FLAGS) \
--very-quiet --no-show-diff \
--sp-file $< -o $@ \
$(@:.build/%.res=%.c) && \
cmp $(@:.build/%=%) $@ || \
git -P diff --no-index $(@:.build/%=%) $@ 2>/dev/null; \
.PHONY: coccicheck-test
coccicheck-test: $(COCCI_TEST_RES_GEN)
coccicheck: coccicheck-test
ifdef SPATCH_CONCAT_COCCI
coccicheck: contrib/coccinelle/ALL.cocci.patch
else
coccicheck: $(COCCICHECK_PATCHES_INTREE)
endif
# See contrib/coccinelle/README
coccicheck-pending: coccicheck-test
coccicheck-pending: $(COCCICHECK_PATCHES_PENDING_INTREE)
.PHONY: coccicheck coccicheck-pending
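# Illustrative invocations (a sketch, not exhaustive; see
# contrib/coccinelle/README for details):
#
#   make coccicheck                        # apply all rules (ALL.cocci by default)
#   make coccicheck SPATCH_CONCAT_COCCI=   # run each *.cocci rule separately
#   make coccicheck-test                   # check the rules against tests/*.res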
# "Sub"-Makefiles, not really because they can't be run stand-alone,
# only there to contain directory-specific rules and variables
## gitweb/Makefile inclusion:
MAK_DIR_GITWEB = gitweb/
include gitweb/Makefile
.PHONY: gitweb
gitweb: $(MAK_DIR_GITWEB_ALL)
ifndef NO_GITWEB
all:: gitweb
endif
### Installation rules
ifneq ($(filter /%,$(firstword $(template_dir))),)
template_instdir = $(template_dir)
else
template_instdir = $(prefix)/$(template_dir)
endif
export template_instdir
ifneq ($(filter /%,$(firstword $(gitexecdir))),)
gitexec_instdir = $(gitexecdir)
else
gitexec_instdir = $(prefix)/$(gitexecdir)
endif
gitexec_instdir_SQ = $(subst ','\'',$(gitexec_instdir))
export gitexec_instdir
ifneq ($(filter /%,$(firstword $(mergetoolsdir))),)
mergetools_instdir = $(mergetoolsdir)
else
mergetools_instdir = $(prefix)/$(mergetoolsdir)
endif
mergetools_instdir_SQ = $(subst ','\'',$(mergetools_instdir))
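# The $(filter /%,$(firstword ...)) tests above simply check for a
# leading "/": an absolute template_dir/gitexecdir/mergetoolsdir is used
# as-is, while a relative one is anchored at $(prefix). For example
# (illustrative values, not necessarily the defaults): gitexecdir set to
# "libexec/git-core" with prefix "/usr/local" yields a gitexec_instdir
# of "/usr/local/libexec/git-core".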
install_bindir_xprograms := $(patsubst %,%$X,$(BINDIR_PROGRAMS_NEED_X))
install_bindir_programs := $(install_bindir_xprograms) $(BINDIR_PROGRAMS_NO_X)
.PHONY: profile-install profile-fast-install
profile-install: profile
$(MAKE) install
profile-fast-install: profile-fast
$(MAKE) install
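# Illustrative: either target above performs a profile-guided build (see
# the PROFILE notes in INSTALL) and then installs the result, e.g.:
#
#   make prefix=/usr/local profile-install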
INSTALL_STRIP =
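# INSTALL_STRIP is empty by default and is passed straight to $(INSTALL)
# below; assuming an install(1) that supports -s, binaries can be
# stripped at install time with e.g. "make install INSTALL_STRIP=-s".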
install: all
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(bindir_SQ)'
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
$(INSTALL) $(INSTALL_STRIP) $(PROGRAMS) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
$(INSTALL) $(SCRIPTS) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
$(INSTALL) -m 644 $(SCRIPT_LIB) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
$(INSTALL) $(INSTALL_STRIP) $(install_bindir_xprograms) '$(DESTDIR_SQ)$(bindir_SQ)'
$(INSTALL) $(BINDIR_PROGRAMS_NO_X) '$(DESTDIR_SQ)$(bindir_SQ)'
ifdef MSVC
# We DO NOT install the individual foo.o.pdb files because they
# have already been rolled up into the exe's pdb file.
# We DO NOT have pdb files for the builtin commands (like git-status.exe)
# because it is just a copy/hardlink of git.exe, rather than a unique binary.
$(INSTALL) $(patsubst %.exe,%.pdb,$(filter-out $(BUILT_INS),$(patsubst %,%$X,$(BINDIR_PROGRAMS_NEED_X)))) '$(DESTDIR_SQ)$(bindir_SQ)'
$(INSTALL) $(patsubst %.exe,%.pdb,$(filter-out $(BUILT_INS) $(REMOTE_CURL_ALIASES),$(PROGRAMS))) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
ifndef DEBUG
$(INSTALL) $(vcpkg_rel_bin)/*.dll '$(DESTDIR_SQ)$(bindir_SQ)'
$(INSTALL) $(vcpkg_rel_bin)/*.pdb '$(DESTDIR_SQ)$(bindir_SQ)'
else
$(INSTALL) $(vcpkg_dbg_bin)/*.dll '$(DESTDIR_SQ)$(bindir_SQ)'
$(INSTALL) $(vcpkg_dbg_bin)/*.pdb '$(DESTDIR_SQ)$(bindir_SQ)'
endif
endif
$(MAKE) -C templates DESTDIR='$(DESTDIR_SQ)' install
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(mergetools_instdir_SQ)'
$(INSTALL) -m 644 mergetools/* '$(DESTDIR_SQ)$(mergetools_instdir_SQ)'
ifndef NO_GETTEXT
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(localedir_SQ)'
(cd po/build/locale && $(TAR) cf - .) | \
(cd '$(DESTDIR_SQ)$(localedir_SQ)' && umask 022 && $(TAR) xof -)
endif
ifndef NO_PERL
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perllibdir_SQ)'
(cd perl/build/lib && $(TAR) cf - .) | \
(cd '$(DESTDIR_SQ)$(perllibdir_SQ)' && umask 022 && $(TAR) xof -)
endif
ifndef NO_TCLTK
$(MAKE) -C gitk-git install
$(MAKE) -C git-gui gitexecdir='$(gitexec_instdir_SQ)' install
endif
ifneq (,$X)
$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_COMMANDS_TO_INSTALL) $(OTHER_PROGRAMS))), test '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p' -ef '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p$X' || $(RM) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p';)
endif
bindir=$$(cd '$(DESTDIR_SQ)$(bindir_SQ)' && pwd) && \
execdir=$$(cd '$(DESTDIR_SQ)$(gitexec_instdir_SQ)' && pwd) && \
destdir_from_execdir_SQ=$$(echo '$(gitexecdir_relative_SQ)' | sed -e 's|[^/][^/]*|..|g') && \
{ test "$$bindir/" = "$$execdir/" || \
for p in $(OTHER_PROGRAMS) $(filter $(install_bindir_programs),$(ALL_PROGRAMS)); do \
$(RM) "$$execdir/$$p" && \
test -n "$(INSTALL_SYMLINKS)" && \
ln -s "$$destdir_from_execdir_SQ/$(bindir_relative_SQ)/$$p" "$$execdir/$$p" || \
{ test -z "$(NO_INSTALL_HARDLINKS)$(NO_CROSS_DIRECTORY_HARDLINKS)" && \
ln "$$bindir/$$p" "$$execdir/$$p" 2>/dev/null || \
cp "$$bindir/$$p" "$$execdir/$$p" || exit; } \
done; \
} && \
for p in $(filter $(install_bindir_programs),$(BUILT_INS)); do \
$(RM) "$$bindir/$$p" && \
test -n "$(INSTALL_SYMLINKS)" && \
ln -s "git$X" "$$bindir/$$p" || \
{ test -z "$(NO_INSTALL_HARDLINKS)" && \
ln "$$bindir/git$X" "$$bindir/$$p" 2>/dev/null || \
ln -s "git$X" "$$bindir/$$p" 2>/dev/null || \
cp "$$bindir/git$X" "$$bindir/$$p" || exit; }; \
done && \
for p in $(BUILT_INS); do \
$(RM) "$$execdir/$$p" && \
if test -z "$(SKIP_DASHED_BUILT_INS)"; \
then \
test -n "$(INSTALL_SYMLINKS)" && \
ln -s "$$destdir_from_execdir_SQ/$(bindir_relative_SQ)/git$X" "$$execdir/$$p" || \
{ test -z "$(NO_INSTALL_HARDLINKS)" && \
ln "$$execdir/git$X" "$$execdir/$$p" 2>/dev/null || \
ln -s "git$X" "$$execdir/$$p" 2>/dev/null || \
cp "$$execdir/git$X" "$$execdir/$$p" || exit; }; \
fi \
done && \
remote_curl_aliases="$(REMOTE_CURL_ALIASES)" && \
for p in $$remote_curl_aliases; do \
$(RM) "$$execdir/$$p" && \
Makefile: optionally symlink libexec/git-core binaries to bin/git Add a INSTALL_SYMLINKS option which if enabled, changes the default hardlink installation method to one where the relevant binaries in libexec/git-core are symlinked back to ../../bin, instead of being hardlinked. This new option also overrides the behavior of the existing NO_*_HARDLINKS variables which in some cases would produce symlinks within to libexec/, e.g. "git-add" symlinked to "git" which would be copy of the "git" found in bin/, now "git-add" in libexec/ is always going to be symlinked to the "git" found in the bin/ directory. This option is being added because: 1) I think it makes what we're doing a lot more obvious. E.g. I'd never noticed that the libexec binaries were really just hardlinks since e.g. ls(1) won't show that in any obvious way. You need to start stat(1)-ing things and look at the inodes to see what's going on. 2) Some tools have very crappy support for hardlinks, e.g. the Git shipped with GitLab is much bigger than it should be because they're using a chef module that doesn't know about hardlinks, see https://github.com/chef/omnibus/issues/827 I've also ran into other related issues that I think are explained by this, e.g. compiling git with debugging and rpm refusing to install a ~200MB git package with 2GB left on the FS, I think that was because it doesn't consider hardlinks, just the sum of the byte size of everything in the package. As for the implementation, the "../../bin" noted above will vary given some given some values of "../.." and "bin" depending on the depth of the gitexecdir relative to the destdir, and the "bindir" target, e.g. setting "bindir=/tmp/git/binaries gitexecdir=foo/bar/baz" will do the right thing and produce this result: $ file /tmp/git/foo/bar/baz/git-add /tmp/git/foo/bar/baz/git-add: symbolic link to ../../../binaries/git Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-13 21:39:35 +01:00
test -n "$(INSTALL_SYMLINKS)" && \
ln -s "git-remote-http$X" "$$execdir/$$p" || \
{ test -z "$(NO_INSTALL_HARDLINKS)" && \
ln "$$execdir/git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \
ln -s "git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \
cp "$$execdir/git-remote-http$X" "$$execdir/$$p" || exit; } \
done
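# Illustrative: the linking strategy used by the loops above can be
# tuned with the knobs documented earlier in this file, e.g.:
#
#   make install INSTALL_SYMLINKS=1             # symlink programs back to bin/git
#   make install NO_INSTALL_HARDLINKS=1         # prefer symlinks over hardlinks
#   make SKIP_DASHED_BUILT_INS=YesPlease install  # omit the dashed git-add etc.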
.PHONY: install-doc install-man install-man-perl install-html install-info install-pdf
.PHONY: quick-install-doc quick-install-man quick-install-html
install-doc: install-man-perl
$(MAKE) -C Documentation install
install-man: install-man-perl
$(MAKE) -C Documentation install-man
install-man-perl: man-perl
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(mandir_SQ)/man3'
(cd perl/build/man/man3 && $(TAR) cf - .) | \
(cd '$(DESTDIR_SQ)$(mandir_SQ)/man3' && umask 022 && $(TAR) xof -)
install-html:
$(MAKE) -C Documentation install-html
install-info:
$(MAKE) -C Documentation install-info
install-pdf:
$(MAKE) -C Documentation install-pdf
quick-install-doc:
$(MAKE) -C Documentation quick-install
quick-install-man:
$(MAKE) -C Documentation quick-install-man
quick-install-html:
$(MAKE) -C Documentation quick-install-html
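# Illustrative: the targets above all delegate to Documentation/Makefile;
# e.g. "make install-man" formats and installs the man pages, while
# "make quick-install-man" installs pre-formatted pages from a sibling
# checkout of the git-manpages repository (see Documentation/Makefile).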
### Maintainer's dist rules
GIT_TARNAME = git-$(GIT_VERSION)
GIT_ARCHIVE_EXTRA_FILES = \
--prefix=$(GIT_TARNAME)/ \
--add-file=configure \
--add-file=.dist-tmp-dir/version \
--prefix=$(GIT_TARNAME)/git-gui/ \
--add-file=.dist-tmp-dir/git-gui/version
ifdef DC_SHA1_SUBMODULE
GIT_ARCHIVE_EXTRA_FILES += \
--prefix=$(GIT_TARNAME)/sha1collisiondetection/ \
--add-file=sha1collisiondetection/LICENSE.txt \
--prefix=$(GIT_TARNAME)/sha1collisiondetection/lib/ \
--add-file=sha1collisiondetection/lib/sha1.c \
--add-file=sha1collisiondetection/lib/sha1.h \
--add-file=sha1collisiondetection/lib/ubc_check.c \
--add-file=sha1collisiondetection/lib/ubc_check.h
endif
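# Note: "git archive" applies the most recent --prefix to each
# --add-file that follows it, so e.g. .dist-tmp-dir/version above ends
# up as $(GIT_TARNAME)/version inside the generated tarball.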
dist: git-archive$(X) configure
@$(RM) -r .dist-tmp-dir
@mkdir .dist-tmp-dir
@echo $(GIT_VERSION) > .dist-tmp-dir/version
@$(MAKE) -C git-gui TARDIR=../.dist-tmp-dir/git-gui dist-version
./git-archive --format=tar \
$(GIT_ARCHIVE_EXTRA_FILES) \
--prefix=$(GIT_TARNAME)/ HEAD^{tree} > $(GIT_TARNAME).tar
@$(RM) -r .dist-tmp-dir
gzip -f -9 $(GIT_TARNAME).tar
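# Illustrative: "make dist" leaves a git-$(GIT_VERSION).tar.gz in the
# top-level directory, e.g. (version number hypothetical):
#
#   make dist && tar tzf git-2.44.0.tar.gz | head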
rpm::
@echo >&2 "Use distro packaged sources to run rpmbuild"
@false
.PHONY: rpm
ifneq ($(INCLUDE_DLLS_IN_ARTIFACTS),)
OTHER_PROGRAMS += $(shell echo *.dll t/helper/*.dll t/unit-tests/bin/*.dll)
endif
artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
$(UNIT_TEST_PROGS) $(MOFILES)
$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
test -n "$(ARTIFACTS_DIRECTORY)"
mkdir -p "$(ARTIFACTS_DIRECTORY)"
$(TAR) czf "$(ARTIFACTS_DIRECTORY)/artifacts.tar.gz" $^ templates/blt/
.PHONY: artifacts-tar
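# A sketch of a typical invocation; the destination below is purely
# illustrative (any writable directory works, and the recipe refuses to
# run when ARTIFACTS_DIRECTORY is unset):
#
#	make ARTIFACTS_DIRECTORY=/tmp/git-artifacts artifacts-tar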
htmldocs = git-htmldocs-$(GIT_VERSION)
manpages = git-manpages-$(GIT_VERSION)
.PHONY: dist-doc distclean
dist-doc: git$X
$(RM) -r .doc-tmp-dir
mkdir .doc-tmp-dir
$(MAKE) -C Documentation WEBDOC_DEST=../.doc-tmp-dir install-webdoc
./git -C .doc-tmp-dir init
./git -C .doc-tmp-dir add .
./git -C .doc-tmp-dir commit -m htmldocs
./git -C .doc-tmp-dir archive --format=tar --prefix=./ HEAD^{tree} \
> $(htmldocs).tar
gzip -n -9 -f $(htmldocs).tar
:
$(RM) -r .doc-tmp-dir
mkdir -p .doc-tmp-dir/man1 .doc-tmp-dir/man5 .doc-tmp-dir/man7
$(MAKE) -C Documentation DESTDIR=./ \
man1dir=../.doc-tmp-dir/man1 \
man5dir=../.doc-tmp-dir/man5 \
man7dir=../.doc-tmp-dir/man7 \
install
./git -C .doc-tmp-dir init
./git -C .doc-tmp-dir add .
./git -C .doc-tmp-dir commit -m manpages
./git -C .doc-tmp-dir archive --format=tar --prefix=./ HEAD^{tree} \
> $(manpages).tar
gzip -n -9 -f $(manpages).tar
$(RM) -r .doc-tmp-dir
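# For example, a plain
#
#	make dist-doc
#
# leaves git-htmldocs-$(GIT_VERSION).tar.gz and
# git-manpages-$(GIT_VERSION).tar.gz in the top-level directory, each
# built from a throwaway git repository created in .doc-tmp-dir.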
### Cleaning rules
distclean: clean
$(RM) configure
$(RM) config.log config.status config.cache
$(RM) config.mak.autogen config.mak.append
$(RM) -r autom4te.cache
profile-clean:
$(RM) $(addsuffix *.gcda,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
$(RM) $(addsuffix *.gcno,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
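# (The *.gcda and *.gcno files removed here are the profiling results
# and compiler notes left behind by profile-directed builds such as
# "make PROFILE=BUILD all".)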
cocciclean:
$(RM) GIT-SPATCH-DEFINES
$(RM) -r .build/contrib/coccinelle
$(RM) contrib/coccinelle/*.cocci.patch
clean: profile-clean coverage-clean cocciclean
$(RM) -r .build $(UNIT_TEST_BIN)
$(RM) po/git.pot po/git-core.pot
$(RM) git.res
$(RM) $(OBJECTS)
$(RM) headless-git.o
$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
$(RM) $(TEST_PROGRAMS)
$(RM) $(FUZZ_PROGRAMS)
$(RM) $(SP_OBJ)
$(RM) $(HCC)
$(RM) -r bin-wrappers $(dep_dirs) $(compdb_dir) compile_commands.json
$(RM) -r po/build/
$(RM) *.pyc *.pyo */*.pyc */*.pyo $(GENERATED_H) $(ETAGS_TARGET) tags cscope*
$(RM) -r .dist-tmp-dir .doc-tmp-dir
$(RM) $(GIT_TARNAME).tar.gz
$(RM) $(htmldocs).tar.gz $(manpages).tar.gz
$(MAKE) -C Documentation/ clean
$(RM) Documentation/GIT-EXCLUDED-PROGRAMS
ifndef NO_PERL
$(RM) -r perl/build/
endif
$(MAKE) -C templates/ clean
$(MAKE) -C t/ clean
ifndef NO_TCLTK
$(MAKE) -C gitk-git clean
$(MAKE) -C git-gui clean
endif
$(RM) GIT-VERSION-FILE GIT-CFLAGS GIT-LDFLAGS GIT-BUILD-OPTIONS
$(RM) GIT-USER-AGENT GIT-PREFIX
$(RM) GIT-SCRIPT-DEFINES GIT-PERL-DEFINES GIT-PERL-HEADER GIT-PYTHON-VARS
ifdef MSVC
$(RM) $(patsubst %.o,%.o.pdb,$(OBJECTS))
$(RM) headless-git.o.pdb
$(RM) $(patsubst %.exe,%.pdb,$(OTHER_PROGRAMS))
$(RM) $(patsubst %.exe,%.iobj,$(OTHER_PROGRAMS))
$(RM) $(patsubst %.exe,%.ipdb,$(OTHER_PROGRAMS))
$(RM) $(patsubst %.exe,%.pdb,$(PROGRAMS))
$(RM) $(patsubst %.exe,%.iobj,$(PROGRAMS))
$(RM) $(patsubst %.exe,%.ipdb,$(PROGRAMS))
$(RM) $(patsubst %.exe,%.pdb,$(TEST_PROGRAMS))
$(RM) $(patsubst %.exe,%.iobj,$(TEST_PROGRAMS))
$(RM) $(patsubst %.exe,%.ipdb,$(TEST_PROGRAMS))
$(RM) compat/vcbuild/MSVC-DEFS-GEN
endif
.PHONY: all install profile-clean cocciclean clean strip
.PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell
.PHONY: FORCE
### Check documentation
#
ALL_COMMANDS = $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB)
ALL_COMMANDS += git
ALL_COMMANDS += git-citool
ALL_COMMANDS += git-gui
ALL_COMMANDS += gitk
ALL_COMMANDS += gitweb
ALL_COMMANDS += scalar
.PHONY: check-docs
check-docs::
$(MAKE) -C Documentation lint-docs
@(for v in $(patsubst %$X,%,$(ALL_COMMANDS)); \
do \
case "$$v" in \
git-merge-octopus | git-merge-ours | git-merge-recursive | \
git-merge-resolve | git-merge-subtree | \
git-fsck-objects | git-init-db | \
git-remote-* | git-stage | git-legacy-* | \
git-?*--?* ) continue ;; \
esac ; \
test -f "Documentation/$$v.txt" || \
echo "no doc: $$v"; \
sed -e '1,/^### command list/d' -e '/^#/d' command-list.txt | \
grep -q "^$$v[ ]" || \
case "$$v" in \
git) ;; \
*) echo "no link: $$v";; \
esac ; \
done; \
( \
sed -e '1,/^### command list/d' \
-e '/^#/d' \
-e '/guide$$/d' \
-e '/interfaces$$/d' \
-e 's/[ ].*//' \
-e 's/^/listed /' command-list.txt; \
$(MAKE) -C Documentation print-man1 | \
grep '\.txt$$' | \
sed -e 's|^|documented |' \
-e 's/\.txt//'; \
) | while read how cmd; \
do \
case " $(patsubst %$X,%,$(ALL_COMMANDS) $(BUILT_INS) $(EXCLUDED_PROGRAMS)) " in \
*" $$cmd "*) ;; \
*) echo "removed but $$how: $$cmd" ;; \
esac; \
done ) | sort
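# In other words, "make check-docs" flags three kinds of mismatch
# between the built commands and the documentation:
#
#	no doc: <cmd>         no Documentation/<cmd>.txt exists
#	no link: <cmd>        <cmd> is missing from command-list.txt
#	removed but <how>: <cmd>
#	                      command-list.txt ("listed") or the man pages
#	                      ("documented") mention a command that is no
#	                      longer built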
### Make sure built-ins do not have dups and are listed in git.c
#
check-builtins::
./check-builtins.sh
### Test suite coverage testing
#
.PHONY: coverage coverage-clean coverage-compile coverage-test coverage-report
.PHONY: coverage-untested-functions cover_db cover_db_html
.PHONY: coverage-clean-results coverage-prove
coverage:
$(MAKE) coverage-test
$(MAKE) coverage-untested-functions
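# I.e. a bare "make coverage" rebuilds everything with gcov
# instrumentation, runs the test suite serially (note the forced -j1),
# and leaves the summary in the coverage-untested-functions file.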
object_dirs := $(sort $(dir $(OBJECTS)))
coverage-clean-results:
$(RM) $(addsuffix *.gcov,$(object_dirs))
$(RM) $(addsuffix *.gcda,$(object_dirs))
$(RM) coverage-untested-functions
$(RM) -r cover_db/
$(RM) -r cover_db_html/
$(RM) coverage-test.made
coverage-clean: coverage-clean-results
$(RM) $(addsuffix *.gcno,$(object_dirs))
COVERAGE_CFLAGS = $(CFLAGS) -O0 -ftest-coverage -fprofile-arcs
COVERAGE_LDFLAGS = $(CFLAGS) -O0 -lgcov
GCOVFLAGS = --preserve-paths --branch-probabilities --all-blocks
coverage-compile:
$(MAKE) CFLAGS="$(COVERAGE_CFLAGS)" LDFLAGS="$(COVERAGE_LDFLAGS)" all
coverage-test: coverage-clean-results coverage-compile
$(MAKE) CFLAGS="$(COVERAGE_CFLAGS)" LDFLAGS="$(COVERAGE_LDFLAGS)" \
DEFAULT_TEST_TARGET=test -j1 test
touch coverage-test.made
coverage-test.made:
$(MAKE) coverage-test
coverage-prove: coverage-clean-results coverage-compile
$(MAKE) CFLAGS="$(COVERAGE_CFLAGS)" LDFLAGS="$(COVERAGE_LDFLAGS)" \
DEFAULT_TEST_TARGET=prove GIT_PROVE_OPTS="$(GIT_PROVE_OPTS) -j1" \
-j1 test
coverage-report: coverage-test.made
$(QUIET_GCOV)for dir in $(object_dirs); do \
$(GCOV) $(GCOVFLAGS) --object-directory=$$dir $$dir*.c || exit; \
done
coverage-untested-functions: coverage-report
grep '^function.*called 0 ' *.c.gcov \
| sed -e 's/\([^:]*\)\.gcov: *function \([^ ]*\) called.*/\1: \2/' \
> coverage-untested-functions
cover_db: coverage-report
gcov2perl -db cover_db *.gcov
cover_db_html: cover_db
cover -report html -outputdir cover_db_html cover_db
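# A typical end-to-end flow for an HTML report might therefore be just
# (gcov2perl and cover come from the Devel::Cover CPAN distribution):
#
#	make cover_db_html
#
# which instruments the build, runs the tests, and leaves the rendered
# report under cover_db_html/.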
### Fuzz testing
#
# Building fuzz targets generally requires a special set of compiler flags that
# are not necessarily appropriate for general builds, and that vary greatly
# depending on the compiler version used.
#
# An example command to build against libFuzzer from LLVM 11.0.0:
#
# make CC=clang FUZZ_CXX=clang++ \
# CFLAGS="-fsanitize=fuzzer-no-link,address" \
# LIB_FUZZING_ENGINE="-fsanitize=fuzzer,address" \
# fuzz-all
#
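# Once built, a single fuzzer can then be run directly against a corpus
# directory, e.g. (the corpus path is illustrative, not something this
# Makefile provides):
#
#	./oss-fuzz/fuzz-pack-headers corpus/
#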
FUZZ_CXX ?= $(CC)
FUZZ_CXXFLAGS ?= $(ALL_CFLAGS)
.PHONY: fuzz-all
fuzz-all: $(FUZZ_PROGRAMS)
$(FUZZ_PROGRAMS): %: %.o oss-fuzz/dummy-cmd-main.o $(GITLIBS) GIT-LDFLAGS
$(QUIET_LINK)$(FUZZ_CXX) $(FUZZ_CXXFLAGS) -o $@ $(ALL_LDFLAGS) \
-Wl,--allow-multiple-definition \
$(filter %.o,$^) $(filter %.a,$^) $(LIBS) $(LIB_FUZZING_ENGINE)
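# Note that with LIB_FUZZING_ENGINE left empty, the fuzzers are still
# linked, with oss-fuzz/dummy-cmd-main.o standing in for a real fuzzing
# engine; that is enough to catch link-time bitrot even without a
# fuzzing toolchain.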
$(UNIT_TEST_PROGS): $(UNIT_TEST_BIN)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS
$(call mkdir_p_parent_template)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
.PHONY: build-unit-tests unit-tests
build-unit-tests: $(UNIT_TEST_PROGS)
unit-tests: $(UNIT_TEST_PROGS)
$(MAKE) -C t/ unit-tests
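# For example, to build the unit tests without running them and then
# invoke one binary by hand ($(UNIT_TEST_BIN) is t/unit-tests/bin, and
# t-strbuf is one of the programs built there):
#
#	make build-unit-tests
#	./t/unit-tests/bin/t-strbuf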