Valgrind says:
==18344== Conditional jump or move depends on uninitialised value(s)
==18344== at 0x406C83: open_slot (cache.c:63)
==18344== by 0x407478: cache_ls (cache.c:403)
==18344== by 0x404C9A: process_request (cgit.c:639)
==18344== by 0x406BD2: fill_slot (cache.c:190)
==18344== by 0x4071A0: cache_process (cache.c:284)
==18344== by 0x404461: main (cgit.c:952)
==18344== Uninitialised value was created by a stack allocation
==18344== at 0x40738B: cache_ls (cache.c:375)
This is caused by the keylen field being used to calculate whether or
not a slot matches. We never go on to check the result of that
comparison, and the length of data read depends on the key length read
from the file, so this isn't dangerous; but it's nice to avoid branching
based on uninitialized data.
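For illustration, one way to keep such a branch off uninitialized memory is
to zero the stack-allocated slot before use (the struct below is a
trimmed-down stand-in, not cgit's cache_slot, and this is not necessarily
the exact fix applied here):

    #include <string.h>

    /* Illustrative only: zeroing the slot up front means no later
     * "does the key match?" test can read indeterminate values. */
    struct slot_sketch {
            const char *key;
            size_t keylen;
            int match;
    };

    static void init_slot(struct slot_sketch *slot, const char *key)
    {
            memset(slot, 0, sizeof(*slot));
            slot->key = key;
            slot->keylen = key ? strlen(key) : 0;
    }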
Signed-off-by: John Keeping <john@keeping.me.uk>
* Name "cgit Development Team" as copyright holder to avoid listing
every single developer.
* Update copyright ranges.
Signed-off-by: Lukas Fleischer <cgit@cryptocrack.de>
Because cache_ls uses the standard library's printf, its output is not
redirected to the cache when we change the process's stdout file
descriptor to point to the cache file. Fix this by using "htmlf" in the
same way that we do for writing HTTP headers.
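The underlying hazard can be shown in isolation (a self-contained
illustration, not cgit's code; the file name is made up): data printed
through stdio may still sit in its userspace buffer when the descriptor is
switched back, so it misses the cache file, while low-level writes land
where intended.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            int saved = dup(STDOUT_FILENO);
            int cache = open("cache.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);

            dup2(cache, STDOUT_FILENO);
            printf("buffered line\n");              /* may stay in the stdio buffer */
            write(STDOUT_FILENO, "raw line\n", 9);  /* goes straight into cache.tmp */

            dup2(saved, STDOUT_FILENO);             /* restore the real stdout */
            close(cache);
            return 0;                               /* the buffer flushes here, too late */
    }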
Signed-off-by: John Keeping <john@keeping.me.uk>
Commit fb3655d (use struct strbuf instead of static buffers, 2013-04-06)
broke the logic in cache.c::cache_ls by failing to set slot->cache_name
before calling open_slot.
While fixing this, also free the strbufs added by that commit once we're
done with them.
Signed-off-by: John Keeping <john@keeping.me.uk>
Use "struct strbuf" from Git to remove the limit on file path length.
Notes on scan-tree:
This is slightly involved since I decided to pass the strbuf into
add_repo() and modify it whenever a new file name is required, which
should avoid any extra allocations within that function. The pattern
there is to append the filename, use it and then reset the buffer to its
original length (retaining a trailing '/').
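In sketch form the pattern is (this assumes git's strbuf API; the file name
and the read_last_modified() consumer are made up for illustration):

    static void visit_one_name(struct strbuf *path)
    {
            size_t pathlen = path->len;        /* path ends with a trailing '/' */

            strbuf_addstr(path, "info/web/last-modified");
            read_last_modified(path->buf);     /* hypothetical consumer */
            strbuf_setlen(path, pathlen);      /* back to ".../", no reallocation */
    }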
Notes on ui-snapshot:
Since write_archive modifies the argv array passed to it, we copy the
argv_array values into a new array of char* and then free the original
argv_array structure and the new array without worrying about what the
values now look like.
Signed-off-by: John Keeping <john@keeping.me.uk>
* Remove whitespace at the end of lines.
* Replace space indentation by tabs.
* Add whitespace before/after several operators ("+", "-", "*", ...)
* Add whitespace to assignments ("foo = bar;").
* Fix whitespace in parameter lists ("foobar(foo, bar, 42)").
Signed-off-by: Lukas Fleischer <cgit@cryptocrack.de>
The type used to declare the st_size field of a 'struct stat' can
be a 32- or 64-bit sized type, which can vary from one platform to
another, or even from one compilation to another. In particular,
on linux, if you include the following define:
#define _FILE_OFFSET_BITS 64
prior to including certain system header files, then the type used
for the st_size field will be __off64_t, otherwise it will be an
__off_t. Note that the above define is included at the top of
git-compat-util.h.
In cache.c, the "%zd" format specifier expects a "signed size_t",
another type which can vary, but an __off64_t or an __off_t is
provided instead. To suppress the warning, use the PRIuMAX format
specifier and cast the st_size field to uintmax_t. This should work on
any platform for which git currently compiles.
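In sketch form (illustrative code, not the patch itself):

    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* st_size may be 32 or 64 bits wide depending on _FILE_OFFSET_BITS, so
     * cast it to uintmax_t and format with PRIuMAX instead of "%zd". */
    static void print_size(const struct stat *st)
    {
            printf("%" PRIuMAX "\n", (uintmax_t) st->st_size);
    }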
In ui-plain.c, the size parameter of sha1_object_info() and
read_sha1_file() is defined to be "unsigned long *", not "size_t *".
So, to suppress the warning, simply declare size with the correct type.
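In outline (the call shapes follow the git API of that era; the surrounding
code is illustrative):

    unsigned long size;     /* was: size_t size; */
    enum object_type type;
    char *buf;

    type = sha1_object_info(sha1, &size);
    buf = read_sha1_file(sha1, &type, &size);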
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
The change to print_slot() in cdc6b2f8e7a8d43dcfe0475a9d3498333ea686b8 made
the function return the correct errno for read errors while ignoring write
errors, which is not what was intended. This patch tries to rectify things.
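In outline, the intended behaviour is (an illustration only, not the patch
itself; xread()/xwrite() are git's EINTR-safe read/write wrappers):

    static int copy_slot_sketch(int cache_fd, int out_fd)
    {
            char buf[4096];
            ssize_t len;

            do {
                    len = xread(cache_fd, buf, sizeof(buf));
                    if (len > 0 && xwrite(out_fd, buf, len) != len)
                            return errno;           /* report write errors */
            } while (len > 0);

            return (len < 0) ? errno : 0;           /* and read errors too */
    }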
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
If print_slot() fails, the client will be served an inferior response.
This patch makes sure that such an error will be returned to main(), which
in turn will try to inform about the error in the response itself.
The error is also printed to the cache_log, i.e. stderr, which will make
the error message appear in error_log (at least when httpd==apache).
Noticed-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
These functions handle EINTR/EAGAIN errors during read/write operations,
which is something cache.c didn't do.
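The retry behaviour, in sketch form (an illustration of what such wrappers
do, not their actual source):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t read_retry(int fd, void *buf, size_t count)
    {
            ssize_t n;

            /* restart the syscall when it was interrupted or would block */
            do {
                    n = read(fd, buf, count);
            } while (n < 0 && (errno == EINTR || errno == EAGAIN));

            return n;
    }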
While at it, fix a bug in print_slot() where errors during reading from the
cache slot might go unnoticed.
Noticed-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
We'll need proper return values from these functions to make the cache
behave correctly (which includes giving proper error messages).
Noticed-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
This new page will list all entries found in the current cache, which is
useful when reviewing the new cache implementation. There are no links to
the new page, but it's reachable by adding 'p=ls_cache' to any cgit url.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
The original caching layer in cgit has no upper bound on the number of
concurrent cache entries, so when cgit is traversed by a spider (like the
googlebot), the cache might end up filling your disk. Also, if any error
occurs in the cache layer, no content is returned to the client.
This patch redesigns the caching layer to avoid these flaws by
* giving the cache a bounded number of slots
* disabling the cache for the current request when errors occur
The cache size limit is implemented by hashing the querystring (the cache
lookup key) and generating a cache filename based on this hash modulo the
cache size. In order to detect hash collisions, the full lookup key (i.e.
the querystring) is stored in the cache file (separated from its associated
content by ascii 0).
The cache filename is the reversed 8-digit hexadecimal representation of
hash(key) % cache_size
which should make the filesystem lookup pretty fast (if directory content
is indexed/sorted); reversing the representation avoids the problem of all
the generated filenames sharing a common prefix.
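For illustration, the naming scheme could be sketched like this (the hash
function and names are made up, not cgit's code; it assumes cache_size > 0):

    #include <stdio.h>

    static unsigned long hash_key(const char *key)
    {
            unsigned long h = 5381;

            while (*key)
                    h = h * 33 + (unsigned char) *key++;
            return h;
    }

    /* 8-digit hex of hash(key) % cache_size, reversed so that the varying
     * digits come first and slot names do not share a common prefix. */
    static void slot_name(char name[9], const char *key, unsigned long cache_size)
    {
            char hex[9];
            int i;

            snprintf(hex, sizeof(hex), "%08lx", hash_key(key) % cache_size);
            for (i = 0; i < 8; i++)
                    name[i] = hex[7 - i];
            name[8] = '\0';
    }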
There is a new config option, cache-size, which sets the upper bound for
the cache. The default value for this option is 0, which has the same
effect as setting nocache=1 (hence nocache is now deprecated).
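In cgitrc this looks like (the value here is only an example):

    # allow at most 1000 concurrent cache slots; the default of 0 disables caching
    cache-size=1000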
Included in this patch is also a new testfile which verifies that the
new option works as intended.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
The functions found in cache.c are only used by cgit.c, so there's no
point in rebuilding all object files when the cache interface is changed.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
This removes the global variable which is used to keep track of the
currently selected repository, and adds a new variable in the cgit_context
structure.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
This removes another big set of global variables, and introduces the
cgit_prepare_context() function which populates a context-variable with
compile-time default values.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
This struct will hold all the cgit runtime information currently found in
a multitude of global variables.
The first cleanup removes all querystring-related variables.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
The single static buffer makes it impossible to use the result of two
different calls to this function simultaneously. Fix it by using 4
buffers.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
This makes it possible to use repo-urls like '/pub/scm/git/git.git' and
even add path specifications, like '/pub/scm/git/git.git/log/documentation'.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
Make sure we chdir(2) back to the original getcwd(2) when a page
has been generated. Also, if the cgit_cache_root does not exist,
try to create it.
This is a feature intended to ease testing/debugging.
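In sketch form (error handling trimmed, names illustrative):

    #include <errno.h>
    #include <limits.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void generate_page_sketch(const char *cache_root)
    {
            char curdir[PATH_MAX];

            if (!getcwd(curdir, sizeof(curdir)))
                    return;
            if (mkdir(cache_root, S_IRWXU) && errno != EEXIST)
                    return;         /* could not create the cache root */
            /* ... chdir() into the repository and generate the page ... */
            chdir(curdir);          /* always return to where we started */
    }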
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
Since fmt() uses 8 alternating static buffers, and cache_lock might call
cache_create_dirs() multiple times, which in turn might call fmt() twice,
after four iterations lockfile would be overwritten by a cachedirectory
path.
In the worst case, this could cause the cachedirectory to be unlinked and
replaced by a cachefile.
Fix: use xstrdup() on the result from fmt() before assigning to lockfile, and
call free(lockfile) before exit.
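The hazard and the fix, in sketch form (the fmt() below is a stand-in with
the same rotating-buffer behaviour, not cgit's code):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *fmt(const char *format, ...)
    {
            static char buf[8][1024];
            static int idx;
            va_list args;

            idx = (idx + 1) % 8;
            va_start(args, format);
            vsnprintf(buf[idx], sizeof(buf[idx]), format, args);
            va_end(args);
            return buf[idx];
    }

    int main(void)
    {
            /* keeping the raw pointer would be unsafe: after enough further
             * fmt() calls the buffer is reused for another string, so
             * duplicate the result to pin it down */
            char *lockfile = strdup(fmt("%s.lock", "/var/cache/cgit/index"));

            /* ... cache_create_dirs() may call fmt() several more times ... */
            puts(lockfile);
            free(lockfile);
            return 0;
    }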
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
An embarrassing thinko in cgit_check_cache() would truncate valid cachefiles
in the following situation:
1) process A notices a missing/expired cachefile
2) process B gets scheduled, locks, fills and unlocks the cachefile
3) process A gets scheduled, locks the cachefile, notices that the cachefile
   now exists/is no longer expired, and still continues to overwrite it with
   an empty lockfile.
Thanks to Linus for noticing (again).
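The corrected flow, in sketch form (the function names are illustrative,
not cgit's):

    /* only refill after re-checking under the lock; another process may
     * have refilled the cachefile while we were waiting for it */
    if (cache_lock(item)) {
            if (cache_expired(item))
                    fill_cache(item);        /* still stale: regenerate */
            else
                    cache_cancel_lock(item); /* fresh: keep it, drop the lock */
    }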
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
Add a global variable, cgit_max_lock_attempts, to avoid the possibility of
infinite loops when failing to acquire a lockfile. This could happen on
broken setups or under crazy server load.
Incidentally, this also fixes a lurking bug in cache_lock() where an
uninitialized return value was used.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
This enables internal caching of page output.
Page requests are split into four groups:
1) repo listing (front page)
2) repo summary
3) repo pages w/symbolic references in query string
4) repo pages w/constant sha1's in query string
Each group has a TTL specified in minutes. When a page is requested, the cached
filename is stat(2)'ed and st_mtime is compared to time(2). If the TTL has
expired (or the file doesn't exist), the cached file is regenerated.
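The expiry test, in sketch form (names illustrative):

    #include <sys/stat.h>
    #include <time.h>

    /* a cached file is regenerated when it is missing or when its mtime is
     * more than ttl minutes in the past */
    static int cache_expired(const char *path, time_t ttl_minutes)
    {
            struct stat st;

            if (stat(path, &st))
                    return 1;
            return st.st_mtime + ttl_minutes * 60 < time(NULL);
    }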
When generating a cached file, locking is used to avoid parallel processing
of the request. If multiple processes try to acquire the same lock, the ones
that fail to get the lock serve the (expired) cached file. If the cached file
doesn't exist, the process instead calls sched_yield(2) before restarting the
request processing.
Signed-off-by: Lars Hjemli <hjemli@gmail.com>