…ension Add regions around code to read and write the cache-tree extension when the index is read or written. This is an experiment and may be dropped in future releases if we no longer need it. This experiment demonstrates that it takes more time to parse and deserialize the cache-tree extension than it does to read the cache entries. Commits [1] and [2] spread cache-entry reading across N-1 cores and dedicate a single core to simultaneously reading the index extensions. Local testing (on my machine) shows that reading the cache-tree extension takes ~0.28 seconds, while the 11 cache-entry threads take ~0.08 seconds; the main thread is blocked for 0.15 to 0.20 seconds waiting for the extension thread to finish. Let's use this commit to gather some telemetry and confirm this. My point is that improvements such as index V5, which makes the cache entries smaller, may improve performance, but the gains may be limited because of this extension. And we may need to look inside the cache-tree extension to truly improve do_read_index() performance. [1] abb4bb8 read-cache: load cache extensions on a worker thread [2] 77ff112 read-cache: load cache entries on worker threads Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
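The arithmetic behind that observation can be sketched as follows. This is an illustrative Python model, not git code; the timings are the ones quoted in the message above:

```python
# Illustrative model: when cache-entry reading is spread across worker
# threads and the extensions are parsed on one dedicated thread, the
# main thread blocks for roughly the difference between the slow
# extension parse and the faster cache-entry reads.
def main_thread_wait(extension_secs, entry_secs):
    """Time the main thread waits on the extension thread (never negative)."""
    return max(extension_secs - entry_secs, 0.0)

# Numbers quoted in the commit message: ~0.28s for the cache-tree
# extension vs ~0.08s for the 11 cache-entry threads.
wait = main_thread_wait(0.28, 0.08)
print(round(wait, 2))  # 0.2, matching the observed 0.15-0.20s block
```

This is why shrinking the cache entries alone (e.g. index V5) cannot reduce the total read time below the extension-parse time.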
Verify that `git status --deserialize=x -v` does not crash and generates the same output as a normal (scanning) status command. These issues are described in the previous 2 commits. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Teach Git to not throw a fatal error when an explicitly-specified status-cache file (`git status --deserialize=<foo>`) cannot be found or opened for reading, and to silently fall back to a traditional scan. This matches the behavior when the status-cache file is implicitly given via a config setting. Note: the current version causes a test to start failing. Mark this as an expected result for now. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
…and report_tracking() Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Add data for the number of files created/overwritten and deleted during the checkout. Give proper category name to all events in unpack-trees.c and eliminate "exp". This is modified slightly from the original version due to interactions with 26f924d (unpack-trees: exit check_updates() early if updates are not wanted, 2020-01-07). Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Update tracing around report_tracking() to use 'tracking' category rather than 'exp' category. Add ahead/behind results from stat_tracking_info(). Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Teach subprocess_start() to use a copy of the passed `cmd` string rather than borrowing the buffer from the caller. Some callers of subprocess_start() pass the value returned from find_hook(), which points to a static buffer and is therefore only valid until the next call to find_hook(). This could cause problems for the long-running background processes managed by sub-process.c, where later calls to subprocess_find_entry() to look up an existing process would fail, causing more than one long-running process to be created. TODO: Need to confirm, but if only read_object_hook() uses subprocess_start() in this manner, we could drop this commit when we drop support for read_object_hook(). Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
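A minimal sketch of the bug and the fix, modeled in Python for illustration (the real code is C; the reused static buffer stands in for the one behind find_hook()):

```python
# Model of the borrowed-buffer bug: find_hook() returns a reference into
# a single reused buffer, so a hash key taken from it would be silently
# rewritten by the next find_hook() call.
class StaticBuffer:
    def __init__(self):
        self.buf = []

    def find_hook(self, name):
        # Reuses one buffer, like a C `static char[]` would.
        self.buf.clear()
        self.buf.extend(name)
        return self.buf  # borrowed reference, valid only until next call

registry = {}  # stands in for sub-process.c's hashmap of running helpers

def subprocess_start(cmd):
    # The fix described above: take a private copy of `cmd` instead of
    # borrowing the caller's buffer (analogous to xstrdup() in C).
    key = "".join(cmd)
    registry[key] = "long-running process"

sb = StaticBuffer()
subprocess_start(sb.find_hook("read-object"))
sb.find_hook("some-other-hook")  # would have clobbered a borrowed key
print("read-object" in registry)  # True: the copied key survived
```

Without the copy, the lookup key would now read "some-other-hook" and subprocess_find_entry() would miss, spawning a duplicate helper.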
Add function to start a subprocess with an argv. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Create a function to add a new object to the loose object cache after the existing odb/xx/ directory was scanned. This will be used in a later commit to keep the loose object cache fresh after dynamically fetching an individual object and without requiring the odb/xx/ directory to be rescanned. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Prevent packfile parsing from accidentally dynamically fetching each individual object found in the packfile. When index-pack parses the input packfile, it does a lookup in the ODB to test for conflicts/collisions. This can accidentally cause the object to be individually fetched when gvfs-helper (or read-object-hook or partial-clone) is enabled. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Create gvfs-helper. This is a helper tool to use the GVFS Protocol REST API to fetch objects and configuration data from a GVFS cache-server or Git server. This tool uses libcurl to send object requests to either server. This tool creates loose objects and/or packfiles. Create gvfs-helper-client. This code resides within git proper and uses the sub-process API to manage gvfs-helper as a long-running background process. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
The config variable `gvfs.sharedCache` contains the pathname to an alternate <odb> that will be used by `gvfs-helper` to store dynamically-fetched missing objects. If this directory does not exist on disk, `prepare_alt_odb()` omits it from the in-memory list of alternates. This causes `git` commands (and `gvfs-helper` in particular) to fall back to `.git/objects` for storage of these objects. This disables the shared cache and leads to poorer performance.

Teach `alt_obj_usable()` and `prepare_alt_odb()` to match up the directory named in `gvfs.sharedCache` with an entry in `.git/objects/info/alternates` and to force-create the `<odb>` root directory (and the associated `<odb>/pack` directory) if necessary.

If the value of `gvfs.sharedCache` refers to a directory that is NOT listed as an alternate, create an in-memory alternate entry in the odb-list. (This is similar to how GIT_ALTERNATE_OBJECT_DIRECTORIES works.) This work happens the first time that `prepare_alt_odb()` is called.

Furthermore, teach the `--shared-cache=<odb>` command line option in `gvfs-helper` (which runs after the first call to `prepare_alt_odb()`) to override the inherited shared-cache (and again, create the ODB directory if necessary).

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
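The directory force-creation can be sketched like this (Python illustration with hypothetical paths; the real logic lives in C inside `prepare_alt_odb()`):

```python
# Sketch of the behavior described above: if the configured shared-cache
# <odb> does not exist, create it (and its pack/ subdirectory) instead
# of silently dropping it from the alternates list and falling back to
# .git/objects.
import os
import tempfile

def ensure_shared_cache(odb_path):
    """Force-create <odb> and <odb>/pack so the alternate stays usable."""
    os.makedirs(os.path.join(odb_path, "pack"), exist_ok=True)
    return os.path.isdir(odb_path)

with tempfile.TemporaryDirectory() as tmp:
    # hypothetical shared-cache location, e.g. the gvfs.sharedCache value
    odb = os.path.join(tmp, "shared-cache", "objects")
    print(ensure_shared_cache(odb))  # True: directory now exists
```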
Add trace2 message for CURL and HTTP errors. Fix typo reporting network error code back to gvfs-helper-client. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Fix parsing of the "loose <odb>" response from `gvfs-helper` and use the actually parsed OID when updating the loose oid cache. Previously, an uninitialized "struct oid" was used to update the cache. This did not cause any corruption, but could cause extra fetches for objects visited multiple times. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Add robust-retry mechanism to automatically retry a request after network
errors. This includes retry after:
[] transient network problems reported by CURL.
[] http 429 throttling (with associated Retry-After)
[] http 503 server unavailable (with associated Retry-After)
Add voluntary throttling using Azure X-RateLimit-* hints to avoid being
soft-throttled (tarpitted) or hard-throttled (429) on later requests.
Add global (outside of a single request) azure-throttle data to track the
rate limit hints from the cache-server and main Git server independently.
Add exponential retry backoff. This is used for transient network problems
when we don't have a Retry-After hint.
Move the call to index-pack earlier in the response/error handling sequence
so that if we receive a 200 but the packfile is truncated/corrupted, we
can use the regular retry logic to get it again.
Refactor the way we create tempfiles for packfiles to use
<odb>/pack/tempPacks/ rather than working directly in the <odb>/pack/
directory.
Move the code to create a new tempfile to the start of a single request
attempt (initial and retry attempts), rather than at the overall start
of a request. This gives us a fresh tempfile for each network request
attempt. This simplifies the retry mechanism and isolates us from the file
ownership issues hidden within the tempfile class. And avoids the need to
truncate previous incomplete results. This was necessary because index-pack
was pulled into the retry loop.
Minor: Add support for logging X-VSS-E2EID to telemetry on network errors.
Minor: rename variable:
params.b_no_cache_server --> params.b_permit_cache_server_if_defined.
This variable is used to indicate whether we should try to use the
cache-server when it is defined. Got rid of double-negative logic.
Minor: rename variable:
params.label --> params.tr2_label
Clarify that this variable is only used with trace2 logging.
Minor: Move the code to automatically map cache-server 400 responses
to normal 401 response earlier in the response/error handling sequence
to simplify later retry logic.
Minor: Decorate trace2 messages with "(cs)" or "(main)" to identify the
server in log messages. Add params->server_type to simplify this.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
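The retry-delay policy described above (Retry-After hints for 429/503, exponential backoff otherwise) can be sketched as follows. This is an illustrative Python model; the base and cap values are made up, and the actual gvfs-helper implementation is C:

```python
# Sketch of the robust-retry delay policy: honor a server-supplied
# Retry-After hint when present (429 throttling, 503 unavailable),
# otherwise use exponential backoff for transient network errors.
def retry_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to sleep before retry `attempt` (1-based)."""
    if retry_after is not None:
        # 429/503 response carried a Retry-After header.
        return min(retry_after, cap)
    # Transient CURL-level error with no hint: exponential backoff.
    return min(base * (2 ** (attempt - 1)), cap)

print(retry_delay(1))                   # 1.0
print(retry_delay(3))                   # 4.0
print(retry_delay(2, retry_after=10))   # 10
```

Tracking the cache-server and main Git server throttle hints independently (as the message describes) would mean keeping one such state per server type.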
Expose the differences in the semantics of GET and POST for
the "gvfs/objects" API:
HTTP GET: fetches a single loose object over the network.
When a commit object is requested, it just returns
the single object.
HTTP POST: fetches a batch of objects over the network.
When the oid-set contains a commit object, all
referenced trees are also included in the response.
gvfs-helper is updated to take "get" and "post" command line options.
The gvfs-helper "server" mode is updated to take "objects.get" and
"objects.post" verbs.
For convenience, the "get" option and the "objects.get" verb
do allow more than one object to be requested. gvfs-helper will
automatically issue a series of (single object) HTTP GET requests,
creating a series of loose objects.
The "post" option and the "objects.post" verb will perform bulk
object fetching using the batch-size chunking. Individual HTTP
POST requests containing more than one object will be created
as a packfile. An HTTP POST for a single object will create a
loose object.
This commit also contains some refactoring to eliminate the
assumption that POST is always associated with packfiles.
In gvfs-helper-client.c, gh_client__get_immediate() now uses the
"objects.get" verb and ignores any currently queued objects.
In gvfs-helper-client.c, the OIDSET built by gh_client__queue_oid()
is only processed when gh_client__drain_queue() is called. The queue
is processed using the "objects.post" verb.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
During development, it was very helpful to see the gvfs-helper do its work to request a pack-file or download a loose object. When these messages appear during normal use, it leads to a very noisy terminal output. Remove all progress indicators when downloading loose objects. We know that these can be numbered in the thousands in certain kinds of history calls, and would litter the terminal output with noise. This happens during 'git fetch' or 'git pull' as well when the tip commits are checked for the new refs. Remove the "Requesting packfile with %ld objects" message, as this operation is very fast. We quickly follow up with the more valuable "Receiving packfile %ld/%ld with %ld objects". When a large "git checkout" causes many pack-file downloads, it is good to know that Git is asking for data from the server. Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
If our POST request includes a commit ID, then the remote will send a pack-file containing the commit and all trees reachable from its root tree. With the current implementation, this causes a failure since we call install_loose() when asking for one object. Modify the condition to check for install_pack() when the response type changes. Also, create a tempfile for the pack-file download or else we will have problems! Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Create t/helper/test-gvfs-protocol.c and t/t5799-gvfs-helper.sh to test gvfs-helper. Create t/helper/test-gvfs-protocol.c as a stand-alone web server that speaks the GVFS Protocol [1] and serves loose objects and packfiles to clients. It borrows heavily from the code in daemon.c. It includes a "mayhem" mode to cause various network and HTTP errors to test the retry/recovery ability of gvfs-helper. Create t/t5799-gvfs-helper.sh to test gvfs-helper. [1] https://github.com/microsoft/VFSForGit/blob/master/Protocol.md Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
gvfs-helper prints "loose <oid>" or "packfile <name>" messages after they are received to help invokers update their in-memory caches. Move the code to accumulate these messages in the result_list into the install_* functions rather than waiting until the end. POST requests containing 1 object may return a loose object or a packfile depending on whether the object is a commit or non-commit. Delaying the message generation just complicated the caller. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Earlier versions of the test always returned a packfile in response to a POST. Now we look at the number of objects in the POST request. If > 1, always send a packfile. If = 1 and it is a commit, send a packfile. Otherwise, send a loose object. This is to better model the behavior of the GVFS server/protocol which treats commits differently. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
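The test server's decision rule can be sketched as (Python illustration; the real test helper is C):

```python
# Sketch of the response-type rule described above: more than one
# requested object always yields a packfile; a single commit yields a
# packfile (the server includes all reachable trees); a single
# non-commit yields a loose object.
def response_type(oids, is_commit):
    """Return 'packfile' or 'loose' for a POST of `oids`."""
    if len(oids) > 1:
        return "packfile"
    if is_commit(oids[0]):
        return "packfile"  # commits drag reachable trees into the pack
    return "loose"

commits = {"c1"}  # hypothetical set of known commit OIDs
print(response_type(["b1", "b2"], commits.__contains__))  # packfile
print(response_type(["c1"], commits.__contains__))        # packfile
print(response_type(["b1"], commits.__contains__))        # loose
```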
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
It is possible that a loose object that is written from a GVFS protocol "get object" request does not match the expected hash. Error out in this case. 2021-10-30: The prototype for read_loose_object() changed in 31deb28 (fsck: don't hard die on invalid object types, 2021-10-01) and 96e41f5 (fsck: report invalid object type-path combinations, 2021-10-01). Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Teach helper/test-gvfs-protocol to be able to send corrupted loose blobs. Add unit test for gvfs-helper to detect receipt of a corrupted loose blob. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Teach gvfs-helper to support the "/gvfs/prefetch" REST API. This includes a new `gvfs-helper prefetch --since=<t>` command line option and a new `objects.prefetch` verb in `gvfs-helper server` mode. If the `since` argument is omitted, `gvfs-helper` will search the local shared-cache for the most recent prefetch packfile and start from there. The <t> is usually a seconds-since-epoch, but may also be a "friendly" date, such as "midnight" or "yesterday", using the existing date-selection mechanism.

Add a `gh_client__prefetch()` API to allow `git.exe` to easily call prefetch (using the same long-running process as immediate and queued object fetches).

Expand the t5799 unit tests to include prefetch tests. Test setup now also builds some commits-and-trees packfiles for testing purposes with well-known timestamps. Expand t/helper/test-gvfs-protocol.exe to support the "/gvfs/prefetch" REST API.

Massive refactor of the existing packfile handling in gvfs-helper.c to reuse more code between "/gvfs/objects POST" and "/gvfs/prefetch". With this we now properly name packfiles with the checksum SHA1 rather than a date string. The refactor also addresses some of the confusing tempfile setup and install_<result> code (introduced to handle the ambiguity of how POST works with commit objects).

Update 2023-05-22 (v2.41.0): add '--no-rev-index' to 'index-pack' to avoid writing the extra (unused) file.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
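The fallback when `--since` is omitted can be sketched as follows (Python illustration; how the helper actually records the timestamp of each prefetch packfile is not shown here, this model simply takes the newest of the pairs it is handed):

```python
# Sketch of the --since fallback: when no timestamp is given, resume
# from the most recent prefetch packfile already in the shared cache.
def resume_since(prefetch_packs):
    """prefetch_packs: list of (epoch_seconds, packfile_name) tuples.

    Returns the timestamp to pass as `since`; 0 means no prior
    prefetch, so fetch everything.
    """
    if not prefetch_packs:
        return 0
    return max(t for t, _ in prefetch_packs)

packs = [(1700000000, "pack-abc.pack"),   # hypothetical names
         (1700086400, "pack-def.pack")]
print(resume_since(packs))  # 1700086400
print(resume_since([]))     # 0
```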
Users should be allowed to delete their shared cache and have it recreated on 'git fetch'. This change makes that happen by creating any leading directories and then creating the directory itself with `mkdir()`. Users may have had more instances of this due to #840, which advises deleting the shared cache on a mistaken assumption that it would be recreated on `git fetch`. * [X] This change only applies to interactions with Azure DevOps and the GVFS Protocol.
A common problem when tracking GVFS Protocol queries is that we don't
have a way to connect client and server interactions. This is especially
true in the typical case where a cache server deployment is hidden
behind a load balancer. We can't even determine which cache server was
used for certain requests!
Add some client-identifying data to the HTTP queries using the
X-Session-Id header. This will by default identify the helper process
using its SID. If configured via the new gvfs.sessionKey config, it will
prefix this SID with another config value.
For example, Office monorepo users have an 'otel.trace2.id' config value
that is a pseudonymous identifier. This allows telemetry readers to
group requests by enlistment without knowing the user's identity at all.
Users could opt-in to provide this identifier for investigations around
their long-term performance or issues. This change makes it possible to
extend this to cache server interactions.
* [X] This change only applies to interactions with Azure DevOps and the
GVFS Protocol.
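The header construction described above can be sketched as follows (Python illustration; `gvfs.sessionKey` is the config name from the text, but the SID value and the separator between prefix and SID are assumptions for this sketch):

```python
# Sketch of building the X-Session-Id header value: by default the
# helper's SID alone; with gvfs.sessionKey configured, that value is
# used as a prefix (separator here is a guess).
def session_id_header(sid, session_key=None):
    """Return the full header line identifying this helper process."""
    value = f"{session_key}:{sid}" if session_key else sid
    return f"X-Session-Id: {value}"

print(session_id_header("1234-abcd"))  # hypothetical SID
print(session_id_header("1234-abcd", session_key="otel-trace-xyz"))
```

With a pseudonymous per-enlistment key (like the Office monorepo's otel.trace2.id), server-side telemetry can group requests without learning the user's identity.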
The upstream refactoring in 4c89d31 (streaming: rely on object sources to create object stream, 2025-11-23) changed how istream_source() discovers objects. Previously, it called odb_read_object_info_extended() with flags=0 to locate the object, then tried the source-specific opener (e.g. open_istream_loose). If that failed (e.g. corrupt loose object), it fell back to open_istream_incore which re-read the object — by which time the read-object hook had already re-fetched a clean copy. After the refactoring, istream_source() iterates over sources directly. When a corrupt loose object is found, odb_source_loose_read_object_stream fails and the loop continues to the next source. When no source has the object, it falls through to open_istream_incore, which calls odb_read_object_info_extended with OBJECT_INFO_DIE_IF_CORRUPT. This encounters the same corrupt loose file still on disk and dies before the read-object hook gets a chance to re-download a clean replacement. Fix this by clearing OBJECT_INFO_DIE_IF_CORRUPT in open_istream_incore when GVFS_MISSING_OK is set, matching the existing pattern in odb_read_object. This fixes the GitCorruptObjectTests functional test failures (GitRequestsReplacementForAllNullObject, GitRequestsReplacementForObjectCorruptedWithBadData, GitRequestsReplacementForTruncatedObject) that appeared when upgrading from v2.50.1.vfs.0.1 to v2.53.0.vfs.0.0. Signed-off-by: Tyler Vella <tyvella@microsoft.com>
When a new microsoft/git release is published, VFS for Git needs to pick up the new Git version. Today this is a manual process. This workflow automates it by reacting to GitHub release events. On a full release, it creates a PR in microsoft/VFSForGit to bump the default GIT_VERSION in the build workflow, so future CI runs and manual dispatches use the latest stable Git version. Authentication uses the existing Azure Key Vault + OIDC pattern (matching release-homebrew and release-winget) to retrieve a token with write access to the VFS for Git repository. In a separate effort we'll add another workflow that triggers on push to vfs-* branches to run the VFS for Git Functional Tests (from the master branch). Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
This is a companion to #782 (which predates 4c89d31, though, therefore it is not _exactly_ an omission of that PR). The analysis and fix are the same as in the commit message above: clear OBJECT_INFO_DIE_IF_CORRUPT in open_istream_incore() when GVFS_MISSING_OK is set, so the read-object hook can re-download a clean replacement for a corrupt loose object instead of dying.
The skip-clean-check guard in remove_worktree() was gated on core_virtualfilesystem, which is only initialized by repo_config_get_virtualfilesystem() during index loading. Since the worktree remove path never loads the index before this check, the variable was always NULL, causing check_clean_worktree() to run even when VFSForGit had already unmounted the projection and written the skip-clean-check marker file. This made 'git worktree remove' fail with 'fatal: failed to run git status' in GVFS repos. Replace core_virtualfilesystem with gvfs_config_is_set(GVFS_SUPPORTS_WORKTREES). This is the correct bit to check here: remove_worktree() can only be reached when GVFS_SUPPORTS_WORKTREES is set (cmd_worktree blocks otherwise at line 1501), and it directly expresses that the VFS layer supports worktree operations and knows how to signal when a clean check can be skipped. Unlike core_virtualfilesystem, gvfs_config_is_set() is self-loading from core.gvfs and does not depend on the index having been read. Assisted-by: Claude Opus 4.6 Signed-off-by: Tyrie Vella <tyrielv@gmail.com>
Build Git with VFS support using the Git for Windows SDK and package it as a MicrosoftGit artifact with an install.bat that uses robocopy to deploy to 'C:\Program Files\Git'. Find the latest successful VFSForGit build on master and call its reusable functional-tests.yaml workflow, which downloads the GVFS installer and FT executables from that run, and the Git artifact from this run. Requires a VFSFORGIT_TOKEN secret with actions:read on microsoft/VFSForGit for cross-repo artifact downloads.
As in the commit above, the skip-clean-check guard in remove_worktree() was gated on core_virtualfilesystem, which is never initialized on the worktree-remove path, making 'git worktree remove' fail with 'fatal: failed to run git status' in GVFS repos. This variant replaces core_virtualfilesystem with gvfs_config_is_set(GVFS_USE_VIRTUAL_FILESYSTEM), which is already loaded from core.gvfs by cmd_worktree() before dispatch to remove_worktree().
## TL;DR Add a new `vfs-functional-tests.yml` workflow that builds Git from this repository and runs the VFS for Git functional tests against it, using VFSForGit's reusable workflow. ## Why? VFS for Git functional tests currently only run in the VFSForGit repository, against a tagged microsoft/git release. This means VFS-related regressions in Git are only caught *after* a release is tagged. By running the FTs here on every push and PR to `vfs-*` branches, we can catch regressions before they ship. This is the counterpart to microsoft/VFSForGit#1932, which extracted the functional tests into a reusable `workflow_call` workflow. ## How it works 1. **Build Git** — checks out this repo, builds with the Git for Windows SDK, and packages the result into a `MicrosoftGit` artifact with an `install.bat` that deploys via robocopy to `C:\Program Files\Git`. Both ARM64 and x64 are built and combined into a single artifact for the FTs to install and use. 2. **Find VFSForGit build** — locates the latest successful VFSForGit CI run on `master` to get the GVFS installer and FT executables. If the build was a 'skipped' build (because an existing run succeeded with that tree) then follow the annotation to the real run. 3. **Call reusable workflow** — invokes `microsoft/VFSForGit/.github/workflows/functional-tests.yaml@master`, which handles the full test matrix (2 configs × 2 architectures × 10 slices)
As of 317ea9a (treewide: drop uses of `for_each_{loose,packed}_object()`, 2026-01-26), that function is no more. But we have an easy way out: `for_each_loose_file_in_source()`. Calling this function only on the first source (instead of iterating over all sources) is equivalent to the `FOR_EACH_OBJECT_LOCAL_ONLY` flag. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
There's now a Coccinelle rule to enforce the shorter way to write this. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The new upstream config-based hooks feature (hook.<name>.event plus hook.<name>.command) validates all hook configurations when building the config map and calls die() when an event has no corresponding command. The pre/post-command hooks, however, run during every git command, including "git config". When a user configures a hook via two separate "git config --add" calls (first the event, then the command), the second invocation's pre-command hook triggers the config map build, which finds the event from the first call but not yet the command from the still-running second call, and dies. Since the pre/post-command hooks are only ever traditional hookdir hooks and should never be looked up via the config-based hook mechanism, skip list_hooks_add_configured() for those two hook names. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
As of 452b12c (builtin/maintenance: use "geometric" strategy by default, 2026-02-24), the default maintenance strategy is "geometric" rather than "gc". The "geometric" strategy runs the geometric-repack task which packs loose objects, causing the cache-local-objects tests to fail: they expected loose objects to remain in place but geometric-repack packs them into a new packfile. When PR #720 introduced these tests (targeting vfs-2.47.2, based on Git v2.47), "gc" was the default strategy, so disabling gc was enough to prevent other tasks from interfering. Now that "geometric" is the default, also disable geometric-repack to keep the tests focused on the cache-local-objects task behavior. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The previous fixup (cdaf540) replaced OBJECT_INFO_FOR_PREFETCH with HAS_OBJECT_FETCH_PROMISOR, reasoning that the former flag "never worked." However, these are flags for two different functions: OBJECT_INFO_FOR_PREFETCH was for odb_read_object_info_extended() (which the old oid_object_info_extended() forwarded to), while HAS_OBJECT_FETCH_PROMISOR is for odb_has_object(). The call site was migrated to odb_has_object() as part of the upstream refactoring, but odb_has_object(odb, oid, HAS_OBJECT_FETCH_PROMISOR) sets only OBJECT_INFO_QUICK without OBJECT_INFO_SKIP_FETCH_OBJECT, which means it WILL trigger remote fetches via gvfs-helper. This defeats the purpose of the original commit, which was to prevent index-pack from individually fetching every object it encounters during the collision check. Passing 0 instead gives us both OBJECT_INFO_QUICK and OBJECT_INFO_SKIP_FETCH_OBJECT, which is the correct equivalent of the original OBJECT_INFO_FOR_PREFETCH behavior. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The handle_hook_replacement() call in list_hooks_add_default() was added to interject VFS hook behavior. It dereferences `options->args`, but `options` can be NULL when called via hook_exists() which passes NULL for the options parameter through list_hooks(). This causes a UBSAN runtime error (null pointer member access) and SIGABRT on every code path that calls hook_exists(), which includes rebase, checkout, commit, and many other commands. Guard the dereference with an `options` NULL check. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
VFS for Git cannot handle -rc versions. Since nobody will use those built Git artifacts, we can simply strip the `-rc*` suffix and run the tests. The alternative (skipping the tests for -rc versions) would be undesirable: That's _exactly_ the time when we need those tests most. Note: We have to write the version to the `version` file in that instance because the tag-based method after stripping the `-rc` suffix would now let that version check in the Makefile fail with an error like this: Found version v2.54.0.vfs.0.0.NNN.g14f1dec, which is not based on v2.54.0-rc1.vfs.0.0 Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
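The version mangling described above can be sketched like this (Python illustration; the real workflow does this in shell):

```python
# Sketch of stripping the `-rc*` suffix from a release-candidate tag so
# VFS for Git (which cannot handle -rc versions) can consume it.
import re

def strip_rc(version):
    """v2.54.0-rc1.vfs.0.0 -> v2.54.0.vfs.0.0"""
    return re.sub(r"-rc\d+", "", version)

print(strip_rc("v2.54.0-rc1.vfs.0.0"))  # v2.54.0.vfs.0.0
print(strip_rc("v2.54.0.vfs.0.0"))      # non-rc versions pass through
```

As the note explains, the stripped version must then be written to the `version` file, because the tag-based check in the Makefile would otherwise reject the mismatch.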
Originally, the Microsoft Git fork edited the `GIT-VERSION-GEN` script
with every rebase so that the default version would have a hard-coded
`.vfs.0.0` suffix, by editing the `DEF_VER` assignment of the upstream
Git version. That would _always_ conflict during rebases, though.
Therefore, the strategy was changed; nowadays the script starts like
this:
#!/bin/sh
DEF_VER=v2.53.0
# Identify microsoft/git via a distinct version suffix
DEF_VER=$DEF_VER.vfs.0.0
The matching logic in the `vfs-functional-tests` workflow would catch
the latter line because it defines `DEF_VER` and contains the tell-tale
`.vfs.`.
However, it does not at all catch what we would like to catch: the
actual version number.
So let's adapt it to handle both old-style _and_ new-style variants.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
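The two-style matching can be sketched as follows (re-implemented in Python for illustration; the actual matching lives in the `vfs-functional-tests` workflow, presumably as shell):

```python
# Sketch of extracting the upstream version from GIT-VERSION-GEN,
# handling both the old single-line style (DEF_VER=v2.53.0.vfs.0.0)
# and the new two-line style (DEF_VER=v2.53.0 followed later by
# DEF_VER=$DEF_VER.vfs.0.0).
import re

def extract_version(script_text):
    """Return the upstream version, e.g. 'v2.53.0', from either style."""
    # old style: version and .vfs. suffix hard-coded on one line
    m = re.search(r"^DEF_VER=(v[\d.]+)\.vfs\.", script_text, re.M)
    if m:
        return m.group(1)
    # new style: a plain version line; the suffix is appended elsewhere
    m = re.search(r"^DEF_VER=(v[\d.]+)\s*$", script_text, re.M)
    return m.group(1) if m else None

old = "DEF_VER=v2.53.0.vfs.0.0\n"
new = "DEF_VER=v2.53.0\n# suffix\nDEF_VER=$DEF_VER.vfs.0.0\n"
print(extract_version(old))  # v2.53.0
print(extract_version(new))  # v2.53.0
```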
This was a bit of a rough ride... Current range-diff (adding a few
fixup!s after an initial rebase that resulted in many, many CI failures):
3: d3269ef = 1: 425c956 t: remove advice from some tests
6: 6a78411 = 2: 7ba3ebb survey: calculate more stats on refs
7: e559784 = 3: 3faf3c0 survey: show some commits/trees/blobs histograms
8: c324b30 = 4: a967370 survey: add vector of largest objects for various scaling dimensions
9: 424138b = 5: 1976b57 survey: add pathname of blob or tree to large_item_vec
10: 90b54b5 = 6: 24fe3d3 survey: add commit-oid to large_item detail
12: 4c75e0b = 7: 94d6581 survey: add commit name-rev lookup to each large_item
13: 93148be = 8: 4e2d9c1 survey: add --no-name-rev option
14: 3e76b34 = 9: 2045bc1 survey: started TODO list at bottom of source file
1: 4f9ffee = 10: 57d0154 sparse-index.c: fix use of index hashes in expand_index
2: c058963 = 11: 989c0f7 t5300: confirm failure of git index-pack when non-idx suffix requested
15: 9f706da = 12: 36a9652 survey: expanded TODO list at the bottom of the source file
4: 44d08ad = 13: b42a1cb t1092: add test for untracked files and directories
5: 7281f11 = 14: b4ee4ce index-pack: disable rev-index if index file has non .idx suffix
16: 9446142 = 15: dbe9905 survey: expanded TODO with more notes
11: f3a4f77 = 16: b2e152e trace2: prefetch value of GIT_TRACE2_DST_DEBUG at startup
17: e030c0e = 17: d1a4da7 reset --stdin: trim carriage return from the paths
18: 66e909e ! 18: 518ba17 Identify microsoft/git via a distinct version suffix
19: 96ee9e4 = 19: a396647 gvfs: ensure that the version is based on a GVFS tag
20: c5d5b7e = 20: fe1cc1a gvfs: add a GVFS-specific header file
21: 54c3608 = 21: defb2bc gvfs: add the core.gvfs config setting
22: 5103fd4 = 22: a5f6a99 gvfs: add the feature to skip writing the index' SHA-1
23: 26e5606 = 23: 1732b9d gvfs: add the feature that blobs may be missing
24: 6ac9835 = 24: 5f96794 gvfs: prevent files to be deleted outside the sparse checkout
25: acaf7ff ! 25: 3cbe356 gvfs: optionally skip reachability checks/upload pack during fetch
26: 10b1501 = 26: 49f20a3 gvfs: ensure all filters and EOL conversions are blocked
27: fc79044 ! 27: 86388b6 gvfs: allow "virtualizing" objects
@@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
 if (co) {
 if (oi) {
@@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
- for (source = odb->sources; source; source = source->next)
- if (!packfile_store_read_object_info(source->packfiles, real, oi, flags))
+ if (!odb_source_read_object_info(source, real, oi,
+ flags | OBJECT_INFO_SECOND_READ))
 return 0;
+ if (gvfs_virtualize_objects(odb->repo) && !tried_hook) {
+ tried_hook = 1;
28: 7edf0e8 ! 28: 043e699 Hydrate missing loose objects in check_and_freshen()
29: 3743bcd ! 29: 9f00fa3 sha1_file: when writing objects, skip the read_object_hook
@@ odb.c: int odb_has_object(struct object_database *odb, const struct object_id *o
+ int skip_virtualized_objects)
 {
 struct odb_source *source;
-
-@@ odb.c: int odb_freshen_object(struct object_database *odb,
- if (packfile_store_freshen_object(source->packfiles, oid))
- return 1;
-
-- if (odb_source_loose_freshen_object(source, oid))
-+ if (odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
+ odb_prepare_alternates(odb);
+ for (source = odb->sources; source; source = source->next)
+- if (odb_source_freshen_object(source, oid))
++ if (odb_source_freshen_object(source, oid, skip_virtualized_objects))
 return 1;
- }
-
+ return 0;
+ }

 ## odb.h ##
@@ odb.h: int odb_has_object(struct object_database *odb,
- unsigned flags);
+ enum has_object_flags flags);
 int odb_freshen_object(struct object_database *odb,
- const struct object_id *oid);
@@ odb.h: int odb_has_object(struct object_database *odb,
 void odb_assert_oid_type(struct object_database *odb,
 const struct object_id *oid, enum object_type expect);

+ ## odb/source-files.c ##
+@@ odb/source-files.c: static int odb_source_files_find_abbrev_len(struct odb_source *source,
+ }
+
+ static int odb_source_files_freshen_object(struct odb_source *source,
+- const struct object_id *oid)
++ const struct object_id *oid,
++ int skip_virtualized_objects)
+ {
+ struct odb_source_files *files = odb_source_files_downcast(source);
+ if (packfile_store_freshen_object(files->packed, oid) ||
+- odb_source_loose_freshen_object(source, oid))
++ odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
+ return 1;
+ return 0;
+ }
+
+ ## odb/source.h ##
+@@ odb/source.h: struct odb_source {
+ * has been freshened.
+ */
+ int (*freshen_object)(struct odb_source *source,
+- const struct object_id *oid);
++ const struct object_id *oid,
++ int skip_virtualized_objects);
+
+ /*
+ * This callback is expected to persist the given object into the
+@@ odb/source.h: static inline int odb_source_find_abbrev_len(struct odb_source *source,
+ * not exist.
+ */
+ static inline int odb_source_freshen_object(struct odb_source *source,
+- const struct object_id *oid)
++ const struct object_id *oid,
++ int skip_virtualized_objects)
+ {
+- return source->freshen_object(source, oid);
++ return source->freshen_object(source, oid, skip_virtualized_objects);
+ }
+
+ /*

 ## t/t0410/read-object ##
@@ t/t0410/read-object: while (1) {
 system ('git --git-dir="' . $DIR . '" cat-file blob ' . $sha1 . ' | git -c core.virtualizeobjects=false hash-object -w --stdin >/dev/null 2>&1');
30: 860f9bc ! 30: 54392a4 gvfs: add global command pre and post hook procs
31: 951d38a = 31: f86b1f9 t0400: verify that the hook is called correctly from a subdirectory
32: 08520ae = 32: 6556716 t0400: verify core.hooksPath is respected by pre-command
33: a89247b = 33: 4c4b65e Pass PID of git process to hooks.
34: 61f990b = 34: 294c959 sparse-checkout: make sure to update files with a modify/delete conflict
35: 7fcfdaa = 35: 006e4be worktree: allow in Scalar repositories
36: b3a9cca = 36: 10c2a1d sparse-checkout: avoid writing entries with the skip-worktree bit
37: d85d8f4 = 37: 05b2391 Do not remove files outside the sparse-checkout
38: ebaad6e = 38: e2a2c74 send-pack: do not check for sha1 file when GVFS_MISSING_OK set
39: db181ef = 39: fe7e7d9 gvfs: allow corrupt objects to be re-downloaded
40: bd61a92 = 40: d5f5cc6 cache-tree: remove use of strbuf_addf in update_one
41: 573b59d = 41: 751b242 gvfs: block unsupported commands when running in a GVFS repo
42: 7badf14 = 42: fb8158e gvfs: allow overriding core.gvfs
43: 7572429 = 43: eefc3af BRANCHES.md: Add explanation of branches and using forks
44: d72a479 = 44: a69b585 git.c: add VFS enabled cmd blocking
45: 93e7dd8 = 45: 766f5a9 git.c: permit repack cmd in Scalar repos
46: b41a99f = 46: c495f33 git.c: permit fsck cmd in Scalar repos
47: d81bbf5 = 47: d56fcfd git.c: permit prune cmd in Scalar repos
48: a9061a8 = 48: 4b8d8d9 worktree: remove special case GVFS cmd blocking
49: 92421c0 = 49: 169f0c4 builtin/repack.c: emit warning when shared cache is present
50: 4b9a737 ! 50: 6fb78f0 Add virtual file system settings and hook proc
@@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
+{
+ /* Run only once. */
+ static int virtual_filesystem_result = -1;
++ struct repo_config_values *cfg = repo_config_values(r);
+ extern char *core_virtualfilesystem;
-+ extern int core_apply_sparse_checkout;
+ if (virtual_filesystem_result >= 0)
+ return virtual_filesystem_result;
+
@@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
+
+ /* virtual file system relies on the sparse checkout logic so force it on */
+ if (core_virtualfilesystem) {
-+ core_apply_sparse_checkout = 1;
++ cfg->apply_sparse_checkout = 1;
+ virtual_filesystem_result = 1;
+ return 1;
+ }

@@ dir.c: static void add_path_to_appropriate_result_list(struct dir_struct *dir,
 else if ((dir->flags & DIR_SHOW_IGNORED_TOO) ||

 ## environment.c ##
-@@ environment.c: int grafts_keep_true_parents;
- int core_apply_sparse_checkout;
+@@ environment.c: enum object_creation_mode object_creation_mode = OBJECT_CREATION_MODE;
+ int grafts_keep_true_parents;
 int core_sparse_checkout_cone;
 int sparse_expect_files_outside_of_patterns;
+char *core_virtualfilesystem;
@@ environment.c: int git_default_core_config(const char *var, const char *value,
 }
 if (!strcmp(var, "core.sparsecheckout")) {
-- core_apply_sparse_checkout = git_config_bool(var, value);
+- cfg->apply_sparse_checkout = git_config_bool(var, value);
+ /* virtual file system relies on the sparse checkout logic so force it on */
+ if (core_virtualfilesystem)
-+ core_apply_sparse_checkout = 1;
++ cfg->apply_sparse_checkout = 1;
+ else
-+ core_apply_sparse_checkout = git_config_bool(var, value);
++ cfg->apply_sparse_checkout = git_config_bool(var, value);
 return 0;
 }

@@ sparse-index.c: void expand_index(struct index_state *istate, struct pattern_lis
 if (!S_ISSPARSEDIR(ce->ce_mode)) {
 set_index_entry(full, full->cache_nr++, ce);
-@@ sparse-index.c: static void clear_skip_worktree_from_present_files_full(struct index_state *ista
- void clear_skip_worktree_from_present_files(struct index_state *istate)
- {
- if (!core_apply_sparse_checkout ||
+@@ sparse-index.c: void clear_skip_worktree_from_present_files(struct index_state *istate)
+ struct repo_config_values *cfg = repo_config_values(the_repository);
+
+ if (!cfg->apply_sparse_checkout ||
+ core_virtualfilesystem ||
 sparse_expect_files_outside_of_patterns)
 return;
51: 4c0a6f2 ! 51: 78e3ace virtualfilesystem: don't run the virtual file system hook if the index has been redirected
@@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
- /* virtual file system relies on the sparse checkout logic so force it on */
 if (core_virtualfilesystem) {
-- cfg->apply_sparse_checkout = 1;
- virtual_filesystem_result = 1;
- return 1;
+ /*
@@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
+ free(default_index_file);
+ if (should_run_hook) {
+ /* virtual file system relies on the sparse checkout logic so force it on */
-+ core_apply_sparse_checkout = 1;
++ cfg->apply_sparse_checkout = 1;
+ virtual_filesystem_result = 1;
+ return 1;
+ }
52: b65bd6c = 52: a8bdf3d virtualfilesystem: check if directory is included
53: 8ab7bab ! 53: 040aef4 backwards-compatibility: support the post-indexchanged hook
@@ Commit message
 allow any `post-indexchanged` hook to run instead (if it exists).

 ## hook.c ##
-@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
- .hook_name = hook_name,
- .options = options,
- };
-- const char *const hook_path = find_hook(r, hook_name);
-+ const char *hook_path = find_hook(r, hook_name);
- int ret = 0;
- const struct run_process_parallel_opts opts = {
- .tr2_category = "hook",
-@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
- .data = &cb_data,
- };
+@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
+ const char *hook_path = find_hook(r, hookname);
+ struct hook *h;
+ /*
+ * Backwards compatibility hack in VFS for Git: when originally
@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
+ * look for a hook with the old name (which would be found in case of
+ * already-existing checkouts).
+ */
-+ if (!hook_path && !strcmp(hook_name, "post-index-change"))
++ if (!hook_path && !strcmp(hookname, "post-index-change"))
+ hook_path = find_hook(r, "post-indexchanged");
+
- if (!options)
- BUG("a struct run_hooks_opt must be provided to run_hooks");
+ if (!hook_path)
+ return;

 ## t/t7113-post-index-change-hook.sh ##
54: 0d9b9fd = 54: 15e6790 gvfs: verify that the built-in FSMonitor is disabled
55: 1978fb1 = 55: 8050a68 wt-status: add trace2 data for sparse-checkout percentage
56: 8be878f = 56: 551e55f status: add status serialization mechanism
57: 8e8f2d9 = 57: 6812a3e Teach ahead-behind and serialized status to play nicely together
58: 0bce4cb = 58: 3de0da6 status: serialize to path
59: 52111d2 = 59: 5276841 status: reject deserialize in V2 and conflicts
60: e1f48ab = 60: e54f039 serialize-status: serialize global and repo-local exclude file metadata
61: 93bb8bf = 61: 827afb9 status: deserialization wait
62: afe608f = 62: ac7492e status: deserialize with -uno does not print correct hint
63: 3dd264a = 63: edc56a1 fsmonitor: check CE_FSMONITOR_VALID in ce_uptodate
64: ec49af2 ! 64: 3ba0482 fsmonitor: add script for debugging and update script for tests
@@ t/t7519/fsmonitor-watchman: sub launch_watchman {
@@ t/t7519/fsmonitor-watchman: sub launch_watchman {
 my $o = $json_pkg->new->utf8->decode($response);
- if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
+ if ($o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
- print STDERR "Adding '$git_work_tree' to watchman's watch list.\n";
- $retry--;
 qx/watchman watch "$git_work_tree"/;
 die "Failed to make watchman watch '$git_work_tree'.\n" .
+ "Falling back to scanning...\n" if $? != 0;
@@ t/t7519/fsmonitor-watchman: sub launch_watchman {
 # return the fast "everything is dirty" flag to git and do the
 # Watchman query just to get it over with now so we won't pay
@@ t/t7519/fsmonitor-watchman: sub launch_watchman {
- close $fh;
- print "/\0";
- eval { launch_watchman() };
 exit 0;
+ }
@@ t/t7519/fsmonitor-watchman: sub launch_watchman {
 die "Watchman: $o->{error}.\n" .
 "Falling back to scanning...\n" if $o->{error};
65: a925cc4 = 65: 8479329 status: disable deserialize when verbose output requested.
66: 05c497d = 66: c47b5fc t7524: add test for verbose status deserialzation
67: d58fea7 = 67: 741790b deserialize-status: silently fallback if we cannot read cache file
68: 0bca058 = 68: 2264f8a gvfs:trace2:data: add trace2 tracing around read_object_process
69: c4a94ff = 69: 1ecf0d8 gvfs:trace2:data: status deserialization information
70: 06946b1 = 70: a48fc88 gvfs:trace2:data: status serialization
71: 7b39090 = 71: cd8210e gvfs:trace2:data: add vfs stats
72: 1eeb414 = 72: 791961a trace2: refactor setting process starting time
73: de029a9 = 73: 8da7772 trace2:gvfs:experiment: clear_ce_flags_1
74: e63f8b4 = 74: 45c8114 trace2:gvfs:experiment: report_tracking
75: a2fb779 = 75: 732e1ce trace2:gvfs:experiment: read_cache: annotate thread usage in read-cache
76: 3f1b032 = 76: be57ebf trace2:gvfs:experiment: read-cache: time read/write of cache-tree extension
77: ce811d2 = 77: 023393d trace2:gvfs:experiment: add region to apply_virtualfilesystem()
78: 0577e2d = 78: bc1fb61 trace2:gvfs:experiment: add region around unpack_trees()
79: 40fdd38 = 79: 78e87c9 trace2:gvfs:experiment: add region to cache_tree_fully_valid()
80: 4542ccb ! 80: 86bd64c trace2:gvfs:experiment: add unpack_entry() counter to unpack_trees() and report_tracking()
81: f735787 = 81: 2f09526 trace2:gvfs:experiment: increase default event depth for unpack-tree data
82: 0883908 = 82: 6a74074 trace2:gvfs:experiment: add data for check_updates() in unpack_trees()
83: 9b04c50 ! 83: 8641c5f Trace2:gvfs:experiment: capture more 'tracking' details
84: 583b60e = 84: 5d3a058 credential: set trace2_child_class for credential manager children
85: ad8a88e = 85: 802ac56 sub-process: do not borrow cmd pointer from caller
86: 969b74d = 86: 9557aa7 sub-process: add subprocess_start_argv()
87: 27da8d7 ! 87: 0912d5d sha1-file: add function to update existing loose object cache
@@ Commit message
 Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>

 ## object-file.c ##
-@@ object-file.c: struct oidtree *odb_source_loose_cache(struct odb_source *source,
- return source->loose->cache;
+@@ object-file.c: static struct oidtree *odb_source_loose_cache(struct odb_source *source,
+ return files->loose->cache;
 }
+void odb_source_loose_cache_add_new_oid(struct odb_source *source,
@@ object-file.c: struct oidtree *odb_source_loose_cache(struct odb_source *source,

 ## object-file.h ##
@@ object-file.h: int odb_source_loose_write_stream(struct odb_source *source,
- struct oidtree *odb_source_loose_cache(struct odb_source *source,
- const struct object_id *oid);
+ struct odb_write_stream *stream, size_t len,
+ struct object_id *oid);
+/*
+ * Add a new object to the loose object cache (possibly after the
88: b28be78 = 88: 611426c index-pack: avoid immediate object fetch while parsing packfile
89: 900a62d ! 89: cf37544 gvfs-helper: create tool to fetch objects using the GVFS Protocol
90: 686c143 ! 90: b72e7c1 sha1-file: create shared-cache directory if it doesn't exist
91: 0705607 = 91: ce62603 gvfs-helper: better handling of network errors
92: 90b03f6 = 92: 2ccc7e3 gvfs-helper-client: properly update loose cache with fetched OID
93: 38eee73 = 93: 11a4ab8 gvfs-helper: V2 robust retry and throttling
94: 7d50682 = 94: 2b06076 gvfs-helper: expose gvfs/objects GET and POST semantics
95: b800370 = 95: 019ff7b gvfs-helper: dramatically reduce progress noise
96: a6bb85e = 96: d36dc03 gvfs-helper: handle pack-file after single POST request
97: cd89ff3 = 97: 54d5bbc test-gvfs-prococol, t5799: tests for gvfs-helper
98: a3ef679 = 98: 8b51b4a gvfs-helper: move result-list construction into install functions
99: ebd1cf3 = 99: 12ccd21 t5799: add support for POST to return either a loose object or packfile
100: 9b77529 = 100: dd9c910 t5799: cleanup wc-l and grep-c lines
101: ee96bd3 = 101: f5fcb19 gvfs-helper: verify loose objects after write
102: f72fbdc = 102: e61ca30 t7599: create corrupt blob test
103: 63b0411 ! 103: 2f9240b gvfs-helper: add prefetch support
104: 58be5dc = 104: 798005a gvfs-helper: add prefetch .keep file for last packfile
105: bc74155 = 105: eb6d4c0 gvfs-helper: do one read in my_copy_fd_len_tail()
106: 103c70e = 106: 300afa7 gvfs-helper: move content-type warning for prefetch packs
107: 7072a36 = 107: fbb30f5 fetch: use gvfs-helper prefetch under config
108: 9ef34ba = 108: 9d18b2a gvfs-helper: better support for concurrent packfile fetches
109: cf9f5c7 = 109: 4853658 remote-curl: do not call fetch-pack when using gvfs-helper
110: e0d9e41 = 110: 1ac7df1 fetch: reprepare packs before checking connectivity
111: 9e0844f = 111: 938142b gvfs-helper: retry when creating temp files
112: 0fe791e = 112: eb73dfc sparse: avoid warnings about known cURL issues in gvfs-helper.c
113: 4d514b4 = 113: c07d603 gvfs-helper: add --max-retries to prefetch verb
114: 6330851 = 114: e64c60f t5799: add tests to detect corrupt pack/idx files in prefetch
115: 0fa8b93 = 115: ac12b06 gvfs-helper: ignore .idx files in prefetch multi-part responses
116: 9bc205f = 116: a9d37bc t5799: explicitly test gvfs-helper --fallback and --no-fallback
118: a61d31f = 117: 5607dbe maintenance: care about gvfs.sharedCache config
120: 3ea9961 = 118: d64e435 unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags
122: cb29337 = 119: 60104ed homebrew: add GitHub workflow to release Cask
123: 714810b = 120: 21dbad9 Adding winget workflows
124: 7ca62ec = 121: 8207650 Disable the `monitor-components` workflow in msft-git
125: 23ec5ff = 122: 6f5b389 .github: enable windows builds on microsoft fork
126: 457ad6e = 123: 387978d .github/actions/akv-secret: add action to get secrets
127: 18be22b = 124: 0a63837 release: create initial Windows installer build workflow
128: 1f4b781 = 125: e76c3cd release: create initial Windows installer build workflow
129: 4da14e0 = 126: 92b1dae help: special-case HOST_CPU `universal`
117: 913db62 = 127: 7194fb0 gvfs-helper: don't fallback with new config
130: 1df2c14 = 128: c56423f release: add Mac OSX installer build
119: 648d749 = 129: bb1e28b test-gvfs-protocol: add cache_http_503 to mayhem
131: 9ef091e = 130: 8334308 release: build unsigned Ubuntu .deb package
121: 12a2346 = 131: 1c51476 t5799: add unit tests for new `gvfs.fallback` config setting
132: 131dd46 = 132: 9ca6420 release: add signing step for .deb package
137: 66323f5 = 133: 30aa971 update-microsoft-git: create barebones builtin
133: 1c43f40 = 134: 8cdb812 release: create draft GitHub release with packages & installers
138: 77952d0 = 135: 97bc54c update-microsoft-git: Windows implementation
134: 2cbf875 = 136: 07736b9 build-git-installers: publish gpg public key
139: e6d7504 = 137: a95a24a update-microsoft-git: use brew on macOS
135: 35d8e8a = 138: e591351 release: continue pestering until user upgrades
141: 0b5116c = 139: 0a70485 .github: reinstate ISSUE_TEMPLATE.md for microsoft/git
136: b5dae6f = 140: 27eeb18 dist: archive HEAD instead of HEAD^{tree}
143: e6be121 = 141: 6cac761 .github: update PULL_REQUEST_TEMPLATE.md
140: 2c81ede = 142: 6a21a1f release: include GIT_BUILT_FROM_COMMIT in MacOS build
145: af07101 = 143: 4e15b1b Adjust README.md for microsoft/git
142: 44f941d = 144: d8654ba release: add installer validation
144: a1c2d97 = 145: c3f36b9 git_config_set_multivar_in_file_gently(): add a lock timeout
146: 5d365c1 = 146: 4db75ac scalar: set the config write-lock timeout to 150ms
147: c5f7c06 = 147: 3e326cf scalar: add docs from microsoft/scalar
148: aac2f83 = 148: 0f090c1 scalar (Windows): use forward slashes as directory separators
149: 8e2be68 = 149: e4dac60 scalar: add retry logic to run_git()
150: 9a7aad4 = 150: a1f07e0 scalar: support the `config` command for backwards compatibility
151: 2769593 = 151: 2959111 scalar: implement a minimal JSON parser
152: fdf79eb = 152: 11dbc4e scalar clone: support GVFS-enabled remote repositories
153: 1627ebd = 153: 3980aed test-gvfs-protocol: also serve smart protocol
154: 261da1d = 154: 786e66a gvfs-helper: add the `endpoint` command
155: 12cf4a8 = 155: 1426aac dir_inside_of(): handle directory separators correctly
156: 0a3ef43 = 156: 70f1f58 scalar: disable authentication in unattended mode
157: 79eee6c = 157: a990ff5 abspath: make strip_last_path_component() global
158: 95f6307 = 158: 2f09c1d scalar: do initialize `gvfs.sharedCache`
159: 764999d = 159: 45ab580 scalar diagnose: include shared cache info
160: 781d294 = 160: fae8ba8 scalar: only try GVFS protocol on https:// URLs
161: 51df85b = 161: 4bd85a8 scalar: verify that we can use a GVFS-enabled repository
162: 46228b9 = 162: 7525683 scalar: add the `cache-server` command
163: e76c2d9 = 163: 5649f76 scalar: add a test toggle to skip accessing the vsts/info endpoint
164: 9c9f798 = 164: 4bd39ff scalar: adjust documentation to the microsoft/git fork
165: 0444d3b = 165: 8c69f40 scalar: enable untracked cache unconditionally
166: f4641dd = 166: 9e132df scalar: parse `clone --no-fetch-commits-and-trees` for backwards compatibility
167: 4379e5e = 167: 042fe59 scalar: make GVFS Protocol a forced choice
168: fa45836 = 168: e1cc275 scalar: work around GVFS Protocol HTTP/2 failures
169: 95c1d1b = 169: 5719dbd gvfs-helper-client: clean up server process(es)
170: 9d33577 = 170: 00d1244 scalar diagnose: accommodate Scalar's Functional Tests
171: 57e4a13 = 171: d759872 ci: run Scalar's Functional Tests
172: 0efd951 = 172: d232b1d scalar: upgrade to newest FSMonitor config setting
173: a68c75e ! 173: 7df304e add/rm: allow adding sparse entries when virtual
174: 86609c6 = 174: 474c8dc sparse-checkout: add config to disable deleting dirs
175: e698b79 = 175: b3f6758 diff: ignore sparse paths in diffstat
176: f99893a = 176: 57eaf87 repo-settings: enable sparse index by default
177: db4acb8 = 177: 20cf871 TO-UPSTREAM: sequencer: avoid progress when stderr is redirected
178: 02834b5 = 178: c20b297 TO-CHECK: t1092: use quiet mode for rebase tests
179: 8a3e7b4 = 179: e78cdcd reset: fix mixed reset when using virtual filesystem
180: 36d0aa5 = 180: 1626596 diff(sparse-index): verify with partially-sparse
181: 60bdf7d = 181: 785cbf9 stash: expand testing for `git stash -u`
182: f8b5487 = 182: e02f3fc sparse-index: add ensure_full_index_with_reason()
183: 369f7f5 ! 183: fdcd09d treewide: add reasons for expanding index
-: ------------ > 184: d5cf268 fixup! unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags
-: ------------ > 185: cdaf540 fixup! index-pack: avoid immediate object fetch while parsing packfile
184: 2519de3 = 186: 2c8f476 treewide: custom reasons for expanding index
185: 0c25a6b = 187: e16297b sparse-index: add macro for unaudited expansions
186: cc8049f = 188: f0018ed Docs: update sparse index plan with logging
187: a2eda56 = 189: 1935764 sparse-index: log failure to clear skip-worktree
188: 316de89 = 190: 19b690d stash: use -f in checkout-index child process
189: 5c7a96f = 191: ed7f705 sparse-index: do not copy hashtables during expansion
190: a241515 = 192: 9b95a01 TO-UPSTREAM: sub-process: avoid leaking `cmd`
191: 99f551a = 193: 8f2fa51 remote-curl: release filter options before re-setting them
192: 4692c6d = 194: 39bd48c transport: release object filter options
193: 96b2790 ! 195: 8eb3c00 push: don't reuse deltas with path walk
@@ t/meson.build
@@ t/meson.build: integration_tests = [
 't5582-fetch-negative-refspec.sh',
 't5583-push-branches.sh',
- 't5584-vfs.sh',
+ 't5584-http-429-retry.sh',
+ 't5590-push-path-walk.sh',
+ 't5599-vfs.sh',
 't5600-clone-fail-cleanup.sh',
 't5601-clone.sh',
- 't5602-clone-remote-exec.sh',

 ## t/t5590-push-path-walk.sh (new) ##
@@
194: 28db31c = 196: 2278ac9 t7900-maintenance.sh: reset config between tests
195: 00de5cb = 197: 1df7a81 maintenance: add cache-local-objects maintenance task
196: d0ac132 = 198: c6062db scalar.c: add cache-local-objects task
197: ebd3869 ! 199: e606a5f hooks: add custom post-command hook config
198: ffad07d = 200: 361b8b7 TO-UPSTREAM: Docs: fix asciidoc failures from short delimiters
199: 588c42e = 201: 79c05bd hooks: make hook logic memory-leak free
200: e6b8abf = 202: 82c9f86 t0401: test post-command for alias, version, typo
201: ff6b592 ! 203: c5ab898 hooks: better handle config without gitdir
@@ hook.c: static int handle_hook_replacement(struct repository *r,
 return 0;
 if (!strcmp(hook_name, "post-index-change")) {
-@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
- };
+@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
+ struct hook *h;
 /* Interject hook behavior depending on strategy. */
- if (r && r->gitdir &&
-- handle_hook_replacement(r, hook_name, &options->args))
-+ if (r && handle_hook_replacement(r, hook_name, &options->args))
- return 0;
+- handle_hook_replacement(r, hookname, &options->args))
++ if (r && handle_hook_replacement(r, hookname, &options->args))
+ return;
- hook_path = find_hook(r, hook_name);
+ hook_path = find_hook(r, hookname);

 ## t/t0401-post-command-hook.sh ##
@@ t/t0401-post-command-hook.sh: test_expect_success 'with post-index-change config' '
202: dafc4cd = 204: 4fdf8ec cat_one_file(): make it easy to see that the `size` variable is initialized
206: 6eadd6e = 205: 018b5ba revision: defensive programming
207: c82f4a3 = 206: cd3201e get_parent(): defensive programming
208: 2426e8b = 207: c1d54cf fetch-pack: defensive programming
209: ef84940 ! 208: d129a9e unparse_commit(): defensive programming
210: 550f9b3 = 209: fd9c7ab verify_commit_graph(): defensive programming
211: 718b8b9 = 210: b6477b0 stash: defensive programming
203: 1329aeb = 211: 2be5e20 fsck: avoid using an uninitialized variable
212: 662fdec = 212: 1466e27 stash: defensive programming
204: 0dd3e02 = 213: 175ee0f load_revindex_from_disk(): avoid accessing uninitialized data
214: 2ffee54 = 214: 9f58daa push: defensive programming
205: 68494b4 = 215: 5d72323 load_pack_mtimes_file(): avoid accessing uninitialized data
213: ed47d80 ! 216: 385b20e fetch: silence a CodeQL alert about a local variable's address' use after release
@@ Commit message

 ## builtin/fetch.c ##
@@ builtin/fetch.c: int cmd_fetch(int argc,
 die(_("must supply remote when using --negotiate-only"));
- gtransport = prepare_transport(remote, 1);
+ gtransport = prepare_transport(remote, 1, &filter_options);
 if (gtransport->smart_options) {
+ /*
+ * Intentionally assign the address of a local variable
215: d8809ba = 217: b3cfbd0 test-tool repository: check return value of `lookup_commit()`
216: 6dc2a93 = 218: d098545 fetch: defensive programming
217: 5a9d50d = 219: 2cb0489 shallow: handle missing shallow commits gracefully
219: acde930 = 220: dbb57b5 inherit_tracking(): defensive programming
220: 9c204b6 = 221: c194734 commit-graph: suppress warning about using a stale stack addresses
218: cebcbfc ! 222: ac1ec11 codeql: run static analysis as part of CI builds
221: 11fd31a ! 223: 553a783 codeql: publish the sarif file as build artifact
@@ .github/workflows/codeql.yml
@@ .github/workflows/codeql.yml: jobs:
 - name: Perform CodeQL Analysis
- uses: github/codeql-action/analyze@v3
+ uses: github/codeql-action/analyze@v4
+ with:
+ upload: False
+ output: sarif-results
222: 2ac46c5 ! 224: 74b61db codeql: disable a couple of non-critical queries for now
@@ .github/workflows/codeql.yml: jobs:
 - name: Install dependencies
 run: ci/install-dependencies.sh
@@ .github/workflows/codeql.yml: jobs:
- uses: github/codeql-action/init@v3
+ uses: github/codeql-action/init@v4
 with:
 languages: ${{ matrix.language }}
- queries: security-extended
223: e9aec80 = 225: 2b52b49 date: help CodeQL understand that there are no leap-year issues here
224: ff0fd27 = 226: 3b46d49 help: help CodeQL understand that consuming envvars is okay here
225: 930579c = 227: 1e2c133 ctype: help CodeQL understand that `sane_istest()` does not access array past end
226: e445330 = 228: 6902767 ctype: accommodate for CodeQL misinterpreting the `z` in `mallocz()`
227: 1170a79 = 229: c2360a6 strbuf_read: help with CodeQL misunderstanding that `strbuf_read()` does NUL-terminate correctly
228: aecb7c5 = 230: b855d37 codeql: also check JavaScript code
229: f9bf282 = 231: e713f39 scalar: add run_git_argv
230: 1fd99c3 = 232: c83a04b scalar: add --ref-format option to scalar clone
231: 18e4fd8 = 233: 0fc6260 gvfs-helper: skip collision check for loose objects
232: 58ebe08 = 234: 6a7a23b gvfs-helper: emit advice on transient errors
233: f179d71 = 235: f8bc452 gvfs-helper: avoid collision check for packfiles
234: 9b41a6e = 236: bb85403 t5799: update cache-server methods for multiple instances
235: 1116f7b = 237: 0ec485c gvfs-helper: override cache server for prefetch
236: 155adac = 238: 98b501a gvfs-helper: override cache server for get
237: 57fe86e = 239: 2b1094d gvfs-helper: override cache server for post
238: f04bd03 = 240: 0eb982c t5799: add test for all verb-specific cache-servers together
239: c8ce832 = 241: 3016169 lib-gvfs-helper: create helper script for protocol tests
240: ba07d09 = 242: 0da04b0 t579*: split t5799 into several parts
241: 8e9ac3d < -: ------------ osxkeychain: always apply required build flags
-: ------------ > 243: a15fe98 fixup! osxkeychain: always apply required build flags
-: ------------ > 244: 3485d93 scalar: add ---cache-server-url options
-: ------------ > 245: 1c8fa75 Restore previous errno after post command hook
-: ------------ > 246: 3d71819 t9210: differentiate origin and cache servers
-: ------------ > 247: 5f989a9 http: fix bug in ntlm_allow=1 handling
-: ------------ > 248: f946d44 unpack-trees: skip lstats for deleted VFS entries in checkout
-: ------------ > 249: 013cab2 worktree: conditionally allow worktree on VFS-enabled repos
-: ------------ > 250: 2360d06 gvfs-helper: create shared object cache if missing
-: ------------ > 251: f004bd8 gvfs-helper: send X-Session-Id headers
-: ------------ > 252: 1435bfa gvfs: add gvfs.sessionKey config
-: ------------ > 253: 620b378 gvfs: clear DIE_IF_CORRUPT in streaming incore fallback
-: ------------ > 254: a247252 workflow: add release-vfsforgit to automate VFS for Git updates
-: ------------ > 255: e5f9952 worktree remove: use GVFS_SUPPORTS_WORKTREES for skip-clean-check gate
-: ------------ > 256: bff5d6d ci: add new VFS for Git functional tests workflow
-: ------------ > 257: a3e439c fixup! maintenance: add cache-local-objects maintenance task
-: ------------ > 258: 613b1eb fixup! sub-process: add subprocess_start_argv()
-: ------------ > 259: 9a172cc fixup! hooks: add custom post-command hook config
-: ------------ > 260: c4a5879 fixup! maintenance: add cache-local-objects maintenance task
-: ------------ > 261: 4c0383b fixup! index-pack: avoid immediate object fetch while parsing packfile
-: ------------ > 262: f806df9 fixup! hooks: add custom post-command hook config
This will need to be cleaned up substantially (incorporating the notes
that are currently neatly stashed away in some of the `fixup!` commits'
messages) for -rc2.