
Rebase onto Git for Windows 2.54.0-rc2 #890

Open
dscho wants to merge 317 commits into vfs-2.54.0-rc2 from tentative/vfs-2.54.0-rc2

Conversation


@dscho dscho commented Apr 17, 2026

Range-diff relative to clean/vfs-2.53.0
  • 1: 0b9a637 (upstream: 0b9a637) < -: ------------ t5563: verify that NTLM authentication works

  • 2: a495d10 (upstream: a495d10) < -: ------------ http: disallow NTLM authentication by default

  • 5: d3269ef = 1: a8b5670 t: remove advice from some tests

  • 3: 4f9ffee = 2: fa5e46d sparse-index.c: fix use of index hashes in expand_index

  • 4: c058963 = 3: 5e2953f t5300: confirm failure of git index-pack when non-idx suffix requested

  • 6: 44d08ad = 4: f3a148c t1092: add test for untracked files and directories

  • 7: 7281f11 = 5: 227c0c3 index-pack: disable rev-index if index file has non .idx suffix

  • 8: 6a78411 = 6: 752da3b survey: calculate more stats on refs

  • 9: e559784 = 7: fe3f5f7 survey: show some commits/trees/blobs histograms

  • 10: c324b30 = 8: d41b7d0 survey: add vector of largest objects for various scaling dimensions

  • 11: 424138b = 9: 653687a survey: add pathname of blob or tree to large_item_vec

  • 12: 90b54b5 = 10: 564482d survey: add commit-oid to large_item detail

  • 13: f3a4f77 = 11: 145cc6b trace2: prefetch value of GIT_TRACE2_DST_DEBUG at startup

  • 14: 4c75e0b = 12: 98cc0b6 survey: add commit name-rev lookup to each large_item

  • 15: 93148be = 13: 72c4a50 survey: add --no-name-rev option

  • 16: 3e76b34 = 14: f75127c survey: started TODO list at bottom of source file

  • 17: 9f706da = 15: 6cd5fac survey: expanded TODO list at the bottom of the source file

  • 18: 9446142 = 16: 58f2dfe survey: expanded TODO with more notes

  • 19: e030c0e = 17: 4bd146d reset --stdin: trim carriage return from the paths

  • 20: 66e909e ! 18: 840b607 Identify microsoft/git via a distinct version suffix

    @@ Commit message
      ## GIT-VERSION-GEN ##
     @@
      
    - DEF_VER=v2.53.0
    + DEF_VER=v2.54.0-rc2
      
     +# Identify microsoft/git via a distinct version suffix
     +DEF_VER=$DEF_VER.vfs.0.0
  • 21: 96ee9e4 = 19: ee35ee1 gvfs: ensure that the version is based on a GVFS tag

  • 22: c5d5b7e = 20: 54f83ba gvfs: add a GVFS-specific header file

  • 23: 54c3608 = 21: 4f9c015 gvfs: add the core.gvfs config setting

  • 24: 5103fd4 = 22: 031d1e9 gvfs: add the feature to skip writing the index' SHA-1

  • 25: 26e5606 = 23: aa839b7 gvfs: add the feature that blobs may be missing

  • 26: 6ac9835 = 24: ac713d2 gvfs: prevent files to be deleted outside the sparse checkout

  • 105: a1c2d97 = 25: 3bb56f4 git_config_set_multivar_in_file_gently(): add a lock timeout

  • 106: 5d365c1 = 26: a46f3b2 scalar: set the config write-lock timeout to 150ms

  • 107: c5f7c06 = 27: c685d99 scalar: add docs from microsoft/scalar

  • 108: aac2f83 = 28: 504e90d scalar (Windows): use forward slashes as directory separators

  • 109: 8e2be68 = 29: f41388d scalar: add retry logic to run_git()

  • 110: 9a7aad4 = 30: 8f54a61 scalar: support the config command for backwards compatibility

  • 111: db4acb8 = 31: 31d6630 TO-UPSTREAM: sequencer: avoid progress when stderr is redirected

  • 112: dafc4cd = 32: cd4bde0 cat_one_file(): make it easy to see that the size variable is initialized

  • 113: 1329aeb = 33: 869b7d8 fsck: avoid using an uninitialized variable

  • 116: 6eadd6e = 34: 5981b91 revision: defensive programming

  • 114: 0dd3e02 = 35: 7b34fd4 load_revindex_from_disk(): avoid accessing uninitialized data

  • 117: c82f4a3 = 36: 2fa7ac8 get_parent(): defensive programming

  • 115: 68494b4 = 37: 55cdf49 load_pack_mtimes_file(): avoid accessing uninitialized data

  • 118: 2426e8b = 38: 7b01628 fetch-pack: defensive programming

  • 119: ef84940 ! 39: 0d029f7 unparse_commit(): defensive programming

    @@ commit.c: void unparse_commit(struct repository *r, const struct object_id *oid)
     -	if (!c->object.parsed)
     +	if (!c || !c->object.parsed)
      		return;
    - 	free_commit_list(c->parents);
    + 	commit_list_free(c->parents);
      	c->parents = NULL;
  • 120: 550f9b3 = 40: c6653c7 verify_commit_graph(): defensive programming

  • 121: 718b8b9 = 41: a591a45 stash: defensive programming

  • 122: 662fdec = 42: 52e0dd6 stash: defensive programming

  • 124: 2ffee54 = 43: 1d13d0d push: defensive programming

  • 123: ed47d80 ! 44: e09e3c9 fetch: silence a CodeQL alert about a local variable's address' use after release

    @@ Commit message
      ## builtin/fetch.c ##
     @@ builtin/fetch.c: int cmd_fetch(int argc,
      			die(_("must supply remote when using --negotiate-only"));
    - 		gtransport = prepare_transport(remote, 1);
    + 		gtransport = prepare_transport(remote, 1, &filter_options);
      		if (gtransport->smart_options) {
     +			/*
     +			 * Intentionally assign the address of a local variable
  • 125: d8809ba = 45: 57085a3 test-tool repository: check return value of lookup_commit()

  • 126: 6dc2a93 = 46: d86846e fetch: defensive programming

  • 127: 5a9d50d = 47: 8a1802e shallow: handle missing shallow commits gracefully

  • 128: acde930 = 48: 0524783 inherit_tracking(): defensive programming

  • 168: fdd4ffb0b9de = 49: 96fbdbe codeql: run static analysis as part of CI builds

  • 241: de19e4409c0a = 50: 5a09d69 codeql: publish the sarif file as build artifact

  • 242: 529acb723d7a = 51: 625318f codeql: disable a couple of non-critical queries for now

  • 243: 4c1923ddbbc0 = 52: d05d019 date: help CodeQL understand that there are no leap-year issues here

  • 244: ac68dc2f2a94 = 53: 20b0e5b help: help CodeQL understand that consuming envvars is okay here

  • 129: 9c204b6 = 54: bb22be6 commit-graph: suppress warning about using a stale stack addresses

  • 245: fb9811962374 = 55: b741875 ctype: help CodeQL understand that sane_istest() does not access array past end

  • 246: 4b577103eeae = 56: c2c52a2 ctype: accommodate for CodeQL misinterpreting the z in mallocz()

  • 247: f2a6953b42dd = 57: c7f2d20 strbuf_read: help with CodeQL misunderstanding that strbuf_read() does NUL-terminate correctly

  • 248: ea5bae8e7c45 = 58: c0d77a4 codeql: also check JavaScript code

  • 27: acaf7ff ! 59: 6c5c7d9 gvfs: optionally skip reachability checks/upload pack during fetch

    @@ gvfs.h: struct repository;
     
      ## t/meson.build ##
     @@ t/meson.build: integration_tests = [
    -   't5581-http-curl-verbose.sh',
        't5582-fetch-negative-refspec.sh',
        't5583-push-branches.sh',
    -+  't5584-vfs.sh',
    +   't5584-http-429-retry.sh',
    ++  't5599-vfs.sh',
        't5600-clone-fail-cleanup.sh',
        't5601-clone.sh',
        't5602-clone-remote-exec.sh',
     
    - ## t/t5584-vfs.sh (new) ##
    + ## t/t5599-vfs.sh (new) ##
     @@
     +#!/bin/sh
     +
    @@ t/t5584-vfs.sh (new)
     +'
     +
     +test_done
    - \ No newline at end of file
  • 28: 10b1501 = 60: 8f8e4a9 gvfs: ensure all filters and EOL conversions are blocked

  • 29: fc79044 ! 61: 3b3c29f gvfs: allow "virtualizing" objects

    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
      	if (co) {
      		if (oi) {
     @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
    - 			for (source = odb->sources; source; source = source->next)
    - 				if (!packfile_store_read_object_info(source->packfiles, real, oi, flags))
    + 				if (!odb_source_read_object_info(source, real, oi,
    + 								 flags | OBJECT_INFO_SECOND_READ))
      					return 0;
     +			if (gvfs_virtualize_objects(odb->repo) && !tried_hook) {
     +				tried_hook = 1;
  • 30: 7edf0e8 ! 62: db094aa Hydrate missing loose objects in check_and_freshen()

    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
      		}
     
      ## odb.h ##
    -@@ odb.h: int odb_write_object_stream(struct object_database *odb,
    - 			    struct odb_write_stream *stream, size_t len,
    - 			    struct object_id *oid);
    +@@ odb.h: void parse_alternates(const char *string,
    + 		      const char *relative_base,
    + 		      struct strvec *out);
      
     +int read_object_process(struct repository *r, const struct object_id *oid);
     +
  • 31: 3743bcd ! 63: 0ecac98 sha1_file: when writing objects, skip the read_object_hook

    @@ odb.c: int odb_has_object(struct object_database *odb, const struct object_id *o
     +		       int skip_virtualized_objects)
      {
      	struct odb_source *source;
    - 
    -@@ odb.c: int odb_freshen_object(struct object_database *odb,
    - 		if (packfile_store_freshen_object(source->packfiles, oid))
    - 			return 1;
    - 
    --		if (odb_source_loose_freshen_object(source, oid))
    -+		if (odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
    + 	odb_prepare_alternates(odb);
    + 	for (source = odb->sources; source; source = source->next)
    +-		if (odb_source_freshen_object(source, oid))
    ++		if (odb_source_freshen_object(source, oid, skip_virtualized_objects))
      			return 1;
    - 	}
    - 
    + 	return 0;
    + }
     
      ## odb.h ##
     @@ odb.h: int odb_has_object(struct object_database *odb,
    - 		   unsigned flags);
    + 		   enum odb_has_object_flags flags);
      
      int odb_freshen_object(struct object_database *odb,
     -		       const struct object_id *oid);
    @@ odb.h: int odb_has_object(struct object_database *odb,
      void odb_assert_oid_type(struct object_database *odb,
      			 const struct object_id *oid, enum object_type expect);
     
    + ## odb/source-files.c ##
    +@@ odb/source-files.c: static int odb_source_files_find_abbrev_len(struct odb_source *source,
    + }
    + 
    + static int odb_source_files_freshen_object(struct odb_source *source,
    +-					   const struct object_id *oid)
    ++					   const struct object_id *oid,
    ++					   int skip_virtualized_objects)
    + {
    + 	struct odb_source_files *files = odb_source_files_downcast(source);
    + 	if (packfile_store_freshen_object(files->packed, oid) ||
    +-	    odb_source_loose_freshen_object(source, oid))
    ++	    odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
    + 		return 1;
    + 	return 0;
    + }
    +
    + ## odb/source.h ##
    +@@ odb/source.h: struct odb_source {
    + 	 * has been freshened.
    + 	 */
    + 	int (*freshen_object)(struct odb_source *source,
    +-			      const struct object_id *oid);
    ++			      const struct object_id *oid,
    ++			      int skip_virtualized_objects);
    + 
    + 	/*
    + 	 * This callback is expected to persist the given object into the
    +@@ odb/source.h: static inline int odb_source_find_abbrev_len(struct odb_source *source,
    +  * not exist.
    +  */
    + static inline int odb_source_freshen_object(struct odb_source *source,
    +-					    const struct object_id *oid)
    ++					    const struct object_id *oid,
    ++					    int skip_virtualized_objects)
    + {
    +-	return source->freshen_object(source, oid);
    ++	return source->freshen_object(source, oid, skip_virtualized_objects);
    + }
    + 
    + /*
    +
      ## t/t0410/read-object ##
     @@ t/t0410/read-object: while (1) {
      		system ('git --git-dir="' . $DIR . '" cat-file blob ' . $sha1 . ' | git -c core.virtualizeobjects=false hash-object -w --stdin >/dev/null 2>&1');
  • 32: 860f9bc ! 64: 6096a76 gvfs: add global command pre and post hook procs

    @@ hook.c
      #include "abspath.h"
     +#include "environment.h"
      #include "advice.h"
    - #include "gettext.h"
    - #include "hook.h"
    -@@
    + #include "config.h"
      #include "environment.h"
    - #include "setup.h"
    +@@
    + #include "strbuf.h"
    + #include "strmap.h"
      
     +static int early_hooks_path_config(const char *var, const char *value,
     +				   const struct config_context *ctx UNUSED, void *cb)
    @@ hook.c
      
      	int found_hook;
      
    +-	if (!r || !r->gitdir)
    +-		return NULL;
    +-
     -	repo_git_path_replace(r, &path, "hooks/%s", name);
    -+	strbuf_reset(&path);
    -+	if (have_git_dir())
    ++	if (!r || !r->gitdir) {
    ++		if (!hook_path_early(name, &path))
    ++			return NULL;
    ++	} else {
     +		repo_git_path_replace(r, &path, "hooks/%s", name);
    -+	else if (!hook_path_early(name, &path))
    -+		return NULL;
    -+
    ++	}
      	found_hook = access(path.buf, X_OK) >= 0;
      #ifdef STRIP_EXTENSION
      	if (!found_hook) {
  • 33: 951d38a = 65: 6af73d5 t0400: verify that the hook is called correctly from a subdirectory

  • 34: 08520ae = 66: bf3a5ff t0400: verify core.hooksPath is respected by pre-command

  • 35: a89247b = 67: 8a38f28 Pass PID of git process to hooks.

  • 36: 61f990b = 68: cb18230 sparse-checkout: make sure to update files with a modify/delete conflict

  • 37: 7fcfdaa = 69: e459ab2 worktree: allow in Scalar repositories

  • 38: b3a9cca = 70: 0682d47 sparse-checkout: avoid writing entries with the skip-worktree bit

  • 39: d85d8f4 = 71: 1cad0d4 Do not remove files outside the sparse-checkout

  • 40: ebaad6e = 72: 694a097 send-pack: do not check for sha1 file when GVFS_MISSING_OK set

  • 41: db181ef = 73: 5ba0910 gvfs: allow corrupt objects to be re-downloaded

  • 42: bd61a92 = 74: 6602ef5 cache-tree: remove use of strbuf_addf in update_one

  • 43: 573b59d = 75: 6bfef91 gvfs: block unsupported commands when running in a GVFS repo

  • 44: 7badf14 = 76: 4577d0b gvfs: allow overriding core.gvfs

  • 45: 7572429 = 77: 80db01c BRANCHES.md: Add explanation of branches and using forks

  • 46: d72a479 = 78: 588b661 git.c: add VFS enabled cmd blocking

  • 47: 93e7dd8 = 79: 5556af7 git.c: permit repack cmd in Scalar repos

  • 48: b41a99f = 80: 000c192 git.c: permit fsck cmd in Scalar repos

  • 49: d81bbf5 = 81: 6cd4041 git.c: permit prune cmd in Scalar repos

  • 52: 4b9a737 ! 82: 8642204 Add virtual file system settings and hook proc

    @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
     +{
     +	/* Run only once. */
     +	static int virtual_filesystem_result = -1;
    ++	struct repo_config_values *cfg = repo_config_values(r);
     +	extern char *core_virtualfilesystem;
    -+	extern int core_apply_sparse_checkout;
     +	if (virtual_filesystem_result >= 0)
     +		return virtual_filesystem_result;
     +
    @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
     +
     +	/* virtual file system relies on the sparse checkout logic so force it on */
     +	if (core_virtualfilesystem) {
    -+		core_apply_sparse_checkout = 1;
    ++		cfg->apply_sparse_checkout = 1;
     +		virtual_filesystem_result = 1;
     +		return 1;
     +	}
    @@ dir.c: static void add_path_to_appropriate_result_list(struct dir_struct *dir,
      		else if ((dir->flags & DIR_SHOW_IGNORED_TOO) ||
     
      ## environment.c ##
    -@@ environment.c: int grafts_keep_true_parents;
    - int core_apply_sparse_checkout;
    +@@ environment.c: enum object_creation_mode object_creation_mode = OBJECT_CREATION_MODE;
    + int grafts_keep_true_parents;
      int core_sparse_checkout_cone;
      int sparse_expect_files_outside_of_patterns;
     +char *core_virtualfilesystem;
    @@ environment.c: int git_default_core_config(const char *var, const char *value,
      	}
      
      	if (!strcmp(var, "core.sparsecheckout")) {
    --		core_apply_sparse_checkout = git_config_bool(var, value);
    +-		cfg->apply_sparse_checkout = git_config_bool(var, value);
     +		/* virtual file system relies on the sparse checkout logic so force it on */
     +		if (core_virtualfilesystem)
    -+			core_apply_sparse_checkout = 1;
    ++			cfg->apply_sparse_checkout = 1;
     +		else
    -+			core_apply_sparse_checkout = git_config_bool(var, value);
    ++			cfg->apply_sparse_checkout = git_config_bool(var, value);
      		return 0;
      	}
      
    @@ sparse-index.c: void expand_index(struct index_state *istate, struct pattern_lis
      
      		if (!S_ISSPARSEDIR(ce->ce_mode)) {
      			set_index_entry(full, full->cache_nr++, ce);
    -@@ sparse-index.c: static void clear_skip_worktree_from_present_files_full(struct index_state *ista
    - void clear_skip_worktree_from_present_files(struct index_state *istate)
    - {
    - 	if (!core_apply_sparse_checkout ||
    +@@ sparse-index.c: void clear_skip_worktree_from_present_files(struct index_state *istate)
    + 	struct repo_config_values *cfg = repo_config_values(the_repository);
    + 
    + 	if (!cfg->apply_sparse_checkout ||
     +	    core_virtualfilesystem ||
      	    sparse_expect_files_outside_of_patterns)
      		return;
  • 53: 4c0a6f2 ! 83: 8d21b0a virtualfilesystem: don't run the virtual file system hook if the index has been redirected

    @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
      
     -	/* virtual file system relies on the sparse checkout logic so force it on */
      	if (core_virtualfilesystem) {
    --		core_apply_sparse_checkout = 1;
    +-		cfg->apply_sparse_checkout = 1;
     -		virtual_filesystem_result = 1;
     -		return 1;
     +		/*
    @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
     +		free(default_index_file);
     +		if (should_run_hook) {
     +			/* virtual file system relies on the sparse checkout logic so force it on */
    -+			core_apply_sparse_checkout = 1;
    ++			cfg->apply_sparse_checkout = 1;
     +			virtual_filesystem_result = 1;
     +			return 1;
     +		}
  • 54: b65bd6c = 84: c302d0d virtualfilesystem: check if directory is included

  • 50: a9061a8 = 85: 78255c5 worktree: remove special case GVFS cmd blocking

  • 55: 8ab7bab ! 86: 4301484 backwards-compatibility: support the post-indexchanged hook

    @@ Commit message
         allow any `post-indexchanged` hook to run instead (if it exists).
     
      ## hook.c ##
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.hook_name = hook_name,
    - 		.options = options,
    - 	};
    --	const char *const hook_path = find_hook(r, hook_name);
    -+	const char *hook_path = find_hook(r, hook_name);
    - 	int ret = 0;
    - 	const struct run_process_parallel_opts opts = {
    - 		.tr2_category = "hook",
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.data = &cb_data,
    - 	};
    +@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
    + 	const char *hook_path = find_hook(r, hookname);
    + 	struct hook *h;
      
     +	/*
     +	 * Backwards compatibility hack in VFS for Git: when originally
    @@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
     +	 * look for a hook with the old name (which would be found in case of
     +	 * already-existing checkouts).
     +	 */
    -+	if (!hook_path && !strcmp(hook_name, "post-index-change"))
    ++	if (!hook_path && !strcmp(hookname, "post-index-change"))
     +		hook_path = find_hook(r, "post-indexchanged");
     +
    - 	if (!options)
    - 		BUG("a struct run_hooks_opt must be provided to run_hooks");
    + 	if (!hook_path)
    + 		return;
      
     
      ## t/t7113-post-index-change-hook.sh ##
  • 51: 92421c0 = 87: 2bc2bf0 builtin/repack.c: emit warning when shared cache is present

  • 56: 0d9b9fd = 88: 2fe9f7e gvfs: verify that the built-in FSMonitor is disabled

  • 57: 1978fb1 = 89: 061c21a wt-status: add trace2 data for sparse-checkout percentage

  • 58: 8be878f = 90: f1e5fdf status: add status serialization mechanism

  • 59: 8e8f2d9 = 91: 42dda07 Teach ahead-behind and serialized status to play nicely together

  • 60: 0bce4cb = 92: e2476d7 status: serialize to path

  • 61: 52111d2 = 93: 336c021 status: reject deserialize in V2 and conflicts

  • 62: e1f48ab = 94: 51b28b1 serialize-status: serialize global and repo-local exclude file metadata

  • 63: 93bb8bf = 95: f2e8e52 status: deserialization wait

  • 64: afe608f = 96: 1f45fda status: deserialize with -uno does not print correct hint

  • 65: 3dd264a = 97: 17b1fe2 fsmonitor: check CE_FSMONITOR_VALID in ce_uptodate

  • 66: ec49af2 ! 98: 12b3942 fsmonitor: add script for debugging and update script for tests

    @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
     @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
      	my $o = $json_pkg->new->utf8->decode($response);
      
    - 	if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
    + 	if ($o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
     -		print STDERR "Adding '$git_work_tree' to watchman's watch list.\n";
    - 		$retry--;
      		qx/watchman watch "$git_work_tree"/;
      		die "Failed to make watchman watch '$git_work_tree'.\n" .
    + 		    "Falling back to scanning...\n" if $? != 0;
     @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
      		# return the fast "everything is dirty" flag to git and do the
      		# Watchman query just to get it over with now so we won't pay
    @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
     -		close $fh;
     -
      		print "/\0";
    - 		eval { launch_watchman() };
      		exit 0;
    + 	}
     @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
      	die "Watchman: $o->{error}.\n" .
      	    "Falling back to scanning...\n" if $o->{error};
  • 67: a925cc4 = 99: 01a4a16 status: disable deserialize when verbose output requested.

  • 68: 05c497d = 100: 2a97d15 t7524: add test for verbose status deserialzation

  • 69: d58fea7 = 101: e66373a deserialize-status: silently fallback if we cannot read cache file

  • 70: 0bca058 = 102: 0aee816 gvfs:trace2:data: add trace2 tracing around read_object_process

  • 71: c4a94ff = 103: ee278a1 gvfs:trace2:data: status deserialization information

  • 72: 06946b1 = 104: bdd02dd gvfs:trace2:data: status serialization

  • 73: 7b39090 = 105: 693e7f0 gvfs:trace2:data: add vfs stats

  • 74: 1eeb414 = 106: f206888 trace2: refactor setting process starting time

  • 75: de029a9 = 107: 984bacb trace2:gvfs:experiment: clear_ce_flags_1

  • 76: e63f8b4 = 108: ca649da trace2:gvfs:experiment: report_tracking

  • 77: a2fb779 = 109: 354d2e7 trace2:gvfs:experiment: read_cache: annotate thread usage in read-cache

  • 78: 3f1b032 = 110: 3e21ca0 trace2:gvfs:experiment: read-cache: time read/write of cache-tree extension

  • 79: ce811d2 = 111: 18fa0c1 trace2:gvfs:experiment: add region to apply_virtualfilesystem()

  • 80: 0577e2d = 112: a77f91f trace2:gvfs:experiment: add region around unpack_trees()

  • 81: 40fdd38 ! 113: f72701a trace2:gvfs:experiment: add region to cache_tree_fully_valid()

    @@ cache-tree.c: static void discard_unused_subtrees(struct cache_tree *it)
      	int i;
      	if (!it)
     @@ cache-tree.c: int cache_tree_fully_valid(struct cache_tree *it)
    - 			   HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
    + 			   ODB_HAS_OBJECT_RECHECK_PACKED | ODB_HAS_OBJECT_FETCH_PROMISOR))
      		return 0;
      	for (i = 0; i < it->subtree_nr; i++) {
     -		if (!cache_tree_fully_valid(it->down[i]->cache_tree))
  • 82: 4542ccb ! 114: ded032c trace2:gvfs:experiment: add unpack_entry() counter to unpack_trees() and report_tracking()

    @@ unpack-trees.c
      #include "refs.h"
      #include "attr.h"
     @@ unpack-trees.c: int unpack_trees(unsigned len, struct tree_desc *t, struct unpack_trees_options
    - 	struct pattern_list pl;
      	int free_pattern_list = 0;
      	struct dir_struct dir = DIR_INIT;
    + 	struct repo_config_values *cfg = repo_config_values(the_repository);
     +	unsigned long nr_unpack_entry_at_start;
      
      	if (o->reset == UNPACK_RESET_INVALID)
  • 83: f735787 = 115: c13e45c trace2:gvfs:experiment: increase default event depth for unpack-tree data

  • 84: 0883908 = 116: 053fa03 trace2:gvfs:experiment: add data for check_updates() in unpack_trees()

  • 85: 9b04c50 ! 117: 9aa2717 Trace2:gvfs:experiment: capture more 'tracking' details

    @@ remote.c
      #include "advice.h"
      #include "connect.h"
     @@ remote.c: int format_tracking_info(struct branch *branch, struct strbuf *sb,
    - 	char *base;
    - 	int upstream_is_gone = 0;
    + 		if (is_upstream && (!push_ref || !strcmp(upstream_ref, push_ref)))
    + 			is_push = 1;
      
    -+	trace2_region_enter("tracking", "stat_tracking_info", NULL);
    - 	sti = stat_tracking_info(branch, &ours, &theirs, &full_base, 0, abf);
    -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_flags", abf);
    -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_result", sti);
    -+	if (sti >= 0 && abf == AHEAD_BEHIND_FULL) {
    -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_ahead", ours);
    -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_behind", theirs);
    -+	}
    -+	trace2_region_leave("tracking", "stat_tracking_info", NULL);
    -+
    - 	if (sti < 0) {
    - 		if (!full_base)
    - 			return 0;
    ++		trace2_region_enter("tracking", "stat_tracking_pair", NULL);
    + 		cmp = stat_branch_pair(branch->refname, full_ref,
    + 				       &ours, &theirs, abf);
    ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_flags", abf);
    ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_result", cmp);
    ++		if (cmp >= 0 && abf == AHEAD_BEHIND_FULL) {
    ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_ahead", ours);
    ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_behind", theirs);
    ++		}
    ++		trace2_region_leave("tracking", "stat_tracking_pair", NULL);
    + 
    + 		if (cmp < 0) {
    + 			if (is_upstream) {
  • 86: 583b60e = 118: 1141617 credential: set trace2_child_class for credential manager children

  • 87: ad8a88e = 119: 37ef52b sub-process: do not borrow cmd pointer from caller

  • 88: 969b74d ! 120: 16e6fb6 sub-process: add subprocess_start_argv()

    @@ sub-process.c: int subprocess_start(struct hashmap *hashmap, struct subprocess_e
     +			  subprocess_start_fn startfn)
     +{
     +	int err;
    -+	size_t k;
     +	struct child_process *process;
     +	struct strbuf quoted = STRBUF_INIT;
     +
     +	process = &entry->process;
     +
     +	child_process_init(process);
    -+	for (k = 0; k < argv->nr; k++)
    -+		strvec_push(&process->args, argv->v[k]);
    ++	strvec_pushv(&process->args, argv->v);
     +	process->use_shell = 1;
     +	process->in = -1;
     +	process->out = -1;
  • 89: 27da8d7 ! 121: a8fea04 sha1-file: add function to update existing loose object cache

    @@ Commit message
         Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
     
      ## object-file.c ##
    -@@ object-file.c: struct oidtree *odb_source_loose_cache(struct odb_source *source,
    - 	return source->loose->cache;
    +@@ object-file.c: static struct oidtree *odb_source_loose_cache(struct odb_source *source,
    + 	return files->loose->cache;
      }
      
     +void odb_source_loose_cache_add_new_oid(struct odb_source *source,
    @@ object-file.c: struct oidtree *odb_source_loose_cache(struct odb_source *source,
     
      ## object-file.h ##
     @@ object-file.h: int odb_source_loose_write_stream(struct odb_source *source,
    - struct oidtree *odb_source_loose_cache(struct odb_source *source,
    - 				       const struct object_id *oid);
    + 				  struct odb_write_stream *stream, size_t len,
    + 				  struct object_id *oid);
      
     +/*
     + * Add a new object to the loose object cache (possibly after the
  • 90: b28be78 ! 122: ca951d0 index-pack: avoid immediate object fetch while parsing packfile

    @@
      ## Metadata ##
    -Author: Jeff Hostetler <jeffhost@microsoft.com>
    +Author: Johannes Schindelin <Johannes.Schindelin@gmx.de>
     
      ## Commit message ##
         index-pack: avoid immediate object fetch while parsing packfile
    @@ Commit message
         the object to be individually fetched when gvfs-helper (or
         read-object-hook or partial-clone) is enabled.
     
    +    The call site was migrated to odb_has_object() as part of the upstream
    +    refactoring, but odb_has_object(odb, oid, HAS_OBJECT_FETCH_PROMISOR)
    +    sets only OBJECT_INFO_QUICK without OBJECT_INFO_SKIP_FETCH_OBJECT, which
    +    means it WILL trigger remote fetches via gvfs-helper. But we want to
    +    prevent index-pack from individually fetching every object it encounters
    +    during the collision check.
    +
    +    Passing 0 instead gives us both OBJECT_INFO_QUICK and
    +    OBJECT_INFO_SKIP_FETCH_OBJECT, which is the correct equivalent of the
    +    original OBJECT_INFO_FOR_PREFETCH behavior.
    +
         Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
    +    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
     
      ## builtin/index-pack.c ##
     @@ builtin/index-pack.c: static void sha1_object(const void *data, struct object_entry *obj_entry,
      	if (startup_info->have_repository) {
      		read_lock();
      		collision_test_needed = odb_has_object(the_repository->objects, oid,
    --						       HAS_OBJECT_FETCH_PROMISOR);
    -+						       OBJECT_INFO_FOR_PREFETCH);
    +-						       ODB_HAS_OBJECT_FETCH_PROMISOR);
    ++						       0);
      		read_unlock();
      	}
      
  • 91: 900a62d ! 123: 7930a81 gvfs-helper: create tool to fetch objects using the GVFS Protocol

    @@ .gitignore
     +/git-gvfs-helper
      /git-hash-object
      /git-help
    - /git-hook
    + /git-history
     
      ## Documentation/config.adoc ##
     @@ Documentation/config.adoc: include::config/gui.adoc[]
    @@ environment.c: int git_default_core_config(const char *var, const char *value,
      	if (!strcmp(var, "core.sparsecheckout")) {
      		/* virtual file system relies on the sparse checkout logic so force it on */
      		if (core_virtualfilesystem)
    -@@ environment.c: static int git_default_mailmap_config(const char *var, const char *value)
    +@@ environment.c: static int git_default_push_config(const char *var, const char *value)
      	return 0;
      }
      
    @@ environment.h: extern char *core_virtualfilesystem;
     +extern char *gvfs_cache_server_url;
     +extern const char *gvfs_shared_cache_pathname;
      
    - extern int core_apply_sparse_checkout;
      extern int core_sparse_checkout_cone;
    + extern int sparse_expect_files_outside_of_patterns;
     
      ## gvfs-helper-client.c (new) ##
     @@
    @@ gvfs-helper-client.c (new)
     +		}
     +	}
     +
    -+	if (ghc & GHC__CREATED__PACKFILE)
    -+		packfile_store_reprepare(gh_client__chosen_odb->packfiles);
    ++	if (ghc & GHC__CREATED__PACKFILE) {
    ++		struct odb_source_files *files = odb_source_files_downcast(gh_client__chosen_odb);
    ++		packfile_store_reprepare(files->packed);
    ++	}
     +
     +	*p_ghc = ghc;
     +
    @@ gvfs-helper.c (new)
     +		odb_path = gvfs_shared_cache_pathname;
     +	else {
     +		odb_prepare_alternates(the_repository->objects);
    -+		odb_path = the_repository->objects->sources->path;
    ++		odb_path = repo_get_object_directory(the_repository);
     +	}
     +
     +	strbuf_addstr(&gh__global.buf_odb_path, odb_path);
    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
     +		extern int core_use_gvfs_helper;
      		struct odb_source *source;
      
    - 		/* Most likely it's a loose object. */
    -@@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
    + 		for (source = odb->sources; source; source = source->next)
    + 			if (!odb_source_read_object_info(source, real, oi, flags))
      				return 0;
    - 		}
      
     +		if (core_use_gvfs_helper && !tried_gvfs_helper) {
     +			enum gh_client__created ghc;
    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
     +			 */
     +		}
     +
    - 		/* Not a loose object; someone else may have just packed it. */
    - 		if (!(flags & OBJECT_INFO_QUICK)) {
    - 			odb_reprepare(odb->repo->objects);
    + 		/*
    + 		 * When the object hasn't been found we try a second read and
    + 		 * tell the sources so. This may cause them to invalidate
     @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
    - 				if (!packfile_store_read_object_info(source->packfiles, real, oi, flags))
    + 								 flags | OBJECT_INFO_SECOND_READ))
      					return 0;
      			if (gvfs_virtualize_objects(odb->repo) && !tried_hook) {
     +				// TODO Assert or at least trace2 if gvfs-helper
  • 92: 686c143 ! 124: c22235a sha1-file: create shared-cache directory if it doesn't exist

    @@ environment.h: extern int protect_hfs;
     -extern const char *gvfs_shared_cache_pathname;
     +extern struct strbuf gvfs_shared_cache_pathname;
      
    - extern int core_apply_sparse_checkout;
      extern int core_sparse_checkout_cone;
    + extern int sparse_expect_files_outside_of_patterns;
     
      ## gvfs-helper-client.c ##
     @@
    @@ gvfs-helper.c: static void approve_cache_server_creds(void)
     -		odb_path = gvfs_shared_cache_pathname;
     -	else {
     -		odb_prepare_alternates(the_repository->objects);
    --		odb_path = the_repository->objects->sources->path;
    +-		odb_path = repo_get_object_directory(the_repository);
     -	}
     -
     -	strbuf_addstr(&gh__global.buf_odb_path, odb_path);
    @@ gvfs-helper.c: static void approve_cache_server_creds(void)
     +			      &gvfs_shared_cache_pathname);
     +	else
     +		strbuf_addstr(&gh__global.buf_odb_path,
    -+			      the_repository->objects->sources->path);
    ++			      repo_get_object_directory(the_repository));
      }
      
      /*
  • 93: 0705607 = 125: a1d7ed5 gvfs-helper: better handling of network errors

  • 94: 90b03f6 = 126: 4869384 gvfs-helper-client: properly update loose cache with fetched OID

  • 95: 38eee73 = 127: 659aa92 gvfs-helper: V2 robust retry and throttling

  • 96: 7d50682 = 128: 7d0b0aa gvfs-helper: expose gvfs/objects GET and POST semantics

  • 97: b800370 = 129: e95e585 gvfs-helper: dramatically reduce progress noise

  • 98: a6bb85e = 130: 3eb677d gvfs-helper: handle pack-file after single POST request

  • 99: cd89ff3 = 131: bee254b test-gvfs-prococol, t5799: tests for gvfs-helper

  • 100: a3ef679 = 132: 75e734c gvfs-helper: move result-list construction into install functions

  • 101: ebd1cf3 = 133: 18d2344 t5799: add support for POST to return either a loose object or packfile

  • 102: 9b77529 = 134: 5004665 t5799: cleanup wc-l and grep-c lines

  • 103: ee96bd3 = 135: d6fc107 gvfs-helper: verify loose objects after write

  • 104: f72fbdc = 136: 6b7d23b t7599: create corrupt blob test

  • 130: b24f377 (upstream: b24f377) < -: ------------ http: warn if might have failed because of NTLM

  • 131: 816db62 (upstream: 816db62) < -: ------------ credential: advertise NTLM suppression and allow helpers to re-enable

  • 132: 25ede48 (upstream: 25ede48) < -: ------------ config: move show_all_config()

  • 133: 12210d0 (upstream: 12210d0) < -: ------------ config: add 'gently' parameter to format_config()

  • 134: 1ef1f9d (upstream: 1ef1f9d) < -: ------------ config: make 'git config list --type=' work

  • 135: d744923 (upstream: d744923) < -: ------------ config: format int64s gently

  • 136: 53959a8 (upstream: 53959a8) < -: ------------ config: format bools gently

  • 137: 5fb7bdc (upstream: 5fb7bdc) < -: ------------ config: format bools or ints gently

  • 138: 9c7fc23 (upstream: 9c7fc23) < -: ------------ config: format bools or strings in helper

  • 139: bcfb912 (upstream: bcfb912) < -: ------------ config: format paths gently

  • 140: 9cb4a5e (upstream: 9cb4a5e) < -: ------------ config: format expiry dates quietly

  • 141: db45e49 (upstream: db45e49) < -: ------------ color: add color_parse_quietly()

  • 142: 2d4ab5a (upstream: 2d4ab5a) < -: ------------ config: format colors quietly

  • 143: 645f92a (upstream: 645f92a) < -: ------------ config: restructure format_config()

  • 144: 096aa60 (upstream: 096aa60) < -: ------------ config: use an enum for type

  • 145: 8c8b1c8 (upstream: 1751905) < -: ------------ http: fix bug in ntlm_allow=1 handling

  • 146: dffcb8a (upstream: dffcb8a) < -: ------------ ci(dockerized): reduce the PID limit for private repositories

  • 147: 2d77dd8 (upstream: 2d77dd8) < -: ------------ mingw: skip symlink type auto-detection for network share targets

  • 148: cbf8d600030c ! 137: 0331552 gvfs-helper: add prefetch support

    @@ gvfs-helper-client.c: static int gh_client__objects__receive_response(
      
      		else if (starts_with(line, "ok"))
     @@ gvfs-helper-client.c: static int gh_client__objects__receive_response(
    - 		packfile_store_reprepare(gh_client__chosen_odb->packfiles);
    + 	}
      
      	*p_ghc = ghc;
     +	*p_nr_loose = nr_loose;
  • 149: 0e3fe28ec3a0 = 138: 5e2a48f gvfs-helper: add prefetch .keep file for last packfile

  • 150: 1124964ca749 = 139: a0ae34a gvfs-helper: do one read in my_copy_fd_len_tail()

  • 151: e721c02dabad = 140: ed044ca gvfs-helper: move content-type warning for prefetch packs

  • 152: 39f8495d7892 = 141: debafa8 fetch: use gvfs-helper prefetch under config

  • 153: c58c3f0dc75f = 142: 0e612e8 gvfs-helper: better support for concurrent packfile fetches

  • 154: 8ebbd4e2cb70 = 143: 379563f remote-curl: do not call fetch-pack when using gvfs-helper

  • 155: 3e39c6945eaa = 144: 533015f fetch: reprepare packs before checking connectivity

  • 156: 7be85cc75fb1 = 145: 3193c2b gvfs-helper: retry when creating temp files

  • 157: 08f747fec3b0 = 146: 8ff6c73 sparse: avoid warnings about known cURL issues in gvfs-helper.c

  • 158: 7e16e72baa23 = 147: f8e06d3 gvfs-helper: add --max-retries to prefetch verb

  • 159: 4b971608fc19 = 148: 70438b1 t5799: add tests to detect corrupt pack/idx files in prefetch

  • 160: 2770a13f3bbe = 149: d87df12 gvfs-helper: ignore .idx files in prefetch multi-part responses

  • 161: 049599138c79 = 150: 8bcc637 t5799: explicitly test gvfs-helper --fallback and --no-fallback

  • 162: dd6bc53f9234 = 151: a5c0dfb gvfs-helper: don't fallback with new config

  • 163: bb9255763d26 = 152: 613e283 maintenance: care about gvfs.sharedCache config

  • 164: d396ecf5e2e7 = 153: 13639f5 test-gvfs-protocol: add cache_http_503 to mayhem

  • 165: a151721b9513 ! 154: 024adf4 unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags

    @@ virtualfilesystem.c: int is_excluded_from_virtualfilesystem(const char *pathname
     +	size_t i;
     +	struct apply_virtual_filesystem_stats stats = {0};
     +
    -+	if (!repo_config_get_virtualfilesystem(istate->repo))
    ++	/*
    ++	 * We cannot use `istate->repo` here, as the config will be read for
    ++	 * `the_repository` and any mismatch is marked as a bug by f9b3c1f731dd
    ++	 * (environment: stop storing `core.attributesFile` globally, 2026-02-16).
    ++	 * This is not a bad thing, though: VFS is fundamentally incompatible
    ++	 * with submodules, which is the only scenario where this distinction
    ++	 * would matter in practice.
    ++	 */
    ++	if (!repo_config_get_virtualfilesystem(the_repository))
     +		return;
     +
     +	trace2_region_enter("vfs", "apply", the_repository);
  • 166: b6407ed703ae = 155: 655bc3c t5799: add unit tests for new gvfs.fallback config setting

  • 167: bb034dcc364e = 156: c65584a homebrew: add GitHub workflow to release Cask

  • 169: 6ff9da58e7df ! 157: 00ce441 Adding winget workflows

    @@ .github/workflows/release-winget.yml (new)
     +          $manifestDirectory = "$PWD\manifests\m\Microsoft\Git\$version"
     +          $output = & .\wingetcreate.exe submit $manifestDirectory
     +          Write-Host $output
    -+          $url = $output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value }
    ++          $url = ($output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value })[0]
     +          Write-Host "::notice::Submitted ${env:TAG_NAME} to winget as $url"
     +        shell: powershell
  • 170: dd0c54edea82 = 158: 87bc2a7 Disable the monitor-components workflow in msft-git

  • 171: 1668c6f605f7 = 159: 3ae75dd .github: enable windows builds on microsoft fork

  • 172: 59164e552495 = 160: d742689 .github/actions/akv-secret: add action to get secrets

  • 173: 32ce065be1ae = 161: 4d5e7f7 release: create initial Windows installer build workflow

  • 174: b027a122557e = 162: 34c1bab release: create initial Windows installer build workflow

  • 175: 89ba28654a47 = 163: 8e968b6 help: special-case HOST_CPU universal

  • 176: 2dea1a8adea3 = 164: f51d65e release: add Mac OSX installer build

  • 177: 289a9fd4a1d9 = 165: 5880c53 release: build unsigned Ubuntu .deb package

  • 178: c641380bf6c8 = 166: 72e968e release: add signing step for .deb package

  • 179: f508f3763f6f = 167: 72ceaaa release: create draft GitHub release with packages & installers

  • 180: 90a01a38c63b = 168: ec9ce46 build-git-installers: publish gpg public key

  • 181: bd83e0fb471f = 169: 4869097 release: continue pestering until user upgrades

  • 182: 5b0aadb0a773 = 170: 6868500 dist: archive HEAD instead of HEAD^{tree}

  • 183: 66640b2f7b1c = 171: 1ee1067 release: include GIT_BUILT_FROM_COMMIT in MacOS build

  • 185: 4f64783b446a = 172: 75b7152 update-microsoft-git: create barebones builtin

  • 186: d308ddacd3ea = 173: 2f4ddf2 update-microsoft-git: Windows implementation

  • 187: 259563d13ed5 = 174: c91eb29 update-microsoft-git: use brew on macOS

  • 188: 4a9917e3226d = 175: 232f425 .github: reinstate ISSUE_TEMPLATE.md for microsoft/git

  • 189: 085918da835c = 176: 37c7c01 .github: update PULL_REQUEST_TEMPLATE.md

  • 190: 936687831d61 = 177: 0113f6b Adjust README.md for microsoft/git

  • 184: 0632e94f908c = 178: 7a62ba8 release: add installer validation

  • 191: 56c686b8b886 = 179: 4f86ec6 scalar: implement a minimal JSON parser

  • 192: f088ec5b3ca7 = 180: 5179969 scalar clone: support GVFS-enabled remote repositories

  • 193: 8162ab767e3a = 181: fd16a9d test-gvfs-protocol: also serve smart protocol

  • 194: 4484ade074b0 = 182: 779f19d gvfs-helper: add the endpoint command

  • 195: 33efb2282333 = 183: 34543fd dir_inside_of(): handle directory separators correctly

  • 196: 562ac56def25 = 184: 202f1bb scalar: disable authentication in unattended mode

  • 197: c57300a45c15 = 185: 54a4c83 abspath: make strip_last_path_component() global

  • 198: 66cc17e076c2 = 186: 2398696 scalar: do initialize gvfs.sharedCache

  • 199: 83f798462a7d = 187: c006788 scalar diagnose: include shared cache info

  • 200: 670794ccf4dc = 188: 727fe21 scalar: only try GVFS protocol on https:// URLs

  • 201: dc797ddc0c59 = 189: 71e01d8 scalar: verify that we can use a GVFS-enabled repository

  • 202: 5dc67140bcdb = 190: 5d0b827 scalar: add the cache-server command

  • 203: cb0f4706bb4a = 191: 2d92f15 scalar: add a test toggle to skip accessing the vsts/info endpoint

  • 204: 45fdb72a4a10 = 192: c2549ba scalar: adjust documentation to the microsoft/git fork

  • 205: 6eab039b9ecf = 193: 77c8e46 scalar: enable untracked cache unconditionally

  • 206: 8c49eae13f59 = 194: b68b878 scalar: parse clone --no-fetch-commits-and-trees for backwards compatibility

  • 207: 449e82daa99d = 195: 0a8b91a scalar: make GVFS Protocol a forced choice

  • 208: 4d3d69057a32 = 196: cc07369 scalar: work around GVFS Protocol HTTP/2 failures

  • 209: 9b70a972790e = 197: ff53a8f gvfs-helper-client: clean up server process(es)

  • 210: 0bc2e6360a6e = 198: 01a353e scalar diagnose: accommodate Scalar's Functional Tests

  • 211: 75c1361da1f1 = 199: 1ec4708 ci: run Scalar's Functional Tests

  • 212: 4701f8c7ef32 = 200: 1efaeac scalar: upgrade to newest FSMonitor config setting

  • 213: 5e3486384c44 ! 201: df70c2c add/rm: allow adding sparse entries when virtual

    @@ read-cache.c: static void update_callback(struct diff_queue_struct *q,
      
     -		if (!data->include_sparse &&
     +		if (!data->include_sparse && !core_virtualfilesystem &&
    - 		    !path_in_sparse_checkout(path, data->index))
    + 			!path_in_sparse_checkout(path, data->index))
      			continue;
      
  • 214: 2008e70d7294 = 202: be1b2dc sparse-checkout: add config to disable deleting dirs

  • 215: f8dd98e772c5 = 203: 2b1218c diff: ignore sparse paths in diffstat

  • 216: 8dd8b50a2584 = 204: 0aadd34 repo-settings: enable sparse index by default

  • 217: 4559c49f8945 = 205: cfc3ea0 TO-CHECK: t1092: use quiet mode for rebase tests

  • 218: daff23b62cc1 = 206: 9e2f292 reset: fix mixed reset when using virtual filesystem

  • 219: 9975426c9a14 = 207: 7a6d276 diff(sparse-index): verify with partially-sparse

  • 220: 54be5c968c62 = 208: d38ec9d stash: expand testing for git stash -u

  • 221: 0a7935a8ab60 = 209: 6347ba3 sparse-index: add ensure_full_index_with_reason()

  • 222: b3019487bac9 ! 210: b74342e treewide: add reasons for expanding index

    @@ sparse-index.c: void clear_skip_worktree_from_present_files(struct index_state *
      }
     
      ## t/t1092-sparse-checkout-compatibility.sh ##
    -@@ t/t1092-sparse-checkout-compatibility.sh: test_expect_success 'cat-file --batch' '
    - 	ensure_expanded cat-file --batch <in
    +@@ t/t1092-sparse-checkout-compatibility.sh: test_expect_success 'sparse-index is not expanded: merge-ours' '
    + 	ensure_not_expanded merge -s ours merge-right
      '
      
     +test_expect_success 'ensure_full_index_with_reason' '
  • 223: 15ea5335a815 = 211: c9b36c8 treewide: custom reasons for expanding index

  • 224: 24039906ca58 = 212: 6026407 sparse-index: add macro for unaudited expansions

  • 225: ff7363b5b543 = 213: f7dc244 Docs: update sparse index plan with logging

  • 226: 84fb13be4c12 = 214: 391ffd8 sparse-index: log failure to clear skip-worktree

  • 227: 80106842ff57 = 215: 3e5ad36 stash: use -f in checkout-index child process

  • 228: be5428d60b6d = 216: 2d63f06 sparse-index: do not copy hashtables during expansion

  • 229: 13f6d0510f9b = 217: b74a9e8 TO-UPSTREAM: sub-process: avoid leaking cmd

  • 230: 6808859f5292 = 218: 18f20c3 remote-curl: release filter options before re-setting them

  • 231: c7d00d9b1738 = 219: 6ecceb2 transport: release object filter options

  • 232: 4bcf76443903 ! 220: 1757cdf push: don't reuse deltas with path walk

    @@ t/meson.build
     @@ t/meson.build: integration_tests = [
        't5582-fetch-negative-refspec.sh',
        't5583-push-branches.sh',
    -   't5584-vfs.sh',
    +   't5584-http-429-retry.sh',
     +  't5590-push-path-walk.sh',
    +   't5599-vfs.sh',
        't5600-clone-fail-cleanup.sh',
        't5601-clone.sh',
    -   't5602-clone-remote-exec.sh',
     
      ## t/t5590-push-path-walk.sh (new) ##
     @@
  • 233: a495d6779ef9 = 221: 5331de0 t7900-maintenance.sh: reset config between tests

  • 234: 531edfa7bb99 ! 222: c028362 maintenance: add cache-local-objects maintenance task

    @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
     +{
     +	struct strbuf dstdir = STRBUF_INIT;
     +	struct repository *r = the_repository;
    ++	int ret = 0;
     +
     +	/* This task is only applicable with a VFS/Scalar shared cache. */
     +	if (!shared_object_dir)
    @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
     +	for_each_file_in_pack_dir(r->objects->sources->path, move_pack_to_shared_cache,
     +				  dstdir.buf);
     +
    -+	for_each_loose_object(r->objects, move_loose_object_to_shared_cache, NULL,
    -+			      FOR_EACH_OBJECT_LOCAL_ONLY);
    ++	ret = for_each_loose_file_in_source(r->objects->sources,
    ++				      move_loose_object_to_shared_cache,
    ++				      NULL, NULL, NULL);
     +
     +cleanup:
     +	strbuf_release(&dstdir);
    -+	return 0;
    ++	return ret;
     +}
     +
      typedef int (*maintenance_task_fn)(struct maintenance_run_opts *opts,
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +
     +		test_commit something &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +		test_commit something &&
     +		git config set gvfs.sharedcache .git/objects &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +		test_commit something &&
     +		git config set gvfs.sharedcache ../cache &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +		test_commit something &&
     +		git config set gvfs.sharedcache ../cache &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
  • 235: a6e0b7e7d3c6 = 223: 6606734 scalar.c: add cache-local-objects task

  • 236: 4c7a1c7f5c52 ! 224: dc7bda7 hooks: add custom post-command hook config

    @@ hook.c
      #include "abspath.h"
      #include "environment.h"
      #include "advice.h"
    -@@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
    - 	strvec_clear(&options->args);
    +@@ hook.c: void hook_free(void *p, const char *str UNUSED)
    + 	free(h);
      }
      
     +static char *get_post_index_change_sentinel_name(struct repository *r)
    @@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
     +	return 0;
     +}
     +
    - int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		  struct run_hooks_opt *options)
    + /* Helper to detect and add default "traditional" hooks from the hookdir. */
    + static void list_hooks_add_default(struct repository *r, const char *hookname,
    + 				   struct string_list *hook_list,
    + 				   struct run_hooks_opt *options)
      {
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.hook_name = hook_name,
    - 		.options = options,
    - 	};
    --	const char *hook_path = find_hook(r, hook_name);
    +-	const char *hook_path = find_hook(r, hookname);
     +	const char *hook_path;
    - 	int ret = 0;
    - 	const struct run_process_parallel_opts opts = {
    - 		.tr2_category = "hook",
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.data = &cb_data,
    - 	};
    + 	struct hook *h;
      
     +	/* Interject hook behavior depending on strategy. */
    -+	if (r && r->gitdir &&
    -+	    handle_hook_replacement(r, hook_name, &options->args))
    -+		return 0;
    ++	if (r && r->gitdir && options &&
    ++	    handle_hook_replacement(r, hookname, &options->args))
    ++		return;
     +
    -+	hook_path = find_hook(r, hook_name);
    ++	hook_path = find_hook(r, hookname);
     +
      	/*
      	 * Backwards compatibility hack in VFS for Git: when originally
      	 * introduced (and used!), it was called `post-indexchanged`, but this
    +@@ hook.c: struct string_list *list_hooks(struct repository *r, const char *hookname,
    + 	CALLOC_ARRAY(hook_head, 1);
    + 	string_list_init_dup(hook_head);
    + 
    +-	/* Add hooks from the config, e.g. hook.myhook.event = pre-commit */
    +-	list_hooks_add_configured(r, hookname, hook_head, options);
    ++	/*
    ++	 * The pre/post-command hooks are only supported as traditional hookdir
    ++	 * hooks, never as config-based hooks. Building the config map validates
    ++	 * all hook.*.event entries and would die() on partially-configured
    ++	 * hooks, which is fatal when "git config" is still in the middle of
    ++	 * setting up a multi-key hook definition.
    ++	 */
    ++	if (strcmp(hookname, "pre-command") && strcmp(hookname, "post-command"))
    ++		list_hooks_add_configured(r, hookname, hook_head, options);
    + 
    + 	/* Add the default "traditional" hooks from hookdir. */
    + 	list_hooks_add_default(r, hookname, hook_head, options);
     
      ## t/t0401-post-command-hook.sh ##
     @@ t/t0401-post-command-hook.sh: test_expect_success 'with succeeding hook' '
  • 237: d7acf77d6eda ! 225: 3905835 TO-UPSTREAM: Docs: fix asciidoc failures from short delimiters

    @@ Documentation/trace2-target-values.adoc
     +  type can be either `stream` or `dgram`; if omitted Git will
     +  try both.
     +----
    - \ No newline at end of file
  • 238: e4ad688bca2d = 226: 465b077 hooks: make hook logic memory-leak free

  • 239: fe92a8047ff5 = 227: a316af6 t0401: test post-command for alias, version, typo

  • 240: e0b3df967017 ! 228: b09c1b1 hooks: better handle config without gitdir

    @@ hook.c: static int handle_hook_replacement(struct repository *r,
      		return 0;
      
      	if (!strcmp(hook_name, "post-index-change")) {
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 	};
    +@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
    + 	struct hook *h;
      
      	/* Interject hook behavior depending on strategy. */
    --	if (r && r->gitdir &&
    --	    handle_hook_replacement(r, hook_name, &options->args))
    -+	if (r && handle_hook_replacement(r, hook_name, &options->args))
    - 		return 0;
    +-	if (r && r->gitdir && options &&
    ++	if (r && options &&
    + 	    handle_hook_replacement(r, hookname, &options->args))
    + 		return;
      
    - 	hook_path = find_hook(r, hook_name);
     
      ## t/t0401-post-command-hook.sh ##
     @@ t/t0401-post-command-hook.sh: test_expect_success 'with post-index-change config' '
  • 249: 6f90de3071b3 = 229: a99343a scalar: add run_git_argv

  • 250: 0348717a8d2b = 230: 48363e5 scalar: add --ref-format option to scalar clone

  • 251: a4547f02b88a = 231: 0d74324 gvfs-helper: skip collision check for loose objects

  • 252: d9af2e25e2be = 232: f3b5a2c gvfs-helper: emit advice on transient errors

  • 253: 48b1e24508ca = 233: c6ac1f6 gvfs-helper: avoid collision check for packfiles

  • 254: 89b6b90c3ac5 = 234: 507b0e6 t5799: update cache-server methods for multiple instances

  • 255: aacb81e246ca = 235: eda63b8 gvfs-helper: override cache server for prefetch

  • 256: 96182b663345 = 236: 987daa9 gvfs-helper: override cache server for get

  • 257: c8c1bd67a868 = 237: adc9bb8 gvfs-helper: override cache server for post

  • 258: 62a99f2ff0f0 = 238: 69e4d23 t5799: add test for all verb-specific cache-servers together

  • 259: 7cc34b5db627 = 239: b238540 lib-gvfs-helper: create helper script for protocol tests

  • 260: de0458ee87af = 240: ae537c5 t579*: split t5799 into several parts

  • 261: fc092adf8054 (obsoleted by 3e9cc24 (osxkeychain: define build targets in the top-level Makefile., 2026-02-20)) < -: ------------ osxkeychain: always apply required build flags

  • 263: d1f407989a46 = 241: d3d250d scalar: add ---cache-server-url options

  • 262: 5b3cc94a7cc7 = 242: c4897c6 Restore previous errno after post command hook

  • 264: dea042c5b891 = 243: d2dc844 t9210: differentiate origin and cache servers

  • 265: 26bd8a88fcae (upstream: 8c8b1c8) < -: ------------ http: fix bug in ntlm_allow=1 handling

  • 266: ea9d7ec3e65e = 244: c6d35dd unpack-trees: skip lstats for deleted VFS entries in checkout

  • 267: 75bb06acf32a = 245: 8336758 worktree: conditionally allow worktree on VFS-enabled repos

  • 268: 17430066359c = 246: eb2ebda gvfs-helper: create shared object cache if missing

  • 269: 3bacffcc5367 = 247: 6f1a0fe gvfs-helper: send X-Session-Id headers

  • 270: a59d91ce09d9 = 248: a7bda18 gvfs: add gvfs.sessionKey config

  • 271: 5afe46a08be0 ! 249: 0fd48b3 gvfs: clear DIE_IF_CORRUPT in streaming incore fallback

    @@ Commit message
     
      ## odb/streaming.c ##
     @@
    + #include "convert.h"
      #include "environment.h"
      #include "repository.h"
    - #include "object-file.h"
     +#include "gvfs.h"
      #include "odb.h"
    + #include "odb/source.h"
      #include "odb/streaming.h"
    - #include "replace-object.h"
     @@ odb/streaming.c: static int open_istream_incore(struct odb_read_stream **out,
      		.base.read = read_istream_incore,
      	};
  • 272: 37a408567f68 = 250: 1093e72 workflow: add release-vfsforgit to automate VFS for Git updates

  • 273: 42bee1b811d1 = 251: a6551e1 worktree remove: use GVFS_SUPPORTS_WORKTREES for skip-clean-check gate

  • 274: 7901136fc739 ! 252: 30ff6c8 ci: add new VFS for Git functional tests workflow

    @@ .github/workflows/vfs-functional-tests.yml (new)
     +          NO_TCLTK: Yup
     +        run: |
     +          # We do require a VFS version
    -+          def_ver="$(sed -n 's/DEF_VER=\(.*vfs.*\)/\1/p' GIT-VERSION-GEN)"
    ++          def_ver="$(sed -n '/^DEF_VER=/{
    ++            s/^DEF_VER=\(.*vfs.*\)/\1/p
    ++            tq # already found a *.vfs.* one, skip next line
    ++            s/^DEF_VER=\(.*\)/\1.vfs.0.0/p
    ++            :q
    ++            q
    ++          }' GIT-VERSION-GEN)"
     +          test -n "$def_ver"
     +
    ++          # VFSforGit cannot handle -rc versions; strip the `-rc` part, if any
    ++          case "$def_ver" in
    ++          *-rc*) def_ver=${def_ver%%-rc*}.vfs.${def_ver#*.vfs.};;
    ++          esac
    ++
     +          # Ensure that `git version` reflects DEF_VER
     +          case "$(git describe --match "v[0-9]*vfs*" HEAD)" in
     +          ${def_ver%%.vfs.*}.vfs.*) ;; # okay, we can use this
    -+          *) git -c user.name=ci -c user.email=ci@github tag -m for-testing ${def_ver}.NNN.g$(git rev-parse --short HEAD);;
    ++          *) echo ${def_ver}.NNN.g$(git rev-parse --short HEAD) >version;;
     +          esac
     +
     +          make -j5 DESTDIR="$GITHUB_WORKSPACE/MicrosoftGit/payload/${{ matrix.architecture }}" install
  • 275: 6be8749d44c0 = 253: a9c6288 azure-pipelines: add stub release pipeline for Azure

dscho and others added 30 commits April 17, 2026 15:07
These patches implement some defensive programming to address complaints
some static analyzers might have.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
CodeQL pointed out a couple of issues, which are addressed in this patch
series.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This patch series has been long in the making, ever since Johannes
Nicolai and I spiked it in November/December 2020.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
While performing a fetch with a virtual file system, we know that there
will be missing objects, and we don't want to download them just because
of the reachability of the commits. We also don't want to download a
pack file with commits, trees, and blobs, since these will be downloaded
on demand.

This flag skips the first connectivity check and, by returning zero,
skips the upload pack. It also skips the second connectivity check, but
continues to update the branches to the latest commit ids.
Signed-off-by: Kevin Willford <kewillf@microsoft.com>
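The control flow described above can be sketched in shell (illustrative only; this is not the actual fetch.c code, and the `vfs_fetch` flag name is hypothetical):

```shell
# Hypothetical sketch: with a "VFS fetch" flag set, both connectivity
# checks become no-ops, while ref updates still proceed.
vfs_fetch=1

check_connected () {
	if test "$vfs_fetch" = 1
	then
		# Missing objects are expected under a virtual file system;
		# report success without walking reachability.
		return 0
	fi
	echo "walking reachability of fetched commits..."
}

check_connected && echo "first connectivity check skipped"
echo "updating branches to the latest commit ids"
check_connected && echo "second connectivity check skipped"
```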
Ensure all filters and EOL conversions are blocked when running under
GVFS so that our projected file sizes will match the actual file size
when it is hydrated on the local machine.

Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
The idea is to allow blob objects to be missing from the local repository,
and to load them lazily on demand.

After discussing this idea on the mailing list, we will rename the feature
to "lazy clone" and work more on this.

Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Hydrate missing loose objects in check_and_freshen() when running
virtualized. Add test cases to verify read-object hook works when
running virtualized.

This hook is called in check_and_freshen() rather than
check_and_freshen_local() to make the hook work also with alternates.

Helped-by: Kevin Willford <kewillf@microsoft.com>
Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
If we are going to write an object there is no use in calling
the read object hook to get an object from a potentially remote
source.  We would rather just write out the object and avoid the
potential round trip for an object that doesn't exist.

This change adds a flag to the check_and_freshen() and
freshen_loose_object() functions' signatures so that the hook
is bypassed when the functions are called before writing loose
objects. The check for a local object is still performed so we
don't overwrite something that has already been written to one
of the objects directories.

Based on a patch by Kevin Willford.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
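A minimal shell sketch of the hydrate-on-miss and skip-hook behavior described above (function names and the `skip-hook` flag are illustrative; the real code is C in object-file.c and speaks a pkt-line protocol to a long-running hook process):

```shell
# Hypothetical stand-in for the long-running read-object hook.
read_object_hook () {
	echo "downloaded contents of $1" >"objects/$1"
}

# check_and_freshen: freshen an existing local object, hydrate it on
# demand via the hook, but bypass the hook when the caller is about to
# write the object anyway ($2 = optional "skip-hook" flag).
check_and_freshen () {
	if test -f "objects/$1"
	then
		touch "objects/$1"	# freshen: bump the mtime
		return 0
	fi
	test "$2" = skip-hook && return 1	# caller will write it itself
	read_object_hook "$1"
}

mkdir -p objects
check_and_freshen 0123abcd	# miss: hydrated via the hook
check_and_freshen feedbeef skip-hook ||
	echo "feedbeef: hook bypassed, not found locally"
```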
This adds a hard-coded call to GVFS.hooks.exe before and after each Git
command runs.

To make sure that this is only called on repositories cloned with GVFS, we
test for the tell-tale .gvfs.

2021-10-30: Recent movement of find_hook() to hook.c required moving these
changes out of run-command.c to hook.c.

2025-11-06: The `warn_on_auto_comment_char` hack is so ugly that it
forces us to pile similarly ugly code on top because that hack _expects_
that the config has not been read when `cmd_commit()`, `cmd_revert()`,
`cmd_cherry_pick()`, `cmd_merge()`, or `cmd_rebase()` set that flag. But
with the `pre_command()` hook already run, that assumption is incorrect.

Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
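As a rough illustration of the mechanism (the script name and logging are made up for this sketch; the fork actually invokes GVFS.hooks.exe, and only when the tell-tale .gvfs is present):

```shell
# Create a toy pre-command hook that records the command line it is
# given, then invoke it the way a wrapper might before running Git.
cat >pre-command <<'EOF'
#!/bin/sh
echo "about to run: git $*" >>command.log
EOF
chmod +x pre-command

./pre-command status --short
cat command.log
```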
Suggested by Ben Peart.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Verify that the core.hooksPath configuration is respected by the
pre-command hook. The original regression test was written by
Alejandro Pauly.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Signed-off-by: Alejandro Pauly <alpauly@microsoft.com>
When using the sparse-checkout feature, the file might not be on disk
because the skip-worktree bit is on. This used to be a bug in the
(since-deleted) `recursive` strategy. Let's ensure that this bug does
not resurface.
Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
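The skip-worktree bit at the heart of this fix can be observed with stock Git commands (the repository and file names are just for this demo):

```shell
git init -q demo && cd demo
echo one >file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm init

# Mark the entry skip-worktree: Git now assumes the file may
# legitimately be absent from the working tree, as in a sparse checkout.
git update-index --skip-worktree file.txt
rm file.txt

git ls-files -t file.txt	# prints "S file.txt" (S = skip-worktree)
```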
The 'git worktree' command was marked as BLOCK_ON_GVFS_REPO because it
does not interact well with the virtual filesystem of VFS for Git. When
a Scalar clone uses the GVFS protocol, it enables the
GVFS_BLOCK_COMMANDS flag, since commands like 'git gc' do not work well
with the GVFS protocol.

However, 'git worktree' works just fine with the GVFS protocol since it
isn't doing anything special. It copies the sparse-checkout from the
current worktree, so it does not have performance issues.

This is a highly requested option.

The solution is to stop using the BLOCK_ON_GVFS_REPO option and instead
add a special-case check in cmd_worktree() specifically for a particular
bit of the 'core_gvfs' global variable (loaded by very early config
reading) that corresponds to the virtual filesystem. The bit that most
closely resembled this behavior was non-obviously named, but does
provide a signal that we are in a Scalar clone and not a VFS for Git
clone. The error message is copied from git.c, so it will have the same
output as before if a user runs this in a VFS for Git clone.

Signed-off-by: Derrick Stolee <derrickstolee@github.com>
When using the sparse-checkout feature git should not write to the working
directory for files with the skip-worktree bit on.  With the skip-worktree
bit on the file may or may not be in the working directory and if it is
not we don't want or need to create it by calling checkout_entry.

There are two callers of checkout_target, both of which check that the
file does not exist before calling it: load_current makes a call to
lstat right before calling checkout_target, and check_preimage only runs
checkout_target if stat_ret is less than zero. check_preimage
initializes stat_ret to zero and lstats the file, setting stat_ret to
something other than zero, only if !stat->cached.

This patch makes checkout_target check whether the skip-worktree bit is
on and, if so, return early so that the entry does not end up in the
working directory. This is so that apply will not create a file in the
working directory and then update the index while leaving the working
directory out of sync with the changes that happened in the index.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Signed-off-by: Kevin Willford <kewillf@microsoft.com>
As of 9e59b38 (object-file: emit corruption errors when detected,
2022-12-14), Git will loudly complain about corrupt objects.

That is fine, as long as the idea isn't to re-download locally-corrupted
objects. But that's exactly what we want to do in VFS for Git via the
`read-object` hook, as per the `GitCorruptObjectTests` code
added in microsoft/VFSForGit@2db0c030eb25 (New
features: [...] -  GVFS can now recover from corrupted git object files
[...] , 2018-02-16).

So let's support precisely that, and add a regression test that ensures
that re-downloading corrupt objects via the `read-object` hook works.

While at it, avoid the XOR operator to flip the bits, when we actually
want to make sure that they are turned off: Use the AND-NOT operator for
that purpose.

Helped-by: Matthew John Cheetham <mjcheetham@outlook.com>
Helped-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Add the ability to block built-in commands based on if the `core.gvfs`
setting has the `GVFS_USE_VIRTUAL_FILESYSTEM` bit set. This allows us
to selectively block commands that use the GVFS protocol, but don't use
VFS for Git (for example repos cloned via `scalar clone` against Azure
DevOps).

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
String formatting can be a performance issue when there are
hundreds of thousands of trees.

Stop using strbuf_addf and instead add the strings or characters
individually.

There is a limited number of modes, so add a switch for the known ones
and a default case for anything that comes through that is not a mode
known to Git.

In one scenario regarding a huge worktree, this reduces the
time required for a `git checkout <branch>` from 44 seconds
to 38 seconds, i.e. it is a non-negligible performance
improvement.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Loosen the blocking of the `repack` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `repack` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
The following commands and options are not currently supported when working
in a GVFS repo.  Add code to detect and block these commands from executing.

1) fsck
2) gc
3) prune
4) repack
5) submodule
6) update-index --split-index
7) update-index --index-version (other than 4)
8) update-index --[no-]skip-worktree
9) worktree

Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Loosen the blocking of the `fsck` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `fsck` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
In earlier versions of `microsoft/git`, we found a user who had set
`core.gvfs = false` in their global config. This should not have been
necessary, but it also should not have caused a problem. However, it
did.

The reason was that `gvfs_load_config_value()` was called from
`config.c` when reading config key/value pairs from all the config
files. The local config should override the global config, and this is
done by `config.c` reading the global config first then reading the
local config. However, our logic only allowed writing the `core_gvfs`
variable once.

In v2.51.0, we had to adapt to upstream changes that altered the way
the `core.gvfs` config value is read, and the special handling is no
longer necessary, yet we still want to keep the test case that ensures
this bug does not regress.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Loosen the blocking of the `prune` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `prune` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
On index load, clear/set the skip worktree bits based on the virtual
file system data. Use virtual file system data to update skip-worktree
bit in unpack-trees. Use virtual file system data to exclude files and
folders not explicitly requested.

Update 2022-04-05: disable the "present-despite-SKIP_WORKTREE" file removal
behavior when 'core.virtualfilesystem' is enabled.

Signed-off-by: Ben Peart <benpeart@microsoft.com>
…x has been redirected

Fixes #13

Some git commands spawn helpers and redirect the index to a different
location.  These include "difftool -d" and the sequencer
(i.e. `git rebase -i`, `git cherry-pick` and `git revert`) and others.
In those instances we don't want to update their temporary index with
our virtualization data.

Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
Add a check to see whether a directory is included in the
virtualfilesystem before checking the directory hashmap. This allows a
directory entry like foo/ to find all untracked files in
subdirectories.
Replace the special casing of the `worktree` command being blocked on
VFS-enabled repos with the new `BLOCK_ON_VFS_ENABLED` flag.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Tyrie Vella and others added 19 commits April 17, 2026 22:02
When core_virtualfilesystem is set and a branch switch deletes entries
(present in old tree, absent in new tree), deleted_entry() calls
verify_absent_if_directory() with 'ce' pointing to a tree entry from
traverse_trees(). This tree entry lacks CE_NEW_SKIP_WORKTREE because
that flag is only set on src_index entries by mark_new_skip_worktree().

The missing flag causes verify_absent_if_directory()'s fast-path to
fail, falling through to verify_absent_1() which lstats every such
path. In a VFS repo each lstat may trigger callbacks, creating
placeholders. On a large repo switching between LTS releases this
produces tens of thousands of placeholders that the VFS must then
clean up when they are deleted as part of the checkout.

Fix this by propagating CE_NEW_SKIP_WORKTREE from the index entry
(old) to the tree entry (ce) when core_virtualfilesystem is set.
This allows the existing fast-path to work, eliminating the
unnecessary lstats entirely.

This is safe in VFS mode because the virtual filesystem is responsible
for tracking which files are hydrated and cleaning up placeholders
when entries are removed from the index. Additionally, when
GVFS_NO_DELETE_OUTSIDE_SPARSECHECKOUT is set (always the case in VFS
repos), deleted_entry() preserves CE_SKIP_WORKTREE on the CE_REMOVE
entry and git does not unlink skip-worktree files from disk, so the
lstat result would not be acted upon anyway.

Measured on a 2.8M file VFS repo (0% hydrated):
  Before: ~135s checkout, ~23k folder placeholders created
  After:  ~25s checkout, 0 folder placeholders created

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add GVFS_SUPPORTS_WORKTREES flag (1<<8) to core.gvfs bitmask. When set,
allow git worktree commands to run on VFS-enabled repos instead of
blocking them with BLOCK_ON_VFS_ENABLED.

Force --no-checkout during worktree add when VFS is active so ProjFS can
be mounted before files are projected.

Support skip-clean-check marker file in worktree gitdir: if
.git/worktrees/<name>/skip-clean-check exists, skip the cleanliness
check during worktree remove. This allows VFSForGit's pre-command hook
to unmount ProjFS after its own status check, then let git proceed
without re-checking (which would fail without the virtual filesystem).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Users should be allowed to delete their shared cache and have it
recreated on 'git fetch'. This change makes that happen by creating any
leading directories and then creating the directory itself with mkdir().

Add a test that exercises --local-cache-path for the first time and
checks this scenario.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
In order to assist with tracking user experience between the Git client
and the GVFS Protocol servers, start sending the SID for the client Git
process over the wire as the X-Session-Id header. Insert this header to
all curl requests for each protocol.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
When core_virtualfilesystem is set and a branch switch deletes entries
(present in old tree, absent in new tree), deleted_entry() calls
verify_absent_if_directory() with 'ce' pointing to a tree entry from
traverse_trees(). This tree entry lacks CE_NEW_SKIP_WORKTREE because
that flag is only set on src_index entries by mark_new_skip_worktree().

The missing flag causes verify_absent_if_directory()'s fast-path to
fail, falling through to verify_absent_1() which lstats every such path.
In a VFS repo each lstat may trigger callbacks, creating placeholders.
On a large repo switching between LTS releases this produces tens of
thousands of placeholders that the VFS must then clean up when they are
deleted as part of the checkout.

Fix this by propagating CE_NEW_SKIP_WORKTREE from the index entry (old)
to the tree entry (ce) when core_virtualfilesystem is set. This allows
the existing fast-path to work, eliminating the unnecessary lstats
entirely.

Measured on a 2.8M file VFS repo (0% hydrated):
  Before: ~135s checkout, ~23k folder placeholders created
  After:  ~25s checkout, 0 folder placeholders created

* [x] This change only applies to the virtualization hook and VFS for
Git.
In different engineering systems, there may already be pseudonymous
identifiers stored in the local Git config. For Office and 1JS, this is
present in the 'otel.trace2.id' config value.

We'd like to be able to collect server-side telemetry based on these
pseudonymous identifiers, so prefixing the X-Session-Id header with this
value is helpful.

We could create a single 'gvfs.sessionPrefix' config key that stores
this value, but then we'd need to duplicate the identifier and risk
drift in the value. For now, we create this indirection by saying "what
config _key_ should Git use to look up the value to add as a prefix?"

Signed-off-by: Derrick Stolee <stolee@gmail.com>
Add GVFS_SUPPORTS_WORKTREES flag (1<<8) to core.gvfs bitmask. When set,
allow git worktree commands to run on VFS-enabled repos instead of
blocking them with BLOCK_ON_VFS_ENABLED.

Force --no-checkout during worktree add when VFS is active so ProjFS can
be mounted before files are projected.

Support skip-clean-check marker file in worktree gitdir: if
.git/worktrees/<name>/skip-clean-check exists, skip the cleanliness
check during worktree remove. This allows VFSForGit's pre-command hook
to unmount ProjFS after its own status check, then let git proceed
without re-checking (which would fail without the virtual filesystem).

The corresponding change in VFSForGit is
microsoft/VFSForGit#1911

* [x] This change only applies to the virtualization hook and VFS for
Git.
Users should be allowed to delete their shared cache and have it
recreated on 'git fetch'. This change makes that happen by creating any
leading directories and then creating the directory itself with
`mkdir()`.

Users may have had more instances of this due to #840, which advises
deleting the shared cache on a mistaken assumption that it would be
recreated on `git fetch`.

* [X] This change only applies to interactions with Azure DevOps and the
      GVFS Protocol.
The upstream refactoring in 4c89d31 (streaming: rely on object
sources to create object stream, 2025-11-23) changed how
istream_source() discovers objects. Previously, it called
odb_read_object_info_extended() with flags=0 to locate the object, then
tried the source-specific opener (e.g. open_istream_loose). If that
failed (e.g. corrupt loose object), it fell back to open_istream_incore
which re-read the object — by which time the read-object hook had
already re-fetched a clean copy.

After the refactoring, istream_source() iterates over sources directly.
When a corrupt loose object is found, odb_source_loose_read_object_stream
fails and the loop continues to the next source. When no source has the
object, it falls through to open_istream_incore, which calls
odb_read_object_info_extended with OBJECT_INFO_DIE_IF_CORRUPT. This
encounters the same corrupt loose file still on disk and dies before the
read-object hook gets a chance to re-download a clean replacement.

Fix this by clearing OBJECT_INFO_DIE_IF_CORRUPT in open_istream_incore
when GVFS_MISSING_OK is set, matching the existing pattern in
odb_read_object.

This fixes the GitCorruptObjectTests functional test failures
(GitRequestsReplacementForAllNullObject,
GitRequestsReplacementForObjectCorruptedWithBadData,
GitRequestsReplacementForTruncatedObject) that appeared when upgrading
from v2.50.1.vfs.0.1 to v2.53.0.vfs.0.0.

Signed-off-by: Tyler Vella <tyvella@microsoft.com>
A common problem when tracking GVFS Protocol queries is that we don't
have a way to connect client and server interactions. This is especially
true in the typical case where a cache server deployment is hidden
behind a load balancer. We can't even determine which cache server was
used for certain requests!

Add some client-identifying data to the HTTP queries using the
X-Session-Id header. This will by default identify the helper process
using its SID. If configured via the new gvfs.sessionKey config, it will
prefix this SID with another config value.

For example, Office monorepo users have an 'otel.trace2.id' config value
that is a pseudonymous identifier. This allows telemetry readers to
group requests by enlistment without knowing the user's identity at all.
Users could opt-in to provide this identifier for investigations around
their long-term performance or issues. This change makes it possible to
extend this to cache server interactions.

* [X] This change only applies to interactions with Azure DevOps and the
      GVFS Protocol.
When a new microsoft/git release is published, VFS for Git needs to
pick up the new Git version. Today this is a manual process. This
workflow automates it by reacting to GitHub release events.

On a full release, it creates a PR in microsoft/VFSForGit to bump the
default GIT_VERSION in the build workflow, so future CI runs and
manual dispatches use the latest stable Git version.

Authentication uses the existing Azure Key Vault + OIDC pattern
(matching release-homebrew and release-winget) to retrieve a token
with write access to the VFS for Git repository.

In a separate effort we'll add another workflow that triggers on
push to vfs-* branches to trigger a run of VFS for Git Functional Tests
(from the master branch).

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
The upstream refactoring in 4c89d31 (streaming: rely on object
sources to create object stream, 2025-11-23) changed how
istream_source() discovers objects. Previously, it called
odb_read_object_info_extended() with flags=0 to locate the object, then
tried the source-specific opener (e.g. open_istream_loose). If that
failed (e.g. corrupt loose object), it fell back to open_istream_incore
which re-read the object — by which time the read-object hook had
already re-fetched a clean copy.

After the refactoring, istream_source() iterates over sources directly.
When a corrupt loose object is found,
odb_source_loose_read_object_stream fails and the loop continues to the
next source. When no source has the object, it falls through to
open_istream_incore, which calls odb_read_object_info_extended with
OBJECT_INFO_DIE_IF_CORRUPT. This encounters the same corrupt loose file
still on disk and dies before the read-object hook gets a chance to
re-download a clean replacement.

Fix this by clearing OBJECT_INFO_DIE_IF_CORRUPT in open_istream_incore
when GVFS_MISSING_OK is set, matching the existing pattern in
odb_read_object.

This fixes the GitCorruptObjectTests functional test failures
(GitRequestsReplacementForAllNullObject,
GitRequestsReplacementForObjectCorruptedWithBadData,
GitRequestsReplacementForTruncatedObject) that appeared when upgrading
from v2.50.1.vfs.0.1 to v2.53.0.vfs.0.0.

This is a companion to #782 (which predates 4c89d31, and is therefore
not _exactly_ an omission of that PR).
The skip-clean-check guard in remove_worktree() was gated on
core_virtualfilesystem, which is only initialized by
repo_config_get_virtualfilesystem() during index loading. Since the
worktree remove path never loads the index before this check, the
variable was always NULL, causing check_clean_worktree() to run even
when VFSForGit had already unmounted the projection and written the
skip-clean-check marker file. This made 'git worktree remove' fail
with 'fatal: failed to run git status' in GVFS repos.

Replace core_virtualfilesystem with
gvfs_config_is_set(GVFS_SUPPORTS_WORKTREES). This is the correct bit
to check here: remove_worktree() can only be reached when
GVFS_SUPPORTS_WORKTREES is set (cmd_worktree blocks otherwise at line
1501), and it directly expresses that the VFS layer supports worktree
operations and knows how to signal when a clean check can be skipped.
Unlike core_virtualfilesystem, gvfs_config_is_set() is self-loading
from core.gvfs and does not depend on the index having been read.

Assisted-by: Claude Opus 4.6
Signed-off-by: Tyrie Vella <tyrielv@gmail.com>
When a new `microsoft/git` release is published, VFS for Git needs to
pick up the new Git version. Today this is a manual process. This
workflow automates it by reacting to GitHub release events.
    
On a full release, it creates a PR in `microsoft/VFSForGit` to bump the
default `GIT_VERSION` in the build workflow, so future CI runs and
manual dispatches use the latest stable Git version.
    
Authentication uses the existing Azure Key Vault + OIDC pattern
(matching `release-homebrew` and `release-winget`) to retrieve a token
with write access to the VFS for Git repository.
    
In a separate effort we'll add another workflow that triggers on push to
`vfs-*` branches to trigger a run of VFS for Git Functional Tests (from
the `master` branch).
Build Git with VFS support using the Git for Windows SDK and package
it as a MicrosoftGit artifact with an install.bat that uses robocopy
to deploy to 'C:\Program Files\Git'.

Find the latest successful VFSForGit build on master and call its
reusable functional-tests.yaml workflow, which downloads the GVFS
installer and FT executables from that run, and the Git artifact
from this run.

Requires a VFSFORGIT_TOKEN secret with actions:read on
microsoft/VFSForGit for cross-repo artifact downloads.
The skip-clean-check guard in remove_worktree() was gated on
core_virtualfilesystem, which is only initialized by
repo_config_get_virtualfilesystem() during index loading. Since the
worktree remove path never loads the index before this check, the
variable was always NULL, causing check_clean_worktree() to run even
when VFSForGit had already unmounted the projection and written the
skip-clean-check marker file. This made 'git worktree remove' fail with
'fatal: failed to run git status' in GVFS repos.

Replace core_virtualfilesystem with
gvfs_config_is_set(GVFS_USE_VIRTUAL_FILESYSTEM), which is already loaded
from core.gvfs by cmd_worktree() before dispatch to remove_worktree().
Add a stub pipeline for releases using Azure Pipelines.

The pipeline runs on Microsoft internal images/runners across:
 * Windows x64
 * Windows ARM64
 * macOS
 * Ubuntu x64
 * Ubuntu ARM64

At the start of a run there is a prerequisite stage and pre-build
validation. Today this does nothing, and should be updated to:
 * validate the current commit is tagged (annotated), and
 * capture the Git version, tag name and SHA.

Artifacts are uploaded from the build stage, and downloaded into the
release stage later for uploading to a draft GitHub release.

ESRP signing to be added later.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
## TL;DR

Add a new `vfs-functional-tests.yml` workflow that builds Git from this
repository and runs the VFS for Git functional tests against it, using
VFSForGit's reusable workflow.

## Why?

VFS for Git functional tests currently only run in the VFSForGit
repository, against a tagged microsoft/git release. This means
VFS-related regressions in Git are only caught *after* a release is
tagged. By running the FTs here on every push and PR to `vfs-*`
branches, we can catch regressions before they ship.

This is the counterpart to
microsoft/VFSForGit#1932, which extracted the
functional tests into a reusable `workflow_call` workflow.

## How it works

1. **Build Git** — checks out this repo, builds with the Git for Windows
SDK, and packages the result into a `MicrosoftGit` artifact with an
`install.bat` that deploys via robocopy to `C:\Program Files\Git`. Both
ARM64 and x64 are built and combined into a single artifact for the FTs
to install and use.

2. **Find VFSForGit build** — locates the latest successful VFSForGit CI
run on `master` to get the GVFS installer and FT executables. If the
build was a 'skipped' build (because an existing run succeeded with that
tree) then follow the annotation to the real run.

3. **Call reusable workflow** — invokes
`microsoft/VFSForGit/.github/workflows/functional-tests.yaml@master`,
which handles the full test matrix (2 configs × 2 architectures × 10
slices).
Add a stub pipeline for releases using Azure Pipelines.

The pipeline runs on Microsoft internal images/runners across:
 * Windows x64
 * Windows ARM64
 * macOS
 * Ubuntu x64
 * Ubuntu ARM64

At the start of a run there is a prerequisite stage and pre-build
validation. Today this does nothing, and should be updated to:
 * validate the current commit is tagged (annotated), and
 * capture the Git version, tag name and SHA.

Artifacts are uploaded from the build stage, and downloaded into the
release stage later for uploading to a draft GitHub release.

ESRP signing to be added later.
@dscho
Copy link
Copy Markdown
Member Author

dscho commented Apr 17, 2026

Here are explanations for the more gnarly parts of the range-diff:

t5584-vfs.sh rename
  • 27: acaf7ff ! 59: 6c5c7d9 gvfs: optionally skip reachability checks/upload pack during fetch

    @@ gvfs.h: struct repository;
     
      ## t/meson.build ##
     @@ t/meson.build: integration_tests = [
    -   't5581-http-curl-verbose.sh',
        't5582-fetch-negative-refspec.sh',
        't5583-push-branches.sh',
    -+  't5584-vfs.sh',
    +   't5584-http-429-retry.sh',
    ++  't5599-vfs.sh',
        't5600-clone-fail-cleanup.sh',
        't5601-clone.sh',
        't5602-clone-remote-exec.sh',
     
    - ## t/t5584-vfs.sh (new) ##
    + ## t/t5599-vfs.sh (new) ##
     @@
     +#!/bin/sh
     +
    @@ t/t5584-vfs.sh (new)
     +'
     +
     +test_done
    - \ No newline at end of file
  • I've had enough of those test number clashes and bumped vfs to 5599.

    ODB refactoring reaction work
    • 31: 3743bcd ! 63: 0ecac98 sha1_file: when writing objects, skip the read_object_hook

      @@ odb.c: int odb_has_object(struct object_database *odb, const struct object_id *o
       +		       int skip_virtualized_objects)
        {
        	struct odb_source *source;
      - 
      -@@ odb.c: int odb_freshen_object(struct object_database *odb,
      - 		if (packfile_store_freshen_object(source->packfiles, oid))
      - 			return 1;
      - 
      --		if (odb_source_loose_freshen_object(source, oid))
      -+		if (odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
      + 	odb_prepare_alternates(odb);
      + 	for (source = odb->sources; source; source = source->next)
      +-		if (odb_source_freshen_object(source, oid))
      ++		if (odb_source_freshen_object(source, oid, skip_virtualized_objects))
        			return 1;
      - 	}
      - 
      + 	return 0;
      + }
       
        ## odb.h ##
       @@ odb.h: int odb_has_object(struct object_database *odb,
      - 		   unsigned flags);
      + 		   enum odb_has_object_flags flags);
        
        int odb_freshen_object(struct object_database *odb,
       -		       const struct object_id *oid);
      @@ odb.h: int odb_has_object(struct object_database *odb,
        void odb_assert_oid_type(struct object_database *odb,
        			 const struct object_id *oid, enum object_type expect);
       
      + ## odb/source-files.c ##
      +@@ odb/source-files.c: static int odb_source_files_find_abbrev_len(struct odb_source *source,
      + }
      + 
      + static int odb_source_files_freshen_object(struct odb_source *source,
      +-					   const struct object_id *oid)
      ++					   const struct object_id *oid,
      ++					   int skip_virtualized_objects)
      + {
      + 	struct odb_source_files *files = odb_source_files_downcast(source);
      + 	if (packfile_store_freshen_object(files->packed, oid) ||
      +-	    odb_source_loose_freshen_object(source, oid))
      ++	    odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
      + 		return 1;
      + 	return 0;
      + }
      +
      + ## odb/source.h ##
      +@@ odb/source.h: struct odb_source {
      + 	 * has been freshened.
      + 	 */
      + 	int (*freshen_object)(struct odb_source *source,
      +-			      const struct object_id *oid);
      ++			      const struct object_id *oid,
      ++			      int skip_virtualized_objects);
      + 
      + 	/*
      + 	 * This callback is expected to persist the given object into the
      +@@ odb/source.h: static inline int odb_source_find_abbrev_len(struct odb_source *source,
      +  * not exist.
      +  */
      + static inline int odb_source_freshen_object(struct odb_source *source,
      +-					    const struct object_id *oid)
      ++					    const struct object_id *oid,
      ++					    int skip_virtualized_objects)
      + {
      +-	return source->freshen_object(source, oid);
      ++	return source->freshen_object(source, oid, skip_virtualized_objects);
      + }
      + 
      + /*
      +
        ## t/t0410/read-object ##
       @@ t/t0410/read-object: while (1) {
        		system ('git --git-dir="' . $DIR . '" cat-file blob ' . $sha1 . ' | git -c core.virtualizeobjects=false hash-object -w --stdin >/dev/null 2>&1');

    The "freshening" of loose objects was moved even further away from the call sites.

    Reacting to a new early return in the early hooks path
    • 32: 860f9bc ! 64: 6096a76 gvfs: add global command pre and post hook procs

      @@ hook.c
        #include "abspath.h"
       +#include "environment.h"
        #include "advice.h"
      - #include "gettext.h"
      - #include "hook.h"
      -@@
      + #include "config.h"
        #include "environment.h"
      - #include "setup.h"
      +@@
      + #include "strbuf.h"
      + #include "strmap.h"
        
       +static int early_hooks_path_config(const char *var, const char *value,
       +				   const struct config_context *ctx UNUSED, void *cb)
      @@ hook.c
        
        	int found_hook;
        
      +-	if (!r || !r->gitdir)
      +-		return NULL;
      +-
       -	repo_git_path_replace(r, &path, "hooks/%s", name);
      -+	strbuf_reset(&path);
      -+	if (have_git_dir())
      ++	if (!r || !r->gitdir) {
      ++		if (!hook_path_early(name, &path))
      ++			return NULL;
      ++	} else {
       +		repo_git_path_replace(r, &path, "hooks/%s", name);
      -+	else if (!hook_path_early(name, &path))
      -+		return NULL;
      -+
      ++	}
        	found_hook = access(path.buf, X_OK) >= 0;
        #ifdef STRIP_EXTENSION
        	if (!found_hook) {

    Microsoft Git has a special code path to run hooks even before any Git directory is discovered (because of the pre-/post-command hooks). This code clashes with an upstream change to return early (and without doing anything) when no gitdir was yet discovered.

    Reaction work for upstream's refactoring of the sparse checkout flag
    • 52: 4b9a737 ! 82: 8642204 Add virtual file system settings and hook proc

      @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
       +{
       +	/* Run only once. */
       +	static int virtual_filesystem_result = -1;
      ++	struct repo_config_values *cfg = repo_config_values(r);
       +	extern char *core_virtualfilesystem;
      -+	extern int core_apply_sparse_checkout;
       +	if (virtual_filesystem_result >= 0)
       +		return virtual_filesystem_result;
       +
      @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
       +
       +	/* virtual file system relies on the sparse checkout logic so force it on */
       +	if (core_virtualfilesystem) {
      -+		core_apply_sparse_checkout = 1;
      ++		cfg->apply_sparse_checkout = 1;
       +		virtual_filesystem_result = 1;
       +		return 1;
       +	}
      @@ dir.c: static void add_path_to_appropriate_result_list(struct dir_struct *dir,
        		else if ((dir->flags & DIR_SHOW_IGNORED_TOO) ||
       
        ## environment.c ##
      -@@ environment.c: int grafts_keep_true_parents;
      - int core_apply_sparse_checkout;
      +@@ environment.c: enum object_creation_mode object_creation_mode = OBJECT_CREATION_MODE;
      + int grafts_keep_true_parents;
        int core_sparse_checkout_cone;
        int sparse_expect_files_outside_of_patterns;
       +char *core_virtualfilesystem;
      @@ environment.c: int git_default_core_config(const char *var, const char *value,
        	}
        
        	if (!strcmp(var, "core.sparsecheckout")) {
      --		core_apply_sparse_checkout = git_config_bool(var, value);
      +-		cfg->apply_sparse_checkout = git_config_bool(var, value);
       +		/* virtual file system relies on the sparse checkout logic so force it on */
       +		if (core_virtualfilesystem)
      -+			core_apply_sparse_checkout = 1;
      ++			cfg->apply_sparse_checkout = 1;
       +		else
      -+			core_apply_sparse_checkout = git_config_bool(var, value);
      ++			cfg->apply_sparse_checkout = git_config_bool(var, value);
        		return 0;
        	}
        
      @@ sparse-index.c: void expand_index(struct index_state *istate, struct pattern_lis
        
        		if (!S_ISSPARSEDIR(ce->ce_mode)) {
        			set_index_entry(full, full->cache_nr++, ce);
      -@@ sparse-index.c: static void clear_skip_worktree_from_present_files_full(struct index_state *ista
      - void clear_skip_worktree_from_present_files(struct index_state *istate)
      - {
      - 	if (!core_apply_sparse_checkout ||
      +@@ sparse-index.c: void clear_skip_worktree_from_present_files(struct index_state *istate)
      + 	struct repo_config_values *cfg = repo_config_values(the_repository);
      + 
      + 	if (!cfg->apply_sparse_checkout ||
       +	    core_virtualfilesystem ||
        	    sparse_expect_files_outside_of_patterns)
        		return;
    • 53: 4c0a6f2 ! 83: 8d21b0a virtualfilesystem: don't run the virtual file system hook if the index has been redirected

      @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
        
       -	/* virtual file system relies on the sparse checkout logic so force it on */
        	if (core_virtualfilesystem) {
      --		core_apply_sparse_checkout = 1;
      +-		cfg->apply_sparse_checkout = 1;
       -		virtual_filesystem_result = 1;
       -		return 1;
       +		/*
      @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
       +		free(default_index_file);
       +		if (should_run_hook) {
       +			/* virtual file system relies on the sparse checkout logic so force it on */
      -+			core_apply_sparse_checkout = 1;
      ++			cfg->apply_sparse_checkout = 1;
       +			virtual_filesystem_result = 1;
       +			return 1;
       +		}

    Upstream Git reworked how the flag that says whether we're in a sparse checkout is stored. It is no longer a global, but lives somewhat in the struct repository. I say somewhat, because you cannot call repo_config_values(r) on any repository but the_repository, for now...

    Adapting the post-indexchanged logic to the config-based hooks refactoring
    • 55: 8ab7bab ! 86: 4301484 backwards-compatibility: support the post-indexchanged hook

      @@ Commit message
           allow any `post-indexchanged` hook to run instead (if it exists).
       
        ## hook.c ##
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.hook_name = hook_name,
      - 		.options = options,
      - 	};
      --	const char *const hook_path = find_hook(r, hook_name);
      -+	const char *hook_path = find_hook(r, hook_name);
      - 	int ret = 0;
      - 	const struct run_process_parallel_opts opts = {
      - 		.tr2_category = "hook",
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.data = &cb_data,
      - 	};
      +@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
      + 	const char *hook_path = find_hook(r, hookname);
      + 	struct hook *h;
        
       +	/*
       +	 * Backwards compatibility hack in VFS for Git: when originally
      @@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
       +	 * look for a hook with the old name (which would be found in case of
       +	 * already-existing checkouts).
       +	 */
      -+	if (!hook_path && !strcmp(hook_name, "post-index-change"))
      ++	if (!hook_path && !strcmp(hookname, "post-index-change"))
       +		hook_path = find_hook(r, "post-indexchanged");
       +
      - 	if (!options)
      - 		BUG("a struct run_hooks_opt must be provided to run_hooks");
      + 	if (!hook_path)
      + 		return;
        
       
        ## t/t7113-post-index-change-hook.sh ##

    Upstream Git introduced "config-based hooks", which required a substantial revamping of the hook discovery. We now need to apply the post-indexchanged backwards-compatibility support in a totally different function that, true to Git's style, uses a slightly different variable name for the hook's name.
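
    The fallback can be sketched in plain shell (the temporary hook directory and the lookup loop below are just an illustration; in the real code the lookup happens via find_hook() in hook.c, and only the two hook names are taken from the patch):

    ```shell
    # Simulate an already-existing checkout that only has the legacy hook name.
    hookdir=$(mktemp -d)
    printf '#!/bin/sh\necho legacy\n' >"$hookdir/post-indexchanged"
    chmod +x "$hookdir/post-indexchanged"

    # Prefer the modern name, fall back to the legacy one.
    for name in post-index-change post-indexchanged; do
        if [ -x "$hookdir/$name" ]; then
            hook="$hookdir/$name"
            break
        fi
    done

    "$hook"   # prints "legacy"
    ```
    
    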

    Reacting to stat_tracking_info() -> stat_tracking_pair()
    • 85: 9b04c50 ! 117: 9aa2717 Trace2:gvfs:experiment: capture more 'tracking' details

      @@ remote.c
        #include "advice.h"
        #include "connect.h"
       @@ remote.c: int format_tracking_info(struct branch *branch, struct strbuf *sb,
      - 	char *base;
      - 	int upstream_is_gone = 0;
      + 		if (is_upstream && (!push_ref || !strcmp(upstream_ref, push_ref)))
      + 			is_push = 1;
        
      -+	trace2_region_enter("tracking", "stat_tracking_info", NULL);
      - 	sti = stat_tracking_info(branch, &ours, &theirs, &full_base, 0, abf);
      -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_flags", abf);
      -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_result", sti);
      -+	if (sti >= 0 && abf == AHEAD_BEHIND_FULL) {
      -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_ahead", ours);
      -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_behind", theirs);
      -+	}
      -+	trace2_region_leave("tracking", "stat_tracking_info", NULL);
      -+
      - 	if (sti < 0) {
      - 		if (!full_base)
      - 			return 0;
      ++		trace2_region_enter("tracking", "stat_tracking_pair", NULL);
      + 		cmp = stat_branch_pair(branch->refname, full_ref,
      + 				       &ours, &theirs, abf);
      ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_flags", abf);
      ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_result", cmp);
      ++		if (cmp >= 0 && abf == AHEAD_BEHIND_FULL) {
      ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_ahead", ours);
      ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_behind", theirs);
      ++		}
      ++		trace2_region_leave("tracking", "stat_tracking_pair", NULL);
      + 
      + 		if (cmp < 0) {
      + 			if (is_upstream) {

    @tyrielv do you need this Trace2 thing? I vaguely remember that Jeff Hostetler introduced it while optimizing git commit, and that this telemetry revealed that the ahead/behind calculation took a loooong time, which is why it was disabled. But I might be wrong about that, and this Trace2 might still be needed?

    Abiding by new code style rules
    • 88: 969b74d ! 120: 16e6fb6 sub-process: add subprocess_start_argv()

      @@ sub-process.c: int subprocess_start(struct hashmap *hashmap, struct subprocess_e
       +                    subprocess_start_fn startfn)
       +{
       +  int err;
      -+  size_t k;
       +  struct child_process *process;
       +  struct strbuf quoted = STRBUF_INIT;
       +
       +  process = &entry->process;
       +
       +  child_process_init(process);
      -+  for (k = 0; k < argv->nr; k++)
      -+          strvec_push(&process->args, argv->v[k]);
      ++  strvec_pushv(&process->args, argv->v);
       +  process->use_shell = 1;
       +  process->in = -1;
       +  process->out = -1;

    There's now a Coccinelle rule to enforce the shorter way to write this.

    Reacting to a flag parameter changing type to enforce correctness
    • 90: b28be78 ! 122: ca951d0 index-pack: avoid immediate object fetch while parsing packfile

      @@
        ## Metadata ##
      -Author: Jeff Hostetler <jeffhost@microsoft.com>
      +Author: Johannes Schindelin <Johannes.Schindelin@gmx.de>
       
        ## Commit message ##
           index-pack: avoid immediate object fetch while parsing packfile
      @@ Commit message
           the object to be individually fetched when gvfs-helper (or
           read-object-hook or partial-clone) is enabled.
       
      +    The call site was migrated to odb_has_object() as part of the upstream
      +    refactoring, but odb_has_object(odb, oid, HAS_OBJECT_FETCH_PROMISOR)
      +    sets only OBJECT_INFO_QUICK without OBJECT_INFO_SKIP_FETCH_OBJECT, which
      +    means it WILL trigger remote fetches via gvfs-helper. But we want to
      +    prevent index-pack from individually fetching every object it encounters
      +    during the collision check.
      +
      +    Passing 0 instead gives us both OBJECT_INFO_QUICK and
      +    OBJECT_INFO_SKIP_FETCH_OBJECT, which is the correct equivalent of the
      +    original OBJECT_INFO_FOR_PREFETCH behavior.
      +
           Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      +    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
       
        ## builtin/index-pack.c ##
       @@ builtin/index-pack.c: static void sha1_object(const void *data, struct object_entry *obj_entry,
        	if (startup_info->have_repository) {
        		read_lock();
        		collision_test_needed = odb_has_object(the_repository->objects, oid,
      --						       HAS_OBJECT_FETCH_PROMISOR);
      -+						       OBJECT_INFO_FOR_PREFETCH);
      +-						       ODB_HAS_OBJECT_FETCH_PROMISOR);
      ++						       0);
        		read_unlock();
        	}
        

    The flag parameter of the odb_has_object() function has been sharpened into an enum, and OBJECT_INFO_FOR_PREFETCH is not an eligible value (read: it never really worked as intended). Replacing it with 0 achieves the original intention, and is simpler.

    Unfortunate side effect of current repo_config_values()
    • 165: a151721b9513 ! 154: 024adf4 unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags

      @@ virtualfilesystem.c: int is_excluded_from_virtualfilesystem(const char *pathname
       +	size_t i;
       +	struct apply_virtual_filesystem_stats stats = {0};
       +
      -+	if (!repo_config_get_virtualfilesystem(istate->repo))
      ++	/*
      ++	 * We cannot use `istate->repo` here, as the config will be read for
      ++	 * `the_repository` and any mismatch is marked as a bug by f9b3c1f731dd
      ++	 * (environment: stop storing `core.attributesFile` globally, 2026-02-16).
      ++	 * This is not a bad thing, though: VFS is fundamentally incompatible
      ++	 * with submodules, which is the only scenario where this distinction
      ++	 * would matter in practice.
      ++	 */
      ++	if (!repo_config_get_virtualfilesystem(the_repository))
       +		return;
       +
       +	trace2_region_enter("vfs", "apply", the_repository);

    The repo_config_values() function is currently in a transitional state where it only ever accepts the_repository and otherwise aborts with a BUG(). This is unfortunate: this code path is hit in the recursive submodules blame tests and needs to be special-cased. However, it's not as bad as it sounds, as the only time it would matter is when there are submodules, which are disabled with VFS for Git.

    Fixing a bug noticed in -rc1's release process
    • 169: 6ff9da58e7df ! 157: 00ce441 Adding winget workflows

      @@ .github/workflows/release-winget.yml (new)
       +          $manifestDirectory = "$PWD\manifests\m\Microsoft\Git\$version"
       +          $output = & .\wingetcreate.exe submit $manifestDirectory
       +          Write-Host $output
      -+          $url = $output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value }
      ++          $url = ($output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value })[0]
       +          Write-Host "::notice::Submitted ${env:TAG_NAME} to winget as $url"
       +        shell: powershell

    Despite my best efforts in #843, this was still broken, and was fixed in vfs-2.53.0 via #887
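
    The fix above wraps the pipeline and takes only the first match. The same first-match extraction in shell, for illustration (the sample output and the simplified `[0-9]*` pattern are made up; the real workflow uses PowerShell's Select-String with `\S+`):

    ```shell
    # Hypothetical wingetcreate output containing more than one PR URL.
    output='Submitted: https://github.com/microsoft/winget-pkgs/pull/12345
    See also: https://github.com/microsoft/winget-pkgs/pull/99999'

    # grep -o emits every match on its own line; head -n1 keeps the first,
    # mirroring the "[0]" indexing added in the PowerShell fix.
    url=$(printf '%s\n' "$output" |
        grep -o 'https://github\.com/microsoft/winget-pkgs/pull/[0-9]*' |
        head -n1)
    echo "$url"   # https://github.com/microsoft/winget-pkgs/pull/12345
    ```
    
    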

    Reacting to geometric repacking now being turned on in maintenance by default
    • 234: 531edfa7bb99 ! 222: c028362 maintenance: add cache-local-objects maintenance task

      @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
       +{
       +	struct strbuf dstdir = STRBUF_INIT;
       +	struct repository *r = the_repository;
      ++	int ret = 0;
       +
       +	/* This task is only applicable with a VFS/Scalar shared cache. */
       +	if (!shared_object_dir)
      @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
       +	for_each_file_in_pack_dir(r->objects->sources->path, move_pack_to_shared_cache,
       +				  dstdir.buf);
       +
      -+	for_each_loose_object(r->objects, move_loose_object_to_shared_cache, NULL,
      -+			      FOR_EACH_OBJECT_LOCAL_ONLY);
      ++	ret = for_each_loose_file_in_source(r->objects->sources,
      ++				      move_loose_object_to_shared_cache,
      ++				      NULL, NULL, NULL);
       +
       +cleanup:
       +	strbuf_release(&dstdir);
      -+	return 0;
      ++	return ret;
       +}
       +
        typedef int (*maintenance_task_fn)(struct maintenance_run_opts *opts,
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +
       +		test_commit something &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +		test_commit something &&
       +		git config set gvfs.sharedcache .git/objects &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +		test_commit something &&
       +		git config set gvfs.sharedcache ../cache &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +		test_commit something &&
       +		git config set gvfs.sharedcache ../cache &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +

    The test case that verifies that loose objects are moved into the shared repository in Scalar needs to turn off anything in the git maintenance run that would inadvertently pack those loose objects. It already disables gc. Now that geometric repacking is turned on in git maintenance by default, that has to be disabled explicitly, too.

    pre-/post-command hooks vs upstream Git's config-based hooks
    • 236: 4c7a1c7f5c52 ! 224: dc7bda7 hooks: add custom post-command hook config

      @@ hook.c
        #include "abspath.h"
        #include "environment.h"
        #include "advice.h"
      -@@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
      - 	strvec_clear(&options->args);
      +@@ hook.c: void hook_free(void *p, const char *str UNUSED)
      + 	free(h);
        }
        
       +static char *get_post_index_change_sentinel_name(struct repository *r)
      @@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
       +	return 0;
       +}
       +
      - int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		  struct run_hooks_opt *options)
      + /* Helper to detect and add default "traditional" hooks from the hookdir. */
      + static void list_hooks_add_default(struct repository *r, const char *hookname,
      + 				   struct string_list *hook_list,
      + 				   struct run_hooks_opt *options)
        {
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.hook_name = hook_name,
      - 		.options = options,
      - 	};
      --	const char *hook_path = find_hook(r, hook_name);
      +-	const char *hook_path = find_hook(r, hookname);
       +	const char *hook_path;
      - 	int ret = 0;
      - 	const struct run_process_parallel_opts opts = {
      - 		.tr2_category = "hook",
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.data = &cb_data,
      - 	};
      + 	struct hook *h;
        
       +	/* Interject hook behavior depending on strategy. */
      -+	if (r && r->gitdir &&
      -+	    handle_hook_replacement(r, hook_name, &options->args))
      -+		return 0;
      ++	if (r && r->gitdir && options &&
      ++	    handle_hook_replacement(r, hookname, &options->args))
      ++		return;
       +
      -+	hook_path = find_hook(r, hook_name);
      ++	hook_path = find_hook(r, hookname);
       +
        	/*
        	 * Backwards compatibility hack in VFS for Git: when originally
        	 * introduced (and used!), it was called `post-indexchanged`, but this
      +@@ hook.c: struct string_list *list_hooks(struct repository *r, const char *hookname,
      + 	CALLOC_ARRAY(hook_head, 1);
      + 	string_list_init_dup(hook_head);
      + 
      +-	/* Add hooks from the config, e.g. hook.myhook.event = pre-commit */
      +-	list_hooks_add_configured(r, hookname, hook_head, options);
      ++	/*
      ++	 * The pre/post-command hooks are only supported as traditional hookdir
      ++	 * hooks, never as config-based hooks. Building the config map validates
      ++	 * all hook.*.event entries and would die() on partially-configured
      ++	 * hooks, which is fatal when "git config" is still in the middle of
      ++	 * setting up a multi-key hook definition.
      ++	 */
      ++	if (strcmp(hookname, "pre-command") && strcmp(hookname, "post-command"))
      ++		list_hooks_add_configured(r, hookname, hook_head, options);
      + 
      + 	/* Add the default "traditional" hooks from hookdir. */
      + 	list_hooks_add_default(r, hookname, hook_head, options);
       
        ## t/t0401-post-command-hook.sh ##
       @@ t/t0401-post-command-hook.sh: test_expect_success 'with succeeding hook' '

    This was a lot of "fun" to figure out. The config-based hooks are fundamentally incompatible with pre-/post-command hooks. Even worse: two git config set calls are required to configure a new hook, and the intermediate state after the first and before the second call leaves the config in an invalid state, one that is validated and causes a hard error whenever a hook is run at that stage. And the pre-command hook triggered by the second git config set call runs at exactly that stage.
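
    For illustration, a config-based hook definition spans multiple keys; the hook.&lt;name&gt;.event key appears in the diff above, while the hook name and the second key shown here are assumptions. After the first git config set call has written only one of these lines, the config is in the invalid half-configured state described above:

    ```
    [hook "myhook"]
    	command = /path/to/my-script   # hypothetical key, written by the first call
    	event = pre-commit             # written by the second call; only now is the hook valid
    ```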

    Fixing the new vfs-functional-tests workflow for -rc versions
    • 274: 7901136fc739 ! 252: 30ff6c8 ci: add new VFS for Git functional tests workflow

      @@ .github/workflows/vfs-functional-tests.yml (new)
       +          NO_TCLTK: Yup
       +        run: |
       +          # We do require a VFS version
      -+          def_ver="$(sed -n 's/DEF_VER=\(.*vfs.*\)/\1/p' GIT-VERSION-GEN)"
      ++          def_ver="$(sed -n '/^DEF_VER=/{
      ++            s/^DEF_VER=\(.*vfs.*\)/\1/p
      ++            tq # already found a *.vfs.* one, skip next line
      ++            s/^DEF_VER=\(.*\)/\1.vfs.0.0/p
      ++            :q
      ++            q
      ++          }' GIT-VERSION-GEN)"
       +          test -n "$def_ver"
       +
      ++          # VFSforGit cannot handle -rc versions; strip the `-rc` part, if any
      ++          case "$def_ver" in
      ++          *-rc*) def_ver=${def_ver%%-rc*}.vfs.${def_ver#*.vfs.};;
      ++          esac
      ++
       +          # Ensure that `git version` reflects DEF_VER
       +          case "$(git describe --match "v[0-9]*vfs*" HEAD)" in
       +          ${def_ver%%.vfs.*}.vfs.*) ;; # okay, we can use this
      -+          *) git -c user.name=ci -c user.email=ci@github tag -m for-testing ${def_ver}.NNN.g$(git rev-parse --short HEAD);;
      ++          *) echo ${def_ver}.NNN.g$(git rev-parse --short HEAD) >version;;
       +          esac
       +
       +          make -j5 DESTDIR="$GITHUB_WORKSPACE/MicrosoftGit/payload/${{ matrix.architecture }}" install

    As I found out in #888, pretty much every test case in the VFS Functional Tests failed, solely because VFS for Git considers -rc versions invalid. This led to 45e4af4. But then the build would fail, requiring c021cf9. This range-diff represents both fixup!s being squashed into the correct target commit.
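
    The -rc-stripping case arm from the workflow can be exercised standalone; the def_ver value below is a made-up example of what the sed step would extract from GIT-VERSION-GEN for an -rc build:

    ```shell
    # VFSforGit cannot handle -rc versions; drop the "-rcN" part while
    # keeping the ".vfs.X.Y" suffix, exactly as in the workflow snippet.
    def_ver="v2.54.0-rc2.vfs.0.0"
    case "$def_ver" in
    *-rc*) def_ver=${def_ver%%-rc*}.vfs.${def_ver#*.vfs.};;
    esac
    echo "$def_ver"   # v2.54.0.vfs.0.0
    ```

    The two parameter expansions split the string around the -rc marker: `%%-rc*` keeps everything before it, and `#*.vfs.` keeps everything after the .vfs. separator.
    
    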
