
cmake : skip project() when consumed as a subdirectory (#20415) #22151

Draft
jinweihan-ai wants to merge 1 commit into ggml-org:master from jinweihan-ai:cmake-no-project-in-subdir

Conversation

@jinweihan-ai jinweihan-ai commented Apr 20, 2026

Summary

Fixes #20415.

Both CMakeLists.txt (llama) and ggml/CMakeLists.txt call project() unconditionally. When llama.cpp is pulled into another CMake project via add_subdirectory / FetchContent / CPM, those calls re-run toolchain detection inside the nested scope. For C/CXX this is mostly a no-op, but ggml's project(... ASM) triggers an ASM compiler identification pass that can overwrite parent-scope toolchain variables and, on cross-compilation toolchains (iOS, Android NDK, arm64↔x86_64 hosts), pick a different sysroot/architecture than the parent configured.

The fix follows the CMake convention of guarding project() with the standalone check so it only fires for top-level builds. The existing LLAMA_STANDALONE / GGML_STANDALONE variables continue to work because they are computed with the same predicate. No assembly sources live at the top-level of ggml — all ASM usage is gated behind enable_language(ASM) inside the backend that needs it (Metal, Hexagon HTP), so dropping ASM from the embedded project() call is safe.
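The guard described above can be sketched as follows. This is a minimal illustration of the conventional pattern, not a verbatim excerpt of the patch; the exact placement in `CMakeLists.txt` and `ggml/CMakeLists.txt` may differ:

```cmake
# Only run project() when this listfile is the top of the build tree.
# When added via add_subdirectory / FetchContent, the parent's project()
# has already performed toolchain detection, so we must not redo it.
if (CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    project("ggml" C CXX ASM)
endif()

# GGML_STANDALONE uses the same predicate, so its value is unchanged:
if (CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    set(GGML_STANDALONE ON)
else()
    set(GGML_STANDALONE OFF)
endif()
```

Because the embedded path now skips `project()` entirely, the `ASM` language is never requested at the top level; backends that need assembly still call `enable_language(ASM)` themselves.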

Repro (before the fix)

Minimal parent project:

```cmake
cmake_minimum_required(VERSION 3.20)
project(parent_app C CXX)                # parent enables C/CXX only, no ASM
add_subdirectory(/path/to/llama.cpp llama_build)
```

```
$ cmake -B build -DCMAKE_OSX_ARCHITECTURES=x86_64 ...
-- The C compiler identification is AppleClang
-- The CXX compiler identification is AppleClang
...
-- The ASM compiler identification is AppleClang       ← re-triggered by ggml's project()
-- Found assembler: /usr/bin/cc                        ← leaks into parent scope
```

After the fix those two lines no longer appear — the parent's toolchain state is preserved.
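The same consumption pattern applies when the parent pulls llama.cpp with FetchContent instead of a local `add_subdirectory`. A hypothetical parent project (the repository URL and tag below are illustrative, not taken from this PR):

```cmake
cmake_minimum_required(VERSION 3.20)
project(parent_app C CXX)                # parent controls the toolchain

include(FetchContent)
FetchContent_Declare(
    llama
    GIT_REPOSITORY https://github.com/ggml-org/llama.cpp.git
    GIT_TAG        master                # pin a real tag/commit in practice
)
# Runs llama.cpp's CMakeLists.txt in a nested scope, exactly like
# add_subdirectory — so the same project() guard is what keeps the
# parent's toolchain state intact.
FetchContent_MakeAvailable(llama)
```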

Test plan

  • Standalone llama.cpp build still configures and compiles end-to-end (cmake -B build && cmake --build build --target llama-cli), version reported correctly (built with AppleClang … for Darwin arm64).
  • Minimal parent project with add_subdirectory(llama.cpp) + -DCMAKE_OSX_ARCHITECTURES=x86_64 now configures without triggering ASM identification and without clobbering CMAKE_ASM_COMPILER in the parent scope.
  • GGML_SYSTEM_ARCH is still detected correctly (x86 in the cross-compile repro).
  • CI ggml-ci, Android build, and standalone builds should remain green — only the project() invocation is guarded; everything below it runs unchanged.
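One way the "no clobbering in the parent scope" check above can be reproduced is by inspecting the enabled languages after the subdirectory is added. This is a diagnostic sketch, not part of the PR, and assumes backends that legitimately enable ASM (e.g. Metal) are switched off:

```cmake
add_subdirectory(/path/to/llama.cpp llama_build)

# ENABLED_LANGUAGES is a global property listing every language that any
# project()/enable_language() call has activated so far.
get_property(langs GLOBAL PROPERTY ENABLED_LANGUAGES)
if ("ASM" IN_LIST langs)
    message(WARNING "ASM was enabled by the subdirectory: ${langs}")
endif()
```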

Requirements

  • I have read and agree with the contributing guidelines
  • AI usage disclosure: Yes. Patch drafting and PR description were AI-assisted; the subdirectory repro was built and the standalone build verified locally before submission.

When llama.cpp or ggml is pulled into a parent CMake project via
add_subdirectory / FetchContent / CPM, the unconditional project() calls
re-run toolchain detection in the nested scope. For C/CXX this is mostly
a no-op, but ggml additionally enables ASM, which triggers an ASM
compiler identification pass that can override parent-scope settings
(e.g. CMAKE_ASM_COMPILER) and, on cross-compilation toolchains such as
iOS or the Android NDK, can pick a different sysroot / architecture than
the parent configured.

Guard both project() calls behind the standard
`CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR` check so they only
fire for standalone builds. The existing `LLAMA_STANDALONE` and
`GGML_STANDALONE` variables continue to work because they are computed
with the same condition. All assembly use in the tree is gated behind
`enable_language(ASM)` in the backend that needs it (Metal, Hexagon
HTP), so dropping ASM from the embedded project() call is safe.

Closes ggml-org#20415
@jinweihan-ai jinweihan-ai requested a review from ggerganov as a code owner April 20, 2026 07:03

ggml-gh-bot bot commented Apr 20, 2026

Hi @jinweihan-ai, thanks for your contribution!

Per our contribution guidelines, the automated PR checker found the following issue(s) that need your attention:

  • Multiple open PRs from a new contributor: We limit new contributors (those without a previously merged PR) to 1 open PR at a time. You currently have 2 open PRs.

  • AI-generated content: This project does not accept PRs, descriptions or commit messages that are fully or predominantly AI-generated. If you have used AI to assist you in writing code, please make sure to disclose that explicitly.


Please note that maintainers reserve the right to make final decisions on PRs. If you believe there is a mistake, please comment below.

@jinweihan-ai jinweihan-ai marked this pull request as draft April 20, 2026 09:07
@github-actions github-actions bot added the build (Compilation issues) and ggml (changes relating to the ggml tensor library for machine learning) labels Apr 20, 2026


Development

Successfully merging this pull request may close these issues.

Compile bug: Unconditional project() Call in ggml/CMakeLists.txt Resets Toolchain and Architecture When Used as a Subdirectory
