From e9139ee72475c73661eb4264fcd18eea20cb0a74 Mon Sep 17 00:00:00 2001
From: Claude
Date: Tue, 7 Apr 2026 09:28:44 +0000
Subject: [PATCH] Upgrade llama.cpp from b8668 to b8683
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

No breaking API changes in this range — all changes are additive: new
GGML_TYPE_Q1_0 quantization, new LLM_CHAT_TEMPLATE_HUNYUAN_OCR,
CUDA/SYCL/WebGPU backend additions, and internal bug fixes.

https://claude.ai/code/session_01WYogeJSBjEmKDFX2P6nSdo
---
 CLAUDE.md      | 2 +-
 CMakeLists.txt | 2 +-
 README.md      | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/CLAUDE.md b/CLAUDE.md
index 2157abff..579aa7c0 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -6,7 +6,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 
 Java bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp) via JNI, providing a high-level API for LLM inference in Java. The Java layer communicates with a native C++ library through JNI.
 
-Current llama.cpp pinned version: **b8668**
+Current llama.cpp pinned version: **b8683**
 
 ## Upgrading CUDA Version

diff --git a/CMakeLists.txt b/CMakeLists.txt
index fa1d1733..a7f6eb1f 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -97,7 +97,7 @@ set(GGML_AVX512 OFF CACHE BOOL "" FORCE)
 FetchContent_Declare(
     llama.cpp
     GIT_REPOSITORY https://github.com/ggerganov/llama.cpp.git
-    GIT_TAG        b8668
+    GIT_TAG        b8683
 )
 FetchContent_MakeAvailable(llama.cpp)

diff --git a/README.md b/README.md
index aece96bd..c46e2ee2 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 ![Java 8+](https://img.shields.io/badge/Java-8%2B-informational)
-[![llama.cpp b8668](https://img.shields.io/badge/llama.cpp-%23b8668-informational)](https://github.com/ggml-org/llama.cpp/releases/tag/b8668)
+[![llama.cpp b8683](https://img.shields.io/badge/llama.cpp-%23b8683-informational)](https://github.com/ggml-org/llama.cpp/releases/tag/b8683)
 
 # Java Bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp)