
Conversation


@renovate renovate bot commented Nov 10, 2025

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package          Type                  Update  Change
huggingface-hub  project.dependencies  major   ==0.36.0 → ==1.3.4

Release Notes

huggingface/huggingface_hub (huggingface-hub)

v1.3.4: [v1.3.4] Fix CommitUrl._endpoint default to None

Compare Source

  • Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by @​tomaarsen in #​3737

Full Changelog: huggingface/huggingface_hub@v1.3.3...v1.3.4

v1.3.3: [v1.3.3] List Jobs Hardware & Bug Fixes

Compare Source

⚙️ List Jobs Hardware

You can now list all available hardware options for Hugging Face Jobs, both from the CLI and programmatically.

From the CLI:

hf jobs hardware                           
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
--------------- ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
cpu-performance CPU Performance        8 vCPU   32 GB   N/A              $0.0000  $0.00     
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A              $0.0000  $0.00     
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large      Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2    2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4    4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large      Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4          4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8          8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1            1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4            4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1          1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4          4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8          8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50 

Programmatically:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> hardware_list = api.list_jobs_hardware()
>>> hardware_list[0]
JobHardware(name='cpu-basic', pretty_name='CPU Basic', cpu='2 vCPU', ram='16 GB', accelerator=None, unit_cost_micro_usd=167, unit_cost_usd=0.000167, unit_label='minute')
>>> hardware_list[0].name
'cpu-basic'
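The unit_cost_micro_usd field is a cost in micro-USD per unit_label. Converting it to the per-hour figure shown in the hardware table is simple arithmetic; a minimal sketch (assuming a "minute" unit label, as in the sample output above):

```python
def hourly_cost_usd(unit_cost_micro_usd: int, unit_label: str = "minute") -> float:
    """Convert a per-minute cost in micro-USD to USD per hour."""
    if unit_label != "minute":
        raise ValueError(f"unexpected unit label: {unit_label!r}")
    return unit_cost_micro_usd * 60 / 1_000_000

# cpu-basic above reports unit_cost_micro_usd=167, i.e. roughly $0.01/hour
print(round(hourly_cost_usd(167), 2))
```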

🐛 Bug Fixes

✨ Various Improvements

📚 Documentation

v1.3.2: [v1.3.2] Zai provider support for text-to-image and fix custom endpoint not forwarded

Compare Source

Full Changelog: huggingface/huggingface_hub@v1.3.1...v1.3.2

v1.3.1: [v1.3.1] Add dimensions & encoding_format parameters to feature extraction (embeddings) task

Compare Source

  • Add dimensions & encoding_format parameter to InferenceClient for output embedding size #​3671 by @​mishig25

Full Changelog: huggingface/huggingface_hub@v1.3.0...v1.3.1

v1.3.0: [v1.3.0] New CLI Commands for Hub Discovery, Jobs Monitoring and more!

Compare Source

🖥️ CLI: hf models, hf datasets, hf spaces Commands

The CLI has been reorganized with dedicated commands for Hub discovery, while hf repo stays focused on managing your own repositories.

New commands:

# Models
hf models ls --author=Qwen --limit=10
hf models info Qwen/Qwen-Image-2512

# Datasets
hf datasets ls --filter "format:parquet" --sort=downloads
hf datasets info HuggingFaceFW/fineweb

# Spaces
hf spaces ls --search "3d"
hf spaces info enzostvs/deepsite

This organization mirrors the Python API (list_models, model_info, etc.), keeps the hf <resource> <action> pattern, and is extensible for future commands like hf papers or hf collections.

🔧 Transformers CLI Installer

You can now install the transformers CLI alongside the huggingface_hub CLI using the standalone installer scripts.

# Install hf CLI only (default)
curl -LsSf https://hf.co/cli/install.sh | bash -s

# Install both hf and transformers CLIs
curl -LsSf https://hf.co/cli/install.sh | bash -s -- --with-transformers

# Install hf CLI only (default)
powershell -c "irm https://hf.co/cli/install.ps1 | iex"

# Install both hf and transformers CLIs
powershell -c "irm https://hf.co/cli/install.ps1 | iex" -WithTransformers

Once installed, you can use the transformers CLI directly:

transformers serve
transformers chat openai/gpt-oss-120b

📊 Jobs Monitoring

New hf jobs stats command to monitor your running jobs in real-time, similar to docker stats. It displays a live table with CPU, memory, network, and GPU usage.

>>> hf jobs stats
JOB ID                   CPU % NUM CPU MEM % MEM USAGE      NET I/O         GPU UTIL % GPU MEM % GPU MEM USAGE
------------------------ ----- ------- ----- -------------- --------------- ---------- --------- ---------------
6953ff6274100871415c13fd 0%    3.5     0.01% 1.3MB / 15.0GB 0.0bps / 0.0bps 0%         0.0%      0.0B / 22.8GB

A new HfApi.fetch_job_metrics() method is also available:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> for metrics in api.fetch_job_metrics(job_id="6953ff6274100871415c13fd"):
...     print(metrics)
{
    "cpu_usage_pct": 0,
    "cpu_millicores": 3500,
    "memory_used_bytes": 1306624,
    "memory_total_bytes": 15032385536,
    "rx_bps": 0,
    "tx_bps": 0,
    "gpus": {
        "882fa930": {
            "utilization": 0,
            "memory_used_bytes": 0,
            "memory_total_bytes": 22836000000
        }
    },
    "replica": "57vr7"
}
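The payload reports absolute byte counts; the percentage columns shown by hf jobs stats can be derived directly from them. A quick illustration over the sample payload above:

```python
# Sample metrics payload, copied from the example output above
metrics = {
    "memory_used_bytes": 1306624,
    "memory_total_bytes": 15032385536,
    "gpus": {
        "882fa930": {"memory_used_bytes": 0, "memory_total_bytes": 22836000000},
    },
}

# MEM % column: used memory over total memory
mem_pct = 100 * metrics["memory_used_bytes"] / metrics["memory_total_bytes"]
print(f"MEM % {mem_pct:.2f}%")  # 0.01%, matching the stats table

# GPU MEM % per GPU
for gpu_id, gpu in metrics["gpus"].items():
    gpu_pct = 100 * gpu["memory_used_bytes"] / gpu["memory_total_bytes"]
    print(f"GPU {gpu_id}: {gpu_pct:.1f}% memory used")
```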

💔 Breaking Change

The direction parameter in list_models, list_datasets, and list_spaces is now deprecated and not used. The sorting is always descending.

🔧 Other QoL Improvements

📖 Documentation

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v1.2.4: [v1.2.4] Various fixes: use dataclass_transform, fix hf-xet reqs, fix custom endpoint in Jobs API

Compare Source

Full Changelog: huggingface/huggingface_hub@v1.2.3...v1.2.4

v1.2.3: [v1.2.3] Fix private default value in CLI

Compare Source

Patch release for #​3618 by @​Wauplin.

When creating a new repo, we should default to private=None instead of private=False. This is already the case when using the API but not when using the CLI. This is a bug likely introduced when switching to Typer. When defaulting to None, the repo visibility will default to False except if the organization has configured repos to be "private by default" (the check happens server-side, so it shouldn't be hardcoded client-side).

Full Changelog: huggingface/huggingface_hub@v1.2.2...v1.2.3

v1.2.2: [v1.2.2] Fix unbound local error in local folder metadata + fix hf auth list logs

Compare Source

Full Changelog: huggingface/huggingface_hub@v1.2.1...v1.2.2

v1.2.1

Compare Source

v1.2.0: [v1.2.0] Smarter Rate Limit Handling, Daily Papers API and more QoL improvements!

Compare Source

🚦 Smarter Rate Limit Handling

We've improved how the huggingface_hub library handles rate limits from the Hub. When you hit a rate limit, you'll now see clear, actionable error messages telling you exactly how long to wait and how many requests you have left.

HfHubHTTPError: 429 Too Many Requests for url: https://huggingface.co/api/models/username/reponame.
Retry after 55 seconds (0/2500 requests remaining in current 300s window).

When a 429 error occurs, the SDK automatically parses the RateLimit header to extract the exact number of seconds until the rate limit resets, then waits precisely that duration before retrying. This applies to file downloads (i.e. Resolvers), uploads, and paginated Hub API calls (list_models, list_datasets, list_spaces, etc.).
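The reset delay comes straight from the response headers. As a rough sketch of the idea (the key=value wire format used here is an assumption in the style of the IETF RateLimit draft, not necessarily the SDK's exact parsing logic):

```python
def parse_ratelimit(header: str) -> dict:
    """Parse a header value like 'limit=2500, remaining=0, reset=55' into a dict of ints."""
    fields = {}
    for part in header.split(","):
        key, _, value = part.strip().partition("=")
        fields[key] = int(value)
    return fields

info = parse_ratelimit("limit=2500, remaining=0, reset=55")
# A client would then sleep for info["reset"] seconds before retrying the request.
print(info)
```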

More info about Hub rate limits in the docs 👉 here.

✨ HF API

Daily Papers endpoint: You can now programmatically access Hugging Face's daily papers feed. You can filter by week, month, or submitter, and sort by publication date or trending.

from huggingface_hub import list_daily_papers

for paper in list_daily_papers(date="2025-12-03"):
    print(paper.title)

# DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models

# ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration
# MultiShotMaster: A Controllable Multi-Shot Video Generation Framework

# Deep Research: A Systematic Survey
# MG-Nav: Dual-Scale Visual Navigation via Sparse Spatial Memory
...

Add daily papers endpoint by @​BastienGimbert in #​3502
Add more parameters to daily papers by @​Samoed in #​3585

Offline mode helper: we recommend using huggingface_hub.is_offline_mode() to check whether offline mode is enabled instead of checking HF_HUB_OFFLINE directly.

Add offline_mode helper by @​Wauplin in #​3593
Rename utility to is_offline_mode by @​Wauplin #​3598
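Offline mode is driven by the HF_HUB_OFFLINE environment variable; is_offline_mode() saves you from interpreting it yourself. A rough, illustrative equivalent of such a check (the library's actual implementation may accept different values):

```python
import os

def offline_mode_enabled() -> bool:
    """Illustrative stand-in for huggingface_hub.is_offline_mode():
    treat HF_HUB_OFFLINE as enabled when set to a common truthy value."""
    return os.environ.get("HF_HUB_OFFLINE", "").upper() in {"1", "ON", "YES", "TRUE"}

os.environ["HF_HUB_OFFLINE"] = "1"
print(offline_mode_enabled())  # True
```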

Inference Endpoints: You can now configure scaling metrics and thresholds when deploying endpoints.

feat(endpoints): scaling metric and threshold by @​oOraph in #​3525

Exposed utilities: RepoFile and RepoFolder are now available at the root level for easier imports.

Expose RepoFile and RepoFolder at root level by @​Wauplin in #​3564

⚡️ Inference Providers

OVHcloud AI Endpoints was added as an official Inference Provider in v1.1.5. OVHcloud provides European-hosted, GDPR-compliant model serving for your AI applications.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

Add OVHcloud AI Endpoints as an Inference Provider by @​eliasto in #​3541

We also added support for automatic speech recognition (ASR) with Replicate, so you can now transcribe audio files easily.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="replicate",
    api_key=os.environ["HF_TOKEN"],
)

output = client.automatic_speech_recognition("sample1.flac", model="openai/whisper-large-v3")

[Inference Providers] Add support for ASR with Replicate by @​hanouticelina in #​3538

The truncation_direction parameter in InferenceClient.feature_extraction (and its async counterpart) now uses lowercase values ("left"/"right" instead of "Left"/"Right") for consistency with other specs.

[Inference] Use lowercase left/right truncation direction parameter by @​Wauplin in #​3548

📁 HfFileSystem

HfFileSystem: A new top-level hffs alias makes working with the filesystem interface more convenient.

>>> from huggingface_hub import hffs
>>> with hffs.open("datasets/fka/awesome-chatgpt-prompts/prompts.csv", "r") as f:
...     print(f.readline())
"act","prompt"
"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked..."

[HfFileSystem] Add top level hffs by @​lhoestq in #​3556
[HfFileSystem] Add expand_info arg by @​lhoestq in #​3575

💔 Breaking Change

Paginated results when listing user access requests: list_pending_access_requests, list_accepted_access_requests, and list_rejected_access_requests now return an iterator instead of a list. This allows lazy loading of results for repositories with a large number of access requests. If you need a list, wrap the call with list(...).

Paginated results in list_user_access by @​Wauplin in #​3535
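Because results are now yielded lazily, you can stop after the first few matches without paying for the full listing. A self-contained sketch of the pattern, with a dummy generator standing in for the real API call:

```python
from itertools import islice

def fake_list_access_requests():
    """Dummy stand-in for e.g. api.list_pending_access_requests(...), which now yields lazily."""
    for i in range(10_000):
        yield f"user-{i}"

# Only the first 5 items are consumed; the rest are never generated (or fetched).
first_five = list(islice(fake_list_access_requests(), 5))
print(first_five)  # ['user-0', 'user-1', 'user-2', 'user-3', 'user-4']
```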

🔧 Other QoL Improvements

📖 Documentation

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v1.1.7: [v1.1.7] Make hffs accessible at root-level

Compare Source

[HfFileSystem] Add top level hffs by @​lhoestq #​3556.

Example:

>>> from huggingface_hub import hffs
>>> with hffs.open("datasets/fka/awesome-chatgpt-prompts/prompts.csv", "r") as f:
...     print(f.readline())
...     print(f.readline())
"act","prompt"
"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked..."

Full Changelog: huggingface/huggingface_hub@v1.1.6...v1.1.7

v1.1.6: [v1.1.6] Fix incomplete file listing in snapshot_download + other bugfixes

Compare Source

This release includes multiple bug fixes:


Full Changelog: huggingface/huggingface_hub@v1.1.5...v1.1.6

v1.1.5: [v1.1.5] Welcoming OVHcloud AI Endpoints as a new Inference Provider & More

Compare Source

⚡️ New Inference Provider: OVHcloud AI Endpoints

OVHcloud AI Endpoints is now an official Inference Provider on Hugging Face! 🎉
OVHcloud delivers fast, production-ready inference on secure, sovereign, fully 🇪🇺 European infrastructure - combining advanced features with competitive pricing.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

More snippets examples in the provider documentation 👉 here.

QoL Improvements

Installing the CLI is now much faster, thanks to @​Boulaouaney for adding support for uv, bringing faster package installation.

Bug Fixes

This release also includes the following bug fixes:

v1.1.4: [v1.1.4] Paginated results in list_user_access

Compare Source

  • Paginated results in list_user_access by @​Wauplin in #​3535
    ⚠️ This patch release is a breaking change but was necessary to reflect an API update made server-side.

Full Changelog: huggingface/huggingface_hub@v1.1.3...v1.1.4

v1.1.3: [v1.1.3] Avoid HTTP 429 on downloads + fix missing arguments in download API

Compare Source

  • Make 'name' optional in catalog deploy by @​Wauplin in #​3529
  • Pass through additional arguments from HfApi download utils by @​schmrlng in #​3531
  • Avoid redundant call to the Xet connection info URL by @​Wauplin in #​3534
    • This PR fixes HTTP 429 rate limit issues that occurred when downloading a very large dataset of small files

Full Changelog: huggingface/huggingface_hub@v1.1.0...v1.1.3

v1.1.2

Compare Source

v1.1.1

Compare Source

v1.1.0: [v1.1.0] Faster Downloads, new CLI features and more!

Compare Source

🚀 Optimized Download Experience

⚡ This release significantly improves the file download experience by making it faster and cleaning up the terminal output.

snapshot_download is now always multi-threaded, leading to significant performance gains. We removed a previous limitation, as Xet's internal resource management ensures we can parallelize downloads safely without resource contention. A sample benchmark showed this made the download much faster!

Additionally, the output for snapshot_download and hf download CLI is now much less verbose. Per file logs are hidden by default, and all individual progress bars are combined into a single progress bar, resulting in a much cleaner output.
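Conceptually, the parallel download is a thread pool draining the repo's file list. A toy sketch of that shape (dummy download function, not the library's internals):

```python
from concurrent.futures import ThreadPoolExecutor

def download_one(filename: str) -> str:
    """Dummy stand-in for a real per-file download."""
    return f"downloaded {filename}"

files = [f"shard-{i:05d}.parquet" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(download_one, files))
print(len(results))  # 8
```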


Inference Providers

🆕 WaveSpeedAI is now an official Inference Provider on Hugging Face! 🎉 WaveSpeedAI provides fast, scalable, and cost-effective model serving for creative AI applications, supporting text-to-image, image-to-image, text-to-video, and image-to-video tasks. 🎨

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="wavespeed",
    api_key=os.environ["HF_TOKEN"],
)

video = client.text_to_video(
    "A cat riding a bike",
    model="Wan-AI/Wan2.2-TI2V-5B",
)

More snippets examples in the provider documentation 👉 here.

We also added support for image-segmentation task for fal, enabling state-of-the-art background removal with RMBG v2.0.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fal-ai",
    api_key=os.environ["HF_TOKEN"],
)

output = client.image_segmentation("cats.jpg", model="briaai/RMBG-2.0")


🦾 CLI continues to get even better!

Following the complete revamp of the Hugging Face CLI in v1.0, this release builds on that foundation by adding powerful new features and improving accessibility.

New hf PyPI Package

To make the CLI even easier to access, we've published a new, minimal PyPI package: hf. This package installs the hf CLI tool and is perfect for quick, isolated execution with modern tools like uvx.

# Run the CLI without installing it
> uvx hf auth whoami

⚠️ Note: This package is for the CLI only. Attempting to import hf in a Python script will correctly raise an ImportError.

A big thank you to @​thorwhalen for generously transferring the hf package name to us on PyPI. This will make the CLI much more accessible for all Hugging Face users. 🤗

Manage Inference Endpoints

A new command group, hf endpoints, has been added to deploy and manage your Inference Endpoints directly from the terminal.

This provides "one-liners" for deploying, deleting, updating, and monitoring endpoints. The CLI offers two clear paths for deployment: hf endpoints deploy for standard Hub models and hf endpoints catalog deploy for optimized Model Catalog configurations.

> hf endpoints --help
Usage: hf endpoints [OPTIONS] COMMAND [ARGS]...

  Manage Hugging Face Inference Endpoints.

Options:
  --help  Show this message and exit.

Commands:
  catalog        Interact with the Inference Endpoints catalog.
  delete         Delete an Inference Endpoint permanently.
  deploy         Deploy an Inference Endpoint from a Hub repository.
  describe       Get information about an existing endpoint.
  ls             Lists all Inference Endpoints for the given namespace.
  pause          Pause an Inference Endpoint.
  resume         Resume an Inference Endpoint.
  scale-to-zero  Scale an Inference Endpoint to zero.
  update         Update an existing endpoint.

Verify Cache Integrity

A new command, hf cache verify, has been added to check your cached files against their checksums on the Hub. This is a great tool to ensure your local cache is not corrupted and is in sync with the remote repository.

> hf cache verify --help
Usage: hf cache verify [OPTIONS] REPO_ID

  Verify checksums for a single repo revision from cache or a local directory.

  Examples:
  - Verify main revision in cache: `hf cache verify gpt2`
  - Verify specific revision: `hf cache verify gpt2 --revision refs/pr/1`
  - Verify dataset: `hf cache verify karpathy/fineweb-edu-100b-shuffle --repo-type dataset`
  - Verify local dir: `hf cache verify deepseek-ai/DeepSeek-OCR --local-dir /path/to/repo`

Arguments:
  REPO_ID  The ID of the repo (e.g. `username/repo-name`).  [required]

Options:
  --repo-type [model|dataset|space]
                                  The type of repository (model, dataset, or
                                  space).  [default: model]
  --revision TEXT                 Git revision id which can be a branch name,
                                  a tag, or a commit hash.
  --cache-dir TEXT                Cache directory to use when verifying files
                                  from cache (defaults to Hugging Face cache).
  --local-dir TEXT                If set, verify files under this directory
                                  instead of the cache.
  --fail-on-missing-files         Fail if some files exist on the remote but
                                  are missing locally.
  --fail-on-extra-files           Fail if some files exist locally but are not
                                  present on the remote revision.
  --token TEXT                    A User Access Token generated from
                                  https://huggingface.co/settings/tokens.
  --help                          Show this message and exit.
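At its core, verification hashes each local file and compares the digest with the checksum the Hub reports for that revision. A minimal, self-contained illustration of the comparison step using hashlib (the real command additionally resolves revisions and handles LFS/Xet pointer files):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "example.bin"
    p.write_bytes(b"hello")
    # In a real check, the expected digest would come from the Hub's file metadata.
    expected = hashlib.sha256(b"hello").hexdigest()
    ok = sha256_of(p) == expected
print(ok)  # True
```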

Cache Sorting and Limiting

Managing your local cache is now easier. The hf cache ls command has been enhanced with two new options:

  • --sort: Sort your cache by accessed, modified, name, or size. You can also specify order (e.g., modified:asc to find the oldest files).
  • --limit: Get just the top N results after sorting (e.g., --limit 10).
# List top 10 most recently accessed repos
> hf cache ls --sort accessed --limit 10

# Find the 5 largest repos you haven't used in over a year
> hf cache ls --filter "accessed>1y" --sort size --limit 5
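A sort spec like modified:asc is just a key plus an optional order suffix. An illustrative parser for that shape (not the CLI's actual implementation; the descending default here is an assumption):

```python
def parse_sort_spec(spec: str) -> tuple[str, bool]:
    """Split e.g. 'modified:asc' into (key, descending). Defaults to descending."""
    key, _, order = spec.partition(":")
    if order not in ("", "asc", "desc"):
        raise ValueError(f"invalid sort order: {order!r}")
    return key, order != "asc"

print(parse_sort_spec("modified:asc"))  # ('modified', False)
print(parse_sort_spec("size"))          # ('size', True)
```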

Finally, we've patched the CLI installer script to fix a bug for zsh users. The installer now works correctly across all common shells.

🔧 Other

We've fixed a bug in HfFileSystem where the instance cache would break when using multiprocessing with the "fork" start method.

  • [HfFileSystem] improve cache for multiprocessing fork and multithreading by @​lhoestq in #​3500

🌍 Documentation

Thanks to @​BastienGimbert for translating the README to French 🇫🇷 🤗

Thanks also to @​didier-durand for fixing multiple language typos in the library! 🤗

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v1.0.1: [v1.0.1] Remove aiohttp from extra dependencies

Compare Source

In the huggingface_hub v1.0 release, we replaced our aiohttp dependency with httpx but forgot to remove aiohttp from the huggingface_hub[inference] extra dependencies in setup.py. This patch release removes it, along with the now-empty inference extra.

The unused internal method _import_aiohttp has been removed as well.

Full Changelog: huggingface/huggingface_hub@v1.0.0...v1.0.1

v1.0.0: v1.0: Building for the Next Decade

Compare Source


Check out our blog post announcement!

🚀 HTTPx migration

The huggingface_hub library now uses httpx instead of requests for HTTP requests. This change wa


Configuration

📅 Schedule: Branch creation - Between 12:00 AM and 03:59 AM, only on Monday ( * 0-3 * * 1 ) in timezone Europe/Berlin, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot added lifecycle Update or deprecate something renovate labels Nov 10, 2025
@renovate renovate bot requested a review from freinold November 10, 2025 02:47

coderabbitai bot commented Nov 10, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch from 9ef47aa to a9cc97a Compare November 13, 2025 11:44
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch 3 times, most recently from 7b80bc1 to efc13da Compare November 25, 2025 14:44
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch 2 times, most recently from 52e3576 to ed1bac8 Compare December 1, 2025 12:35
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch 2 times, most recently from b554a43 to eddfbe8 Compare December 10, 2025 18:58
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch from eddfbe8 to 24ddcfd Compare December 12, 2025 18:43
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch 3 times, most recently from 4c6eea9 to 46d23a5 Compare January 9, 2026 17:57
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch from 46d23a5 to 16334d3 Compare January 14, 2026 17:10
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch 2 times, most recently from 4e21445 to 78839eb Compare January 22, 2026 19:24
@renovate renovate bot force-pushed the renovate/huggingface-hub-1.x branch from 78839eb to c414a5b Compare January 26, 2026 17:43
@freinold freinold merged commit 28408c1 into main Jan 26, 2026
7 checks passed
@renovate renovate bot deleted the renovate/huggingface-hub-1.x branch January 26, 2026 18:47