16 changes: 16 additions & 0 deletions Dockerfile
@@ -0,0 +1,16 @@
FROM python:3.10-slim

RUN apt-get update -y && \
apt-get install -y default-libmysqlclient-dev pkg-config gcc g++ libgl1 libglib2.0-0

COPY . /code

WORKDIR /code

RUN pip install -r requirements.txt

RUN pip install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify if PyTorch nightly is required for Blackwell support.

Using --pre to install nightly PyTorch builds introduces stability risks, as nightly versions can contain bugs and breaking changes.

Please verify whether CUDA 12.6 and Blackwell (compute capability 12.0) support is available in stable PyTorch releases. If stable releases now support Blackwell, consider using a pinned stable version instead.

If nightly is truly required, consider:

  1. Adding a comment explaining why nightly is necessary
  2. Pinning to a specific nightly date for reproducibility: torch==2.6.0.dev20251022+cu126

🏁 Script executed:

#!/bin/bash
# Check if stable PyTorch supports CUDA 12.6 and Blackwell architecture
echo "Checking PyTorch stable releases for CUDA 12.6 support..."
curl -s https://download.pytorch.org/whl/torch_stable.html | grep -i "cu126" | head -5

Length of output: 57


Blackwell support requires CUDA 12.8, not 12.6—update the Dockerfile.

PyTorch 2.7 introduces support for NVIDIA's Blackwell GPU architecture and ships pre-built wheels for CUDA 12.8. Blackwell GPUs (compute capability 12.0) require CUDA 12.8 or newer.

Using --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126 will not provide proper Blackwell support. CUDA 12.x-compiled binaries only run on the new GPUs if they were compiled with explicit support for this architecture, and the publicly available PyTorch binaries may not have that support yet; this includes the cu126 wheels.

Since PyTorch 2.7 has been released complete with support for the NVIDIA Blackwell GPU architecture and pre-built wheels for CUDA 12.8, update the Dockerfile to use cu128:

RUN pip install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

Alternatively, if stable releases are acceptable, use the official stable wheels without --pre.

🤖 Prompt for AI Agents
In Dockerfile around line 12, the pip install line uses the cu126 nightly index
which lacks proper NVIDIA Blackwell (CUDA 12.8) support; update the wheel index
from cu126 to cu128 (or switch to the stable cu128 wheels) so PyTorch/Torchaudio
are installed with CUDA 12.8-compatible binaries; modify the RUN pip install
command to reference the cu128 index URL or remove --pre and point to the
appropriate stable cu128 package source.


ENV TORCH_CUDA_ARCH_LIST="5.0;6.0;7.0;7.5;8.0;8.6;9.0;12.0"

ENV PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
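The `TORCH_CUDA_ARCH_LIST` value above is a semicolon-separated list of compute capabilities that extensions should be compiled for; `12.0` is the entry that covers consumer Blackwell (RTX 50 series) GPUs. As an illustrative sketch (pure Python, not part of rvc-python), checking whether such a list covers a given capability might look like:

```python
# Illustrative helper (an assumption, not part of rvc-python or PyTorch):
# checks whether a TORCH_CUDA_ARCH_LIST-style string covers a compute capability.
def arch_list_covers(arch_list: str, compute_capability: str) -> bool:
    # Entries may carry a "+PTX" suffix, e.g. "9.0+PTX"; strip it before comparing.
    entries = [e.strip().removesuffix("+PTX") for e in arch_list.split(";") if e.strip()]
    return compute_capability in entries

# "12.0" is the compute capability of consumer Blackwell (RTX 50 series) GPUs.
print(arch_list_covers("5.0;6.0;7.0;7.5;8.0;8.6;9.0;12.0", "12.0"))  # → True
```

If `12.0` were missing from the list, custom CUDA extensions built inside the container would silently lack Blackwell kernels.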
9 changes: 9 additions & 0 deletions README.md
@@ -263,6 +263,15 @@ You can add new models by:
- `-rmr`, `--rms_mix_rate`: Volume envelope mix rate
- `-pr`, `--protect`: Protection for voiceless consonants

### Docker (NVIDIA Blackwell support)
You can run the rvc-python codebase in Docker for easier debugging and contributing. This setup was added to support the newer RTX 5000 series (Blackwell) hardware, but it works on CPU as well. Run:

```docker compose up -d```

Then, make your desired edits in `test.py` in the root of the project. Make sure to add your models to the `./models` directory. Then run:

```docker compose exec rvc python test.py```
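
As a starting point, `test.py` might look like the sketch below. The `RVCInference` usage is an assumption based on this package's `infer_file` API; the model filename is a placeholder:

```python
# Hypothetical test.py sketch; the model path and method names are assumptions.
def convert():
    # Import inside the function so the file can be read without the package installed.
    from rvc_python.infer import RVCInference

    rvc = RVCInference(device="cuda:0")  # pass "cpu" to run without a GPU
    rvc.load_model("/models/my_voice.pth")  # placeholder model file under ./models
    rvc.infer_file("input.wav", "output.wav")
```

Call `convert()` at the bottom of your `test.py`, then run it with `docker compose exec rvc python test.py`.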

### API Server Options
- `-p`, `--port`: API server port (default: 5050)
- `-l`, `--listen`: Allow external connections to API server
28 changes: 28 additions & 0 deletions docker-compose.yml
@@ -0,0 +1,28 @@
services:
rvc:
restart: always
build:
context: .
dockerfile: Dockerfile
volumes:
- ./:/code
- ./models/:/models/
extra_hosts:
- "host.docker.internal:host-gateway"
stdin_open: true
tty: true
ports:
- 5050:5050
environment:
- PYTHONUNBUFFERED=1
      - "TORCH_CUDA_ARCH_LIST=5.0;6.0;7.0;7.5;8.0;8.6;9.0;12.0"
- PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
deploy:
resources:
reservations:
devices:
- driver: 'nvidia'
count: all
capabilities: [gpu]

# python -m rvc_python cli -i input.wav -o output.wav -mp path/to/model.pth -de cuda:0
14 changes: 13 additions & 1 deletion rvc_python/infer.py
@@ -127,7 +127,7 @@ def infer_file(self, input_path, output_path):
model_info = self.models[self.current_model]
file_index = model_info.get("index", "")

wav_opt = self.vc.vc_single(
result = self.vc.vc_single(
sid=0,
input_audio_path=input_path,
f0_up_key=self.f0up_key,
@@ -142,6 +142,18 @@
file_index2=""
)

        # vc_single may return the audio array directly or a tuple (info, (times, wav_opt));
        # normalize both shapes and surface conversion failures.
if isinstance(result, tuple) and len(result) == 2:
info, audio_data = result
if isinstance(audio_data, tuple):
times, wav_opt = audio_data
if wav_opt is None:
raise RuntimeError(f"Voice conversion failed: {info}")
else:
wav_opt = audio_data
else:
wav_opt = result

wavfile.write(output_path, self.vc.tgt_sr, wav_opt)
return output_path

5 changes: 5 additions & 0 deletions rvc_python/modules/vc/utils.py
@@ -19,6 +19,11 @@ def get_index_path_from_model(sid):


def load_hubert(config,lib_dir):
import torch
    # PyTorch 2.6 changed torch.load to weights_only=True by default; allowlist
    # fairseq's Dictionary class so the hubert checkpoint can be deserialized.
from fairseq.data.dictionary import Dictionary
torch.serialization.add_safe_globals([Dictionary])

models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
[f"{lib_dir}/base_model/hubert_base.pt"],
suffix="",