fix: respect model device in image preprocessing #318

Open

Mr-Neutr0n wants to merge 1 commit into DepthAnything:main from Mr-Neutr0n:fix/image2tensor-device-handling

Conversation

@Mr-Neutr0n
Bug

image2tensor in both depth_anything_v2/dpt.py and metric_depth/depth_anything_v2/dpt.py hardcodes device selection via torch.cuda.is_available() instead of using the model's actual device. This causes device mismatch errors whenever the model lives on a device other than the one that check picks (e.g., MPS, a specific CUDA GPU, or CPU when CUDA is available but the model was intentionally placed on CPU).

Fix

Replaced the hardcoded device detection with next(self.parameters()).device to infer the device directly from the model's parameters. This ensures the input tensor is always placed on the same device as the model, regardless of how the model was loaded or moved.
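
Roughly, the change looks like this (the preprocessing steps are elided, and the preprocess helper shown here is only a placeholder for the existing transform pipeline, not the repo's exact code):

```python
import torch

def image2tensor(self, raw_image, input_size=518):
    # ... resize, normalize, and convert raw_image to a (1, C, H, W) float tensor ...
    image = preprocess(raw_image, input_size)  # placeholder for the existing transforms

    # Before: device chosen from global CUDA availability, which can disagree
    # with where the model was actually placed.
    #   device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # After: take the device directly from the model's own parameters, so the
    # input tensor always lands on the same device as the weights.
    device = next(self.parameters()).device
    return image.to(device)
```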
