
PyTorch 2.2.1 and Multiple GPUs #39

@Rotoslider

Description


I forked your wonderful program and updated requirements.txt and a few other files so it now works with Python 3.10 and PyTorch 2.2.1.
I also fixed some of the DataLoader warnings and the Pandas float issue introduced by the upgrade.
I have it running smoothly on an old server with four Tesla P40 24 GB GPUs. Have you given any thought to what would be needed to get this working with multiple GPUs?
Could you tell me which scripts use the GPU and which use the CPU?

PyTorch, including torch_geometric, supports multi-GPU setups through Data Parallelism and Distributed Data Parallelism (DDP).
Multi-GPU Inference Basics

Data Parallelism: This splits each input batch across the available GPUs; each GPU processes its slice of the batch and the results are gathered back on the default device. PyTorch's DataParallel module can be used for this purpose, but it is only suited to single-node setups.
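As a minimal sketch of the DataParallel approach (the small `nn.Sequential` model here is a hypothetical stand-in for the project's actual network; the wrapper is a no-op on machines with fewer than two GPUs, so the same code also runs on CPU):

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the project's network.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # Replicates the model on every visible GPU and splits each input
    # batch along dim 0; outputs are gathered on the default GPU.
    model = nn.DataParallel(model)
model = model.to(device)

batch = torch.randn(16, 64, device=device)
out = model(batch)
print(out.shape)  # torch.Size([16, 10])
```

Because DataParallel is a drop-in wrapper, this is usually the quickest change to try first, though it keeps a single Python process and can bottleneck on the default GPU.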

Distributed Data Parallelism (DDP): DDP is more scalable and efficient, especially for multi-node setups or when maximizing GPU utilization is crucial. Each process (typically one per GPU) holds a full replica of the model and processes its own shard of the input data, with gradients synchronized across processes via all-reduce during the backward pass.
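A minimal single-machine DDP sketch, shown here with the `gloo` backend and `world_size=1` so it also runs on CPU (the address, port, and `nn.Linear` model are illustrative placeholders; a real multi-GPU run would use the `nccl` backend and launch one process per GPU, e.g. via `torch.multiprocessing.spawn` or `torchrun`):

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def run_worker(rank: int, world_size: int):
    # Rendezvous info for the process group (placeholder values).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    # "gloo" works on CPU; use "nccl" for real multi-GPU runs.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = nn.Linear(64, 10)
    # On GPU: DDP(model.to(rank), device_ids=[rank]) instead.
    ddp_model = DDP(model)

    out = ddp_model(torch.randn(8, 64))
    dist.destroy_process_group()
    return out.shape

shape = run_worker(rank=0, world_size=1)
print(shape)  # torch.Size([8, 10])
```

In a real run each process would also wrap its DataLoader with a `DistributedSampler` so every rank sees a disjoint shard of the dataset.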
