Hi, kengz,
I've run into a problem with running on multiple GPUs. In the `__init__` of class ConvNet in conv.py, the code assigns the device as follows:
self.to(self.device)
How can this be extended to multiple GPUs here (in the `__init__` of class ConvNet), or for an instantiation of class ConvNet?
When I try to use torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0) to assign the model to multiple GPUs, some public methods and attributes defined in class ConvNet are no longer accessible after conv_mode = torch.nn.DataParallel(conv_mode, device_ids=[1, 2, 3, 4]).
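For what it's worth, a minimal sketch of what I mean (the ConvNet below is a hypothetical stand-in for the one in conv.py, with an example attribute and method; the real class differs). After wrapping, custom members seem to be reachable only through the wrapper's `.module`:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for ConvNet in conv.py, just to illustrate the issue.
class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        self.custom_attr = "example"   # a public attribute defined on ConvNet

    def forward(self, x):
        return self.fc(x)

    def custom_method(self):           # a public method defined on ConvNet
        return self.custom_attr

model = ConvNet()
# Note: device_ids must be a list of ints; passing device_ids=[1, 2, 3, 4]
# assumes those GPUs exist. Wrapping without device_ids uses all visible GPUs.
model = nn.DataParallel(model)  # e.g. nn.DataParallel(model, device_ids=[1, 2, 3, 4])

# DataParallel only exposes nn.Module's own API; the custom members now live
# on the wrapped module and must be accessed via .module:
print(model.module.custom_method())
# model.custom_method()  # raises AttributeError on the wrapper
```

So the workaround I found is to call `conv_mode.module.some_method()` instead of `conv_mode.some_method()`, but I'm not sure whether that is the intended way to do this in conv.py.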