How to Fix 'Input and hidden tensors are not at the same device' in PyTorch
When I put the model on the GPU, I get the following error: 'RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu'.
Solution 1:
You need to move the model, the inputs, and the targets to CUDA:
if torch.cuda.is_available():
    model.cuda()            # move the model's parameters to the GPU
    inputs = inputs.cuda()  # move the input batch
    target = target.cuda()  # move the targets
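Since the error message specifically names the hidden tensor, the most common culprit is an initial hidden state that you create yourself (for an nn.LSTM or nn.GRU) and leave on the CPU; it has to be moved along with everything else. A minimal sketch, assuming a single-layer LSTM and randomly generated data (the sizes are only for illustration):

import torch
import torch.nn as nn

model = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
inputs = torch.randn(4, 5, 10)   # (batch, seq_len, features)
h0 = torch.zeros(1, 4, 20)       # (num_layers, batch, hidden_size)
c0 = torch.zeros(1, 4, 20)

if torch.cuda.is_available():
    model.cuda()                  # moves the LSTM's parameters to the GPU
    inputs = inputs.cuda()        # moves the input batch
    h0, c0 = h0.cuda(), c0.cuda() # the hidden state must live on the same device

output, (hn, cn) = model(inputs, (h0, c0))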
Solution 2:
This error occurs when PyTorch tries to perform an operation between a tensor stored on the CPU and one stored on the GPU. At a high level there are two kinds of tensors: those holding your data and those holding the model's parameters. Both can be copied to the same device like so:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
data = data.to(device)    # move the data tensors
model = model.to(device)  # move the model's parameters
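Putting the pieces together, here is a hedged end-to-end sketch of the same pattern for a recurrent model; the rnn, batch, and hidden names and the layer sizes are only for illustration. Creating the hidden state directly on the chosen device avoids the mismatch entirely:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True).to(device)
batch = torch.randn(2, 7, 8).to(device)        # data tensor on the same device
hidden = torch.zeros(1, 2, 16, device=device)  # hidden state created directly on the device

out, hidden = rnn(batch, hidden)
print(out.device, hidden.device)               # both report the chosen device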