[SOLVED] Error in style transfer notebook
I have an image that I want to use as my style image; it is 512x320 in size and 93.7 KB.
I downloaded the reference notebook through Snapchat.
I put my style and reference image in the notebook and I run all the code.
During the training loop, I get this error, and I don't understand it:
```
RuntimeError                              Traceback (most recent call last)
<ipython-input-199-0b6d284ff067> in <module>()
     13 noise = torch.zeros_like(image)
     14 noise.normal_(mean=0, std=4)
---> 15 styled_image = model(image + noise)
     16
     17 features_x = vgg(image, features_num=2)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight)
    340                         _pair(0), self.dilation, self.groups)
    341         return F.conv2d(input, weight, self.bias, self.stride,
--> 342                         self.padding, self.dilation, self.groups)
    343
    344     def forward(self, input):

RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 11.17 GiB total capacity; 10.52 GiB already allocated; 110.69 MiB free; 10.75 GiB reserved in total by PyTorch)
```
I tried reducing the file size, but it doesn't change anything.
Is the image still too large?
As far as I can see, you are running out of GPU memory, and folks suggest reducing the batch size: https://github.com/pytorch/pytorch/issues/16417
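One quick way to see whether PyTorch is hogging the GPU is to print its memory counters. This is a minimal sketch (not from the notebook) that is guarded so the same cell also runs on a CPU-only machine:

```python
import torch

# Quick check of how much GPU memory PyTorch is currently holding.
# Guarded so the cell also runs where no CUDA device is available.
if torch.cuda.is_available():
    allocated_mib = torch.cuda.memory_allocated() / 1024**2
    reserved_mib = torch.cuda.memory_reserved() / 1024**2
    print(f"allocated: {allocated_mib:.1f} MiB, reserved: {reserved_mib:.1f} MiB")
else:
    allocated_mib = reserved_mib = 0.0
    print("No CUDA device available; nothing allocated on a GPU.")
```

If "reserved" is close to the GPU's total capacity, as in the traceback above, the next allocation will fail.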
What exact environment are you using to run it?
Hi Pavel Antonenko,
Thanks for answering me!
My computer has an Intel(R) Celeron(R) G1840 CPU,
and I'm using Google Colab.
This error means that the GPU your notebook uses is out of memory. There are a few ways to possibly resolve this issue:
1. Normally, setting the batch size to a smaller value should work. Alternatively, you can try reducing the dimensions of your input image.
2. If that fails, try freeing the GPU memory held by your Google Colab session by clicking `Runtime` and then `Factory reset runtime`.
3. If none of that works, we can still run the notebook on the CPU, at a slower training speed:
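A minimal sketch of the CPU fallback. The notebook's own `model` and `image` aren't shown in this thread, so a placeholder `Conv2d` and a random tensor stand in for them; only the device-moving pattern is the point:

```python
import torch

device = torch.device("cpu")  # fall back to CPU when the GPU is full

# Placeholders for the notebook's objects (hypothetical shapes/layers).
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device)
image = torch.rand(1, 3, 256, 512).to(device)  # move inputs to the same device

styled_image = model(image)  # forward pass now runs on the CPU
print(styled_image.shape)
```

The same `.to(device)` calls work unchanged with `device = torch.device("cuda")` once GPU memory is available again.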
Hey Eric Hu,
Thank you for your help!
Indeed, by reducing the value of Batch_Size, I can do style transfer again!
Can you say what value you reduced Batch_Size to?
Thanks in advance!
Could you also confirm that you hadn't changed anything else in the notebook when you experienced the "CUDA out of memory" error? So the only changes were using the new style image and, after Eric suggested it, fixing the batch size?
I confirm I only changed the style image and the batch size, that's all!
Thank you for your help!
Hi Nikita Ramesh,
I changed the batch size to 11.
I did not test values 13 to 15.
If you want, I can test them.
How do I even change the batch size? I can't find it.
I can find the batch number but not the size.
If you look at the Style Transfer notebook, under Global variables for training, you can change INPUT_HEIGHT and INPUT_WIDTH accordingly. Default is set as 512x256.
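Based on the names mentioned in this thread, the "Global variables for training" cell looks something like the sketch below (the notebook's actual cell may differ; `BATCH_SIZE`, `INPUT_HEIGHT`, and `INPUT_WIDTH` are taken from the replies above):

```python
# Global variables for training -- a sketch based on names from this thread.
BATCH_SIZE = 11      # reduced from the default, as worked above
INPUT_HEIGHT = 512   # defaults mentioned in this thread: 512x256
INPUT_WIDTH = 256

# Rough footprint of the input batch alone: N x C x H x W floats, 4 bytes each.
# Activations inside the network take far more, but this shows why both the
# batch size and the image dimensions matter for GPU memory.
input_bytes = BATCH_SIZE * 3 * INPUT_HEIGHT * INPUT_WIDTH * 4
print(f"input tensor alone: {input_bytes / 1024**2:.1f} MiB")
```

Lowering either `BATCH_SIZE` or the two dimensions shrinks every tensor in the training loop proportionally.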