Upscaling

Recently I read in an article that there is an upscaling solution for the NVIDIA Shield, which is based on the Tegra X1.
Therefore, two questions:

  1. Can it be adapted to run on Xavier / Nano?
  2. Will it support resolutions higher than 4K?

Thanks

Hi,
It is specific to the Shield TV and may not be applicable to L4T. We will evaluate it for future releases.

Thank you for following up.
There is such a feature for GeForce-class cards: see the NGX SDK board, https://devtalk.nvidia.com/default/board/329/ngx-sdk/
However, building it requires Microsoft Visual Studio, and using it requires a Windows OS.

Hi Andrey1984, an alternative approach could be to use your own super-resolution DNN. You can find one implemented in PyTorch here:
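
If this is the super_resolution example from the pytorch/examples repository (the later posts in this thread suggest it is), the network is a small ESPCN-style sub-pixel convolution model that works on the Y (luminance) channel. A minimal sketch of that kind of architecture, written from memory rather than copied from the repository, looks roughly like this:

import torch
import torch.nn as nn

class SuperResolutionNet(nn.Module):
    # ESPCN-style single-channel super-resolution network (illustrative sketch)
    def __init__(self, upscale_factor):
        super().__init__()
        self.relu = nn.ReLU()
        # Feature extraction on the Y channel
        self.conv1 = nn.Conv2d(1, 64, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(64, 32, kernel_size=3, padding=1)
        # Produce upscale_factor^2 sub-pixel channels, then rearrange them spatially
        self.conv4 = nn.Conv2d(32, upscale_factor ** 2, kernel_size=3, padding=1)
        self.pixel_shuffle = nn.PixelShuffle(upscale_factor)

    def forward(self, x):
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        x = self.relu(self.conv3(x))
        return self.pixel_shuffle(self.conv4(x))

# Example: upscale a 1x1x224x224 luminance tensor by 3x -> 1x1x672x672
model = SuperResolutionNet(upscale_factor=3)
with torch.no_grad():
    print(model(torch.randn(1, 1, 224, 224)).shape)

Because the model is fully convolutional, the input resolution is not fixed; memory use simply grows with it.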

Thank you for your input, Dustin.
I shall give it a try.


Tested it, and it worked.

However, when I use a very large input image (about 5 MB, 4K-class resolution), it runs out of memory:

RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 46438023168 bytes. Error code 12 (Cannot allocate memory)

Maybe the second tool will have some options for upscaling very large images? Otherwise, I can try it in the cloud.
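
One general PyTorch note, in case the example script doesn't already do it: running the forward pass under torch.no_grad() stops autograd from keeping every intermediate activation alive. It will not shrink the single huge buffer that fails above, but it does lower overall peak memory during inference. The pattern, with a placeholder model standing in for the example's net, is simply:

import torch
import torch.nn as nn

# Placeholder model standing in for the example's super-resolution net
model = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU())
model.eval()

with torch.no_grad():  # inference only: no graph, no saved activations
    out = model(torch.randn(1, 1, 256, 256))
print(out.shape)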

The second tool also worked on Xavier for a small image.

But when I used a 5000x3888-pixel image as input, it threw:

python3 super_resolution_with_onnxruntime.py 
Traceback (most recent call last):
  File "super_resolution_with_onnxruntime.py", line 137, in <module>
    torch_out = torch_model(x)
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "super_resolution_with_onnxruntime.py", line 73, in forward
    x = self.relu(self.conv2(x))
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 46438023168 bytes. Error code 12 (Cannot allocate memory)

It seems it would require about 46 GB of memory to process the image.
(attached: original.jpg)
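
That number is consistent with the im2col ("unfolded") buffer that PyTorch's fallback CPU convolution allocates for conv2: height x width x (64 input channels x 3x3 kernel) float32 values. A back-of-the-envelope check, assuming the photo is actually a ~20 MP frame such as 5184x3888 (of which "5000x3888" would be the rounded size), reproduces the exact figure from the error:

# Rough estimate of the conv2 im2col buffer (sizes below are assumptions)
H, W = 3888, 5184          # ~20 MP frame; the post above says roughly 5000x3888
cin, k = 64, 3             # conv2: 64 input channels, 3x3 kernel, stride 1, pad 1
bytes_per_float = 4
print(H * W * cin * k * k * bytes_per_float)   # 46438023168, as in the error

So the blow-up comes from per-convolution scratch space and activations, not from the image itself, which is why the Xavier's memory is not enough at full resolution.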

I ran it in the cloud (GCP) on a CPU instance with 120 GB of RAM;
on a GPU it wouldn't run, failing with:

python super_resolve.py --input_image in.jpg --model model_epoch_30.pth --output_filename out.jpg --cuda
Namespace(cuda=True, input_image='in.jpg', model='model_epoch_30.pth', output_filename='out.jpg')
Traceback (most recent call last):
  File "super_resolve.py", line 29, in <module>
    out = model(input)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/examples/examples/super_resolution/model.py", line 21, in forward
    x = self.relu(self.conv2(x))
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 94, in forward
    return F.relu(input, inplace=self.inplace)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 914, in relu
    result = torch.relu(input)
RuntimeError: CUDA out of memory. Tried to allocate 4.81 GiB (GPU 0; 15.90 GiB total capacity; 14.49 GiB already allocated; 717.31 MiB free; 14.50 GiB reserved in total by PyTorch)

Perhaps two 16 GB GPUs could somehow be pooled into a shared 32 GB memory space to run the process, but that would probably require modifying the PyTorch example code in some non-obvious way.
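
Rather than pooling GPU memory, a simpler workaround for a fully convolutional model like this one is to upscale the image in overlapping tiles and stitch the outputs, so no single forward pass ever sees the whole 20 MP frame. This is only a sketch (the function and its tile/pad parameters are my own, not part of the example scripts), but it shows the idea:

import torch

def upscale_tiled(model, img, upscale, tile=512, pad=8):
    # img: 1x1xHxW float tensor (Y channel); upscale: integer scale factor.
    # tile is the crop size, pad is extra context to hide seams; both are tunable.
    _, _, h, w = img.shape
    out = torch.zeros(1, 1, h * upscale, w * upscale)
    device = next(model.parameters()).device
    model.eval()
    with torch.no_grad():                          # no autograd bookkeeping
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                # crop the tile plus a small border of context
                y0, y1 = max(y - pad, 0), min(y + tile + pad, h)
                x0, x1 = max(x - pad, 0), min(x + tile + pad, w)
                sr = model(img[:, :, y0:y1, x0:x1].to(device)).cpu()
                # copy the tile (minus the border) into the output canvas
                th = min(tile, h - y) * upscale
                tw = min(tile, w - x) * upscale
                ty, tx = (y - y0) * upscale, (x - x0) * upscale
                out[:, :, y * upscale:y * upscale + th,
                          x * upscale:x * upscale + tw] = sr[:, :, ty:ty + th, tx:tx + tw]
    return out

With 512-pixel tiles the per-tile buffers stay in the hundreds of MB rather than tens of GB, so it should fit on the Xavier (or a single 16 GB GPU) without touching the model code itself; the cost is possible faint seams where tiles meet, which the overlap is there to reduce.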