An issue with PyTorch 1.13 on JetPack 5.0.2

Hi,

We work in a container environment on Jetson NX devices and use TorchScript to run inference models.
There is an issue that occurs when using the latest NVCR container l4t-pytorch:r35.1.0-pth1.13-py3 and that is not present when using l4t-pytorch:r35.1.0-pth1.12-py3.
The issue is that the first few inferences after the model is loaded (we use a classification model) take a very long time and produce a warning. Here is the output of my script; a rough sketch of the loop follows the logs below.
When using l4t-pytorch:r35.1.0-pth1.13-py3:
2023-01-12 14:07:37,706 INFO Loading inference model…
2023-01-12 14:07:54,215 INFO Inference 0
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1130: UserWarning: operator() profile_node %338 : int = prim::profile_ivalue(%out_dtype.3)
does not have profile information (Triggered internally at …/torch/csrc/jit/codegen/cuda/graph_fuser.cpp:104.)
return forward_call(*input, **kwargs)
2023-01-12 14:09:49,236 INFO Inference 1
2023-01-12 14:09:49,347 INFO Inference 2
2023-01-12 14:09:49,414 INFO Inference 3

When using l4t-pytorch:r35.1.0-pth1.12-py3:
2023-01-12 13:49:53,070 INFO Loading inference model…
2023-01-12 13:50:09,327 INFO Inference 0
2023-01-12 13:50:11,456 INFO Inference 1
2023-01-12 13:50:11,528 INFO Inference 2
2023-01-12 13:50:11,589 INFO Inference 3

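For reference, the script does roughly the following (a minimal sketch, not my exact code; the model path and input shape are placeholders):

import logging
import torch

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

logging.info("Loading inference model...")
model = torch.jit.load("classifier.pt").eval().cuda()  # placeholder model path
x = torch.randn(1, 3, 224, 224, device="cuda")         # placeholder input shape

with torch.no_grad():
    for i in range(4):
        logging.info("Inference %d", i)
        model(x)
        torch.cuda.synchronize()
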
Since I have a solution, I’m happy to move on, but please address this in the next release of l4t-pytorch.

Hi,

Are you using TorchVision?
If yes, how did you install it?

Thanks.

Yes, torchvision is used. For example, I use torchvision.io.decode_jpeg.
I did not install it myself because it is already included in the NVCR container. My Dockerfile starts from:

FROM nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3

To be clear, my requirements.txt does not contain torchvision, and I don’t install anything on top via pip.
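
The usage is the standard torchvision.io pattern, roughly like this (a minimal sketch; the file path is a placeholder):

from torchvision.io import read_file, decode_jpeg

# Read raw JPEG bytes into a uint8 tensor, then decode to a CHW uint8 image tensor.
data = read_file("image.jpg")   # placeholder path
img = decode_jpeg(data)         # shape [3, H, W], dtype torch.uint8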

Hi,

It’s recommended to run a few warm-up iterations of a PyTorch model before it reaches its actual performance.
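
For example, something like this (a minimal sketch; the model path and input shape are placeholders):

import torch

model = torch.jit.load("classifier.pt").eval().cuda()  # placeholder model path
dummy = torch.randn(1, 3, 224, 224, device="cuda")     # placeholder input shape

# A few warm-up passes let the JIT profiling/fusion passes and CUDA
# initialization finish before real requests are served.
with torch.no_grad():
    for _ in range(5):
        model(dummy)
torch.cuda.synchronize()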

Thanks.

