ONNX model deployment issues on Jetson TX2

I am importing a pretrained ResNet-50 ONNX model and running it on a Jetson TX2 with JetPack 4.2 installed. I can import the ONNX file, create the TensorRT model, deploy it, and run inference on the TX2 when the ONNX model is opset 3, but I get errors when the ResNet-50 ONNX model is opset 6 or above. I tried to lower the opset with the ONNX version converter, but no luck: the conversion itself fails.

https://s3.amazonaws.com/download.onnx/models/opset_3/resnet50.tar.gz (works fine)

https://s3.amazonaws.com/download.onnx/models/opset_6/resnet50.tar.gz (does not work)

I tried upgrading TensorRT and ONNX with no luck, and I suspect upgrading them outside of JetPack is not a good idea.


TensorRT 5.0 in JetPack 4.2 supports opset 7.
Would you mind upgrading the device to JetPack 4.2 and trying it with an opset-7 model?


I already have JetPack 4.2 on my Jetson. It does not work.

Thanks, opset 7 now works for me with TensorRT on the Jetson. But PyTorch exports opset 9, and I am having difficulty converting the ONNX model from opset 9 down to opset 7. Here is the error:

converted_model = version_converter.convert_version(inferred_model, 7)
  File "/home/user/miniconda/envs/py36/lib/python3.7/site-packages/onnx/version_converter.py", line 166, in convert_version
    converted_model_str = C.convert_version(model_str, target_version)
RuntimeError: /tmp/pip-req-build-jcsc1lyi/onnx/version_converter/BaseConverter.h:60: adapter_lookup: Assertion false failed: No Adapter For Current Version $9 for BatchNormalization


Opset 9 is supported starting from TensorRT 5.1, which is not available for Jetson yet.

Please wait for our update and release.

Any info on when this release would be available?



Sorry, we cannot share our release plan here.
Please watch our announcements for the new release.


Can you provide any method to convert opset 9 to opset 7, or to control the opset version when exporting from PyTorch?