Running a pytorch network converted to ONNX with TensorRT on the TX2

Thanks, opset 7 now works for me with TensorRT on Jetson. But PyTorch exports opset 9, and I am having trouble converting the ONNX model from opset 9 down to opset 7. Here is the error:

converted_model = version_converter.convert_version(inferred_model, 7)

  File "/home/user/miniconda/envs/py36/lib/python3.7/site-packages/onnx/version_converter.py", line 166, in convert_version
    converted_model_str = C.convert_version(model_str, target_version)
RuntimeError: /tmp/pip-req-build-jcsc1lyi/onnx/version_converter/BaseConverter.h:60: adapter_lookup: Assertion false failed: No Adapter For Current Version $9 for BatchNormalization
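For context, a minimal sketch of the conversion flow that produces this error (the model file names here are hypothetical):

  import onnx
  from onnx import shape_inference, version_converter

  # Load the opset-9 model exported by PyTorch.
  model = onnx.load("model_opset9.onnx")

  # Run shape inference first; the converter expects typed values.
  inferred_model = shape_inference.infer_shapes(model)

  # Downgrade to opset 7. This is the call that fails when the installed
  # onnx package has no opset 9 -> 7 adapter for BatchNormalization.
  converted_model = version_converter.convert_version(inferred_model, 7)
  onnx.save(converted_model, "model_opset7.onnx")

Depending on the PyTorch version, the converter can also be sidestepped by exporting at the target opset directly, e.g. torch.onnx.export(model, dummy_input, "model.onnx", opset_version=7); note the opset_version argument is only available in newer PyTorch releases.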

Hi,

This issue looks like a duplicate of topic 1050830:
https://devtalk.nvidia.com/default/topic/1050830/jetson-tx2/onnx-model-deployment-issues-on-jetson-tx2/

Let's track the status on that topic directly.
Thanks.

Hi simon472,

We got an update from the internal team.
This issue is fixed in TensorRT 5.1, and you will get the package in JetPack 4.2.1.

Thanks.

Hi Simon,

I am stuck fixing the int64 error in my model. I cast every tensor in my torch model to float32, but upon feeding the ONNX model into the TensorRT optimizer I still receive the exact warning you mentioned earlier. Can you please elaborate on how you were able to fix that error?

Thanks
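For reference, the INT64 warning usually comes from weights and constants that are stored as INT64 in the ONNX graph (often produced by shape or indexing ops), not from the floating-point tensors, which is why casting the torch model to float32 does not remove it. One common workaround, not necessarily what Simon did, is to downcast the INT64 initializers in the ONNX file itself. A minimal sketch, assuming all values fit in 32 bits and using a hypothetical file name:

  import numpy as np
  import onnx
  from onnx import numpy_helper

  model = onnx.load("model.onnx")

  # TensorRT supports INT32 but not INT64, so downcast every INT64
  # initializer (weights/constants) in place.
  for init in model.graph.initializer:
      if init.data_type == onnx.TensorProto.INT64:
          arr = numpy_helper.to_array(init).astype(np.int32)
          init.CopyFrom(numpy_helper.from_array(arr, init.name))

  onnx.save(model, "model_int32.onnx")

This does not touch INT64 constants embedded in Constant nodes or node attributes, so the warning can persist in those cases.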