I have successfully converted a PyTorch model to ONNX and then to a TensorRT .engine file. I am looking for a method to run inference on the engine. I followed the official guide, but I encounter an error during deserialization and run out of memory. I have a 4 GB Jetson kit. So, if anyone has been able to run a custom PyTorch model by converting it to a TensorRT engine, please share how you ran inference.
Hi,
Do you serialize the engine on the same Jetson platform?
If yes, it should work without error.
Please note that TensorRT engines are not portable across Jetson devices.
You will need to serialize and deserialize the engine on the same device.
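In case it helps, below is a minimal sketch of deserializing an engine and running inference with the TensorRT Python API plus pycuda. It assumes the binding-based API of TensorRT 8.x, a fixed-shape network, and placeholder names (model.engine, the random input) that you would replace with your own:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine that was serialized on this same Jetson
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate pinned host buffers and device buffers for every binding
inputs, outputs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    device_mem = cuda.mem_alloc(host_mem.nbytes)
    bindings.append(int(device_mem))
    if engine.binding_is_input(binding):
        inputs.append((host_mem, device_mem))
    else:
        outputs.append((host_mem, device_mem))

# Placeholder input: random data matching the first input binding
input_host, input_device = inputs[0]
input_host[:] = np.random.random(input_host.shape).astype(input_host.dtype)

# Copy input to device, run inference, copy output back
cuda.memcpy_htod_async(input_device, input_host, stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
output_host, output_device = outputs[0]
cuda.memcpy_dtoh_async(output_host, output_device, stream)
stream.synchronize()

print("Output:", output_host)
```

Keeping the buffers pre-allocated and reusing the execution context across calls also helps on a 4 GB Jetson, since repeated allocation is a common cause of out-of-memory errors.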
Thanks