JetRacer error: safeSerializationVersion failed. Version tag does not match. Note: Current Version: 0, Serialized Engine Version: 97

Hi,

I’m trying to run my JetRacer road-following model on my Orin. I successfully installed JetCam, torch2trt, and JetRacer.

I tested the camera and IMU and both work, but when I try to load the road-following model I get the error shown in the topic title.

What is causing this error, and how can I resolve it?

My torch installation points to ~/.local/lib/python3.8/site-packages,

while torch2trt points to /usr/local/lib/python3.8/dist-packages.

There is also a copy of torch installed in /usr/local/lib/python3.8/dist-packages.

Do I need to add the torch installed in /usr/local/lib/python3.8/dist-packages to the path? If so, how?
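For reference, which duplicate copy Python actually imports can be checked with the standard library; this is a minimal sketch ("json" stands in for torch here so the snippet runs anywhere, and the helper name is my own):

```python
import importlib.util
import sys

def which_install(module_name):
    """Return the file path Python would actually import for a module."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# On the Jetson you would check "torch"; "json" stands in so the
# snippet runs anywhere.
print(which_install("json"))

# Python searches sys.path in order, so an entry earlier in the list
# (e.g. ~/.local/lib/python3.8/site-packages) shadows a later one
# (e.g. /usr/local/lib/python3.8/dist-packages). Removing the
# shadowing copy is usually simpler than editing the path.
print(sys.path[:3])
```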

Thank you

Hi,

I removed my duplicate torch installation from ~/.local/lib/python3.8/site-packages/torch, then checked the torch version using:

$ python3
>>> import torch
>>> print(torch.__version__)
>>> print(torch.__file__)

and it correctly showed that torch was installed in:
/usr/local/lib/python3.8/dist-packages/torch

I re-ran the same notebook as in my original post, but it gave me the same error.

Then, out of curiosity, I tested the basic motion notebook, and I was able to control the steering, torque, and throttle as expected. I had previously confirmed that the camera and IMU also work as expected.

I would still like to know why I’m getting the error in my original post, though. Is it possible it’s because the model and training info were copied over from a Jetson Nano, while the current device is a new Jetson Orin Nano?

Hi,

The error comes from TensorRT and indicates that the engine you are trying to load was built with a different version of the TensorRT library.
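Conceptually, a serialized engine (plan file) carries a version tag that must match the TensorRT runtime that loads it. The stdlib sketch below mimics that check (the version numbers 0 and 97 are taken from the error message; the real plan-file format is far more involved):

```python
import struct

def serialize_engine(payload: bytes, version: int) -> bytes:
    """Prefix the payload with a little-endian version tag,
    loosely mimicking how a plan file is version-stamped at build time."""
    return struct.pack("<I", version) + payload

def deserialize_engine(blob: bytes, current_version: int) -> bytes:
    """Refuse to load a plan built by a different library version."""
    (tag,) = struct.unpack_from("<I", blob)
    if tag != current_version:
        raise RuntimeError(
            "Version tag does not match. "
            f"Current Version: {current_version}, "
            f"Serialized Engine Version: {tag}")
    return blob[4:]

# An engine serialized on the Nano (tag 97) fails to load on a runtime
# that expects tag 0 -- the same shape as the error in this topic.
blob = serialize_engine(b"weights", 97)
try:
    deserialize_engine(blob, current_version=0)
except RuntimeError as e:
    print(e)
```

This is why the fix is to rebuild the engine on the target device rather than copying it over.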

Do you have the original model (.onnx)?
If yes, please use it to recreate the TensorRT engine on the Orin Nano.

Thanks.

Hi,

Thank you for the explanation.

Where would the original .onnx model file be saved? I checked the files I copied over from our Nano and it’s not there.

EDIT: Never mind, there’s a jetson-inference Python script that converts a .pth file to .onnx, so we’ll use that to create the .onnx.

Once we have the .onnx file, how would we rebuild the TensorRT engine on the Orin Nano?

Hi,

Is your script based on the below?

If yes, the model is generated by the optimize_model.ipynb notebook.
So please run the script again on the Orin Nano.
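If it helps, the conversion step in that notebook looks roughly like this. This is a sketch from memory rather than the exact notebook code: the file names and the resnet18 architecture with a 2-output head are the JetRacer road-following defaults (adjust them to your model), and the conversion only does real work on a Jetson where torch, torchvision, and torch2trt are installed.

```python
# Sketch of the conversion step in JetRacer's optimize_model.ipynb
# (from memory, not the exact notebook). The guard lets the file be
# imported on machines without the Jetson stack.
try:
    import torch
    import torchvision
    from torch2trt import torch2trt
    HAVE_JETSON_STACK = True
except ImportError:
    HAVE_JETSON_STACK = False

def rebuild_engine(pth_in='road_following_model.pth',
                   pth_out='road_following_model_trt.pth'):
    # Recreate the training architecture: resnet18 with a 2-output
    # regression head, as in the road-following example.
    model = torchvision.models.resnet18(pretrained=False)
    model.fc = torch.nn.Linear(512, 2)
    model = model.cuda().eval().half()
    model.load_state_dict(torch.load(pth_in))

    # Converting on the Orin Nano itself ties the new engine to its
    # own TensorRT version, which is what fixes the mismatch error.
    data = torch.zeros((1, 3, 224, 224)).cuda().half()
    model_trt = torch2trt(model, [data], fp16_mode=True)
    torch.save(model_trt.state_dict(), pth_out)

if HAVE_JETSON_STACK and torch.cuda.is_available():
    rebuild_engine()
```

The key point is simply that the torch2trt conversion must run on the Orin Nano, not be copied over from the Nano.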

Thanks.

We will try that, thank you

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.