.engine file from detectnet(jetson-inference) cannot be used in object following

Hi @dusty_nv,
I used train_ssd.py and onnx_export.py from jetson-inference and ran detectnet to do object detection. I found that a .engine file was automatically created.
I used this file in object_following on the JetBot, but it does not work.

1st step (a [*] can be seen in front of all 3 cells, meaning they are still running):

2nd step (all 3 cells finished running, but only the first 2 cells show the numbers [1] and [2]; the 3rd cell's bracket [ ] is blank):

3rd step (re-ran the 3rd cell only, after the 2nd step):

You can see that when the third cell finished (unsuccessfully), no number appears in the [ ]. When I run it again, it raises an error: NameError: name 'model' is not defined.
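The NameError is a notebook symptom rather than a separate bug: if the cell that was supposed to assign `model` crashed before the assignment ran, every later cell that references it will fail with this error instead of the original one. A minimal illustration:

```python
# Minimal sketch of the notebook symptom: if the cell that assigns
# `model` fails before the assignment runs, any later cell that
# references it raises NameError rather than the original loading error.
try:
    model  # never assigned, because the earlier cell failed
except NameError as err:
    print(err)  # name 'model' is not defined
```

So the error to chase is whatever made the third cell fail the first time, not the NameError on the rerun.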

Also, the two pre-trained ssd_mobilenet_v2_coco.engine files from the official NVIDIA website cannot be used in object_following. The error is shown below:

So can you help me figure this out?


Based on the log below, the deserialization failed, which then causes errors when creating the context.

Is the engine file present in the same folder as the workspace?
Or could you replace the file link with the absolute path and give it a try?
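As a quick sanity check before TensorRT even tries to deserialize it, you can confirm the absolute path actually points at a non-empty file (TensorRT's `Runtime.deserialize_cuda_engine()` returns None when the file is bad or truncated). A small hypothetical helper, with a placeholder path:

```python
import os

def check_engine_path(path):
    """Return True if `path` looks like a usable engine file:
    it exists and is non-empty. (Deserialization can still fail
    if the engine was built with a different TensorRT version.)"""
    return os.path.isfile(path) and os.path.getsize(path) > 0

# e.g. check_engine_path('/home/jetbot/ssd-mobilenet.onnx.engine')
```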


Hi, @AastaLLL !
Thanks for your quick response.
The engine file is in the same folder.
And I tried using the absolute path, but it does not work.

The path in your code has the hostname in it (nano-4gb-jp441:) — can you try removing that part?
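In case it helps, stripping such a hostname prefix can be done before opening the file. This hypothetical helper only removes a leading `host:` part and leaves normal absolute paths untouched:

```python
def strip_hostname(path):
    """Remove a leading 'hostname:' prefix (e.g. 'nano-4gb-jp441:')
    from a path string, leaving plain absolute paths unchanged."""
    host, sep, rest = path.partition(':')
    # Only strip when a ':' exists, the string is not already an
    # absolute path, and what follows the ':' looks like one.
    if sep and not path.startswith('/') and rest.startswith('/'):
        return rest
    return path

print(strip_hostname('nano-4gb-jp441:/home/jetbot/ssd-mobilenet.onnx.engine'))
# → /home/jetbot/ssd-mobilenet.onnx.engine
```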

You will also need to adjust the pre/post-processing (and probably the input/output layer names) that JetBot performs, as it is likely to be different. The ssd_mobilenet_v2_coco.engine is from TensorFlow, while the ssd-mobilenet.onnx.engine is from PyTorch, and hence they expect different input/output tensor formats.

If you just use jetson.inference.detectNet(), it will handle the pre/post-processing for those different models. You would need to integrate this with JetBot, though.
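For the PyTorch-exported model, the layer names produced by onnx_export.py are input_0 / scores / boxes, so the detectNet call would look roughly like the sketch below. The paths are placeholders and the argv-builder is just a hypothetical helper:

```python
def detectnet_args(model, labels):
    """Build the argv list for jetson.inference.detectNet() for an
    SSD-Mobilenet model exported by onnx_export.py. The layer names
    input_0 / scores / boxes match that exporter's defaults."""
    return [
        '--model=' + model,
        '--labels=' + labels,
        '--input-blob=input_0',
        '--output-cvg=scores',
        '--output-bbox=boxes',
    ]

# On the Jetson (requires the jetson-inference Python bindings):
# import jetson.inference
# net = jetson.inference.detectNet(argv=detectnet_args(
#     '/home/jetbot/ssd-mobilenet.onnx', '/home/jetbot/labels.txt'))
```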

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.