Problem with object detection on JetBot

We're currently following the object detection tutorial "".
Whenever we try to run the file that runs our trained model, it always results in "detectNet failed to load network". How could this happen? Thanks in advance for your answer.

Hi @aussehenferfekt, it is failing to parse your ONNX model - which version of JetPack are you running? It requires at least JetPack 4.4 (L4T R32.4.3) for TensorRT to be able to load those ssd-mobilenet.onnx models exported from PyTorch.

If you do indeed have JetPack 4.4 or newer, can you post the full terminal log from when you run your script here?
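If it's unclear which JetPack/L4T release a device is running, the release string on the device itself tells you. A minimal sketch, assuming the standard `/etc/nv_tegra_release` file that L4T installs:

```shell
# Sketch: print the L4T release line, from which the JetPack version follows.
# The file path and line format below are the standard ones on Jetson devices.

l4t_release() {
    # Print the first line of an nv_tegra_release-style file, e.g.
    # "# R32 (release), REVISION: 3.1, ..."
    head -n 1 "$1"
}

# On a real Jetson (guarded so this is a no-op elsewhere):
if [ -r /etc/nv_tegra_release ]; then
    l4t_release /etc/nv_tegra_release
fi
```

Here `R32 ... REVISION: 3.1` would mean L4T R32.3.1, which corresponds to JetPack 4.3.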


Sorry for the late reply; here's the full terminal log:

jetbot@jetson-4-3:~/workspace/jetson-inference/python/examples$ python3
Error processing line 1 of /home/jetbot/.local/lib/python3.6/site-packages/vision-1.0.0-py3.6-nspkg.pth:

Traceback (most recent call last):
File "/usr/lib/python3.6/", line 174, in addpackage
File "", line 1, in
File "", line 568, in module_from_spec
AttributeError: 'NoneType' object has no attribute 'loader'

Remainder of file ignored

detectNet -- loading detection network model from:
-- prototxt NULL
-- model /home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/ssd-mobilenet.onnx
-- input_blob 'input_0'
-- output_cvg 'scores'
-- output_bbox 'boxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels /home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/labels.txt
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 6.0.1
[TRT] loading NVIDIA plugins…
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] Could not register plugin creator: FlattenConcat_TRT in namespace:
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/ssd-mobilenet.onnx.1.1.6001.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading /usr/bin/ /home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/ssd-mobilenet.onnx

Input filename: /home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/ssd-mobilenet.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: pytorch
Producer version: 1.3
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [Conv -> "203"]:
--- Begin node ---
input: "input_0"
input: "base_net.0.0.weight"
output: "203"
op_type: "Conv"
attribute {
  name: "dilations"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "group"
  i: 1
  type: INT
}
attribute {
  name: "kernel_shape"
  ints: 3
  ints: 3
  type: INTS
}
attribute {
  name: "pads"
  ints: 1
  ints: 1
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "strides"
  ints: 2
  ints: 2
  type: INTS
}
--- End node ---
ERROR: ModelImporter.cpp:296 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[TRT] failed to parse ONNX model '/home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/ssd-mobilenet.onnx'
[TRT] device GPU, failed to load /home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/ssd-mobilenet.onnx
[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
File "", line 27, in
net = jetson.inference.detectNet(argv=["--model=/home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/ssd-mobilenet.onnx", "--labels=/home/jetbot/workspace/jetson-inference/python/training/detection/ssd/models/detectnet/labels.txt", "--input-blob=input_0", "--output-cvg=scores", "--output-bbox=boxes"], threshold=0.5)
Exception: jetson.inference -- detectNet failed to load network

And if I'm not mistaken, this shows that my JetPack version is 3.1, right?

@aussehenferfekt your version of TensorRT and JetPack-L4T is too old to be able to load/parse the ssd-mobilenet.onnx models. As stated above and in the documentation, JetPack 4.4 (TensorRT 7.1) is required to run those, so you would need to update your TX2.


It shows that you currently have JetPack 4.3 / L4T R32.3.1 / TensorRT 6.0.1, but you need at least JetPack 4.4 / L4T R32.4.3 / TensorRT 7 in order to run the ssd-mobilenet.onnx model.
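For reference, here is a small sketch of that L4T-to-JetPack mapping in Python. The table is deliberately limited to the releases mentioned in this thread; the pairings follow NVIDIA's JetPack release notes to the best of my knowledge:

```python
# Sketch: map a few known L4T release strings to their JetPack versions,
# so a release line like "R32.3.1" can be placed. This table only covers
# the releases discussed in this thread; it is not exhaustive.

L4T_TO_JETPACK = {
    "32.3.1": "4.3",    # ships TensorRT 6.0.1
    "32.4.3": "4.4",    # minimum for loading these PyTorch-exported ONNX models
    "32.7.1": "4.6.1",
    "32.7.3": "4.6.3",  # ships TensorRT 8.2.1
}

def jetpack_for_l4t(l4t: str) -> str:
    """Return the JetPack version for a known L4T release, or 'unknown'."""
    return L4T_TO_JETPACK.get(l4t, "unknown")

print(jetpack_for_l4t("32.3.1"))  # prints 4.3
```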


Thanks for the reply. I'm currently using Ubuntu 18.04 on my JetBot; which version of JetPack and TensorRT would you suggest I install?

@aussehenferfekt I'm not sure about JetBot specifically, but I would upgrade it to JetPack 4.6.3 (the latest release of JetPack 4), which has TensorRT 8.2.1.


I will try it. Thank you! :)

Hi, I've run into a problem again. After installing a new JetPack with the command "sudo apt install nvidia-jetpack", my JetPack is now the newest version apt offers (4.3-b134), but my model shows the same problem. Do I need to upgrade any other packages?

Hi @aussehenferfekt, JetPack 4.3 isn't the newest version and doesn't meet the minimum requirement of JetPack 4.4 in order to run ssd-mobilenet.onnx with TensorRT. Installing nvidia-jetpack via apt only pulls packages for the L4T release already flashed on the device, which is why 4.3-b134 is the newest it will offer you. You may have to reflash your TX2 with SDK Manager to perform the update to JetPack 4.4 or newer - I recommend just flashing it with JetPack 4.6 while you're at it.


Thanks for the reply. I have some dumb questions about the JetPack update:
1. Do I need to back up my documents on the JetBot first?
2. I cannot get SDK Manager installed on my JetBot; after I ran the SDK Manager setup package, it seems like nothing was installed on my computer at all.

Is it possible for me to fall back the model to a version that fits my current environment?

Yes, re-flashing the device will wipe its contents and restore it to the factory OS image with JetPack.

SDK Manager runs on a Linux x86 PC and connects to your Jetson TX2 in recovery mode over micro-USB.

However, it occurs to me that in your post you mention JetBot - and that uses Jetson Nano. Are you using TX2 or Nano? If you are using Nano, you can simply flash the SD card from any PC (Windows, Mac, or Linux) with the SD card image using a tool like Balena Etcher (this is easier than SDK Manager).

I've not been able to get the PyTorch-trained ssd-mobilenet.onnx working with anything older than JetPack 4.4, sorry. JetPack 4.3 is quite old by this point, and updating is recommended if you want the latest support.

I'm not sure about the version of the JetBot, but we do have an SD card.

I think it's a Nano, so that means we can just flash the SD card? Do you know how we can back up our precious documents properly? We're currently working on a college project and cannot start all over again.

If it's indeed a Nano, then yes, you can reflash its SD card simply by removing it from your Nano, putting it in the SD card slot on your laptop or PC, and flashing it using Etcher and the latest JetPack image for Jetson Nano.

To backup your documents, you will need to manually copy them off the SD card before reflashing it. Or if you are concerned about losing data, I recommend just keeping your existing SD card and using a new SD card to flash the latest JetPack to.
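A minimal sketch of that backup step, assuming the card's root filesystem is mounted on the PC. The mount point in the usage comment is hypothetical and will differ per OS (check with `lsblk` or your file manager):

```shell
# Sketch: archive a directory from the mounted SD card into a timestamped
# tarball before reflashing. Paths in the usage example are hypothetical.

backup_dir() {
    # $1 = source directory to save, $2 = destination directory for the tarball
    src="$1"; dest="$2"
    mkdir -p "$dest"
    tar -czf "$dest/backup-$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# On the PC with the SD card mounted (mount point is an assumption):
# backup_dir "/media/$USER/rootfs/home/jetbot" "$HOME/jetbot-backup"
```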

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.