Detectnet: failed to load detectNet model

When trying to test the model on a video or through the camera, I get these errors:

[TRT] INVALID_ARGUMENT: Cannot find binding of given name: data
[TRT] failed to find requested input layer data in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to create TensorRT engine for models/tt/ssd-mobilenet.onnx, device GPU
[TRT] detectNet – failed to initialize.
detectnet: failed to load detectNet model

Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Load the model from the path passed on the command line and verify
# that the graph is well-formed.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
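For step 2, a typical invocation might look like the following (the model path is assumed from the error log above; flags other than --onnx and --verbose are left at their defaults):

```shell
# Sketch of a trtexec run: parses the ONNX file, builds a TensorRT
# engine, and prints a verbose parsing/build log useful for debugging.
trtexec --onnx=models/tt/ssd-mobilenet.onnx --verbose
```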
Thanks!

Hi,
I am not using any custom script, just following the tutorial from the GitHub repository on a custom dataset.
This is the command:
detectnet --model=models/tt/ssd-mobilenet.onnx --labels=models/tt/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes csi://0

Hello @BLACK_CAT,

This issue doesn't look TensorRT-related. Please post your concern on the Jetson forum to get better help.

Thank you.