Not able to run ONNX model in DeepStream

• Hardware Platform Jetson Nano
• DeepStream Version 6.0
• JetPack Version 4.6.2
• TensorRT Version 8.2.1.8
• Issue Type: not able to run ONNX model in DeepStream

How can I adapt the deepstream-app config files to run with the current output from the NVIDIA TAO Toolkit:

  • resnet18_detector.onnx
  • calibration.bin
  • resnet18_detector.trt.int8

When I try to run with the ONNX model, the output video file shows no detected objects, even though the model reached 66% accuracy during training. The model was also trained on top of a previously trained .tlt model.

How did you run the model?
Can you share your model and DeepStream configuration file?

I am only able to provide the configuration file, but I have not been able to successfully run the deepstream-app with any ONNX model.
config.txt (448 Bytes)

If you add the following item to your configuration file, can the program run normally?

int8-calib-file=&lt;path to your INT8 calibration file&gt;

You can also add this item to save the converted model:

model-engine-file=&lt;path to the converted engine file&gt;
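
For example, with the files you listed, the model-related items in the nvinfer configuration might look like this. This is only a sketch: the file names follow your TAO output, but the engine file name and network-mode value are assumptions you should adjust to your setup.

[property]
onnx-file=resnet18_detector.onnx
int8-calib-file=calibration.bin
# assumed name; the engine will be generated at this path on the first run if it does not exist
model-engine-file=resnet18_detector.onnx_b1_gpu0_int8.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=1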

Did any error occur when loading the model?

After adding the int8-calib-file and model-engine-file entries, I still have the same issue: the deepstream-app runs successfully, but no objects are detected.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.