• **Hardware Platform:** Jetson Nano
• **DeepStream Version:** 6.0
• **JetPack Version:** 4.6.2
• **TensorRT Version:** 8.2.1.8
• **Issue:** not able to run an ONNX model in DeepStream
How can I adapt the config files from deepstream-app to run with the current output from the NVIDIA TAO Toolkit (see the sketch after this list)?
resnet18_detector.onnx
calibration.bin
resnet18_detector.trt.int8
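For reference, a minimal Gst-nvinfer `[property]` sketch wired to these files might look like the one below. The label file, class count, preprocessing values, and output settings are assumptions, not taken from the attached config.txt, so verify them against your TAO export:

```
[property]
gpu-id=0
# TAO-exported detector files (paths relative to this config; assumption)
onnx-file=resnet18_detector.onnx
int8-calib-file=calibration.bin
model-engine-file=resnet18_detector.trt.int8
# labels.txt and the class count are placeholders for your dataset
labelfile-path=labels.txt
num-detected-classes=3
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
batch-size=1
gie-unique-id=1
# DetectNet_v2-style preprocessing (1/255 scaling, RGB); confirm for your model
net-scale-factor=0.00392156862745098
model-color-format=0
```

A common reason for zero detections with TAO models is a preprocessing or post-processing mismatch, so it is worth double-checking net-scale-factor and the clustering settings against the TAO documentation for your network.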
When I run with the ONNX model, no objects are detected in the output video file, even though the model reached 66% accuracy during training. The model was also trained on top of a previously trained .tlt model.
I can only provide the configuration file, but I am unable to run the deepstream-app successfully with any ONNX model. config.txt (448 Bytes)
After integrating the int8-calib-file and the model-engine-file, I hit the same issue: the deepstream-app runs successfully, but no objects are detected.
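One way to check whether the exported model and calibration cache are usable at all, independent of DeepStream, is to build an engine with trtexec (a suggestion, not from the original posts; trtexec ships with JetPack under /usr/src/tensorrt/bin, and the output engine name here is arbitrary):

```
# Build an INT8 engine from the TAO ONNX export outside of DeepStream
/usr/src/tensorrt/bin/trtexec \
    --onnx=resnet18_detector.onnx \
    --int8 \
    --calib=calibration.bin \
    --saveEngine=resnet18_detector_b1.engine
```

If trtexec builds and runs the engine, the model itself is fine and the problem is likely on the DeepStream config side (preprocessing or post-processing); if it fails, the export or calibration cache is the first thing to revisit.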
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks