I’m new to using the Nvidia Jetson Nano and DeepStream 5.1.
I want to use an ONNX model developed by a colleague with the “deepstream-test1-usbcam” Python example.
We have been running this model through detectNet, but now we want to switch to DeepStream:
import jetson.inference

model_dir = '/home/' + currentUser + '/jetson-inference/python/training/detection/ssd/models/' + folderName
net = jetson.inference.detectNet(argv=[
    '--model=' + model_dir + '/ssd-mobilenet.onnx',
    '--labels=' + model_dir + '/labels.txt',
    '--input-blob=input_0',
    '--output-cvg=scores',
    '--output-bbox=boxes',
    '--threshold=0.20'])
I am having trouble making the appropriate changes to the pgie configuration .txt file in the “deepstream-test1-usbcam” example, specifically with creating/selecting a custom bounding-box parser.
(The sample config is written for a Caffe model and engine.)
Error:
0:00:05.089327290 14198 0x3b39b320 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:724> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)
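As far as I can tell, this error means nvinfer could not map the model's output tensors onto detections, so a custom bounding-box parser library seems to be needed. Below is a rough sketch of such a parser; the function name NvDsInferParseCustomONNXSSD is mine, and the tensor layouts it assumes (“scores” as [numBoxes x numClasses] post-softmax confidences, “boxes” as [numBoxes x 4] normalized corner coordinates) are guesses based on how jetson-inference exports ssd-mobilenet.onnx, so please verify them against the actual model:

// Minimal sketch of a custom bbox parser for a jetson-inference
// ssd-mobilenet.onnx export. The function name and the assumed tensor
// layouts are my assumptions, not an official NVIDIA parser.
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomONNXSSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // Locate the two outputs named by --output-cvg / --output-bbox.
    const NvDsInferLayerInfo *scores = nullptr;
    const NvDsInferLayerInfo *boxes = nullptr;
    for (auto const &layer : outputLayersInfo) {
        if (strcmp(layer.layerName, "scores") == 0) scores = &layer;
        if (strcmp(layer.layerName, "boxes") == 0) boxes = &layer;
    }
    if (!scores || !boxes) return false;

    // Assumed shapes: scores = [numBoxes x numClasses], boxes = [numBoxes x 4].
    const unsigned int numBoxes = scores->inferDims.d[0];
    const unsigned int numClasses = scores->inferDims.d[1];
    const float *scoreData = static_cast<const float *>(scores->buffer);
    const float *boxData = static_cast<const float *>(boxes->buffer);

    for (unsigned int i = 0; i < numBoxes; ++i) {
        // Class 0 is assumed to be the background class, as in train_ssd.
        for (unsigned int c = 1; c < numClasses; ++c) {
            float conf = scoreData[i * numClasses + c];
            float thresh = (c < detectionParams.numClassesConfigured)
                ? detectionParams.perClassPreclusterThreshold[c] : 0.5f;
            if (conf < thresh) continue;

            // Boxes assumed to be normalized (x1, y1, x2, y2) corners;
            // scale them to the network input resolution.
            float x1 = boxData[i * 4 + 0] * networkInfo.width;
            float y1 = boxData[i * 4 + 1] * networkInfo.height;
            float x2 = boxData[i * 4 + 2] * networkInfo.width;
            float y2 = boxData[i * 4 + 3] * networkInfo.height;

            NvDsInferObjectDetectionInfo obj{};
            obj.classId = c;
            obj.detectionConfidence = conf;
            obj.left = x1;
            obj.top = y1;
            obj.width = x2 - x1;
            obj.height = y2 - y1;
            objectList.push_back(obj);
        }
    }
    return true;
}

// Compile-time check that the signature matches what nvinfer expects.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomONNXSSD);

It would be compiled into a shared library with something like the following (paths assume a default DeepStream 5.1 install):

g++ -shared -fPIC -std=c++14 -o libnvdsparse_onnx_ssd.so nvdsparse_onnx_ssd.cpp -I/opt/nvidia/deepstream/deepstream-5.1/sources/includes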
Is there a general config file that you can recommend?
Or are there instructions on how to write the config file for an ONNX model?
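For reference, the direction I have been trying in the pgie config looks like this (only a sketch: net-scale-factor/offsets are my guess at matching the train_ssd preprocessing of mean 127 and std 128, num-detected-classes=4 is a placeholder for however many entries labels.txt has, and parse-bbox-func-name/custom-lib-path refer to the hypothetical parser sketched above):

[property]
gpu-id=0
# Preprocessing guessed from jetson-inference train_ssd (mean 127, std 128); verify.
net-scale-factor=0.0078125
offsets=127;127;127
model-color-format=0
onnx-file=ssd-mobilenet.onnx
labelfile-path=labels.txt
batch-size=1
# 2 = FP16, a reasonable choice on the Nano.
network-mode=2
# Placeholder: must match the number of classes in labels.txt.
num-detected-classes=4
gie-unique-id=1
# 0 = detector.
network-type=0
output-blob-names=scores;boxes
# Hypothetical parser function/library from the sketch above.
parse-bbox-func-name=NvDsInferParseCustomONNXSSD
custom-lib-path=./libnvdsparse_onnx_ssd.so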
Environment
TensorRT Version: 7.1.3
CUDA Version: 10.2
Operating System + Version:
GPU: NVIDIA Tegra X1 (nvgpu)/integrated
CPU: ARMv8 Processor rev 1 (v8l) x4
Python Version (if applicable): Python3
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
Hi,
We want to check this in our environment.
Could you share the model and the corresponding configuration file with us?