Referring to my previous post, where all the details and config files are given: the YOLOv3 model trained in the TAO framework and exported to .onnx format throws the following error in DeepStream when inference is run with the YOLOv3 custom parser. The ONNX model is first converted to an FP16 .engine file inside DeepStream, and then inference is performed.
Here is the error snapshot.
From the log, it crashed in postprocessing. Please refer to this YOLOv3 sample. You need to provide a postprocessing function via parse-bbox-func-name to parse the inference results.
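For reference, a minimal sketch of the relevant [property] entries in the nvinfer config file. The file names, engine name, and parser function name below are placeholders; use the ones that match your model and the sample's parser library:

```
[property]
# Placeholder model name; first run builds the FP16 engine from the ONNX file
onnx-file=yolov3_tao.onnx
model-engine-file=yolov3_tao.onnx_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Custom postprocessing: function exported by the parser library
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```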
So, following the steps I mentioned and replacing the nvdsparsebbox_Yolo.cpp file in
/opt/nvidia/deepstream/deepstream-6.1/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo
with this file worked for me.
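For anyone hitting the same crash, here is a minimal sketch of the entry point that parse-bbox-func-name must point to. The function name, output-layer layout, and box decoding below are assumptions for illustration only; the actual nvdsparsebbox_Yolo.cpp from the sample (or the replacement file linked above) does the real YOLOv3 decoding:

```cpp
// Sketch of a custom bbox parser, assuming a single output layer that already
// holds N detections of 6 floats each: [x1, y1, x2, y2, confidence, class_id].
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYoloV3Sketch(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;

    // Hypothetical layout: first dimension is the number of detections.
    const float *data = static_cast<const float *>(outputLayersInfo[0].buffer);
    const unsigned int numDets = outputLayersInfo[0].inferDims.d[0];

    for (unsigned int i = 0; i < numDets; ++i) {
        const float *det = data + i * 6;
        NvDsInferObjectDetectionInfo obj;
        obj.classId = static_cast<unsigned int>(det[5]);
        obj.detectionConfidence = det[4];
        obj.left = det[0];
        obj.top = det[1];
        obj.width = det[2] - det[0];
        obj.height = det[3] - det[1];

        // Drop detections below the per-class threshold from the config.
        if (obj.classId < detectionParams.numClassesConfigured &&
            obj.detectionConfidence <
                detectionParams.perClassPreclusterThreshold[obj.classId])
            continue;

        objectList.push_back(obj);
    }
    return true;
}

// Compile-time check that the function matches the prototype DeepStream expects.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV3Sketch);
```

Whatever function is used, its name must match parse-bbox-func-name and the library built from it must be given in custom-lib-path.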
Thank you. :)
Thanks for the update! Is this still a DeepStream issue that needs support? Thanks!
It has been resolved, thank you.