I have modified the deepstream_ssd_parser example from deepstream_python_apps to run YOLOv5, but the outputs of the model are wrong. I have verified a few things:
The model itself is good: it gives correct predictions when run independently.
The model gives correct predictions when it is deployed on a Triton server and inference is run there via a client.
The problem only occurs when running the model via DeepStream + Triton (nvinferserver).
Yes, the input images should be normalized to the range 0-1 before being passed to the model.
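For reference, the 0-1 normalization is usually done in the nvinferserver preprocess block rather than in Python. A minimal sketch of that part of the pbtxt, assuming RGB input and a scale factor of 1/255 (the exact input name and dimensions depend on your exported model):

```
preprocess {
  network_format: IMAGE_FORMAT_RGB
  tensor_order: TENSOR_ORDER_LINEAR
  maintain_aspect_ratio: 0
  normalize {
    scale_factor: 0.0039215697  # 1/255, maps 0-255 pixels to 0-1
    channel_offsets: [0, 0, 0]
  }
}
```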
Also, YOLOv5 has only an “output” layer, not “score_layer”, “class_layer”, and “box_layer”. You can parse just the “output” layer and adapt the post-processing code from the official YOLOv5 repo.
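To illustrate, here is a minimal NumPy sketch of that post-processing, adapted from the logic in the official YOLOv5 repo. It assumes the single “output” tensor has rows of the form [cx, cy, w, h, objectness, class_scores...]; the function name and thresholds are my own choices, not from the example:

```python
import numpy as np

def parse_yolov5_output(output, conf_thres=0.25, iou_thres=0.45):
    """Decode a raw YOLOv5 "output" tensor of shape (num_boxes, 5 + num_classes).

    Returns a list of (xyxy_box, score, class_id) after confidence
    thresholding and greedy per-class NMS. Hypothetical helper, adapted
    from the official YOLOv5 post-processing.
    """
    obj = output[:, 4]
    cls_scores = output[:, 5:] * obj[:, None]          # obj_conf * class_conf
    class_ids = cls_scores.argmax(axis=1)
    scores = cls_scores[np.arange(len(output)), class_ids]
    keep = scores > conf_thres
    boxes, scores, class_ids = output[keep, :4], scores[keep], class_ids[keep]

    # xywh (center format) -> xyxy corners
    xyxy = np.empty_like(boxes)
    xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2
    xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2
    xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2
    xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2

    # greedy NMS: keep the highest-scoring box, drop same-class overlaps
    order = scores.argsort()[::-1]
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        same = class_ids[rest] == class_ids[i]
        xx1 = np.maximum(xyxy[i, 0], xyxy[rest, 0])
        yy1 = np.maximum(xyxy[i, 1], xyxy[rest, 1])
        xx2 = np.minimum(xyxy[i, 2], xyxy[rest, 2])
        yy2 = np.minimum(xyxy[i, 3], xyxy[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (xyxy[i, 2] - xyxy[i, 0]) * (xyxy[i, 3] - xyxy[i, 1])
        area_r = (xyxy[rest, 2] - xyxy[rest, 0]) * (xyxy[rest, 3] - xyxy[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[~(same & (iou > iou_thres))]
    return [(xyxy[i], float(scores[i]), int(class_ids[i])) for i in kept]
```

In the deepstream_ssd_parser layout, a function like this would be called from the pad probe on the tensor output meta in place of the SSD-specific parsing.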
Hi @sandeep.yadav.07780
Can you please share the ssdparser.py file with the changes you made for yolov5 in the function nvds_infer_parse_custom_tf_ssd?
It will be really helpful.
And how is the bbox parser generated?
@sandeep.yadav.07780
Thanks, Sandeep, for the parser; I will look into it.
There won’t be any custom_lib in the pbtxt, right?
And what does the model config for yolov5 look like, I mean dstest_yolov5.pbtxt?
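For context, a config like dstest_yolov5.pbtxt would typically follow the “no postprocess” pattern from the deepstream_ssd_parser example: disable built-in parsing and expose the raw output tensor to the Python probe. A minimal sketch, assuming a Triton model named "yolov5" in a local model repo (the names, paths, and sizes here are placeholders, not from the thread):

```
infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "yolov5"
      version: -1
      model_repo {
        root: "../triton_model_repo"
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize {
      scale_factor: 0.0039215697  # 1/255
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    other {}  # no built-in parser; tensor parsed in the Python probe
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true  # attach raw "output" tensor for the probe
}
```

Note there is indeed no custom_lib entry in this setup, since all parsing happens in Python.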