trt-yolo-app detection errors


I trained a custom YOLO v3 engine using darknet, and then tried using the trained weights and cfg file with trt-yolo-app (on a Tesla P100).
Although objects were detected, the bounding boxes were not accurate. Sometimes the bounding box coordinates were even negative numbers.

I ran the same images through darknet for inference, and got good results.

I would have expected the behavior to be identical. Has anyone else encountered this problem?



Has the meaning of the output changed in your model?
If yes, please update the bbox parser here:



There’s nothing different in the semantics. It’s just detection of objects with YOLO. Instead of the standard set of 80 objects, I used a different set of 10 or so objects. Training and inference in Darknet worked fine. However, doing this on trt-yolo-app didn’t give the same results.
I would have expected trt-yolo-app to behave the same way as Darknet, just without needing the Darknet framework installed.


I have the same problem as prashanth.bhat when running with streams in the DeepStream SDK 3 sample.


Have you updated the model information:


No, I haven’t. Can you tell me what I need to update?

If there’s no change in the network architecture and only the number of output classes has changed, the trt-yolo-app should work fine. What’s the difference in results you are seeing?

Please modify the output buffer parser function according to the changes you have made in your network -

I am also getting the same issue; the only change is the number of classes. I am using Tiny YOLO v3.

Did you find any solution?