New repository for collecting object detection benchmark speed results + examples

So far I have tried YOLOv5 with PyTorch and NVIDIA's SSD300 (PyTorch, no TensorRT acceleration).

Please feel free to comment, criticise, yell, or stare in awe either here or on GitHub.

The idea is to measure only the object detection speed itself: no multithreading, no GStreamer pipelines, no cheating by skipping post-processing, etc.
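To make "measure only the detection speed" concrete, here is a minimal timing-harness sketch. The function names and the optional `sync_fn` hook are my own, not from any particular repository; on GPU you would pass something like `torch.cuda.synchronize` so that queued asynchronous work is flushed before and after timing.

```python
import time

def benchmark(detect_fn, image, warmup=10, iters=100, sync_fn=None):
    """Time only detect_fn(image); sync_fn (e.g. torch.cuda.synchronize)
    flushes pending GPU work so the measured interval is honest."""
    for _ in range(warmup):       # warm-up: JIT compilation, cuDNN autotune, caches
        detect_fn(image)
    if sync_fn:
        sync_fn()
    start = time.perf_counter()
    for _ in range(iters):
        detect_fn(image)
    if sync_fn:
        sync_fn()
    elapsed = time.perf_counter() - start
    return iters / elapsed        # frames per second
```

Because only the `detect_fn(image)` calls sit inside the timed region, decoding, pre-fetching, and display threads cannot inflate the number.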

Next on my list is tensorrtx (I need to disentangle their example code, which is pipelined and multithreaded), and perhaps TensorFlow.

Let me know if there is anything specific you would like me to try.


Thanks for sharing.
Below is an example of running YOLOv5 inference with TensorRT for your reference:


Thank you AastaLLL,
Yes, I addressed it in the post. Unfortunately this is a multithreaded, prefetching Python script, so I need to disentangle it. I have done that before, so it shouldn't be too hard. Also, IIRC, the yolov5_trt example does not count the post-processing, which is actually quite lengthy.
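To illustrate why post-processing is not free, here is a minimal pure-Python sketch of greedy non-maximum suppression, the core of a typical YOLO post-processing step (function names and the 0.45 IoU default are my own illustration; real YOLOv5 post-processing also decodes raw network outputs and filters by confidence before this stage):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)          # highest-scoring remaining box wins
        keep.append(i)
        # drop every remaining box that overlaps the winner too much
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```

With thousands of candidate boxes per frame, this loop (plus box decoding and confidence filtering) can take a substantial fraction of the per-frame budget, which is why leaving it out of the timing flatters the numbers.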
Is your MobileNet-SSD repository still working?

Added the tensorrtx implementation.

@AastaLLL, I am trying to get TRT_object_detection working again, but it seems a bit neglected. It uses the flattenConcat plugin library that used to be in the samples, but now it can't load the version that is there. Is there an alternative for running the MobileNet models on JetPack 4.6? It was a pretty neat repository.


That GitHub repository was designed for a previous environment, so it is not compatible with the latest JetPack 4.6 software.

If you use TF 1.15, you can find an example here of converting the model to TensorRT.
If you use TF 2.x, we have an example demonstrating EfficientNet conversion below:


The problem is, everyone shows conversion of classification models… not object detection. I have tried to convert object detection models before but failed because of the post-processing. If you happen to have a MobileNet-SSD version running in TRT, or can point me to how to produce one, I'll add it to the repository. EfficientDet would also be interesting, although I doubt it will be stunningly fast or accurate.