Python wrapper for the TensorRT implementation of YOLO (currently v2)

Hello, I am new to this and interested in trying it.

But I already get an error message at this step:

wk@jetson:~$ cd $YOLO_ROOT/apps/trt-yolo
-bash: cd: /home/wk/deepstream_reference_apps/yolo/apps/trt-yolo: No such file or directory

I do not see that directory structure being created, and there is no CMakeLists.txt either.

Can you suggest what I have missed?

Thank you!

I haven’t tried it for a while; it might be that they have changed the libraries in a newer version.
I would recommend using Aasta’s MobileNet-SSD, which is faster and better.

How is Aasta’s MobileNet-SSD better? Does it have better accuracy than YOLO-tiny?

I have some pictures I tested, and YOLOv3 detects the persons in them but tiny-YOLO does not.

When you use this wrapper, do you start with a base image and then also add the JetPack SDK or the DeepStream SDK?

If you check what gets cloned by the command git clone https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps,
I think the repository has changed from what it was originally, so the new structure does not seem to work with this project.

Hi, all

If C++ is an option for you, it’s recommended to try Deepstream with YOLOv3.

We can run YOLOv3 at 20 FPS on Nano:
https://devtalk.nvidia.com/default/topic/1064871/deepstream-sdk/deepstream-gst-nvstreammux-change-width-and-height-doesn-t-affect-fps/post/5392823/#5392823

It should be much faster if you try YOLOv3 tiny on TX2.
Thanks.

hi AastaLLL

Are you sure YOLOv3 with DeepStream on Nano is 20 FPS, or do you mean YOLOv3-tiny?

Hi,

It’s YOLOv3.

Have you checked the comment posted in comment #26?
The main difference is to set the input width/height to 416.
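As a concrete illustration of that setting: in a DeepStream application config, the stream-muxer resolution is set in the [streammux] group. This fragment is a sketch of the commonly used config layout, not copied from the thread:

```ini
# Illustrative DeepStream app config fragment (assumed layout);
# set the stream-muxer resolution to match YOLOv3's 416x416 input.
[streammux]
width=416
height=416
batch-size=1
```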

Thanks.

@AastaLLL, Jetson Nano does not support INT8. I think it’s highly unlikely that you can get 20 FPS when running TensorRT-optimized yolov3-416 on Jetson Nano. In comparison, NVIDIA’s previous announcement said that tiny-yolov3 (416x416) ran at 25 FPS on Jetson Nano. Inference-speed-wise, yolov3-416 could be 6~7 times slower than tiny-yolov3-416 (reference: YOLO: Real-Time Object Detection).

https://devblogs.nvidia.com/jetson-nano-ai-computing/

My python implementation of yolov3-416 runs at only ~3.07 FPS on Jetson Nano :-(
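A quick back-of-the-envelope check of those figures (25 FPS for tiny-yolov3 from NVIDIA’s Jetson Nano announcement, and the 6~7x slowdown from the YOLO site):

```python
# Rough sanity check of the expected yolov3-416 FPS on Jetson Nano,
# using the figures quoted above: tiny-yolov3 at 25 FPS, and full
# yolov3-416 being roughly 6-7x slower than tiny.
tiny_fps = 25.0
slowdown_low, slowdown_high = 6.0, 7.0

expected_high = tiny_fps / slowdown_low   # ~4.17 FPS
expected_low = tiny_fps / slowdown_high   # ~3.57 FPS

print(f"expected yolov3-416 range: {expected_low:.2f}-{expected_high:.2f} FPS")
```

The ~3.07 FPS I measured is in the same ballpark, just below the low end of this rough estimate.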

Details could be found in my tensorrt_demos (Demo #4) repository: https://github.com/jkjung-avt/tensorrt_demos
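For anyone reproducing such numbers, FPS is typically measured by timing a loop of inference calls after a short warm-up. A minimal sketch (infer here is a placeholder that just sleeps, not code from the repository above):

```python
import time

def infer(image):
    # Placeholder for the real TensorRT inference call;
    # here it just burns a fixed ~10 ms per frame.
    time.sleep(0.01)

def measure_fps(num_frames=50, warmup=5):
    """Time num_frames inference calls after a short warm-up."""
    dummy_image = None  # a real benchmark would feed actual frames
    for _ in range(warmup):
        infer(dummy_image)  # warm-up iterations are not timed
    start = time.perf_counter()
    for _ in range(num_frames):
        infer(dummy_image)
    elapsed = time.perf_counter() - start
    return num_frames / elapsed

print(f"{measure_fps():.1f} FPS")
```

With the 10 ms placeholder this reports a bit under 100 FPS; with a real inference call the loop body dominates and the number reflects the model’s throughput.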


Hi @jkjung13. Is your implementation solely for use with TensorRT, or can it be used with DeepStream? Thank you.

@afiqlcmec My implementation follows NVIDIA’s original sample code, “Object Detection With The ONNX TensorRT Backend In Python.” It serializes the optimized TensorRT engine into a file. I guess (I haven’t tested it myself) that it cannot be used by DeepStream directly.

Sorry, I am new and a total beginner.

I am running YOLOv3 on my Jetson Nano to do object (person) detection. The detection time is around 7 seconds in total, and I would like to accelerate it.

I tried to follow the steps mentioned on the Git page, but the step-by-step manual is not for beginners ;-)

E.g., I cannot find the file CMakeLists.txt, and I also have no idea in which step it should be created.