Python wrapper for the TensorRT implementation of YOLO (currently v2)

I have made a wrapper for the DeepStream trt-yolo program. It was not easy, but it's done.
Inference speed on the Nano in 10W mode (not MAXN) is 85 ms/image (including pre-processing and NMS - not like the NVIDIA benchmarks :) ), which is FAR faster than anything else I have tried.
Load time is also very fast after the first engine compilation.

The code is a bit rough and still needs a lot of attention, but I would be grateful if anyone could try to follow the installation instructions - sadly, I ran out of memory cards and don't want to erase this one.

The GitHub repo is at . Feel free to contribute to it - I still need to implement a wrapper for YOLOv3 (shouldn't be hard), add better image-resizing logic (currently 416 is hardcoded), and add error handling.

Any comments are welcome. It has been a while since I used C++ (we are talking decades here), and this wrapper was tricky.



Thanks for sharing.

@moshe I needed to
sudo apt-get install libgflags-dev

I will let you know if I hit more issues.

Added, and added support for YOLOv3 as well - both tiny and regular work now. YOLOv2-tiny is the fastest…


@moshe Thanks for this initiative. I'm working on a video-analytics project and would like to know whether, with this API you made, I can analyse a video stream on the Jetson Nano. Thanks in advance!

It should work fine. Also, please check this slightly more formal and better-written version for SSD models by NVIDIA.

Note that beyond inference speed, you need to handle the processing/image-acquisition pipeline properly to get high throughput. In plain English: you need at least two threads, one fetching images from the camera and the other running inference. Better still, add a third thread that does the image pre-processing before inference.
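To make the threading idea concrete, here is a minimal sketch of such a three-stage pipeline using only the Python standard library. `fetch_frame`, `preprocess`, and `infer` are hypothetical placeholders - in a real setup `fetch_frame` would be a `cv2.VideoCapture` read and `infer` would call the TensorRT wrapper:

```python
import queue
import threading

# Placeholder stages - swap these for cv2.VideoCapture reads,
# letterbox resizing to the network input size, and the actual
# TensorRT inference call.
def fetch_frame(i):
    return f"frame-{i}"

def preprocess(frame):
    return f"resized({frame})"

def infer(frame):
    return f"detections({frame})"

raw_q = queue.Queue(maxsize=4)    # small queues keep latency bounded
ready_q = queue.Queue(maxsize=4)
results = []
STOP = object()                   # sentinel to shut the pipeline down

def capture_thread(n_frames):
    for i in range(n_frames):
        raw_q.put(fetch_frame(i))  # blocks if downstream falls behind
    raw_q.put(STOP)

def preprocess_thread():
    while True:
        frame = raw_q.get()
        if frame is STOP:
            ready_q.put(STOP)
            break
        ready_q.put(preprocess(frame))

def inference_thread():
    while True:
        frame = ready_q.get()
        if frame is STOP:
            break
        results.append(infer(frame))

threads = [threading.Thread(target=capture_thread, args=(8,)),
           threading.Thread(target=preprocess_thread),
           threading.Thread(target=inference_thread)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 8 frames processed end to end
```

The bounded queues matter: with `maxsize` set, a slow inference stage pushes back on capture instead of letting frames pile up in memory.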

Hi moshe,

I tried to run your work and got some errors.
At step 6, the file path might be missing 'build' after trt-yolo.
And when I run the trt-yolo-app application, I get this:

trt-yolo-app: /home/nano/Downloads/deepstream_reference_apps/yolo/lib/yolo_config_parser.cpp:122: bool verifyRequiredFlags(): Assertion `(FLAGS_config_file_path.find(".cfg") != std::string::npos) && "config file not recognised. File needs to be of '.cfg' format"' failed.
Aborted (core dumped)

Finally, would you say what it means XXXXXXX of step 10?


Yes, sorry, my bad… I didn't run cmake and make from the build directory.
You need to add build, as you already found. The other error is, I think, from a missing "…" because of not being in the build directory. It is better to give it an absolute path.

I have fixed and checked the README. Hopefully, following the instructions now will result in a working project… I recommend removing the deepstream directory and cloning it again.

Thanks Moshe,

I will test it again and update soon

Hi moshe,

I tested your updated resources and leave some comments here.

  1. 5.ii, to build I had to remove 'set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} -fPIC")'
  2. 9.iii, it returns an error:
File does not exist : data/labels.txt
trt-yolo-app: /home/nano/Downloads/deepstream_reference_apps/yolo/lib/trt_utils.cpp:124: std::vector<std::__cxx11::basic_string<char> > loadListFromTextFile(std::__cxx11::string): Assertion `fileExists(filename)' failed.
Aborted (core dumped)

To execute, I had to move to the $YOLO_ROOT directory.
3. 15. Like 2 above, it succeeds from $YOLO_ROOT.

Good work! Way better!
And I hope your work will support camera input.


Thank you for helping me in making this stable! Much appreciated.

As for (1), this is really strange… without adding this line I got an error while linking. I'll retry from scratch and see. Did you get an error?
I'll also recheck the paths. This basically looks like you didn't do step 14… the paths in the yolo configs are relative.

It basically supports anything you throw at it… if you want to use a camera, use OpenCV's VideoCapture to get the frames. I am currently writing a home-security system with threaded fetch from the camera, motion detection, object detection, and snippet recording. If this is also what you aim for, you can wait for it… it's not at a stage where I can release it yet, but give me a week or two…


when you update this git, I will be willing to test !

Can you please tell me why you removed the NVCC line? What error did you get?

Hi Moshe,

Here is an update on what I did:

  • The default Jetson Nano SD image is used.
  1. If I add the NVCC flag line in CMakeLists.txt, it returns the following:
nano@nano-devel:~/Downloads/deepstream_reference_apps/yolo/apps/trt-yolo/build$ sudo make install
[  7%] Building NVCC (Device) object lib/CMakeFiles/cuda_compile_1.dir/
nvcc fatal   : Unknown option 'fPIC'
CMake Error at (message):
  Error generating

lib/CMakeFiles/yolo-lib.dir/build.make:225: recipe for target 'lib/CMakeFiles/cuda_compile_1.dir/' failed
make[2]: *** [lib/CMakeFiles/cuda_compile_1.dir/] Error 1
CMakeFiles/Makefile2:122: recipe for target 'lib/CMakeFiles/yolo-lib.dir/all' failed
make[1]: *** [lib/CMakeFiles/yolo-lib.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
  2. Step 14 should be done before 9.3 (modifying $YOLO_ROOT/config/yolov2-tiny.txt).
    In addition to step 14, '--test_images' should be modified properly as well.
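For anyone hitting the same path problems: the fix from step 14 amounts to making every path in the config file absolute. A hypothetical yolov2-tiny.txt might then look like the fragment below - only --config_file_path and --test_images appear in this thread; the other flag names and file names are assumptions, so check them against your actual config:

```text
--config_file_path=/home/nano/Downloads/deepstream_reference_apps/yolo/config/yolov2-tiny.cfg
--wts_file_path=/home/nano/Downloads/deepstream_reference_apps/yolo/data/yolov2-tiny.weights
--labels_file_path=/home/nano/Downloads/deepstream_reference_apps/yolo/data/labels.txt
--test_images=/home/nano/Downloads/deepstream_reference_apps/yolo/data/test_images.txt
```

With absolute paths, the app no longer depends on which directory you launch it from.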


Thank you for your patience on this and for helping get it right!

Did you copy and paste the line, or did you type it? This is really perplexing… on my Nano it works very well; actually, it does not work without it.

After a bit of research, it might have to do with the version of CMake. Can you please try "--compiler-options -fPIC" instead of "-fPIC"?
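In CMakeLists.txt terms, the suggestion would look roughly like this (a sketch, assuming the flag is appended to CUDA_NVCC_FLAGS as in the line quoted earlier in the thread):

```cmake
# -fPIC on its own is a host-compiler flag; on some CMake/CUDA combinations
# nvcc rejects it ("Unknown option 'fPIC'"), so forward it to the host
# compiler explicitly:
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} --compiler-options -fPIC")
# equivalently:
# set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} -Xcompiler -fPIC")
```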

Hi moshe,


Either --compiler-options or the -Xcompiler option fixes this issue.


Hi Moshe,
Great job, thanks a lot.
Any plans to extend it to video streams?

The problem seems to be that you are trying to use INT8, which is not supported on the Nano and in any case needs calibration.

Just Google the weights; they should be very easy to find on the darknet site. I am on my phone now, so it's awkward, but if you can't find them, let me know and I will look it up for you later.

Moshe, good job with your implementation. It's hard to come across a port of YOLO to TensorRT, not to mention a Python wrapper.
I’ll keep an eye on your project.


What version of OpenCV are you using to run this? I have tried OpenCV 3.4.6 and 4.0, and both gave similar errors about some undefined symbol inside the