Hi,
I am building a real-time pipeline in C++ with OpenCV that consists of the following tasks:
Stream colour frames → Stabilize frames → Perform Inference → Output Position
----------------------------------------------- Stream depth frames --^
Obviously, on a Jetson Nano things start to get slow, so I would like to split the pipeline across multiple threads.
Currently I can think of three ways to do the multithreading: Boost threads, OpenMP, and standard C++ threads (std::thread). Which option should I pick?
I am also considering switching some OpenCV functions to their CUDA versions to speed things up.
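For reference, this is roughly the kind of change I have in mind for the CUDA path (untested sketch; it assumes my OpenCV build includes the CUDA modules such as cudaimgproc):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>        // for cv::COLOR_BGR2GRAY
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>    // requires an OpenCV build with the CUDA modules

// Grayscale conversion done on the GPU instead of the CPU.
cv::Mat gpuGray(const cv::Mat& bgrFrame)
{
    cv::cuda::GpuMat d_src, d_gray;
    d_src.upload(bgrFrame);                                  // host -> device copy
    cv::cuda::cvtColor(d_src, d_gray, cv::COLOR_BGR2GRAY);   // runs as a CUDA kernel
    cv::Mat gray;
    d_gray.download(gray);                                   // device -> host copy
    return gray;
}
```

I understand the upload/download copies are not free, so I expect this only pays off if several steps stay on the GPU between copies.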
Thank you
Hi,
For running deep learning inference on Jetson platforms, we would suggest using the DeepStream SDK. Please check
https://forums.developer.nvidia.com/t/announcing-deepstream-sdk-4-0-2/109325
You can install the package through SDKManager and find it under
/opt/nvidia/deepstream/deepstream-4.0/
You presumably need to apply the same stabilization transform to the depth frames that you apply to the color frames?
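Roughly like this, assuming the depth stream is already registered to the colour stream and you have the 2x3 affine matrix from your stabilizer (sketch only; the names are placeholders):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Apply the stabilization transform estimated from the colour stream
// to both the colour frame and the matching depth frame.
void stabilizePair(const cv::Mat& colour, const cv::Mat& depth,
                   const cv::Mat& M,            // 2x3 affine from the stabilizer
                   cv::Mat& colourOut, cv::Mat& depthOut)
{
    cv::warpAffine(colour, colourOut, M, colour.size(), cv::INTER_LINEAR);
    // Nearest-neighbour for depth so two different surfaces never get
    // averaged into a depth value that doesn't exist in the scene.
    cv::warpAffine(depth, depthOut, M, depth.size(), cv::INTER_NEAREST);
}
```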
In general, the thread API you use doesn’t matter, unless you use a specific library whose documentation requires a particular threading package.
I prefer native pthreads, but any of the options should work, as long as you pay attention to the thread safety (or lack thereof) of the libraries/functions you’re using.
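For example, with plain std::thread the hand-off between two stages can be just a small bounded queue like this (sketch only; the capacity and the cv::Mat payload are placeholders for whatever your stages actually exchange):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <opencv2/core.hpp>

// Minimal single-producer/single-consumer hand-off between two pipeline
// stages. Bounded, so the capture stage cannot run away from inference.
class FrameQueue {
public:
    explicit FrameQueue(std::size_t cap) : cap_(cap) {}

    void push(cv::Mat frame) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(frame));
        notEmpty_.notify_one();
    }

    cv::Mat pop() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&] { return !q_.empty(); });
        cv::Mat f = std::move(q_.front());
        q_.pop();
        notFull_.notify_one();
        return f;
    }

private:
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
    std::queue<cv::Mat> q_;
    std::size_t cap_;
};
```

One thread per stage with a queue like this between each pair gives you back-pressure for free, and the same structure works whether you spawn the threads with std::thread, Boost, or pthreads.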