Use OpenCV or DeepStream SDK on Jetson Nano for a large project

Good afternoon!

We want to build a project that involves processing video streams from several cameras. I’ve read a lot of articles and forum topics saying that using OpenCV + Python on Jetson Nano is highly undesirable and that it’s better to use the DeepStream SDK, but some sources claim that OpenCV works just fine.

There are a few questions that concern me the most:

  1. Is it possible to process video with a custom neural network and OpenCV? Will it cause any problems?
  2. Will there be a significant performance penalty?

There is too much conflicting information, and we can’t make a decision. What’s more, the DeepStream SDK documentation is quite confusing…

Although I support using OpenCV for many simple cases, if you want to process several video streams with neural networks, I’d advise going with DeepStream and running inference with TensorRT.

Will using OpenCV cause problems, or is it only a matter of performance?

Hi,
Please check this post:
[Gstreamer] nvvidconv, BGR as INPUT - #2 by DaneLLL

Due to that constraint we suggest using the DeepStream SDK. With OpenCV, handling/processing BGR data requires significant CPU usage, so performance may be capped by the CPU.
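For reference, a typical OpenCV capture pipeline on Jetson looks roughly like the minimal Python sketch below (assuming OpenCV was built with GStreamer support and a CSI camera; the sensor ID, resolution, and frame rate are placeholders). The videoconvert step that produces the BGR frames OpenCV expects runs on the CPU for every frame of every stream, which is where the bottleneck comes from:

import cv2

# nvarguscamerasrc and nvvidconv use the ISP/GPU, but the final BGRx -> BGR
# conversion (videoconvert) runs on the CPU for every frame.
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # frame is a CPU-side BGR numpy array; any per-frame processing here
    # adds to the CPU load of each camera.
cap.release()

With several cameras, each stream needs its own conversion and its own copy of this loop, so the CPU saturates quickly.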

To try the DeepStream SDK, you may start with deepstream-app:

/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
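To point it at your own cameras, you can copy that config and edit the [sourceN] groups. A rough sketch for one RTSP camera (the URI is a placeholder; the exact keys are documented in the sample configs shipped with your DeepStream version):

[source0]
enable=1
# Type: 1=CameraV4L2, 2=URI, 3=MultiURI, 4=RTSP, 5=CSI
type=4
uri=rtsp://<camera-ip>:554/stream1

Add one [sourceN] group per camera, or keep the sample’s type=3/MultiURI with num-sources if you just want to benchmark several copies of the same file.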

Thank you for your answer!

Is this the only problem that using OpenCV may cause, or are there other pitfalls to keep in mind? Is it realistic to process video streams from multiple cameras?

And one more question: can you recommend some resources on how to use the DeepStream SDK with custom neural networks? Of course, we have explored the docs and the official repo, but we still have many questions…

Hi,

Could you share more details about the model?
Then we can share the corresponding example with you.

For example, which framework do you use? Is it a classifier or a detector?

Thanks

Hi!

As an example: the PyTorch implementation of SphereFace.

Hi, @AastaLLL! Do you have any ideas about exhaustive examples?

Hi,

We recommend using DeepStream for inference.
DeepStream uses TensorRT as its backend, which saves memory and is better optimized for Jetson.

The GitHub repository you shared looks like a classification use case in PyTorch.
To deploy the model with DeepStream, please convert it into ONNX format first (a rough export sketch follows the links below).
Then you only need to update the configuration file based on your use case:

Sample: deepstream_python_apps/apps/deepstream-test2 at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
Document: Using a Custom Model with DeepStream — DeepStream 6.1.1 Release documentation
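For the export step, something along these lines usually works. This is a minimal sketch, assuming the sphere20a model class and checkpoint from the commonly used sphereface_pytorch repository; the 112x96 input size and the file names are assumptions, so adjust them to your own implementation:

import torch
from net_sphere import sphere20a   # model definition from the sphereface_pytorch repo (assumed)

model = sphere20a(feature=True)    # feature=True: output the face embedding instead of class logits
model.load_state_dict(torch.load("sphere20a.pth", map_location="cpu"))  # placeholder checkpoint path
model.eval()

# SphereFace is normally fed 112x96 aligned face crops; adjust if your pipeline differs.
dummy = torch.randn(1, 3, 112, 96)

torch.onnx.export(
    model, dummy, "sphereface.onnx",
    input_names=["input"], output_names=["embedding"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "embedding": {0: "batch"}},
)

The resulting .onnx file is what the nvinfer element consumes (the onnx-file property in the [property] group of its config); TensorRT then builds the engine from it on the first run.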

Thanks.