Video analytics without using any GStreamer plugins (neither GStreamer base nor NVIDIA GStreamer plugins)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name: for which plugin or which sample application, and the function description.)

Dear NVIDIA team,
I’m building a GStreamer pipeline on a Jetson NX DevKit that includes: video decode - streammux - nvinfer - nvtracker.
I found your ‘DeepStream Reference Application - deepstream-app’ and ran it successfully. But with the GStreamer libraries, too many threads are launched, as shown below, and I think that is not good at all.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND P
14767 jetsonnx 20 0 14,976g 2,129g 866208 S 11,9 28,1 0:01.01 deepstream-app 1
14768 jetsonnx 20 0 14,976g 2,129g 866208 S 11,9 28,1 0:01.01 deepstream-app 5
14770 jetsonnx 20 0 14,976g 2,129g 866208 S 11,9 28,1 0:01.01 deepstream-app 2
14773 jetsonnx 20 0 14,976g 2,129g 866208 S 11,9 28,1 0:01.03 deepstream-app 4
14776 jetsonnx 20 0 14,976g 2,129g 866208 S 11,9 28,1 0:01.01 deepstream-app 5
14757 jetsonnx 20 0 14,976g 2,129g 866208 S 11,6 28,1 0:01.05 deepstream-app 0
14763 jetsonnx 20 0 14,976g 2,129g 866208 S 11,6 28,1 0:01.02 deepstream-app 0
14764 jetsonnx 20 0 14,976g 2,129g 866208 S 11,6 28,1 0:01.00 deepstream-app 3
14774 jetsonnx 20 0 14,976g 2,129g 866208 S 11,6 28,1 0:01.02 deepstream-app 1
14779 jetsonnx 20 0 14,976g 2,129g 866208 S 11,6 28,1 0:01.01 deepstream-app 2
14780 jetsonnx 20 0 14,976g 2,129g 866208 S 11,6 28,1 0:01.00 deepstream-app 3
14758 jetsonnx 20 0 14,976g 2,129g 866208 S 10,9 28,1 0:01.03 deepstream-app 5
14609 jetsonnx 20 0 14,976g 2,129g 866208 S 6,6 28,1 0:00.68 deepstream-app 4
14597 jetsonnx 20 0 14,976g 2,129g 866208 S 4,6 28,1 0:00.46 tiled_display_q 1
14745 jetsonnx 20 0 14,976g 2,129g 866208 S 4,6 28,1 0:00.44 src_bin_muxer:s 3
14595 jetsonnx 20 0 14,976g 2,129g 866208 S 4,3 28,1 0:00.46 osd_conv_queue: 2
14596 jetsonnx 20 0 14,976g 2,129g 866208 S 2,6 28,1 0:00.22 osd_queue:src 2
14612 jetsonnx 20 0 14,976g 2,129g 866208 S 1,3 28,1 0:00.14 primary_gie_que 1
14639 jetsonnx 20 0 14,976g 2,129g 866208 S 1,3 28,1 0:00.10 queue:src 1
14657 jetsonnx 20 0 14,976g 2,129g 866208 S 1,3 28,1 0:00.10 multiqueue0:src 4
14673 jetsonnx 20 0 14,976g 2,129g 866208 S 1,3 28,1 0:00.10 multiqueue5:src 3
14610 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.07 deepstream-app 5
14617 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.11 queue:src 1
14620 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 queue:src 1
14623 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 queue:src 0
14629 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 queue:src 1
14631 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 queue:src 4
14633 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 queue:src 0
14638 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 queue:src 2
14647 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 queue:src 1
14653 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 multiqueue1:src 0
14660 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 multiqueue1:src 0
14662 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 multiqueue4:src 4
14672 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 multiqueue2:src 4
14676 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.11 multiqueue8:src 4
14678 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 multiqueue6:src 2
14680 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 multiqueue3:src 3
14681 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 multiqueue10:sr 2
14682 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.10 multiqueue11:sr 0
14683 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 multiqueue8:src 4
14684 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 multiqueue9:src 3
14689 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 NVMDecBufProcT 5
14694 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 NVMDecBufProcT 0
14715 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 NVMDecBufProcT 4
14716 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.08 NVMDecBufProcT 5
14727 jetsonnx 20 0 14,976g 2,129g 866208 S 1,0 28,1 0:00.09 NVMDecBufProcT 3
14591 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.33 deepstream-app 1
14614 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.10 queue:src 1
14626 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.10 queue:src 1
14644 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.10 queue:src 1
14650 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.06 qtdemux1:sink 0
14654 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.09 multiqueue0:src 2
14658 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.05 qtdemux6:sink 3
14659 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.10 multiqueue4:src 4
14661 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.10 multiqueue2:src 3
14664 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.09 multiqueue5:src 2
14669 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.11 multiqueue3:src 3
14670 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.09 multiqueue6:src 5
14671 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.10 multiqueue7:src 0
14674 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.09 multiqueue11:sr 1
14675 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.11 multiqueue9:src 3
14677 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.11 multiqueue10:sr 3
14679 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.09 multiqueue7:src 0
14685 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.08 NVMDecBufProcT 5
14692 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.03 V4L2_DecThread 1
14696 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.08 NVMDecBufProcT 3
14704 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.08 NVMDecBufProcT 4
14710 jetsonnx 20 0 14,976g 2,129g 866208 S 0,7 28,1 0:00.08 NVMDecBufProcT
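The listing above is a per-thread view (`top -H`). As a quick cross-check, the thread count of a process can also be read from `/proc`; the PID in the sketch below uses the current shell only as a stand-in, not the actual deepstream-app PID:

```shell
# Count the light-weight processes (threads) belonging to a PID via /proc.
# Substitute the deepstream-app PID (e.g. from `pidof deepstream-app`);
# the current shell's own PID ($$) is used here only as a stand-in.
PID=$$
ls /proc/"$PID"/task | wc -l
```

Each entry under `/proc/<pid>/task` is one thread of the process, which matches what `top -H` shows as separate lines.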

As I understand it, I can manually create a pipeline without using the GStreamer libraries, as in this topic: ‘KLT NvMOT usage’.
So could you please share guidelines or examples on:

  • How to decode an RTSP stream using only the NVIDIA DeepStream SDK API
  • How to mux streams using only the NVIDIA DeepStream SDK API
  • How to detect and classify objects using only the NVIDIA DeepStream SDK API
  • How to track objects using only the NVIDIA DeepStream SDK API

Thanks in advance!

Before starting with DeepStream, please study GStreamer first: https://gstreamer.freedesktop.org/

We have many DeepStream sample apps other than deepstream-app; these samples cover your questions.

You can find the source code under /opt/nvidia/deepstream/deepstream-5.0/sources/apps

Hi Fiona.Chen,
Thanks for your response.
I’ve learned GStreamer before, but as I mentioned above, an application developed on top of GStreamer creates too many Linux pthreads, and we cannot fit that into our own software architecture. So I want to know how to implement the same video-analytics features without using any GStreamer plugins. That is, we want to use only the nvds_infer, nvds_inferutils, nvds_mot_klt, nvds_mot_iou, … libraries instead of gst-nvinfer, gst-nvtracker, …
Regards!

DeepStream is based on GStreamer. Using the low-level interfaces in DeepStream is not recommended, and we will not support doing so.
For the low-level inference interfaces, you may want to refer to TensorRT (https://docs.nvidia.com/deeplearning/tensorrt/index.html), and you can also ask questions in the TensorRT forum (https://forums.developer.nvidia.com/c/ai-data-science/deep-learning/tensorrt/92).

As to RTSP, NVIDIA does not provide any solution other than GStreamer.
The tracking solution is GStreamer-based only; using its low-level interfaces is not supported.

Hi Fiona.Chen,
As I mentioned before, I found an example that implements the object-tracking function without a GStreamer plugin:


So, do you have other examples or guidelines for video decoding and object detection & classification?
In fact, your DeepStream SDK API reference here: https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/DeepStream_Development_Guide/baggage/index.html is not enough for me to implement the features I need.

Regards!

If you want to decode videos with NVIDIA hardware, you can refer to the Multimedia API for Jetson: https://docs.nvidia.com/jetson/l4t-multimedia/index.html
https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3231/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide%2Fmultimedia.html%23

For object detection & classification, please refer to TensorRT: https://docs.nvidia.com/deeplearning/tensorrt/index.html

Since you don’t want to use the DeepStream SDK (it is GStreamer-based), there is no framework that integrates decoding and inference (detection and classification) together; you will need to integrate them into one application yourself.

The DeepStream documentation is not suitable for your use case.