Is there a C++ implementation that uses GStreamer to obtain RTSP streams and convert the data into a specified format?

Here is my device information:

Model: NVIDIA Orin Nano Developer Kit - JetPack 5.1.3 [L4T 35.5.0]
gst-launch-1.0 version 1.16.3

I have successfully obtained the RTSP stream using this GStreamer command:

gst-launch-1.0 rtspsrc location=rtsp://192.168.0.12:8554/test protocols=udp latency=100 ! rtph264depay ! h264parse ! nvv4l2decoder ! nv3dsink -e

Now I need to implement the same function in C++: obtain the RTSP data and decode it into the format required by the facial recognition algorithm. Is there a sample I can use as a reference? I found a GStreamer sample that compiles normally, but after running it there is no preview and no data is read.
The pipeline used in the C++ program:

#define RTSP_PIPELINE "rtspsrc location=rtsp://192.168.0.12:8554/test protocol=udp latency=100 ! " \
    "rtph264depay ! h264parse ! nvv4l2decoder ! " \
    "nvvidconv ! " \
    "video/x-raw format=(string)BGRx ! " \
    "videoconvert ! " \
    "video/x-raw format=BGR ! " \
    "appsink name=sink"

Log printed after running:

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Stream format not found, dropping the frame
Stream format not found, dropping the frame
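
For comparison, below is a minimal, untested sketch of how such a pipeline string is typically driven from C++: gst_parse_launch() builds the pipeline and an appsink callback pulls the decoded BGR frames. Note two deliberate differences from the macro above, which would at least prevent the BGR caps from being applied as intended: the rtspsrc property is protocols (plural, as in the working gst-launch-1.0 command), and caps filters in parse-launch syntax need commas, e.g. video/x-raw,format=BGRx.

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer /*user_data*/)
{
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return GST_FLOW_ERROR;
    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        /* map.data holds one packed BGR frame; hand it to the
           face-recognition algorithm here. */
        g_print("Got frame: %" G_GSIZE_FORMAT " bytes\n", map.size);
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "rtspsrc location=rtsp://192.168.0.12:8554/test protocols=udp latency=100 ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink name=sink", &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    GstAppSinkCallbacks callbacks = {};  /* zero-init; only new_sample is used */
    callbacks.new_sample = on_new_sample;
    gst_app_sink_set_callbacks(GST_APP_SINK(sink), &callbacks, nullptr, nullptr);
    gst_object_unref(sink);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GMainLoop *loop = g_main_loop_new(nullptr, FALSE);
    g_main_loop_run(loop);  /* frames arrive in on_new_sample() */

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    g_main_loop_unref(loop);
    return 0;
}

Build with, for example: g++ main.cpp $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-app-1.0)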

Best Regards.

Hi,
For this use case we suggest using the DeepStream SDK. You can install the packages through SDKManager. Please check the documentation and give it a try:

NVIDIA Metropolis Documentation
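
For reference, deepstream-app reads its input from a [sourceN] group in its config file. A minimal sketch for your RTSP stream (field names per the deepstream-app configuration reference; not a verified config) could look like this:

[source0]
enable=1
# type=4 selects an RTSP source
type=4
uri=rtsp://192.168.0.12:8554/test
num-sources=1
gpu-id=0
# jitterbuffer latency in ms, as in the gst-launch-1.0 command
latency=100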

Hi @DaneLLL
Thank you for your reply.
I looked at the DeepStream C++/Python reference apps repository and found some Python samples, but there are no relevant C++ samples:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps

If there is a GStreamer-based sample that obtains RTSP streams, decodes them, and converts the format, please send it to me so I can use it as a reference.
Best Regards

Hi,
I have already installed DeepStream on the Orin Nano. DeepStream version:

deepstream-app version 6.3.0
DeepStreamSDK 6.3.0

After successful installation, I tried to execute the sample but encountered an error:

deepstream-app -c source1_csi_dec_infer_resnet_int8.txt

WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.924373472  6176 0xaaab2898b820 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:04.146649472  6176 0xaaab2898b820 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:04.146778528  6176 0xaaab2898b820 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.

I searched the forum for similar errors and changed all permissions under the DeepStream directory to 777, but the same error still persists.

Files in the path:

nvidia@ubuntu:/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector$ ls
cal_trt.bin  labels.txt  resnet10.caffemodel  resnet10.prototxt

Please help me solve this problem.
Best Regards.

Hi,
Please try other config files, and please try running the command with sudo.

Hi,
I have also tested with sudo when executing the sample, and the result is the same.
I tested other config files, and the samples print the same error.

deepstream-app version 6.3.0
DeepStreamSDK 6.3.0
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.5
cuDNN Version: 8.6
libNVWarp360 Version: 2.0.1d3

Does the current component version support running deepstream samples on Orin Nano?

Hi @DaneLLL

I checked the files in the /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector path and found that there is no resnet10.caffemodel_b30_gpu0_int8.engine:

nvidia@ubuntu:/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector$ ls
cal_trt.bin  labels.txt  resnet10.caffemodel  resnet10.prototxt

Can you guide me in generating this engine?

Best Regards.

Hi,
Please run the command with sudo, since the path needs root permission, or copy the whole deepstream folder to a path that does not require root permission (such as your home directory).
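
For example (the destination path is just an illustration):

cp -r /opt/nvidia/deepstream/deepstream-6.3 ~/deepstream-6.3
cd ~/deepstream-6.3/samples/configs/deepstream-app
deepstream-app -c source1_csi_dec_infer_resnet_int8.txt

The missing .engine file is not shipped with the SDK; nvinfer serializes it next to the model on the first successful run, which is why write permission matters. If you prefer to build it ahead of time, a trtexec invocation along these lines should produce the file named in your log — an untested sketch, assuming TensorRT 8.5's trtexec still accepts the deprecated Caffe options and that the output blob names match the sample nvinfer config:

cd ~/deepstream-6.3/samples/models/Primary_Detector
/usr/src/tensorrt/bin/trtexec \
    --deploy=resnet10.prototxt \
    --model=resnet10.caffemodel \
    --output=conv2d_bbox \
    --output=conv2d_cov/Sigmoid \
    --int8 --calib=cal_trt.bin \
    --batch=1 \
    --saveEngine=resnet10.caffemodel_b1_gpu0_int8.engine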

Hi, @DaneLLL
I added sudo in front of the deepstream-app command when running it, but still got the same error. Then I copied the deepstream-6.3 directory to /home/nvidia and executed it there, but still got the same error:

sudo deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 
WARNING: Deserialize engine failed because file path: /home/nvidia/code/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:03.884848352  5694 0xaaab072ba860 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 6]: deserialize engine from file :/home/nvidia/code/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:04.110614272  5694 0xaaab072ba860 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 6]: deserialize backend context from engine from file :/home/nvidia/code/deepstream-6.3/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:04.110748992  5694 0xaaab072ba860 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 6]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.

Hi,
We don’t observe the issue. The engine file should be generated on the first run. It is a bit strange that you cannot successfully run the default sample. Are you able to re-flash the system and try again?
