DeepStream gst-python, no video output

Please provide complete information as applicable to your setup.

• Hardware Platform - GPU
• DeepStream Version - 5.1
• TensorRT Version - 7.2
• NVIDIA GPU Driver Version - 460.27

Upon running the sample Python app from this link - Python Sample Apps and Bindings Source Details — DeepStream 6.1.1 Release documentation -

and adhering to all the requirements, I am facing this issue:

(tf_ob) kuk@kuk-desktop:/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-test1$ python3 deepstream_test_1.py sample_720p.mp4
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file sample_720p.mp4
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: TensorRT was linked against cuDNN 8.0.5 but loaded cuDNN 8.0.4
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.3.0 but loaded cuBLAS/cuBLAS LT 11.2.1
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: TensorRT was linked against cuDNN 8.0.5 but loaded cuDNN 8.0.4
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.3.0 but loaded cuBLAS/cuBLAS LT 11.2.1
0:00:00.986911101 24714 0x3fb1320 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:00.986957398 24714 0x3fb1320 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.987566540 24714 0x3fb1320 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully

###########
It just stops at this point with no response; GPU usage remains at zero, and there are no missing libraries.

The same behavior occurs even in the official Docker image - nvcr.io/nvidia/deepstream:5.1-21.02-triton

I can't figure out what is wrong.
Apart from this, the regular deepstream-app and its variants run flawlessly, with Kafka or otherwise.

Kindly help me to figure it out.

The test1 sample only accepts an H.264 elementary stream, not an MP4 container such as sample_720p.mp4 - that is why the pipeline stalls after the model loads.
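For what it's worth, you can check which kind of file you actually have by inspecting its first bytes: an Annex-B H.264 elementary stream starts with a `00 00 01` or `00 00 00 01` start code, while an MP4 container carries an `ftyp` box type at byte offset 4. A minimal sketch (the function names are my own, not part of the sample apps):

```python
import sys


def looks_like_h264_elementary(header: bytes) -> bool:
    """Annex-B H.264 elementary streams begin with a NAL start code."""
    return header.startswith(b"\x00\x00\x01") or header.startswith(b"\x00\x00\x00\x01")


def looks_like_mp4(header: bytes) -> bool:
    """MP4/MOV containers carry an 'ftyp' box type at byte offset 4."""
    return len(header) >= 8 and header[4:8] == b"ftyp"


if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], "rb") as f:
        head = f.read(8)
    if looks_like_h264_elementary(head):
        print("H.264 elementary stream - deepstream-test1 should accept this")
    elif looks_like_mp4(head):
        print("MP4 container - deepstream-test1 will stall on this")
    else:
        print("unrecognized header")
```

If you need an elementary stream from the MP4, a remux along the lines of `ffmpeg -i sample_720p.mp4 -an -c:v copy -bsf:v h264_mp4toannexb sample_720p.h264` should do it; the DeepStream samples also ship a ready-made `sample_720p.h264` under `/opt/nvidia/deepstream/deepstream-5.1/samples/streams/` that you can pass directly.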

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.