DeepStream Python app test1 execution failing

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GeForce MX350
• DeepStream Version 6.1.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.3-1+cuda11.6
• NVIDIA GPU Driver Version (valid for GPU only) Driver Version: 510.47.03
• Issue Type( questions, new requirements, bugs)
I am running one of the sample Python test apps (test1):
python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264
I am getting the error below:

Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.125352400 6933 0x25b3b60 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.963225953 6933 0x25b3b60 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:00.963737760 6933 0x25b3b60 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.964652241 6933 0x25b3b60 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error String : Feature not supported on this GPU
Error Code : 801
Error: gst-resource-error-quark: Failed to process frame. (1): gstv4l2videodec.c(1747): gst_v4l2_video_dec_handle_frame (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2-decoder:
Maybe be due to not enough memory or failing driver


I am not able to understand what I should do to make it work. I also want to mention that it is test app1 that is not working, whereas if I run deepstream-test1-usbcam it works absolutely fine. Why am I getting this error when I try to run test app1?

For DS 6.1, you should use TensorRT 8.2.5.1.

Thank you for your response. Could you tell me how I should use TensorRT?

Install the correct TensorRT version, 8.2.5.1, for DS 6.1.

Thanks for your response, but as per the instructions the recommended TensorRT version is 8.4.1.5. Could you tell me where I can get TensorRT version 8.2.5.1?

Please follow the document:
https://docs.nvidia.com/metropolis/deepstream/6.1/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu

Thanks for your response. I've followed the entire setup once again and I still have the same problem.

There is no HW video decoder in the MX350. See the Video Encode and Decode GPU Support Matrix | NVIDIA Developer.

This device is not suitable for running DeepStream.

Thanks for your response; I was not aware of this. Could you tell me if the A100 can run DeepStream, as I have access to a cluster node that has an A100? For the HW video encoder it says N/A for the A100; what does that mean?

It means not available. It cannot support any hardware encoder.

Alright, can you suggest an alternative in that case? I only have access to an A100.

You can try to use a software encoder plugin, such as x264enc, by modifying the source code.
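For reference, a minimal sketch of building such a software-encode stage with the Python GStreamer bindings; the elements around x264enc (nvvideoconvert, capsfilter, h264parse) and the I420 caps are assumptions about a typical encode branch, not code taken from a specific DeepStream sample:

# Hedged sketch: swap a hardware encoder (e.g. nvv4l2h264enc) for the software
# x264enc element. The surrounding elements are assumptions about a typical
# encode branch, not code from a specific DeepStream sample.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# x264enc consumes system-memory raw video, so convert out of NVMM first.
convert = Gst.ElementFactory.make("nvvideoconvert", "pre-encode-convert")
caps = Gst.ElementFactory.make("capsfilter", "encoder-caps")
caps.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
encoder = Gst.ElementFactory.make("x264enc", "software-encoder")  # replaces nvv4l2h264enc
parser = Gst.ElementFactory.make("h264parse", "encode-parser")

# The elements would then be added to the pipeline and linked in this order:
# ... ! nvvideoconvert ! capsfilter ! x264enc ! h264parse ! ...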

Thanks. Is there an alternative for the decoder as well? Let's say I don't want to do decoding on the GPU; is that possible?

You can try to use the "libav" plugin as your decoder.
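As a quick sanity check (a diagnostic I am assuming here, not part of any sample), you can verify from Python that the libav H.264 decoder element is installed; on Ubuntu it ships in the gstreamer1.0-libav package:

# Check whether the software (libav) and hardware (nvv4l2) H.264 decoders
# are registered with GStreamer. Purely a diagnostic, not part of the samples.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
for name in ("avdec_h264", "nvv4l2decoder"):
    factory = Gst.ElementFactory.find(name)
    print(f"{name}: {'available' if factory else 'missing'}")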

Thank you so much for your timely response.

I am currently trying to understand DeepStream test app1.
What changes should I make in the app to use the "libav" plugin? At present it is set to decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");

gst_element_factory_make ("avdec_h264", "libav-decoder");
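Since you are running the Python sample, here is a minimal sketch of the corresponding change in deepstream_test_1.py, assuming the variable names used by that sample (pipeline, h264parser, streammux, sys). The nvvideoconvert/capsfilter stage is my assumption, added because avdec_h264 (from the gst-libav plugin) outputs system-memory frames while nvstreammux expects NVMM buffers; this is a sketch, not a verified patch.

# Hedged sketch against deepstream_test_1.py: replace nvv4l2decoder with the
# software avdec_h264 decoder. pipeline, h264parser, streammux and sys are
# assumed to exist as in the sample; these lines replace the sample's original
# decoder creation, pipeline.add(decoder) and decoder-to-streammux linking.
decoder = Gst.ElementFactory.make("avdec_h264", "libav-decoder")
if not decoder:
    sys.stderr.write("Unable to create avdec_h264; is gstreamer1.0-libav installed?\n")

# Copy the decoded frames into NVMM memory so nvstreammux will accept them
# (this extra stage is an assumption, not part of the original sample).
conv = Gst.ElementFactory.make("nvvideoconvert", "decoder-convert")
capsfilter = Gst.ElementFactory.make("capsfilter", "decoder-caps")
capsfilter.set_property("caps",
    Gst.Caps.from_string("video/x-raw(memory:NVMM), format=NV12"))

pipeline.add(decoder)
pipeline.add(conv)
pipeline.add(capsfilter)

# Original linking was h264parser -> decoder -> streammux.sink_0
h264parser.link(decoder)
decoder.link(conv)
conv.link(capsfilter)

sinkpad = streammux.get_request_pad("sink_0")
srcpad = capsfilter.get_static_pad("src")
srcpad.link(sinkpad)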
We suggest you learn some basic concepts of GStreamer:
https://gstreamer.freedesktop.org/
Also, if it is a new issue, you can open a new topic. Thanks.

Thanks, you can close this thread. I have another doubt; I will open another thread.
