DeepStream SDK 6.1 deepstream-app example issue: "nvv4l2decoder0: Failed to process frame."

Hi,
I’m trying to run the DeepStream SDK 6.1 sample deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt, but without success so far.

• Hardware Platform (Jetson / GPU)
The hardware platform is an Azure Standard_NV6 virtual machine with one NVIDIA Tesla M60 GPU, running VMware Photon OS 3 as the guest OS, with the NVIDIA Container Toolkit installed and the deepstream:6.1-triton Docker container pulled. Photon OS runs without an X server (headless).
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) -
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
root@NVidia01 [ ~ ]# nvidia-smi
Fri Aug 26 09:00:16 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla M60           Off  | 00003130:00:00.0 Off |                  Off |
| N/A   32C    P0    35W / 150W |      0MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

• Issue Type( questions, new requirements, bugs)
Starting deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt ends with

** INFO: <bus_callback:194>: Pipeline ready
Error String : Feature not supported on this GPU Error Code : 801
ERROR from nvv4l2decoder0: Failed to process frame.
Debug info: gstv4l2videodec.c(1747): gst_v4l2_video_dec_handle_frame (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/nvv4l2decoder:nvv4l2decoder0:
Maybe be due to not enough memory or failing driver
ERROR from qtdemux0: Internal data stream error.
Debug info: qtdemux.c(6605): gst_qtdemux_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0:
streaming stopped, reason error (-5)
Quitting
[NvMultiObjectTracker] De-initialized
App run failed

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

The following steps were made after Azure virtual machine start.

Step1:

# docker container start
`docker run --gpus all -it --rm --net=host -p 8000:8000/tcp -p 8001:8001/tcp -p 8002:8002/tcp -p 5400:5400/udp nvcr.io/nvidia/deepstream:6.1-triton`

# Download configuration files
git clone https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps.git
cd /opt/nvidia/deepstream/deepstream-6.1/deepstream_reference_apps/deepstream_app_tao_configs
cp -a * /opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models/

# Download models
apt-get install -y wget zip
cd /opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models/
./download_models.sh

Step2:
In deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt, the group [sink0] has been deactivated (enable=0) and [sink2] has been activated (enable=1). Additionally, in [sink2], codec=2 (2 = H.265) has been set, because [source0] uses uri=file://../../streams/sample_1080p_h265.mp4. The edited groups are sketched below.
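
For reference, the edited groups then look roughly like this (only the changed keys are shown; group and key names follow the standard deepstream-app configuration schema, everything else stays at its shipped value):

[source0]
enable=1
uri=file://../../streams/sample_1080p_h265.mp4

[sink0]
# display sink, disabled because the VM is headless
enable=0

[sink2]
# RTSP streaming sink
enable=1
codec=2    # 1 = H.264, 2 = H.265; matches the H.265 input file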

Step3:
deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
Protocol.txt (66.0 KB)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

How can I fix the errors “Error String : Feature not supported on this GPU Error Code : 801” and “ERROR from nvv4l2decoder0: Failed to process frame.”?
Since this is a headless installation with RTSP output, are there any prerequisites?

Please refer to the GPU decoding capabilities in the Video Encode and Decode GPU Support Matrix | NVIDIA Developer: the Tesla M60 does not support H.265 hardware decoding.

Does “not supported” mean there is no way to switch back to software, e.g. by configuring [sink2] enc-type=1 (= software) in deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt? Are there options to make the DS 6.1 example run on an M60?

  1. The “ERROR from nvv4l2decoder0” is a decoding error: the Tesla M60 does not support H.265 hardware decoding. You can try the sample_1080p_h264.mp4 source instead.
  2. Software encoding is supported, and per the link the Tesla M60 does support H.265 encoding, so you can try H.265 hardware encoding in the sink; see the sketch below.
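
A minimal sketch of those two changes, assuming the standard deepstream-app config keys (values not shown stay as shipped):

[source0]
enable=1
# switch to the H.264 clip, which the Tesla M60 can decode in hardware
uri=file://../../streams/sample_1080p_h264.mp4

[sink2]
enable=1
codec=2      # keep H.265 for the RTSP output; the M60 does support H.265 hardware encoding
enc-type=0   # 0 = hardware encoder, 1 = software encoder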

Hi @Fanzh,

Many thanks for your assistance! I wasn’t aware of the hardware dependencies. Thanks for the web link and for the hint to switch to sample_1080p_h264.mp4. It starts on the M60 as predicted.

How can the output at rtsp://localhost:8554/ds-test be viewed, e.g. as a web URL? The docker run command was started without e.g. “-p 8554:8554”. Even if I add it, I still don’t see any output of the inferenced sample.
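
(For reference, a common way to check such an RTSP output, assuming the port is reachable from the client machine, is to open the stream in a player; note that with --net=host the container already shares the host’s network stack, so -p mappings are not needed:)

# open the stream from a machine that can reach the VM (replace <vm-ip> accordingly)
ffplay rtsp://<vm-ip>:8554/ds-test
# or
vlc rtsp://<vm-ip>:8554/ds-test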

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.