Running `nvv4l2h264enc` in DGX Station A100 with Docker

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
DGX Station A100
• DeepStream Version
6.0
• Issue Type (questions, new requirements, bugs)
DeepStream for video encoding from docker

Hi,
I’m trying to use nvv4l2h264enc to encode a stream as part of my pipeline (rough sketch of the encoder branch below).
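For context, the encoder/RTSP-out branch looks roughly like this (a minimal sketch: videotestsrc stands in for the real source and inference elements, and the property values are illustrative, not my exact settings):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvv4l2h264enc consumes NVMM buffers, so nvvideoconvert uploads to
# "video/x-raw(memory:NVMM)" before the encoder and RTP payloader.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! nvvideoconvert "
    "! video/x-raw(memory:NVMM),format=NV12 "
    "! nvv4l2h264enc bitrate=4000000 ! rtph264pay name=pay0 pt=96 ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                             Gst.MessageType.ERROR | Gst.MessageType.EOS)
if msg and msg.type == Gst.MessageType.ERROR:
    print("Error:", msg.parse_error()[0].message)  # the error below surfaces here
pipeline.set_state(Gst.State.NULL)
```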

When I run my script, I get:

root@docker:/home/nvidia/dgx-test/python-deepstream-testsDS5/scripts# python3 dgx-test.py 
pygst initialized..
creating pipeline..
Creating source
Creating H264Parser 

Creating Decoder 

creating H264 Encoder
Creating H264 rtppay

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***



try gst-launch-1.0 rtspsrc location=rtsp://192.168.0.11:8554/ds-test latency=0 buffer-mode=auto ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! autovideosink

Starting to run pipeline
0:00:00.361111163 110440      0x2141210 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:02.038644256 110440      0x2141210 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:02.038720910 110440      0x2141210 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:02.040099407 110440      0x2141210 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-resource-error-quark: Could not get/set settings from/on resource. (13): gstv4l2object.c(3501): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/nvv4l2h264enc:encoder:

Looking into this, I suspect the issue is with nvv4l2h264enc, which probably relies on some hardware encoding resource that isn’t available inside Docker, e.g. something under /dev/ (see the quick check below).
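To see what the container can actually access, something like this helps (an illustrative probe only; the node patterns are assumptions based on typical Jetson vs. dGPU setups):

```python
import glob

# List NVIDIA-related device nodes visible from inside the container.
for pattern in ("/dev/nvidia*", "/dev/nvhost*", "/dev/video*"):
    print(pattern, "->", glob.glob(pattern) or "none")
```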

Can you help me find the correct recipe for this, please?

Cheers,
Ganindu.

There is no HW encoder on the A100. See: Video Encode and Decode GPU Support Matrix [NEW] | NVIDIA Developer

Hi, thanks for letting me know. Does that mean I can’t use nvv4l2h264enc on the DGX? I’m trying to make an RTSP sink (I already have the equivalent working on Jetson, so this is purely for convenience). Is there a suitable plugin I can use on the DGX? The OpenMAX ones don’t seem to be registered either (quick check below).
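(For reference, a quick way to probe which encoder factories are actually registered inside the container, using the GStreamer Python bindings; the element names are just the candidates discussed here:)

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# An element factory is only found if its plugin is registered.
for name in ("nvv4l2h264enc", "omxh264enc", "x264enc"):
    found = Gst.ElementFactory.find(name)
    print(name, "->", "registered" if found else "not found")
```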

Looking at the matrix and the whitepaper, it seems like the chances are very slim, but:


Can we use some sort of software encoder and expect decent performance from it?

Cheers,
Ganindu.

You cannot use nvv4l2h264enc with the A100.

It is also a HW encoder, and you cannot use any HW encoder with the A100.

There are SW encoders in public GStreamer (x264enc, for example). You only need nvvideoconvert to convert the buffer to “video/x-raw” before the encoder.
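For example, something along these lines (a minimal sketch, assuming x264enc from the public gst-plugins packages; property values are illustrative):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvvideoconvert copies the buffer from NVMM into system memory
# ("video/x-raw") so the CPU-based x264enc can consume it.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! nvvideoconvert ! video/x-raw,format=I420 "
    "! x264enc tune=zerolatency speed-preset=ultrafast bitrate=4000 "
    "! h264parse ! rtph264pay name=pay0 pt=96 ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```

In the real pipeline, replace fakesink with the RTSP server bin; note that x264enc’s bitrate property is in kbit/s.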

Hi Fiona!! Great stuff!!! That is the right answer!! Just tested it and it’s working beautifully!! It’s working so fast I don’t even have time to open the preview RTSP stream in VLC :P

Thanks a lot,
Ganindu.
