I’m trying to live-stream an RTSP source using hlssink. Since I wasn’t able to make it work with nvv4l2h264enc, I fell back to omxh264enc (probably because nvv4l2h264enc wasn’t generating key frames). The pipeline I’m currently testing is something like:
gst-launch-1.0 rtspsrc location="rtsp://some_resource" ! rtph264depay ! decodebin ! queue ! mux.sink_0 nvstreammux name=mux width=2560 height=1920 batch-size=1 ! nvmultistreamtiler width=2560 height=1920 ! omxh264enc ! mpegtsmux ! fakesink
Replacing the fakesink with hlssink makes no difference for the purpose of reproducing the issue. I get the following error with the pipeline above:
NvMMLiteVideoEncDoWork: Surface resolution (0 x 0) smaller than encode resolution (2560 x 1920)
VENC: NvMMLiteVideoEncDoWork: 4231: BlockSide error 0x4
Event_BlockError from 0BlockAvcEnc : Error code - 4
Sending error event from 0BlockAvcEncERROR: from element /GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0: GStreamer encountered a general supporting library error.
Additional debug info:
/dvs/git/dirty/git-master_linux/3rdparty/gst/gst-omx/omx/gstomxvideoenc.c(1331): gst_omx_video_enc_loop (): /GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0:
OpenMAX component in error state Bad parameter (0x80001005)
I get no error if I use nvv4l2h264enc instead of omxh264enc (with fakesink).
On the other hand, if I don’t use nvstreammux and nvmultistreamtiler/nvstreamdemux, it works:
gst-launch-1.0 rtspsrc location="rtsp://some_resource" ! rtph264depay ! decodebin ! queue ! omxh264enc ! mpegtsmux ! fakesink
Thanks in advance.
nvstreammux/nvmultistreamtiler/nvstreamdemux are specific to DeepStream SDK. If you don’t need deep learning inference, you should not need to link these elements. If you would like to use DeepStream SDK, you may start with deepstream-app.
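For reference, the reference app is launched with a config file; a typical invocation looks like the following (the config path matches the DeepStream 4.0 samples layout used elsewhere in this thread and may differ on your install):

$ deepstream-app -c /home/nvidia/deepstream-4.0/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt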
Hi DaneLLL, thank you for your answer.
I actually intend to use nvinfer, tracking, OSD, etc., and I have already developed another app in C, based on deepstream-app, that uses them.
So now I need to live stream, hopefully using hlssink. I’ll eventually use the rest of the DeepStream-specific elements, but for now I need these ones (I mux/tile or demux depending on some configuration).
Please refer to
$ gst-launch-1.0 uridecodebin uri=file:///home/nvidia/1080.mp4 ! mx.sink_0 nvstreammux width=1920 height=1080 batch-size=1 name=mx ! nvinfer config-file-path=/home/nvidia/deepstream-4.0/samples/configs/deepstream-app/config_infer_primary.txt unique-id=8 ! nvmultistreamtiler width=1920 height=1080 rows=1 columns=1 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! mpegtsmux ! hlssink
The omx plugins are not supported in DeepStream SDK. Please look at the documentation to see which elements are supported.
I tried that pipeline and it worked. After adapting mine to be similar to yours (but without nvinfer), I added nvosd after nvmultistreamtiler (with nvvideoconvert) and it worked: hlssink started updating the playlist and rotating the .ts files correctly.
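For anyone else reading: the segment and playlist paths, segment length and rotation can be controlled through hlssink’s properties. A sketch of the tail of the pipeline (the /tmp/hls paths below are placeholders, not from my actual setup):

$ ... ! nvv4l2h264enc ! h264parse ! mpegtsmux ! hlssink location=/tmp/hls/segment%05d.ts playlist-location=/tmp/hls/playlist.m3u8 target-duration=5 max-files=10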
Do you know why nvv4l2h264enc + hlssink doesn’t work correctly without nvosd?
Not sure, but hlssink does not support a raw H.264 stream, so you need a muxer such as mpegtsmux or qtmux.
In the complete version of my pipeline I already had mpegtsmux; nvv4l2h264enc apparently just wasn’t working well with it. I think it may not be creating I-frames or something like that (whatever triggers segment rotation), because hlssink wouldn’t create more segments or update the playlist file.
I’ve confirmed it: nvv4l2h264enc does not work with hlssink even with mpegtsmux. With a pipeline like nvv4l2h264enc ! mpegtsmux ! hlssink, only one segment is ever created and no playlist is ever generated.
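A possible workaround I haven’t fully verified: nvv4l2h264enc exposes iframeinterval, idrinterval and insert-sps-pps properties, which should force periodic IDR frames plus in-band SPS/PPS so that hlssink has key-frame boundaries to cut segments on. A sketch based on my earlier pipeline (property values are guesses for a 30 fps source):

$ gst-launch-1.0 rtspsrc location="rtsp://some_resource" ! rtph264depay ! decodebin ! queue ! mux.sink_0 nvstreammux name=mux width=2560 height=1920 batch-size=1 ! nvmultistreamtiler width=2560 height=1920 ! nvv4l2h264enc iframeinterval=30 idrinterval=30 insert-sps-pps=true ! h264parse ! mpegtsmux ! hlssink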