I am developing a camera connected to an NVIDIA Xavier system. The camera produces an H.264 stream using NVIDIA's CUDA-based encoder. I want to create an HLS stream from it so that the camera output can be viewed in a web browser. I think I am almost there, but one hurdle remains: hlssink (and hlssink2) only ever produce a single segment file, and I cannot get them to split the stream into multiple segments. Firstly, the following doesn't work:
gst-launch-1.0 v4l2src device=/dev/video2 ! mpegtsmux ! hlssink target-duration=2 max-files=5
It fails with "could not create handle for stream". Then I tried adding h264parse:
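For reference, running the failing pipeline with a higher debug level should show why mpegtsmux rejects the stream. This is just a diagnostic sketch; the exact messages depend on the GStreamer version:

```shell
# Raise the log level to see why mpegtsmux cannot create a pad for
# the v4l2src output (possibly missing stream-format/alignment caps
# that h264parse would otherwise provide).
GST_DEBUG=3 gst-launch-1.0 v4l2src device=/dev/video2 ! mpegtsmux ! hlssink target-duration=2 max-files=5
```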
gst-launch-1.0 v4l2src device=/dev/video2 ! h264parse ! mpegtsmux ! hlssink target-duration=2 max-files=5
This works, but only a single segment file is ever created, and it keeps growing in size. Then I tried hlssink2 as follows:
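My current suspicion is that hlssink can only cut a new segment at a keyframe, so the camera stream may lack periodic IDR frames. A rough way to check is to watch the per-buffer flags with identity (keyframes are the buffers without the delta-unit flag); a sketch, not something I have fully verified:

```shell
# Print per-buffer info for the parsed camera stream; buffers that
# are NOT marked delta-unit are keyframes. If only the very first
# buffer lacks the flag, there is nothing for hlssink to split on.
gst-launch-1.0 v4l2src device=/dev/video2 ! h264parse ! identity silent=false ! fakesink
```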
gst-launch-1.0 v4l2src device=/dev/video2 ! h264parse ! hlssink2 target-duration=2 max-files=5
The result is practically the same as before: a single segment file is created. To debug the setup, I encoded a video to H.264 using the NVIDIA Jetson samples (03_video_cuda_enc); the video has about 1500 frames. Then I tried using filesrc instead of v4l2src:
gst-launch-1.0 filesrc location=test.h264 ! h264parse ! mpegtsmux ! hlssink target-duration=2 max-files=5
The result is the same as before: I only get one segment file (segment00000.ts). To debug further, I used videotestsrc:
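To check whether test.h264 actually contains more than one keyframe (as far as I understand, hlssink can only split at keyframes), something like this should work, assuming ffprobe is installed:

```shell
# Count the I-frames in the encoded file; if this prints 1,
# the whole file is a single GOP and can never be segmented.
ffprobe -v error -select_streams v -show_frames test.h264 | grep -c "pict_type=I"
```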
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! h264parse ! mpegtsmux ! hlssink target-duration=2 max-files=5
This works perfectly: 2 second segments are created and the playlist.m3u8 file is also correctly updated. Then I wanted to try getting rid of h264parse:
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! mpegtsmux ! hlssink target-duration=2 max-files=5
This also worked perfectly, which confused me a little: with my camera's H.264 output I had to add h264parse before mpegtsmux, but after x264enc it apparently isn't needed. I assumed my camera's encoder and x264enc would produce compatible streams.
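In case it matters, the negotiated caps of the two streams can be compared with -v and fakesink (a sketch using the elements from my pipelines above); differences in stream-format (byte-stream vs. avc) or alignment (nal vs. au) might explain why only one of them needs h264parse:

```shell
# Inspect the caps the camera stream negotiates after parsing...
gst-launch-1.0 -v v4l2src device=/dev/video2 ! h264parse ! fakesink

# ...and the caps x264enc produces directly (num-buffers so it exits);
# compare the stream-format and alignment fields of the two outputs.
gst-launch-1.0 -v videotestsrc is-live=true num-buffers=30 ! x264enc ! fakesink
```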
To summarize, I have one main question and one sub-question. The main question: why does hlssink not split my H.264 files (or the camera stream) into multiple segments? The sub-question: why do I need h264parse for my stream, while it is not necessary for videotestsrc with x264enc? Thanks for any help.