Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson Nano, Xavier, DGPU
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
The project I’m currently working on needs a web frontend. Does the DeepStream team have any recommendations on the best way to get video from a GStreamer pipeline into an HTML5 video tag?
Ideally, we’d like a solution that works on all of the platforms specified above. I’ve noticed vp8/vp9 support is hit and miss (Xavier, Nano, x86), so webmmux is out; I guess h264/h265 is the only option. My current understanding is that we should use “encoder ! qtmux streamable=true ! tcpserversink host=0.0.0.0 port=8081” and put the host and port in the video tag’s src. However, from googling I see that this approach has mixed results. Any working configurations or examples would be very much appreciated.
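For concreteness, here is a rough sketch of the kind of pipeline I mean. videotestsrc stands in for our real source, and the software encoder, fragment duration, and port are placeholder assumptions (on Jetson we’d swap in nvv4l2h264enc); untested:

```shell
# Sketch of the fragmented-MP4-over-TCP idea; element choices are assumptions.
gst-launch-1.0 videotestsrc is-live=true \
  ! video/x-raw,width=640,height=480,framerate=30/1 \
  ! x264enc tune=zerolatency key-int-max=30 \
  ! h264parse \
  ! qtmux streamable=true fragment-duration=1000 \
  ! tcpserversink host=0.0.0.0 port=8081
```

One thing that may explain the mixed results people report: the browser fetches the video tag’s src with an HTTP GET, while tcpserversink just pushes raw bytes without speaking HTTP, so whether playback works can depend on the browser being lenient.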
Could you please go to the GStreamer forum to get more information about HTML5 in GStreamer?
We have samples for launching an RTSP server, and a post about HTTP live streaming:
We don’t have experience with the HTML5 protocol; it would be better to ask on the GStreamer forum for further knowledge. Once you have working pipelines, you can integrate them with DeepStream elements such as nvinfer and nvv4l2h264enc.
I will go there and ask about this part since you don’t have experience in this area.
Is the RTSP server launch a deepstream-app example?
No, test-launch is an example from the GStreamer community.
We have a similar implementation in the sink group. Please check the development guide.
RTSP is not compatible with web browsers; it needs custom plugins and a lot of hacks. You could look into creating an HLS stream with the h264 codec, or a WebRTC stream with the MJPEG codec, or use an RTMP sink with h264 and have a server do the streaming for web clients (typically using h265 + HLS + HTML5).
Thanks. I was looking into hlssink. I got it spitting out video, but it needs a web server running. Setting up an Nginx instance is tomorrow’s job, I think. If I get something working I will post notes here, since there seems to be some interest in this.
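For anyone following along, this is roughly the shape of test pipeline I used to verify hlssink output (videotestsrc, paths, and durations are placeholders; hlssink expects MPEG-TS input, hence mpegtsmux):

```shell
# Test sketch: software encode -> MPEG-TS -> hlssink; paths are placeholders.
gst-launch-1.0 videotestsrc is-live=true \
  ! x264enc tune=zerolatency key-int-max=60 \
  ! h264parse \
  ! mpegtsmux \
  ! hlssink target-duration=5 max-files=10 \
      location=/var/www/hls/segment%05d.ts \
      playlist-location=/var/www/hls/playlist.m3u8
```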
Sure it would be interesting to see how this can be done.
We ended up getting HLS working with GStreamer and Nginx in a test pipeline. Thanks for the hlssink suggestion; it seems to work pretty well. We haven’t tested it with NVIDIA’s encoder, but I have no doubt it’ll work in our new GStreamer/DeepStream-based pipelines. Another option, pointed out by @jasonpgf2a, is kvssink, which may make more sense for the Amazon cloud. I suspect other cloud providers have similar elements.
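Since hlssink just writes files to disk, the Nginx side only needs to serve them statically. A minimal sketch of the server block we used (port and paths are assumptions specific to our layout):

```nginx
# Minimal static server for the hlssink output directory; adjust paths/port.
server {
    listen 8080;
    location /hls/ {
        root /var/www;                      # serves /var/www/hls/playlist.m3u8
        add_header Cache-Control no-cache;  # playlists change constantly
        add_header Access-Control-Allow-Origin *;
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
    }
}
```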
I have successfully done this using RTMPSink directed to an Nginx server, which can be configured to handle an RTMP stream and serve either HLS or DASH. It will serve an .m3u8 file which can be embedded in a <video> tag. The downside to this is latency. Another option I have used successfully is to run a WebRTC server like Janus: you point Janus at your RTSP stream from DeepStream, then point your browser at Janus to display the stream in a <video> tag.
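For the first option, a sketch of the nginx-rtmp-module configuration this relies on (application name, paths, and fragment length are placeholders):

```nginx
# Sketch: nginx-rtmp-module ingests RTMP and repackages it as HLS.
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            hls on;
            hls_path /var/www/hls;  # also needs an http block serving this dir
            hls_fragment 5s;        # chunk length drives the latency floor
        }
    }
}
```

On the GStreamer side you would feed this with something like “… ! flvmux ! rtmpsink location=rtmp://server:1935/live/stream”, since rtmpsink expects FLV-muxed input.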
@mattcarp88, can I ask what sort of latency you were getting on the RTMP → Nginx → HLS path? Was DASH better?
HLS serves video “chunks” of a pre-determined length, e.g. 5 seconds. So Nginx will listen to the RTMP stream for 5 seconds, save a video file for that period of time, then send that to the browser. Thus the size of your video chunks is a lower bound on the latency: you can’t do any better than 5 seconds. I haven’t tested HLS vs DASH, but they both work on this same principle.
Thanks @mattcarp88. I’ll give this a test - I thought you were alluding to latencies other than the file-save time. I’ve been using kvssink from AWS, which also has about 5-6 s latency when using HLS (that’s the best I can get it). This is going via the AWS Kinesis Video Streams service.
Hi, did you have success with the NVIDIA encoder for the HLS stream? So far I have HLS working only with the GStreamer x264enc element. The NVIDIA encoder doesn’t seem to be able to chop the stream, resulting in only one segment being produced.
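In case it helps anyone hitting the same issue: hlssink can only start a new segment at a keyframe, so one thing worth checking is the hardware encoder’s keyframe interval. A hedged sketch of what I would try on Jetson (iframeinterval and insert-sps-pps are nvv4l2h264enc properties; the specific values, paths, and use of videotestsrc here are assumptions, untested):

```shell
# Sketch: force regular IDR frames so hlssink has keyframes to cut on.
gst-launch-1.0 videotestsrc is-live=true \
  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' \
  ! nvv4l2h264enc iframeinterval=30 insert-sps-pps=true \
  ! h264parse \
  ! mpegtsmux \
  ! hlssink target-duration=5 \
      location=/var/www/hls/segment%05d.ts \
      playlist-location=/var/www/hls/playlist.m3u8
```

If the encoder only ever emits one IDR frame at the start, hlssink never finds another cut point, which would match the single-segment behavior described above.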