Has anyone successfully live-streamed the nvcamera on the TX2 to an IP address and port that can be accessed by an HTML5 client?
I have tried several GStreamer variations, but none of them, thus far, has performed as I would have expected.
Hi ben,
We don’t have experience with HTML5 streaming; other users may share their experience.
There is an HLS case for your reference:
[url]No Video for HLS on iOS - Jetson TX1 - NVIDIA Developer Forums[/url]
Hi DaneLLL,
If we can set HTML5 aside for a moment, it would be helpful if I could just stream the nvcamera, or any other camera, over GStreamer to TCP or UDP.
This works and opens a window showing the camera:
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)2592, height=(int)1458, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)BGRx' ! videoconvert ! appsink
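For context, that pipeline (everything after gst-launch-1.0) is being handed to OpenCV's VideoCapture, roughly like this (a sketch; the display loop is illustrative):

import cv2

# the pipeline string ends in appsink so OpenCV can pull frames from it
pipeline = ("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)2592, height=(int)1458, "
            "format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! "
            "video/x-raw, width=(int)1920, height=(int)1080, format=(string)BGRx ! "
            "videoconvert ! appsink")
cap = cv2.VideoCapture(pipeline)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("camera", frame)  # this is the window that opens
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()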
This fails immediately and doesn’t even try to push the camera stream out:
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)2592, height=(int)1458, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)BGRx' ! videoconvert ! tcpserversink host=127.0.0.1 port=4321
Here is the error:
OpenCV Error: Unspecified error (GStreamer: cannot find appsink in manual pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/deep-learning/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp, line 805
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/home/nvidia/deep-learning/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp:805: error: (-2) GStreamer: cannot find appsink in manual pipeline
in function cvCaptureFromCAM_GStreamer
Any help with that would be appreciated. (From the error, it appears the pipeline string is being parsed by OpenCV’s GStreamer capture, which insists on an appsink element, rather than by gst-launch itself.)
First, you may need to expose the onboard camera as a generic webcam via a loopback device (e.g., the v4l2loopback module).
Then, if you open the attached 1.html file, the browser will display the content from the onboard devkit camera.
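A rough outline of the loopback step, as a sketch (the module options and the /dev/video2 device number are assumptions that depend on your setup):

# load the loopback module, which creates a virtual /dev/video* node
sudo modprobe v4l2loopback devices=1
# feed the onboard camera into the assumed loopback device /dev/video2
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=(string)I420' ! videoconvert ! 'video/x-raw, format=(string)YUY2' ! v4l2sink device=/dev/video2

Applications, including browsers, can then pick up /dev/video2 as an ordinary webcam.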
In my opinion, you may be interested in WebRTC:
What's New in HTML5 Media - YouTube
As shown in the YouTube presentation, the sequence below creates a stream:
<b>video.src = window.URL.createObjectURL(stream)</b>
More reference:
https://stackoverflow.com/questions/9506145/how-do-i-broadcast-video-from-my-webcam-with-html5
1.html (969 Bytes)
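For context, the getUserMedia pattern from that presentation (which the attached file and the Stack Overflow answer build on) looks roughly like this; a sketch using the older, pre-srcObject API, with the element id as an illustrative assumption:

<script>
// grab the local webcam and attach its stream to a <video> element
navigator.getUserMedia({ video: true }, function (stream) {
  var video = document.getElementById('preview'); // hypothetical element id
  video.src = window.URL.createObjectURL(stream);
  video.play();
}, function (err) {
  console.log('getUserMedia failed:', err);
});
</script>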
Some posts about streaming:
[url]https://devtalk.nvidia.com/default/topic/1018689/jetson-tx2/vlc-playing-gstreamer-flow/post/5187270/#5187270[/url]
[url]https://devtalk.nvidia.com/default/topic/1014789/jetson-tx1/-the-cpu-usage-cannot-down-use-cuda-decode-/post/5188538/#5188538[/url]
[url]https://devtalk.nvidia.com/default/topic/1027423/jetson-tx2/gstreamer-issue-on-tx2/post/5225972/#5225972[/url]
[url]Code to send a bayer video feed from a TX1 to an h.264 encoder to an RTP sink - Jetson TX1 - NVIDIA Developer Forums[/url]
For your reference.
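As a rough illustration of the pattern those threads discuss, an H.264-over-RTP sender on the Jetson and a matching receiver might look like this (a sketch; the encoder element, host, and port are assumptions):

# sender (on the TX2): encode the camera with the hardware H.264 encoder and pay out over RTP/UDP
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! omxh264enc ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5000
# receiver (adjust host/port for your network)
gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp, encoding-name=(string)H264' ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink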
All,
So this is sort of a 90-degree turn, and correct me if I am wrong, but my suspicion is that if I am running a neural network that opens the camera through GStreamer (via OpenCV, in either Python or C++), that pipeline will hold the device and block any additional GStreamer consumers of it. For example, if I am running DetectNet or YOLO and using OpenCV and GStreamer to open and process the video stream, I would be unable to open a second terminal window on the host and stream that device to anything else.
With that understood, I think what I will likely do (and have validated that I can do) is create a ZeroMQ socket within the neural-network program and push the video out frame by frame over a pub/sub socket at some set rate (say, 30 fps). I could then implement the other side of that socket from my web server and supply those frames to the HTML client by whatever means I then choose. I could optionally do that for any number of camera devices on the host. A sketch of the publisher side is below.
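A minimal sketch of the publisher side, assuming pyzmq and OpenCV are available (the port, topic name, and JPEG encoding are illustrative choices, not a definitive design):

import cv2
import zmq

context = zmq.Context()
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5555")  # hypothetical port

cap = cv2.VideoCapture(0)  # or the GStreamer appsink pipeline string from above
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # JPEG-compress each frame so subscribers receive self-contained messages
    ok, jpeg = cv2.imencode(".jpg", frame)
    if ok:
        pub.send_multipart([b"camera0", jpeg.tobytes()])
cap.release()

The web-server side would connect a SUB socket, subscribe to the camera0 topic, decode the JPEGs, and hand them to the HTML client by whatever transport it prefers.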
I think my second option seems like a more elegant production solution, following design guidelines, than opening up 5-10 GStreamer pipelines (depending on the number of cameras).
Thank you for all of your help!