Hello,
I have a Jetson Nano with a camera, and I have implemented a face detection program. I am displaying the face detection results on a connected screen using OpenCV, but I would also like to send them to a webpage. I am using rtmpsink (via GStreamer) to create the stream, and using OpenCV's VideoWriter to write image frames (OpenCV Mats) to it. The problem is the delay (about 4 seconds) between the video on the screen and the video on the webpage. Is this due to the encoding or to VideoWriter? What can I do to reduce it?
My code is:
writer.open("appsrc ! videoconvert ! video/x-raw,format=I420 ! omxh264enc ! video/x-h264,stream-format=(string)byte-stream,alignment=(string)au ! h264parse ! queue ! flvmux ! rtmpsink location=rtmp://localhost:1935/live/ ", 0, (double)30, cv::Size(640, 480), true);
// loop
writer << IMG_STREAM;
I have already run sudo nvpmodel -m 0 and sudo jetson_clocks.
Hi,
Please try to use nvv4l2h264enc and set maxperf-enable=1. From gst-inspect-1.0:
maxperf-enable : Enable or Disable Max Performance mode
flags: readable, writable, changeable only in NULL or READY state
Boolean. Default: false
You may run sudo tegrastats to check the system load, and see if you can get more clues from it.
Also, you can compare with a gst-launch command:
$ gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=640,height=480 ! clockoverlay ! nvvidconv ! nvv4l2h264enc maxperf-enable=1 ! h264parse ! queue ! flvmux ! rtmpsink
Switching to nvv4l2h264enc returns the following error:
[ WARN:4] global /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline
Using the gst-launch command with videotestsrc shows a video with the same 4-second latency.
Hi,
Please go to the GStreamer forum for further suggestions. We usually run RTSP on Jetson platforms and don't have much experience with RTMP. You may go to the forum and ask for help with the software encoder x264enc. Once you have a pipeline with acceptable latency, replace the software encoder with the hardware encoder to get better performance.
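For reference, a common starting point for low-latency H.264 with the software encoder is x264enc's zerolatency tuning combined with a streamable FLV mux. This is only a sketch to experiment with, not a verified Jetson pipeline; check each property with gst-inspect-1.0 on your system:

```shell
# Sketch of a lower-latency RTMP test pipeline using the software
# encoder x264enc. Property values here are suggestions to tune,
# not measured results:
gst-launch-1.0 videotestsrc is-live=1 \
  ! video/x-raw,width=640,height=480 \
  ! videoconvert \
  ! x264enc tune=zerolatency speed-preset=ultrafast key-int-max=30 \
  ! h264parse \
  ! flvmux streamable=true \
  ! rtmpsink location=rtmp://localhost:1935/live/ sync=false
```

tune=zerolatency disables B-frames and frame lookahead in x264, which are typical sources of multi-second encoder-side delay; a small key-int-max also lets players start decoding sooner. Note that part of the 4 seconds may come from buffering in the RTMP server and the web player rather than the encoder.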