My camera on the TX2 streams to the server via RTSP.
(v4l2) I used /usr/src/jetson_multimedia_api/samples/12_camera_v4l2_cuda for the camera-reading part.
I use V4L2 to read the camera data and then send it out through the GStreamer pipeline opened by cv::VideoWriter.
Hi,
Since OpenCV uses the BGR format, which is not supported by the hardware engines in the Jetson chip, some CPU usage is required in this case. An optimal solution is to run a gstreamer pipeline and use an OpenCV CUDA filter. There is a sample for this use case: Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL
Please check the sample and see if you can apply it to your use case. If you have to run with cv::VideoWriter, please execute sudo nvpmodel -m 0 and sudo jetson_clocks to get the maximum throughput from the CPU cores.
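For reference, the two commands (run once per boot; on TX2, mode 0 is typically the MAXN power model):

```
# Select the maximum-performance power model (MAXN on TX2)
sudo nvpmodel -m 0
# Lock CPU/GPU/EMC clocks to their maximum rates
sudo jetson_clocks
```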
RGB → YUV conversion with videoconvert is very CPU-expensive. You may use HW conversion with nvvidconv instead. However, nvvidconv doesn't support BGR, but it does support BGRx and RGBA. So you may also try something like the sketch below:
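A minimal sketch of that idea with cv::VideoWriter, where videoconvert only performs the cheap BGR → BGRx step on the CPU and nvvidconv does the BGRx → NV12 conversion in hardware. The server URL, resolution, and framerate are placeholders for your setup:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    int w = 1920, h = 1080;   // placeholder: match your camera mode
    double fps = 30.0;        // placeholder: match your camera framerate
    // videoconvert only adds the X channel (cheap); nvvidconv then moves the
    // frames into NVMM memory and converts BGRx -> NV12 on the HW engine.
    cv::VideoWriter writer(
        "appsrc ! video/x-raw,format=BGR ! videoconvert "
        "! video/x-raw,format=BGRx ! nvvidconv "
        "! video/x-raw(memory:NVMM),format=NV12 "
        "! nvv4l2h264enc insert-sps-pps=true ! h264parse "
        "! rtspclientsink location=rtsp://<server>:8554/stream",  // placeholder URL
        cv::CAP_GSTREAMER, 0, fps, cv::Size(w, h), true);
    if (!writer.isOpened()) return -1;

    cv::Mat frame(h, w, CV_8UC3);
    for (;;) {
        // ... fill 'frame' with the captured BGR image here ...
        writer.write(frame);
    }
    return 0;
}
```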
You may want to explain the large-latency problem in more detail; it's not obvious what's happening without your camera.
I also think that five CPU-only videoconvert instances would probably not be a good solution, so the NVMM path is probably better.
I assume your camera provides UYVY format. I'm not sure which framerates it supports; you only set 25 fps in this first pipeline.
You may add the -v flag to gst-launch-1.0 so that you can see which caps are negotiated between plugins, as in the example below.
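For example (assuming /dev/video0 and UYVY; fakesink just discards the frames, so -v simply prints the negotiated caps):

```
gst-launch-1.0 -v v4l2src device=/dev/video0 \
  ! 'video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1' \
  ! fakesink
```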
You may also try the nvv4l2h264enc plugin instead of omxh264enc (OMX plugins are being deprecated on Jetson), as in the fragment below.
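A fragment showing the swap (nvv4l2h264enc expects NVMM buffers, so keep nvvidconv in front of it; the surrounding elements are placeholders):

```
... ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' \
  ! nvv4l2h264enc ! h264parse ! ...
```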
There may also be encoder-dependent ways to deal with latency; a few nvv4l2h264enc properties are sketched below.
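For instance, some nvv4l2h264enc properties that can affect latency and throughput (the values here are only illustrative; check gst-inspect-1.0 nvv4l2h264enc for what your release supports):

```
... ! nvv4l2h264enc maxperf-enable=true iframeinterval=30 \
      insert-sps-pps=true bitrate=4000000 ! h264parse ! ...
```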
Also be aware that setting latency=0 may not be the best option; allowing a few frames of latency may work better.
You may also try setting sync=false on rtspclientsink if you don't need synchronization; see the fragment below.
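Putting the last two suggestions together, a hypothetical sender tail, assuming the latency discussed above is rtspclientsink's latency property (verify with gst-inspect-1.0 rtspclientsink on your version; 200 ms is 5 frames at 25 fps):

```
... ! h264parse ! rtspclientsink location=rtsp://<server>:8554/stream \
      latency=200 sync=false
```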