std::string url = "rtsp://192.168.1.12:554/h265/ch1/main/av_stream ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1280,height=720,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink sync=false";
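For reference, a pipeline string like this is normally assembled and handed to OpenCV's GStreamer backend (`cv::VideoCapture` with `cv::CAP_GSTREAMER` in C++, `cv2.VideoCapture` in Python). The sketch below, in Python, shows one common form; note that it adds an explicit `rtspsrc location=` element at the front, which is an assumption on my part about how the string is fed to `gst_parse_launch`, and the camera address is simply the one from the post:

```python
# Sketch: build the appsink pipeline string from the post and note how it
# would be opened with OpenCV's GStreamer backend. Camera IP/path are the
# ones from the post; adjust for your own stream.

def build_pipeline(uri, width=1280, height=720):
    """Assemble the appsink pipeline discussed in the thread.

    Mind the spacing around every `!` (e.g. `!appsink` with no space
    makes the launch string fail to parse) and close the caps strings.
    """
    return (
        f"rtspsrc location={uri} ! "
        "rtph265depay ! h265parse ! nvv4l2decoder ! nvvidconv ! "
        f"video/x-raw,width={width},height={height},format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink sync=false"
    )

if __name__ == "__main__":
    url = build_pipeline("rtsp://192.168.1.12:554/h265/ch1/main/av_stream")
    print(url)
    # With OpenCV built with GStreamer support, this would be opened as:
    #   import cv2
    #   cap = cv2.VideoCapture(url, cv2.CAP_GSTREAMER)
    #   ok, frame = cap.read()   # frame is a BGR array (cv::Mat in C++)
```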
I read frames from 16 cameras simultaneously in 16 threads, and I got the following performance results:
Jetson AGX, mode 30W ALL:
Does the pipeline above not invoke the hardware decoder for the RTSP stream? Why can I not see any GPU usage, while the CPU usage is so high?
Could you please tell me the reason?
Please refer to discussion in:
[Gstreamer] nvvidconv, BGR as INPUT
For generating frame data in BGR format, we need to do the conversion on the CPU. This takes significant CPU usage.
Do you have any other method for converting frame data to BGR, particularly for RTSP streams on Jetson AGX?
I have checked the discussion about BGR input for nvvidconv, and it uses "decodebin" in place of "rtph265depay ! h265parse ! nvv4l2decoder". What is the difference between them? Can it be faster?
rtspsrc location=rtspt://usr:firstname.lastname@example.org/Streaming/Channels/101 ! decodebin ! nvvidconv ! video/x-raw,format=BGRx,width=1061,height=600 ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink sync=0
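On the decodebin question: `decodebin` inspects the stream at runtime and auto-plugs a depayloader/parser/decoder chain, and on Jetson it should normally pick the same `nvv4l2decoder`, so decoding speed is expected to be comparable; the explicit chain merely pins the elements and skips autoplugging. A small sketch contrasting the two launch strings (the camera address is the one from the post; the "same decoder is selected" point is my assumption about a typical Jetson plugin setup):

```python
# Sketch: the two pipeline variants discussed in the thread.
# `decodebin` auto-plugs depay/parse/decode at runtime; the explicit
# chain fixes the elements for an h265 stream up front.

SRC = "rtspsrc location=rtsp://192.168.1.12:554/h265/ch1/main/av_stream"
TAIL = ("nvvidconv ! video/x-raw,width=1280,height=720,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! queue ! appsink sync=false")

# Explicit elements: only works for h265, no autoplug step.
explicit = f"{SRC} ! rtph265depay ! h265parse ! nvv4l2decoder ! {TAIL}"

# decodebin: handles any codec the installed plugins support.
auto = f"{SRC} ! decodebin ! {TAIL}"

print(explicit)
print(auto)
```

Either string can be passed to `cv::VideoCapture` with `cv::CAP_GSTREAMER`; the `videoconvert`-to-BGR stage at the tail is the CPU-heavy part in both variants.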
Can you please tell me how to decode an RTSP stream with jetson_multimedia_api and convert it to cv::Mat?
The jetson_multimedia_api samples demonstrate the hardware functions, and the input is an h264/h265 stream. To use jetson_multimedia_api, you would need to implement the code for depayloading the RTSP stream yourself.
For using OpenCV, an optimal way is to use CUDA filters, which can map an RGBA buffer instead of BGR. Please take a look at this sample and see if it can be applied to your use case:
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL
The CUDA filters sample just opens one local camera, but our purpose is to pull a stream from RTSP and convert it to cv::Mat.
Do you have a similar demo for this case without high CPU usage?
Since the hardware converter (VIC engine) in Jetson does not support the BGR format, converting RGBA to BGR has to rely on the CPU cores. For using cv::Mat, this is the optimal solution on Jetson platforms.
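To see why this step is CPU-bound: BGRx-to-BGR is a pure byte reorder, dropping one padding byte per pixel, but a naive implementation still touches every byte of every frame, which adds up quickly over 16 streams of 1280x720. A minimal pure-Python sketch of the conversion (illustration only; real pipelines do this in `videoconvert` or with vectorized array slicing):

```python
# Sketch of what the videoconvert stage does for BGRx -> BGR:
# per pixel, keep the B, G, R bytes and drop the padding byte.
# Done naively on the CPU, this walks every byte of every frame.

def bgrx_to_bgr(buf):
    """Convert a packed BGRx byte buffer to packed BGR."""
    out = bytearray()
    for i in range(0, len(buf), 4):
        out += buf[i:i + 3]          # keep B, G, R; drop the x byte
    return bytes(out)

# Two pixels: pure blue and pure red, each with a padding byte.
frame = bytes([255, 0, 0, 0,   0, 0, 255, 0])
print(bgrx_to_bgr(frame))  # b'\xff\x00\x00\x00\x00\xff'
```

With NumPy the same reorder is `arr[:, :, :3]` on a BGRx array, which is far faster per frame but still runs on the CPU, matching the explanation above.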
In the coming years, can you add hardware support for converting RGBA to BGR for cv::Mat?
We will pass the request to the hardware design team. Hardware architecture is not as flexible as software coding, so it may take a long time.
On Jetson platforms, please check VPI:
VPI - Vision Programming Interface: Main Page
If certain functions can be applied to your use case, please consider using VPI as a substitute.
We have found a new problem: the images are not clear when read via cv::VideoCapture with the following pipeline:
"rtsp://192.168.1.12:554/h265/ch1/main/av_stream ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1280,height=720,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink sync=false"
Could you please tell us whether there are any solutions to improve the image quality?
It sounds like an issue in the source. You would need to check with the camera vendor to see if there are parameters for tuning image quality.
The hardware decoder only decodes the compressed stream; it cannot improve image quality.