Currently I am using this pipeline to get video frames in appsink:
uridecodebin uri="file:///home/stiv/lpr/data/dual_day.avi" ! nvvideoconvert ! appsink
cout << "Error opening video stream or file" << endl;  // printed when cap.isOpened() is false
cap >> frame;                      // read one frame from the appsink
cout << frame.channels() << endl;  // prints 1, i.e. a single-channel (grayscale) Mat
imshow( "Frame", frame );
As a result I get a frame with 1 channel, but it works. How do I get a color image, as it appears in the input stream? (If I put an EGL sink at the end, I see the color video stream.)
It should work by sending BGR buffers to appsink:
uridecodebin uri="file:///home/stiv/lpr/data/dual_day.avi" ! nvvideoconvert ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink
You may try I420 also. Please refer to the sample in
UPDATE: finally it worked:
uridecodebin uri="file:///home/stiv/lpr/data/dual_day.avi" ! nvvideoconvert ! video/x-raw,format=(string)BGRx ! videoconvert ! video/x-raw,format=(string)BGR ! appsink
Thank you so much. It would be cool to understand why exactly I need this line; is it covered anywhere in the documentation?
The advice from your second link unfortunately doesn’t work, because it refers to nvvidconv, which is not found. I also tried flip-method=2, and it doesn’t help either.
BGR is the main format in OpenCV. Since nvvideoconvert doesn’t support this format, videoconvert is required to convert to BGR.
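The two-stage conversion in the working pipeline reflects exactly this: nvvideoconvert can output BGRx (a 4-byte padded format it does support), and videoconvert then drops the padding byte to produce the 3-byte BGR layout OpenCV expects. A sketch of the caps chain (element names as used in this thread):

```
nvvideoconvert ! video/x-raw,format=BGRx   # GPU converter: BGR unsupported, BGRx is
videoconvert   ! video/x-raw,format=BGR    # CPU converter: strips the X byte for OpenCV
```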
nvvidconv is available on Jetson platforms.
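On a Jetson platform the same chain would use nvvidconv in place of nvvideoconvert — an untested sketch, assuming nvvidconv can output BGRx on your JetPack version:

```
uridecodebin uri="file:///home/stiv/lpr/data/dual_day.avi" ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink
```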
Hello, when I push an RTSP stream to the GStreamer RTSP server, can you give me some suggestions about the RTSP server launch command?
But videoconvert is terribly slow (9 fps).
Is there any way to get nvvidconv or nvvideoconvert output into the OpenCV appsink?
When using appsink to get buffers in OpenCV, you have to pay for the extra memcpy() from the NVMM buffer to CPU buffers. A similar discussion is in
You may check whether you can apply your case to deepstream-app to achieve optimal performance.
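To see how much of the 9 fps is the conversion chain itself rather than OpenCV, you can benchmark the pipeline outside OpenCV — a sketch, assuming gst-launch-1.0 and fpsdisplaysink (from gst-plugins-bad) are installed:

```shell
# Replace appsink with fpsdisplaysink/fakesink to measure raw conversion throughput;
# -v prints the fps measurements to the console.
gst-launch-1.0 -v uridecodebin uri="file:///home/stiv/lpr/data/dual_day.avi" ! \
  nvvideoconvert ! video/x-raw,format=BGRx ! videoconvert ! \
  video/x-raw,format=BGR ! fpsdisplaysink video-sink=fakesink text-overlay=false
```

If this already runs at ~9 fps, the bottleneck is the CPU-side videoconvert plus the NVMM-to-CPU copy, not the appsink/OpenCV side.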