Hello,
I’m running the same pipeline on my laptop and on a Jetson Nano, and it only gets blocked on the Jetson. I use an appsink at the end of the sink-pipe to get buffers of one frame for image processing, then push them into the source-pipe.
But it looks like the appsink pipeline isn’t prerolling any buffer, and these blocking messages are always shown:
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
This is the screenshot from the Jetson Nano.
I’ve tried both rtspsrc+decodebin and uridecodebin, but neither worked. The pipelines are like below.
sink-pipe:
uridecodebin uri=rtsp://~ ! videoconvert ! video/x-raw,format=BGR ! appsink
source-pipe:
appsrc ! video/x-raw,format=BGR,width=1280,height=720,framerate=30/1 ! videoconvert ! x264enc ! mpegtsmux ! hlssink max-files=10 target-duration=5 location=./segment%05d.ts playlist-location=./playlist.m3u8
What’s the meaning of BlockType and Why is my app opened in blocking mode?
Thank you.
Hi,
You need to have nvvidconv in sink-pipe. Please try
uridecodebin uri=rtsp://~ ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink
Thanks, now buffer flows!
But why do I have to add nvvidconv to the pipeline? Does nvvidconv work differently than videoconvert does?
And if I use video/x-raw(memory:NVMM) caps (which use the GPU, as I understand), how can I apply them?
I tried,
nvvidconvcap = gst_caps_from_string("video/x-raw(memory:NVMM), format=(string)BGRx");
also
nvvidconvcap = gst_caps_new_simple("video/x-raw", "format", G_TYPE_STRING, "BGRx", NULL);
features = gst_caps_features("memory:NVMM", 0, NULL);
gst_caps_set_features_simple(nvvidconvcap, features);
but they all cause errors:
GStreamer-CRITICAL **: 16:30:53.850: gst_mini_object_copy: assertion ‘mini_object != NULL’ failed
GStreamer-CRITICAL **: 16:30:53.851: gst_caps_get_structure: assertion ‘GST_IS_CAPS (caps)’ failed
and the uridecodebin pad is not linked to nvvidconv (GST_PAD_LINK_NOFORMAT).
I’m very new to GStreamer and to GPU programming, so any word helps me a lot! Thank you.
Hi,
The nvv4l2decoder should be picked by uridecodebin, so the decoded frames are in NVMM buffers. We need to convert and copy them to CPU buffers via nvvidconv. The pipeline would look like:
... ! nvv4l2decoder ! video/x-raw(memory:NVMM),format=NV12 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink
The hardware converter engine does not support BGR, so we convert to the close format BGRx and use videoconvert to re-sample it to BGR.
Thanks!
As I understand it, uridecodebin already outputs NVMM buffers, so I should add nvvidconv to convert them to CPU buffers.
And if I don’t want to convert to BGR (BGRx) but keep NV12 all the way to appsink, is that also possible?
I’m currently using the BGR format for OpenCV Mat, but I’m planning to use the GPU for image processing (the GpuMat class, I believe), so it would be great if I could keep the buffers on the GPU.
Thank you.
Hi,
You may check this sample and see if it can be applied to your use case.
Please note that re-installing OpenCV is not required. It is 4.1.1 from r32.2.3.
And please rename tegra_multimedia_api to jetson_multimedia_api.
Thank you very much, I’ll definitely try :)