Since you get frames in CPU memory from v4l2src and also want the output in CPU memory for multifilesink, you would use two nvvidconv instances (double nvvidconv), as nvvidconv expects at least one of its input or output to be in NVMM memory.
You may try:
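A sketch of such a pipeline, assuming a UYVY camera on /dev/video0 and a 1920x1080@30 mode (device, resolution, framerate and the raw output filename pattern are all assumptions, adjust for your sensor):

```shell
# First nvvidconv copies the CPU-allocated frames from v4l2src into NVMM memory;
# the second converts/copies back to CPU-allocated memory for multifilesink.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
  nvvidconv ! 'video/x-raw, format=BGRx' ! \
  multifilesink location=frame_%05d.raw
```

This runs only on a Jetson with the NVIDIA GStreamer plugins installed, so it is shown as a sketch rather than a tested command.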
Your solution is exactly what I want. Thank you for your kind reply.
May I ask the questions below, if it's not too much bother?
1. If I use just one nvvidconv, would my frame data be in GPU memory?
2. In the Jetson Nano case, do you think double nvvidconv performs better than a single videoconvert plugin?
Actually, I could change the format with videoconvert, but its performance was very bad. That's why I'm trying to use nvvidconv.
For 1, I should first say that it is the same physical memory, as Jetsons have an integrated GPU sharing the same memory chip as the CPU.
I tend to say 'CPU memory', but I should say CPU-allocated memory. NVMM memory in GStreamer refers to a contiguous (DMA-able) memory allocation suitable for HW accelerators.
The first nvvidconv has its input in CPU-allocated memory, so it will output into NVMM memory. Yes, this could be used by the GPU, or by the video encoder, for example.
For 2, I think that unless you are using a low resolution and framerate, the double nvvidconv would be faster than videoconvert. Note that in my pipeline I haven't specified whether the UYVY to BGRx conversion should be done by the first nvvidconv or by the second one. You may also try both variants to see if it makes a difference.
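To pin the conversion to one instance or the other, you can set the format in the caps between the two nvvidconv elements. A sketch of the two variants, assuming the same UYVY source caps as before (device and modes are assumptions):

```shell
# Variant A: conversion done by the FIRST nvvidconv (BGRx already in NVMM).
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=BGRx' ! \
  nvvidconv ! 'video/x-raw, format=BGRx' ! \
  multifilesink location=frame_%05d.raw

# Variant B: conversion done by the SECOND nvvidconv (UYVY kept in NVMM).
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=UYVY' ! \
  nvvidconv ! 'video/x-raw, format=BGRx' ! \
  multifilesink location=frame_%05d.raw
```

Benchmarking both on your Nano (for example with `sink=fakesink` and `-v`) should show whether the placement matters for your resolution and framerate.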