Displayed video splits abnormally when using nvv4l2camerasrc

• Hardware Platform: Jetson Xavier NX
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version:

I use UDP streaming to transfer data from a server to a client host.
Here are the pipelines I use:

  • Server:
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! video/x-raw ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h265enc insert-sps-pps=1 maxperf-enable=1 idrinterval=15 ! rtph265pay ! udpsink host= port=5000
  • Client:
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp, encoding-name="H265", payload=96 ! rtph265depay ! avdec_h265 ! xvimagesink sync=false
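(As an aside, for anyone reproducing this: the client caps above are slightly underspecified, which can make RTP depayloading fragile. A more fully specified variant is sketched below; the media=video and clock-rate=90000 fields are assumptions, 90 kHz being the standard RTP video clock rate, so adjust them if your payloader is configured differently.)

```shell
# Hedged sketch: client pipeline with more complete RTP caps.
# media=video and clock-rate=90000 are assumed defaults.
gst-launch-1.0 udpsrc port=5000 \
  ! "application/x-rtp, media=video, clock-rate=90000, encoding-name=H265, payload=96" \
  ! rtph265depay ! avdec_h265 ! xvimagesink sync=false
```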

Client host displays video well like this:

However, if I change the server pipeline to the command below (using nvv4l2camerasrc instead of v4l2src):

gst-launch-1.0 nvv4l2camerasrc device=/dev/video2 ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=UYVY' ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=NV12" ! nvv4l2h265enc insert-sps-pps=1 idrinterval=15 maxperf-enable=1 ! rtph265pay ! udpsink host= port=5000

Client host will display abnormally like this:

My questions are:

  1. According to this Jetson Nano FAQ and this Macrosilicon USB - #5 by DaneLLL, I know I must customize the nvv4l2camerasrc source to solve the wrong-color issue. I would greatly appreciate it if you could guide me on how to customize and rebuild the source. Of course it will depend on the environment I use, the DeepStream version, and the camera's format; the DeepStream version is listed at the top of this post and the camera's format is in the "For more information" section below.

  2. Why is the displayed video split so abnormally?

For more information:

  1. Camera format is:
  2. If I add width=640 and height=512 to the caps, as in this command:
gst-launch-1.0 nvv4l2camerasrc device=/dev/video2 ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=UYVY, width=640, height=512' ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=NV12" ! nvv4l2h265enc insert-sps-pps=1 idrinterval=15 maxperf-enable=1 ! rtph265pay ! udpsink host= port=5000

Then the displayed video on screen is:


So, could you guide me on how to choose the correct format for the camera, please?
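(For reference, one way to see which formats the camera can actually produce is v4l2-ctl from the v4l-utils package; this is a generic sketch, with the device node /dev/video2 taken from the pipelines above.)

```shell
# List every pixel format, resolution, and frame rate the camera exposes.
# Requires the v4l-utils package (e.g. sudo apt install v4l-utils).
v4l2-ctl -d /dev/video2 --list-formats-ext

# Show the format currently negotiated on the device.
v4l2-ctl -d /dev/video2 --get-fmt-video
```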

Hi, it seems that my camera needs YUV420 and NV12 support, as in this post: How to build nvv4l2camerasrc which support YUV420 and NV12. In that post, @DaneLLL said: "But for YUV420 and NV12, would need to consider data alignment in most cases".
So, is this the "data alignment" issue he/she mentioned that I am hitting when I change UYVY to NV12?

If the source format is NV12 or YUV420, we would suggest using the v4l2src plugin. You can link it like: v4l2src ! nvvidconv ! 'video/x-raw(memory:NVMM)'
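(A full pipeline along those lines might look like the sketch below, modeled on the server pipeline earlier in this thread. The explicit NV12/640x512 caps and the <CLIENT_IP> placeholder are assumptions for illustration; substitute your camera's actual format and your client's address.)

```shell
# Hedged sketch: capture with v4l2src into CPU memory, then let
# nvvidconv copy/convert the frames into NVMM for the hardware encoder.
gst-launch-1.0 v4l2src device=/dev/video2 \
  ! "video/x-raw, format=NV12, width=640, height=512" \
  ! nvvidconv ! "video/x-raw(memory:NVMM), format=NV12" \
  ! nvv4l2h265enc insert-sps-pps=1 idrinterval=15 maxperf-enable=1 \
  ! rtph265pay ! udpsink host=<CLIENT_IP> port=5000
```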

Yeah, I’ve used a pipeline with v4l2src and it worked normally, but I would like to use nvv4l2camerasrc instead.
However, it seems that nvv4l2camerasrc does not support those color formats yet.
I would like to ask an additional question: besides UYVY, which color formats does nvv4l2camerasrc currently support? I see that nvbufsurface.h defines many color formats, including NV12.
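(One way to answer this without reading the source is to query the caps the installed plugin advertises; a sketch below, with the exact output depending on the JetPack/DeepStream release.)

```shell
# Print the src pad template of nvv4l2camerasrc; the "format" field of
# its capabilities lists the color formats this build supports.
gst-inspect-1.0 nvv4l2camerasrc | grep -A 10 "SRC template"
```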

Please refer to the topics:
Image format conversion with NvBufferTransform - #7 by DaneLLL
Multimedia API capture with multi-planer color space - #3 by DaneLLL

You may check whether the camera source can generate frame data that fits the alignment of NvBuffer, and then customize nvv4l2camerasrc or the 12_camera_v4l2_cuda sample to give it a try.

640x512 is not a large resolution, so it should be fine to capture into a CPU buffer first and then copy to an NVMM buffer (NvBuffer). However, if you would like to do the customization, please check the topics above.


Thank you.
