I’m facing a weird issue with corrupted frames in my GStreamer pipeline using nvjpegdec.
A brief overview of what the pipeline is trying to achieve (detailed pipeline below; the actual pipeline is very long):
- The pipeline captures images at 60fps from a stereo camera at a resolution of 2560x960, limits the framerate to 8fps, and only decodes the MJPEG frames to raw video at 8fps.
- It then creates a tee for 2 separate branches:
- The first branch crops the 2560x960 stereo frame to 1280x960 and resizes it to 560x420. This is output to an appsink, from which a Python process grabs the image and sends it for further processing.
- The second branch crops the 2560x960 stereo frame to 1280x960, encodes it to H.264, and saves it as a video.
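To make the crop geometry of the two branches explicit, here is a small sketch (the helper function is hypothetical, purely to document the coordinates; it is not part of my pipeline):

```python
def left_sensor_crop(stereo_width, stereo_height):
    """Return (top, bottom, left, right) crop bounds, in the style of
    nvvidconv's crop properties, for the left half of a side-by-side
    stereo frame. Hypothetical helper for illustration only."""
    return (0, stereo_height, 0, stereo_width // 2)

# Left sensor of a 2560x960 stereo frame -> the 1280x960 region
# that both branches are supposed to keep.
print(left_sensor_crop(2560, 960))  # (0, 960, 0, 1280)
```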
v4l2src device=/dev/video0 io-mode=2 do-timestamp=true ! image/jpeg,width=2560,height=960,framerate=60/1 ! videorate ! image/jpeg,framerate=8/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! video/x-raw(memory:NVMM) ! tee name=t \
! queue leaky=2 flush-on-eos=true ! nvvidconv top=0 bottom=960 left=0 right=1280 ! video/x-raw,width=560,height=420,format=BGRx ! appsink max-buffers=1 drop=true wait-on-eos=false \
t. ! queue leaky=2 flush-on-eos=true ! videorate max-rate=8 ! nvvidconv top=0 bottom=960 left=0 right=1280 ! video/x-raw(memory:NVMM),format=NV12,width=1280,height=960,pixel-aspect-ratio=1/1 ! queue leaky=2 flush-on-eos=true ! nvv4l2h264enc control-rate=0 bitrate=450000 maxperf-enable=1 iframeinterval=10 insert-sps-pps=1 profile=4 preset-level=4 ! h264parse config-interval=1 disable-passthrough=true ! queue leaky=2 flush-on-eos=true ! splitmuxsink location=/vid max-size-time=10000000000 async-finalize=true muxer-factory=mpegtsmux sink-properties="properties,enable-last-sample=false"
From the pipeline above, it is clear that I am only using the image from the left sensor (the stereo camera outputs a stitched side-by-side image of the left and right views), and getting a 560x420 output to appsink and a 1280x960 video.
I have a Jetson Nano running 2 instances of that pipeline (2 cameras connected to 1 Jetson Nano), both running at the same time. On some occasions, I see corrupted frames in both the image and video output of 1 of the running pipelines (it’s always one, never both). At first, I thought the corruption was introduced somewhere after the tee, but by matching the images I get from appsink against the video from splitmuxsink, the corrupted frame is identical in both (an example is attached below). This points to the issue lying in the nvjpegdec element, or in an element before the tee. I’ve also verified that the camera is not faulty in any way by testing it on my local machine.
Taking a closer look at the corrupted frame, it seems the corrupted part of the frame came from the other sensor!!! On frames where the corruption was less severe, I could see image content from the right sensor (which is supposed to be cropped out in both the appsink and splitmuxsink branches).
I’ve also checked dmesg and there were no errors; CPU and memory usage were reasonable too.
This issue is a weird one, and hopefully someone can help me figure out its root cause.
The image is not extracted from the video, but grabbed by the appsink code. The image size is 560x420 and the video size is 1280x960.
This is the raw JPEG output from the camera. As you can see, the left image can’t see the door because it is blocked by the blue pillar, while the right image can see the door and only a small part of the pillar. If you compare this to the image above, you can clearly tell that the corrupted frame was partly built from the right image. It seems the right image is merging into / corrupting the left image somewhere in the crop / nvvidconv / nvjpegdec stage.
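For anyone trying to reproduce or quantify this, one way to confirm the “right sensor bleeding into the left crop” hypothesis is to compare rows of the corrupted left-sensor crop against the right half of the raw stereo frame. A minimal sketch with synthetic data (no camera needed; the helper names are my own, not part of any library):

```python
def split_stereo(frame, width):
    """Split a side-by-side stereo frame (list of pixel rows) into
    left and right halves."""
    half = width // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

def bleed_ratio(crop, right_half):
    """Fraction of rows in the supposed left-sensor crop that exactly
    match the right sensor's rows -- a crude corruption detector."""
    matches = sum(1 for c, r in zip(crop, right_half) if c == r)
    return matches / len(crop)

# Synthetic 4-row, 8-pixel-wide "stereo frame":
# left half is all 0s, right half is all 1s.
frame = [[0] * 4 + [1] * 4 for _ in range(4)]
left, right = split_stereo(frame, 8)

# Simulate the corruption: two rows of the left crop received
# the right sensor's data instead.
corrupted = [right[0], right[1], left[2], left[3]]
print(bleed_ratio(corrupted, right))  # 0.5
```

On real frames you would decode both outputs to raw pixels first (e.g. with OpenCV) and use an approximate comparison instead of exact row equality, but the idea is the same.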