Thank you for your suggestions, @DaneLLL. They both work.
However, they raise some more questions. Perhaps I should explain what I’m trying to achieve.
The above 200fps MJPEG stream is from an IR camera. I’m ultimately testing whether the Nano will be able to transcode two such streams from the left and right cameras. These are greyscale, so YUV order and chroma subsampling probably don’t matter.
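For context, the end goal looks roughly like this (a sketch only — the device nodes, bitrate, and output files are placeholders, and the caps assume the cameras expose MJPEG over V4L2):

```shell
# Hypothetical dual-camera transcode: each 200 fps MJPEG stream is
# decoded and re-encoded to H.264 in a single gst-launch process.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! image/jpeg,framerate=200/1 ! jpegparse ! \
    nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
    nvv4l2h264enc bitrate=200000 ! h264parse ! matroskamux ! filesink location=left.mkv \
  v4l2src device=/dev/video1 ! image/jpeg,framerate=200/1 ! jpegparse ! \
    nvjpegdec ! 'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
    nvv4l2h264enc bitrate=200000 ! h264parse ! matroskamux ! filesink location=right.mkv
```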
I understand my current options are:
1. `jpegparse ! nvjpegdec ! 'video/x-raw'`: Performance is pretty good (~2400 fps). However, it drops significantly (to ~950 fps) on adding nvv4l2h264enc, presumably because the raw frames are copied in and out of NVMM.
2. `nvv4l2decoder mjpeg=1 ! 'video/x-raw(memory:NVMM)'`: Performance is OK (~1800 fps) but drops to an unacceptable 120 fps on adding H.264 as follows: `nvv4l2decoder mjpeg=1 ! 'video/x-raw(memory:NVMM)' ! nvvidconv ! nvv4l2h264enc bitrate=200000`.
3. nvjpegdec patched to output into NVMM, with jpegparse. Not tested yet.
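For reference, the benchmark pipelines I'm comparing look roughly like this (`sample.mjpeg` is a placeholder input file; fakesink stands in for the real output so only decode/encode throughput is measured):

```shell
# Option 1: nvjpegdec decodes to CPU memory; nvvidconv copies frames
# into NVMM for the hardware encoder (~2400 fps decode-only, ~950 fps with encode).
gst-launch-1.0 filesrc location=sample.mjpeg ! jpegparse ! nvjpegdec ! \
  'video/x-raw' ! nvvidconv ! nvv4l2h264enc bitrate=200000 ! fakesink

# Option 2: nvv4l2decoder keeps frames in NVMM end to end
# (~1800 fps decode-only, but only ~120 fps with encode).
gst-launch-1.0 filesrc location=sample.mjpeg ! jpegparse ! nvv4l2decoder mjpeg=1 ! \
  'video/x-raw(memory:NVMM)' ! nvvidconv ! nvv4l2h264enc bitrate=200000 ! fakesink
```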
With the revised focus on greyscale JPEG, what would you advise? While option #1 sounds best, I'm still a little worried about the unnecessary copying, as the Nano will have other things to do and I want to maximise DMA use. Option #2 is far too slow, and I'm not sure why. Option #3 will probably perform best, but I'm concerned about maintainability, since in its current form it re-links libraries from several apt packages.
Finally, can you assure me you’re treating the SIGSEGV as a bug? I’d hate to have segmentation faults in production due to something as banal as a frame rate setting.