Hello,
I developed a GStreamer plugin based on nvivafilter that processes and combines two images. It works well when the input comes from two mp4 videos, using the following pipeline:
gst-launch-1.0 -e \
filesrc location=$left_path ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=3024,height=2280 ! mix. \
filesrc location=$right_path ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=3024,height=2280 ! mix. \
nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=3024 sink_1::ypos=0 ! \
video/x-raw\(memory:NVMM\),format=RGBA,width=6048,height=2280 ! \
nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
nvivafilter customer-lib-name=./lib-gst-myplugin.so pre-process=true cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=1512,height=570 ! \
autovideosink sync=false
However, I need to read the input from cameras. I do so using nvarguscamerasrc, but the pipeline fails after processing 4 frames with “NvRmChannelSubmit: NvError_IoctlFailed with error code 22” messages. Here is the pipeline:
gst-launch-1.0 -e \
nvarguscamerasrc sensor_id=0 ! video/x-raw\(memory:NVMM\),format=NV12,width=3024,height=2280,framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
nvarguscamerasrc sensor_id=1 ! video/x-raw\(memory:NVMM\),format=NV12,width=3024,height=2280,framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=3024 sink_1::ypos=0 ! \
video/x-raw\(memory:NVMM\),format=RGBA,width=6048,height=2280 ! \
nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
nvivafilter customer-lib-name=./lib-gst-myplugin.so pre-process=true cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=1512,height=570 ! \
autovideosink sync=false
I get the following error messages in the console:
> NvRmChannelSubmit: NvError_IoctlFailed with error code 22
> NvRmPrivFlush: NvRmChannelSubmit failed (err = 196623, SyncPointIdx = 12, SyncPointValue = 0)
> fence_set_name ioctl failed with 22
> NvDdkVicExecute Failed
> nvbuffer_composite Failed
> Got EOS from element "pipeline0".
> Execution ended after 0:02:16.247952709
> Setting pipeline to PAUSED ...
> Setting pipeline to READY ...
> GST_ARGUS: Cleaning up
> CONSUMER: Done Success
> GST_ARGUS: Done Success
> GST_ARGUS: Cleaning up
> CONSUMER: Done Success
> GST_ARGUS: Done Success
> Setting pipeline to NULL ...
> Freeing pipeline ...
I first suspected my own lib (lib-gst-myplugin.so) to be the cause, since it spends a long time (~2 minutes) on initialization during the first call to the pre-process method. I therefore tested the pipelines above, replacing lib-gst-myplugin.so with:
- ./lib-gst-dummy.so, a “dummy” lib that just calls std::this_thread::sleep_for() to spend the same amount of time in pre-process and in cuda-process as my original lib → this works with both pipelines (i.e. from mp4 and from cameras).
- ./lib-gst-dummyX.so, a non-existent lib → this works with the 1st pipeline (from mp4), but fails with the 2nd pipeline (from cameras) with the same log messages as above (NvRmChannelSubmit: NvError_IoctlFailed with error code 22, etc.).
According to this last test, the problem does not come from my lib-gst-myplugin.so library. However, I do not understand why reading from the cameras fails while reading from the mp4 files works.
Should I give up on nvivafilter, or does the issue come from something else?
Here is my configuration:
- Hardware: Jetson Nano Developer Kit, SoC: Tegra X1 (T210)
- JetPack: 4.6.1 (note we cannot update the JetPack, since our camera drivers are not supported by newer JetPack versions), L4T 32.7.1, Ubuntu 18.04.6 LTS
- GStreamer version: 1.14.5
- 2 identical cameras: IMX477-160, 12.3 Mpixels
Thanks!