GStreamer fails with nvivafilter and nvargus: "NvRmChannelSubmit: NvError_IoctlFailed with error code 22"

Hello,

I developed a custom processing library for the GStreamer nvivafilter plugin, to process and combine two images. It works well when I feed it two mp4 videos, using the following pipeline:

gst-launch-1.0 -e  \
filesrc location=$left_path  ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=3024,height=2280 ! mix. \
filesrc location=$right_path ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=3024,height=2280 ! mix. \
nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=3024 sink_1::ypos=0 ! \
video/x-raw\(memory:NVMM\),format=RGBA,width=6048,height=2280 ! \
nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
nvivafilter customer-lib-name=./lib-gst-myplugin.so pre-process=true cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=1512,height=570 ! \
autovideosink sync=false

However, I need to read the input from cameras instead. I do so using nvarguscamerasrc, but the pipeline fails after processing 4 frames, with “NvRmChannelSubmit: NvError_IoctlFailed with error code 22” messages. Here is the pipeline:

gst-launch-1.0 -e  \
nvarguscamerasrc sensor_id=0 ! video/x-raw\(memory:NVMM\),format=NV12,width=3024,height=2280,framerate=30/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
nvarguscamerasrc sensor_id=1 ! video/x-raw\(memory:NVMM\),format=NV12,width=3024,height=2280,framerate=30/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=3024 sink_1::ypos=0 ! \
video/x-raw\(memory:NVMM\),format=RGBA,width=6048,height=2280 ! \
nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
nvivafilter customer-lib-name=./lib-gst-myplugin.so pre-process=true cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=1512,height=570 ! \
autovideosink sync=false

I get the following error messages in the console:

> NvRmChannelSubmit: NvError_IoctlFailed with error code 22
> NvRmPrivFlush: NvRmChannelSubmit failed (err = 196623, SyncPointIdx = 12, SyncPointValue = 0)
> fence_set_name ioctl failed with 22
> NvDdkVicExecute Failed
> nvbuffer_composite Failed
> Got EOS from element "pipeline0".
> Execution ended after 0:02:16.247952709
> Setting pipeline to PAUSED ...
> Setting pipeline to READY ...
> GST_ARGUS: Cleaning up
> CONSUMER: Done Success
> GST_ARGUS: Done Success
> GST_ARGUS: Cleaning up
> CONSUMER: Done Success
> GST_ARGUS: Done Success
> Setting pipeline to NULL ...
> Freeing pipeline ...

I first suspected my own lib (lib-gst-myplugin.so) of causing this problem, since it spends a long time (~2 minutes) on initialization during the first call to the pre-process method. I therefore tested the pipelines above, replacing lib-gst-myplugin.so with:

  • ./lib-gst-dummy.so, a “dummy” lib that just calls std::this_thread::sleep_for() to spend the same amount of time in pre-process and in cuda-process as my original lib (see the sketch after this list) → this works with both pipelines (i.e. from mp4 and from cameras)
  • ./lib-gst-dummyX.so, a non-existent lib → this works with the 1st pipeline (from mp4), but fails with the 2nd pipeline (from cameras), with the same log messages as above (NvRmChannelSubmit: NvError_IoctlFailed with error code 22, etc.).
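
For reference, the dummy lib is just the skeleton expected by nvivafilter with sleeps in it. A minimal sketch is below, assuming the CustomerFunction interface from the nvsample_cudaprocess sample; the exact header name and prototypes should be checked against customer_functions.h shipped with that sample, and the sleep durations here are placeholders:

#include <chrono>
#include <thread>
#include "customer_functions.h"  // interface header from the nvsample_cudaprocess sample (check exact name/prototypes there)

// Pre-process hook: nvivafilter calls it with the CPU-mapped surface planes.
// Simulate the ~2 minute initialization of the real lib on the first call only.
static void pre_process(void **sBaseAddr, unsigned int *smemsize,
                        unsigned int *swidth, unsigned int *sheight,
                        unsigned int *spitch, ColorFormat *sformat,
                        unsigned int nsurfcount, void **usrptr)
{
    static bool first_call = true;
    if (first_call) {
        std::this_thread::sleep_for(std::chrono::minutes(2));
        first_call = false;
    }
}

// CUDA-process hook: receives the frame as an EGLImage; here we only burn time.
static void gpu_process(EGLImageKHR image, void **usrptr)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

// nvivafilter looks up init()/deinit() in the customer lib and uses the hooks set here.
extern "C" void init(CustomerFunction *pFuncs)
{
    pFuncs->fPreProcess  = pre_process;
    pFuncs->fGPUProcess  = gpu_process;
    pFuncs->fPostProcess = NULL;
}

extern "C" void deinit(void) {}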

According to this last test, the problem is not related to my lib-gst-myplugin.so library. However, I do not understand what fails when reading from the cameras but not from the mp4 files.

Should I give up on nvivafilter? Or does the issue come from something else?

Here is my configuration:

  • Hardware: Jetson Nano Developer Kit, SoC: Tegra 210
  • JetPack: 4.6.1 (note we cannot update the JetPack, since our camera drivers are not supported by newer JetPack versions), L4T 32.7.1, Ubuntu 18.04.6 LTS
  • GStreamer version: 1.14.5
  • 2 identical cameras: IMX477-160, 12.3 MPixels

Thanks!

Hi,
Please help try the two cases:

  1. Do you observe the issue with default lib:

/usr/lib/aarch64-linux-gnu/libnvsample_cudaprocess.so

  2. Is the issue present if nvivafilter plugin is removed from the pipeline?

Hi DaneLLL,

Here are my results:

Do you observe the issue with default lib:

/usr/lib/aarch64-linux-gnu/libnvsample_cudaprocess.so

Yes, I have the same issue, with the same error messages (NvRmChannelSubmit: NvError_IoctlFailed with error code 22, etc.).

Is the issue present if nvivafilter plugin is removed from the pipeline?

No, I have no error when nvivafilter is removed.

Hi,
We will need to set up and replicate the issue to check further. Will update.

Hi,

We tested on a Jetson Nano + 2 IMX219 cameras on 32.7.4, and it ran without hitting the same error:

nvidia@nvidia-desktop:~$ gst-launch-1.0 -e  \
> nvarguscamerasrc sensor_id=0 ! video/x-raw\(memory:NVMM\),format=NV12,width=1920,height=1080,framerate=30/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
> nvarguscamerasrc sensor_id=1 ! video/x-raw\(memory:NVMM\),format=NV12,width=1920,height=1080,framerate=30/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
> nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=1920 sink_1::ypos=0 ! \
> video/x-raw\(memory:NVMM\),format=RGBA,width=3840,height=1080 ! \
> nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
> nvivafilter pre-process=true cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
> nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=960,height=270 ! \
> autovideosink sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

CONSUMER: Waiting until producer is connected...
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 1 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Available Sensor modes :
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
Redistribute latency...
^Chandling interrupt.
Interrupt: Stopping pipeline ...
EOS on shutdown enabled -- Forcing EOS on the pipeline
Waiting for EOS...
Got EOS from element "pipeline0".
EOS received - stopping pipeline...
Execution ended after 0:00:07.025354926
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
CONSUMER: ERROR OCCURRED
CONSUMER: ERROR OCCURRED
GST_ARGUS: Cleaning up
GST_ARGUS: Cleaning up
Setting pipeline to NULL ...
Freeing pipeline ...

Maybe the issue lies in the camera you use or your own plugin.

Hi DaveYYY,

In the pipeline you show, the input resolution is 1920x1080 px for each camera and the nvcompositor output is 3840x1080.

This also works on my side when I use this resolution (either with my lib or with /usr/lib/aarch64-linux-gnu/libnvsample_cudaprocess.so).

However, we need to use a higher resolution. Does it work on your side with input resolutions of 3024x2280 (or something close, depending on your cameras’ capabilities), adapting the nvcompositor output accordingly?

Thanks.

Hi,
Does it work if you use a smaller resolution? The supported resolution is 4K and it may not work properly if the resolution is above it.

YES, the highest mode supported on the IMX219 is 3264 x 2464 at 21 FPS.
We have also tried this combination:

nvidia@nvidia-desktop:~$ gst-launch-1.0 -e  nvarguscamerasrc sensor_id=0 ! video/x-raw\(memory:NVMM\),format=NV12,width=3264,height=2464,framerate=21/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. nvarguscamerasrc sensor_id=1 ! video/x-raw\(memory:NVMM\),format=NV12,width=3264,height=2464,framerate=21/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=3264 sink_1::ypos=0 ! video/x-raw\(memory:NVMM\),format=RGBA,width=6528,height=2464 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvivafilter pre-process=true cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=1632,height=616 ! autovideosink sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

CONSUMER: Waiting until producer is connected...
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 0 
   Output Stream W = 3264 H = 2464 
   seconds to Run    = 0 
   Frame Rate = 21.000000 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 1 
   Camera mode  = 0 
   Output Stream W = 3264 H = 2464 
   seconds to Run    = 0 
   Frame Rate = 21.000000 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
Redistribute latency...
^Chandling interrupt.
Interrupt: Stopping pipeline ...
EOS on shutdown enabled -- Forcing EOS on the pipeline
Waiting for EOS...
Got EOS from element "pipeline0".
EOS received - stopping pipeline...
Execution ended after 0:00:21.409865845
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
Setting pipeline to NULL ...
Freeing pipeline ...

Yes, our pipeline from nvarguscamerasrc works when we reduce the input resolution, e.g. to 1920x1080.

The supported resolution is 4K and it may not work properly if the resolution is above it.

Yet, it works with 3024x2280 inputs when the input is read from mp4 or from videotestsrc, or when we do not use nvivafilter (whatever the lib). So it does not look like the input resolution alone is the issue. See my reply below for a summary of the tests.

Thanks @DaveYYY for the test on IMX219 in 3264 x 2464 with 21 FPS.

On my side, I also tested the following:

1- Using IMX477 cameras at 3024x2280, 30 fps: replace nvivafilter with a “dummy” nvvidconv that flips the image. It worked properly. Here is the pipeline:

gst-launch-1.0 -e  \
nvarguscamerasrc sensor_id=0 ! video/x-raw\(memory:NVMM\),format=NV12,width=3024,height=2280,framerate=30/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
nvarguscamerasrc sensor_id=1 ! video/x-raw\(memory:NVMM\),format=NV12,width=3024,height=2280,framerate=30/1 ! nvvidconv flip-method=2 !  video/x-raw\(memory:NVMM\),format=RGBA ! mix. \
nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=3024 sink_1::ypos=0 ! \
video/x-raw\(memory:NVMM\),format=RGBA,width=6048,height=2280 ! \
nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
nvvidconv flip-method=5 ! 'video/x-raw(memory:NVMM), format=RGBA' ! \            <-- dummy test to replace nvivafilter
nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=1512,height=570 ! \
autovideosink sync=false

2- Replace the camera inputs with videotestsrc, simulating 3024x2280 at 30 fps. This also works, whatever LIBPATH is (my own lib, /usr/lib/aarch64-linux-gnu/libnvsample_cudaprocess.so, or a non-existent lib):

gst-launch-1.0 \
videotestsrc is-live=1 ! video/x-raw,format=NV12,width=320,height=240 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=3024,height=2280,framerate=30/1,format=NV12' ! comp.sink_0 \
videotestsrc is-live=1 ! video/x-raw,format=NV12,width=320,height=240 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=3024,height=2280,framerate=30/1,format=NV12' ! comp.sink_1 \
nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=3024 sink_1::ypos=0 ! 'video/x-raw(memory:NVMM),format=RGBA,width=6048,height=2280' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
nvivafilter customer-lib-name=$LIBPATH pre-process=true cuda-process=true ! 'video/x-raw(memory:NVMM),format=RGBA' ! \
nvvidconv ! video/x-raw\(memory:NVMM\),format=RGBA,width=1512,height=570 ! autovideosink sync=false

Here is a summary of the tests:

| Image source | Resolution | Pipeline with nvivafilter (any lib) | Pipeline with nvvidconv |
|---|---|---|---|
| videotestsrc | 3024 x 2280, 30 fps | OK | OK |
| mp4 | 3024 x 2280, 30 fps | OK | OK |
| IMX477 | 3024 x 2280, 30 fps | NvRmChannelSubmit: NvError_IoctlFailed with error code 22 | OK |
| IMX477 | 1920 x 1080, 30 fps | OK | OK |
| IMX219 | 3264 x 2464, 21 fps | OK | (not tested) |

So the issue occurs only when we use nvivafilter together with our IMX477 cameras at the desired resolution of 3024x2280, 30 fps (i.e. the one case that we need to work :-( ).
Could there be some bottleneck in nvivafilter?

Hello,
I am coming back to you, as I am still stuck with this problem.
Given all the tests summarized in my previous post, is there a chance this NvRmChannelSubmit: NvError_IoctlFailed with error code 22 error could be solved by a fix in nvivafilter?
Or should I consider reimplementing my plugin outside of it? Or could the issue still come from something else?

Hi,
We are not able to replicate the issue with the RPi camera v2. It looks specific to the IMX477. Since there is no issue with the nvvidconv plugin, you may customize that plugin to access the frame data through NvBuffer. The plugin is open source in the package:

Jetson Linux R32.7.4 | NVIDIA Developer
Driver Package (BSP) Sources

Please follow the instructions to build/run the plugin, and then do customization.
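
As a rough illustration of what accessing the frame data through NvBuffer can look like on L4T 32.x, here is a sketch based on the nvbuf_utils.h API; where exactly this would sit inside the customized plugin has to be taken from the plugin source:

#include "nvbuf_utils.h"  // NvBuffer API shipped with L4T 32.x

// Sketch: map one plane of an NVMM buffer (identified by its dmabuf fd)
// into CPU memory, touch the pixels, then sync back and unmap.
static int process_plane_on_cpu(int dmabuf_fd)
{
    NvBufferParams params;
    if (NvBufferGetParams(dmabuf_fd, &params) != 0)
        return -1;

    void *plane_ptr = NULL;
    if (NvBufferMemMap(dmabuf_fd, 0, NvBufferMem_Read_Write, &plane_ptr) != 0)
        return -1;
    NvBufferMemSyncForCpu(dmabuf_fd, 0, &plane_ptr);

    // ... operate on params.width[0] x params.height[0] pixels,
    //     params.pitch[0] bytes per row, starting at plane_ptr ...

    NvBufferMemSyncForDevice(dmabuf_fd, 0, &plane_ptr);
    NvBufferMemUnMap(dmabuf_fd, 0, &plane_ptr);
    return 0;
}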

OK, let’s give up on nvivafilter, then.
Since we combine the output of 2 cameras, I was rather thinking of starting from the nvcompositor source code and simplifying my pipeline.
Is this a good idea, or are there some hidden tricks for which I should rather start from nvvidconv anyway?

Hi,
Both nvvidconv and nvcompositor plugins are public, so you can customize either plugin.

Or you can get the NvBuffer in an appsink, like:
How to run RTP Camera in deepstream on Nano - #29 by DaneLLL
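
For illustration, getting the NvBuffer in an appsink typically boils down to pulling the sample and extracting the dmabuf fd, roughly like this (a sketch for L4T 32.x, error handling trimmed):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include "nvbuf_utils.h"

// new-sample callback of an appsink placed at the end of an NVMM pipeline.
static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data)
{
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return GST_FLOW_ERROR;

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        int dmabuf_fd = -1;
        // On L4T 32.x the mapped data of an NVMM buffer can be converted
        // into the dmabuf fd of the underlying NvBuffer.
        if (ExtractFdFromNvBuffer((void *)map.data, &dmabuf_fd) == 0) {
            // ... hand dmabuf_fd to NvBuffer / EGLImage / CUDA processing ...
        }
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}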

OK, I will then reimplement my plugin from the nvcompositor source code. If needed, I will ask for help in a separate topic.

Thanks for your information on this one.


I reimplemented my plugin from nvcompositor. The good news is that it is more efficient; however, I didn’t expect to spend several milliseconds retrieving CUeglFrames.
I opened a separate thread about this here.
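
For context, by “retrieving CUeglFrames” I mean the usual dmabuf fd → EGLImage → CUDA mapping chain, roughly sketched below (error handling omitted; it assumes a CUDA context is already current and uses the nvbuf_utils / CUDA-EGL interop calls available on L4T 32.x):

#include <cuda.h>
#include <cudaEGL.h>
#include "nvbuf_utils.h"

// Sketch: wrap a dmabuf fd as an EGLImage and map it into CUDA as a CUeglFrame.
static CUeglFrame map_egl_frame(EGLDisplay display, int dmabuf_fd,
                                EGLImageKHR *image, CUgraphicsResource *resource)
{
    CUeglFrame frame;
    *image = NvEGLImageFromFd(display, dmabuf_fd);
    cuGraphicsEGLRegisterImage(resource, *image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    cuGraphicsResourceGetMappedEglFrame(&frame, *resource, 0, 0);
    return frame;
}

// Release in reverse order once processing of the frame is done.
static void unmap_egl_frame(EGLDisplay display, EGLImageKHR image,
                            CUgraphicsResource resource)
{
    cuGraphicsUnregisterResource(resource);
    NvDestroyEGLImage(display, image);
}
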
Thanks
