Nvarguscamerasrc exits Internal data stream error

I’m trying to build a GStreamer pipeline to process data coming in from a camera, but the pipeline keeps exiting with an “Internal data stream error” from nvarguscamerasrc.

Trying to isolate it, I’ve found that this command is the smallest that produces this error:

gst-launch-1.0 -v nvarguscamerasrc sensor-id=3 ! nvvidconv ! 'video/x-raw, format=(string)NV12' ! fakesink

If I shorten the command to gst-launch-1.0 -v nvarguscamerasrc sensor-id=3 ! fakesink, it executes without error. It also runs cleanly if I change the output format requested from nvvidconv to video/x-raw(memory:NVMM), format=(string)NV12. Unfortunately, neither of those is good enough for my purposes.
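For reference, here is the full form of the variant that runs without error, i.e. the same minimal pipeline with nvvidconv’s output caps left in NVMM memory:

```shell
# Same minimal pipeline, but requesting NVMM (device) memory from nvvidconv;
# this version does not produce the "Internal data stream error".
gst-launch-1.0 -v nvarguscamerasrc sensor-id=3 ! nvvidconv \
    ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! fakesink
```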

I’m using an AR0231 camera connected to the NVCSI port on the Jetson, with drivers from the camera’s manufacturer that expose it at /dev/video3. Here’s the output of running v4l2-ctl -d3 --list-formats-ext on my Jetson:

        Type: Video Capture

        [0]: 'BA12' (12-bit Bayer GRGR/BGBG)
                Size: Discrete 1928x1208
                        Interval: Discrete 0.033s (30.000 fps)

Does anyone know what would cause nvarguscamerasrc to produce these errors when I try to use it?

Update: Running gst-inspect-1.0 nvarguscamerasrc gives this information about the capabilities of the pad:

Pad Templates:
  SRC template: 'src'
    Availability: Always
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
                 format: { (string)NV12 }
              framerate: [ 0/1, 2147483647/1 ]

It seems to me that the format nvarguscamerasrc produces and the format my camera produces are two different formats (NV12 vs. BA12). I would think my camera’s drivers would work with nvarguscamerasrc, since they came with instructions that said to use it to grab data from the cameras, but it’s possible that the drivers or instructions I got are wrong.

Sadly, nvarguscamerasrc seems to be closed-source, so I can’t check what it’s able to do, but it looks to me like this mismatch would cause issues unless nvarguscamerasrc is able to convert between the two formats (which it may or may not be able to do).

hello jarred1,

it’s necessary to include video/x-raw(memory:NVMM), ... caps to enable a bayer sensor with nvarguscamerasrc.
for example,
you should be able to render the preview to an HDMI screen as follows:
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1' ! nvoverlaysink -e

please also refer to Applications Using V4L2 IOCTL Directly to verify basic sensor functionality during sensor bring-up.
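As a sketch of that V4L2 sanity check (the exact flags and the Jetson-specific bypass_mode control are my assumption of typical usage; adjust the device node and format to match your sensor), capturing raw Bayer frames directly from the driver looks something like:

```shell
# Hypothetical bring-up check: grab 30 raw BA12 frames straight from /dev/video3,
# bypassing the ISP (bypass_mode=0 is a Jetson-specific V4L2 control).
v4l2-ctl -d /dev/video3 \
    --set-fmt-video=width=1928,height=1208,pixelformat=BA12 \
    --set-ctrl bypass_mode=0 \
    --stream-mmap --stream-count=30 --stream-to=ar0231.raw
```

If frames arrive here, the driver and CSI capture path are working, and any remaining failure is on the Argus/GStreamer side.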


The solution was that the pipeline specified by the people who provided the camera drivers was wrong. From the link @JerryChang provided, I used this command instead (I changed the dimensions from the link’s example because my camera is 1928x1208, but it is otherwise the same):

gst-launch-1.0 nvarguscamerasrc num-buffers=200 ! 'video/x-raw(memory:NVMM),width=1928, height=1208, framerate=30/1, format=NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4 -e

And that was able to work where the command that I was originally provided wasn’t.
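Based on the same fix, my guess at a working version of the original fakesink pipeline that still reaches system memory (untested; it just adds the NVMM caps between nvarguscamerasrc and nvvidconv) would be:

```shell
# Hypothetical: declare NVMM caps on nvarguscamerasrc's output, then let
# nvvidconv copy the frames out to system memory as NV12.
gst-launch-1.0 nvarguscamerasrc sensor-id=3 \
    ! 'video/x-raw(memory:NVMM), width=1928, height=1208, format=NV12, framerate=30/1' \
    ! nvvidconv ! 'video/x-raw, format=NV12' ! fakesink
```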
