I’m trying to build a GStreamer pipeline to process data coming in from a camera, but the pipeline keeps exiting with an “Internal data stream error” coming from nvarguscamerasrc.
Trying to isolate it, I’ve found that this is the smallest command that produces the error:
If I shorten the command to gst-launch-1.0 -v nvarguscamerasrc sensor-id=3 ! fakesink, it executes without error. It also executes without error if I change the output format requested from nvvidconv to video/x-raw(memory:NVMM), format=(string)NV12. Unfortunately, neither of those workarounds is good enough for my purposes.
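For reference, the two working variants look roughly like this on my setup (this is a sketch, not a verbatim copy of my terminal; fakesink stands in for the rest of my real pipeline, and sensor-id=3 corresponds to /dev/video3):

```shell
# Variant 1: no caps filter, frames are simply discarded -- runs without error
gst-launch-1.0 -v nvarguscamerasrc sensor-id=3 ! fakesink

# Variant 2: request NVMM NV12 from nvvidconv instead -- also runs without error
gst-launch-1.0 -v nvarguscamerasrc sensor-id=3 ! nvvidconv ! \
  'video/x-raw(memory:NVMM), format=(string)NV12' ! fakesink
```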
I’m using an AR0231 camera connected to the nvcsi port on the jetson, with drivers from my camera’s manufacturer that expose it at /dev/video3. Here’s the output of running v4l2-ctl -d3 --list-formats-ext on my jetson:
This looks to me like a mismatch: the format nvarguscamerasrc produces (NV12) and the format my camera produces (BA12, i.e. 12-bit Bayer) are two different formats. I would expect my camera’s drivers to work with nvarguscamerasrc, since they came with instructions saying to use it to grab data from the cameras, but it’s possible that the drivers I got are at fault.
Sadly, nvarguscamerasrc seems to be closed-source, so I can’t check what it is able to do, but it looks to me like this mismatch would cause issues unless nvarguscamerasrc can convert from Bayer itself (which it may or may not be able to do).
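One thing you can check even for a closed-source element is its pad templates, which list every caps format it claims to handle:

```shell
# Print nvarguscamerasrc's properties and pad templates;
# the "SRC template" section lists the output caps it advertises
gst-inspect-1.0 nvarguscamerasrc
```

In the L4T releases I’ve seen, the src template only advertises video/x-raw(memory:NVMM) output (NV12): the element reads the raw Bayer data through the Argus/ISP stack, which does the debayering, rather than exposing BA12 through GStreamer caps.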
It’s necessary to include video/x-raw(memory:NVMM), ... to enable a Bayer sensor with nvarguscamerasrc.
For example, you should be able to render the preview to an HDMI screen as follows:
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1' ! nvoverlaysink -e
The solution was that the pipeline specified by the people who provided the camera drivers was wrong. From the link @JerryChang provided, I used this command instead (I changed the dimensions from the link because my camera is 1928x1208, but it is otherwise the same):
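For completeness, assuming the link’s pipeline matches JerryChang’s example above, the version adapted to my setup would look roughly like this (1928x1208 and sensor-id=3 are my values; the 30/1 framerate is an assumption carried over from the example):

```shell
gst-launch-1.0 nvarguscamerasrc sensor-id=3 ! \
  'video/x-raw(memory:NVMM), width=1928, height=1208, format=(string)NV12, framerate=(fraction)30/1' ! \
  nvoverlaysink -e
```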