Tegra TX2 - CSI Camera Custom resolution issue

Hi,
We have developed a custom TX2 + ADV7180 (CVBS input) board, and the camera works fine with the regular GStreamer pipeline:

$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=720,height=507 ! videoconvert ! xvimagesink

Now we want to work with the nvarguscamerasrc plugin (libargus). The command we are using is:

$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=(int)720, height=(int)508, format=(string)NV12, framerate=(fraction)60/1' ! nvvidconv ! queue ! xvimagesink

The actual resolution is 720x507, but since nvarguscamerasrc only supports even dimensions, we changed it to 720x508.
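(As a side note, the resolutions and formats the capture driver actually advertises can be listed with v4l2-ctl from v4l-utils; the device node below is assumed from the v4l2src pipeline above.)

$ v4l2-ctl -d /dev/video0 --list-formats-ext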

When running this command, we get the error below:

SCF: Error InvalidState: Session has suffered a critical failure (in src/api/Session.cpp, function capture(), line 667)
Apr 22 11:50:51 nvidia-desktop nvargus-daemon[5484]: (Argus) Error InvalidState: (propagating from src/api/ScfCaptureThread.cpp, function run(), line 109)

Same issue when trying with 720x506 as well.

The “CSI Camera Supported Resolutions” section of the “Accelerated GStreamer User Guide” does not list 720x508. Does that mean it will not be possible to run nvargus at this resolution?

Looking forward to suggestions on this issue.

Thanks in advance.


Hi,
The Argus pipeline only supports Bayer-format sensors.
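For a non-Bayer source like the ADV7180, capture has to stay on v4l2src. As a minimal sketch, the conversion can still be offloaded from the ARM cores by letting nvvidconv copy the frames into NVMM memory; the device node and YUY2 caps are taken from the first post, while the even 720x480 output size and nvoverlaysink are only illustrative assumptions:

$ gst-launch-1.0 v4l2src device=/dev/video0 \
    ! video/x-raw,format=YUY2,width=720,height=507 \
    ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420,width=720,height=480' \
    ! nvoverlaysink

If nvvidconv rejects the odd 507-line input height, cropping to an even height first (for example with videocrop) may be needed.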

Thank you so much for the quick response ShaneCCC.

In our final software we want to use GStreamer with a pipeline that mixes multiple sources, as below:

gst-launch-1.0 -v -e \
  glvideomixer name=c \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1280 sink_0::height=720 \
    sink_1::xpos=1280 sink_1::ypos=0 sink_1::width=640 sink_1::height=360 \
    sink_2::xpos=1280 sink_2::ypos=360 sink_2::width=640 sink_2::height=360 \
    sink_3::xpos=0 sink_3::ypos=720 sink_3::width=1920 sink_3::height=360 \
  ! 'video/x-raw, width=1920, height=1080, framerate=30/1' \
  ! queue ! omxvp8enc ! webmmux streamable=true \
  ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/video.webm \
  nvcamerasrc sensor-id=2 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1' \
  ! nvvidconv ! tee ! queue ! c.sink_0 \
  nvcamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1' \
  ! nvvidconv ! tee ! queue ! c.sink_1 \
  nvcamerasrc sensor-id=0 ! nvvidconv ! tee ! queue ! c.sink_2 \
  videotestsrc is-live=true ! queue ! c.sink_3

*The adv7280 driver will feed the sensor-id=0 branch (the one going to c.sink_2).

We understand that we need to use nvcamerasrc to enable hardware acceleration of glvideomixer, something related to NVMM memory. Now that nvcamerasrc is deprecated, is nvarguscamerasrc necessary to accelerate the glvideomixer plugin?

Or is there any alternate method to take advantage of the underlying hardware without loading the ARM cores?

Only v4l2src can support the adv7280; neither nvcamerasrc nor nvarguscamerasrc supports it.

Hi @ShaneCCC,

Do you mean that nvcamerasrc and nvarguscamerasrc are only used to convert from Bayer to another color space?
Can I use the hardware acceleration capabilities of the TX2 without them?

Thanks

Yes, neither nvcamerasrc nor nvarguscamerasrc supports non-Bayer-format sensors.

Just to confirm: can I use the TX2's hardware acceleration without nvarguscamerasrc?

What kind of acceleration?

Using the glvideomixer plugin, doing the mixing on the GPU.

Yes, it can be used without nvarguscamerasrc/nvcamerasrc.
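For reference, a rough sketch of the earlier mixer pipeline with v4l2src feeding glvideomixer directly, shown with only two inputs; the /dev/video0 path and YUY2 caps come from the first post, the remaining element names are kept from the pipeline above, and the whole thing is only illustrative (glvideomixer is assumed to upload the raw frames to the GPU itself):

gst-launch-1.0 -e \
  glvideomixer name=c \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1280 sink_0::height=720 \
    sink_1::xpos=1280 sink_1::ypos=0 sink_1::width=640 sink_1::height=360 \
  ! gldownload ! 'video/x-raw, width=1920, height=1080, framerate=30/1' \
  ! videoconvert ! omxvp8enc ! webmmux streamable=true \
  ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/video.webm \
  v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=720,height=507 ! queue ! c.sink_0 \
  videotestsrc is-live=true ! queue ! c.sink_1

The compositing itself runs on the GPU inside glvideomixer, so neither nvarguscamerasrc nor nvcamerasrc is needed; only the encode still goes through omxvp8enc.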