I want to change the camera exposure and AWB while using detectnet-camera.

I am detecting objects with detectnet-camera from jetson-inference.
The camera is a CSI camera (OV5693).
Please let me know how to change the camera exposure, AWB, and other settings while detectnet-camera is running.
I tried nvgstcapture-1.0 [option], but I could not change the settings while the camera was streaming.
Ideally, I would like to use libargus with jetson-inference.
I am a beginner and do not understand this well, so please let me know.

hello mr.whatky,

please refer to the [L4T Multimedia API] package from the Jetson Download Center,
and check [API Modules] -> [Libargus Camera API] for more details.

Thank you for your reply.

I was able to change the exposure and white balance with argus_camera.

However, executing jetson-inference’s ./detectnet-camera while argus_camera is running generates the following error.
Please tell me how to make ./detectnet-camera work while argus_camera is running.

nvidia@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$ sudo ./detectnet-camera 
[sudo] password for nvidia: 
  args (1):  0 [./detectnet-camera]  

[gstreamer] initialized gstreamer, version
[gstreamer] gstreamer decoder pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink

detectnet-camera:  successfully initialized video device
    width:  1280
   height:  720
    depth:  12 (bpp)

detectNet -- loading detection network model from:
          -- prototxt    networks/ped-100/deploy.prototxt
          -- model       networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[TRT]  TensorRT version 2.1.2
[TRT]  attempting to open cache file networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[TRT]  loading network profile from cache... networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[TRT]  platform has FP16 support.
[TRT]  networks/ped-100/snapshot_iter_70800.caffemodel loaded
[TRT]  CUDA engine context initialized with 3 bindings
[TRT]  networks/ped-100/snapshot_iter_70800.caffemodel input  binding index:  0
[TRT]  networks/ped-100/snapshot_iter_70800.caffemodel input  dims (b=2 c=3 h=512 w=1024) size=12582912
[cuda]  cudaAllocMapped 12582912 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[TRT]  networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage  binding index:  1
[TRT]  networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage  dims (b=2 c=1 h=32 w=64) size=16384
[cuda]  cudaAllocMapped 16384 bytes, CPU 0x1018e0000 GPU 0x1018e0000
[TRT]  networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes  binding index:  2
[TRT]  networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes  dims (b=2 c=4 h=32 w=64) size=65536
[cuda]  cudaAllocMapped 65536 bytes, CPU 0x1019e0000 GPU 0x1019e0000
networks/ped-100/snapshot_iter_70800.caffemodel initialized.
[cuda]  cudaAllocMapped 16 bytes, CPU 0x101ae0000 GPU 0x101ae0000
maximum bounding boxes:  8192
[cuda]  cudaAllocMapped 131072 bytes, CPU 0x101be0000 GPU 0x101be0000
[cuda]  cudaAllocMapped 32768 bytes, CPU 0x1019f0000 GPU 0x1019f0000
default X screen 0:   1920 x 1080
[OpenGL]  glDisplay display window initialized
[OpenGL]   creating 1280x720 texture
loaded image  fontmapA.png  (256 x 512)  2097152 bytes
[cuda]  cudaAllocMapped 2097152 bytes, CPU 0x101ce0000 GPU 0x101ce0000
[cuda]  cudaAllocMapped 8192 bytes, CPU 0x1018e4000 GPU 0x1018e4000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
Socket read error. Camera Daemon stopped functioning.....
gst_nvcamera_open() failed ret=0
[gstreamer] gstreamer failed to set pipeline state to PLAYING (error 0)

hello mr.whatky,

you’re not able to launch detectnet-camera and argus_camera together because both of them are using the same camera pipeline.
please try adding the wbmode option to the gstreamer pipeline for the white balance settings.
for example,

gst-launch-1.0 nvarguscamerasrc wbmode=1 num-buffers=1 ! 'video/x-raw(memory:NVMM), width=2592, height=1944' ! nvjpegenc ! filesink location=sample.jpg
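Exposure can be constrained in a similar way. Below is a sketch only (untested here), assuming your L4T release's nvarguscamerasrc exposes the exposuretimerange and gainrange properties described in the Accelerated GStreamer guide; property names can differ between releases, so verify them first with `gst-inspect-1.0 nvarguscamerasrc`.

```shell
# wbmode=1 selects auto white balance; exposuretimerange takes a
# "min max" string in nanoseconds (here pinned to 5 ms to fix the
# exposure); gainrange limits the analog gain range.
# NOTE: property names assumed from the L4T Accelerated GStreamer
# guide -- confirm with: gst-inspect-1.0 nvarguscamerasrc
gst-launch-1.0 nvarguscamerasrc wbmode=1 \
    exposuretimerange="5000000 5000000" gainrange="1 4" num-buffers=1 \
  ! 'video/x-raw(memory:NVMM), width=2592, height=1944' \
  ! nvjpegenc ! filesink location=sample.jpg
```

Setting min and max of exposuretimerange to the same value effectively locks the exposure, while leaving wbmode on one of the preset modes keeps AWB under ISP control.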

hello Jerrychang.

Thank you for your reply.

I was able to use the command you told me.
Incidentally, dusty-nv, the author of jetson-inference, has recently released a version that also supports nvarguscamerasrc.

Please tell me the following, although it is a separate matter.
We are currently building a 3-camera system using Leopard cameras. At the moment we launch three argus_camera windows with argus_camera --device=0, argus_camera --device=1, and argus_camera --device=2. Now that jetson-inference supports nvarguscamerasrc, if one camera is opened by jetson-inference, can the remaining two cameras still be started with argus_camera?

please tell me.

hello mr.whatky,

you may also check the multiple-camera examples in the [L4T Multimedia API reference],
please see the [Multimedia API Sample Applications] -> [13_multi_camera] chapter for your 3-camera use case.
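One possible split of the three sensors, as a sketch only: nvarguscamerasrc selects a CSI sensor via its sensor-id property, and the `--camera` flag shown below is assumed from the updated jetson-inference build (check `./detectnet-camera --help` for the exact option name in your version).

```shell
# Sensor 0 goes to jetson-inference (the --camera flag is an
# assumption based on the updated detectnet-camera; verify with
# ./detectnet-camera --help in your build).
./detectnet-camera --camera=0 &

# Sensors 1 and 2 stay with argus_camera, as before.
argus_camera --device=1 &
argus_camera --device=2 &
```

Each process must open a different sensor; whether all three streams can run concurrently also depends on the camera daemon and the available ISP/memory bandwidth on your board.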