We have an application that uses GStreamer to capture images from a MIPI CSI-2 camera. Our customer now wants our system to set the camera's exposure/gain programmatically at runtime (not just at startup). I see how this is done with the libargus auto-exposure example. Does anyone know if there is a way to either:
A) use libargus to control the camera while using GStreamer to capture (I tried this, but as I have it coded they can't both grab the camera interface), or
B) somehow configure the gain/exposure using GStreamer
It is not supported.
Please try nvgstcapture-1.0
The source code is at https://developer.nvidia.com/embedded/dlc/l4t-sources-28-1
@DaneLLL - I have looked at the nvgstcapture source as you suggested, but I am finding it difficult to extract an answer to our problem from it. It sounds like you have an idea of how to do this, and we need a few more pointers. Our application is based on jetson-inference (https://github.com/dusty-nv/jetson-inference).
This is the GStreamer pipeline that tbriese and I are using:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink
We are pretty new to GStreamer, but we have traced the code and understand that our application registers callbacks and then pulls video frames by calling
GstSample* gstSample = gst_app_sink_pull_sample(mAppSink);
Do we modify the GStreamer pipeline so that the camera controls can be written (and if so, how would that be done), or do we set up a second pipeline to write the camera exposure?
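For what it's worth, one approach (our assumption, not confirmed in this thread) is to keep a single pipeline and simply give the source element a name in the launch string, so that the same process can look it up later and change its properties. The name `camsrc` here is arbitrary:

```
nvcamerasrc name=camsrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink
```

With the element named, a second pipeline should not be necessary: the application can fetch `camsrc` from the running pipeline with `gst_bin_get_by_name()` and adjust its properties with `g_object_set()`.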
You need to do the integration yourself. nvgstcapture-1.0 demonstrates all of nvcamerasrc's controls.
Here is a sample of runtime bitrate control for H.264 encoding. The original sample post was deleted by its author, so please refer instead to https://devtalk.nvidia.com/default/topic/1020558/jetson-tx1/h265-decode-failed/post/5196041/#5196041
You can follow the same pattern to implement runtime control of nvcamerasrc.
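The pattern in the linked sample (fetch an element from the running pipeline, then call `g_object_set()` on it) can be adapted to nvcamerasrc. Below is a minimal sketch under the assumption that the source element was created with `name=camsrc` in the launch string. The nvcamerasrc property names and units used here ("auto-exposure", "exposure-time") are assumptions on our part; verify them against your L4T release with `gst-inspect-1.0 nvcamerasrc` before relying on them.

```c
/* Sketch: runtime exposure control of nvcamerasrc from the same process
 * that owns the capture pipeline. The property names and values below
 * are assumptions; verify with `gst-inspect-1.0 nvcamerasrc`. */
#include <gst/gst.h>

/* pipeline: the GstPipeline built from the launch string, with the
 * source created as "nvcamerasrc name=camsrc ...". */
static void set_manual_exposure(GstElement *pipeline, gfloat exposure_time)
{
    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "camsrc");
    if (!src)
        return; /* no element named camsrc in this pipeline */

    g_object_set(G_OBJECT(src),
                 "auto-exposure", 1,             /* hypothetical: manual AE mode */
                 "exposure-time", exposure_time, /* hypothetical: exposure time value */
                 NULL);

    gst_object_unref(src); /* gst_bin_get_by_name() returns a new reference */
}
```

`gst_bin_get_by_name()` and `g_object_set()` are standard GStreamer/GObject calls; only the nvcamerasrc property names need checking. The call can be made while the pipeline is PLAYING, which is exactly what the bitrate sample does with the encoder's bitrate property.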