Thanks to this solution, I can use the following videobalance settings to control contrast and brightness in a GStreamer pipeline displaying the stream from a Raspberry Pi HQ camera (Jetson Nano B01, JetPack 4.4.0):
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" ! nvvidconv ! videobalance contrast=1.5 brightness=-0.3 ! nvoverlaysink
These settings work both from the command line and in my inference code, but I also need to record my training data using the same camera and settings.
However, when I try to insert the same videobalance element into the pipeline recommended by RidgeRun for recording to MP4:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" ! nvv4l2h264enc ! videobalance contrast=1.5 brightness=-0.3 ! h264parse ! mp4mux ! filesink location=test.mp4 -e
I get the error:
WARNING: erroneous pipeline: could not link nvv4l2h264enc0 to videobalance0
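My guess is that this is a caps mismatch: videobalance only negotiates raw video, while nvv4l2h264enc outputs encoded H.264, so the two pads have no common format to link with. Checking the pad templates seems to bear this out (a diagnostic sketch run on the Nano; the exact grep context lines may vary with GStreamer version):

```shell
# videobalance's sink pad advertises only video/x-raw caps...
gst-inspect-1.0 videobalance  | grep -A 4 'SINK template'
# ...while nvv4l2h264enc's src pad produces video/x-h264,
# so placing videobalance downstream of the encoder cannot work.
gst-inspect-1.0 nvv4l2h264enc | grep -A 4 'SRC template'
```

I assume this means videobalance would have to sit somewhere before the encoder, but I haven't found a working arrangement yet.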
I’ve tried searching through the libargus API documentation, but haven’t found anything relevant.
Any suggestions for where I’m going wrong and/or possible solutions would be greatly appreciated.