I am working with an NVIDIA Jetson SoM and need guidance on using its ISP-related features for the following requirements:
- Capturing Images: Ability to capture high-quality still images.
- Live Streaming: Displaying live camera feeds.
- Dynamic Parameter Adjustment: Changing parameters like `wbmode`, `saturation`, `exposure`, etc., during live streaming.
I have seen suggestions in the forums to use libargus or the L4T Multimedia API. For example, a command like the following is useful for setting initial parameters:

```
gst-launch-1.0 nvarguscamerasrc wbmode=4 saturation=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=0 ! 'video/x-raw, width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
```
However, I want to know:
- How to dynamically change parameters like `wbmode` or `saturation` while the stream is running? (A sketch of what I have in mind follows this list.)
- How to utilize the `tegra_multimedia_api`, and specifically the `argus_camera` application, for these use cases?
- Best practices for integrating libargus or the MMAPI into custom applications for advanced control over camera features.
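To make the first question concrete, here is an untested sketch of what I have in mind, using the GStreamer API from C++ with essentially the same nvarguscamerasrc pipeline as above. The element and property names come from the command I posted; the assumption I have not verified is whether `wbmode` and `saturation` can actually be updated with `g_object_set` while the pipeline is PLAYING, and the values (wbmode 1, saturation 1.5) are just illustrative. Please correct me if this has to go through libargus instead.

```cpp
// Untested sketch: change nvarguscamerasrc properties while streaming.
// Assumption (unverified): these properties take effect in the PLAYING state.
#include <gst/gst.h>
#include <chrono>
#include <thread>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // Same pipeline as the gst-launch line above, with a named source
    // element so it can be looked up later for runtime changes.
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "nvarguscamerasrc name=cam wbmode=4 saturation=1 ! "
        "video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12,framerate=30/1 ! "
        "nvvidconv flip-method=0 ! video/x-raw,width=960,height=616 ! "
        "nvvidconv ! nvegltransform ! nveglglessink",
        &err);
    if (!pipeline) {
        g_printerr("Failed to create pipeline: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    GstElement *cam = gst_bin_get_by_name(GST_BIN(pipeline), "cam");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Let the preview run for a few seconds, then try to change parameters
    // on the fly. This is the part I would like confirmation on.
    std::this_thread::sleep_for(std::chrono::seconds(5));
    g_object_set(G_OBJECT(cam), "wbmode", 1, "saturation", 1.5, NULL);

    std::this_thread::sleep_for(std::chrono::seconds(5));

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(cam);
    gst_object_unref(pipeline);
    return 0;
}
```

I would expect to build this with something like `g++ runtime_params.cpp $(pkg-config --cflags --libs gstreamer-1.0) -o runtime_params` (the file name is just an example). If changing these properties on a live source is not supported, is rebuilding the pipeline the only option, or is a libargus-based application the intended path?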
I found suggestions about building the `tegra_multimedia_api` and using the `argus_camera` sample application. Can you provide:
- Steps to install and build `tegra_multimedia_api`.
- Guidance on how to modify the `argus_camera` example for custom needs. (I have included a rough libargus sketch below to show what I am aiming for.)
- Any additional resources or code samples that would help me get started.
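For the second point, this is roughly what I am aiming for on the libargus side. It is an untested sketch pieced together from my reading of the Argus headers and sample code, so the call names (`CameraProvider::create`, `IAutoControlSettings::setAwbMode`, `ISourceSettings::setExposureTimeRange`, etc.) and the chosen values (`AWB_MODE_DAYLIGHT`, a fixed ~1/60 s exposure) are my assumptions, and the output-stream/consumer setup is omitted entirely. Please point out anything that is wrong or missing.

```cpp
// Rough, untested sketch of per-request control with libargus, based on my
// reading of the Argus headers. Not a complete application: no output stream
// or consumer is set up, so nothing is actually rendered or captured here.
#include <Argus/Argus.h>
#include <cstdio>
#include <vector>

using namespace Argus;

int main()
{
    // Connect to the camera provider and pick the first sensor.
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);
    if (!iProvider) { printf("Failed to create CameraProvider\n"); return 1; }

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);
    if (devices.empty()) { printf("No camera devices found\n"); return 1; }

    // Create a capture session and a preview request.
    UniqueObj<CaptureSession> session(iProvider->createCaptureSession(devices[0]));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);
    if (!iSession) { printf("Failed to create CaptureSession\n"); return 1; }

    UniqueObj<Request> request(iSession->createRequest(CAPTURE_INTENT_PREVIEW));
    IRequest *iRequest = interface_cast<IRequest>(request);
    if (!iRequest) { printf("Failed to create Request\n"); return 1; }

    // My understanding: auto-control settings cover AWB and saturation,
    // source settings cover exposure time.
    IAutoControlSettings *iAc =
        interface_cast<IAutoControlSettings>(iRequest->getAutoControlSettings());
    ISourceSettings *iSrc =
        interface_cast<ISourceSettings>(iRequest->getSourceSettings());
    if (!iAc || !iSrc) { printf("Failed to get settings interfaces\n"); return 1; }

    iAc->setAwbMode(AWB_MODE_DAYLIGHT);   // analogous to wbmode on nvarguscamerasrc
    iAc->setSaturation(1.5f);             // analogous to saturation
    iSrc->setExposureTimeRange(Range<uint64_t>(16666666, 16666666)); // ~1/60 s, in ns

    // In a real application an OutputStream would be attached to the request
    // and iSession->repeat(request.get()) would start the preview; my guess is
    // that changing a setting and calling repeat() again is how updates are
    // applied while streaming, but I would appreciate confirmation.

    printf("Request configured\n");
    return 0;
}
```

I assume this would be compiled against the Argus headers and libraries that ship with the Multimedia API samples, but exact include/link paths and the recommended build setup are part of what I am asking about.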
Thank you for your assistance!