Dynamically changing MIPI camera resolution and framerate in Python on Jetson Orin Nano

Hello. I am trying to dynamically switch the resolution and framerate of my ArduCam IMX519 camera on a Jetson Orin Nano developer kit. Rebuilding the GStreamer pipeline has not been a good fit for this, because it can take an unpredictably long time to discard the remaining frames while transitioning from 1920x1080@60fps to 4656x3496@9fps.

This is the camera I am using: IMX519 Quick Start Guide

I have tried installing v4l2py (now deprecated) and linuxpy to run the camera through V4L2 instead of a pipeline, hoping to change the resolution dynamically inside the program. However, I had no luck getting my camera working: I could not find any resource explaining how to convert the frames from Bayer RG10 to BGR so that I can use them in my TensorRT-based YOLO inference Python script.

Here is the camera format I have, which I couldn't convert to BGR inside the v4l2 Python API apps.

I would like to ask everyone who has implemented this kind of dynamic resolution change in their projects: how can I handle this situation? Preferably the 4656→1920 switch should be very quick, as I will be processing those frames live.

If necessary, I can provide parts of the code I use for the GStreamer pipeline, but it shouldn't be necessary. This post is meant to ask how I can handle the resolution+fps change very quickly, rather than to fix the code.

Thanks for your attention.

*** Please note that this reply is generated by LLM automatically ***

To dynamically switch the resolution and framerate of your ArduCam IMX519 camera on Jetson Orin Nano, you’re correct that rebuilding the GStreamer pipeline might not be the most efficient approach, especially when dealing with high-resolution and high-framerate video streams.

Using v4l2 instead of a pipeline is a good idea, but you’re facing challenges in converting the Bayer RG10 frames to BGR, which is required for your TensorRT-based YOLO inference Python script.

Here’s a comprehensive answer to help you handle this situation:

1. Understanding the camera format:
The IMX519 camera produces Bayer RG10 frames, which are 10-bit raw images. To convert these frames to BGR, you’ll need to perform demosaicing and color conversion.
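
As a minimal sketch of that conversion (assuming the RG10 buffer arrives as one unpacked 16-bit little-endian word per pixel with the 10-bit sample in the low bits and no row padding, which you would need to verify against your actual V4L2 format), the demosaic step could look like this:

import cv2
import numpy as np

def rg10_to_bgr(raw_bytes, width, height):
    # Interpret the raw buffer as 16-bit words holding 10-bit Bayer samples
    bayer16 = np.frombuffer(raw_bytes, dtype=np.uint16).reshape(height, width)
    # Scale 10-bit values (0..1023) down to 8-bit (0..255)
    bayer8 = (bayer16 >> 2).astype(np.uint8)
    # Demosaic to BGR; note that OpenCV's Bayer constants do not follow the V4L2
    # naming, so an RGGB sensor may actually need COLOR_BayerBG2BGR instead
    return cv2.cvtColor(bayer8, cv2.COLOR_BayerRG2BGR)

If the colors come out swapped or tiled, the Bayer-order constant is usually the first thing to adjust.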

2. Using v4l2:
You can use the v4l2 API to control the camera and capture frames. However, as you’ve mentioned, there’s no straightforward way to convert the Bayer RG10 frames to BGR using the v4l2 API.
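
The raw frames themselves can still be captured from Python, though. Below is a rough sketch using linuxpy (which the original post mentions trying); the class and method names (Device.from_id, VideoCapture, set_format) follow the linuxpy documentation as I recall it and should be verified against the installed version, and whether the driver accepts the "RG10" format string here is an assumption:

from linuxpy.video.device import Device, VideoCapture

with Device.from_id(0) as cam:
    capture = VideoCapture(cam)
    # Request a raw 10-bit Bayer mode; verify the exact format string with
    # `v4l2-ctl --list-formats-ext`
    capture.set_format(1920, 1080, "RG10")
    with capture:
        for frame in capture:
            raw = bytes(frame.data)
            # hand `raw` to a demosaic helper such as rg10_to_bgr() above
            break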

3. Demosaicing and color conversion:
To convert the Bayer RG10 frames to BGR, you’ll need to perform demosaicing and color conversion. You can use libraries like OpenCV or libdc1394 to perform these operations.

Here’s an example using OpenCV:

import cv2

# Open the camera once through the V4L2 backend and reuse the capture object
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)

# read() returns a (ret, frame) tuple, so check ret before using the frame
ret, frame = cap.read()
if not ret:
    raise RuntimeError("Failed to capture a frame")

# Convert the Bayer frame to BGR
bgr_frame = cv2.cvtColor(frame, cv2.COLOR_BayerRG2BGR)

# Now you can use the BGR frame in your TensorRT-based YOLO inference script

4. Dynamic resolution and framerate change:
To dynamically change the resolution and framerate, you can use the v4l2 API to set the new resolution and framerate. However, as you’ve mentioned, this might take some time to take effect.

To minimize the transition time, you can try the following:

  • Use the v4l2 API to set the new resolution and framerate, and then immediately start capturing frames.
  • Use a separate thread to capture frames, so that the main thread can continue processing the previous frames while the new frames are being captured.
  • Use a buffer to store the previous frames, so that you can continue processing them while the new frames are being captured. (A sketch of this threaded-capture approach follows the example below.)

Here’s an example using v4l2 and OpenCV:

import cv2

# Open the camera once and keep reusing the same capture object
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)

# Set the new resolution and framerate on the existing capture
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 4656)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 3496)
cap.set(cv2.CAP_PROP_FPS, 9)  # the IMX519 full-resolution mode runs at 9 fps

# Start capturing frames
while True:
    ret, frame = cap.read()
    if not ret:
        continue
    bgr_frame = cv2.cvtColor(frame, cv2.COLOR_BayerRG2BGR)
    # Process the BGR frame in your TensorRT-based YOLO inference script
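
To make the separate-thread and frame-buffer suggestions above concrete, here is a minimal sketch, assuming the capture itself works as in the example above; the queue size and device index are arbitrary placeholders:

import queue
import threading

import cv2

frame_queue = queue.Queue(maxsize=4)  # small buffer between capture and processing

def capture_loop(device_index=0):
    cap = cv2.VideoCapture(device_index, cv2.CAP_V4L2)
    while True:
        ret, frame = cap.read()
        if not ret:
            continue
        try:
            frame_queue.put_nowait(frame)
        except queue.Full:
            pass  # drop the frame if the consumer is falling behind

# Capture runs on its own thread so the main thread is never blocked by the camera
threading.Thread(target=capture_loop, daemon=True).start()

while True:
    frame = frame_queue.get()
    bgr = cv2.cvtColor(frame, cv2.COLOR_BayerRG2BGR)
    # Run the TensorRT-based YOLO inference on `bgr` here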

5. Comparison with GStreamer pipeline:
Using a GStreamer pipeline can provide a more efficient way to handle the camera capture and processing, especially when dealing with high-resolution and high-framerate video streams. However, rebuilding the pipeline can take some time, which might not be suitable for your use case.

In contrast, using the v4l2 API and OpenCV can provide a more flexible way to handle the camera capture and processing, especially when dealing with dynamic resolution and framerate changes. However, this approach might require more manual effort to handle the demosaicing and color conversion.

In summary, the v4l2 + OpenCV route gives you more flexibility for dynamic resolution and framerate changes, at the cost of handling the demosaicing and color conversion yourself.


Thanks for the response.

I have tried the LLM's suggestions already and got no result. The color format probably cannot be converted via COLOR_BAYER_RG2BGR, as all I get from that is a black screen (note that the camera works through GStreamer, so it's not a camera issue).

What would be your suggestion? Should I try to make v4l2 work, or use some property of OpenCV to switch the GStreamer pipeline values? Getting the GStreamer pipeline for the camera to work inside OpenCV was already a big hassle.

Hi,
For information, do you use nvarguscamerasrc or v4l2src to launch the camera source?

I am currently using nvarguscamerasrc. I build a GStreamer pipeline with it, and it takes some time to rebuild the pipeline when I want to change resolutions; I want that operation to be as fast as possible.

As an update, I somewhat worked around the GStreamer pipeline destruction problem by running my 2 cameras on 2 different threads and reading in every frame, so that destroying the pipeline doesn't have to discard any leftover frames (at least that's my theory). Now it does not take 10 seconds to rebuild the pipeline, just around 2 seconds.
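
For anyone finding this later, the idea looks roughly like the sketch below; it is simplified, and the nvarguscamerasrc caps string, sensor modes, and mode-switch handling are placeholders rather than my exact code:

import threading

import cv2

def gst_pipeline(sensor_id, width, height, fps):
    # nvarguscamerasrc -> NVMM -> BGRx -> BGR so OpenCV can consume the frames
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink drop=true max-buffers=1"
    )

class CameraThread(threading.Thread):
    def __init__(self, sensor_id):
        super().__init__(daemon=True)
        self.sensor_id = sensor_id
        self.mode = (1920, 1080, 60)   # (width, height, fps)
        self.switch_request = None     # set from the main thread to request a mode change
        self.latest = None             # most recent frame, read by the main thread

    def run(self):
        cap = cv2.VideoCapture(gst_pipeline(self.sensor_id, *self.mode), cv2.CAP_GSTREAMER)
        while True:
            if self.switch_request is not None:
                # Tear down and rebuild the pipeline inside this thread,
                # so the main thread never waits on the switch
                cap.release()
                self.mode, self.switch_request = self.switch_request, None
                cap = cv2.VideoCapture(gst_pipeline(self.sensor_id, *self.mode), cv2.CAP_GSTREAMER)
            ret, frame = cap.read()    # keep draining frames so none pile up
            if ret:
                self.latest = frame

cam = CameraThread(sensor_id=0)
cam.start()
# later, from the main thread, request the full-resolution mode:
cam.switch_request = (4656, 3496, 9)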

I am still wondering: is there really no way to dynamically update the resolution and framerate of an nvarguscamerasrc source? I would expect that to be a bit simpler than completely rebuilding the pipeline, and to be available from a Python environment.

Hi,
The default nvarguscamerasrc plugin does not support dynamic resolution and framerate changes. You may customize the plugin to add these functions. The source code is in

Jetson Linux Release 36.4.4 | NVIDIA Developer
Driver Package (BSP) Sources

Thanks a lot for the valuable information. Since I am okay with my current workaround and don't need to edit the plugin for my application, I will mark this as the solution to my question.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.