Jetson-Inference Camera Resolution

Hi, how can I change the camera resolution from 1280x720 to 3264x2464?

GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 4
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 120.000005
GST_ARGUS: PowerService: requested_clock_Hz=329280
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.

You could try adjusting the parameters from your Python script, such as:

camera = jetson.utils.gstCamera(3264, 2464, ...

I can’t use that code because I’m already opening the camera earlier for another purpose. If I use it, it gives an error because it would try to open the camera twice in a row. I need to change the resolution in the source code.

Could you explain this in more detail?

I’m building an autonomous vehicle and use a camera to collect data. Both the autonomous driving software and the object detection software take image arrays as input. From the bounding box drawn around an object, I created a distance formula that measures the distance between the vehicle and the object. When I created the distance formula, the camera resolution was 1280x720. During autonomous driving, my camera resolution is 3280x2464. The formula gives excellent results only when I run object detection alone. When running autonomous driving and object detection together, the resolution is 3280x2464, so the formula gives wrong results. So I need to update my distance formula for the 3280x2464 resolution.
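One way to avoid re-deriving the formula is to rescale detected boxes back to the 1280x720 reference resolution before applying it. A minimal sketch, assuming the formula takes box coordinates in pixels (the helper name and box layout are illustrative, not jetson-inference API):

```python
# Reference resolution the distance formula was calibrated at.
REF_W, REF_H = 1280, 720

def rescale_box(box, cap_w, cap_h):
    """Map a (left, top, right, bottom) box measured in capture-resolution
    pixels back to its 1280x720 equivalent."""
    sx = REF_W / cap_w
    sy = REF_H / cap_h
    left, top, right, bottom = box
    return (left * sx, top * sy, right * sx, bottom * sy)

# A box detected at 3280x2464 maps back to 1280x720-equivalent pixels:
print(rescale_box((328, 246, 656, 492), 3280, 2464))
```

The rescaled box can then be fed to the original formula unchanged, regardless of the capture resolution.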

Settings of my camera:

Should I change these parameters:
mWidth, mHeight, mDepth?

Not yet. From your previous image, it seems that your camera capture is 3280x2464, but this is converted to 224x224 before inference. Could you explain the part of your pipeline that requires 1280x720 input for the distance calculation? That would help us give better advice.

This is the code I use when calculating the formula. The jetson-inference camera capture is 1280x720. I wrote gstCamera(224, 224, ...) because the resolution I used while autonomous driving was 224x224. The camera captures don’t match: autonomous driving uses 3280x2464, object detection uses 1280x720. The area of the bounding box around the object varies with the resolution. The box area becomes larger at 3280x2464 because there are more pixels, while there are fewer pixels at the 1280x720 resolution I used when creating the formula.

Although I wrote gstCamera(224, 224) as seen in the picture, the camera capture runs at 1280x720. In autonomous driving, the network input resolution is still the same (224x224), but since the camera capture is 3280x2464, the formula does not give the correct result.
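To see why the box area changes, consider the same object occupying the same fraction of the frame at both resolutions (the 25% fractions below are made-up numbers for illustration):

```python
def box_area_at(frac_w, frac_h, res_w, res_h):
    """Pixel area of a box covering frac_w x frac_h of a res_w x res_h frame."""
    return (frac_w * res_w) * (frac_h * res_h)

low  = box_area_at(0.25, 0.25, 1280, 720)   # calibration resolution
high = box_area_at(0.25, 0.25, 3280, 2464)  # driving resolution
print(low, high, high / low)  # area grows by (3280/1280) * (2464/720), ~8.77x
```

So any formula calibrated on pixel area at 1280x720 will be off by roughly that factor at 3280x2464 unless the boxes are rescaled first.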

Does anyone know how to change the camera capture resolution?

Your camera can only run at one resolution/format/fps at a time.

So you could capture at the highest required resolution and create a second frame by resizing the high-res one. You would then have two images, one for each net.
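As a CPU-side illustration of that idea (a stand-in for the GPU resize discussed in this thread), here is a nearest-neighbor downscale with NumPy; the array shapes are assumptions about an RGBA capture:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of an HxWxC image array (CPU stand-in for a
    GPU resize such as cudaResizeRGBA)."""
    in_h, in_w = img.shape[:2]
    ys = np.arange(out_h) * in_h // out_h   # source row for each output row
    xs = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[ys[:, None], xs]

big = np.zeros((2464, 3280, 4), dtype=np.uint8)   # full-res RGBA capture
small = resize_nearest(big, 224, 224)             # network-sized copy
print(small.shape)
```

In practice you would do this on the GPU for performance; this only shows the two-image approach.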

AFAIK, there is no Python binding for cudaResizeRGBA(), so for your case the C++ API may be easier.
You would allocate a ZeroCopy array for your second image during init, and in the loop, after each capture, create the smaller image with cudaResizeRGBA().
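In C++ that could look roughly like the sketch below. This is untested and hardware-dependent; the exact capture call and signatures vary between jetson-inference versions, so treat the device string "0", the timeout value, and the buffer layout as assumptions:

```cpp
#include "gstCamera.h"
#include "cudaMappedMemory.h"
#include "cudaResize.h"

int main()
{
    // capture once at the highest required resolution
    gstCamera* camera = gstCamera::Create(3264, 2464, "0");
    if( !camera || !camera->Open() )
        return 1;

    // init: ZeroCopy (CPU/GPU mapped) buffer for the second, smaller image
    float4* imgSmall = NULL;
    cudaAllocMapped((void**)&imgSmall, 224 * 224 * sizeof(float4));

    // loop: grab the big frame, then derive the network-sized copy from it
    float4* imgBig = NULL;
    camera->CaptureRGBA((float**)&imgBig, 1000);  // 1000 ms timeout
    cudaResizeRGBA(imgBig, camera->GetWidth(), camera->GetHeight(),
                   imgSmall, 224, 224);
    // ...feed imgBig to one net and imgSmall to the other...

    camera->Close();
    return 0;
}
```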

For Python, there is info in this topic that may help, but it may not be as efficient as CUDA.
I have little experience with Python, but maybe someone else can tell whether, and how, a Python binding for cudaResizeRGBA() could be added.

Hi @tuna.akyol, the input video to jetson-inference imageNet/detectNet/etc. is automatically resized to match the resolution expected by the DNN. So with AlexNet/GoogleNet/ResNet, your 1280x720 camera video will automatically be rescaled to 224x224 (this occurs during DNN pre-processing, along with NCHW format conversion and mean-pixel subtraction).

If you want to change the resolution of your camera, it needs to be a resolution your camera supports. Try these commands from this part of the tutorial to print the supported resolutions of your camera:

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext

Thanks man for your effort, but I think my problem is easier to solve.

Hi @dusty_nv

I am using a CSI camera. It supports 3264x2464 resolution. I want to change the resolution permanently, not just from the command line.

I changed these parameters, mWidth and mHeight, in gstCamera.cpp. I ran sudo make and sudo make install, but it didn’t work. It still runs in camera mode 4 (1280x720).

Changing the resolution inside the gstCamera constructor won’t do anything; it will be overwritten during initialization by the requested resolution (which defaults to 1280x720 if left unspecified).

To change the resolution permanently, as HP mentioned above, from Python you could set it in the script like so:
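For example (a hedged sketch: the device string "0" assumes the first CSI camera, and older jetson-inference versions use jetson.utils.gstCamera while newer ones use videoSource):

```python
import jetson.utils

# Request the full 3264x2464 sensor mode at creation time,
# instead of the 1280x720 default.
camera = jetson.utils.gstCamera(3264, 2464, "0")
```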

There is no need to edit the internals of gstCamera.cpp to change the camera resolution.
