Hi,
I am new to the Nano and to Linux in general. I have a Sony IMX camera and can capture video on the Nano from the command line. However, the same command does not work when I launch it from Python. I tried all the options, and tried NanoCamera as well as other GitHub libraries, without success.
Also, I want to save the video in MP4 format. It would be great if someone could suggest the command that gives maximum quality. For image processing I will work on individual frames, so I would like the frames in a raw format to preserve image quality.
I was able to make it work. The issue was that I was not running the script with sudo. Once I started running the Python program with sudo, I could see the video and capture photos.
Can someone suggest a command that gives the best quality video or frames?
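For anyone hitting the same Python issue: a common way to read a CSI camera from Python is to hand OpenCV a GStreamer pipeline string. The sketch below is illustrative, not the poster's actual code; it assumes an OpenCV build with GStreamer support and an IMX-series CSI sensor behind nvarguscamerasrc, and the resolution/framerate values are placeholders.

```python
# Sketch: capture CSI camera frames from Python via OpenCV + GStreamer.
# Assumes nvarguscamerasrc (Jetson Argus plugin) and a GStreamer-enabled
# OpenCV build; widths, heights, and framerate are illustrative.

def gstreamer_pipeline(width=1920, height=1080, fps=30, flip=0):
    """Build a pipeline string: Argus capture -> NVMM -> BGR for OpenCV."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

try:
    import cv2  # requires an OpenCV build compiled with GStreamer support
    cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("frame.png", frame)  # PNG is lossless
    cap.release()
except ImportError:
    pass  # OpenCV not installed; the pipeline string above still applies
```

Saving frames as PNG (lossless) rather than JPEG avoids compression artifacts when the goal is maximum frame quality.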
Please share the commands you use to capture images for reference, and could you attach the captured results that show the bad quality?
According to the Camera Architecture Stack, if you access the camera sensor with libargus, the ISP is also involved, which produces better-quality frames.
Thanks
That is a Python script that builds a GStreamer pipeline and uses the nvarguscamerasrc plugin to access the camera sensor.
Could you please try the L4T Multimedia API and build the argus_camera application?
The argus_camera application has a user interface that lets you enable noise reduction and other camera controls.
Thanks
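For reference, building the argus_camera sample typically looks like the sketch below; the exact package names and source paths vary by JetPack/L4T release (older releases ship the sources as tegra_multimedia_api), so treat these steps as an assumption to verify against your release documentation.

```shell
# Hypothetical build steps for the argus_camera sample app; verify the
# paths and dependency list against your L4T release before running.
sudo apt-get install cmake build-essential pkg-config libx11-dev \
  libgtk-3-dev libexpat1-dev libjpeg-dev libgstreamer1.0-dev
cd /usr/src/jetson_multimedia_api/argus
mkdir build && cd build
cmake .. && make && sudo make install
argus_camera  # opens the UI with noise-reduction and other camera controls
```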
My plan is to capture video and run the frames through OpenCV to find certain objects. Do you think the Argus approach works in this case? I plan to cover an area of roughly 15 x 10 meters and want to recognise small objects, so I want the quality to be as good as possible. Since the processing happens on the Nano, I am not concerned about the size of the video/frames, but I also want to record the video for future reference. Can you please advise?
I changed the resolution to a higher setting and the image quality looks better; I am inferring this from the image file size.
def gstreamer_pipeline(
    capture_width=3820,
    capture_height=2464,
    display_width=3820,
    display_height=2464,
However, the captured image turns grey, as you can see in the attached image. I tried PNG, JPEG, and TIFF, but in vain. What might be the issue?
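One common cause of grey frames in this setup is OpenCV receiving the raw NV12 buffer and treating its luma plane as a single-channel image, which a BGR conversion before the appsink avoids. The sketch below is illustrative: the 3264x2464 @ 21 fps values match the sensor-mode listing shown later in this thread (3820 is not a listed mode, so the driver may reject or fall back from it).

```python
# Sketch: full-resolution pipeline that converts to BGRx/BGR before the
# appsink, so OpenCV does not interpret the NV12 plane as a grey image.
# Values are illustrative; 3264x2464@21 is a mode the sensor reports.

def full_res_pipeline(capture_width=3264, capture_height=2464,
                      display_width=3264, display_height=2464,
                      framerate=21, flip_method=0):
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, "
        f"height={capture_height}, format=NV12, framerate={framerate}/1 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, "
        "format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"
    )
```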
It depends on the frames the sensor outputs; you may check the sensor capability for all supported sensor modes.
Please install v4l-utils (i.e. sudo apt-get install v4l-utils) to get the necessary v4l2 tools, then check the sensor modes, for example with v4l2-ctl -d /dev/video0 --list-formats-ext, which lists every mode with its frame size and frame rate.
Below is the output I got from the mentioned command:
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'RG10'
    Name        : 10-bit Bayer RGRG/GBGB
        Size: Discrete 3264x2464
            Interval: Discrete 0.048s (21.000 fps)
        Size: Discrete 3264x1848
            Interval: Discrete 0.036s (28.000 fps)
        Size: Discrete 1920x1080
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 1280x720
            Interval: Discrete 0.017s (60.000 fps)
        Size: Discrete 1280x720
            Interval: Discrete 0.017s (60.000 fps)
Does that mean the sensor supports a 10-bit pixel format? When I run

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420_10LE' ! omxh265enc ! matroskamux ! filesink location=test_10bit.mkv -e

it appears to run properly, but it generates 0 bytes of data. Could you please help me with a command for fetching high-quality video/frames?
That's a 10-bit RGGB Bayer sensor.
Once the camera stream goes through the nvarguscamerasrc plugin, it is processed by the internal ISP, and you may handle it as YUV420. May I know why you are using the video converter to change the format to I420_10LE?
Here are sample commands to have the sensor do video recording, for example:
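The sample commands themselves did not survive in this copy of the thread; judging from the follow-up (an H.264 encoder and num-buffers=300), they likely resembled the sketch below, with the bitrate and filename being illustrative.

```shell
# Illustrative recording pipeline: capture 300 buffers (~10 s at 30 fps)
# through the ISP, encode with H.264, and mux into an MP4 container.
gst-launch-1.0 nvarguscamerasrc num-buffers=300 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! \
  omxh264enc bitrate=8000000 ! h264parse ! qtmux ! \
  filesink location=video.mp4 -e
```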
Hi,
I used this command to try to get better-quality images. From your response, it sounds like the sensor does not support this feature.
Currently, I am using the command below for recording video.
That's not a failure: since you specify num-buffers=300, the GST pipeline terminates after 300 captured buffers.
You should also note that my samples use the H.264 encoder while you enabled the H.265 encoder; that's why you see different quality images.
Thanks
Thanks Danelll. I am looking for a command that, at a high level, gives good-quality frames. I will probably check the one you sent.
As I am new to this field, can you share some documentation on how I can explore the different pipelines available, along with information on system resource utilisation?