I am developing an automated inspection system that needs to capture an image once a component reaches a certain position and is ready for capture.
For this I am running nvgstcapture as a subprocess in Python, writing "j\n" to the process's stdin to trigger a still capture, and then reading the image back from a directory that is supposed to contain only one image at a time (so I always have to clean the directory before each capture and scan it afterwards to find the new image).
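For reference, the capture flow described above looks roughly like this (a minimal sketch; the polling interval and timeout are placeholders, and `start_capture_process` / `trigger_and_wait` are just names I am using here):

```python
import glob
import os
import subprocess
import time

CAPTURE_DIR = "/path/to/folder/"  # same folder passed to --file-name

def start_capture_process(folder=CAPTURE_DIR):
    # Launch nvgstcapture with stdin piped so b"j\n" can trigger stills.
    return subprocess.Popen(
        ["nvgstcapture-1.0", "--image-res", "9", "-m", "1",
         "--file-name=" + folder],
        stdin=subprocess.PIPE,
    )

def clear_folder(folder=CAPTURE_DIR):
    """Remove leftover images so the next scan sees only the new capture."""
    for path in glob.glob(os.path.join(folder, "*.jpg")):
        os.remove(path)

def trigger_and_wait(proc, folder=CAPTURE_DIR, timeout=2.0):
    """Trigger a still capture via stdin, then poll until the image appears."""
    proc.stdin.write(b"j\n")
    proc.stdin.flush()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        files = glob.glob(os.path.join(folder, "*.jpg"))
        if files:
            return files[0]
        time.sleep(0.01)
    return None
```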
I have a few problems with this process, listed as follows:
1. When I request a capture at, say, 10:10:10.300300, I get an image from around 10:10:10.200300, i.e. from roughly 100 milliseconds earlier. I verified this by pointing the camera at a live timer running on the Jetson Nano itself and comparing the time at which the capture was requested with the time visible in the frame. How can I make sure the image is from the time of the capture request (or after it), and how can I minimise the latency? Currently I get an image in 0.4 to 0.5 seconds, but from about 100 milliseconds before the request, so once this works as expected I would expect images in 0.5 to 0.6 seconds.
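The 0.4 to 0.5 second figure above is the full round trip, which I measure roughly like this (a sketch; `trigger` and `fetch` are placeholders for the stdin write and the folder scan, and the ~100 ms frame age itself still has to be read off the on-screen timer):

```python
import time

def measure_round_trip(trigger, fetch):
    """Time the full request-to-image round trip.

    trigger: callable that requests a capture (the stdin write)
    fetch:   callable that blocks until the new image is available
    """
    t0 = time.monotonic()
    trigger()
    path = fetch()
    return path, time.monotonic() - t0
```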
2. The command used is: `nvgstcapture-1.0 --image-res 9 -m 1 --file-name=/path/to/folder/`. The reason I pass a folder path is that whatever I give as the file name is used only as a prefix, not as the full name: if I pass "myimg.jpg", the image actually created is named something like "myimg.jpg_8212_s00_00005.jpg". Because of this, I have to scan the whole folder, delete everything, capture, and scan the folder again, since I do not know the name of the newly captured image (the suffix is different every time). How can I make the command treat my file name as the full name and not as a prefix?
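As an alternative to wiping the folder each time: since the prefix itself is stable, I could match on the prefix and take the newest file by modification time (a sketch of this workaround; `latest_capture` is a hypothetical helper, not part of nvgstcapture):

```python
import glob
import os

def latest_capture(prefix):
    """Return the newest file matching the nvgstcapture prefix,
    e.g. myimg.jpg_8212_s00_00005.jpg for prefix .../myimg.jpg."""
    candidates = glob.glob(prefix + "_*")
    if not candidates:
        return None
    return max(candidates, key=os.path.getmtime)
```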
3. While using the command mentioned in point 2, the screen is covered by the camera preview, which I do not want. How can I disable the preview?
4. If there is any way to get the image directly in memory, skipping the file I/O (which might also reduce the overall time), please suggest it.
I also tried the OpenCV route by feeding cv2.VideoCapture a GStreamer pipeline:

```python
pipeline = (
    "nvarguscamerasrc ! "
    "queue max-size-buffers=0 leaky=downstream ! "
    "video/x-raw(memory:NVMM), "
    "width=(int)%d, height=(int)%d, "
    "format=(string)NV12, framerate=(fraction)%d/1 ! "
    "nvvidconv flip-method=%d ! "
    "queue max-size-buffers=0 leaky=downstream ! "
    "video/x-raw, width=(int)%d, height=(int)%d, "
    "format=(string)BGRx ! "
    "videoconvert ! "
    "queue max-size-buffers=1 leaky=downstream ! "
    "video/x-raw, format=(string)BGR ! "
    "appsink max-buffers=0 drop=True"
)
```

As you can see, I also tried reducing the buffer sizes, but I still get a frame from some time ago. The pipeline may not be right, as I have barely scratched the surface of GStreamer.
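What I would like to try next is a cleaner variant of the pipeline plus a frame-draining read, sketched below (not something I have verified on the Nano; the default width/height/fps in `build_pipeline` are assumptions, and I am relying on appsink's `max-buffers=1 drop=true sync=false` to keep only the newest buffer):

```python
def build_pipeline(width=4032, height=3040, fps=10, flip=0):
    """Assemble a nvarguscamerasrc -> appsink pipeline string.

    appsink max-buffers=1 drop=true keeps only the newest buffer;
    sync=false stops the sink from pacing delivery to the pipeline clock.
    """
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! "
        "appsink max-buffers=1 drop=true sync=false"
    )

def grab_fresh_frame(cap, drain=3):
    """Discard a few queued frames so read() returns the most recent one."""
    for _ in range(drain):
        cap.grab()
    ok, frame = cap.read()
    return frame if ok else None

# usage on the Jetson (needs OpenCV built with GStreamer support):
# cap = cv2.VideoCapture(build_pipeline(), cv2.CAP_GSTREAMER)
# frame = grab_fresh_frame(cap)  # numpy BGR array
```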
So, my main aim is to capture an image (as a NumPy array, for processing with OpenCV) at a resolution of 4032x3040 (from the IMX447) as quickly as possible, ideally in 400 milliseconds or less, and the frame must be from the time of the capture request or after it.
I could not find any similar topic, so I hope this is not a duplicate.
Thanks in advance