Fast NVMe frame dump from camera using multiprocessing in Python

I have a question, in case anyone has encountered a similar problem.
I have a MIPI camera reading at 60 fps, and I store the frames to a multiprocessing queue, frame by frame.

I would like to save these frames to an M.2 NVMe drive at the same 60 fps.
Is this possible? I cannot get the code working properly, and I have tried all the stupid things. Like Einstein's definition of insanity: “doing the same thing over and over expecting a different result”… :D

I have the main thread (GUI), which starts one camera-reader process and one process that saves the images.
The first process reads frames and puts them on the queue; the second process gets an item and writes it.

I have tried adding two consumer processes, but that does not seem to work. Since the drive should be capable of writing in parallel, I assume the queue lock is preventing me from going faster; hence my attempt to create two queues. The camera-reader process alternates between queue1 and queue2, consumer process 1 reads from queue1, and process 2 reads from queue2. This does not seem to work either.

Any clues? Can I get the frames directly from a GStreamer v4l2src to the drive via some memory magic? I need this because I have further AI processing to do on the images, so if I lose time on writing, I'm sure I'll have problems running the AI stuff.

  • For writing to file I use cv2.imwrite.
  • The queue is a multiprocessing.Queue, not a JoinableQueue.
  • The camera must use v4l2src, not nvarguscamerasrc (I can't use it as the first element in the pipeline, if you understand…).

I can provide code if it would help…
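In outline, the structure is like this (a simplified sketch; names, paths, and the sentinel scheme are illustrative, not my exact code):

import multiprocessing as mp
import cv2

def cam_reader(gst_str, q):
    # Producer: grab frames from the camera and put them on the queue
    cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        q.put(frame)
    q.put(None)  # sentinel so the writer knows to stop

def frame_writer(q):
    # Consumer: pop frames off the queue and write them to the NVMe drive
    i = 0
    while True:
        frame = q.get()
        if frame is None:
            break
        cv2.imwrite('Images/frame_%05d.jpg' % i, frame)
        i += 1

if __name__ == '__main__':
    q = mp.Queue()
    gst_str = 'v4l2src device=/dev/video0 ! ... ! appsink'  # full pipeline, see below
    reader = mp.Process(target=cam_reader, args=(gst_str, q))
    writer = mp.Process(target=frame_writer, args=(q,))
    reader.start(); writer.start()
    reader.join(); writer.join()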

What about recording as a video file instead?

Have a look at the Multimedia API samples for recording/decoding.

Use the commands below to get the sample code.

sudo apt list -a nvidia-l4t-jetson-multimedia-api
sudo apt install nvidia-l4t-jetson-multimedia-api=32.4.3-20200625213407

And below is the link for the API:
https://docs.nvidia.com/jetson/l4t-multimedia/index.html


Hi, thank you for your reply.

I will try that, but I need the result as individual images. Of course I could use ffmpeg and split the file afterwards, but the idea was to dump frames directly. I will use the images later for classification.
I guess the overhead of writing individual images instead of a streamed video file is one difference, but I would like to make it optimized and fast. I have read about hardware-accelerated writes to file, but I don't know what that means in practice when writing the code in Python. Does it mean that I just pipe the camera input through GStreamer and create a filesink? I guess that is what I need to do, using nvjpegenc?

Like dusty_nv wrote (feb 16): gst-launch-1.0 … ! nvjpegenc ! filesink location=file.jpg -e

and then create a continuous pipe: … -e ! appsink
Then perhaps I could both use it for further processing and get the file to disk; something like the sketch below.
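I.e. a tee with one branch going to file and one to appsink. An untested sketch, where the element order and caps are my guess (in the Python code the gst-launch-1.0 prefix would be dropped and the string handed to the appsink-based capture):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=1600, height=1300, format=GRAY8, framerate=60/1' \
  ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! tee name=t \
  t. ! queue ! nvjpegenc ! filesink location=file.jpg \
  t. ! queue ! nvvidconv ! 'video/x-raw, format=BGRx' ! appsink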

It should be video encoding, not JPEG encoding. Have a look at the command line below for reference.

gst-launch-1.0 nvcamerasrc num-buffers=600 ! 'video/x-raw(memory:NVMM), width=1920, height=1080' ! omxh264enc bitrate=14000000 ! qtmux ! filesink location=temp.mp4

Thank you!

But :D

  1. I can't use nvcamerasrc but must use v4l2src: I use an Arducam, and it has a different pixel format that is not supported directly by nv…
  2. If I want to dump frame by frame I must use JPEG, not qtmux. I will try the video file, but if you also have an encoder for individual frames I would appreciate it. My current string to get from the source through the NVIDIA elements to appsink is:

gst_str = ('v4l2src device=/dev/video0 ! video/x-raw , width=(int)1600 , height=(int)1300 , format=(string)GRAY8 , framerate=(fraction)60/1 '
           '! nvvidconv ! video/x-raw(memory:NVMM), width=(int)800, height=(int)600, format=I420 ! '
           'nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR, width=(int)400 , height=(int)400 ! appsink ')

So can I just add
! omxh264enc bitrate=14000000 ! qtmux ! filesink location=temp.mp4 so it reads:

gst_str = ('v4l2src device=/dev/video0 ! video/x-raw , width=(int)1600 , height=(int)1300 , format=(string)GRAY8 , framerate=(fraction)60/1 '
           '! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1600, height=(int)1300, format=I420 ! '
           'nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR, width=(int)400 , height=(int)400 ! appsink ! videoconvert ! omxh264enc bitrate=14000000 ! qtmux ! filesink location=temp.mp4')

or:
gst_str = ('v4l2src device=/dev/video0 ! video/x-raw , width=(int)1600 , height=(int)1300 , format=(string)GRAY8 , framerate=(fraction)60/1 '
           '! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1600, height=(int)1300, format=I420 ! '
           'nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR, width=(int)400 , height=(int)400 ! appsink ! videoconvert ! jpegenc ! filesink location=out.jpg')

If so, can I have an iterating counter in the filename? Am I making any sense? :D

Hi, so I got that to work. I teed the source into two queues: one goes to a video-file output, and from the other I could pick the frames up in the GUI thread and see the image in a window. The gst string is:

gst_str = ('v4l2src device=/dev/video0 ! video/x-raw , width=(int)1600 , height=(int)1300 , format=(string)GRAY8 , framerate=(fraction)60/1 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1600, height=(int)1300, format=I420 ! tee name=t t. ! queue ! omxh265enc ! matroskamux ! filesink location=test.mkv t. ! queue ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR, width=(int)400 , height=(int)400 ! appsink ')
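On the Python side, the appsink branch is picked up roughly like this (a sketch, assuming OpenCV built with GStreamer support; the window handling is illustrative):

import cv2

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)  # gst_str from above
while True:
    ok, frame = cap.read()  # 400x400 BGR frames from the appsink branch
    if not ok:
        break
    cv2.imshow('preview', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()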

But how can I write individual frames instead of a video file? Is it possible to filesink to JPEG? It must increment the filename, of course.

br Magnus

Fixed it. For others who want to know, here is the answer using an Arducam MIPI camera with v4l2src, piping both to files and to further processing.

gst_str = ('v4l2src device=/dev/video0 ! video/x-raw , width=(int)1600 , height=(int)1300 , format=(string)GRAY8 , framerate=(fraction)60/1 '
           '! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1600, height=(int)1300, format=I420 ! tee name=t '
           't. ! queue ! '
           'nvjpegenc ! '
           'multifilesink location=Images4/test_%05d.jpg '
           't. ! queue ! '
           'nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR, width=(int)400 , height=(int)400 ! appsink ')
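For completeness: multifilesink substitutes an incrementing frame index into the %05d pattern, so this writes Images4/test_00000.jpg, Images4/test_00001.jpg, and so on, while the appsink branch is read in Python just like in the earlier snippet, e.g.:

cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # 400x400 BGR frame, ready for the AI processing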

The next thing I will let you know is how, or whether it is possible, to get the framerate into the filename or as an overlay in GStreamer.

Thank you for the references!

I did not solve the problem with fpsdisplaysink. Does anyone know how to incorporate it into the gst string above?
Do I have to add another queue? I want the overlay on the images that I save.
Perhaps I need to use clockoverlay or timeoverlay and calculate the framerate by hand afterwards… but that is not great…
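One untested idea for the overlay: timeoverlay works on system-memory raw video, so the JPEG branch would have to leave NVMM before overlaying and encoding. The JPEG branch of the gst_str above might then become something like this (my assumption, not a verified pipeline):

't. ! queue ! '
'nvvidconv ! video/x-raw, format=I420 ! '
'timeoverlay ! '
'nvjpegenc ! '
'multifilesink location=Images4/test_%05d.jpg '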