Convert RAW to JPEG?

Hello, I am using a Basler camera, which produces raw RGB video as input to my object detector.
For detected objects I need to pass the whole frame image along with the detection metadata to the output as well. Since I work on raw RGB, I need a solution that can convert a raw frame to a JPEG image very fast (around 10-20 images/sec). How can I utilize the Jetson's nvjpeg for that in Python? Or is there some other viable solution?
Thanks!

If you can use GStreamer, you may try something like this (here using a simulated BGR video source and recording 300 frames, so 10 s at 30 fps) and saving into a file:

gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,format=RGB,width=320,height=240,framerate=30/1 ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvjpegenc ! image/jpeg,format=MJPG,framerate=30/1 ! filesink location=test_320x240p30.MJPG

You can play it back with:

gst-launch-1.0 filesrc location=test_320x240p30.MJPG ! image/jpeg,format=MJPG,width=320,height=240,framerate=30/1 ! nvjpegdec ! 'video/x-raw(memory:NVMM),format=I420' ! nvvidconv ! xvimagesink

Yeah, I am aware of GStreamer; I am looking for a more programmatic way (from a numpy array in Python).

It may be easy to wrap the numpy array into an OpenCV Mat and then use an OpenCV VideoWriter to push processed frames into a GStreamer pipeline that encodes to JPEG; see the sketch below. The catch is that OpenCV uses a JPEG library that is incompatible with the NV JPEG library, AFAIK, so you may not be able to use nvjpegenc. If you don't need a high pixel rate, the CPU plugin jpegenc may be enough.
There may be other solutions, such as using a v4l2loopback node or shmsink, but these would have significant CPU overhead and limitations.
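
For illustration, a minimal untested sketch of the VideoWriter approach, assuming your OpenCV build has GStreamer support; the resolution, quality value, and filename pattern are just placeholders:

import cv2
import numpy as np

W, H, FPS = 1920, 1080, 25

# appsrc is fed by writer.write(); jpegenc encodes on the CPU,
# multifilesink writes one JPEG file per frame.
gst_out = ("appsrc ! videoconvert ! jpegenc quality=70 "
           "! multifilesink location=detection_%05d.jpg")
writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, FPS, (W, H), True)
if not writer.isOpened():
    raise RuntimeError("VideoWriter failed to open; check GStreamer support in OpenCV")

frame_bgr = np.zeros((H, W, 3), dtype=np.uint8)  # placeholder for a detector frame
writer.write(frame_bgr)
writer.release()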

It may be better to programmatically build a GStreamer pipeline and send your buffers into appsrc, as sketched below.
The best solution may also depend on the final usage of the JPEG-encoded frames. Do you need an MJPG video file, a JPEG file per frame, or do you want to stream over some protocol?
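
As a rough untested sketch of the appsrc approach, assuming python3-gi GStreamer bindings on a Jetson where nvjpegenc accepts I420 from system memory; element names and caps may need adjusting for your L4T version:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import numpy as np

Gst.init(None)

W, H, FPS = 1920, 1080, 25

# One JPEG file per pushed frame; swap multifilesink for avimux ! filesink
# if you want a single MJPG file instead.
pipeline = Gst.parse_launch(
    "appsrc name=src is-live=true ! videoconvert ! video/x-raw,format=I420 "
    "! nvjpegenc ! multifilesink location=detection_%05d.jpg")
src = pipeline.get_by_name("src")
src.set_property("format", Gst.Format.TIME)
src.set_property("caps", Gst.Caps.from_string(
    f"video/x-raw,format=RGB,width={W},height={H},framerate={FPS}/1"))
pipeline.set_state(Gst.State.PLAYING)

def push_frame(rgb_frame: np.ndarray, frame_index: int) -> None:
    # rgb_frame: HxWx3 uint8 numpy array from the detector
    buf = Gst.Buffer.new_wrapped(rgb_frame.tobytes())
    buf.pts = frame_index * Gst.SECOND // FPS
    buf.duration = Gst.SECOND // FPS
    src.emit("push-buffer", buf)

# ... push frames as detections occur, then flush and stop:
# src.emit("end-of-stream")
# pipeline.set_state(Gst.State.NULL)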

Well, I have raw images from a machine-vision camera which are fed into my detector at 25 fps (on a Jetson Nano). When the detector identifies an object, I need that frame as a screenshot, but I cannot afford to store a 2 MPix raw screenshot; I need to pass it through a JPEG encoder to get it down to something like 100 kB. The problem is that an object can be detected up to 10 times per second, so I need this conversion to run with near-real-time performance.
So far it looks like a GStreamer pipeline with appsrc and appsink might be the answer?

Hi,
It looks like your source is a V4L2 source. There is a sample demonstrating capturing data into CUDA buffers:

/usr/src/jetson_multimedia_api/samples/v4l2cuda

By default the format is UYVY. You would need to customize it to capture RGB, and then it can run like:

RGB in CUDA buffer -> RGBA NvBuffer -> NvBufferTransform() -> YUV420 NvBuffer -> NvJPEGEncoder
