Capture a jpg with the results of the inference in deepstream_nvdsanalytics.py

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.5.2.2
• Issue Type (questions, new requirements, bugs) Questions

Hi, I would like to integrate a function into this example code that captures a jpg image of what is shown on the screen, i.e. the original image plus the DeepStream annotations, detected objects, etc.

What would be the right strategy? I’ve seen this on Stack Overflow, but I’m not sure it applies: https://stackoverflow.com/questions/68777303/how-to-use-deepstream-sdk-to-take-a-video-and-just-extract-the-frames-in-jpg

I can’t find this covered specifically in the DeepStream documentation either.

Thanks!

You may put nvjpegenc (or jpegenc if you are using an Orin Nano) after your osd plugin. There is a reference pipeline here: Accelerated GStreamer — Jetson Linux Developer Guide 34.1 documentation (nvidia.com)
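
If you want the same code to run on devices with and without the hardware encoder, you can probe for it at element-creation time and fall back to the software one (a sketch, not from the sample; the helper name is mine, and it assumes Gst.init() has already been called):

def make_jpeg_encoder():
    # Prefer the hardware JPEG encoder where it exists; Orin Nano has no
    # hardware encoder, so fall back to the software jpegenc.
    enc = Gst.ElementFactory.make('nvjpegenc', 'jpeg-encoder')
    if enc is None:
        enc = Gst.ElementFactory.make('jpegenc', 'jpeg-encoder')
    return enc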

Thank you very much for your reply, I am trying jpegenc as I use an Orin Nano.

I create the elements like this:

# Create jpegenc instance to encode jpeg format.
print("Creating jpegenc ")
jpegenc = Gst.ElementFactory.make('jpegenc', 'jpegenc')
if not jpegenc:
    sys.stderr.write(" Could not create jpegenc \n")
jpegenc.set_property('quality', 50)

# Create filesink instance for jpeg saving
print("Creating filesink ")
filesink = Gst.ElementFactory.make("filesink", "filesink")
if not filesink:
    sys.stderr.write(" Could not create filesink \n")

# Set filesink properties, location
filesink.set_property("location", "output.jpg")

My pipeline linking order is as follows:
streammux → pgie → tracker → nvanalytics → tiler → nvvidconv → nvosd → jpegenc → filesink → sink
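
The linking calls for that order look like this (a sketch with the variable names assumed from the sample code):

# Link the elements in the order listed above.
streammux.link(pgie)
pgie.link(tracker)
tracker.link(nvanalytics)
nvanalytics.link(tiler)
tiler.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(jpegenc)
jpegenc.link(filesink)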

But the screen output never appears; at frame 5-6 it pauses and doesn’t show any error, just:

**PERF: {'stream0': 0.0}

What am I doing wrong?

The last sink is a filesink, so there is no screen output. And since the filename is fixed at output.jpg, every frame overwrites the previous one, so there will only ever be one output file.
As there is no hardware encoder in the Orin Nano, you have to use a software encoder if needed, which costs quite a lot of CPU resource in this scenario.
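
If you want to keep the on-screen display and also write JPEGs, one common pattern is a tee after nvosd, with the JPEG branch ending in multifilesink so each frame gets its own numbered file rather than overwriting output.jpg (a sketch with assumed variable names, not code from the sample):

# Split the stream after nvosd: one branch to the display sink, one to the
# JPEG writer. Each tee branch needs its own queue, and an extra
# nvvideoconvert may be needed before jpegenc to copy frames out of NVMM.
tee = Gst.ElementFactory.make("tee", "tee")
q_disp = Gst.ElementFactory.make("queue", "q-disp")
q_jpeg = Gst.ElementFactory.make("queue", "q-jpeg")
mfsink = Gst.ElementFactory.make("multifilesink", "mfsink")
mfsink.set_property("location", "frame_%05d.jpg")  # one file per frame

for e in (tee, q_disp, q_jpeg, mfsink):
    pipeline.add(e)

nvosd.link(tee)
tee.link(q_disp)
q_disp.link(sink)       # display branch
tee.link(q_jpeg)
q_jpeg.link(jpegenc)    # software JPEG branch (costs CPU on Orin Nano)
jpegenc.link(mfsink)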

Ok, I’ll try to get the image with OpenCV instead, similar to the deepstream-imagedata-multistream-redaction example. As it will run only once every 30 to 60 seconds, it will not affect the overall performance very much.

With this I get the frame of one of the input video streams:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

How could I get the output video stream instead, i.e. a screenshot of what is currently being displayed?

That would look much better.

This code can save objects to disk with OpenCV; you can modify crop_object to save the whole frame.
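
For example, the crop-and-save part of that sample can be reduced to something like this to write the whole frame (a sketch; the function and file names are mine, and it assumes the stream has been converted to RGBA upstream):

import numpy as np
import cv2
import pyds

def save_full_frame(gst_buffer, frame_meta):
    # Map the RGBA frame from the batched buffer, copy it out, and convert
    # the channel order so OpenCV can write it.
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame_copy = np.array(n_frame, copy=True, order='C')
    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
    cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame_copy)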

That way I have been able to solve it. I leave my steps here in case they are useful to anyone:

In the pipeline I add filter1 and nvvidconv1 between the nvvidconv and nvosd stages to convert to RGBA.
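
Creating those two elements looks like this (following the pattern from the imagedata samples):

# nvvidconv1 converts the stream to RGBA so the frames are readable from
# Python; filter1 pins the caps to RGBA in NVMM memory.
nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
if not nvvidconv1:
    sys.stderr.write(" Unable to create nvvidconv1 \n")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
if not filter1:
    sys.stderr.write(" Unable to create filter1 \n")
filter1.set_property("caps",
                     Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))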

Then I use:

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_copy = np.array(n_frame, copy=True, order='C')
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)

Now I draw the values that interest me onto the frame with OpenCV, and then save it with cv2.imwrite.
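
Putting it together, the buffer probe (attached after the RGBA conversion, e.g. on the osd sink pad) looks roughly like this (a sketch; the drawn text, file names, and save interval are placeholders):

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Copy the RGBA frame out of the buffer and convert it for OpenCV.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(n_frame, copy=True, order='C')
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)

        # Annotate with whatever values are of interest, then save.
        # In practice this runs only once every 30-60 s, not on every frame.
        cv2.putText(frame_copy, "frame %d" % frame_meta.frame_num, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imwrite("snapshot_%d.jpg" % frame_meta.frame_num, frame_copy)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK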

Thank you very much to all once again! This is the best-attended manufacturer forum I participate in; congratulations to the whole team.