Access the frame, crop it based on the detected bounding box, and send the cropped image via the Kafka broker

Hi

I was able to run the deepstream-test5 app with the Kafka broker. I am getting the Kafka message in the PAYLOAD_DEEPSTREAM_MINIMAL form. Now I want to crop the bounding box and send the cropped image along with the Kafka message. Where should I modify the code?

How can I attach the cropped image in the dsexample plugin and access it in the msgconv plugin to send it through Kafka?

Is this forum active?

They are active… generally for bugs though, I think… I was investigating the situation you are trying to handle and it doesn’t seem possible with the current version of DeepStream. NVIDIA don’t seem to want to share any info about upcoming features or a roadmap of any kind on here, so you are kind of on your own.

I’ve had to create a GStreamer application that dynamically adds to the pipeline to record detections to files and then send them up to the cloud. My requirement may be more complex than yours, as I require video clips and not just a still of the frame with a detection in it, or even a crop of what’s in the bounding box.

I know that I need to work on this function:

‘generate_event_msg_meta’ in deepstream_test5_app_main.c, but I am not sure how to get the frame and add it to NvDsEventMsgMeta, especially how to get the frame. They have provided something in dsexample, but I am not sure how to implement that here.

Alternatively, if I can find out how to add the NvDsEventMsgMeta in dsexample, that would still work, because the cropping is already being done in dsexample, but I don’t see how to add the output to the NvDsEventMsgMeta.
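For reference, attaching the metadata itself is roughly what deepstream-test4 already does in its probe; the sketch below follows that pattern, with meta_copy_func / meta_free_func standing in for that sample’s deep-copy and free callbacks. What I can’t see is how to carry the cropped image along with it:

```c
#include <glib.h>
#include "gstnvdsmeta.h"
#include "nvdsmeta_schema.h"

/* Sketch, following the deepstream-test4 pattern: called from a probe for
 * each detected object. meta_copy_func / meta_free_func are the deep-copy
 * and free callbacks implemented in that sample. */
static void
attach_event_msg_meta (NvDsBatchMeta * batch_meta, NvDsFrameMeta * frame_meta,
    NvDsObjectMeta * obj_meta)
{
  NvDsEventMsgMeta *msg_meta =
      (NvDsEventMsgMeta *) g_malloc0 (sizeof (NvDsEventMsgMeta));

  /* Copy the detection details that nvmsgconv serialises into the payload. */
  msg_meta->bbox.top = obj_meta->rect_params.top;
  msg_meta->bbox.left = obj_meta->rect_params.left;
  msg_meta->bbox.width = obj_meta->rect_params.width;
  msg_meta->bbox.height = obj_meta->rect_params.height;
  msg_meta->objClassId = obj_meta->class_id;
  msg_meta->trackingId = obj_meta->object_id;
  msg_meta->frameId = frame_meta->frame_num;

  /* Wrap it as user metadata so nvmsgconv/nvmsgbroker downstream can see it. */
  NvDsUserMeta *user_event_meta = nvds_acquire_user_meta_from_pool (batch_meta);
  if (user_event_meta) {
    user_event_meta->user_meta_data = (void *) msg_meta;
    user_event_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
    user_event_meta->base_meta.copy_func = (NvDsMetaCopyFunc) meta_copy_func;
    user_event_meta->base_meta.release_func = (NvDsMetaReleaseFunc) meta_free_func;
    nvds_add_user_meta_to_frame (frame_meta, user_event_meta);
  }
}
```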

Waiting for a moderator to reply.

You can get the frame as per this thread without using dsexample: https://devtalk.nvidia.com/default/topic/1061205/deepstream-sdk/rtsp-camera-access-frame-issue/1
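Roughly, that thread boils down to a buffer probe that maps the batched NvBufSurface; a sketch (error handling trimmed, and on Jetson you may additionally need NvBufSurfaceMap()/NvBufSurfaceSyncForCpu() before reading the pixels):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvbufsurface.h"

/* Sketch of a sink-pad probe (e.g. on the tiler or OSD) that maps the
 * batched NvBufSurface so the raw frame pixels become accessible. */
static GstPadProbeReturn
frame_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  GstMapInfo map_info;

  if (!gst_buffer_map (buf, &map_info, GST_MAP_READ))
    return GST_PAD_PROBE_OK;

  NvBufSurface *surface = (NvBufSurface *) map_info.data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList * l = batch_meta->frame_meta_list; l; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    NvBufSurfaceParams *frame = &surface->surfaceList[frame_meta->batch_id];
    /* frame->width, frame->height, frame->pitch and frame->dataPtr describe
     * the decoded frame; the rect_params of each NvDsObjectMeta in
     * frame_meta->obj_meta_list give the crop region within it. */
    (void) frame;
  }

  gst_buffer_unmap (buf, &map_info);
  return GST_PAD_PROBE_OK;
}
```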

The feature for Cropped encoding and setting it as user metadata will be made available in the upcoming DeepStream release.

+1, thank you.

+1

Hi Chris,

That’s great! With the next release, will we have the option to include the whole frame as metadata? Some use cases require the whole frame, not just the detected objects.

Thanks!

+1

Hi. Is this feature available in DeepStream 5? I cannot find any reference.

Yes, it’s available in DeepStream 5!

Please check this sample app:
apps/sample_apps/deepstream-image-meta-test
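In short, the sample JPEG-encodes each detected object inside a probe and attaches the result to the object as user metadata of type NVDS_CROP_IMAGE_META. A rough outline of that part (the sample also supports saving the crops to disk; ctx is created once at start-up with nvds_obj_enc_create_context and destroyed at shutdown):

```c
#include "gstnvdsmeta.h"
#include "nvds_obj_encode.h"

/* Rough outline of what deepstream-image-meta-test does in its pgie src-pad
 * probe: ask the encoder context to JPEG-encode each object's crop and
 * attach the encoded bytes to the object as NVDS_CROP_IMAGE_META user meta. */
static void
encode_object_crops (NvDsObjEncCtxHandle ctx, NvBufSurface * ip_surf,
    NvDsFrameMeta * frame_meta)
{
  for (NvDsMetaList * l = frame_meta->obj_meta_list; l; l = l->next) {
    NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l->data;
    NvDsObjEncUsrArgs userData = { 0 };

    userData.saveImg = FALSE;       /* don't write a .jpg to disk          */
    userData.attachUsrMeta = TRUE;  /* attach the JPEG bytes as user meta  */

    nvds_obj_enc_process (ctx, &userData, ip_surf, obj_meta, frame_meta);
  }
  /* Wait for the asynchronous encode jobs of this buffer to finish. */
  nvds_obj_enc_finish (ctx);
}
```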

Thanks!

I’ll check that.

I am trying to send the cropped object to a RabbitMQ queue.

I reviewed the code in deepstream-image-meta-test and it seems the way to go, thanks. But I still have some doubts about how to push that additional metadata all the way through the pipeline and send it via the AMQP broker.

From what I understand, this task involves:

  1. Get the cropped object from the metadata (ref: deepstream-image-meta); a rough sketch of this step is below the list.
  2. Save those JPEG bytes in a custom object and wrap it inside an extMsg (ref: deepstream-test4-app).
  3. Modify the nvmsgconv library to decode that custom object and generate the corresponding custom JSON payload.
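
For step 1, I am assuming something roughly like this, reading back the user metadata that deepstream-image-meta-test attaches (types are from nvds_obj_encode.h):

```c
#include "gstnvdsmeta.h"
#include "nvds_obj_encode.h"

/* Sketch of step 1: pull the JPEG bytes that nvds_obj_enc_process() attached
 * to an object as NVDS_CROP_IMAGE_META. From here the buffer would be copied
 * into a custom struct and wrapped as NVDS_EVENT_MSG_META extMsg (step 2)
 * before nvmsgconv turns it into the custom JSON payload (step 3). */
static void
read_encoded_crop (NvDsObjectMeta * obj_meta)
{
  for (NvDsUserMetaList * l = obj_meta->obj_user_meta_list; l; l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
    if (user_meta->base_meta.meta_type != NVDS_CROP_IMAGE_META)
      continue;

    NvDsObjEncOutParams *enc = (NvDsObjEncOutParams *) user_meta->user_meta_data;
    /* enc->outBuffer points to the JPEG data and enc->outLen is its size;
     * this is what would get (e.g. base64) encoded into the AMQP message. */
    (void) enc;
  }
}
```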

Is this right or is there a simpler way to achieve this task?

Thanks in advance,

Hi horacio.vico,

Please open a new topic for your issue.

Thanks