Currently DeepStream is publishing the detections and embeddings to Kafka. How can we publish the images along with them?

• GPU: L4
• DeepStream Version: 7
• Nvidia-Driver Version: 566.14 | CUDA Version: 12.7
• Issue Type: Modifying the output message structure of DeepStream

Currently DeepStream is publishing the detections and embeddings to Kafka. I want to check how we can publish the images along with them.

A sample of the structure I want to receive would look like:

{
    "frame_ID": "frame_001",
    "detections": [
        {
            "tracking_ID": "track_001",
            "class_label": "Vehicle",
            "confidence_threshold": 0.95,
            "bbox": {
                "topleftx": 1,
                "toplefty": 480,
                "bottomrightx": 99,
                "bottomrighty": 668
            }
        },
        {
            "tracking_ID": "track_002",
            "class_label": "Person",
            "confidence_threshold": 0.89,
            "bbox": {
                "topleftx": 120,
                "toplefty": 300,
                "bottomrightx": 200,
                "bottomrighty": 400
            }
        }
    ],
    "frame": "base64_encoded_frame_data"
}

Please provide the link to deepstream_test4_app.c, and explicitly mention where to put the code snippet given below.

if (usrMetaData->base_meta.meta_type == NVDS_CROP_IMAGE_META) {
    NvDsObjEncOutParams *enc_jpeg_image =
        (NvDsObjEncOutParams *) usrMetaData->user_meta_data;
    START_PROFILE;
    encoded_data = g_base64_encode (enc_jpeg_image->outBuffer, enc_jpeg_image->outLen);
    /* ... */
}

Also, please note, my objective is to extract the entire frame, not just the object crops.

I only want to extract the frame and push it to Kafka as a base64-encoded frame without a MASSIVE drop in speed.

Please refer to “3. Send the image by the broker based on Kafka” in the readme of the ready-made sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/.


Hi, how did you publish embeddings to Kafka? I’m stuck and unable to get that done.

Thanks in advance!

Please refer to this ready-made code.

Cool, my bad.
I was trying to consume the embeddings using the MTMC app from Metropolis.
Apparently, I used the settings below in the configs.

msg-conv-payload-type: 2
msg-conv-msg2p-new-api: 1

This does not directly consume the embeddings.
Do you have any pointers on modifying the protobuf schema? And what is the path to the protobuf file?

Here are the steps.

  1. On the app layer, use nvds_add_user_meta_to_frame to add the embedding information to the frame meta. It is similar to the code in the last comment.
  2. The nvmsgconv plugin and the low-level lib are open source. Please find generate_dsmeta_message_protobuf in nvds_msg2p_generate_new of /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv/nvmsgconv.cpp. You can get the embedding information from the frame meta and wrap it into nv::Frame pbFrame.
  3. Rebuild and replace libnvds_msgconv.so according to the readme. You can add some logs to make sure the new .so is used.

“3. Send the image by the broker based on Kafka”
What does this mean?
Is this the link you are talking about?

Please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/readme in the DeepStream SDK, especially the part “3. Send the image by the broker based on Kafka” in the readme.

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/readme

Do I need to clone DeepStream for that?
Please explain the steps to get the readme file.

Please refer to this link for how to install the DeepStream SDK, or you can download the DeepStream SDK from this link.

What about deepstream-test5?
There is no mention of publishing image data to Kafka there.

Both test4 and test5 are open source. You can port the JPEG-sending logic in osd_sink_pad_buffer_image_probe to test5.