How can I send frames through Kafka along with metadata?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.2
• Language Python

The official sample demonstrates sending inference metadata over Kafka; I have tested and used it myself. With deepstream-app you only need to configure message sending through Kafka, and then you can write a Python script to receive the results.
If you want to send frame data, I don't think that is supported directly; you can try other approaches to solve your problem.

I want to convert the frames into a string and then add it along with the metadata to send through Kafka.

You can try the method below:
1. Use Base64 to encode your image.
2. Send it as NVDS_CUSTOM_MSG_BLOB type metadata.

How do I convert a frame to Base64? Is there a direct plugin?

You need to implement the Base64 encoding and decoding yourself, e.g. following Encoding-and-decoding-base-64-with-cpp.
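In Python the standard library already covers this; a minimal sketch of the encode/decode round trip (the `payload` bytes here are a stand-in for a real JPEG-encoded frame):

```python
import base64

def encode_frame(jpeg_bytes: bytes) -> str:
    """Encode raw JPEG bytes to a base64 text string for the Kafka payload."""
    return base64.b64encode(jpeg_bytes).decode("ascii")

def decode_frame(b64_string: str) -> bytes:
    """Recover the original JPEG bytes on the consumer side."""
    return base64.b64decode(b64_string)

# Round trip with a dummy payload (a real frame would come from a JPEG encoder)
payload = b"\xff\xd8\xff\xe0fake-jpeg-data"
assert decode_frame(encode_frame(payload)) == payload
```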

Should I add a JPEG encoder plugin and a probe to extract the frame and encode it to Base64, so that at the Kafka consumer I can save the frame in JPEG format? Can you please tell me the right approach in DeepStream Python?

We suggest that you encode it first. After encoding, the image size can be reduced a lot. It’s more efficient for network transmission.

We do not have a similar demo yet. Currently, you need to implement it yourself.

I would like to know if we can directly convert to Base64 after the pgie plugin, or whether we should add another converter plugin in between.

Because Base64 is a relatively simple algorithm, you can just apply it right before transferring the data to the broker.

Could you please tell me how to extract the data?

There are many methods for this; you can refer directly to the interface API.

I used a converter to convert the frame into RGBA and added a probe to extract the frames into a NumPy array. Then it was converted into BGR using OpenCV, encoded to JPEG format, and base64-encoded. Finally, the data is sent through kafka-python as a JSON object. Is there a better way? OpenCV is a heavy library, and I would like to remove the dependency on OpenCV.

To get the frame in RGBA format:

    caps = Gst.Caps.from_string("video/x-raw, format=RGBA")
    self.filter1.set_property("caps", caps)

Code inside the probe:

    overlay(batch_meta, frame_meta, frame_number, num_rects, obj_counter)
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame_copy = np.array(n_frame, copy=True, order='C')
    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGR)
    encoded_image = base64.b64encode(cv2.imencode('.jpg', frame_copy)[1]).decode()

    msg = {"camera": "abc",
           "detection": 'f',
           "time": 't',
           "frame": encoded_image}

    msg = json.dumps(msg).encode('utf-8')
    kafka_producer.send(topic, value=msg)
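On the consumer side, a message produced this way can be unpacked with the standard library alone. A sketch (field names match the producer snippet above; the dummy payload stands in for a real encoded frame):

```python
import base64
import json

def unpack_message(raw: bytes) -> dict:
    """Parse the JSON envelope and decode the base64 'frame' back to JPEG bytes."""
    msg = json.loads(raw.decode("utf-8"))
    msg["frame"] = base64.b64decode(msg["frame"])
    return msg

# Simulate one message as produced by the probe above
raw = json.dumps({
    "camera": "abc",
    "detection": "f",
    "time": "t",
    "frame": base64.b64encode(b"\xff\xd8dummy-jpeg").decode(),
}).encode("utf-8")

msg = unpack_message(raw)
assert msg["frame"].startswith(b"\xff\xd8")  # JPEG magic bytes
```

A real consumer would loop over a kafka-python `KafkaConsumer` subscribed to the topic and write `msg["frame"]` out to a `.jpg` file.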

If you use Python, OpenCV is the easier option. But if you use C/C++, you can try our API: sources\includes\nvds_obj_encode.h.
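That said, if OpenCV is only needed for the color conversion, that step at least can be done with a plain NumPy slice instead of `cv2.cvtColor` (JPEG compression itself still needs some encoder, e.g. OpenCV, Pillow, or a GStreamer `jpegenc` element). A sketch:

```python
import numpy as np

def rgba_to_bgr(frame_rgba: np.ndarray) -> np.ndarray:
    """RGBA -> BGR without OpenCV: drop alpha and reverse the RGB channel order."""
    return np.ascontiguousarray(frame_rgba[..., 2::-1])

# Tiny dummy frame: R=10, G=20, B=30, A=255 everywhere
frame = np.zeros((2, 2, 4), dtype=np.uint8)
frame[..., 0], frame[..., 1], frame[..., 2], frame[..., 3] = 10, 20, 30, 255

bgr = rgba_to_bgr(frame)
assert bgr.shape == (2, 2, 3)
assert list(bgr[0, 0]) == [30, 20, 10]  # B, G, R
```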

Is there any python implementation of the same?

No. If you want to use this in Python, you need to create the binding yourself. You can refer to the link below: BINDINGSGUIDE.md

I am not able to insert the NVDS_CUSTOM_MSG_BLOB into the message. Can you please share some reference code? I believe I have to update the frame meta, and not the user event meta; am I right?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

You can refer to our source code: osd_sink_pad_buffer_probe in sources\apps\sample_apps\deepstream-test4\deepstream_test4_app.c. Its basic process is similar to creating NVDS_EVENT_MSG_META type metadata.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.