Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Questions
• How to reproduce the issue? (This is for bugs. Include the sample app used, the configuration file contents, the command line, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
How should I add a base 64 encoded frame to a user_meta object so that it is sent in my AMQP payload?
I have gotten the frame data into the payload by using the otherAttrs field and implementing my own eventmsg_payload.cpp. This produces the payload I want, but it often results in a segfault.
Here’s how I get the frame.
import base64

import cv2
import numpy as np
import pyds
# is_aarch64 comes from the DeepStream Python apps' common helpers

# Get the frame as a NumPy array backed by the NvBufSurface;
# the inputs are the address of the Gst buffer and the batch_id.
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
# Convert to a NumPy array in copy mode so we own the memory.
frame_copy = np.array(n_frame, copy=True, order='C')
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
frame_copy = frame_copy.astype(np.uint8)
success, encoded_image = cv2.imencode('.png', frame_copy)
base64_string = base64.b64encode(encoded_image.tobytes()).decode('utf-8')
msg_meta.otherAttrs = base64_string
if is_aarch64():
    # On Jetson the buffer is mapped to CPU memory for retrieval, so it must
    # also be unmapped. Make this call only after all operations on the
    # original array are complete; n_frame cannot be accessed afterwards.
    pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
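For scale, base64 inflates data by a factor of 4/3, so a PNG-encoded HD frame easily becomes a string of a megabyte or more in otherAttrs. A stdlib-only sketch (dummy zero bytes standing in for the real PNG output of cv2.imencode):

```python
import base64

# Dummy stand-in for the PNG bytes produced by cv2.imencode above
# (1 MiB of zeros; a real HD frame is in the same order of magnitude).
png_bytes = bytes(1024 * 1024)

b64 = base64.b64encode(png_bytes).decode('utf-8')

# base64 output length is 4 * ceil(n / 3), i.e. ~1.33x the input size.
assert len(b64) == 4 * ((len(png_bytes) + 2) // 3)

# Round-trip check: decoding recovers the exact original bytes.
assert base64.b64decode(b64) == png_bytes
```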
and here is the eventmsg_payload.cpp code that adds the frame:
json_object_set_string_member (jobject, "frame", events[0].metadata->otherAttrs);
I’ve also tried validating that events[0].metadata != nullptr; that made no difference.
I’ve updated meta_copy_func to copy the field:
if srcmeta.otherAttrs:
    dstmeta.otherAttrs = pyds.get_string(srcmeta.otherAttrs)
and updated meta_free_func to free the buffer:
pyds.free_buffer(srcmeta.otherAttrs)
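For comparison, the deepstream-test4 Python sample deep-copies the whole NvDsEventMsgMeta struct in the copy callback before duplicating each string field individually. A condensed sketch of that pattern applied to otherAttrs (the pyds calls are those used in the sample; this is a sketch under that assumption, not a verified drop-in fix):

```python
import sys
import pyds

def meta_copy_func(data, user_data):
    # Cast the user meta and deep-copy the underlying NvDsEventMsgMeta
    # struct so the downstream msgconv owns its own copy.
    user_meta = pyds.NvDsUserMeta.cast(data)
    srcmeta = pyds.NvDsEventMsgMeta.cast(user_meta.user_meta_data)
    dstmeta_ptr = pyds.memdup(pyds.get_ptr(srcmeta),
                              sys.getsizeof(pyds.NvDsEventMsgMeta))
    dstmeta = pyds.NvDsEventMsgMeta.cast(dstmeta_ptr)
    # Every string field must also be duplicated; get_string() returns a
    # newly allocated copy, so dstmeta no longer aliases srcmeta's pointer.
    if srcmeta.otherAttrs:
        dstmeta.otherAttrs = pyds.get_string(srcmeta.otherAttrs)
    return dstmeta

def meta_free_func(data, user_data):
    user_meta = pyds.NvDsUserMeta.cast(data)
    srcmeta = pyds.NvDsEventMsgMeta.cast(user_meta.user_meta_data)
    # Free only buffers that were actually allocated.
    if srcmeta.otherAttrs:
        pyds.free_buffer(srcmeta.otherAttrs)
```

In the sample, every string field set on the message meta (ts, sensorStr, and so on) is duplicated in the copy callback and freed in the free callback; a field that is shallow-copied, or freed while another copy still points at it, is a common source of exactly this kind of segfault.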
I feel confident this segfault is caused by this logic, because commenting out the msg_meta.otherAttrs = base64_string line makes my pipeline run consistently without a segfault.
Is there something else I’m missing that might be causing this segfault? Is otherAttrs a bad place to put such a large piece of data? Could it be causing a buffer overflow or something similar?
It’s also worth noting that I have implemented the accepted answer here. No change.