Instead of using OpenCV (CPU), how can I draw on an image using the GPU?
Did you use DeepStream for this?
No. DeepStream is very good, but my use case does not use DeepStream.
How does DeepStream draw many texts and boxes on a frame without using the CPU?
We have the nvosd plugin. This plugin draws bounding boxes, text, and region-of-interest (RoI) polygons. (Polygons are presented as a set of lines.) The plugin accepts an RGBA buffer with attached metadata from the upstream component. It draws bounding boxes, which may be shaded depending on the configuration (e.g. width, color, and opacity) of a given bounding box. It also draws text and RoI polygons at specified locations in the frame. Text and polygon parameters are configurable through metadata.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvdsosd.html
Hi @Amycao, thanks for your reply.
I tried to get the drawn frame after the nvosd plugin as below:
user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
print(user_meta.user_meta_data)
How do I get a numpy frame from user_meta.user_meta_data?
Please refer to the code after the comment below in the function tiler_sink_pad_buffer_probe from the app deepstream-imagedata-multistream for how to access the frame data.
# Getting Image data using nvbufsurface
# the input should be address of buffer and batch_id
Hello @Amycao,
I checked that code and successfully got the frame, but the frame has not been drawn on yet:
# Do get cv2 frame
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
# n_frame = draw_bounding_boxes(n_frame, obj_meta, obj_meta.confidence)
# convert python array into numpy array format in the copy mode.
frame_copy = np.array(n_frame, copy=True, order='C')
# convert the array into cv2 default color format
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
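For reference, the cvtColor call above is a pure channel reorder (swap R and B, keep G and A). A minimal NumPy-only sketch of the same RGBA-to-BGRA swap, with no pyds or OpenCV required (the tiny test frame is made up for illustration):

```python
import numpy as np

# A tiny 2x2 RGBA test frame (values are arbitrary).
rgba = np.array([[[10, 20, 30, 255], [40, 50, 60, 255]],
                 [[70, 80, 90, 255], [100, 110, 120, 255]]], dtype=np.uint8)

# RGBA -> BGRA: reorder the last axis, swapping the R and B channels.
bgra = rgba[..., [2, 1, 0, 3]].copy()

print(bgra[0, 0])  # [ 30  20  10 255]
```

This is equivalent to what cv2.COLOR_RGBA2BGRA does; the explicit copy keeps the array C-contiguous, like the np.array(..., copy=True, order='C') step above.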
I want to get the frame already drawn by nvosd. I do not want to have to redraw it with OpenCV.
I found a file where they said you can get the drawn frame from nvosd. I tried to do it in Python but it did not work.
NvDsUserMetaList *usrMetaList = obj_meta->obj_user_meta_list;
FILE *file;
while (usrMetaList != NULL) {
    NvDsUserMeta *usrMetaData = (NvDsUserMeta *) usrMetaList->data;
    if (usrMetaData->base_meta.meta_type == NVDS_CROP_IMAGE_META) {
        NvDsObjEncOutParams *enc_jpeg_image = (NvDsObjEncOutParams *) usrMetaData->user_meta_data;
        /* Write the encoded JPEG buffer for this object crop to a file. */
        file = fopen (fileNameString, "wb");
        fwrite (enc_jpeg_image->outBuffer, sizeof (uint8_t), enc_jpeg_image->outLen, file);
        fclose (file);
    }
    usrMetaList = usrMetaList->next;
}
This is the C/C++ version.
If you want to use NvDsObjEncOutParams in Python, you need Python bindings for it.
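Assuming such a binding exposed the encoded buffer to Python as a bytes object (the jpeg_bytes placeholder below is made up for illustration), saving it would mirror the fopen/fwrite/fclose calls in the C snippet:

```python
import os
import tempfile

# Placeholder for the encoded JPEG buffer (enc_jpeg_image->outBuffer in the
# C snippet); in a real pipeline these bytes would come from the binding.
jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 16 + b"\xff\xd9"

# Write the buffer to disk in binary mode, like fopen(..., "wb") + fwrite.
path = os.path.join(tempfile.gettempdir(), "object_crop.jpg")
with open(path, "wb") as f:
    f.write(jpeg_bytes)
```

The with block closes the file automatically, which is the Python equivalent of the explicit fclose in the C version.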
The easy way to do it is to add a probe on the osd src pad (a buffer probe on the sink pad would see the frame before nvosd processes it); by that time, the frame has already been drawn by nvosd.
I want to take the drawn frame and then send it via Kafka. I have been trying to do this for weeks without success. Do you have any examples that could help me?
We do not have examples for this. Did you run into any specific issue?