**• Hardware Platform (Jetson / GPU)** Jetson
**• DeepStream Version** 6.1.1
**• Issue Type (questions, new requirements, bugs)** questions
I found that the code and plugins are highly modular and integrated: we can control the whole operation just by changing some parameters in the config files.
But after reading a lot of the DeepStream code, I would like to add some extra functionality, such as edge detection, to identify the angle of specific objects.
However, I don't know where to add the function, how to obtain every frame of the video, or how to add my result (the angle) to the display.
You can refer to our source code below to draw anything on every frame of the video. This demo shows how to draw the number of people and cars on the frame.
sources\apps\sample_apps\deepstream-test1\deepstream_test1_app.c
```c
static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
```
Thank you for your answers!
It seems that I can only get the metadata of each frame, not the actual frame image itself.
I want to work on the image directly in my “egde_detection(const Mat& img)” function, which returns an angle, so its input needs to be a frame-by-frame image.
You can refer to gst-dsexample, which is basically a GStreamer pipeline element that demonstrates DeepStream's OpenCV integration. It is also supported in deepstream-app out of the box.
You can also refer to our nvdsvideotemplate plugin to process the raw data according to your needs.
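If you go the gst-dsexample route with deepstream-app, the element is enabled from the app config file. A minimal sketch of the relevant group (group and property names as they appear in the shipped sample configs; verify the exact keys against your SDK version):

```ini
[ds-example]
enable=1
# Resolution at which gst-dsexample runs its OpenCV processing
processing-width=640
processing-height=480
# 0 = process per-object crops, 1 = process the full frame
full-frame=0
unique-id=15
```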
Lately I have been testing sources/apps/sample_apps/deepstream-app with the command “deepstream-app -c deepstream_app_config.txt”. The deepstream_app_config.txt is a config file for a YOLOv8 model plus the other elements' configs (source, sink0, and so on).
I find that almost all the elements are highly integrated. For example, the osd element is created by “create_osd_bin”; inside that function the osd bin is composed of several plugins, such as nvvideoconvert, queue, and nvdsosd.
However, I can't find where the nvvideoconvert or queue plugin is defined. What I really want to know is: since there are so many elements in the pipeline, which part should I change to add my edge-detection function (to get the rotation angle of objects) into the whole pipeline?
And since the elements are highly integrated, should I change the code in gst-plugins to modify some plugins?
If you want to customize your own inference, we don’t recommend using deepstream-app directly. As you said, it’s highly integrated. We suggest that you refer to our deepstream_tao_apps for easy integration.
The logic I want to implement is simple: I want to define a function for edge detection. Maybe there is somewhere I can just add a few lines of code to do this, since the code itself is short.
The only input I need is the frame-by-frame images from the decoded video, and the output is an angle; that's it. So how does each element handle the frame-by-frame images? Maybe I can follow that pattern and add my code there.
Given this description, where do you think I should look? Or is it still the same answer as before?
Sincerely looking forward to your reply.
We don't recommend making changes directly in deepstream-app; it can get complicated. But if you insist on that, you can try adding your code in the probe function below. Please refer to other demos, such as deepstream-image-meta-test, to learn how to get the data from the buffer.
You can also refer to our source code sources\gst-plugins\gst-dsexample to learn how to get the data from the NvBufSurface.
sources\apps\sample_apps\deepstream-app\deepstream_app.c
```c
static GstPadProbeReturn
gie_processing_done_buf_prob (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
```
I may need an extra library, namely opencv2/opencv.hpp, so there must be something I need to add to the Makefile to build the new DeepStream application. What should I do?
You can refer to our source code sources\gst-plugins\gst-dsexample to learn how to compile with OpenCV in DeepStream.
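Concretely, the usual approach is to add OpenCV's pkg-config output to the app's Makefile. A sketch, assuming OpenCV 4 registers its pkg-config module as `opencv4` (older installs may use `opencv` instead):

```make
# Append OpenCV flags to the existing Makefile variables
CFLAGS += $(shell pkg-config --cflags opencv4)
LIBS   += $(shell pkg-config --libs opencv4)

# Note: OpenCV is a C++ library, so any source file that calls it must be
# compiled as C++ (.cpp) and the final link must use g++ rather than gcc.
```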
I have seen the probe:
```c
static GstPadProbeReturn
gie_processing_done_buf_prob (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
  guint index = bin->index;
  AppCtx *appCtx = bin->appCtx;
  if (gst_buffer_is_writable (buf))
    process_buffer (buf, appCtx, index);
  return GST_PAD_PROBE_OK;
}
```
I think the key code is the process_buffer part, and process_buffer calls process_meta, which handles things like the text displayed next to the box.
So my question is: where should I add my code? In gie_processing_done_buf_prob, in process_buffer, or even in process_meta?
You can add your own function in gie_processing_done_buf_prob.
Thank you for your reply!
After I get the angle, I want to save it in obj_meta_list, where the data of each detected object is stored, so that I can display the angle on the screen just like each object's label and confidence.
Do you think that is possible?
That is to say, I want to add the angle to the NvDsObjectMeta, which holds the confidence, obj_label, and so on for each detected object. Is that possible?
No, because there is no field in NvDsObjectMeta to record the angle. You can try using user custom metadata instead.
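A sketch of attaching the angle to an object as user metadata; the pattern follows the deepstream-user-metadata-test sample app. This is not compilable standalone (it needs nvdsmeta.h), and `copy_angle_meta` / `release_angle_meta` are hypothetical callbacks you would have to supply to deep-copy and free the payload:

```c
/* Sketch only: requires nvdsmeta.h from the DeepStream SDK. */
static const NvDsMetaType ANGLE_META_TYPE =
    (NvDsMetaType) NVDS_START_USER_META;   /* app-specific user meta type id */

NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
float *angle_data = (float *) g_malloc0 (sizeof (float));
*angle_data = angle;                       /* result of your edge detection */

user_meta->user_meta_data = angle_data;
user_meta->base_meta.meta_type = ANGLE_META_TYPE;
user_meta->base_meta.copy_func = copy_angle_meta;       /* your deep-copy cb */
user_meta->base_meta.release_func = release_angle_meta; /* your free cb */

nvds_add_user_meta_to_obj (obj_meta, user_meta);
```

To actually render the angle next to the existing label and confidence, you can also rewrite `obj_meta->text_params.display_text` in the same probe, before nvdsosd draws the frame.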
Thank you, I will try that later. Before that, I wrote something to implement my idea in gie_processing_done_buf_prob:
```cpp
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
    l_frame = l_frame->next) {
  //NvDsFrameMeta *frame_meta = gst_meta_get_nvds_frame_meta (batch_meta, buf);
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
  for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
      l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    // Crop the object region
    cv::Rect bbox (obj->rect_params.left, obj->rect_params.top,
        obj->rect_params.width, obj->rect_params.height);
    // Convert the GStreamer buffer into an OpenCV image
    GstMapInfo map;
    gst_buffer_map (buf, &map, GST_MAP_READ);
    cv::Mat frame (cv::Size (frame_meta->source_frame_width,
            frame_meta->source_frame_height),
        CV_8UC3, (uchar *) map.data, cv::Mat::AUTO_STEP);
    gst_buffer_unmap (buf, &map);
    // Crop out the object image
    cv::Mat object_img = frame (bbox);
    // Compute the angle
    float angle;
    Ellipse_feature_extraction2 (object_img, angle);
```
Ellipse_feature_extraction2(object_img, angle) is my custom function for edge detection; it returns an angle for each detected object in every frame.
Do you think the logic is right?
Besides, since some of the code involves OpenCV, I am considering implementing the function in a separate .cpp file and then calling it from this C file.
About this part: map.data is not the raw image data; it is an NvBufSurface. You can refer to our source code sources\gst-plugins\gst-dsexample\gstdsexample_optimized.cpp to learn how to get the raw data from the NvBufSurface.
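For reference, the mapping pattern in gstdsexample_optimized.cpp looks roughly like this. This is a sketch only, not compilable standalone: it needs the DeepStream SDK headers, and it assumes the buffer format is RGBA (insert an nvvideoconvert upstream if it is not):

```cpp
// Sketch: map the NvBufSurface behind the GstBuffer and wrap it as cv::Mat.
GstMapInfo in_map_info;
gst_buffer_map (buf, &in_map_info, GST_MAP_READ);
NvBufSurface *surface = (NvBufSurface *) in_map_info.data;

guint idx = frame_meta->batch_id;          // which frame of the batch
NvBufSurfaceMap (surface, idx, 0, NVBUF_MAP_READ);
NvBufSurfaceSyncForCpu (surface, idx, 0);  // sync device writes to CPU view

cv::Mat frame (surface->surfaceList[idx].height,
               surface->surfaceList[idx].width,
               CV_8UC4,                    // RGBA
               surface->surfaceList[idx].mappedAddr.addr[0],
               surface->surfaceList[idx].pitch);

// Crop the object ROI and run the user's edge detection on it
cv::Mat object_img = frame (cv::Rect (obj->rect_params.left,
                                      obj->rect_params.top,
                                      obj->rect_params.width,
                                      obj->rect_params.height));
float angle;
Ellipse_feature_extraction2 (object_img, angle);

NvBufSurfaceUnMap (surface, idx, 0);
gst_buffer_unmap (buf, &in_map_info);
```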
So does that mean the raw image frame data lives in the NvBufSurface? I mean the frame-by-frame images.
As you can see, I want to get the ROI area of every object in every frame. So, inside the frame and object loops, the only input I need is the raw image.