**• Hardware Platform (Jetson / GPU)** Jetson
**• DeepStream Version** 6.1.1
**• Issue Type (questions, new requirements, bugs)** questions
I found that the code and plugins are highly modular and integrated; I mean that we can just change some parameters in the config files to control the whole operation.
But after reading a lot of the DeepStream code, I would like to add an extra function, such as edge detection, to identify the angle of specific objects.
However, I don't know where to add the function, how to obtain every frame of the video, or how to add my result (the angle) to the display.
You can refer to our source code below to draw anything on every frame of the video. This demo shows how to draw the number of people and cars on each frame.

sources\apps\sample_apps\deepstream-test1\deepstream_test1_app.c

```c
static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
```
Thank you for your answers!
It seems that I can only get the metadata of each frame, not the frame image itself.
I want to work directly on the image in my "edge_detection(const Mat& img)" function, which returns an angle, so the input should be a frame-by-frame image.
You can refer to gst-dsexample, which is basically a GStreamer pipeline element that demonstrates DeepStream/OpenCV integration. It is also supported in deepstream-app out of the box.
You can also refer to our nvdsvideotemplate plugin to process the raw data according to your needs.
Lately I have been testing sources/apps/sample_apps/deepstream-app with the command "deepstream-app -c deepstream_app_config.txt". The deepstream_app_config.txt is a config file for a YOLOv8 model plus the other elements' configs, such as source, sink0, and so on.
I find that almost all the elements are highly integrated. For example, the osd element is created by "create_osd_bin"; in this function the osd element is composed of several plugins: nvvideoconvert, queue, and nvdsosd.
However, I can't find where the nvvideoconvert or queue plugin is defined. What I really want to ask is: since there are so many elements in the pipeline, which part should I change to add my edge-detection function, which gets the rotation angle of objects, into the whole pipeline?
And as I mentioned above, since the elements are highly integrated, should I change the code in gst-plugins to modify some plugins?
If you want to customize your own inference, we don't recommend using deepstream-app directly. As you said, it's highly integrated. We suggest you refer to our deepstream_tao_apps for easier integration.
The logic I want to implement is simple. I want to define a function for edge detection, or maybe there are places where I can just add a few lines of code to do this.
The only input I need is the frame-by-frame images from the decoded video, and the output is an angle; that's it. So how does each element handle the frame-by-frame images? Maybe I can just follow that pattern to add my code.
After this description, where do you think I should look? Or is it still the same answer?
Sincerely looking forward to your reply.
We don't recommend making changes directly in deepstream-app; it can be complicated. But if you insist on that, you can try adding your code in the probe function below. Please refer to other demos, such as deepstream-image-meta-test, to learn how to get the image data from the buffer.
You can also refer to our source code sources\gst-plugins\gst-dsexample
to learn how to get the data from the NvBufSurface.

sources\apps\sample_apps\deepstream-app\deepstream_app.c

```c
static GstPadProbeReturn
gie_processing_done_buf_prob (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
```
I will need an extra library, namely opencv2/opencv.hpp, so there must be something I need to add to the Makefile to build the new DeepStream application. What should I do?
You can refer to our source code sources\gst-plugins\gst-dsexample
to learn how to compile with OpenCV in DeepStream.
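For reference, the usual pattern is to pull OpenCV's compiler and linker flags in via pkg-config. A hedged sketch of the Makefile additions, assuming OpenCV 4 is installed and visible to pkg-config (use the package name `opencv` instead of `opencv4` on older installs):

```makefile
# Assumption: OpenCV 4 installed with pkg-config support
CFLAGS += $(shell pkg-config --cflags opencv4)
LIBS   += $(shell pkg-config --libs opencv4)
```

Note that opencv2/opencv.hpp is a C++ header, so the source file that includes it must be compiled as C++ (with g++/CXX), as gst-dsexample does.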