How to insert dsexample as a preprocessor?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): —
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version (valid for GPU only): 460
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description.)

Hello,
I am sorry to open a new topic to ask for help with my problem.
My problem is how to preprocess the input video of a DeepStream pipeline.
My model is YOLOv4, and I hope the pipeline looks like this:
input video -> decoder -> preprocessing -> infer -> output
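
For illustration, this is how I imagine building it (a hypothetical sketch using gst_parse_launch(); "mypreproc" stands for the preprocessing element I still need to write, and the file/config paths are placeholders):

```cpp
#include <gst/gst.h>

/* Hypothetical sketch of the pipeline I want, built with gst_parse_launch().
 * "mypreproc" stands for the preprocessing element to be written; the file
 * and config paths are placeholders. */
int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
      "mypreproc ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
      "nvinfer config-file-path=config_infer_primary_yoloV4.txt ! "
      "nvvideoconvert ! nvdsosd ! nveglglessink", &err);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", err->message);
    return -1;
  }
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* A real program would run a GMainLoop here and wait for EOS. */
  return 0;
}
```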

In the last topic, I learned that I can create a GStreamer plugin such as dsexample to act as a preprocessor, and I have read through the code of gstdsexample.cpp. But I found that this code is only used for post-processing.
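
For reference, the part I studied is the in-place transform, which as far as I understand runs on every buffer before it is pushed downstream (a heavily trimmed sketch of gstdsexample.cpp, not the complete function):

```cpp
/* Heavily trimmed sketch of gst_dsexample_transform_ip() from
 * gstdsexample.cpp -- not the complete function. The in-place transform
 * sees every buffer before it is pushed to the next element. */
static GstFlowReturn
gst_dsexample_transform_ip (GstBaseTransform * btrans, GstBuffer * inbuf)
{
  GstMapInfo in_map_info;
  NvBufSurface *surface = NULL;

  if (!gst_buffer_map (inbuf, &in_map_info, GST_MAP_READ))
    return GST_FLOW_ERROR;
  surface = (NvBufSurface *) in_map_info.data;

  /* Frame processing goes here: anything done to `surface` at this point
   * happens before downstream elements (e.g. nvinfer) see the buffer. */

  gst_buffer_unmap (inbuf, &in_map_info);
  return GST_FLOW_OK;
}
```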

I have read nearly all of the similar answers on the forum, but none of them is clear.
At the moment I see two possible methods; maybe both are wrong.
(1) Modify "nvdsinfer_context_impl.cpp". However, it is said that the data in this file lives in CUDA memory. So if I use this method, would I need to follow the approach in gstdsexample.cpp and copy the data into a CPU cv::Mat, so that I can use OpenCV?
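
What I imagine is something like the conversion dsexample itself performs (a rough sketch modeled on get_converted_mat() in gstdsexample.cpp, assuming the surface has already been converted to RGBA and is CPU-mappable, as with the intermediate buffer dsexample allocates):

```cpp
#include <opencv2/core.hpp>
#include "nvbufsurface.h"

/* Rough sketch, modeled on get_converted_mat() in gstdsexample.cpp.
 * Assumes `surf` is an RGBA NvBufSurface that is CPU-mappable (dsexample
 * guarantees this by first converting each frame into its own intermediate
 * buffer with NvBufSurfTransform). `idx` is the frame index in the batch. */
static bool
wrap_frame_as_mat (NvBufSurface * surf, unsigned int idx, cv::Mat & out)
{
  if (NvBufSurfaceMap (surf, idx, 0, NVBUF_MAP_READ_WRITE) != 0)
    return false;
  NvBufSurfaceSyncForCpu (surf, idx, 0);  /* make device writes CPU-visible */

  out = cv::Mat (surf->surfaceList[idx].height,
      surf->surfaceList[idx].width, CV_8UC4,
      surf->surfaceList[idx].mappedAddr.addr[0],
      surf->surfaceList[idx].pitch);
  /* After editing `out`, call NvBufSurfaceSyncForDevice() and then
   * NvBufSurfaceUnMap() so the modified pixels travel downstream. */
  return true;
}
```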

(2) Create a new GStreamer plugin. Since I use YOLOv4, I only have the engine and configuration files, and I do not know where the insertion point is. Or: which variables should I modify in gstdsexample.cpp so that the element can be placed right behind the decoder?

This problem worries me, and I have been stuck on it for four days. Thank you very much; please kindly give me some help.

My configuration is below:
[ds-example]
enable=1
processing-width=1280
processing-height=760
full-frame=1
#batch-size for batch supported optimized plugin
batch-size=1
unique-id=15
gpu-id=0

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file:/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file:/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_yolov4/test4.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file:/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_qHD.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file:/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_cam5.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=12
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height

#width=1280
#height=720
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
model-engine-file=/home/yangyi/PycharmProjects/pytorch-YOLOv4/yolov4-new.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV4.txt

[tracker]
enable=0
tracker-width=512
tracker-height=320
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gpu-id=0
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=3
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
output-file=yolov4.mp4

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=1
gpu-id=0
nvbuf-memory-type=0

[sink3]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=2
gpu-id=0
nvbuf-memory-type=0

[sink4]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=3
gpu-id=0
nvbuf-memory-type=0

[tests]
file-loop=0

I found a topic which said that the plugin order can be changed, so I guessed that ds-example could be placed behind the decoder. I modified deepstream_app.c in sources/apps/sample_apps/deepstream-app/.

I moved the following block of code inside create_pipeline():

```c
// Decide where in the pipeline the element should be added and add only if
// enabled
if (config->dsexample_config.enable) {
  // Create dsexample element bin and set properties
  if (!create_dsexample_bin (&config->dsexample_config,
          &pipeline->dsexample_bin)) {
    goto done;
  }
  // Add dsexample bin to instance bin
  gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->dsexample_bin.bin);

  // Link this bin to the last element in the bin
  NVGSTDS_LINK_ELEMENT (pipeline->dsexample_bin.bin, last_elem);

  // Set this bin as the last element
  last_elem = pipeline->dsexample_bin.bin;
}
```

But it does not work.
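
If I read gstdsexample.cpp correctly, one likely reason is that its transform function rejects buffers that carry no batch metadata, and that metadata is only attached by nvstreammux, so upstream of the muxer it errors out (paraphrased from the source):

```cpp
/* Paraphrased from gst_dsexample_transform_ip() in gstdsexample.cpp.
 * NvDsBatchMeta is attached by nvstreammux, so upstream of the muxer
 * this check fails and the element returns an error. */
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (inbuf);
if (batch_meta == NULL) {
  GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
      ("NvDsBatchMeta not found for input buffer."), (NULL));
  return GST_FLOW_ERROR;
}
```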

Is my method wrong?

Thank you very much.

Can someone give me a hand?

Hey,
Could you explain your requirement in more detail? I still cannot tell which type of preprocessing you are trying to do.

Thank you for your reply.

My preprocessing is meant to make the video (images) clearer; the algorithm is based on an enhancement method. Because I cannot find the place to write the code, I am using noise addition as a stand-in. I think that if I can add noise to the video before inference, then I can modify the Mat of the stream with my real algorithm in the same place.
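
For example, the stand-in could be as simple as this (a hypothetical snippet that assumes the frame is already wrapped as a cv::Mat, e.g. by a mapping like the one sketched above):

```cpp
#include <opencv2/core.hpp>

/* Hypothetical stand-in preprocessing: add noise to an 8-bit RGBA frame
 * in place. A real enhancement algorithm would go here instead. Note that
 * for 8-bit Mats cv::randn saturates negative samples to zero, so the
 * noise is positively clipped -- good enough for a placeholder. */
static void
add_noise_in_place (cv::Mat & frame)
{
  cv::Mat noise (frame.size (), frame.type ());
  cv::randn (noise, 0, 25);  /* mean 0, sigma 25 */
  frame += noise;            /* saturating add for 8-bit types */
}
```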

Thank you very much.

Is this based on a CNN model or something else?
In any case, you can create a GStreamer plugin to do the preprocessing, referring to dsexample. It is just a reference; you need to implement your own logic per your requirement.

Thank you for your reply.
The preprocessing could be done either with a CNN or with a GStreamer plugin. However, I want to use the GStreamer approach; if I used a CNN, the preprocessing result would not be easy to retrieve at inference time.

Thank you for telling me that I can refer to dsexample. Could you kindly give me some clearer advice on how to modify dsexample or the other files for preprocessing? For example, if I want to insert the plugin behind the decoder, how should I modify create_pipeline() in deepstream_app.c?

Thank you very much.

Hey, does your algorithm need to modify the original frame? If not, you can implement it in your own GStreamer plugin, referring to dsexample, and attach the result as metadata, so that the downstream plugin (nvinfer) can read it.
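
Something along these lines (a sketch based on the deepstream-user-metadata-test sample app; my_result, copy_user_meta and release_user_meta are placeholders you would implement):

```cpp
/* Sketch based on the deepstream-user-metadata-test sample app.
 * `batch_meta`/`frame_meta` come from gst_buffer_get_nvds_batch_meta()
 * and its frame meta list; `my_result`, `copy_user_meta` and
 * `release_user_meta` are placeholders you would implement. */
NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
user_meta->user_meta_data = (void *) my_result;
user_meta->base_meta.meta_type =
    nvds_get_user_meta_type ((gchar *) "MYAPP.PREPROC.USER_META");
user_meta->base_meta.copy_func = copy_user_meta;
user_meta->base_meta.release_func = release_user_meta;
nvds_add_user_meta_to_frame (frame_meta, user_meta);
```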

Thank you, bcao. I am trying it now.