• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.2.23
• NVIDIA GPU Driver Version (valid for GPU only): 11.1
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description.)
Dear professors:
I have a problem. My aim is to use a secondary engine to classify the results of the primary engine, for example recognizing human faces. Both the primary engine and the classification engine are PyTorch models that I trained myself.
Right now I can run the two models with the command “deepstream-app -c …….txt”. The configuration is as below:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0
[source0]
enable=1
type=4
uri=rtsp://admin:root1234@192.168.1.61
num-sources=1
#drop-frame-interval=2
gpu-id=0
cudadec-memtype=0
[sink0]
enable=1
type=4
codec=1
enc-type=0
sync=0
bitrate=400000
profile=0
rtsp-port=8554
udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=11
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
model-engine-file=primaryEngine.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;0;1
bbox-border-color2=0;0;1;1
bbox-border-color3=1;1;0;1
bbox-border-color4=0;1;1;1
bbox-border-color5=1;0;1;1
bbox-border-color6=0.5;0.5;0;1
interval=1
gie-unique-id=1
nvbuf-memory-type=0
config-file=primaryEngineConfig.txt
[secondary-gie0]
enable=1
model-engine-file=secondaryEngine.engine
batch-size=16
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=secondaryEngineConfig.txt
[tracker]
enable=1
# For the NvDCF tracker, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process and enable-past-frame applicable to DCF only
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1
[tests]
file-loop=1
So, I can get the primary engine's result via the ds-example plugin, but I do not know how to get the raw result of the secondary engine.
I have checked the answers in this forum and read more than 100 web pages. I know there are some examples, such as “Test2” and the “back-to-back detector” shown on GitHub. However, if I use those examples, I cannot configure the stream the way I need, e.g. take an RTSP source and send the result out as RTSP via [sink].
I hope to get the result of the secondary engine in a plugin such as ds-example. So please let me know which method can be used to get the result directly, and where I can find the plugin, file, and function. Thank you very much.
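In case it helps frame the question: from the Gst-nvinfer documentation, one possible approach (a sketch I have not verified against this exact setup) is to make the secondary GIE attach its raw output tensors to the metadata, which a downstream element such as ds-example could then read. The additions to secondaryEngineConfig.txt might look like this, assuming it already has a [property] group:

```
[property]
# Attach the raw inference output tensors (NvDsInferTensorMeta) to the
# metadata so downstream elements can read them
output-tensor-meta=1
# Optional: treat the network as type "other" so DeepStream skips its
# built-in classifier parsing and leaves the raw tensors to you
network-type=100
```

With this, the tensors should be reachable downstream as user meta of type NVDSINFER_TENSOR_OUTPUT_META on each object, similar to what the deepstream-infer-tensor-meta-test sample shipped with DeepStream 5.1 demonstrates — but I would appreciate confirmation of whether this is the intended method.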