How to do runtime source addition and deletion with the deepstream app?

Hi,
I’ve downloaded this sample: https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/runtime_source_add_delete. How do I use it?
The README gives some instructions, which I have followed. For example:
I edited dstest_pgie_config.txt, dstest_sgie1_config.txt, dstest_sgie2_config.txt, and dstest_sgie3_config.txt, replacing model-file, proto-file, model-engine-file, labelfile-path, int8-calib-file, and mean-file with the paths from the models folder inside the samples folder of my DeepStream SDK 4.0.1 install. In the Makefile I set export DS_SDK_ROOT=“<my DeepStream download folder, where I run deepstream>”, then ran make in the runtime_source_add_delete/ directory.
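For reference, the edited keys in dstest_pgie_config.txt might look like the sketch below. The directory is taken from the log further down; the file names (labels.txt, cal_trt.bin) are the standard ones shipped in the SDK samples, so adjust all paths to your own install:

```
# Sketch of the relevant [property] keys in dstest_pgie_config.txt,
# pointed at the DeepStream 4.0.1 samples tree (example paths only)
[property]
model-file=/home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Primary_Detector/resnet10.caffemodel_b8_fp16.engine
labelfile-path=/home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Primary_Detector/labels.txt
int8-calib-file=/home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Primary_Detector/cal_trt.bin
```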

When I run it, the error is:

Using winsys: x11 
Creating LL OSD context new
0:00:03.246931180 15245   0x558b469790 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:log(): The engine plan file is generated on an incompatible device, expecting compute 5.3 got compute 7.2, please rebuild.
0:00:03.247023943 15245   0x558b469790 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:useEngineFile(): Failed to create engine from file
0:00:03.247052902 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:03:03.471224249 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:generateTRTModel(): Storing the serialized cuda engine to file at /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b8_fp16.engine
0:03:04.600602793 15245   0x558b469790 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:log(): The engine plan file is generated on an incompatible device, expecting compute 5.3 got compute 7.2, please rebuild.
0:03:04.600664045 15245   0x558b469790 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:useEngineFile(): Failed to create engine from file
0:03:04.600690868 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:05:58.455754877 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:generateTRTModel(): Storing the serialized cuda engine to file at /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Secondary_CarColor/resnet18.caffemodel_b8_fp16.engine
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
0:05:59.398136687 15245   0x558b469790 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:checkEngineParams(): Requested Max Batch Size is less than engine batch size
0:05:59.399024426 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:07:56.746493709 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Primary_Detector/resnet10.caffemodel_b8_fp16.engine
Now playing: /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano_one.txt
Creating LL OSD context new
0:07:57.150407549 15245   0x558b469790 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:log(): The engine plan file is generated on an incompatible device, expecting compute 5.3 got compute 7.2, please rebuild.
0:07:57.150469478 15245   0x558b469790 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:useEngineFile(): Failed to create engine from file
0:07:57.150500000 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:10:39.067284394 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:generateTRTModel(): Storing the serialized cuda engine to file at /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b8_fp16.engine
0:10:40.383069952 15245   0x558b469790 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:log(): The engine plan file is generated on an incompatible device, expecting compute 5.3 got compute 7.2, please rebuild.
0:10:40.383129485 15245   0x558b469790 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:useEngineFile(): Failed to create engine from file
0:10:40.383158236 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:13:34.892677974 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:generateTRTModel(): Storing the serialized cuda engine to file at /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Secondary_CarColor/resnet18.caffemodel_b8_fp16.engine
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
0:13:35.850531309 15245   0x558b469790 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:checkEngineParams(): Requested Max Batch Size is less than engine batch size
0:13:35.851395761 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:15:31.785105642 15245   0x558b469790 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/models/Primary_Detector/resnet10.caffemodel_b8_fp16.engine
Failed to set pipeline to playing. Exiting.

What should I do now? Please help me.

By the way, in the DeepStream SDK 4.0.1 samples/models directory, the Secondary_VehicleTypes and Secondary_CarColor folders do not contain any .engine files. I copied the .engine files for these two folders from the older DeepStream SDK 4.0.
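That copy is likely the cause of the “engine plan file is generated on an incompatible device, expecting compute 5.3 got compute 7.2” errors: TensorRT .engine files are specific to the device (and TensorRT version) they were built on, so an engine built on a compute-7.2 board cannot load on a compute-5.3 one. Instead of copying engines from another SDK version or device, delete them and let nvinfer regenerate them on the next run, as the log shows it doing. A sketch, using the SDK path from the log above (adjust to your install):

```shell
# TensorRT .engine files are device-specific; remove the copied ones
# so nvinfer rebuilds them for this board on the next run.
SDK=/home/imran/Music/deepstream_sdk_v4.0.1_jetson
rm -f "$SDK"/samples/models/Primary_Detector/*.engine \
      "$SDK"/samples/models/Secondary_CarColor/*.engine \
      "$SDK"/samples/models/Secondary_VehicleTypes/*.engine
```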

Thank you.

I’ve solved the problem above. Now, when I run $ ./deepstream-test-rt-src-add-del, the error in my terminal is:

Using winsys: x11 
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is ON
[NvDCF] Initialized
Now playing: /home/imran/Music/deepstream_sdk_v4.0.1_jetson/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano_one.txt
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is ON

Currently NvMOT_RemoveStreams is not implemented.
Refer to this:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/DeepStream_Development_Guide/baggage/group__ee__NvMOTracker.html#ga83a4b20b5c03f96780651ea5b6ea9386

Thanks bcao for the reply.

I read the article you gave me. So what should I do to get runtime source addition and deletion working in the deepstream app? Should I add the function below to the run_time_source_add_del.c file?

void NvMOT_RemoveStreams(NvMOTContextHandle contextHandle,
                         NvMOTStreamId streamIdMask)
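For reference only, a no-op stub of this optional callback in a custom low-level tracker library might look like the following. The typedefs here are placeholders standing in for the real definitions in the NvMOT tracker API headers shipped with DeepStream:

```c
#include <stddef.h>

/* Placeholder typedefs: the real definitions come from the NvMOT
 * tracker API headers shipped with DeepStream. */
typedef void *NvMOTContextHandle;
typedef unsigned int NvMOTStreamId;

/* No-op stub: accept the stream-removal request and do nothing.
 * The API marks this callback as optional, so a real tracker
 * library may simply omit it instead. */
void NvMOT_RemoveStreams(NvMOTContextHandle contextHandle,
                         NvMOTStreamId streamIdMask)
{
    (void)contextHandle;  /* unused */
    (void)streamIdMask;   /* unused */
}
```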

You don’t need to implement it. It’s optional.

https://devtalk.nvidia.com/default/topic/1062677/deepstream-sdk/whats-means-in-nvtracker-/post/5384130/#5384130