Custom / different model in IntelligentEdgeHOL

Hello,

I have successfully gotten this application working with both YouTube and RTSP streams, and I like the concept: https://github.com/toolboc/IntelligentEdgeHOL

For a project, I would like to use a custom-trained YOLOv3-tiny model. As a first step, I wanted to test a different model (I tried the YOLO9000 set).

I changed modules/YoloModule/app/YoloInference.py to point to the new dataset, weights, and config I want to load. However, when I restart iotedge, the application still seems to load the default model from somewhere else, and I cannot figure out from where.

I even “replaced” the default YOLOv3-tiny model with a different one under the same filename as the standard model, but it still loads the defaults.
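For reference, this is roughly the shape of the edit I made. It is a simplified sketch, not the exact YoloInference.py code; the directory, file names, and helper function are placeholders I am using to check that the files I point at actually exist before the app can silently fall back to a default:

```python
import os

# Placeholder paths for my custom model files (not the exact
# variable names used in YoloInference.py).
MODEL_DIR = "modules/YoloModule/app/model"
CONFIG_PATH = os.path.join(MODEL_DIR, "yolov3-tiny-custom.cfg")
WEIGHTS_PATH = os.path.join(MODEL_DIR, "yolov3-tiny-custom.weights")
LABELS_PATH = os.path.join(MODEL_DIR, "custom.names")

def check_model_files(*paths):
    """Return the list of missing files, so a silent fallback to the
    default model elsewhere in the app becomes obvious at startup."""
    return [p for p in paths if not os.path.isfile(p)]
```

Since the module runs as a Docker container under iotedge, I suspect edits on the host may not reach the running container until the image is rebuilt and redeployed, but I am not sure.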

Can somebody help me understand this? I would really like to use a custom-trained model and have this application as a basis for testing it.

Thank you!

Hi,
We have a YOLOv3 sample in the DeepStream SDK. Please check

deepstream_sdk_v4.0.1_jetson\sources\objectDetector_Yolo

The latest version is 4.0.2.
https://devtalk.nvidia.com/default/topic/1068639/deepstream-sdk/announcing-deepstream-sdk-4-0-2/
RTSP sources are supported in the SDK.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%2520Development%2520Guide%2Fdeepstream_app_config.3.2.html%23wwpID0E0QB0HA
We would suggest you try the sample config files first and then customize the config file for your use case.
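For a custom-trained Darknet model, the usual place to customize is the nvinfer config of the objectDetector_Yolo sample. A minimal fragment, with illustrative file names you would replace with your own (check the sample's README for the full set of properties):

```ini
[property]
# Illustrative paths; substitute your custom-trained files
custom-network-config=yolov3-tiny-custom.cfg
model-file=yolov3-tiny-custom.weights
labelfile-path=labels_custom.txt
# Must match the number of classes your model was trained on
num-detected-classes=80
```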