• Hardware Platform : GPU
• DeepStream Version : 6.1
• TensorRT Version : 8.2.5
• NVIDIA GPU Driver Version : CUDA 11.4
• Container : nvcr.io/nvidia/deepstream:6.1-devel
I am trying to run an object-detection + tracking pipeline using DeepSORT and a ReID model. Looking at the results, ReID is not working: the ID for person A changes when the person is out of view for a few seconds and then reappears.
I am using the default configuration file provided in the DeepStream 6.1 container, with the only change being maxShadowTrackingAge, which was increased to accommodate tracking an object for a longer period while it is not visible.
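For reference, that parameter lives in the TargetManagement section of the tracker config; a minimal excerpt of the change looks like this (the value shown is illustrative, not necessarily the exact one I used):

```yaml
# config.yml (excerpt) -- only maxShadowTrackingAge was changed
TargetManagement:
  maxShadowTrackingAge: 500  # frames a lost (shadow) target is kept before termination
```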
The ReID model was downloaded from the link provided in the https://github.com/nwojke/deep_sort repository.
The complete configuration file :
config.yml (5.1 KB)
How long is the person not visible? Can you share any video to show your issue?
There are two instances in which ReID fails: in one, the person is not visible for a long period, around 15 seconds; in the other, only about 4 seconds at most.
I am actually trying to implement a pipeline with a custom ReID model which I had earlier tested outside DeepStream.
ReID works for the same video using the repository mentioned below, whereas it fails when I use the same model inside DeepStream; it also fails when I use the default parameters and the default model recommended by NVIDIA.
This is the resultant video using the config file previously uploaded :
This is the resultant video using the custom ReID model, an engine generated from osnet_x1_0_msmt17 :
The configuration used for the custom ReID model :
config_osnet.yml (4.9 KB)
The result from using yolov5_deepsort_reid :
Instance 1 : The person leaves around 22 seconds into the video and comes back at 37 seconds
Instance 2 : The person leaves at 50 seconds and comes back at 54 seconds
For both instances, yolov5_osnet_reid was able to re-identify the person and assign the ID correctly, whereas ReID fails when I run it in a DeepStream pipeline
Yes, I can see the ID of the person changing. Can you share the steps to reproduce it? Then we can reproduce it and improve it. Thanks!
I have provided the configuration files for both the mars-128 model and the osnet model previously; please refer to those.
The detector model used is PeopleNet in FP32 mode. The config file for PeopleNet is included below:
peoplenet_config.txt (2.2 KB)
The probe used for visualization is the same probe provided in the deepstream_test2.py file.
Name of the probe function : osd_sink_pad_buffer_probe
The osnet engine file was generated by converting the .pt file to ONNX and then building the engine with /usr/src/tensorrt/bin/trtexec inside the deepstream:6.1-devel container.
The preprocessing parameters for the osnet engine were changed accordingly in config_osnet.yml
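For context, the changed part is the ReID block of config_osnet.yml; a rough sketch is below. The values are assumptions based on OSNet's 256x128 RGB input and ImageNet normalization (the tracker config only takes a single scalar netScaleFactor, so the per-channel std is approximated), not my exact settings:

```yaml
# config_osnet.yml (ReID excerpt; values are illustrative assumptions)
ReID:
  reidType: 1                          # enable ReID-based re-association
  reidFeatureSize: 512                 # osnet_x1_0 outputs 512-d embeddings
  inferDims: [256, 128, 3]             # OSNet input: 256x128 RGB (mars-small128 uses 128x64)
  colorFormat: 0                       # RGB
  offsets: [123.675, 116.28, 103.53]   # ImageNet mean scaled to 0-255
  netScaleFactor: 0.017                # ~1/(255*std), single-scalar approximation
  modelEngineFile: "/path/to/osnet.engine"
```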
The video you shared already has bounding boxes drawn on it. Can you share the video without the bounding boxes?
Original video without bounding boxes:
Can you share one package (source code, models, and all configuration files) and reproduction steps (the command lines you ran) so that I can reproduce the three results you shared on Jul 20? Thanks!
Sorry for the late reply
To get the results outside a DeepStream environment, clone the repository mentioned below and follow the instructions in the Readme.md of that repository:
Link to the repository :
I filtered out all classes other than humans from the YOLO model by using the yolov5_crowdhuman weights and passing the argument --classes 0 while running the script. The ReID model used was the osnet_x1_0 model trained on the MSMT17 dataset.
The links to the ReID model zoo and the yolov5_crowdhuman weights are given in the repository.
For the DeepStream environment, the mars128-small_deepstream.mp4 results were obtained by running the default deepstream-test-2 app provided in DeepStream 6.1, with the only change being that the detector used was PeopleNet.
For the osnet_x10_msmt17_deepstream.mp4 results, the detector was again PeopleNet, and the ReID model was the engine file generated from the PyTorch osnet_x1_0 model trained on MSMT17.
To generate the engine file, the PyTorch model was first converted into an ONNX model using torch.onnx.export. To load the model in Python for export, please refer to the following link: How-to — torchreid 1.4.0 documentation
Using the FeatureExtractor you can easily load the network and weights, which can then be exported to ONNX.
The engine file is then created using trtexec located in /usr/src/tensorrt/bin.
Command used : trtexec --onnx=/path/to/osnet.onnx --saveEngine=/path/to/osnet.engine
The configuration files have already been shared in a previous reply; please refer to those. For the yolov5_deepsort Python script (outside the DeepStream environment), change the values in config.yaml located in deep_sort/configs. The following values need to be changed:
Model_Type : the type of model you have selected, in this case osnet_x1_0
Max_age : 500
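For reference, the edits amount to something like the following excerpt (key names taken from the values above; all other keys in the file are left at their defaults):

```yaml
# deep_sort/configs/config.yaml (excerpt; other keys unchanged)
Model_Type: osnet_x1_0   # ReID backbone selected above
Max_age: 500             # keep lost tracks alive longer before deletion
```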
Can you share the command line? So I can reproduce the same result shared by you. Thanks!