PipeTuner, DsApp score is 0

Continuing the discussion from PipeTuner:

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 560.35.03

Hello, I'm trying to use the PipeTuner tool but encounter the same problem as in the post linked above.
I am also using a custom model (a YOLOv4 model trained with TAO and exported in ONNX format).
I've used the model with DeepStream before.

Here are my config files for PipeTuner and the inference:
config_files.zip (4.0 KB)

I made sure every file was mapped in the container, including the custom lib.
I get this result:

Found avaliable ports: 51096 53017
Launch containers:
docker run --gpus all -itd --net=host -v /home/ia/pipe-tuner-sample:/home/ia/pipe-tuner-sample --name ds_2024-11-14_16-21-19 nvcr.io/nvidia/deepstream:7.0-triton-multiarch;
docker run --gpus all -itd --net=host --name tuner_2024-11-14_16-21-19 -v /var/run/docker.sock:/var/run/docker.sock -v /home/ia/pipe-tuner-sample:/home/ia/pipe-tuner-sample nvcr.io/nvidia/pipetuner:1.0;
94889f9626010fa65e1e61b494e6189838da0836dbcfa28e2fa264fb18dfa188
72eeb7b1cfc29f1580d4486edbdcddb2c3e012812065b9e11dc47de0da8bc0c1
Creating output directory...
mkdir -p /home/ia/pipe-tuner-sample/output; cp ../configs/config_PipeTuner/pipe_tuner_maritime.yml /home/ia/pipe-tuner-sample/output; sed -i "s/        containerImageID:*/containerImageID:ds_2024-11-14_16-21-19\n/" -i "s/    port: 51096\n/" /home/ia/pipe-tuner-sample/output/pipe_tuner_maritime.yml;
Installing dependencies...
Installing dependencies (1/2)
Installing dependencies (2/2)
Launch BBO client...
Launch BBO server...
PipeTuner started successfully!

!!!!! To stop tuning process in the middle, press CTRL+C !!!!!

  adding: DsAppServer (deflated 63%)
2024-11-14 07:33:19,294 root         INFO     seq_list: ['Boat_tracking_2', 'Boat_tracking_3', 'Boat_tracking_4', 'Boat_tracking_5', 'Boat_tracking_6', 'Boat_tracking_7', 'Boat_tracking_8', 'Boat_tracking_9']
2024-11-14 07:33:19,669 root         INFO     Writing configs to /home/ia/pipe-tuner-sample/output/pipe_tuner_maritime.yml_output/results/configs_11-14-2024_07-33-19
2024-11-14 07:33:19,676 root         INFO     send backend init
2024-11-14 07:33:19,678 root         INFO     creating optimizers...
2024-11-14 07:33:19,681 root         INFO     done. created 2
 * Serving Flask app 'ds_bbo_frontend_server'
 * Debug mode: on
2024-11-14 07:33:19,759 root         INFO     init jobs done
2024-11-14 07:33:19,774 root         INFO     progress: 0% (0/200)
Launching server on: http://0.0.0.0:51096
received /init call
2024-11-14 07:33:19,925 root         INFO     wait backend ready
[server core] initializing
2024-11-14 07:33:20,957 root         INFO     wait backend ready
2024-11-14 07:33:23,144 root         WARNING  DsApp score is 0. number of 0 scores:1
2024-11-14 07:33:25,281 root         WARNING  DsApp score is 0. number of 0 scores:2
2024-11-14 07:33:27,416 root         WARNING  DsApp score is 0. number of 0 scores:3
2024-11-14 07:33:29,239 root         WARNING  DsApp score is 0. number of 0 scores:4
2024-11-14 07:33:29,248 root         WARNING  DsApp score is 0. number of 0 scores:5
2024-11-14 07:33:29,920 root         INFO     progress: 2% (5/200) ETA 00:06:30
2024-11-14 07:33:31,440 root         WARNING  DsApp score is 0. number of 0 scores:6
2024-11-14 07:33:31,441 root         WARNING  DsApp score is 0. number of 0 scores:7
2024-11-14 07:33:33,564 root         WARNING  DsApp score is 0. number of 0 scores:8
2024-11-14 07:33:33,565 root         WARNING  DsApp score is 0. number of 0 scores:9
2024-11-14 07:33:35,695 root         WARNING  DsApp score is 0. number of 0 scores:10
2024-11-14 07:33:35,696 root         WARNING  DsApp score is 0. number of 0 scores:11
2024-11-14 07:33:37,534 root         WARNING  DsApp score is 0. number of 0 scores:12
2024-11-14 07:33:37,536 root         WARNING  DsApp score is 0. number of 0 scores:13
2024-11-14 07:33:39,709 root         WARNING  DsApp score is 0. number of 0 scores:14
2024-11-14 07:33:39,710 root         WARNING  DsApp score is 0. number of 0 scores:15
2024-11-14 07:33:40,027 root         INFO     progress: 7% (15/200) ETA 00:04:06
2024-11-14 07:33:41,836 root         WARNING  DsApp score is 0. number of 0 scores:16
2024-11-14 07:33:41,838 root         WARNING  DsApp score is 0. number of 0 scores:17
2024-11-14 07:33:43,685 root         WARNING  DsApp score is 0. number of 0 scores:18
2024-11-14 07:33:43,994 root         WARNING  DsApp score is 0. number of 0 scores:19
2024-11-14 07:33:45,821 root         WARNING  DsApp score is 0. number of 0 scores:20
2024-11-14 07:33:45,822 root         WARNING  DsApp score is 0. number of 0 scores:21
2024-11-14 07:33:46,132 root         ERROR    Too many 0 scores from DS app. We stop all processes. Please check DS app logs
2024-11-14 07:33:46,133 root         ERROR    OPTIMIZATION not completed!
[result sender] up and running!
[worker 0] up and running!
[worker 1] up and running!
received /reset call
[server core] resetting
stopping result senders...
Waiting for result sender to stop...
done.
stopping workers...
number of workers to stop: 2
number of workers to stop: 0
done.

!!!!! Press CTRL+C key to end PipeTuner !!!!!

I have tried the default configuration and it worked fine.
As suggested, I used the deepstream-app in the DS container to make sure the output with my model is as expected, and didn't see any problem.

What can I do to try to find the problem?
Thank you

So you have tried all suggestions in PipeTuner - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums? We will check your configurations first.

@Alex6s You only provided two configuration files. It seems you changed more configurations and scripts to run your case. How did you run the PipeTuner case? What did you change for your case?

I don't think I've changed anything else, apart from the input data of course (which I set up according to the documentation). What makes you think something else changed?
I modified the .yml PipeTuner config file to include the path to my data and the pgiePath. I've also just changed the "checkClassMatch" parameter in the DCF config file, but that's it. I run PipeTuner with the command: bash launch.sh nvcr.io/nvidia/deepstream:7.0-triton-multiarch ../configs/config_PipeTuner/pipe_tuner_maritime.yml
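
For illustration, the kind of edits I made look roughly like this (just a sketch with placeholder paths and values; pgiePath is in the PipeTuner YAML, checkClassMatch is in the DCF tracker config, and I've left out the exact group nesting):

# pipe_tuner_maritime.yml (sketch only)
pgiePath: /home/ia/pipe-tuner-sample/configs/config_infer_primary_yolov4.txt   # placeholder path to my PGIE config
# ...plus the dataset paths updated to point at my maritime videos and MOT ground truth

# config_tracker_NvDCF_*.yml (DCF tracker config)
checkClassMatch: 0   # placeholder value; the only tracker parameter I touched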

@Alex6s
Have you tried all suggestions in PipeTuner - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums?

I noticed that you run PipeTuner with your own dataset. Can you check whether the dataset is prepared correctly? Pipetuner Guide — DeepStream documentation

Hello,
Yes, here is what I've tried so far.
Here is my data directory:

~/pipe-tuner-sample/data/maritime_videos$ tree
.
├── Boat_tracking_2
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_2.mp4
├── Boat_tracking_3
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_3.mp4
├── Boat_tracking_4
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_4.mp4
├── Boat_tracking_5
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_5.mp4
├── Boat_tracking_6
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_6.mp4
├── Boat_tracking_7
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_7.mp4
├── Boat_tracking_8
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_8.mp4
├── Boat_tracking_9
│   ├── gt
│   │   └── gt.txt
│   └── seqinfo.ini
├── Boat_tracking_9.mp4
└── seqmap.txt

seqmap.txt:

name
Boat_tracking_2
Boat_tracking_3
Boat_tracking_4
Boat_tracking_5
Boat_tracking_6
Boat_tracking_7
Boat_tracking_8
Boat_tracking_9

seqinfo.ini (the length changes depending on the video):

[Sequence]
seqLength=743

I launch the PipeTuner tool with:

bash launch.sh nvcr.io/nvidia/deepstream:7.0-triton-multiarch ../configs/config_PipeTuner/pipe_tuner_maritime.yml

Once the containers have started, I attach a terminal to the DS container (using the name of the container that was created):

docker exec -it ds_2024-11-20_10-50-12 /bin/bash

I modified a config file from /opt/nvidia/deepstream/deepstream-7.0/samples/configs/deepstream-app to read one of my videos in the data directory and use my inference config file and model. It outputs a video with bboxes, so I can confirm the detection is performed correctly.
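
For reference, the edits to the sample deepstream-app config looked roughly like this (a sketch with placeholder file names; group and key names as in the stock sample configs):

[source0]
enable=1
type=3
uri=file:///home/ia/pipe-tuner-sample/data/maritime_videos/Boat_tracking_2.mp4
num-sources=1

[primary-gie]
enable=1
config-file=config_infer_primary_yolov4.txt

[sink1]
enable=1
type=3
container=1
codec=1
output-file=out.mp4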

I do have a question: in the MOT annotation format, the class is given as an ID, while in the label.txt file, labels are strings with the name of the class. Does PipeTuner associate the class ID "n" in the MOT annotation file with the label on the nth line of the label file (not sure if my question is clear)?

The label.txt file is given by the model card, which is generated when the model is trained. Take the PeopleNet | NVIDIA NGC model as an example: the model is trained to classify 3 classes of objects - person, bag and face. The corresponding class IDs of the three classes are "person - 1, bag - 2, face - 3". In the PipeTuner "DS-based Perception tuning" sample, class ID 1 in the SDG_1min_videos dataset means the person object.
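
For illustration, assuming the 1-indexed ordering described above (the bounding-box numbers below are made up):

labels.txt (the line order defines the class IDs, starting from 1):
person
bag
face

gt/gt.txt row (MOT format: frame,track_id,left,top,width,height,included_for_eval,class_id,visibility):
1,1,374,222,58,126,1,1,1.0   <- class_id 1 corresponds to "person", the first line of labels.txt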

Please check how to get log messages from the client and server in the Pipetuner Guide — DeepStream documentation, and see if there are any log messages that provide more insight.

Also, each checkpoint folder includes the exact DS pipeline config that was used for each iteration. Please try running the DS pipeline with the exact config files in each checkpoint, and see if it runs fine.

If you have any questions on those log messages or analysis, please post your observations and log files here, so we can help out further.

Thank you for the reply, I will check this as soon as I can and come back to you.

Hi @Alex6s

The issue I was having, similar to yours, is solved here, in case it might be of help in your case.

Hi @davconde! Thank you very much for letting me know!
I tried your solution but maybe I didn't understand correctly what exactly you did to solve the problem. Are you saying that the class ID in the ground truth dataset does not matter? I tried setting all class IDs in the ground truth dataset to 1, and edited kittiTrack2mot.sh as you suggested to include my classes and their ID numbers (corresponding to the order in label.txt - or should I set them all to 1?)

declare -A IDMAP
IDMAP=(
    [boat]=1
    [cargoship]=2
    [coastguard]=3
    [fishing]=4
    [jetski]=5
    [kayak]=6
    [passengership]=7
    [person]=8
    [sailingvessel]=9
    [speedboat]=10
    [warship]=11
    [yacht]=12
)

but I still get the same error.

Hi! Yes, it seems that what you're sharing is what I did as well. So my only dataset-related modifications since the issue arose were:

  1. I replaced the IDMAP values in kittiTrack2mot.sh as suggested by NVIDIA staff. If you look into the script, at the end you see that the values extracted from the IDMAP variable are only used to filter out the classes with assigned value '0'. So it's not doing any actual mapping, just including rows that contain each key while filtering out the ones set to '0' (see the sketch after this list). Keep in mind though that it's case sensitive. The way you have it set should be okay.
  2. In every gt.txt of your data, the class ID needs to be set to '1', so every row of the dataset should end with 1,1,1 for included_for_eval, class_id, visibility_ratio (I don't know if visibility is used for anything; I kept it the same way).
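
To make point 1 concrete, the filtering behaviour works conceptually like this (a rough sketch, not the actual kittiTrack2mot.sh code; file names are hypothetical):

# Sketch of the behaviour described in point 1: IDMAP values are only used to drop
# classes mapped to 0; the rows that are kept are not re-mapped to new IDs.
declare -A IDMAP=( [boat]=1 [jetski]=0 )            # hypothetical example map
for cls in "${!IDMAP[@]}"; do
    if [ "${IDMAP[$cls]}" -ne 0 ]; then
        grep "$cls" kitti_labels.txt >> kept_rows.txt   # matching is case sensitive
    fi
done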

Also, the config files in configs/config_PipeTuner/ have far too broad value ranges, both for the NvDCF and PGIE groups. For example, the pre-cluster-threshold: [ 0,0.5,linear,real ] in PGIE allows any iteration to have that value at almost '0', which makes that iteration incredibly slow. Then you have things like minTrackerConfidence: [ 0.1,0.9,linear,real ], which may only take into account tracklets with a very high confidence that you may never reach if your data is complex enough.

I'd suggest commenting out most, if not all, of the lines in the NvDCF and PGIE groups to rule out the possibility of your issue being derived from this. Each line discarded from the config_PipeTuner file will make PipeTuner use the corresponding value found in the config_Tracker file whose path you set in the trackerPath field. You could assign there a Tracker file that you know works well to discard this possibility as the error source, since a tracker config that generates no tracklets also leads to 0.0 score values. After that you can set more sensible ranges.
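
As a sketch of what I mean (the group layout follows the NvDCF/BaseConfig structure used in the sample config_PipeTuner files; the values here are just examples):

NvDCF:
      BaseConfig:
        # minTrackerConfidence: [ 0.1,0.9,linear,real ]   # commented out -> PipeTuner takes the value from the trackerPath config
        minDetectorConfidence: [ 0.15,0.25,linear,real ]  # narrowed around a value you know works
# the same idea applies to the PGIE group (e.g. pre-cluster-threshold)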

For reference, I’m using the Tracker file that comes with the DeepStream container at /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml and it works for me. Here’s a breakdown of the min/max values defined in /home/david/sources/pipetuner/pipe-tuner-road/configs/config_PipeTuner/SDG_sample_PeopleNet-ResNet34_NvDCF-ResNet50_MOT.yml compared with the ones used by default with the DeepStream reference app:

[
    # [parameter, min, max, default used by the DeepStream reference app tracker config]
    ['minDetectorConfidence', 0, 0.4, 0.1894],
    ['enableBboxUnClipping', 0, 1, 1],
    ['minIouDiff4NewTarget', 0.1, 0.9, 0.3686],
    ['minTrackerConfidence', 0.1, 0.9, 0.1513],
    ['probationAge', 1, 10, 2],
    ['maxShadowTrackingAge', 0, 100, 42],
    ['minMatchingScore4Overall', 0, 0.9, 0.6622],
    ['minTrackletMatchingScore', 0, 0.9, 0.2940],
    ['minMatchingScore4ReidSimilarity', 0, 0.9, 0.0771],
    ['matchingScoreWeight4TrackletSimilarity', 0, 0.9, 0.7981],
    ['matchingScoreWeight4ReidSimilarity', 0, 0.9, 0.3848],
    ['minTrajectoryLength4Projection', 5, 40, 34],
    ['trajectoryProjectionLength', 10, 150, 33],
    ['maxAngle4TrackletMatching', 60, 180, 67],
    ['minSpeedSimilarity4TrackletMatching', 0, 0.1, 0.0574],
    ['minBboxSizeSimilarity4TrackletMatching', 0, 1.0, 0.1013],
    ['reidExtractionInterval', 0, 50, 8],
    ['minMatchingScore4Overall (DA)', 0, 0.9, 0.0222],
    ['minMatchingScore4SizeSimilarity', 0, 0.9, 0.3552],
    ['minMatchingScore4Iou', 0, 0.9, 0.0548],
    ['minMatchingScore4VisualSimilarity', 0, 0.9, 0.5043],
    ['matchingScoreWeight4VisualSimilarity', 0, 1.0, 0.3951],
    ['matchingScoreWeight4SizeSimilarity', 0, 1.0, 0.6003],
    ['matchingScoreWeight4Iou', 0, 1.0, 0.4033],
    ['tentativeDetectorConfidence', 0.1, 0.9, 0.1024],
    ['minMatchingScore4TentativeIou', 0.1, 0.9, 0.2852],
    ['processNoiseVar4Loc', 1, 10000, 6810.8668],
    ['processNoiseVar4Size', 1, 10000, 1541.8647],
    ['processNoiseVar4Vel', 1, 10000, 1348.4874],
    ['measurementNoiseVar4Tracker', 1, 10000, 293.3238],
    ['featureFocusOffsetFactor_y', -0.5, 0.5, -0.1054],
    ['filterLr', 0.01, 0.5, 0.0767],
    ['filterChannelWeightsLr', 0.01, 0.1, 0.0339],
    ['gaussianSigma', 0.01, 1.8, 0.5687]
]

If you're using a custom PGIE lib implementation like this one for YOLO, make sure to remove the parametrization of dbscan-min-score, eps and minBoxes, as this implementation does not support DBSCAN.
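
For example, in the PGIE group of the config_PipeTuner file (a sketch; the bracketed ranges follow the same syntax as the other entries and are left elided here):

# dbscan-min-score: [ ... ]   # comment out / remove: DBSCAN is not supported by this parser
# eps: [ ... ]
# minBoxes: [ ... ]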

If none of this works, I'm afraid your issue may be different from mine. Hope this helps!


Hello, I would like to reopen this discussion because I finally found the time to work on this subject again.
I am now on a fresh install, with a new dataset and a new model, but I still get the same error.

Following your instructions, I tried to launch the deepstream-app with the exact config file that is in pipe-tuner-sample/output/Custom_dataset_config.yml_output/results/configs_03-18-2025_07-16-38/config_dsApp/dsAppConfig_0.txt (I just modified it so that it outputs a video with OSD). By default, this file points to the tracker config file in pipe-tuner-sample/output/Custom_dataset_config.yml_output/results/configs_03-18-2025_07-16-38/config_Tracker/config_tracker_NvDCF_accuracy_ResNet50.yml, but I also tried to change it to pipe-tuner-sample/output/Custom_dataset_config.yml_output/checkpoints/DsAppRun_output_20250318_071705/0/config_tracker.yml, which should be, as you said, the exact DS pipeline config used for the last iteration that returned a score of 0. I launch the app inside the container using:

deepstream-app -c /home/ia/pipe-tuner-sample/output/Custom_dataset_config.yml_output/results/configs_03-18-2025_07-16-38/config_dsApp/dsAppConfig_0.txt

In both cases, the output video has no problem; the bboxes are present and correct.
I did all of that after making the changes recommended by user davconde, changing the IDMAP in kittiTrack2mot.sh to my classes, and making sure my dataset was correctly annotated.

I am including all the logs I could find (but I don't think they provide any useful information in my case) and the config files used. If necessary, I can provide the dataset I'm using (it is just one video for now because I wanted to make sure it worked first), and the model.
config_infer_primary_240514_d01_vis_aerial.txt (943 Bytes)
log_server_2025-03-18_15-04-51.txt (422 Bytes)
log_client_2025-03-18_15-04-51.txt (8.6 KB)
Custom_dataset_config.zip (3.4 KB)

Any help would be appreciated.

As davconde suggested, can you start from a minimal tuning setup, i.e., comment out all the params in the tuning config except one? That way you are tuning only one param with a very narrow range around the default value, which you have already verified in a standalone DS execution. If that succeeds, you can gradually add more params with wider ranges to the tuning setup.

Hello, sorry again for the late reply. I did just what you said and tried with this config:

NvDCF:
      BaseConfig:
        minDetectorConfidence: [ 0.3,0.4,linear,real ]

Because I know that anywhere in this window of confidence thresholds my detector works properly. But I still get the same error…