Weird behavior when switching engines on deepstream-app

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson Xavier AGX
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.2.1

When I change config_infer_primary.txt to load a different model (onnx-file, labelfile-path, num-detected-classes), it keeps detecting with the previous model. Important: I am describing the objects that are detected in the image, not the labels on the bounding boxes. The latter are consistent with the label file I provide.

I have 2 YOLOv5 models: (1) the publicly available one, let's call it A, which has 80 classes; (2) my custom model, let's call it B, which has 8 classes. For each one I export the .pt to .onnx along with its corresponding label.txt.

Suppose I run deepstream-app -c deepstream_app_config.txt with this in config_infer_primary.txt:


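The original config excerpt was not preserved in the thread. A minimal sketch of the relevant fields for model A might look like this (all file names are hypothetical):

```ini
[property]
# Model A: the public 80-class YOLOv5 model (file names are illustrative)
onnx-file=yolov5s_coco.onnx
model-engine-file=yolov5s_coco.onnx_b1_gpu0_fp16.engine
labelfile-path=labels_coco.txt
num-detected-classes=80
```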
It detects the objects from A and places the bounding-box labels from A.

Now, when I change the model to B:


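Again, the excerpt was lost; the corresponding sketch for model B (hypothetical file names) would be:

```ini
[property]
# Model B: the custom 8-class YOLOv5 model (file names are illustrative)
onnx-file=yolov5s_custom.onnx
model-engine-file=yolov5s_custom.onnx_b1_gpu0_fp16.engine
labelfile-path=labels_custom.txt
num-detected-classes=8
```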
It detects the objects from A and places the bounding-box labels from B?! Again, I am not talking about the labels on the bounding boxes; I am concerned with the objects that are being detected. I don't understand this.

Moreover, if I force a “file not found” error on model-engine-file by pointing it at a file that does not exist, I can switch between model A and B, back and forth, and everything works perfectly. But why?

Thus, it seems the absence of a valid model-engine-file forces some kind of reset/default behavior that allows the switched model to be picked up and to produce the correct detections.

What am I missing here?

Do your two models have the same name? Could you change them and the model-engine-file to different names?

Yes, I am sure about the names. Each model has a different name, each label file has a different name, and in the config file I am certain I specified the correct pair of model file and label file. I have already ruled out that mistake.

I believe we could debug this by considering the observation I made about the forced model-engine-file error: it produces the expected behavior.

Could you attach the models, config files, and label files for us?

These are the files:
sample_configs.tar.gz (54.5 MB)

Can you comment out the line below and try it out?


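The quoted line was not preserved; from the reply that follows, it appears to be the model-engine-file setting, e.g. (path is hypothetical):

```ini
# model-engine-file=yolov5s_custom.onnx_b1_gpu0_fp16.engine
```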
Yes, it works fine as expected. Can you explain why?

If you have configured the model-engine-file field and that engine file exists, the nvinfer plugin will load it directly and will no longer generate an engine from the ONNX model you set.
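This selection logic can be sketched in Python. This is a simplification for illustration, not nvinfer's actual code:

```python
import os


def select_engine(model_engine_file, onnx_file):
    """Sketch of nvinfer's engine selection: if the configured engine file
    exists, it is loaded as-is (even if it was built from a different ONNX
    model); otherwise a new engine is built from the ONNX model and
    typically serialized for reuse on the next run."""
    if model_engine_file and os.path.exists(model_engine_file):
        return f"load cached engine: {model_engine_file}"
    return f"build engine from ONNX: {onnx_file}"
```

This is why a bogus model-engine-file path "fixed" the problem: the lookup failed, so the engine was rebuilt from the newly configured ONNX model. Deleting the stale .engine file, or giving each model a distinct model-engine-file path, has the same effect.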
