Is there a way to configure the model-engine-file dynamically?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 5.0.1
• JetPack Version (valid for Jetson only): 4.5 (latest SD image)

I have a question regarding the config entry model-engine-file. My configuration can take up to three cameras, selected dynamically on the command line, much like the Python “test-3” sample (deepstream_test_3.py) run with file sources.

As in that sample, I programmatically change the batch-size parameter according to the number of sources found (1 for one source, 2 for two, and so on), roughly as sketched below.
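For reference, this is approximately what I do to adjust the batch size, following the pattern from deepstream_test_3.py (the element names and the pgie config file name are taken from that sample):

```python
import sys

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvstreammux and nvinfer elements, created as in deepstream_test_3.py
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "dstest3_pgie_config.txt")

# One source URI per command-line argument
number_sources = len(sys.argv) - 1

# Match the muxer batch size to the number of sources found
streammux.set_property("batch-size", number_sources)

# Override the batch-size that came from the config file, as the sample does
pgie_batch_size = pgie.get_property("batch-size")
if pgie_batch_size != number_sources:
    sys.stderr.write("Overriding infer-config batch-size %d with %d\n"
                     % (pgie_batch_size, number_sources))
    pgie.set_property("batch-size", number_sources)
```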

This seems to clash with the fixed model-engine-file entry in the config, which always points to one specific file, e.g. /home/neil/jetson/models/primary-detector-nano/resnet10.caffemodel_b1_gpu0_fp16.engine.

If this file was produced with batch-size 1 (as the “b1” in the name indicates), everything is fine as long as I start with a single source: the engine is loaded and the pipeline boots up fast. But if I start with, say, 3 sources, the engine file is rebuilt from scratch, which slows down the start considerably (and sometimes also causes a crash).

Is there a way to configure the engine file programmatically, without having to adapt the config file every time the number of inputs changes?
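What I have in mind is something like the sketch below (continuing from the snippet above): pick the engine file that matches the number of sources and set it on the nvinfer element. I am assuming here that model-engine-file can be set as an element property and takes precedence over the config entry, and that the _b<N>_ naming pattern holds for other batch sizes; I have not verified either.

```python
# Hypothetical: choose the engine whose batch size matches the source count.
# The _b%d_ naming pattern is assumed from the b1 file nvinfer generated.
engine_file = (
    "/home/neil/jetson/models/primary-detector-nano/"
    "resnet10.caffemodel_b%d_gpu0_fp16.engine" % number_sources
)
pgie.set_property("model-engine-file", engine_file)
pgie.set_property("batch-size", number_sources)
```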

I could also omit the dynamic change of the batch-size, but I don’t know what the consequences would be.

I think I would be better off configuring the maximum batch size (3) by default and not changing it programmatically. That would allow me to use just one engine file from the configuration.
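In other words, I would keep a fixed maximum instead of deriving anything from the source count, roughly like this (whether set in the config file or once in code; the _b3_ file name is my assumption about what nvinfer generates for batch-size 3):

```python
# Alternative: one fixed maximum batch size, one engine file for 1-3 sources
MAX_SOURCES = 3
pgie.set_property("batch-size", MAX_SOURCES)
pgie.set_property(
    "model-engine-file",
    "/home/neil/jetson/models/primary-detector-nano/"
    "resnet10.caffemodel_b3_gpu0_fp16.engine",
)
```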

Let me know if I’m wrong.


Try setting batch-size=3 in the [primary-gie] group and run again.