A generalized DeepStream adaptable to all models

Is anyone else tired of the constant struggle to create .engine files for different models in DeepStream? Every time I need to convert a new model, it feels like I’m starting a wild goose chase all over again. 😫 I’m thinking there’s got to be a better way. What if we could create a generalized config file that works for most models? Here’s what I’m envisioning:

  1. A base config file with common settings
  2. Placeholders for model-specific stuff (file paths, input/output layers, etc.)
  3. A simple script to auto-generate configs for different models
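The generator in step 3 could be as simple as a template with placeholders. Below is a minimal sketch of that idea; the config keys mirror common Gst-nvinfer properties, but the file names, default values, and the `generate_config` helper are all illustrative assumptions, not an existing tool.

```python
# Sketch of the "base config + placeholders" idea: fill model-specific
# values into a shared nvinfer-style config template.
from string import Template

# Base template with placeholders for the model-specific fields.
BASE_CONFIG = Template("""\
[property]
gpu-id=0
onnx-file=$onnx_file
model-engine-file=$engine_file
batch-size=$batch_size
network-mode=$network_mode
num-detected-classes=$num_classes
""")

def generate_config(onnx_file, engine_file, batch_size=1,
                    network_mode=2, num_classes=80):
    """Return a config string with the model-specific fields filled in."""
    return BASE_CONFIG.substitute(
        onnx_file=onnx_file,
        engine_file=engine_file,
        batch_size=batch_size,
        network_mode=network_mode,
        num_classes=num_classes,
    )

# Example: generate a config for a hypothetical YOLO export.
print(generate_config("yolov8.onnx", "yolov8.engine"))
```

A real version would also need per-model preprocessing and postprocessing entries, which is exactly where the one-size-fits-all idea gets hard.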

Has anyone tackled this problem before? I’d love to hear your thoughts or see any solutions you’ve come up with. Maybe we could even collaborate on a community-driven tool to make this process less painful for everyone. Let’s put an end to the .engine file headache once and for all! 💪

Are you asking for a tool or a method to generate the TensorRT engine automatically?

We use "trtexec" as the common tool to generate TensorRT engine files from ONNX models.
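For reference, a typical invocation passes the ONNX model and the output engine path to trtexec. The sketch below just assembles that command line in Python (`--onnx`, `--saveEngine`, and `--fp16` are standard trtexec flags); whether to actually run it via `subprocess.run()` is left to the caller, and the file names are placeholders.

```python
# Build (but do not execute) a trtexec command for ONNX -> engine conversion.
import shlex

def trtexec_command(onnx_path, engine_path, fp16=True):
    """Return the trtexec argument list for converting an ONNX model."""
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        cmd.append("--fp16")  # request a reduced-precision engine build
    return cmd

# Show the command that would be run on a machine with trtexec on PATH.
print(shlex.join(trtexec_command("model.onnx", "model.engine")))
```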

Which kind of use cases do you want to be added?

I am asking for a tool to generate DeepStream config files.

The latest DeepStream 7.1 supports ONNX models, so the model parameters needed in the configuration file are simple. The other parameters are for preprocessing and postprocessing, and these can’t be deduced from the ONNX model itself.
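A Gst-nvinfer config illustrates the split: only a few keys describe the model file itself, while the rest encode preprocessing and postprocessing choices that the ONNX graph cannot tell you. The values below are illustrative, not a working config for any particular model.

```ini
[property]
# Model parameters: straightforward, tied to the model file
onnx-file=model.onnx
batch-size=1

# Preprocessing: cannot be deduced from the ONNX model
net-scale-factor=0.0039215697906911373
model-color-format=0

# Postprocessing: parser and clustering choices
num-detected-classes=80
cluster-mode=2
```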

Thanks for the reply!
Okay, I have another doubt: how do we get all the parameters in a config file? Whenever we make a config file for a model, we have to copy-paste from some git repo. Is there a way to know all the parameters that have to be mentioned in the config file of a specific model?