Hello everyone.
I am looking for a feature I thought I was going to find right away, but haven’t been able to find.
How do I set the tlt-model-key in a deepstream-app-based application’s source code instead of setting it in the config file or config_infer file?
I tried adding it in deepstream_app_config_parser.c.
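For context, the usual way this key is supplied is through the nvinfer configuration file rather than application code. A minimal fragment is sketched below; the model path and key value are placeholders, not real credentials:

```
[property]
# Encrypted TLT/TAO model and the key it was exported with
# (file name and key below are illustrative placeholders)
tlt-encoded-model=models/resnet18_detector.etlt
tlt-model-key=my-export-key
```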
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Hello @bcao, thank you for your answer
Here is the info; sorry for not posting it before.
• Hardware Platform (Jetson / GPU): NVIDIA GPU
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.2.2
• NVIDIA GPU Driver Version (valid for GPU only): 460.39
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Thanks. I guess you cannot change the key from the app level; the config key “CONFIG_GROUP_INFER_TLT_MODEL_KEY” declared in gstnvinfer_property_parser.h is consumed inside gstnvinfer itself, not in deepstream-app, which is why editing deepstream_app_config_parser.c has no effect. If you really don’t want it in the config file, I think you can change the gstnvinfer source code.