Yes, it is input-dims=3;224;224;0
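For context, that property belongs in the [property] section of the nvinfer config file. A minimal sketch of where it sits, assuming a TLT classification model (the file paths, key, and blob names below are placeholders, not values from this thread):

```ini
[property]
gpu-id=0
# TLT-encoded classification model; replace the path and key with your own
tlt-encoded-model=../../models/classification/final_model.etlt
tlt-model-key=<your-ngc-key>
labelfile-path=labels.txt
# channels;height;width;input-order (trailing 0 = NCHW)
input-dims=3;224;224;0
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
# network-type=1 marks this as a classifier network
network-type=1
```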
Glad to see that you have the solution now.
Actually, when you deploy the TLT model with the corresponding config file, it should work.
For example, if you trained a two-class model (person plus one other class) with the TLT classification network, you can run inference in DeepStream in the following two ways.
- Working as the primary TRT engine
ds_classification_as_primary_gie (3.4 KB)
config_as_primary_gie.txt (741 Bytes)
nvidia@nvidia:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c ds_classification_as_primary_gie
- Working as the secondary TRT engine
ds_classification_as_secondary_gie (3.6 KB)
config_as_secondary_gie.txt (741 Bytes)
nvidia@nvidia:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c ds_classification_as_secondary_gie
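Roughly, the secondary-gie variant keeps the same model settings but runs the classifier per object output by a primary detector. A hedged sketch of the extra nvinfer keys involved (values are illustrative, not taken from the attached configs):

```ini
[property]
# ...same model properties as the primary case...
# process-mode=2 runs this model as a secondary (per-object) classifier
process-mode=2
# only classify objects produced by the primary gie with this unique id
operate-on-gie-id=1
# optionally restrict classification to specific detector class ids
operate-on-class-ids=0
classifier-threshold=0.2
```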