I believe one needs to create these config files for custom models in order to build the pipeline. Let me know if I am right here.
So I would like a clear explanation of the required workflow when one has an ONNX model file. It would also be great if you could describe the corresponding process for an .etlt model file.
Thanks for the reply. I found Integrating classification model in deepstream, which seems close to what I want. I trained a classification model using the TAO Toolkit and got the following outputs: nvinfer_config.txt, an .etlt model file, and a labels.txt. The doc I shared above says I need to modify a config_infer_*.txt, but I am unable to find that file.
[property]
gpu-id=0
# preprocessing parameters
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
batch-size=30
# Model specific paths. These need to be updated for every classification model.
int8-calib-file=<Path to optional INT8 calibration cache>
labelfile-path=<Path to classification_labels.txt>
tlt-encoded-model=<Path to Classification etlt model>
tlt-model-key=<Key to decrypt model>
infer-dims=c;h;w # where c = number of channels, h = height of the model input, w = width of model input
uff-input-blob-name=input_1
uff-input-order=0
output-blob-names=predictions/Softmax
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
process-mode=2
interval=0
network-type=1 # defines that the model is a classifier.
gie-unique-id=1
classifier-threshold=0.2
Please look at the fields in the sample config above and compare. My feeling is that I need to change the fields the two files have in common; please correct me if I am wrong.
Please note: since my trained model is a classifier, I expect my nvinfer_config.txt to contain all the important fields present in config_infer_secondary_*.txt.
Thanks, @yuweiw; I can now integrate TAO-trained .etlt models. (I will work on ONNX model file integration next, so I would like this query to stay open until I complete it.) I am attaching my config file here for community reference.
[property]
gpu-id=0
# preprocessing parameters
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
batch-size=30
maintain-aspect-ratio=0
# Model specific paths. These need to be updated for every classification model.
# int8-calib-file=<Path to optional INT8 calibration cache>
labelfile-path=<Path to labels.txt>
tlt-encoded-model=<.etlt model file>
tlt-model-key=<KEY> ## the key used during TAO training/export
## input-dims=3;224;224;0
infer-dims=3;224;224
uff-input-blob-name=input_1
uff-input-order=0
output-blob-names=predictions/Softmax
num-detected-classes=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
process-mode=2 # Mode (primary or secondary) in which the element is to operate on
network-type=1
gie-unique-id=5 # set this ID manually; it must be unique per GIE
operate-on-gie-id=1 # Unique ID of the GIE on whose metadata (bounding boxes) this GIE is to operate on
classifier-threshold=0.2
operate-on-class-ids=0 # Class IDs of the parent GIE on which this GIE is to operate on
As mentioned in the answers above, the content of nvinfer_config.txt is what you need in order to write your own config file.
PS: the gie-id fields were a great help in my case, since I use a primary (detection) model together with my secondary (classification) model.
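For anyone following the ONNX path: my understanding (untested so far, based on the Gst-nvinfer property list) is that the config stays largely the same, except the TLT/UFF-specific keys (tlt-encoded-model, tlt-model-key, uff-input-blob-name, uff-input-order) are replaced by onnx-file. The output-blob-names and preprocessing values below depend on your exported model, so treat this only as a sketch:

[property]
gpu-id=0
# preprocessing parameters (adjust to match your model's training preprocessing)
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
batch-size=30
labelfile-path=<Path to labels.txt>
onnx-file=<Path to .onnx model>
# model-engine-file=<Path to TensorRT engine> # optional; generated on first run
output-blob-names=predictions/Softmax
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
process-mode=2
network-type=1
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0
classifier-threshold=0.2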