Continuing the discussion from Tao customizing pretrained model:
Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc)
T4
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
Classification: Resnet18
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here)
tao info:
Configuration of the TAO Toolkit Instance
dockers: ['nvidia/tao/tao-toolkit-tf', 'nvidia/tao/tao-toolkit-pyt', 'nvidia/tao/tao-toolkit-lm']
format_version: 2.0
toolkit_version: 3.22.05
published_date: 05/25/2022
• Training spec file(If have, please share here)
Standard classification_spec.cfg (as mentioned in the link above)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
Problem: the model works, but DeepStream does not show classification labels.
What I did:
(1) Added the model to DeepStream (details below).
(2) Expected the output to appear under frame_meta → obj_meta → classifier_meta → label_info_list → labels.
(3) Added print statements to the process_meta function inside deepstream_app.c:
obj->classifier_meta_list =
    g_list_sort (obj->classifier_meta_list, component_id_compare_func);
printf ("in classifier before classifier meta list\n");
for (NvDsMetaList * l_class = obj->classifier_meta_list; l_class != NULL;
    l_class = l_class->next) {
  printf ("in classifier 0 before label\n");
  NvDsClassifierMeta *cmeta = (NvDsClassifierMeta *) l_class->data;
  printf ("in classifier before label\n");
  for (NvDsMetaList * l_label = cmeta->label_info_list; l_label != NULL;
      l_label = l_label->next) {
    NvDsLabelInfo *label = (NvDsLabelInfo *) l_label->data;
    if (label->pResult_label) {
      sprintf (str_ins_pos, " %s", label->pResult_label);
    } else if (label->result_label[0] != '\0') {
      sprintf (str_ins_pos, " %s", label->result_label);
    }
    printf ("CLASSIFIER %s\n", str_ins_pos);
    str_ins_pos += strlen (str_ins_pos);
  }
}
Only the first printf fires (printf ("in classifier before classifier meta list\n")); the loop body is never entered.
Why is it not showing any labels?
Details below:
The classification training worked.
The "tao test" worked well on the DeepStream sample stream:
/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4
(I break the stream into frames and then run the test program; the resulting results.csv makes sense.)
(1) I exported the model to .etlt and then converted it to a TensorRT engine.
(2) I added them to the DeepStream (6.0) model folders, with the appropriate paths in the main config, etc.
labels: aeroplane;bicycle;bird;boat;bottle;bus;car;cat;chair;cow;diningtable;dog;dropfill;horse;idle;motorbike;person;pottedplant;roller;sheep;sofa;towel;train;tvmonitor
config:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;224;224
tlt-model-key=XXXXXXX
network-type=1
num-detected-classes=24
uff-input-order=0
output-blob-names=predictions/Softmax
uff-input-blob-name=input_1
model-color-format=1
maintain-aspect-ratio=0
output-tensor-meta=0
#model-engine-file=…/…/…/…/…/samples/models/Classification/final_model.engine
int8-calib-file=…/…/…/…/…/samples/models/Classification/final_model_int8_cache.bin
labelfile-path=…/…/…/…/…/samples/models/Classification/labels.txt
tlt-encoded-model=…/…/…/…/…/samples/models/Classification/final_model.etlt
(3) The TensorRT engine does not load in DeepStream directly (some library conflict), but the .etlt integrates and works. I get the log below:
0:00:40.660459034 33264 0x7f1974001410 INFO nvinfer gstnvinfer.cpp:654:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:46.403691343 33264 0x7f1974001410 INFO nvinfer gstnvinfer.cpp:654:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Classification/final_model.etlt_b4_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 24x1x1
0:00:46.413073716 33264 0x7f1974001410 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/nvinfer_classification_config.txt sucessfully
But no labels are printed. Happy to provide more details. I made it the primary GIE:
[primary-gie]
enable=1
gpu-id=0
batch-size=4
network-mode=0
# process-mode: 2 = inference on crops from the primary detector, 1 = inference on the whole frame
process-mode=1
interval=0
# network-type=1 defines that the model is a classifier.
network-type=1
gie-unique-id=1
classifier-threshold=0.2
config-file=/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/nvinfer_classification_config.txt
Why is it not working with DeepStream?
Thanks.