How to run a TAO ResNet-50 classifier model as PGIE in DeepStream 6.1

Hi @Morganh, I have a ResNet-50 TAO classifier model. I successfully built the engine file from the .etlt model with nvinfer and tried to run the model as a primary classifier, but I don't get any results in frame_meta: the number of objects in frame_meta is zero. Here is my config file. Do I need to change anything in the config file to get the meta, or where will I find the classifier's output?

CUDA version: 11.4
DeepStream version: 6.1

[property]
process-mode=1
gpu-id=0
net-scale-factor=1.0
#offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=labels.txt
tlt-encoded-model=clf_v9.etlt
model-engine-file=clf_v9.etlt_b1_gpu0_fp16.engine
tlt-model-key=nvidia_tlt
infer-dims=3;320;320
uff-input-blob-name=input_1
uff-input-order=0
batch-size=1
network-mode=2
interval=0
gie-unique-id=8
network-type=1
scaling-filter=1
scaling-compute-hw=1
output-blob-names=predictions/Softmax
classifier-threshold=0.000000001
is-classifier=1

I process the batch meta like this:

NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
     l_frame = l_frame->next)
{
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    cout << "number of objects in frame_meta: " << frame_meta->num_obj_meta << endl;
}
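A classifier PGIE does not produce detections, so num_obj_meta staying at zero is not by itself proof that inference failed; the classification result is normally carried as NvDsClassifierMeta. A hedged sketch of walking one level deeper (assuming the result is attached under the object meta; whether a full-frame classifier attaches it there depends on the nvinfer version, which is why checking attach_metadata_classifier is useful):

```cpp
// Sketch only: requires the DeepStream headers (gstnvdsmeta.h) and the
// frame_meta obtained in the loop above. Field names (classifier_meta_list,
// label_info_list, result_label, result_prob) are from the NvDs metadata API.
for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
     l_obj = l_obj->next)
{
    NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)(l_obj->data);
    for (NvDsMetaList *l_cls = obj_meta->classifier_meta_list; l_cls != NULL;
         l_cls = l_cls->next)
    {
        NvDsClassifierMeta *cls_meta = (NvDsClassifierMeta *)(l_cls->data);
        for (NvDsMetaList *l_lbl = cls_meta->label_info_list; l_lbl != NULL;
             l_lbl = l_lbl->next)
        {
            NvDsLabelInfo *label = (NvDsLabelInfo *)(l_lbl->data);
            cout << "label: " << label->result_label
                 << " prob: " << label->result_prob << endl;
        }
    }
}
```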

You may compare your config with the one in the FAQ: DeepStream SDK FAQ - #25 by fanzh.

I checked all the parameters; the only difference is that I am using network mode FP16. Please help me with this.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi @fanzh
Hardware Platform : GPU
DeepStream Version : 6.1
TensorRT : 8.2
NVIDIA GPU DRIVER : NVIDIA GeForce GTX 1660 Ti/PCIe/SSE2

My question is: I ran a ResNet-50 TAO classifier model as PGIE in DeepStream, but I didn't get any results in the frame meta. I converted the .etlt model with network-mode FP16, and this is my config file. Do I need to change anything in the config file to get the meta, or where will I find the classifier's output?

[property]
process-mode=1
gpu-id=0
net-scale-factor=1.0
#offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=labels.txt
tlt-encoded-model=clf_v9.etlt
model-engine-file=clf_v9.etlt_b1_gpu0_fp16.engine
tlt-model-key=nvidia_tlt
infer-dims=3;320;320
uff-input-blob-name=input_1
uff-input-order=0
batch-size=1
network-mode=2
interval=0
gie-unique-id=8
network-type=1
scaling-filter=1
scaling-compute-hw=1
output-blob-names=predictions/Softmax
classifier-threshold=0.000000001
is-classifier=1

Could anyone please help me with this?

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

Sorry for the late reply.

  1. Did you try the command in DeepStream SDK FAQ - #25 by fanzh? It uses a classification model as the PGIE.
  2. Was the engine generated by trtexec? Could you share the command? Please make sure the configuration is correct.
  3. The nvinfer plugin is open source; you can add logging in attach_metadata_classifier to debug.
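For reference, .etlt models are usually converted with tao-converter rather than trtexec. A hedged sketch of a conversion command matching the config above (the exact flags are an assumption; verify against `tao-converter -h` for your installed release):

```shell
# Illustrative tao-converter invocation: -k matches tlt-model-key,
# -d matches infer-dims=3;320;320, -o matches output-blob-names,
# -t fp16 matches network-mode=2, -m matches batch-size=1.
tao-converter clf_v9.etlt \
  -k nvidia_tlt \
  -d 3,320,320 \
  -o predictions/Softmax \
  -t fp16 \
  -m 1 \
  -e clf_v9.etlt_b1_gpu0_fp16.engine
```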

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.