Port Resnet50.caffemodel and its prototxt to be used in DeepStream 5.0?

Hi, I have a Resnet50.caffemodel. Kindly give me the easiest steps to port it to the DeepStream SDK, so that I can test it in a sample pipeline with the nvinfer plugin.

**• Hardware Platform (Jetson / GPU)** T4
**• DeepStream Version** 5.0
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version** 7.0
**• NVIDIA GPU Driver Version (valid for GPU only)** 450.36.06

@GalibaSashi

Resnet50 is nowadays usually used as a backbone in object detection, classification, or segmentation tasks.
What kind of task do you need to do with Resnet50?

Hi @ersheng

1) I will use YOLOv3-Tiny for detection and resnet50.caffemodel for the classification task.

2) Is trtexec the best and easiest way to port resnet50.caffemodel to DeepStream? If so, what are the exact command arguments needed?

3) Also, will this trtexec command be enough for all detection caffemodels and prototxts?

Kindly share your comments.
Thanks in advance,
Abraham

@GalibaSashi

You can convert a caffemodel on its own with trtexec, but you do not have to:
DeepStream can do this conversion for you, provided the corresponding configurations are set up properly.
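If you do want a standalone conversion, a trtexec invocation along these lines should work with TensorRT 7. The file names and the output blob name `prob` are assumptions (`prob` is the softmax output in the standard Caffe ResNet-50 deploy.prototxt), so verify them against your own model:

```shell
# Build an FP32 engine directly from the Caffe files.
# Paths and the output blob name "prob" are assumptions --
# check the last layer's top name in your deploy.prototxt.
trtexec --deploy=resnet50.prototxt \
        --model=resnet50.caffemodel \
        --output=prob \
        --batch=16 \
        --saveEngine=resnet50_b16_fp32.engine
```

The resulting engine can then be referenced from nvinfer via `model-engine-file`.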
Here is a sample nvinfer configuration for a classification model (resnet18) as a reference:

[property]
gpu-id=0
net-scale-factor=1
model-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel
proto-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.prototxt
model-engine-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
mean-file=../../../../samples/models/Secondary_VehicleTypes/mean.ppm
labelfile-path=../../../../samples/models/Secondary_VehicleTypes/labels.txt
int8-calib-file=../../../../samples/models/Secondary_VehicleTypes/cal_trt.bin
force-implicit-batch-dim=1
batch-size=16
# 0=FP32 and 1=INT8 mode
network-mode=1
input-object-min-width=64
input-object-min-height=64
model-color-format=1
process-mode=2
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
#scaling-filter=0
#scaling-compute-hw=0
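To adapt this to your resnet50.caffemodel, the main keys to change are the model paths and the output blob name. The file names and the `prob` blob below are assumptions based on the standard Caffe ResNet-50 release; check your own prototxt:

```
model-file=resnet50.caffemodel
proto-file=resnet50.prototxt
labelfile-path=labels.txt
output-blob-names=prob
# no INT8 calibration cache yet, so run in FP32
network-mode=0
```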

In addition, a configuration for the DeepStream pipeline itself is also needed.
This is a sample DS configuration, for a detection model alongside a classification model, that you can find on your machine:

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
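Inside that deepstream-app configuration, the classifier is wired in through a `[secondary-gie]` group that points at the nvinfer config file. Schematically (the `config-file` name here is illustrative, not an actual file from the SDK):

```
[secondary-gie0]
enable=1
gpu-id=0
batch-size=16
# must match gie-unique-id in the nvinfer config above
gie-unique-id=4
operate-on-gie-id=1
config-file=config_infer_secondary_resnet50.txt
```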

Hi @ersheng, @kayccc

[property]
gpu-id=0
net-scale-factor=1
model-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel
proto-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.prototxt
**#model-engine-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine**
**#mean-file=../../../../samples/models/Secondary_VehicleTypes/mean.ppm**
labelfile-path=../../../../samples/models/Secondary_VehicleTypes/labels.txt
**#int8-calib-file=../../../../samples/models/Secondary_VehicleTypes/cal_trt.bin**
force-implicit-batch-dim=1
batch-size=16
# 0=FP32 and 1=INT8 mode
network-mode=1
input-object-min-width=64
input-object-min-height=64
model-color-format=1
process-mode=2
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
#scaling-filter=0
#scaling-compute-hw=0

I have commented out mean-file, int8-calib-file, and model-engine-file, since I do not have any of these files. I hope that will not cause issues.

Comment out the files that you think you don't need.
The engine file is automatically generated from the caffemodel, so you can point to it and comment out the caffemodel and prototxt the next time you run the pipeline.
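For example, if the first run produces an engine next to the model, subsequent runs can load it directly. The exact engine file name depends on batch size, GPU, and precision, so copy the name from the first run's log rather than from this sketch:

```
model-engine-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
#model-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel
#proto-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.prototxt
```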