Use custom Caffe model in DeepStream SDK

Hi, I have a face detector trained with Caffe, based on SSD 300x300. I would like to know what I should do to use this model with the DeepStream SDK on a Jetson TX2. I've seen that the samples built into the SDK use a model file, a prototxt file, and a model engine file. My question is: how can I generate the .engine file?

I've tried to generate it based on this documentation: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation, section 3.2.3, but I keep getting the message "could not parse layer type Normalize", and I'm not quite sure I'm pointing in the right direction. What should I do, or what should I read, so I can use my Caffe face detector model in DeepStream?
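For reference, my attempt roughly follows the Caffe import flow from that section; a minimal sketch of it is below (the file names, output blob name, and sizes are placeholders for my actual model), and the Normalize message shows up during the parse step:

#include <fstream>
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Parse the Caffe files; this is where "could not parse layer type Normalize" appears
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse("deploy.prototxt", "face_ssd.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobNameToTensor->find("detection_out"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Serialize the engine to the .engine file referenced by the DeepStream config
    IHostMemory* serialized = engine->serialize();
    std::ofstream out("face_ssd.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());

    serialized->destroy();
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}

Thanks in advance!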

Hi,

We have an SSD detector sample in DeepStream, but it targets a TensorFlow-based (UFF) model:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD
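That sample also shows the nvinfer config layout. For a Caffe model, the config file would point at your files with keys roughly like the sketch below; the paths, class count, and output blob name are placeholders, and a Caffe SSD needs its own bounding-box parsing function compiled into custom-lib-path (the parser shipped with the sample is written for the UFF model's output):

[property]
gpu-id=0
net-scale-factor=1.0
model-file=face_ssd.caffemodel
proto-file=deploy.prototxt
# if this engine file does not exist yet, nvinfer builds it from the two files above
model-engine-file=face_ssd.caffemodel_b1_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=2
gie-unique-id=1
output-blob-names=detection_out
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so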

To narrow down the issue, could you first check whether your model can run inference with this TensorRT sample:
/usr/src/tensorrt/samples/sampleSSD/
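Regarding the "could not parse layer type Normalize" message: Normalize (like PriorBox and DetectionOutput) ships as a TensorRT plugin rather than a built-in layer, and sampleSSD registers those plugins before it creates the Caffe parser. If your own engine-building code does not do that, adding something like the sketch below (and linking against libnvinfer_plugin) before parsing is usually what is missing:

#include "NvInfer.h"
#include "NvInferPlugin.h"

// Call once before createCaffeParser()/parse(); "logger" is the same
// nvinfer1::ILogger instance you pass to createInferBuilder().
void registerTrtPlugins(nvinfer1::ILogger& logger)
{
    // Registers Normalize_TRT, PriorBox_TRT, NMS_TRT, etc. in the plugin
    // registry so the Caffe parser can resolve those layer types.
    initLibNvInferPlugins(&logger, "");
}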

Thanks.