Custom SSD model on DeepStream 5.0

Environment

GPU Type: dGPU (RTX 2060)
Operating System: Ubuntu 18.04
DeepStream Version: 5.0
TensorRT Version: 7.0
NVIDIA Driver Version: 440

I have converted the pretrained 91-class model to UFF using the sample provided in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD and ran the DeepStream app included in the same directory. It works perfectly, except that the labels were mismatched; I had to add dummy strings to the labels file to fix it.
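For reference, the conversion I ran was roughly the following; the config.py path and output file name are from my setup, so treat them as an example rather than an exact recipe:

    convert-to-uff frozen_inference_graph.pb -O NMS \
        -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
        -o sample_ssd.uff

Here -O names the output node (NMS), -p points to the graphsurgeon preprocessing script that maps the TensorFlow graph nodes to the TensorRT plugin nodes, and -o sets the output UFF file.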

The custom model gets converted to UFF without any errors, but when running the DeepStream app it gives this error:

0:00:00.385595129 18164 0x556fa7801070 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
[libprotobuf FATAL /externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_): 
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
  what():  CHECK failed: (index) < (current_size_): 
Aborted (core dumped)

Hi @priyanshthakore,
You could use TensorRT's trtexec to try running your UFF file.
From the log, it looks like a problem with the model itself rather than with DeepStream.
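For example, something along these lines exercises the UFF parser and engine build outside of DeepStream (the input/output tensor names and dimensions here assume the standard sampleUffSSD configuration, so adjust them to match your graph):

    trtexec --uff=sample_ssd.uff --uffInput=Input,3,300,300 --output=NMS

If trtexec fails in the same way, the problem is in the model/UFF conversion; if it builds an engine successfully, the DeepStream configuration is the next thing to check.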