tlt-converter [ERROR] UffParser and NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED in DeepStream

I have trained a model using DetectNet with a ResNet-18 backbone in the Transfer Learning Toolkit for Intelligent Video Analytics, following these steps:
  0. [Set up env variables]
  1. [Prepare dataset and pre-trained model]
        1. [Verify downloaded dataset]
        2. [Prepare tfrecords from kitti format dataset]
        3. [Download pre-trained model]
  2. [Provide training specification]
  3. [Run TLT training]
  4. [Evaluate trained models]
  5. [Prune trained models]
  6. [Retrain pruned models]
  7. [Evaluate retrained model]
  8. [Visualize inferences]
  9. [Deploy]
        1. [Int8 Optimization]
        2. [Generate TensorRT engine]

I exported the model and saved the calibration file and the .etlt model on the Nvidia Jetson TX2. TensorRT 5.1.6 is already installed on the TX2.

After all the steps above, I am running tlt-converter on the TX2 to build the engine, but I get the following error:

nvidia@nvidia-desktop:~/Desktop/TLT_Converter$ ./tlt-converter -e resnet18_detector.engine -k MYKEY -c calibration.bin -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,384,1248 -b 4 -m 64 -t int8 -i nchw resnet18_detector.etlt
[ERROR] UffParser: Could not open /tmp/fileqsrTCW
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)
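One thing worth noting from the DeepStream log further down: it reports "INT8 not supported by platform. Trying FP16 mode." If the TX2 indeed has no INT8 support, an INT8 engine cannot be built on it. A hypothetical re-invocation targeting FP16 (same key, paths and output nodes as the failing command above, with the `-c` calibration argument dropped since it only applies to INT8) would look like:

```shell
# Hypothetical FP16 build -- same placeholders as the failing command above.
# -c calibration.bin is omitted because calibration is only used for INT8.
./tlt-converter -e resnet18_detector.engine \
                -k MYKEY \
                -o output_cov/Sigmoid,output_bbox/BiasAdd \
                -d 3,384,1248 \
                -b 4 -m 64 \
                -t fp16 \
                -i nchw \
                resnet18_detector.etlt
```

Note this would not by itself fix a UffParser "Could not open /tmp/..." failure, which usually points at the key or the .etlt file, but it avoids hitting the INT8 limitation afterwards.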

I also tried using the .etlt model and calibration file directly in DeepStream. Below is the config file, but I get an error:

[property]
gpu-id=0

# preprocessing parameters

net-scale-factor=0.0039215697906911373
model-color-format=0

# model paths

int8-calib-file=./models/calibration.bin
labelfile-path=./models/labels.txt
tlt-encoded-model=./models/resnet18_detector.etlt
#model-engine-file=./models/resnet18_detector.trt
tlt-model-key=MYKEY
# c;h;w;0 where c = number of channels, h = height of the model input,
# w = width of the model input; 0 implies CHW format
input-dims=3;384;1248;0
uff-input-blob-name=input_1
batch-size=4

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
#enable_dbscan=0

[class-attrs-all]
threshold=0.2
group-threshold=1

# Set eps=0.7 and minBoxes for enable-dbscan=1

eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
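Before digging into the parser error itself, it may be worth ruling out a simple path problem, since "Could not read buffer" can also mean the model file was never loaded. A minimal sketch (file names copied from the config above; everything else is a placeholder):

```shell
# Hypothetical sanity check: make sure every file nvinfer needs actually
# exists before launching the app. Note: relative paths may resolve against
# the app's working directory or the config file's directory depending on
# the DeepStream version -- check both.
check_files() {
    missing=0
    for f in "$@"; do
        if [ ! -f "$f" ]; then
            echo "MISSING: $f"
            missing=1
        fi
    done
    return $missing
}

check_files ./models/calibration.bin \
            ./models/labels.txt \
            ./models/resnet18_detector.etlt \
    && echo "all model files found" \
    || echo "fix the paths above before launching deepstream-test1-app"
```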

ERROR:
./deepstream-test1-app …/…/…/…/samples/streams/sample_720p.h264
Now playing: …/…/…/…/samples/streams/sample_720p.h264

Using winsys: x11
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:01.306060535  2052   0x5576738520 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.306441113  2052   0x5576738520 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:01.860334501  2052   0x5576738520 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:log(): UffParser: Could not read buffer.
NvDsInferCudaEngineGetFromTltModel: Failed to parse UFF model
0:00:01.868581228  2052   0x5576738520 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Failed to create network using custom network creation function
0:00:01.868673325  2052   0x5576738520 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.868743149  2052   0x5576738520 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.868778797  2052   0x5576738520 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:

Any help would be much appreciated.

Hi ajayskabadi2012,
For the tlt-converter error, please go through the topic below for more hints.
https://devtalk.nvidia.com/default/topic/1065680/transfer-learning-toolkit/tlt-converter-uff-parser-error/?offset=11#5397152

  1. The tlt-converter binary is downloaded from https://developer.nvidia.com/tlt-converter.
  2. The $KEY environment variable is actually set.
  3. The key is correct. It should be exactly the same key as used in the TLT training phase.
  4. The .etlt model is available.
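The checklist above can be sketched as a small pre-flight script. Everything here (the variable name, the file paths) is a placeholder for this particular setup, not an official tool; item 3 (the key matching the training key) can only be verified by you:

```shell
# Sketch of the checklist above as a pre-flight script. $KEY, the model path
# and the converter path are placeholders -- substitute your own values.
preflight() {
    key="$1"
    model="$2"
    converter="$3"

    # 2./3. The key must be set (and must match the key used in TLT training;
    # only the person who trained the model can verify that part).
    if [ -z "$key" ]; then
        echo "FAIL: key is empty -- export the exact key used in TLT training"
        return 1
    fi
    # 4. The .etlt model must be present and non-empty.
    if [ ! -s "$model" ]; then
        echo "FAIL: etlt model not found or empty: $model"
        return 1
    fi
    # 1. The converter binary must exist and be executable
    # (downloaded from https://developer.nvidia.com/tlt-converter).
    if [ ! -x "$converter" ]; then
        echo "FAIL: tlt-converter not found or not executable: $converter"
        return 1
    fi
    echo "OK: basic prerequisites satisfied"
}

preflight "$KEY" ./resnet18_detector.etlt ./tlt-converter || true
```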