0x55a9170640 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:log(): UffParser: Could not read buffer.

Hi,

While running the deepstream-custom application with the command ./deepstream-custom pgie_frcnn_uff_config.txt sample_720p.h264, I am facing the following error:

./deepstream-custom pgie_frcnn_uff_config.txt sample_720p.h264

Now playing: pgie_frcnn_uff_config.txt

Using winsys: x11
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:00.620880244 26424 0x55a9170640 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:00.942872196 26424 0x55a9170640 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:log(): UffParser: Could not read buffer.
NvDsInferCudaEngineGetFromTltModel: Failed to parse UFF model
0:00:00.943457149 26424 0x55a9170640 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Failed to create network using custom network creation function
0:00:00.943501535 26424 0x55a9170640 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:00.943962835 26424 0x55a9170640 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:00.944013685 26424 0x55a9170640 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Config file path: pgie_frcnn_uff_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:ds-custom-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: pgie_frcnn_uff_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Returned, stopping playback
Deleting pipeline

Contents of pgie_frcnn_uff_config.txt:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_frcnn_uff/frcnn_labels.txt
#uff-file=./faster_rcnn.uff
#model-engine-file=./faster_rcnn.uff_b1_fp32.engine
tlt-encoded-model=./models/frcnn/faster_rcnn.etlt
tlt-model-key=$KEY
uff-input-dims=3;384;1280;0
uff-input-blob-name=input_1
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress/BiasAdd;dense_class/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=./nvdsinfer_customparser_frcnn_uff/libnvds_infercustomparser_frcnn_uff.so

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

# Per class configuration

#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800

Thanks,
Deep

Hi deep,
Please check several pointers I mentioned in https://devtalk.nvidia.com/default/topic/1065722/transfer-learning-toolkit/not-able-to-deploy-etlt-file-in-deepstream-test-app-1/ firstly. Thanks.

Hi Morganh,

I have replaced the .etlt input with a model-engine-file (saved.engine); please see below.

I think there is a libnvinfer version mismatch.

Can you share libnvinfer5_5.1.5+cuda10.0_arm64.deb, since I have 5.1.5 installed?

Please guide me.
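
For reference, this is roughly how I checked the installed version on my board (just a quick check; the package names may differ depending on how TensorRT was installed):

$ dpkg -l | grep -i nvinfer
$ ls -l /usr/lib/aarch64-linux-gnu/libnvinfer*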

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_frcnn_uff/frcnn_labels.txt
#uff-file=./faster_rcnn.uff
model-engine-file=./models/frcnn/saved.engine
#tlt-encoded-model=./models/frcnn_kitti.etlt
tlt-model-key=ZDN1ZzdnMWlkaGVxZ3NiM3ZrNjYxdm5wczc6YjU1ZDE5YTItNWFmZS00YmJiLWE4MGEtYzQzMjA3NWI2MWE3
uff-input-dims=3;384;1280;0
uff-input-blob-name=input_1
batch-size=10

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress/BiasAdd;dense_class/Softmax;proposal
#output-blob-names=dense_regress/BiasAdd;dense_class/Softmax
parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=./nvdsinfer_customparser_frcnn_uff/libnvds_infercustomparser_frcnn_uff.so

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

# Per class configuration

#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800

Hi deep,
Please double-check the items below, which I mentioned in https://devtalk.nvidia.com/default/topic/1065722/transfer-learning-toolkit/not-able-to-deploy-etlt-file-in-deepstream-test-app-1/, first. I can reproduce your error when these conditions are not met.

2) the key is correct and is exactly the one used when generating the etlt model
3) there is no additional space character at the end of the tlt-model-key line; otherwise, if your key is “1234”, it is actually read as the unexpected key “1234 ”.
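
As a quick check for a trailing space (assuming your config file is named pgie_frcnn_uff_config.txt), you can make the line ending visible:

$ grep -n 'tlt-model-key' pgie_frcnn_uff_config.txt | cat -A

cat -A marks the end of each line with a $, so a trailing space shows up as a blank just before the $.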

Hi Morganh,

I just want to update you: I have executed the deepstream-custom binary successfully and am able to parse the .etlt FRCNN model trained using TLT (Transfer Learning Toolkit).

What I found is that we only need to copy the newly built out/libnvinfer_plugin.so.5.1.5 over /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6.

There is no need to replace all of the libnvinfer_plugin.so* files.

I had been copying all of the *.so files into /usr/lib.

When I replaced only that one lib, the error was gone.
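
Roughly, that one replacement looked like this (out/ here stands for the TensorRT OSS build output directory, so adjust the paths to your setup):

$ sudo cp out/libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6
$ sudo ldconfig

Note that cp follows the symlink, so it is the real libnvinfer_plugin.so.5.1.5.0 contents that get copied over the system lib.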

As mentioned on the GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models page, the instructions say to replace all the .so files in the target path.

I think that needs correction.

Please let me know if I have misunderstood something.

Thanks,
Deep

Hi deep,
Glad to know you solve the problem.
Yes, for the OSS build, libnvinfer_plugin.so* actually resolves to a single lib via soft links, as shown below.

$ ll  /home/nvidia/trt-oss/TensorRT/build/out/libnvinfer_plugin.so*
lrwxrwxrwx 1 nvidia nvidia      26 Sep 19 14:04 /home/nvidia/trt-oss/TensorRT/build/out/libnvinfer_plugin.so -> libnvinfer_plugin.so.5.1.5*
lrwxrwxrwx 1 nvidia nvidia      28 Sep 19 14:04 /home/nvidia/trt-oss/TensorRT/build/out/libnvinfer_plugin.so.5.1.5 -> libnvinfer_plugin.so.5.1.5.0*
-rwxrwxr-x 1 nvidia nvidia 2619200 Sep 19 14:04 /home/nvidia/trt-oss/TensorRT/build/out/libnvinfer_plugin.so.5.1.5.0*

For /usr/lib/aarch64-linux-gnu/, libnvinfer_plugin.so* likewise resolves to the single lib libnvinfer_plugin.so.5.1.6 via soft links, as shown below.

$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root      26 Jun  5 03:52 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.5.1.6
lrwxrwxrwx 1 root root      26 Jun  5 03:52 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5 -> libnvinfer_plugin.so.5.1.6
lrwxrwxrwx 1 root root      26 Sep 25 18:15 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.5 -> libnvinfer_plugin.so.5.1.6
-rw-r--r-- 1 root root 2619200 Sep 19 17:58 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6

If you replace all the libnvinfer_plugin.so* files in /usr/lib/aarch64-linux-gnu/, you need to make sure they all end up linked to one lib in the end, as shown above.
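
For example, one quick way to verify is to resolve each name and confirm they all point to the same file:

$ readlink -f /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
$ readlink -f /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5
$ readlink -f /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.5

All three commands should print the same final path, here /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6.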

Please do mention this on the GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models page.

My suggestion is to change the line below:
“To use these plugins for the samples here, compile a new libnvinfer_plugin.so* and replace your system libnvinfer_plugin.so*.”

People are confused by that, so please make it explicit, like:

“To use these plugins for the samples here, compile a new libnvinfer_plugin.so* and replace your system libnvinfer_plugin.so.x.x.”

Thanks for your help, Morganh. I appreciate your efforts on this issue.

Thanks,
Deep shah

I am flagging this because I see many developers on the forum facing this same issue, and I think most of them followed the same steps mentioned in GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models.

Thanks,
Deep

Thanks, deep.
Let me sync with the github owner to see how to improve it.

Sure Morganh…

Thanks,
Deep

GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models has been updated regarding the OSS build.

Thanks ChrisDing…

Regards,
Deep

I am still getting the same error!

  1. JetPack 4.2.2
  2. DeepStream 4.0
  3. Jetson TX2

When TensorRT is built, the libnvinfer_plugin.so* files are:

  1. libnvinfer_plugin.so
  2. libnvinfer_plugin.so.5.1.5
  3. libnvinfer_plugin.so.5.1.5.0

I copied libnvinfer_plugin.so.5.1.5 over /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6.
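
Roughly, the copy I did was (run from the TensorRT OSS build output directory; paths as on my TX2):

$ sudo cp libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6
$ sudo ldconfig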

Then I tried to run the app again and hit the same error; it is not resolved.

Please help me to solve this issue.

Hi vimalpachaiappan,

Please help to open a new topic with more details. Thanks.