NVDSINFER_CUSTOM_LIB_FAILED

Jetson Xavier:

JetPack 4.6
CUDA 10.2
cuDNN 8.2.1
TensorRT 8.0.1
Ubuntu 18.04
Python 3.6
YOLOv4

Training container: nvidia/tao/tao-toolkit-tf

I am using [deepstream_tao_apps](https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps) (sample apps to demonstrate how to deploy models trained with TAO on DeepStream).

I can't get the default YOLOv4 model to run. When I run:

./apps/tao_detection/ds-tao-detection /home/vaaan/Downloads/deepstream_tao_apps/configs/yolov4_tao/pgie_yolov4_tao_config.txt -i /home/vaaan/Desktop/test2.h264

I get this error:

ERROR: Could not open lib: /home/vaaan/deepstream_tao_apps/configs/yolov4_tao/../../post_processor/libnvds_infercustomparser_tao.so, error string: /home/vaaan/deepstream_tao_apps/configs/yolov4_tao/../../post_processor/libnvds_infercustomparser_tao.so: cannot open shared object file: No such file or directory
0:00:00.420026917 12839 0x55a07cab30 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1248> [UID = 1]: Could not open custom lib: (null)
0:00:00.420127589 12839 0x55a07cab30 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:00.420159237 12839 0x55a07cab30 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /home/vaaan/deepstream_tao_apps/configs/yolov4_tao/configfile_orginal_1.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:ds-custom-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: /home/vaaan/deepstream_tao_apps/configs/yolov4_tao/configfile_orginal_1.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Returned, stopping playback
Deleting pipeline

Please make sure you have run the "Build Sample Application" step from the repository README; that step builds the libnvds_infercustomparser_tao.so parser library under post_processor/.
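A minimal sketch of that step for your setup (assuming CUDA 10.2 as listed above; the exact variable is documented in the repo README):

```
# from the deepstream_tao_apps repository root
export CUDA_VER=10.2   # set to your installed CUDA version
make                   # builds the sample apps and post_processor/libnvds_infercustomparser_tao.so
```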

The libnvds_infercustomparser_tao.so file had indeed been deleted, so I reinstalled deepstream_tao_apps.
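A quick sanity check against the library path from the error above shows whether it is present:

```
ls -l /home/vaaan/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
# before rebuilding: ls: cannot access '...': No such file or directory
```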
Now I am getting this error:

Now playing: /home/vaaan/deepstream_tao_apps/configs/yolov4_tao/pgie_yolov4_tao_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /home/vaaan/deepstream_tao_apps/configs/yolov4_tao/../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_int8.engine open error
0:00:01.911917204 21440 0x55845f4b30 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/vaaan/deepstream_tao_apps/configs/yolov4_tao/../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_int8.engine failed
0:00:01.912057556 21440 0x55845f4b30 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/vaaan/deepstream_tao_apps/configs/yolov4_tao/../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:01.912122932 21440 0x55845f4b30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
parseModel: Failed to parse ONNX model
ERROR: Failed to build network, error in model parsing.
Segmentation fault (core dumped)

My config file:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/home/vaaan/deepstream_tao_apps/configs/yolov4_tao/yolov4_labels.txt
model-engine-file=../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_int8.engine
int8-calib-file=/home/vaaan/deepstream_tao_apps/models/yolov4/yolov4nv.trt8.cal.bin
tlt-encoded-model=/home/vaaan/deepstream_tao_apps/models/yolov4/yolov4_resnet18.etlt
tlt-model-key=aGJ**********
infer-dims=3;544;960
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

I also tried running it with the different network modes: INT8, FP16, and FP32 (see the sketch below).
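Switching modes only changes the network-mode line in the [property] section (values per the comment in the config above), e.g. for FP16:

```
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```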

Are you using the etlt model from NVIDIA? If yes, the tlt-model-key should be nvidia_tlt instead of your own key.

I am using my own key; I just masked it here in the post.

As mentioned above, if the etlt model was downloaded from the NVIDIA website, please check its model card or config file; they mention the key. Please try nvidia_tlt instead.
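A minimal sketch of the change, assuming the etlt is the pretrained one from NGC:

```
# in pgie_yolov4_tao_config.txt, for the NGC pretrained etlt:
tlt-model-key=nvidia_tlt
```

If an engine had already been generated with the old key, also remove the yolov4_resnet18.etlt_b1_gpu0_int8.engine file referenced by model-engine-file so nvinfer rebuilds it.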

Thanks
