Build engine file failed in deepstream5.0

After installing all requirements on my Jetson TX2, I tried to run the demo from GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (Sample apps to demonstrate how to deploy models trained with TAO on DeepStream), but it fails with the errors below.

• Hardware Platform: Jetson TX2
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): JetPack 4.4
• TensorRT Version: 7.1.0
• Config File: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/pgie_detectnet_v2_tlt_config.txt
• Command used to run the demo:
./deepstream-custom -c pgie_detectnet_v2_tlt_config.txt -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.jpg -b 1
• Error information:
Now playing: pgie_detectnet_v2_tlt_config.txt
Opening in BLOCKING MODE
0:00:00.220752089 12118 0x55af01d640 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: UFF buffer empty
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.714657335 12118 0x55af01d640 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault (core dumped)


Hi,

We are checking this issue.
Will update more information with you later.

Thanks.

Hi,

We can run pgie_detectnet_v2_tlt_config.txt with JetPack 4.4 + DeepStream 5.0 without issue:

$ ./deepstream-custom -c pgie_detectnet_v2_tlt_config.txt -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.jpg -b 1
Now playing: pgie_detectnet_v2_tlt_config.txt
Opening in BLOCKING MODE
0:00:00.244361664 27051   0x55b3c47630 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on GPU:
INFO: [TRT]: conv1/convolution + activation_1/Relu, block_1a_conv_1/convolution + block_1a_relu_1/Relu, block_1a_conv_2/convolution, block_1a_conv_shortcut/convolution + add_1/add + block_1a_relu/Relu, block_1b_conv_1/convolution + block_1b_relu_1/Relu, block_1b_conv_2/convolution, block_1b_conv_shortcut/convolution + add_2/add + block_1b_relu/Relu, block_2a_conv_1/convolution + block_2a_relu_1/Relu, block_2a_conv_2/convolution, block_2a_conv_shortcut/convolution + add_3/add + block_2a_relu/Relu, block_2b_conv_1/convolution + block_2b_relu_1/Relu, block_2b_conv_2/convolution, block_2b_conv_shortcut/convolution + add_4/add + block_2b_relu/Relu, block_3a_conv_1/convolution + block_3a_relu_1/Relu, block_3a_conv_2/convolution, block_3a_conv_shortcut/convolution + add_5/add + block_3a_relu/Relu, block_3b_conv_1/convolution + block_3b_relu_1/Relu, block_3b_conv_2/convolution, block_3b_conv_shortcut/convolution + add_6/add + block_3b_relu/Relu, block_4a_conv_1/convolution + block_4a_relu_1/Relu, block_4a_conv_2/convolution, block_4a_conv_shortcut/convolution + add_7/add + block_4a_relu/Relu, block_4b_conv_1/convolution + block_4b_relu_1/Relu, block_4b_conv_2/convolution, block_4b_conv_shortcut/convolution + add_8/add + block_4b_relu/Relu, output_cov/convolution, output_cov/Sigmoid, output_bbox/convolution,
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:24.776864360 27051   0x55b3c47630 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /home/nvidia/deepstream_tlt_apps/models/detectnet_v2/detectnetv2_resnet18.etlt_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:00:24.787668329 27051   0x55b3c47630 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:pgie_detectnet_v2_tlt_config.txt sucessfully
Running...
NvMMLiteBlockCreate : Block : BlockType = 256
[JPEG Decode] BeginSequence Display WidthxHeight 1280x720
End of stream
Returned, stopping playback
[JPEG Decode] NvMMLiteJPEGDecBlockPrivateClose done
[JPEG Decode] NvMMLiteJPEGDecBlockClose done
Deleting pipeline

Please make sure you have compiled the TensorRT OSS plugin and replaced the library located at /usr/lib/aarch64-linux-gnu/.
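For reference, building and swapping in the plugin on Jetson typically looks like the sketch below. The branch name, GPU_ARCHS value (62 is the TX2 SM version), and the libnvinfer_plugin file version are assumptions for JetPack 4.4 / TensorRT 7.1; verify them against the TensorRT OSS README for your release.

```shell
# Sketch: build the TensorRT OSS nvinfer_plugin natively on Jetson and
# replace the stock library. Version numbers below are assumptions for
# JetPack 4.4 / TensorRT 7.1 -- check the TensorRT OSS README.
git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=62 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu
make nvinfer_plugin -j"$(nproc)"
# Back up the stock plugin, then install the freshly built one.
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 \
        /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0.bak
sudo cp libnvinfer_plugin.so.7.1.0 /usr/lib/aarch64-linux-gnu/
```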

Thanks.

Hi, I met the same issue here. Did you solve the error?

I also came across the same error running on the DeepStream 5.0 Docker image (nvcr.io/nvidia/deepstream:5.0-dp-20.04-samples):

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models# deepstream-app -c deepstream_app_source1_peoplenet.txt

Never mind, the pretrained model was missing from the models directory. It's working now.

I solved this problem by re-downloading the models.
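That matches the error in the log: "UffParser: UFF buffer empty" is what TensorRT reports when the .etlt file named by tlt-encoded-model in the config is missing or zero bytes. A quick sanity check before re-downloading (the model path below is inferred from the engine-file path in the log above and may differ in your setup):

```shell
# Check that the .etlt model referenced by the config exists and is
# non-empty; an empty/missing file produces "UFF buffer empty".
# Path is an assumption based on this thread's log -- adjust to your config.
MODEL="models/detectnet_v2/detectnetv2_resnet18.etlt"
if [ ! -s "$MODEL" ]; then
    echo "missing or empty: $MODEL -- re-download the models"
else
    echo "ok: $MODEL is $(wc -c < "$MODEL") bytes"
fi
```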