DeepStream 5, UffParser: Could not read buffer

Hardware Platform: Nvidia Jetson Nano 2GB 
DeepStream Version: 5.0
JetPack Version: 4.5.1
Issue Type: questions

Hello

A few weeks ago I started working with the Jetson Nano 2GB and DeepStream 5. I have successfully run several demos, and now I am trying to run nvidia:tlt_emotionnet (https://ngc.nvidia.com/catalog/models/nvidia:tlt_emotionnet) with DeepStream 5. At this point I need some support, because I can't find a solution to my problem.

After downloading nvidia:tlt_emotionnet I have two files, labels.txt and model.etlt. I tried to find a step-by-step guide on how to use these files with DeepStream, but without success.
A few days ago I successfully ran deepstream-occupancy-analytics (https://github.com/NVIDIA-AI-IOT/deepstream-occupancy-analytics, a people-counting sample built on the deepstream-test5 application with the DeepStream SDK, TLT and pre-trained models). Based on this demo, I am trying to use the emotion model and labels with DeepStream.

First of all, I created a copy of the primary inference config and updated the paths to the model and label files:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=nvidia_tlt
tlt-encoded-model=emotions/model.etlt
labelfile-path=emotions/labels.txt
#model-engine-file=../resnet34_peoplenet_pruned.etlt_b1_gpu0_int8.engine
#int8-calib-file=resnet34_peoplenet_int8_update.txt
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
cluster-mode=1
interval=2
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
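# NOTE: input-dims, uff-input-blob-name and output-blob-names above are still the
# PeopleNet (DetectNet_v2) detector values from the original config; only the model
# and label paths were changed for EmotionNet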

[class-attrs-all]
pre-cluster-threshold=0.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1
#detected-min-w=60
#detected-min-h=100
#detected-max-w=400
#detected-max-h=300

[class-attrs-1]
pre-cluster-threshold=1.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1
[class-attrs-2]
pre-cluster-threshold=1.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1

After that, I created a copy of the application config file and updated the [primary-gie] section (set the new labelfile-path and config-file):

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=640
height=480
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=4

[source0]
enable=1
type=1
#uri=rtsp://admin:HikCam01@192.168.11.130:554/ISAPI/Streaming/Channels/101
camera-width=640
camera-height=480
latency=500
camera-fps-n=15
camera-fps-d=1
camera-v4l2-dev-node=0
cudadec-memtype=0
num-extra-surfaces=20
drop-frame-interval=2

[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=1
batch-size=8
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=8
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=2
gie-unique-id=1
#model-engine-file=peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_int8.engine
labelfile-path=emotions/labels.txt
config-file=config_infer_primary_emotions.txt

[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gpu-id=0
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1

[nvds-analytics]
enable=1
config-file=config_nvdsanalytics.txt

[tests]
file-loop=0

Then I tried to run DeepStream with the new config files. After a few seconds I got a “build engine file failed” error:

./deepstream-test5-analytics -c config/test5_config_file_src_infer_tlt_my_emotions.txt -p 0

Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.

Using winsys: x11 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:01.038321406 10059   0x55816438f0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:06.210625050 10059   0x55816438f0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

I couldn't find a solution to this problem, but I did find the deepstream_tlt_apps demo (https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps, sample apps showing how to deploy models trained with TAO on DeepStream), and that demo uses the same two kinds of files, labels.txt and model.etlt. So I followed the demo step by step (roughly the steps sketched below) and ran it; the *.engine file was generated successfully and the demo project works.
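
For reference, this is roughly how I set that demo up; the script names and the run command are from memory and may differ between releases of the repo, so its README is the authoritative source:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
# for DeepStream 5.0 the matching release branch/tag may be needed
# JetPack 4.5.1 ships CUDA 10.2; the Makefiles expect this variable
export CUDA_VER=10.2
# fetch the pre-trained .etlt models the samples use (see the repo README)
./download_models.sh
# build the sample apps and the custom output parsers
make
# then run e.g. the peopleSegNet sample with its own pgie config, as described in the README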

So I thought I could do the same with the nvidia:tlt_emotionnet model and labels. I copied the model and label files into this demo project, changed the config file, and ran the demo with my config file:

[property]
gpu-id=0
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=0
tlt-model-key=nvidia_tlt
tlt-encoded-model=../../models/emotions/model.etlt
#model-engine-file=../../models/emotions/peopleSegNet_resnet50.etlt_b1_gpu0_fp16.engine
network-type=3 ## 3 is for instance segmentation network
labelfile-path=./labels.txt
#int8-calib-file=../../models/peopleSegNet/cal.bin
infer-dims=3;576;960
num-detected-classes=2
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
interval=0
gie-unique-id=1
#no cluster
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
## MRCNN supports only cluster-mode=4; Clustering is done by the model itself
cluster-mode=4
output-instance-mask=1
output-blob-names=generate_detections;mask_head/mask_fcn_logits/BiasAdd
parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_infercustomparser.so
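# NOTE: network-type, the output blob names and the MRCNN parser above come from the
# peopleSegNet (MaskRCNN) sample config; they were left unchanged and do not match EmotionNet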

[class-attrs-all]
pre-cluster-threshold=0.8

After a few seconds, I got the same error, “build engine file failed”.

What did I do wrong?

Please check whether https://github.com/NVIDIA-AI-IOT/gesture_recognition_tlt_deepstream can help you deploy the TLT EmotionNet model in a custom DeepStream application. That project demonstrates how to repurpose a pre-trained detection model for hand detection with Transfer Learning Toolkit 3.0, combine it with the purpose-built gesture recognition model, and deploy the result on NVIDIA Jetson using the DeepStream SDK.
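
A quick way to narrow the problem down is to try building the TensorRT engine outside of DeepStream with tlt-converter on the Jetson: if the .etlt cannot be converted there either, the issue is the tlt-model-key or an incomplete download rather than the rest of the nvinfer config. A minimal sketch only, assuming the deployable EmotionNet uses the key nvidia_tlt; the output node name and input dimensions below are placeholders that you need to take from the EmotionNet model card:

# <output_node> and <C,H,W> are placeholders, not the real EmotionNet values
./tlt-converter -k nvidia_tlt \
                -o <output_node> \
                -d <C,H,W> \
                -t fp16 \
                -m 1 \
                -e emotions/model_b1_fp16.engine \
                emotions/model.etlt

If the conversion succeeds, point model-engine-file in the nvinfer config at the generated .engine file; nvinfer will then deserialize that engine at startup instead of trying to parse the encoded .etlt.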

@pcjaked
I am trying to run EmotionNet using the DeepStream Python apps.
I have downloaded the deployable .etlt file and label file from NGC and am trying to run them with the deepstream-test3 Python sample.
Following is the config: dstest3_pgie_config.txt (3.3 KB)
Please help