Getting UFFParser Error

Hi, I am trying to develop an emotion recognition project with FaceNet and my own TAO model on top of the deepstream_test_2 template. I am getting this error:

ali@ali:~/Desktop/face-deployable-test-2$ python3 deepstream_test_2.py womanexpression.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file womanexpression.h264
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.134727294 268040 0x335db90 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:00:39.310399942 268040 0x335db90 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /home/ali/Desktop/face-deployable-test-2/emotionmodel.etlt_b30_gpu0_fp32.engine successfully
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 7x1x1

0:00:39.326044798 268040 0x335db90 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:39.332767850 268040 0x335db90 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:659 INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in ‘NvDsInferCreateNetwork’ implementation
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: 3: [network.cpp::addInput::1615] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/network.cpp::addInput::1615, condition: isValidDims(dims, hasImplicitBatchDimension())
)
ERROR: [TRT]: UFFParser: Failed to parseInput for node input_1
ERROR: [TRT]: UffParser: Parser error: input_1: Failed to parse node - Invalid Tensor found at node input_1
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:358 Failed to build network, error in model parsing.
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:40.885030889 268040 0x335db90 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
Segmentation fault (core dumped)


Primary config file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-encoded-model=facemodel.etlt
labelfile-path=labels.txt
force-implicit-batch-dim=1
tlt-model-key=nvidia_tlt
input-dims=3;240;384;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
#network-type=0  # I added this recently
num-detected-classes=1
interval=0
gie-unique-id=1
# output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
# output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
output-blob-names=predictions/Softmax
cluster-mode=2

uff-file=sample_ssd_relu6.uff
uff-input-blob-name=Input

# parse-bbox-func-name=parse_bbox_resnet

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3
#eps=0.7

#Use the config params below for NMS clustering mode
[class-attrs-all]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.2

## Per class configurations
[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

#[class-attrs-1]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5


Secondary config file:
[property]
gpu-id=0
net-scale-factor=1
offsets=123.67;116.28;103.53
#offsets=124;117;104
model-color-format=1
batch-size=30
labelfile-path=emotionlabels.txt
tlt-encoded-model=emotionmodel.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;224;224 # where c = number of channels, h = height of the model input, w = width of model input

#parse-bbox-func-name=parse_bbox_resnet
uff-input-blob-name=input_1
#uff-input-order=0
output-blob-names=predictions/Softmax

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
process-mode=2

num-detected-classes=7

interval=0
network-type=1 # defines that the model is a classifier.
gie-unique-id=1
classifier-threshold=0.2


My dataset is fer13, which consists of 48×48 px grayscale images.
GPU: GTX 1060
deepstream-app version 6.1.1
DeepStreamSDK 6.1.1

Please help, thank you.

There are two model files in your pgie config: an etlt file set by tlt-encoded-model and a UFF model set by uff-file. Which one do you need for your case?

Where and how did you get your primary model? What is the model? In your primary model configuration file, you specified two models in the same configuration file, which gst-nvinfer does not support. Please remove the one you do not need.
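
For reference, here is a minimal sketch of what the [property] section of the primary config could look like if the etlt model is the one you keep. This is only an illustration built from the values already in your file; it assumes facemodel.etlt is the intended primary detector, that FP32 is acceptable, and that the output blob names match what that model actually exports (verify them, since the commented-out alternatives in your file suggest they may differ):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-encoded-model=facemodel.etlt
tlt-model-key=nvidia_tlt
labelfile-path=labels.txt
force-implicit-batch-dim=1
# The log warns that 'input-dims' is deprecated; 'infer-dims' takes c;h;w
infer-dims=3;240;384
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
# FP32 here also avoids the "INT8 calibration file not specified" warning from the log
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
# Verify this against the layers your primary model actually exports
output-blob-names=predictions/Softmax
cluster-mode=2
# uff-file=sample_ssd_relu6.uff and the second uff-input-blob-name were removed,
# since gst-nvinfer supports only one model per configuration file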
