Hi, I am trying to develop an emotion recognition project with FaceNet and my own TAO model on top of the deepstream_test_2 template, and I am getting this error:
ali@ali:~/Desktop/face-deployable-test-2$ python3.8 deepstream_test_2.py womanexpression.h264
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating EGLSink
Playing file womanexpression.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
0:00:00.205528347 55909 0x23bed90 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: 3: predictions/MatMul:kernel weights has count 3584 but 0 was expected
ERROR: [TRT]: 3: predictions/MatMul:kernel weights has count 3584 but 0 was expected
ERROR: [TRT]: 3: predictions/MatMul:kernel weights has count 3584 but 0 was expected
ERROR: [TRT]: UffParser: Parser error: predictions/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:358 Failed to build network, error in model parsing.
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:02.160553403 55909 0x23bed90 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
dstest2_pgie_config.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=facemodel.etlt
labelfile-path=labels.txt
force-implicit-batch-dim=1
tlt-model-key=nvidia_tlt
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
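As a side note on the pgie config: the long net-scale-factor value above is just 1/255 stored as a float, and nvinfer normalizes each pixel as y = net-scale-factor * (x - mean); with no offsets given, the mean is 0. A quick sketch to check that (the helper name is mine, not a DeepStream API):

```python
# Sketch: the net-scale-factor in dstest2_pgie_config.txt is 1/255,
# i.e. it maps 8-bit pixel values [0, 255] into roughly [0.0, 1.0].
net_scale_factor = 0.0039215697906911373  # value copied from the config above

def normalize(pixel, scale=net_scale_factor, mean=0.0):
    # nvinfer's preprocessing formula: y = scale * (x - mean)
    return scale * (pixel - mean)

print(round(net_scale_factor * 255, 6))  # ≈ 1.0
print(normalize(0), normalize(255))
```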
dstest2_sgie1_config.txt:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
batch-size=30
labelfile-path=emotionlabels.txt
tlt-encoded-model=emotionmodel.etlt
tlt-model-key=nvidia_tlt
# infer-dims=c;h;w, where c = number of channels, h = height of the model input, w = width of the model input
infer-dims=3;48;48
uff-input-blob-name=input_1
uff-input-order=0
output-blob-names=predictions/Softmax
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
process-mode=2
num-detected-classes=7
interval=0
# network-type=1 defines that the model is a classifier
network-type=1
gie-unique-id=1
classifier-threshold=0.2
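To sanity-check what this sgie config asks nvinfer to do: with net-scale-factor=1.0 it subtracts the per-channel offsets, y = 1.0 * (x - offset_c), and since infer-dims declares 3 channels, a single-channel FER2013 crop would have to be replicated across channels. A hedged numpy sketch of that transform (the helper and the synthetic image are illustrative only, not DeepStream API):

```python
import numpy as np

# Sketch of the per-channel preprocessing implied by the sgie config:
#   net-scale-factor=1.0, offsets=103.939;116.779;123.68,
#   model-color-format=1 (BGR), infer-dims=3;48;48.
OFFSETS = np.array([103.939, 116.779, 123.68], dtype=np.float32)  # B, G, R means

def preprocess_gray_crop(gray_48x48, scale=1.0, offsets=OFFSETS):
    """Replicate a 48x48 grayscale crop to 3 channels (CHW) and
    apply y = scale * (x - offset_c), as nvinfer would."""
    chw = np.repeat(gray_48x48[np.newaxis, :, :], 3, axis=0).astype(np.float32)
    return scale * (chw - offsets[:, None, None])

# A synthetic mid-gray image stands in for a FER2013 face crop.
crop = np.full((48, 48), 128, dtype=np.uint8)
out = preprocess_gray_crop(crop)
print(out.shape)  # (3, 48, 48)
```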
My dataset is FER2013, consisting of 48×48 px grayscale images.
GPU: GTX 1060
deepstream-app version 6.1.1
DeepStreamSDK 6.1.1
Can you please help?
Thank you.