Hi,
I’m trying to run the deepstream-test1-app with two models: TrafficCamNet as the primary detector and a custom .uff model to classify the detected objects. Previously, I ran TrafficCamNet alone without any issue. I then modified deepstream-test1-app to add a second nvinfer element to the pipeline for the classifier, and now I’m getting the following error:
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:187 Uff input blob name is empty
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:12.781816059 25051 0x55a6c69aa330 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
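For reference, the pipeline I’m building is roughly equivalent to the following gst-launch-1.0 line (a sketch only; file names and config paths are placeholders, and the real app creates and links the elements in C):

```shell
gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=dstest1_pgie_config.txt ! \
  nvinfer config-file-path=dstest1_sgie_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

The second nvinfer is the classifier whose config is shown below; the error appears while it tries to build the engine from the .uff file.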
My setup is:
- Ubuntu 18.04
- CUDA 10.2
- TensorRT 7.2.1
- DeepStream 5
The classifier model config file is:
[property]
gpu-id=0
net-scale-factor=1
network-type=1
uff-file=/classification_resnet18_64.uff
labelfile-path=/labels.txt
uff-input-blob-name=data
uff-input-dims=3;64;64;0
output-blob-names=dense/Softmax
#uff-input-order=2
force-implicit-batch-dim=1
batch-size=4
network-mode=0
input-object-min-width=64
input-object-min-height=64
process-mode=2
model-color-format=1
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
classifier-async-mode=1
classifier-threshold=0.8
#scaling-filter=0
#scaling-compute-hw=0
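In the modified app, this config is attached to the second nvinfer element roughly like this (a sketch; the sgie variable and config path are mine, not from the original test1 source):

```c
/* Create the secondary classifier engine (element name is illustrative). */
GstElement *sgie = gst_element_factory_make ("nvinfer",
    "secondary-nvinference-engine");

/* Point it at the classifier config shown above; the path is a placeholder. */
g_object_set (G_OBJECT (sgie),
    "config-file-path", "dstest1_sgie_config.txt", NULL);

/* Link it right after the primary GIE so it runs on detected objects
 * (matching process-mode=2 and operate-on-gie-id=1 in the config). */
gst_bin_add (GST_BIN (pipeline), sgie);
gst_element_link_many (pgie, sgie, nvvidconv, NULL);
```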
Thanks,