I followed the post How to do inference with fpenet_fp32.trt to export FPENet from TLT, and I can run inference successfully with a standalone TensorRT Python script.
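For reference, this is roughly the standalone script that works. It is a simplified sketch: the engine path and the 1x80x80 single-channel input shape are from my own export, so treat them as assumptions.

import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built from the TLT export (path from my setup)
with open("./models/fpenet_b1_fp32.trt", "rb") as f:
    engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Dummy 80x80 single-channel face crop (shape is an assumption from my export)
face = np.random.rand(1, 80, 80).astype(np.float32)
np.copyto(host_bufs[0], face.ravel())

cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute(batch_size=1, bindings=bindings)  # implicit-batch engine
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh(h, d)
print([h.shape for h in host_bufs[1:]])  # landmark outputs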
Now I would like to configure it as a secondary inference engine (SGIE) in my DeepStream pipeline; the primary is FaceDetect, which already works. But DeepStream reports an error while parsing the UFF model.
Here is the error log:
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/face_pose/./models/fpenet_b1_fp32.trt_b1_gpu0_fp16.engine open error
0:00:02.828806777 1131 0x5641f84d2430 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 3]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/face_pose/./models/fpenet_b1_fp32.trt_b1_gpu0_fp16.engine failed
0:00:02.828824307 1131 0x5641f84d2430 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 3]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/face_pose/./models/fpenet_b1_fp32.trt_b1_gpu0_fp16.engine failed, try rebuild
0:00:02.828833261 1131 0x5641f84d2430 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 3]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
This is my sgie1_config.txt:
[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
tlt-model-key=nvidia_tlt
tlt-encoded-model=./models/fpenet_b1_fp32.trt
model-engine-file=./models/fpenet_b1_fp32.trt_b1_gpu0_fp16.engine
input-dims=2;80;80;0
uff-input-blob-name=input_face_images
output-blob-names=strided_slice/softargmax,strided_slice_1/softargmax
batch-size=1
model-color-format=0
network-mode=2
interval=0
network-type=100
workspace-size=3000
gie-unique-id=3
process-mode=2 # secondary inference engine
operate-on-gie-id=1 # operate on output of the primary GIE (gie-unique-id=1)
operate-on-class-ids=2 # operate on detections with class id 2
I guess input-dims, uff-input-blob-name, and output-blob-names are the key parameters here, so how should I configure them?
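In case it helps, this is the small TensorRT Python snippet I used to list the engine bindings when guessing those values (only standard TensorRT API calls; the engine path is from my setup):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("./models/fpenet_b1_fp32.trt", "rb") as f:
    engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(f.read())

# Print every binding: index, name, direction, per-sample dims, dtype
for i in range(engine.num_bindings):
    print(i,
          engine.get_binding_name(i),
          "input" if engine.binding_is_input(i) else "output",
          engine.get_binding_shape(i),
          engine.get_binding_dtype(i))

I am not sure whether the names and dims this prints are what nvinfer expects in input-dims, uff-input-blob-name, and output-blob-names, so any guidance on the mapping would be appreciated.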