Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson Nano 4 GB
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) questions/bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Hello.
I’m going to use the GazeNet model in a Python pipeline. I built the pipeline structure using this example:
- Download the model from NGC.
- Create an engine using tao-converter:
tao-converter -k nvidia_tlt -p input_left_images:0,1x1x224x224,1x1x224x224,1x1x224x224 -p input_right_images:0,1x1x1x224x224,1x1x1x224x224,1x1x1x224x224 -p input_face_images:0,1x1x1x224x224,1x1x1x224x224,1x1x1x224x224 -p input_facegrid:0,1x1x1x625x1,1x1x1x625x1,1x1x1x625x1 -b 4 -m 4 -t fp16 gazenet_facegrid.etlt -e gazenet_facegrid_b4_gpu0_fp16.engine
- Use it in the pipeline (a fuller sketch of this step follows the config below):
gaze_identifier.set_property('customlib-name', "~/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/gazeinfer_impl/libnvds_gazeinfer.so")
gaze_identifier.set_property('customlib-props', "config-file:./sample_gazenet_model_config.txt")
Contents of sample_gazenet_model_config.txt:
enginePath=~/deepstream_tao_apps/models/gazenet/gazenet_facegrid_b4_gpu0_fp16.engine
etltPath=~/deepstream_tao_apps/models/gazenet/gazenet_facegrid.etlt
etltKey=nvidia_tlt
## networkMode can be int8, fp16 or fp32
networkMode=fp16
batchSize=4
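For reference, this is roughly how the gaze element is created and configured on my side. It is only a minimal sketch: the element is the nvdsvideotemplate plugin (as in the C deepstream-gaze-app), and everything except the two properties already quoted above (the names of the other elements and how they are linked) is a placeholder.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# The gaze custom library is loaded through the nvdsvideotemplate plugin,
# the same way the C deepstream-gaze-app does it.
gaze_identifier = Gst.ElementFactory.make("nvdsvideotemplate", "gaze-identifier")
if gaze_identifier is None:
    raise RuntimeError("Unable to create nvdsvideotemplate element")

# Same two properties as quoted above.
gaze_identifier.set_property('customlib-name', "~/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/gazeinfer_impl/libnvds_gazeinfer.so")
gaze_identifier.set_property('customlib-props', "config-file:./sample_gazenet_model_config.txt")

# pipeline, the face-detector PGIE and the facial-landmark SGIE are created
# elsewhere; the gaze element is added after the landmark SGIE:
# pipeline.add(gaze_identifier)
# facial_landmark_sgie.link(gaze_identifier)
# gaze_identifier.link(next_element)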
I got the following output when running the pipeline:
Library Opened Successfully
Setting custom lib properties # 1
Adding Prop: config-file : ./sample_gazenet_model_config.txt
Inside Custom Lib : Setting Prop Key=config-file Value=./sample_gazenet_model_config.txt
0:00:04.858322868 22736 0x8892700 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-inference> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/home/seth/ds_meshbi/models/model_fpenet.etlt_b4_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 4
0 INPUT kFLOAT input_face_images 1x80x80 min: 1x1x80x80 opt: 4x1x80x80 Max: 4x1x80x80
1 OUTPUT kFLOAT conv_keypoints_m80 80x80x80 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT softargmax 80x2 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT softargmax:1 80 min: 0 opt: 0 Max: 0
ERROR: [TRT]: 3: Cannot find binding of given name: softargmax,softargmax:1,conv_keypoints_m80
0:00:04.858535320 22736 0x8892700 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-inference> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 2]: Could not find output layer 'softargmax,softargmax:1,conv_keypoints_m80' in engine
0:00:04.858573550 22736 0x8892700 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-inference> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 2]: Use deserialized engine model: /home/seth/ds_meshbi/models/model_fpenet.etlt_b4_gpu0_fp16.engine
0:00:04.910418647 22736 0x8892700 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-inference> [UID 2]: Load new model:faciallandmark_sgie_config.txt sucessfully
Deserializing engine from: /home/seth/deepstream_tao_apps/models/gazenet/gazenet_facegrid_b4_gpu0_fp16.engine
The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. TensorRT maintains only a single logger pointer at any given time, so the existing value, which can be retrieved with getLogger(), will be used instead. In order to use a new logger, first destroy all existing builder, runner or refitter objects.
ERROR: [TRT]: 3: [executionContext.cpp::setBindingDimensions::969] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::setBindingDimensions::969, condition: profileMaxDims.d[i] >= dimensions.d[i]. Supplied binding dimension [8,1,224,224] for bindings[0] exceed min ~ max range at index 0, maximum dimension in profile is 1, minimum dimension in profile is 1, but supplied dimension is 8.
)
ERROR: [TRT]: 3: [executionContext.cpp::setBindingDimensions::969] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::setBindingDimensions::969, condition: profileMaxDims.d[i] >= dimensions.d[i]. Supplied binding dimension [8,1,224,224] for bindings[1] exceed min ~ max range at index 0, maximum dimension in profile is 1, minimum dimension in profile is 1, but supplied dimension is 8.
)
ERROR: [TRT]: 3: [executionContext.cpp::setBindingDimensions::969] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::setBindingDimensions::969, condition: profileMaxDims.d[i] >= dimensions.d[i]. Supplied binding dimension [8,1,625,1] for bindings[2] exceed min ~ max range at index 0, maximum dimension in profile is 1, minimum dimension in profile is 1, but supplied dimension is 8.
)
ERROR: [TRT]: 3: [executionContext.cpp::setBindingDimensions::969] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::setBindingDimensions::969, condition: profileMaxDims.d[i] >= dimensions.d[i]. Supplied binding dimension [8,1,224,224] for bindings[3] exceed min ~ max range at index 0, maximum dimension in profile is 1, minimum dimension in profile is 1, but supplied dimension is 8.
)
Why can't it find the binding names?
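To help narrow this down, the serialized engine can be dumped directly (binding names plus the min/opt/max shapes of its optimization profile, which the last errors complain about). A minimal sketch, using only the standard TensorRT 8.0 Python API and the engine path from the log above:

import tensorrt as trt

# Engine path taken from the log above.
ENGINE_PATH = "/home/seth/deepstream_tao_apps/models/gazenet/gazenet_facegrid_b4_gpu0_fp16.engine"

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
runtime = trt.Runtime(TRT_LOGGER)

with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

print("optimization profiles:", engine.num_optimization_profiles)
for i in range(engine.num_bindings):
    kind = "INPUT " if engine.binding_is_input(i) else "OUTPUT"
    print(i, kind, engine.get_binding_name(i), tuple(engine.get_binding_shape(i)))
    if engine.binding_is_input(i):
        # (min, opt, max) shapes of optimization profile 0 for this input binding
        print("   profile 0 min/opt/max:", engine.get_profile_shape(0, i))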

