Description
I am trying to run an SSD ONNX model exported from Azure Custom Vision, but I am getting a series of errors when building the engine… Is this a supported configuration? I am attempting this because, after the upgrade to DeepStream, our detection rate has dropped by an order of magnitude, and we have reached the point where we must deliver a deployment to a customer… We are having to run the YOLO-exported ONNX at a threshold of 0.001 and still not getting what we need out of it. (Strangely, it almost never over-detects, only under-detects.)
I am using the following configuration lines:
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so
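For reference, these two lines live in the [property] group of config_infer_custom_vision.txt. A minimal sketch of the surrounding group, with a placeholder model path and the batch/DLA/precision settings inferred from the engine filename in the log below (b6, dla0, fp32):

```ini
[property]
# Placeholder path -- substitute the real exported model location
onnx-file=/path/to/combined_ssd_iteration2.onnx
batch-size=6
network-mode=0          # 0 = FP32 (DLA falls back to FP16, per the build warnings)
enable-dla=1
use-dla-core=0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so
```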
These are the error logs:
(deepstream-test5-app:1): GLib-CRITICAL **: 02:03:55.967: g_strrstr: assertion ‘haystack != NULL’ failed
nvds_msgapi_connect : connect success
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
ERROR: Deserialize engine failed because file path: /app/resources/custom_configs/…/custom_models/combined_ssd_iteration2.onnx_b6_dla0_fp32.engine open error
0:00:01.922001820 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/app/resources/custom_configs/…/custom_models/combined_ssd_iteration2.onnx_b6_dla0_fp32.engine failed
0:00:01.922150491 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/app/resources/custom_configs/…/custom_models/combined_ssd_iteration2.onnx_b6_dla0_fp32.engine failed, try rebuild
0:00:01.922184123 1 0x3d341560 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: DLA does not support FP32 precision type, using FP16 mode.
WARNING: [TRT]: Default DLA is enabled but layer mean_value is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 1) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_conf/flatten is not supported on DLA, falling back to GPU.
WARNING: [TRT]: mbox_conf/concat: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: Default DLA is enabled but layer mbox_conf/concat is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox_conf is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox_conf/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_loc/transpose is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox0_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox1_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox2_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox3_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer mbox4_loc/reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: node_of_mbox_loc: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: DLA Layer node_of_mbox_loc does not support dynamic shapes in any dimension.
WARNING: [TRT]: Default DLA is enabled but layer node_of_mbox_loc is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split_0 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer prior_sizes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 172) [Shuffle] is not supported on DLA, falling back to GPU.
ERROR: [TRT]: 2: [standardEngineBuilder.cpp::buildEngine::2302] Error Code 2: Internal Error (Builder failed while analyzing shapes.)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:02.126178983 1 0x3d341560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:02.128934149 1 0x3d341560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:02.128999557 1 0x3d341560 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:02.129507972 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:02.129541636 1 0x3d341560 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /app/resources/custom_configs/config_infer_custom_vision.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
WARNING: [TRT]: Default DLA is enabled but layer multiply1_B is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 175) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer prior_centers is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 178) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer multiply2_B is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 181) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer unary is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer prior_sizes_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 185) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: concat2: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: Default DLA is enabled but layer concat2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_max_classes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_max_scores is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer non_max_suppression is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer non_max_suppression_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer slice_out_selected_box_indexes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer selected_box_reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_selected_boxes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split_detected_cxcy_wh is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer split_detected_cxcy_wh_3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer value_2f is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 198) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: half_wh: DLA cores do not support DIV ElementWise operation.
WARNING: [TRT]: Default DLA is enabled but layer half_wh is not supported on DLA, falling back to GPU.
WARNING: [TRT]: detected_boxes: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: Default DLA is enabled but layer detected_boxes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_detected_classes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer get_detected_scores is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer squeeze_detected_classes is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer squeeze_detected_scores is not supported on DLA, falling back to GPU.
[NvMultiObjectTracker] De-initialized
** ERROR: main:1455: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /app/resources/custom_configs/config_infer_custom_vision.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Disconnecting Azure…
Environment
How do I gather these concisely?
TensorRT Version:
GPU Type: Jetson Xavier NX
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Container
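Regarding gathering the versions above concisely: on a Jetson/DeepStream box, the installed package versions can be queried with dpkg. A sketch, assuming the usual L4T package names (adjust if yours differ):

```shell
# Query the versions the template asks for; each entry degrades to
# "not installed" if the package (or dpkg itself) is missing.
report=""
for pkg in tensorrt libcudnn8 deepstream-6.0 nvidia-l4t-core; do
  v=$(dpkg-query -W -f='${Version}' "$pkg" 2>/dev/null || echo "not installed")
  report="$report$pkg: $v
"
done
printf '%s' "$report"
# CUDA toolkit version, if nvcc is present:
{ [ -x /usr/local/cuda/bin/nvcc ] && /usr/local/cuda/bin/nvcc --version | tail -n 1; } || true
```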
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
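One way to isolate the TensorRT build failure from DeepStream itself is to run trtexec directly on the ONNX with the same DLA/precision settings the log shows. A sketch with a placeholder model path (the real path is elided above); the command is printed here rather than executed, so it can be copy-pasted on the Jetson:

```shell
# Placeholder path -- substitute the actual exported ONNX location.
MODEL=/path/to/combined_ssd_iteration2.onnx
# Mirror the nvinfer settings: DLA core 0, FP16 (DLA has no FP32), GPU fallback.
CMD="trtexec --onnx=$MODEL --useDLACore=0 --fp16 --allowGPUFallback --verbose"
echo "$CMD"
```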