Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.0.2-b231 (package nvidia-jetpack)
• TensorRT Version: 8.4.1-1+cuda11.4
• NVIDIA GPU Driver Version (valid for GPU only): 11.4
I built the TensorRT OSS plugins (branch 23.08, downloaded from here) and copied the resulting libnvinfer_plugin.so.8.6.1 to /usr/lib/aarch64-linux-gnu/. The custom parser library libnvds_infercustomparser_tlt.so was also generated.
These are the two configuration files used in the application:
config_infer_primary.txt (4.0 KB)
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt (5.5 KB)
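For reference, the ONNX-related keys in config_infer_primary.txt have roughly this shape (a sketch with illustrative paths, not the full attached file):

```
[property]
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
custom-lib-path=./libnvds_infercustomparser_tlt.so
```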
When I run the command ./deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt, I get the following errors:
atic@atic-desktop:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet$ ./deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
** WARN: <create_pipeline:1168>: Num of Tiles less than number of sources, readjusting to 4 rows, 1 columns
Unknown or legacy key specified 'is-classifier' for group [property]
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 52]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 52]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
[NvMultiObjectTracker] Initialized
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet/./model.onnx_b1_gpu0_fp16.engine open error
0:00:03.834295224 25921 0xaaaad6631890 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet/./model.onnx_b1_gpu0_fp16.engine failed
0:00:03.889120693 25921 0xaaaad6631890 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet/./model.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:03.889220794 25921 0xaaaad6631890 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: ModelImporter.cpp:778: ERROR: ModelImporter.cpp:566 In function importModel:
[4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:06.197185756 25921 0xaaaad6631890 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:06.250287305 25921 0xaaaad6631890 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:06.250414767 25921 0xaaaad6631890 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:06.251022924 25921 0xaaaad6631890 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:06.251072750 25921 0xaaaad6631890 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
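Since the ONNX parser insists on an explicit batch dimension, one workaround I considered is pre-building the engine with trtexec (which creates an explicit-batch network by default when given an ONNX file) and pointing model-engine-file at the result. The path below assumes the usual trtexec location on Jetson:

```shell
# Build an FP16 engine outside DeepStream; trtexec parses ONNX models
# with an explicit batch dimension by default.
/usr/src/tensorrt/bin/trtexec \
  --onnx=model.onnx \
  --fp16 \
  --saveEngine=model.onnx_b1_gpu0_fp16.engine
```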
I then tried to create the engine with the TAO Deploy library, running:
efficientdet_tf1 gen_trt_engine -m model.onnx -r ./export --data_type fp16 --batch_size 1 --engine_file ./export/model.onnx_b1_fp16.engine -k nvidia_tao
but that failed as well. The errors are:
atic@atic-desktop:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet$ efficientdet_tf1 gen_trt_engine -m model.onnx -r ./export --data_type fp16 --batch_size 1 --engine_file ./export/model.onnx_b1_fp16.engine -k nvidia_tao
2023-11-04 20:06:48,881 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.common.logging.status_logging 198: Log file already exists at /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet/export/status.json
2023-11-04 20:06:48,882 [TAO Toolkit] [INFO] root 174: Starting efficientdet_tf1 gen_trt_engine.
[11/04/2023-20:06:49] [TRT] [I] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 220, GPU 4672 (MiB)
[11/04/2023-20:06:51] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +131, GPU +140, now: CPU 370, GPU 4827 (MiB)
2023-11-04 20:06:52,356 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 40: List inputs:
2023-11-04 20:06:52,357 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 41: Input 0 -> input.
2023-11-04 20:06:52,357 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 42: [512, 512, 3].
2023-11-04 20:06:52,357 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 42: 1.
[11/04/2023-20:06:52] [TRT] [W] onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/04/2023-20:06:56] [TRT] [I] No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[11/04/2023-20:06:56] [TRT] [I] Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[11/04/2023-20:06:56] [TRT] [W] builtin_op_importers.cpp:4714: Attribute class_agnostic not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/04/2023-20:06:56] [TRT] [I] Successfully created plugin: EfficientNMS_TRT
2023-11-04 20:06:56,311 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 68: Network Description
2023-11-04 20:06:56,312 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 71: Input 'input' with shape (1, 512, 512, 3) and dtype DataType.FLOAT
2023-11-04 20:06:56,318 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'num_detections' with shape (1, 1) and dtype DataType.INT32
2023-11-04 20:06:56,318 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'detection_boxes' with shape (1, 100, 4) and dtype DataType.FLOAT
2023-11-04 20:06:56,318 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'detection_scores' with shape (1, 100) and dtype DataType.FLOAT
2023-11-04 20:06:56,319 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'detection_classes' with shape (1, 100) and dtype DataType.INT32
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 143: TensorRT engine build configurations:
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 156:
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 158: BuilderFlag.FP16
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 172: BuilderFlag.TF32
2023-11-04 20:06:56,321 [TAO Toolkit] [INFO] root 174: type object 'tensorrt.tensorrt.BuilderFlag' has no attribute 'ENABLE_TACTIC_HEURISTIC'
Traceback (most recent call last):
File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/efficientdet_tf1/scripts/gen_trt_engine.py>", line 3, in <module>
File "<frozen cv.efficientdet_tf1.scripts.gen_trt_engine>", line 182, in <module>
File "<frozen cv.common.decorators>", line 63, in _func
File "<frozen cv.common.decorators>", line 48, in _func
File "<frozen cv.efficientdet_tf1.scripts.gen_trt_engine>", line 70, in main
File "<frozen engine.builder>", line 287, in create_engine
File "<frozen engine.builder>", line 185, in _logger_info_IBuilderConfig
AttributeError: type object 'tensorrt.tensorrt.BuilderFlag' has no attribute 'ENABLE_TACTIC_HEURISTIC'
2023-11-04 20:06:57,088 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Telemetry data couldn't be sent, but the command ran successfully.
2023-11-04 20:06:57,089 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: [Error]: Uninitialized
2023-11-04 20:06:57,090 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Execution status: FAIL
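Reading the traceback, BuilderFlag.ENABLE_TACTIC_HEURISTIC appears to have been added only in TensorRT 8.5, so the TAO Deploy wheel seems to assume a newer TensorRT than the 8.4.1 on this JetPack. A minimal sketch of the kind of guard that would avoid this AttributeError (FakeBuilderFlag and enabled_flag_names are hypothetical stand-ins for illustration, not actual TAO Deploy code):

```python
# Sketch: report only the BuilderFlag names that the installed TensorRT
# actually defines, instead of unconditionally looking up
# ENABLE_TACTIC_HEURISTIC (absent from TensorRT 8.4.1).
import enum

class FakeBuilderFlag(enum.IntEnum):
    """Stand-in for tensorrt.BuilderFlag as it looks on TRT 8.4.1."""
    FP16 = 0
    TF32 = 6

def enabled_flag_names(flag_bits, flag_enum):
    """Names of set flags, skipping ones this TRT version lacks."""
    names = []
    for name in ("FP16", "TF32", "ENABLE_TACTIC_HEURISTIC"):
        member = getattr(flag_enum, name, None)  # None on older TRT
        if member is not None and flag_bits & (1 << int(member)):
            names.append(name)
    return names

bits = (1 << FakeBuilderFlag.FP16) | (1 << FakeBuilderFlag.TF32)
print(enabled_flag_names(bits, FakeBuilderFlag))  # ['FP16', 'TF32']
```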
What is wrong with my process?