Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): Jetson AGX Xavier
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Efficientdet-tf1
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
TAO Deploy was installed locally on the AGX Xavier with the following commands:
apt install libopenmpi-dev
pip install nvidia_tao_deploy==5.0.0.423.dev0
pip install https://files.pythonhosted.org/packages/f7/7a/ac2e37588fe552b49d8807215b7de224eef60a495391fdacc5fa13732d11/nvidia_eff_tao_encryption-0.1.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
pip install https://files.pythonhosted.org/packages/0d/05/6caf40aefc7ac44708b2dcd5403870181acc1ecdd93fa822370d10cc49f3/nvidia_eff-0.6.2-py38-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
All packages installed successfully. However, running the following command,
efficientdet_tf1 gen_trt_engine -m model.onnx -r ./export --data_type fp16 --batch_size 1 --engine_file ./export/model.onnx_b1_fp16.engine -k nvidia_tao
produced the following errors:
atic@atic-desktop:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet$ efficientdet_tf1 gen_trt_engine -m model.onnx -r ./export --data_type fp16 --batch_size 1 --engine_file ./export/model.onnx_b1_fp16.engine -k nvidia_tao
2023-11-04 20:06:48,881 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.common.logging.status_logging 198: Log file already exists at /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/Rectitude_efficientdet/export/status.json
2023-11-04 20:06:48,882 [TAO Toolkit] [INFO] root 174: Starting efficientdet_tf1 gen_trt_engine.
[11/04/2023-20:06:49] [TRT] [I] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 220, GPU 4672 (MiB)
[11/04/2023-20:06:51] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +131, GPU +140, now: CPU 370, GPU 4827 (MiB)
2023-11-04 20:06:52,356 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 40: List inputs:
2023-11-04 20:06:52,357 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 41: Input 0 -> input.
2023-11-04 20:06:52,357 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 42: [512, 512, 3].
2023-11-04 20:06:52,357 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 42: 1.
[11/04/2023-20:06:52] [TRT] [W] onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/04/2023-20:06:56] [TRT] [I] No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[11/04/2023-20:06:56] [TRT] [I] Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[11/04/2023-20:06:56] [TRT] [W] builtin_op_importers.cpp:4714: Attribute class_agnostic not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/04/2023-20:06:56] [TRT] [I] Successfully created plugin: EfficientNMS_TRT
2023-11-04 20:06:56,311 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 68: Network Description
2023-11-04 20:06:56,312 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 71: Input 'input' with shape (1, 512, 512, 3) and dtype DataType.FLOAT
2023-11-04 20:06:56,318 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'num_detections' with shape (1, 1) and dtype DataType.INT32
2023-11-04 20:06:56,318 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'detection_boxes' with shape (1, 100, 4) and dtype DataType.FLOAT
2023-11-04 20:06:56,318 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'detection_scores' with shape (1, 100) and dtype DataType.FLOAT
2023-11-04 20:06:56,319 [TAO Toolkit] [INFO] nvidia_tao_deploy.cv.efficientdet_tf1.engine_builder 73: Output 'detection_classes' with shape (1, 100) and dtype DataType.INT32
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 143: TensorRT engine build configurations:
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 156:
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 158: BuilderFlag.FP16
2023-11-04 20:06:56,320 [TAO Toolkit] [INFO] nvidia_tao_deploy.engine.builder 172: BuilderFlag.TF32
2023-11-04 20:06:56,321 [TAO Toolkit] [INFO] root 174: type object 'tensorrt.tensorrt.BuilderFlag' has no attribute 'ENABLE_TACTIC_HEURISTIC'
Traceback (most recent call last):
File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/efficientdet_tf1/scripts/gen_trt_engine.py>", line 3, in <module>
File "<frozen cv.efficientdet_tf1.scripts.gen_trt_engine>", line 182, in <module>
File "<frozen cv.common.decorators>", line 63, in _func
File "<frozen cv.common.decorators>", line 48, in _func
File "<frozen cv.efficientdet_tf1.scripts.gen_trt_engine>", line 70, in main
File "<frozen engine.builder>", line 287, in create_engine
File "<frozen engine.builder>", line 185, in _logger_info_IBuilderConfig
AttributeError: type object 'tensorrt.tensorrt.BuilderFlag' has no attribute 'ENABLE_TACTIC_HEURISTIC'
2023-11-04 20:06:57,088 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Telemetry data couldn't be sent, but the command ran successfully.
2023-11-04 20:06:57,089 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: [Error]: Uninitialized
2023-11-04 20:06:57,090 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Execution status: FAIL
Why does the engine conversion fail?
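For reference, the traceback's `AttributeError` suggests the installed TensorRT build simply does not expose `BuilderFlag.ENABLE_TACTIC_HEURISTIC` (a member added in newer TensorRT releases than the one shipped on this Xavier). A minimal sketch to confirm this on the device, assuming only that `tensorrt` is importable there:

```python
# Probe an enum-like class for a named member. The helper itself has no
# TensorRT dependency, so it can be exercised anywhere.
def has_flag(builder_flag_cls, name: str) -> bool:
    """Return True if the given enum class exposes the named member."""
    return hasattr(builder_flag_cls, name)

if __name__ == "__main__":
    try:
        import tensorrt as trt
        print("TensorRT version:", trt.__version__)
        print("ENABLE_TACTIC_HEURISTIC present:",
              has_flag(trt.BuilderFlag, "ENABLE_TACTIC_HEURISTIC"))
    except ImportError:
        print("tensorrt is not importable in this environment")
```

If this prints `False` for the flag, the nvidia_tao_deploy wheel expects a newer TensorRT than the JetPack on the device provides, which would explain the failure.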