Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) : A100 PCIe
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : lprnet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): v3.0-py3.0
• Training spec file(If have, please share here): I used the “tutorial_spec.txt” from the sample code.
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
The error occurs when running the following cell, right after the “Export in FP32 mode” step:
!tlt lprnet export --gpu_index=$GPU_INDEX -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/lprnet_epoch-24.tlt \
-k $KEY \
-e $SPECS_DIR/tutorial_spec.txt \
-o $USER_EXPERIMENT_DIR/export/lprnet_epoch-24.etlt \
--data_type fp32 \
--engine_file $USER_EXPERIMENT_DIR/export/lprnet_epoch-24.engine
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2021-07-26 05:12:06,714 [INFO] iva.common.export.keras_exporter: Using input nodes: ['image_input']
2021-07-26 05:12:06,714 [INFO] iva.common.export.keras_exporter: Using output nodes: ['tf_op_layer_ArgMax', 'tf_op_layer_Max']
2021-07-26 05:12:06,714 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/data/lprnet/specs/tutorial_spec.txt
The ONNX operator number change on the optimization: 132 -> 61
2021-07-26 05:12:18,205 [INFO] keras2onnx: The ONNX operator number change on the optimization: 132 -> 61
Traceback (most recent call last):
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/export.py", line 215, in <module>
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/export.py", line 142, in main
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/export.py", line 211, in run_export
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/export/keras_exporter.py", line 371, in export
TypeError: set_data_preprocessing_parameters() got an unexpected keyword argument 'image_mean'
The command showed this error, but the notebook actually created “lprnet_epoch-24.etlt”.
So I ignored the error, but then I hit another error at the next evaluate step in the notebook.
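For reference, a quick listing like the following could confirm which artifacts the export step actually produced (a minimal sketch; the temporary directory and the single touched file are stand-ins simulating my `$USER_EXPERIMENT_DIR/export` state, where only the .etlt appeared):

```shell
# Minimal sketch: check which artifacts exist after the export step.
# EXPORT_DIR simulates $USER_EXPERIMENT_DIR/export; here we create a temp
# directory containing only the .etlt, matching what I observed.
EXPORT_DIR=$(mktemp -d)
touch "$EXPORT_DIR/lprnet_epoch-24.etlt"

for f in lprnet_epoch-24.etlt lprnet_epoch-24.engine; do
  if [ -f "$EXPORT_DIR/$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```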
# Verify the tensorrt engine accuracy on the validation dataset
!tlt lprnet evaluate --gpu_index=$GPU_INDEX -e $SPECS_DIR/tutorial_spec.txt \
-m $USER_EXPERIMENT_DIR/export/lprnet_epoch-24.engine \
--trt
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2021-07-26 05:16:47,074 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/data/lprnet/specs/tutorial_spec.txt
Traceback (most recent call last):
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/evaluate.py", line 152, in <module>
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/evaluate.py", line 148, in main
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/evaluate.py", line 105, in evaluate
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py", line 31, in __init__
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/engine.py", line 113, in load_engine
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/data/lprnet/export/lprnet_epoch-24.engine'
Exception ignored in: <bound method TRTInferencer.__del__ of <iva.common.inferencer.trt_inferencer.TRTInferencer object at 0x7fdb63ddc518>>
Traceback (most recent call last):
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py", line 139, in __del__
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py", line 96, in clear_trt_session
AttributeError: 'TRTInferencer' object has no attribute 'context'
It seemed that the model file name did not match the example, so I changed the path in the “-m” option as below, but that did not fix the issue.
!tlt lprnet evaluate --gpu_index=$GPU_INDEX -e $SPECS_DIR/tutorial_spec.txt \
-m $USER_EXPERIMENT_DIR/export/lprnet_epoch-24.etlt \
--trt
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
2021-07-26 06:49:57,938 [INFO] iva.lprnet.utils.spec_loader: Merging specification from /workspace/data/lprnet/specs/tutorial_spec.txt
[TensorRT] ERROR: coreReadArchive.cpp (32) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/evaluate.py", line 152, in <module>
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/evaluate.py", line 148, in main
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/lprnet/scripts/evaluate.py", line 105, in evaluate
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py", line 32, in __init__
AttributeError: 'NoneType' object has no attribute 'max_batch_size'
Exception ignored in: <bound method TRTInferencer.__del__ of <iva.common.inferencer.trt_inferencer.TRTInferencer object at 0x7f13cb431550>>
Traceback (most recent call last):
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py", line 139, in __del__
File "/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py", line 96, in clear_trt_session
AttributeError: 'TRTInferencer' object has no attribute 'context'
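My guess is that evaluate with “--trt” tries to deserialize a TensorRT engine, so pointing “-m” at the .etlt (an encrypted exported model, not a built engine) cannot work; the “Magic tag does not match” error looks consistent with that. A guarded call like this sketch (a hypothetical wrapper around the notebook cell; the `tlt` invocation itself is unchanged) would at least fail with a clearer message:

```shell
# Sketch: only run evaluate in --trt mode when the engine file actually exists.
# ENGINE mirrors the notebook path; the fallback default is an assumption.
ENGINE="${USER_EXPERIMENT_DIR:-/workspace/data/lprnet}/export/lprnet_epoch-24.engine"
if [ -f "$ENGINE" ]; then
  tlt lprnet evaluate --gpu_index=$GPU_INDEX -e $SPECS_DIR/tutorial_spec.txt \
      -m "$ENGINE" --trt
else
  echo "engine not found: $ENGINE - fix the export error and rerun export first"
fi
```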
If you have any questions, please let me know.
Best regards,
Kaka