Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc)
RTX 4090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
yolo_v3
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here)
/home/ilias/anaconda3/envs/launcher/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning: The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.
warnings.warn(message, DeprecationWarning)
Configuration of the TAO Toolkit Instance
dockers:
  nvidia/tao/tao-toolkit-tf:
    v3.21.11-tf1.15.5-py3:
      docker_registry: nvcr.io
      tasks:
        1. augment
        2. bpnet
        3. classification
        4. dssd
        5. emotionnet
        6. efficientdet
        7. fpenet
        8. gazenet
        9. gesturenet
        10. heartratenet
        11. lprnet
        12. mask_rcnn
        13. multitask_classification
        14. retinanet
        15. ssd
        16. unet
        17. yolo_v3
        18. yolo_v4
        19. yolo_v4_tiny
        20. converter
    v3.21.11-tf1.15.4-py3:
      docker_registry: nvcr.io
      tasks:
        1. detectnet_v2
        2. faster_rcnn
  nvidia/tao/tao-toolkit-pyt:
    v3.21.11-py3:
      docker_registry: nvcr.io
      tasks:
        1. speech_to_text
        2. speech_to_text_citrinet
        3. text_classification
        4. question_answering
        5. token_classification
        6. intent_slot_classification
        7. punctuation_and_capitalization
        8. action_recognition
    v3.22.02-py3:
      docker_registry: nvcr.io
      tasks:
        1. spectro_gen
        2. vocoder
  nvidia/tao/tao-toolkit-lm:
    v3.21.08-py3:
      docker_registry: nvcr.io
      tasks:
        1. n_gram
format_version: 2.0
toolkit_version: 3.22.02
published_date: 02/28/2022
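(Side note on the deprecation warning at the top of that output: as the message itself says, the old nvidia-tlt launcher should be replaced with nvidia-tao. A minimal sketch of that migration, assuming the nvidia-tao launcher package is installable with pip in the same conda environment:)

# Remove the deprecated launcher and install the renamed one
pip uninstall -y nvidia-tlt
pip install nvidia-tao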
• Training spec file (If have, please share here)
experiment spec file
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
I run the command:
!tao yolo_v3 export
-e $SPECS_DIR/experiment_spec_exp.json
-m $USER_EXPERIMENT_DIR/experiment_dir_retrain_qat3/weights/yolov3_resnet18_epoch_080.tlt
-o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_qat2.etlt
-k $KEY
#--cal_image_dir /workspace/tao-experiments/try-6/train/
#--cal_data_file /$USER_EXPERIMENT_DIR/experiment_dir_final/calibration_qat.tensorfile
--data_type int8
#--batch_size 8
#--max_batch_size 64
--cal_json_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration_qat.json
#--verbose
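(For readability, the same command written with explicit line continuations and the commented-out flags dropped; this is what the shell effectively runs in the specific configuration below, using the same variables and paths as above:)

!tao yolo_v3 export \
    -e $SPECS_DIR/experiment_spec_exp.json \
    -m $USER_EXPERIMENT_DIR/experiment_dir_retrain_qat3/weights/yolov3_resnet18_epoch_080.tlt \
    -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_qat2.etlt \
    -k $KEY \
    --data_type int8 \
    --cal_json_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration_qat.json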
Depending on which of the "#" lines are commented out, I get a number of different errors, but in this specific configuration I get:
/home/ilias/anaconda3/envs/launcher/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning: The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.
warnings.warn(message, DeprecationWarning)
2023-04-21 14:18:35,036 [INFO] root: Registry: ['nvcr.io']
2023-04-21 14:18:35,071 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-ydv8_bf6 because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
Using TensorFlow backend.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
2023-04-21 12:18:38,083 [INFO] root: Building exporter object.
2023-04-21 12:18:39,692 [INFO] root: Exporting the model.
2023-04-21 12:18:39,692 [INFO] root: Using input nodes: ['Input']
2023-04-21 12:18:39,692 [INFO] root: Using output nodes: ['BatchedNMS']
2023-04-21 12:18:39,692 [INFO] iva.common.export.keras_exporter: Using input nodes: ['Input']
2023-04-21 12:18:39,692 [INFO] iva.common.export.keras_exporter: Using output nodes: ['BatchedNMS']
The ONNX operator number change on the optimization: 379 → 173
2023-04-21 12:18:54,369 [INFO] keras2onnx: The ONNX operator number change on the optimization: 379 → 173
[TensorRT] ERROR: 1: [caskUtils.cpp::trtSmToCask::114] Error Code 1: Internal Error (Unsupported SM: 0x809)
2023-04-21 12:18:56,130 [ERROR] modulus.export._tensorrt: Failed to create engine
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/core/build_wheel.runfiles/ai_infra/moduluspy/modulus/export/_tensorrt.py", line 869, in __init__
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/core/build_wheel.runfiles/ai_infra/moduluspy/modulus/export/_tensorrt.py", line 869, in __init__
AssertionError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/yolo_v3/scripts/export.py", line 12, in <module>
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/export/app.py", line 265, in launch_export
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/export/app.py", line 247, in run_export
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/export/keras_exporter.py", line 455, in export
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/core/build_wheel.runfiles/ai_infra/moduluspy/modulus/export/_tensorrt.py", line 877, in __init__
AssertionError: Parsing failed on line 869 in statement
2023-04-21 14:18:57,191 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
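(Possibly relevant to my last question: the "Unsupported SM: 0x809" in the TensorRT error appears to encode compute capability 8.9, which is what an RTX 4090 reports. A minimal way to confirm what the driver sees, assuming a driver recent enough to support the compute_cap query:)

# Print GPU name and compute capability; an RTX 4090 should report 8.9
nvidia-smi --query-gpu=name,compute_cap --format=csv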
This generates the .etlt file but not the calibration_files.json needed to make it work with DeepStream. What am I doing wrong? Is TAO compatible with the RTX 4090?
Thank you for your help!
Best regards,
Ilias.