Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) : Discrete Nvidia RTX 3080 GPU
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Detectnet_v2
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here):
```
Configuration of the TAO Toolkit Instance
dockers:
  nvidia/tao/tao-toolkit:
    4.0.0-tf2.9.1:
      docker_registry: nvcr.io
      tasks:
        1. classification_tf2
        2. efficientdet_tf2
    4.0.0-tf1.15.5:
      docker_registry: nvcr.io
      tasks:
        1. augment
        2. bpnet
        3. classification_tf1
        4. detectnet_v2
        5. dssd
        6. emotionnet
        7. efficientdet_tf1
        8. faster_rcnn
        9. fpenet
        10. gazenet
        11. gesturenet
        12. heartratenet
        13. lprnet
        14. mask_rcnn
        15. multitask_classification
        16. retinanet
        17. ssd
        18. unet
        19. yolo_v3
        20. yolo_v4
        21. yolo_v4_tiny
        22. converter
    4.0.1-tf1.15.5:
      docker_registry: nvcr.io
      tasks:
        1. mask_rcnn
        2. unet
    4.0.0-pyt:
      docker_registry: nvcr.io
      tasks:
        1. action_recognition
        2. deformable_detr
        3. segformer
        4. re_identification
        5. pointpillars
        6. pose_classification
        7. n_gram
        8. speech_to_text
        9. speech_to_text_citrinet
        10. speech_to_text_conformer
        11. spectro_gen
        12. vocoder
        13. text_classification
        14. question_answering
        15. token_classification
        16. intent_slot_classification
        17. punctuation_and_capitalization
format_version: 2.0
toolkit_version: 4.0.1
published_date: 03/06/2023
```
TensorRT Version for inferencing: 8.5.3.1
• Training spec file(If have, please share here):
detectnet_v2_retrain_resnet18_kitti.txt (6.3 KB)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
I used the following script to export the model:

```shell
# Need to pass the actual image directory instead of the data root
# so that tao-deploy can locate images for calibration
!sed -i "s|/workspace/tao-experiments/data/training|/workspace/tao-experiments/data/training/image_2|g" $LOCAL_SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt

# Convert to TensorRT engine (FP32)
!tao-deploy detectnet_v2 gen_trt_engine \
    -m $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
    -k $KEY \
    --data_type fp32 \
    --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt.fp32 \
    -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
    --verbose

# Convert the spec file back
!sed -i "s|/workspace/tao-experiments/data/training/image_2|/workspace/tao-experiments/data/training|g" $LOCAL_SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt
```
Then I try to read this engine file with the TensorRT runtime using the following code:
```python
import os

import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt
import matplotlib.pyplot as plt
from PIL import Image

TRT_LOGGER = trt.Logger()

# Filenames of TensorRT plan file and input/output images.
engine_file = "resnet18_detector.trt.fp32"
input_file = "input.ppm"
output_file = "output.ppm"

print(trt.__version__)

def load_engine(engine_file_path):
    assert os.path.exists(engine_file_path)
    print("Reading engine from file {}".format(engine_file_path))
    with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

load_engine(engine_file)
```
And after doing this I am getting the following error:
```
Reading engine from file resnet18_detector.trt.fp32
[04/28/2023-10:37:08] [TRT] [E] 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
```
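For what it's worth, my working assumption (not confirmed) is that this `kPLAN_MAGIC_TAG` assertion fires when the TensorRT version deserializing the engine differs from the one that serialized it; serialized engines are generally tied to the exact TensorRT build, so the local runtime would need to match the 8.5.3.1 used inside the tao-deploy container. A minimal sketch of the comparison I plan to run, where `versions_compatible` is a hypothetical helper of my own, not a TensorRT API:

```python
def versions_compatible(build_version: str, runtime_version: str) -> bool:
    """Return True if the two TensorRT version strings match on
    major.minor.patch, the level at which serialized engines are
    generally expected to be interchangeable."""
    return build_version.split(".")[:3] == runtime_version.split(".")[:3]

# Engine was built with TensorRT 8.5.3.1 inside the tao-deploy container;
# the second argument would be the local trt.__version__.
print(versions_compatible("8.5.3.1", "8.5.3.1"))  # True
print(versions_compatible("8.5.3.1", "8.4.1.5"))  # False
```

If the versions diverge, I would expect to have to regenerate the engine with a tao-deploy container matching the local runtime, or upgrade the local TensorRT install.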