Hello. When I generated the TensorRT engine file for PeopleSemSegNet's .etlt file in the tao_deploy.ipynb notebook, I got the error message NotImplementedError: UFF for YOLO_v3 is not supported.
What can I do to solve this problem? Thank you in advance for your help.
My TensorRT version is 8.5.1.7 and my Python version is 3.8.10.
The picture below shows the error message produced when generating the PeopleSemSegNet TensorRT engine.
The content below is the process of generating the PeopleSemSegNet TensorRT engine:
# FIXME 8 - data_type: choose fp32 or fp16
os.environ["data_type"] = "fp32"

# FIXME 9 - trt_out_folder: choose output folder for TensorRT engine file writing
trt_out_folder = "/root/nvidia-tao/tao_deploy/trt_out_folder" + ptm_model_name
!mkdir -p $trt_out_folder

import glob
input_etlt_file_list = glob.glob(os.environ.get("ptm_download_folder") + "/**/*.etlt", recursive=True)
if len(input_etlt_file_list) == 0:
    raise Exception("ETLT file was not downloaded")
os.environ["input_etlt_file"] = input_etlt_file_list[0]

if ptm_model_name in ("LicensePlateRecognition", "LicensePlateDetection"):
    # FIXME: country
    # us/ccpd for LicensePlateDetection - us for United States, ch for China
    # us/ch for LicensePlateRecognition - us for United States, ch for China
    country = "us"
    for countrywise_ptm in input_etlt_file_list:
        fname = countrywise_ptm.split("/")[-1]
        if fname.startswith(country):
            os.environ["input_etlt_file"] = countrywise_ptm

action = ""
if ptm_model_name in ("PeopleNet", "LicensePlateDetection", "DashCamNet", "TrafficCamNet", "FaceDetect", "FaceDetectIR"):
    action = "_trt"

os.environ["KEY"] = "tlt_encode"
if ptm_model_name in ("LicensePlateRecognition", "LicensePlateDetection", "FaceDetect"):
    os.environ["KEY"] = "nvidia_tlt"

os.environ["trt_experiment_spec"] = f"{os.environ.get('COLAB_NOTEBOOKS_PATH')}/tao_deploy/specs/{ptm_model_name}/{ptm_model_name}{action}.txt"
os.environ["trt_out_file_name"] = f'{trt_out_folder}/{ptm_model_name}.trt.{os.environ["data_type"]}'

if ptm_model_name == "PeopleSemSegNet":
    !unet gen_trt_engine \
        -m $input_etlt_file \
        -k $KEY \
        -e $trt_experiment_spec \
        --data_type $data_type \
        --batch_size 1 \
        --max_batch_size 3 \
        --engine_file $trt_out_file_name
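Since the error mentions YOLO_v3 even though the `unet` entrypoint was called, one thing that can be checked before running `unet gen_trt_engine` is which spec file, key, and model file the notebook actually resolved. This is only a debugging sketch; the `dump_env` helper is something I wrote for illustration, not part of the notebook:

```python
import os

def dump_env(names):
    """Print the selected environment variables, or '<unset>' if missing."""
    resolved = {n: os.environ.get(n, "<unset>") for n in names}
    for name, value in resolved.items():
        print(f"{name} = {value}")
    return resolved

# Variables set by the notebook cell above; if trt_experiment_spec points at
# the wrong spec file, that could explain the YOLO_v3 error.
dump_env(["input_etlt_file", "KEY", "trt_experiment_spec", "data_type"])
```

Running this in a cell right before the `unet gen_trt_engine` call shows exactly what gets substituted into the command.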
The content below is PeopleSemSegNet.txt, referenced from this file:
model_config {
  num_layers: 18
  model_input_width: 960
  model_input_height: 544
  model_input_channels: 3
  all_projections: true
  arch: "vanilla_unet_dynamic"
  use_batch_norm: true
  training_precision {
    backend_floatx: FLOAT32
  }
}
dataset_config {
  dataset: "custom"
  augment: False
  input_image_type: "color"
  train_data_sources: {
    data_source: {
      image_path: "/root/nvidia-tao/tao_deploy/specs/PeopleSemSegNet/PeopleSemSegNet_data.txt"
      masks_path: ""
    }
  }
  val_data_sources: {
    data_source: {
      image_path: "/root/nvidia-tao/tao_deploy/specs/PeopleSemSegNet/PeopleSemSegNet_data.txt"
      masks_path: ""
    }
  }
  test_data_sources: {
    data_source: {
      image_path: "/root/nvidia-tao/tao_deploy/specs/PeopleSemSegNet/PeopleSemSegNet_data.txt"
    }
  }
  data_class_config {
    target_classes {
      name: "person"
      mapping_class: "person"
      label_id: 1
    }
    target_classes {
      name: "background"
      mapping_class: "background"
      label_id: 0
    }
    target_classes {
      name: "bag"
      mapping_class: "background"
      label_id: 2
    }
    target_classes {
      name: "face"
      mapping_class: "person"
      label_id: 3
    }
  }
}
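Because the error names YOLO_v3 rather than UNet, a quick way to confirm that the file actually passed via `-e` declares the expected architecture is to scan it for the `arch:` line. This is just an illustrative check of my own; the path is the one from my setup above, and `check_spec` is a made-up helper:

```python
import os

def check_spec(path):
    """Return the arch declared in a TAO spec file, or None if the file is
    missing or contains no arch line."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("arch:"):
                return line.split(":", 1)[1].strip().strip('"')
    return None

# Path to the spec file shown above (from my environment)
spec_path = "/root/nvidia-tao/tao_deploy/specs/PeopleSemSegNet/PeopleSemSegNet.txt"
print("declared arch:", check_spec(spec_path))
```

If this prints anything other than vanilla_unet_dynamic (or None because the path is wrong), the spec resolution in the notebook would be the first thing to investigate.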