• Hardware: RTX 2080 Ti
• Network Type: Mask R-CNN
I trained a model on an RTX 3090 and want to deploy a TensorRT engine on an RTX 2080 Ti. When I run tao-converter, it fails:
./tao-converter -k nvidia_tlt -d 3,1344,1344 -o generate_detections,mask_fcn_logits/BiasAdd -e ./trt.fp16.engine -t fp16 -i nchw -m 8 ./model.step-250000.etlt
[INFO] [MemUsageChange] Init CUDA: CPU +322, GPU +0, now: CPU 333, GPU 3243 (MiB)
[INFO] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 333 MiB, GPU 3243 MiB
[INFO] [MemUsageSnapshot] End constructing builder kernel library: CPU 468 MiB, GPU 3277 MiB
[ERROR] UffParser: Validator error: pyramid_crop_and_resize_box: Unsupported operation _MultilevelCropAndResize_TRT
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] 4: [network.cpp::validate::2633] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
I am sure the key is correct and that tao-converter is installed correctly, because I have successfully converted other models with it.