Description
I am following the instructions to install the nanoSAM framework (https://github.com/NVIDIA-AI-IOT/nanosam, a distilled Segment Anything (SAM) model that runs in real time with NVIDIA TensorRT) and am stuck at converting the mobile_sam_mask_decoder ONNX model to a TensorRT engine.
trtexec --onnx=data/mobile_sam_mask_decoder.onnx \
    --saveEngine=data/mobile_sam_mask_decoder.engine \
    --minShapes=point_coords:1x1x2,point_labels:1x1 \
    --optShapes=point_coords:1x1x2,point_labels:1x1 \
    --maxShapes=point_coords:1x10x2,point_labels:1x10
This fails with:
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/18/2023-16:35:30] [E] Error[4]: [graph.cpp::symbolicExecute::539] Error Code 4: Internal Error (/OneHot: an IIOneHotLayer cannot be used to compute a shape tensor)
[12/18/2023-16:35:30] [E] [TRT] ModelImporter.cpp:771: While parsing node number 146 [Tile -> "/Tile_output_0"]:
[12/18/2023-16:35:30] [E] [TRT] ModelImporter.cpp:772: --- Begin node ---
[12/18/2023-16:35:30] [E] [TRT] ModelImporter.cpp:773: input: "/Unsqueeze_3_output_0"
The other conversion, i.e.

trtexec --onnx=data/resnet18_image_encoder.onnx \
    --saveEngine=data/resnet18_image_encoder.engine \
    --fp16

runs without issues.
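One suggestion I have seen for "cannot be used to compute a shape tensor" parser errors is to constant-fold the ONNX graph before building, so that shape-computing subgraphs like the OneHot feeding the Tile node are evaluated offline. A minimal sketch of what I understand the workaround to be (untested on my side; assumes Polygraphy is installed via pip and that folding preserves the input names used below):

# Install Polygraphy (assumption: pip environment with network access)
python3 -m pip install polygraphy

# Fold constants so the OneHot feeding node 146 (Tile) can be
# evaluated offline instead of at parse time
polygraphy surgeon sanitize data/mobile_sam_mask_decoder.onnx \
    --fold-constants \
    -o data/mobile_sam_mask_decoder_folded.onnx

# Retry the engine build on the folded model
trtexec --onnx=data/mobile_sam_mask_decoder_folded.onnx \
    --saveEngine=data/mobile_sam_mask_decoder.engine \
    --minShapes=point_coords:1x1x2,point_labels:1x1 \
    --optShapes=point_coords:1x1x2,point_labels:1x1 \
    --maxShapes=point_coords:1x10x2,point_labels:1x10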
Environment
TensorRT Version: 8.6.1
GPU Type: RTX 3080
Nvidia Driver Version: 550.09
CUDA Version: 12.1
CUDNN Version: 8.9
Operating System + Version: Ubuntu 22.04 on WSL2
PyTorch Version (if applicable): 2.1.1
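For reference, the versions above can be re-checked as follows (a sketch, assuming an Ubuntu/WSL2 shell, Debian-packaged TensorRT/cuDNN, and a pip-installed PyTorch):

nvidia-smi                                            # driver version and GPU model
nvcc --version                                        # CUDA toolkit version
dpkg -l | grep -i -e tensorrt -e cudnn                # TensorRT and cuDNN package versions
python3 -c "import torch; print(torch.__version__)"   # PyTorch version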
Relevant Files
nanosam repository (setup instructions and model export scripts): https://github.com/NVIDIA-AI-IOT/nanosam
Steps To Reproduce
Follow the setup steps outlined in the nanosam README (https://github.com/NVIDIA-AI-IOT/nanosam), then run the two trtexec commands from the Description; a condensed repro is sketched below.
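Condensed repro (a sketch; the data/ paths follow the nanosam README layout, which I am assuming here):

git clone https://github.com/NVIDIA-AI-IOT/nanosam
cd nanosam
# ... follow the README setup (install dependencies, export the
#     ONNX models into data/) ...

# This build succeeds:
trtexec --onnx=data/resnet18_image_encoder.onnx \
    --saveEngine=data/resnet18_image_encoder.engine \
    --fp16

# This build fails with the OneHot error quoted above:
trtexec --onnx=data/mobile_sam_mask_decoder.onnx \
    --saveEngine=data/mobile_sam_mask_decoder.engine \
    --minShapes=point_coords:1x1x2,point_labels:1x1 \
    --optShapes=point_coords:1x1x2,point_labels:1x1 \
    --maxShapes=point_coords:1x10x2,point_labels:1x10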