TAO converter for Linux Ubuntu 20.04

Hi

When I try to download the binary executable of the TAO converter for x86 Linux Ubuntu 20.04 from the following NVIDIA site:
TAO Converter | NVIDIA NGC

I am not able to download the file for the correct architecture for TensorRT 8.4. When I click the download link for x86, only the aarch64 binary is downloaded, not the correct x86 executable.

Hi, thanks for posting in the forums. I am looking for the owner of the NGC page and will post back here when I have more information.

Best,
Tom

@gayathri4
Can you double check? I cannot reproduce this issue.

Yes, I have rechecked it. When I try to download the TensorRT 8.4 version for an x86 device, the aarch64 file is downloaded instead.

Can you try the "wget" command? See below.

Yes, the wget command works, but I am getting the following error when converting the etlt model:

[INFO] [MemUsageChange] Init CUDA: CPU +724, GPU +0, now: CPU 730, GPU 311 (MiB)
[INFO] ----------------------------------------------------------------
[INFO] Input filename: /tmp/fileFclIhk
[INFO] ONNX IR version: 0.0.7
[INFO] Opset version: 13
[INFO] Producer name: pytorch
[INFO] Producer version: 1.10
[INFO] Domain:
[INFO] Model version: 0
[INFO] Doc string:
[INFO] ----------------------------------------------------------------
[WARNING] onnx2trt_utils.cpp:320: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[ERROR] [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Gather_218: index to gather must not exceed length of vector
)
[ERROR] ModelImporter.cpp:738: While parsing node number 220 [Range -> "537"]:
[ERROR] ModelImporter.cpp:739: --- Begin node ---
[ERROR] ModelImporter.cpp:740: input: "935"
input: "535"
input: "936"
output: "537"
name: "Range_220"
op_type: "Range"

[ERROR] ModelImporter.cpp:741: --- End node ---
[ERROR] ModelImporter.cpp:744: ERROR: ModelImporter.cpp:197 In function parseGraph:
[6] Invalid Node - Range_220
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Gather_218: index to gather must not exceed length of vector
)
Invalid Node - Range_220
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Gather_218: index to gather must not exceed length of vector
)
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (1, 3, 576, 960)
[INFO] Detected input dimensions from the model: (1, 3, 576, 960)
[INFO] Model has no dynamic shape.
[INFO] [MemUsageSnapshot] Builder begin: CPU 797 MiB, GPU 311 MiB
[ERROR] 4: [network.cpp::validate::2361] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
Segmentation fault (core dumped)

OK, so the original issue is gone.

For the latest error, can you share the full command line?

I need to convert the etlt model to a TensorRT engine using tao-converter.

./tao-converter_8.4 -k ess -t fp16 -e ./ess.engine -o output_left ess.etlt
[INFO] [MemUsageChange] Init CUDA: CPU +736, GPU +0, now: CPU 742, GPU 311 (MiB)
[INFO] ----------------------------------------------------------------
[INFO] Input filename: /tmp/file2dQUqh
[INFO] ONNX IR version: 0.0.7
[INFO] Opset version: 13
[INFO] Producer name: pytorch
[INFO] Producer version: 1.10
[INFO] Domain:
[INFO] Model version: 0
[INFO] Doc string:
[INFO] ----------------------------------------------------------------
[WARNING] onnx2trt_utils.cpp:320: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[ERROR] [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Gather_218: index to gather must not exceed length of vector
)
[ERROR] ModelImporter.cpp:738: While parsing node number 220 [Range -> "537"]:
[ERROR] ModelImporter.cpp:739: --- Begin node ---
[ERROR] ModelImporter.cpp:740: input: "935"
input: "535"
input: "936"
output: "537"
name: "Range_220"
op_type: "Range"

[ERROR] ModelImporter.cpp:741: --- End node ---
[ERROR] ModelImporter.cpp:744: ERROR: ModelImporter.cpp:197 In function parseGraph:
[6] Invalid Node - Range_220
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Gather_218: index to gather must not exceed length of vector
)
Invalid Node - Range_220
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Gather_218: index to gather must not exceed length of vector
)
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (1, 3, 576, 960)
[INFO] Detected input dimensions from the model: (1, 3, 576, 960)
[INFO] Model has no dynamic shape.
[INFO] [MemUsageSnapshot] Builder begin: CPU 809 MiB, GPU 311 MiB
[ERROR] 4: [network.cpp::validate::2361] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
Segmentation fault (core dumped)
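Since the builder reports "Network must have at least one output", one thing worth checking is whether the input dimensions and output node names passed to tao-converter match the model. A sketch of a more explicit invocation follows; the `-d` values mirror the "Detected input dimensions" line from the log above, and the output node name is carried over from the original command and should be verified against the model card on NGC (this is an assumption, not a confirmed fix).

```shell
# Sketch of a tao-converter invocation with explicit input dimensions.
# -k : encoding key (must match the key the model was exported with)
# -d : input dimensions C,H,W, taken here from the log's
#      "Detected input dimensions from the model: (1, 3, 576, 960)"
# -o : comma-separated output node name(s); "output_left" is kept from the
#      original command but should be checked against the NGC model card
# -t : engine precision
# -e : path for the generated engine
./tao-converter_8.4 -k ess \
    -d 3,576,960 \
    -o output_left \
    -t fp16 \
    -e ./ess.engine \
    ess.etlt
```

If the parse error persists with a correct key and output names, the failure in `Range_220`/`Gather_218` points at an ONNX operator the x86 TensorRT 8.4 parser cannot handle for this model, rather than at the command line.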

Which network did you use to train to get this model?

Hi Morgan

This model was downloaded from the ESS DNN stereo page on NVIDIA NGC.

It is not a model mentioned in the TAO user guide.
To narrow down, could you try its old version? ESS DNN Stereo Disparity | NVIDIA NGC

Hi

I tried the old version and am getting the same error.

Hi

The same converter for JetPack is working on a Jetson Nano board. The problem is only on the x86 platform.

Since there has been no update from you for a while, we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

So there is no issue on the Jetson Nano when you use the aarch64 version of tao-converter, correct?


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.