How to convert an .etlt model to a TRT engine on Jetson Orin

• Hardware:
x86, Ubuntu 18.04
Jetson Orin, Ubuntu 20.04, JetPack 5.0.2

• Network Type:
pointpillar

• TAO Version:
4.0.0-pyt

• Training spec file:
pointpillars.ipynb (5.1 MB)

Hello, my requirement is to use the PointPillars model to run inference on Orin. Training was done on x86, which produced the .tlt and .etlt models. The Docker image on Orin is the same as on x86. The Jupyter notebook can convert the model into a TRT engine file on the x86 platform, but tao-converter on the Orin platform fails to convert the model. The command is as follows:

tao-converter -k $KEY \
-e ./trt.fp16.engine \
-p points,1x25000x4,1x25000x4,1x25000x4,1x25000x4 \
-p num_points,1,1,1 \
-t fp16 \
$My.etlt

Then report an error:
[INFO] [MemUsageChange] Init CUDA: CPU+220, GPU+0, now: CPU 242, GPU 7828 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU+351, GPU+350, now: CPU 612, GPU 8197 (MiB)
[INFO]----------------------------------------------------------------
[INFO] Input filename:/tmp/fileGgvCq8
[INFO] ONNX IR version: 0.0.0
[INFO] Opset version: 0
[INFO] Producer name:
[INFO] Producer version:
[INFO] Domain:
[INFO] Model version: 0
[INFO] Doc string:
[INFO]----------------------------------------------------------------
[ERROR] Number of optimization profiles does not match model input node number

This error suggests that the provided parameters are incorrect, yet these same parameters allow the model to be converted on the x86 platform. I don't know where the problem lies or how to verify these parameters. I hope to receive some suggestions. Thank you very much.

Please make sure $My.etlt is available; I suggest using an explicit filename.
Also, set an explicit key instead of `$KEY` to avoid unexpected issues.
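
For example, a sketch with an explicit filename and a placeholder key (the paths, key, and shapes here are illustrative; adjust them to your setup). Note that each -p option takes one input name followed by exactly three shapes (min, opt, max):

./tao-converter -k <your_export_key> \
-e ./trt.fp16.engine \
-p points,1x25000x4,1x25000x4,1x25000x4 \
-p num_points,1,1,1 \
-t fp16 \
/path/to/your_model.etlt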

The .etlt model is confirmed to be available: it was successfully converted to TRT on x86, and the $KEY was generated on the official website and is valid. The tao-converter was also downloaded from the official website, version v3.22.05_trt8.4_aarch64.
checkpoint_epoch_8.tlt.etlt (15.3 MB)

The log is similar to Convert model to Jetson Error during model export step in TAO notebook - #18 by Morganh.
As mentioned above, please try setting an explicit key instead of $KEY.
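
On confirming the input parameters: the .etlt file itself is encrypted and cannot be inspected directly. If the exported .onnx from the x86 run is still available (an assumption, and the filename below is a placeholder), its input names and shapes can be listed with polygraphy:

polygraphy inspect model pointpillars.onnx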

Thank you for your patient answer. The $KEY I generated on the official website is as follows: YXZlOHBvZzc4Y3Y5czY5NW4yb282amtucnQ6OGIyZDJlMTMtYThmMi00ZGRlLTkyNzQtODgyM2I0NGFmZDU0

So the command I used is like this
./tao-converter -k YXZlOHBvZzc4Y3Y5czY5NW4yb282amtucnQ6Nzg0NTRjOTctYmQ5My00OWU2LTg5NTAtMGFmZDAzYzQ3N2Q0 \
-e ./trt.fp16.engine \
-p points,1x1x25000x4,1x1x25000x4,1x1x25000x4 \
-p num_points,1,1,1 \
-t fp16 \
~/checkpoint_epoch_8.tlt.etlt

Now the error is reported as follows:
[INFO] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 242, GPU 6158 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +351, GPU +343, now: CPU 612, GPU 6518 (MiB)
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Interpreting non ascii codepoint 221.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Expected identifier, got: �
[ERROR] ModelImporter.cpp:735: Failed to parse ONNX model from file: /tmp/file9eEmD3
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Number of optimization profiles does not match model input node number.
Aborted (core dumped)

It seems this error is still related to the KEY. What can be done to avoid it? Thank you again for your feedback.

To narrow down, please download the NGC model PointPillarNet | NVIDIA NGC and try it.
$ wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/pointpillarnet/versions/deployable_v1.0/files/pointpillars_deployable.etlt'
Its key is tlt_encode.
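
With that model, a conversion command might look like the following sketch (the shapes are reused from the commands above and may need adjusting for your deployment):

./tao-converter -k tlt_encode \
-e ./trt.fp16.engine \
-p points,1x25000x4,1x25000x4,1x25000x4 \
-p num_points,1,1,1 \
-t fp16 \
./pointpillars_deployable.etlt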

I downloaded this model and tried it out
./tao-converter -k YXZlOHBvZzc4Y3Y5czY5NW4yb282amtucnQ6OGIyZDJlMTMtYThmMi00ZGRlLTkyNzQtODgyM2I0NGFmZDU0 \
-e ./trt.fp16.engine \
-p points,1x25000x4,1x25000x4,1x25000x4 \
-p num_points,1,1,1 \
-t fp16 \
/home/oligay_88/TAO/pointpillars_deployable.etlt

The error output has changed:
[INFO] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 242, GPU 7508 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +351, GPU +394, now: CPU 612, GPU 7919 (MiB)
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:2: Interpreting non ascii codepoint 200.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:2: Expected identifier, got: �
[ERROR] ModelImporter.cpp:735: Failed to parse ONNX model from file: /tmp/fileU7azNC
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Number of optimization profiles does not match model input node number.
Aborted (core dumped)

As mentioned above, please use the key tlt_encode. The .etlt file is decrypted with the key it was exported with; with the wrong key the decrypted bytes are garbage, which is why the ONNX parser fails.

Hello, I used tlt_encode and, despite many warnings, successfully converted the model to a TRT engine. Thank you very much for your help. These warnings should not be a problem, right?

Yes. Glad to know it works.
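
If you want to sanity-check the generated engine on Orin, one option is to load and time it with trtexec (the path below is the usual JetPack location; adjust if yours differs):

/usr/src/tensorrt/bin/trtexec --loadEngine=./trt.fp16.engine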

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.