Tao converter issue for PointPillar TensorRT Inference Sample

Hello, I am trying the sample "PointPillar TensorRT Inference Sample" (tao_toolkit_recipes/tao_pointpillars/tensorrt_sample at main · NVIDIA-AI-IOT/tao_toolkit_recipes · GitHub), for which I'm using the PointPillars deployable_v1.0 model. I am facing an issue with the tao-converter command. I'm attaching a snippet of the same.

If I give this command:

```shell
./tao-converter -k tlt_encode \
  -e /home/adascoe/TensorRT/trt.fp16.engine \
  -p points,1x204800x4,1x204800x4,1x204800x4 \
  -p num_points,1x1x1,1x1x1,1x1x1 \
  -t fp16 \
  /home/adascoe/Downloads/model/files/pointpillars_deployable.etlt
```

where I have changed the format of num_points, then I’m getting this error


Please try this version of tao-converter: TAO Converter | NVIDIA NGC.

For this, which TensorRT version needs to be installed?

Please use the tao-converter I mentioned and see if it works.
Which TRT version have you installed?
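For reference, a quick way to check this on Jetson (this assumes the TensorRT Debian packages and/or Python bindings are installed; either command may be unavailable on a given setup):

```shell
# Lists the installed TensorRT packages on Jetson/Ubuntu
dpkg -l | grep -i tensorrt
# Or, if the Python bindings are installed:
python3 -c "import tensorrt; print(tensorrt.__version__)"
```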

TensorRT 8.2

I suggest updating to version 8.6 to avoid unexpected TensorRT issues.
BTW, there is a similar topic in TAO Converter Provide three optimization profiles for pointpillar - #9 by Morganh.

I have installed v5.1.0_jp6.0_aarch64 of the TAO converter on Jetson NX, but I'm still getting this error.

OK, I just realized your machine is a Jetson NX.
Yes, please use the aarch64 versions.
Can you try https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/tao-converter/files?version=v4.0.0_trt8.5.2.2_aarch64 and share the log?

BTW, you need to run
$ chmod +x tao-converter

This log is for that version (v5.1.0_jp6.0_aarch64) only. After running chmod +x tao-converter and then the command, I'm getting this:

I tried the v4.0.0_trt8.5.1.7_x tao-converter version on an x86 machine. It executes but throws this error.


As mentioned above, in your Jetson NX, can you try https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/tao-converter/files?version=v4.0.0_trt8.5.2.2_aarch64 ?

I tried using this version and it's working. Thanks!
One more query:
here, in the below snippet,


the last line is

```shell
./pointpillars -e /path/to/tensorrt/engine -l ../../data/102.bin -t 0.01 -c Vehicle,Pedestrian,Cyclist -n 4096 -p -d fp16
```

Is ../../data/102.bin here just for reference, or do we get sample data when we clone the repo?

It is just for reference.
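If you want something to smoke-test the binary with, you can generate a dummy point-cloud file yourself. This is only a sketch: it assumes the sample reads a flat KITTI-style .bin of little-endian float32 (x, y, z, intensity) records; the point count and value ranges below are arbitrary.

```shell
# Write 1024 random (x, y, z, intensity) float32 points to dummy.bin.
# Layout assumption: flat little-endian float32, 4 values per point.
python3 - <<'EOF'
import struct, random
n = 1024
with open("dummy.bin", "wb") as f:
    for _ in range(n):
        f.write(struct.pack(
            "<4f",
            random.uniform(-50.0, 50.0),  # x (metres)
            random.uniform(-50.0, 50.0),  # y
            random.uniform(-3.0, 3.0),    # z
            random.random(),              # intensity in [0, 1)
        ))
EOF
```

The resulting file can then be passed to the sample via `-l dummy.bin` in place of `../../data/102.bin`.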

BTW, since TAO 5.0, the ONNX file is available after training. Users can use trtexec to generate the TensorRT engine, so tao-converter can be skipped. Refer to TRTEXEC with PointPillars - NVIDIA Docs.
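As a sketch of that flow (file names are placeholders, and the input names/shapes here simply mirror the ones used in the tao-converter command earlier in this thread — check what your exported ONNX actually declares, e.g. with Netron, before copying them):

```shell
# Build an FP16 TensorRT engine directly from the exported ONNX with trtexec.
# "points"/"num_points" and the 1x204800x4 shape are assumptions carried over
# from the tao-converter command above; adjust to your model.
trtexec --onnx=pointpillars.onnx \
        --saveEngine=trt.fp16.engine \
        --fp16 \
        --minShapes=points:1x204800x4,num_points:1 \
        --optShapes=points:1x204800x4,num_points:1 \
        --maxShapes=points:1x204800x4,num_points:1
```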

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.