Using the PointPillars .onnx model

I converted a trained PointPillars model by running tao model pointpillars export.
I then viewed the .onnx network structure using Netron.
It seems there are three plugin layers that are not directly supported by onnxruntime.


[Screenshot from 2024-02-19 14-17-50: Netron view of the exported .onnx graph]
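For reference, one quick way to confirm which op types fall outside the standard ONNX opset (and therefore need plugins) is to scan the graph with the onnx Python package. This is just a sketch, and the model path is an example:

```python
import onnx
from onnx import defs

model = onnx.load("pointpillars.onnx")  # example path

# Collect op types that have no schema in the standard ONNX opset; for the
# TAO PointPillars export these should be the three TensorRT plugin nodes.
custom_ops = set()
for node in model.graph.node:
    try:
        defs.get_schema(node.op_type)
    except Exception:
        custom_ops.add(node.op_type)

print(sorted(custom_ops))
```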

Unlike YOLOv4, where I removed the last layer (BatchedNMSDynamic_TRT) and then exported a new model that onnxruntime can run directly, the PointPillars .onnx model contains custom ops in the middle of the network structure. Merely removing the last layer, the DecodeBbox3D plugin, is therefore not enough.
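As an illustration, that kind of last-layer removal can be sketched with ONNX GraphSurgeon (paths are examples, and the exact rewiring depends on the model):

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("yolov4.onnx"))  # example path

# Find the NMS plugin node and rewire the graph outputs to its inputs,
# effectively dropping the last (unsupported) layer.
nms = next(n for n in graph.nodes if n.op == "BatchedNMSDynamic_TRT")
graph.outputs = nms.inputs
graph.cleanup().toposort()

onnx.save(gs.export_onnx(graph), "yolov4_no_nms.onnx")
```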

Is it possible to split the whole PointPillars .onnx model into two parts, implement the VoxelGeneratorPlugin and PillarScatterPlugin layers myself, and remove the DecodeBbox3D plugin layer? If that is feasible, where should I look for the implementations of the VoxelGeneratorPlugin, PillarScatterPlugin, and DecodeBbox3D plugin layers?
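For illustration, if splitting turns out to be viable, onnx.utils.extract_model can carve out the sub-graph between named tensors. The tensor names below are hypothetical placeholders and would need to be read off the real graph in Netron:

```python
import onnx

# Extract the plugin-free middle of the network (between the PillarScatter
# output and the DecodeBbox3D input) so onnxruntime can run that part.
# All tensor names here are hypothetical examples.
onnx.utils.extract_model(
    "pointpillars.onnx",           # full exported model
    "pointpillars_backbone.onnx",  # extracted sub-graph
    input_names=["scatter_output"],
    output_names=["cls_preds", "box_preds", "dir_cls_preds"],
)
```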

Thanks.

You can refer to:
TensorRT/plugin/voxelGeneratorPlugin at 8.4.1 · NVIDIA/TensorRT · GitHub
TensorRT/plugin/pillarScatterPlugin at c0c633cc629cc0705f0f69359f531a192e524c0f · NVIDIA/TensorRT · GitHub
TensorRT/plugin/decodeBbox3DPlugin at c0c633cc629cc0705f0f69359f531a192e524c0f · NVIDIA/TensorRT · GitHub

Thanks for the references, @Morganh.

I actually ran my own inference code for YOLOv4 models trained with the TAO Toolkit using onnxruntime in Python, and this time I'm writing the PointPillars inference code in Python as well. So I guess I need to move to C++ to make it work? And TensorRT is needed as well?

We suggest running with a TensorRT engine.
For onnxruntime, you can refer to the above-mentioned links to implement the plugins.
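As a rough sketch of the TensorRT route (paths are examples; it assumes the three plugins are registered by the built-in plugin library of your TensorRT version), the engine can be built from the exported .onnx with the Python API:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
# Register the built-in plugins (VoxelGeneratorPlugin, PillarScatterPlugin,
# DecodeBbox3DPlugin) so the ONNX parser can resolve them.
trt.init_libnvinfer_plugins(logger, "")

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("pointpillars.onnx", "rb") as f:  # example path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
serialized = builder.build_serialized_network(network, config)
with open("pointpillars.engine", "wb") as f:
    f.write(serialized)
```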
