TAO PointPillars for CPU inference

I have trained a PointPillars model for my application, and I need to run it on a computer with only an Intel CPU and no GPU. ONNX Runtime looked like the best solution for this. I have tried to import the model into ONNX Runtime, but the custom plugin ops do not populate. Looking at the TensorRT repo, the plugins all seem to pull in nvinfer and CUDA, which to my understanding I cannot use on this machine. Is my best option to rewrite the plugins so they don't use CUDA and nvinfer, or is there a better option that I am missing for running inference on a CPU-only device?

Similar to *Using pointpillar .onnx model*.