trtexec: building an engine from the deployable ONNX for PointPillars object detection

Description

I downloaded the deployable .onnx file to create a .trt engine using trtexec.
I followed the steps and ran the command below:
trtexec --onnx=/home/osman/Downloads/pointpillarnet_deployable_v1.1/pointpillars_deployable.onnx \
  --maxShapes=points:1x204800x4,num_points:1 \
  --minShapes=points:1x204800x4,num_points:1 \
  --optShapes=points:1x204800x4,num_points:1 \
  --fp16 \
  --saveEngine=/home/osman/model.engine

Environment

TensorRT Version: 8.2.5
GPU Type: GTX1650
Nvidia Driver Version: 470
CUDA Version: 11.4
CUDNN Version: 8.2
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8.20

I get the output below:
RUNNING TensorRT.trtexec [TensorRT v8205] # trtexec --onnx=/home/osman/Downloads/pointpillarnet_deployable_v1.1/pointpillars_deployable.onnx --maxShapes=points:1x204800x4,num_points:1 --minShapes=points:1x204800x4,num_points:1 --optShapes=points:1x204800x4,num_points:1 --fp16 --saveEngine=/home/osman/model.engine
[02/01/2025-12:50:48] [I] === Model Options ===
[02/01/2025-12:50:48] [I] Format: ONNX
[02/01/2025-12:50:48] [I] Model: /home/osman/Downloads/pointpillarnet_deployable_v1.1/pointpillars_deployable.onnx
[02/01/2025-12:50:48] [I] Output:
[02/01/2025-12:50:48] [I] === Build Options ===
[02/01/2025-12:50:48] [I] Max batch: explicit batch
[02/01/2025-12:50:48] [I] Workspace: 16 MiB
[02/01/2025-12:50:48] [I] minTiming: 1
[02/01/2025-12:50:48] [I] avgTiming: 8
[02/01/2025-12:50:48] [I] Precision: FP32+FP16
[02/01/2025-12:50:48] [I] Calibration:
[02/01/2025-12:50:48] [I] Refit: Disabled
[02/01/2025-12:50:48] [I] Sparsity: Disabled
[02/01/2025-12:50:48] [I] Safe mode: Disabled
[02/01/2025-12:50:48] [I] DirectIO mode: Disabled
[02/01/2025-12:50:48] [I] Restricted mode: Disabled
[02/01/2025-12:50:48] [I] Save engine: /home/osman/model.engine
[02/01/2025-12:50:48] [I] Load engine:
[02/01/2025-12:50:48] [I] Profiling verbosity: 0
[02/01/2025-12:50:48] [I] Tactic sources: Using default tactic sources
[02/01/2025-12:50:48] [I] timingCacheMode: local
[02/01/2025-12:50:48] [I] timingCacheFile:
[02/01/2025-12:50:48] [I] Input(s)s format: fp32:CHW
[02/01/2025-12:50:48] [I] Output(s)s format: fp32:CHW
[02/01/2025-12:50:48] [I] Input build shape: num_points=1+1+1
[02/01/2025-12:50:48] [I] Input build shape: points=1x204800x4+1x204800x4+1x204800x4
[02/01/2025-12:50:48] [I] Input calibration shapes: model
[02/01/2025-12:50:48] [I] === System Options ===
[02/01/2025-12:50:48] [I] Device: 0
[02/01/2025-12:50:48] [I] DLACore:
[02/01/2025-12:50:48] [I] Plugins:
[02/01/2025-12:50:48] [I] === Inference Options ===
[02/01/2025-12:50:48] [I] Batch: Explicit
[02/01/2025-12:50:48] [I] Input inference shape: points=1x204800x4
[02/01/2025-12:50:48] [I] Input inference shape: num_points=1
[02/01/2025-12:50:48] [I] Iterations: 10
[02/01/2025-12:50:48] [I] Duration: 3s (+ 200ms warm up)
[02/01/2025-12:50:48] [I] Sleep time: 0ms
[02/01/2025-12:50:48] [I] Idle time: 0ms
[02/01/2025-12:50:48] [I] Streams: 1
[02/01/2025-12:50:48] [I] ExposeDMA: Disabled
[02/01/2025-12:50:48] [I] Data transfers: Enabled
[02/01/2025-12:50:48] [I] Spin-wait: Disabled
[02/01/2025-12:50:48] [I] Multithreading: Disabled
[02/01/2025-12:50:48] [I] CUDA Graph: Disabled
[02/01/2025-12:50:48] [I] Separate profiling: Disabled
[02/01/2025-12:50:48] [I] Time Deserialize: Disabled
[02/01/2025-12:50:48] [I] Time Refit: Disabled
[02/01/2025-12:50:48] [I] Skip inference: Disabled
[02/01/2025-12:50:48] [I] Inputs:
[02/01/2025-12:50:48] [I] === Reporting Options ===
[02/01/2025-12:50:48] [I] Verbose: Disabled
[02/01/2025-12:50:48] [I] Averages: 10 inferences
[02/01/2025-12:50:48] [I] Percentile: 99
[02/01/2025-12:50:48] [I] Dump refittable layers:Disabled
[02/01/2025-12:50:48] [I] Dump output: Disabled
[02/01/2025-12:50:48] [I] Profile: Disabled
[02/01/2025-12:50:48] [I] Export timing to JSON file:
[02/01/2025-12:50:48] [I] Export output to JSON file:
[02/01/2025-12:50:48] [I] Export profile to JSON file:
[02/01/2025-12:50:48] [I]
[02/01/2025-12:50:48] [I] === Device Information ===
[02/01/2025-12:50:48] [I] Selected Device: NVIDIA GeForce GTX 1650
[02/01/2025-12:50:48] [I] Compute Capability: 7.5
[02/01/2025-12:50:48] [I] SMs: 14
[02/01/2025-12:50:48] [I] Compute Clock Rate: 1.515 GHz
[02/01/2025-12:50:48] [I] Device Global Memory: 3911 MiB
[02/01/2025-12:50:48] [I] Shared Memory per SM: 64 KiB
[02/01/2025-12:50:48] [I] Memory Bus Width: 128 bits (ECC disabled)
[02/01/2025-12:50:48] [I] Memory Clock Rate: 6.001 GHz
[02/01/2025-12:50:48] [I]
[02/01/2025-12:50:48] [I] TensorRT version: 8.2.5
[02/01/2025-12:50:48] [I] [TRT] [MemUsageChange] Init CUDA: CPU +336, GPU +0, now: CPU 348, GPU 202 (MiB)
[02/01/2025-12:50:48] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 348 MiB, GPU 202 MiB
[02/01/2025-12:50:49] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 483 MiB, GPU 234 MiB
[02/01/2025-12:50:49] [I] Start parsing network model
[02/01/2025-12:50:49] [I] [TRT] ----------------------------------------------------------------
[02/01/2025-12:50:49] [I] [TRT] Input filename: /home/osman/Downloads/pointpillarnet_deployable_v1.1/pointpillars_deployable.onnx
[02/01/2025-12:50:49] [I] [TRT] ONNX IR version: 0.0.8
[02/01/2025-12:50:49] [I] [TRT] Opset version: 11
[02/01/2025-12:50:49] [I] [TRT] Producer name:
[02/01/2025-12:50:49] [I] [TRT] Producer version:
[02/01/2025-12:50:49] [I] [TRT] Domain:
[02/01/2025-12:50:49] [I] [TRT] Model version: 0
[02/01/2025-12:50:49] [I] [TRT] Doc string:
[02/01/2025-12:50:49] [I] [TRT] ----------------------------------------------------------------
[02/01/2025-12:50:49] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/01/2025-12:50:49] [I] [TRT] No importer registered for op: VoxelGeneratorPlugin. Attempting to import as plugin.
[02/01/2025-12:50:49] [I] [TRT] Searching for plugin: VoxelGeneratorPlugin, plugin_version: 1, plugin_namespace:
[02/01/2025-12:50:49] [E] [TRT] ModelImporter.cpp:773: While parsing node number 0 [VoxelGeneratorPlugin -> "voxels"]:
[02/01/2025-12:50:49] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[02/01/2025-12:50:49] [E] [TRT] ModelImporter.cpp:775: input: "points"
input: "num_points"
output: "voxels"
output: "voxel_coords"
output: "num_pillar"
name: "VoxelGeneratorPlugin_0"
op_type: "VoxelGeneratorPlugin"
attribute {
  name: "max_voxels"
  i: 10000
  type: INT
}
attribute {
  name: "max_num_points_per_voxel"
  i: 32
  type: INT
}
attribute {
  name: "voxel_feature_num"
  i: 10
  type: INT
}
attribute {
  name: "point_cloud_range"
  floats: -51.2
  floats: -51.2
  floats: -1.4
  floats: 51.2
  floats: 51.2
  floats: 4.4
  type: FLOATS
}
attribute {
  name: "voxel_size"
  floats: 0.2
  floats: 0.2
  floats: 5.8
  type: FLOATS
}

[02/01/2025-12:50:49] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[02/01/2025-12:50:49] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4871 In function importFallbackPluginImporter:
[8] Assertion failed: creator && “Plugin not found, are the plugin name, version, and namespace correct?”
[02/01/2025-12:50:49] [E] Failed to parse onnx file
[02/01/2025-12:50:49] [I] Finish parsing network model
[02/01/2025-12:50:49] [E] Parsing model failed
[02/01/2025-12:50:49] [E] Failed to create engine from model.
[02/01/2025-12:50:49] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8205] # trtexec --onnx=/home/osman/Downloads/pointpillarnet_deployable_v1.1/pointpillars_deployable.onnx --maxShapes=points:1x204800x4,num_points:1 --minShapes=points:1x204800x4,num_points:1 --optShapes=points:1x204800x4,num_points:1 --fp16 --saveEngine=/home/osman/model.engine
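The failing assertion ("Plugin not found") means VoxelGeneratorPlugin is not part of stock TensorRT 8.2 — it ships with NVIDIA's CUDA-PointPillars sample, so the plugin library has to be built and handed to trtexec (via its --plugins flag) before the ONNX parser runs. A sketch of one way to do that — the repository URL is the public CUDA-PointPillars repo, but the build layout and the library name libpointpillar_core.so are assumptions; check what your build actually produces:

```shell
# Build the PointPillars plugin library (VoxelGeneratorPlugin,
# PillarScatterPlugin, etc.) from NVIDIA's CUDA-PointPillars sample.
git clone https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars.git
cd CUDA-PointPillars && mkdir -p build && cd build
cmake .. && make -j"$(nproc)"

# Pass the resulting .so to trtexec so the plugin registry can resolve
# VoxelGeneratorPlugin before parsing the ONNX model.
trtexec --onnx=/home/osman/Downloads/pointpillarnet_deployable_v1.1/pointpillars_deployable.onnx \
        --plugins=./libpointpillar_core.so \
        --minShapes=points:1x204800x4,num_points:1 \
        --optShapes=points:1x204800x4,num_points:1 \
        --maxShapes=points:1x204800x4,num_points:1 \
        --fp16 \
        --saveEngine=/home/osman/model.engine
```

If trtexec then logs "Searching for plugin: VoxelGeneratorPlugin" followed by a successful import instead of the assertion, the registration worked.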

What should I change?

I also tried running tao-converter with the deployable .etlt file from NGC, as described in the tutorials, but that didn't work either.
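For the .etlt route, tao-converter needs the model key from the NGC model card plus an explicit optimization profile for each dynamic input. A sketch, assuming $KEY holds the key published on the PointPillarNet model card and the .etlt file sits next to the .onnx (both are assumptions — verify the flags against your tao-converter's -h output):

```shell
# -k: model key from the NGC model card (assumption: stored in $KEY)
# -p: per-input profile as <name>,<min>,<opt>,<max>
# -t: engine precision; -e: output engine path
tao-converter -k "$KEY" \
              -e /home/osman/model.engine \
              -p points,1x204800x4,1x204800x4,1x204800x4 \
              -p num_points,1,1,1 \
              -t fp16 \
              /home/osman/Downloads/pointpillarnet_deployable_v1.1/pointpillars_deployable.etlt
```

Note that tao-converter also needs the same plugin library on its library path (e.g. via LD_PRELOAD of the plugin .so), since the .etlt graph contains the identical VoxelGeneratorPlugin node.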