Error Code 1: Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match)

Hello, I trained a PointPillars model and built a TensorRT engine on a university computer. Now I want to use it on my own computer. When I try to run the node from GitHub - NVIDIA-AI-IOT/ros2_tao_pointpillars (ROS2 node for 3D object detection using TAO-PointPillars), it gives an "engine null" error:
[pp_infer-1] trt_infer: 1: [stdArchiveReader.cpp::StdArchiveReader::30] Error Code 1: Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match)
[pp_infer-1] trt_infer: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
[pp_infer-1] : engine null!
[ERROR] [pp_infer-1]: process has died [pid 6693, exit code 255, cmd '/home/osman/pointpillars_ws/install/pp_infer/lib/pp_infer/pp_infer --ros-args --params-file /tmp/launch_params_s11kn8uh -r /point_cloud:=/carla/ego_vehicle/lidar'].

On the forums I saw that this error can occur when the engine was serialized with a different TensorRT version than the one trying to deserialize it.
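To compare the two environments, I can at least print the TensorRT version on my machine through the Python bindings (a quick check, assuming the python3-libnvinfer package listed below provides them):

# Print the TensorRT version the Python bindings see
python3 -c "import tensorrt; print(tensorrt.__version__)"

Given the packages below, this should report 8.2.5 on my PC; the engine from the university computer was presumably serialized with a different version, which would explain the magic-tag mismatch.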

How can I make this engine usable on my own PC?

Installed TensorRT packages on my computer:
dpkg -l | grep nvinfer
ii libnvinfer-bin 8.2.5-1+cuda11.4 amd64 TensorRT binaries
ii libnvinfer-dev 8.2.5-1+cuda11.4 amd64 TensorRT development libraries and headers
ii libnvinfer-doc 8.2.5-1+cuda11.4 all TensorRT documentation
ii libnvinfer-lean10 10.7.0.23-1+cuda12.6 amd64 TensorRT lean runtime library
ii libnvinfer-plugin-dev 8.2.5-1+cuda11.4 amd64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.2.5-1+cuda11.4 amd64 TensorRT plugin libraries
ii libnvinfer-samples 8.2.5-1+cuda11.4 all TensorRT samples
ii libnvinfer-vc-plugin10 10.7.0.23-1+cuda12.6 amd64 TensorRT vc-plugin library
ii libnvinfer8 8.2.5-1+cuda11.4 amd64 TensorRT runtime libraries
ii python3-libnvinfer 8.2.5-1+cuda11.4 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.2.5-1+cuda11.4 amd64 Python 3 development package for TensorRT
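I also notice this list mixes 8.2.5-1+cuda11.4 and 10.7.0.23-1+cuda12.6 packages. To check which libnvinfer the node actually loads at runtime, I can inspect the binary using the path from the crash log above (diagnostic sketch):

# Show which TensorRT runtime libraries pp_infer resolves to
ldd /home/osman/pointpillars_ws/install/pp_infer/lib/pp_infer/pp_infer | grep nvinfer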

There is no TensorRT installed on the school computer, and its CUDA version is 12.0.

I would appreciate help as soon as possible.

When I tried to regenerate the engine from the ONNX model using trtexec, I got this error:
[02/01/2025-10:38:35] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/01/2025-10:38:35] [I] [TRT] No importer registered for op: VoxelGeneratorPlugin. Attempting to import as plugin.
[02/01/2025-10:38:35] [I] [TRT] Searching for plugin: VoxelGeneratorPlugin, plugin_version: 1, plugin_namespace:
[02/01/2025-10:38:35] [E] [TRT] ModelImporter.cpp:773: While parsing node number 0 [VoxelGeneratorPlugin -> "voxels"]:
[02/01/2025-10:38:35] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[02/01/2025-10:38:35] [E] [TRT] ModelImporter.cpp:775: input: "points"
input: "num_points"
output: "voxels"
output: "voxel_coords"
output: "num_pillar"
name: "VoxelGeneratorPlugin_0"
op_type: "VoxelGeneratorPlugin"
attribute {
  name: "max_voxels"
  i: 10000
  type: INT
}
attribute {
  name: "max_num_points_per_voxel"
  i: 32
  type: INT
}
attribute {
  name: "voxel_feature_num"
  i: 10
  type: INT
}
attribute {
  name: "point_cloud_range"
  floats: 0
  floats: -39.68
  floats: -3
  floats: 69.12
  floats: 39.68
  floats: 1
  type: FLOATS
}
attribute {
  name: "voxel_size"
  floats: 0.16
  floats: 0.16
  floats: 4
  type: FLOATS
}

[02/01/2025-10:38:35] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[02/01/2025-10:38:35] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4871 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[02/01/2025-10:38:35] [E] Failed to parse onnx file
[02/01/2025-10:38:35] [I] Finish parsing network model
[02/01/2025-10:38:35] [E] Parsing model failed
[02/01/2025-10:38:35] [E] Failed to create engine from model.
[02/01/2025-10:38:35] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8205] # trtexec --onnx=checkpoint_epoch_80.onnx --buildOnly --saveEngine=best.engine
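The failed assertion says the VoxelGeneratorPlugin creator was not found; that op is a custom PointPillars plugin rather than part of the stock libnvinfer_plugin, so trtexec needs the plugin library preloaded. trtexec accepts a --plugins option for this; below is a sketch of the rebuild I plan to try (the .so path is a placeholder for wherever the PointPillars plugin library gets built, not a file name I have verified):

# Rebuild the engine locally, preloading the shared library that
# registers VoxelGeneratorPlugin (path below is a placeholder)
trtexec --onnx=checkpoint_epoch_80.onnx \
        --buildOnly \
        --saveEngine=best.engine \
        --plugins=/path/to/libpointpillars_plugin.so

The same library would also need to be loaded when the pp_infer node deserializes the engine, since deserialization looks up the plugin creator again.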