Hello, I have trained PointPillars and deployed the engine on a university computer. Now I want to use it on my own computer. When I try to run the node from this link: GitHub - NVIDIA-AI-IOT/ros2_tao_pointpillars: ROS2 node for 3D object detection using TAO-PointPillars, it fails with an "engine is null" error:
[pp_infer-1] trt_infer: 1: [stdArchiveReader.cpp::StdArchiveReader::30] Error Code 1: Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match)
[pp_infer-1] trt_infer: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
[pp_infer-1] : engine null!
[ERROR] [pp_infer-1]: process has died [pid 6693, exit code 255, cmd '/home/osman/pointpillars_ws/install/pp_infer/lib/pp_infer/pp_infer --ros-args --params-file /tmp/launch_params_s11kn8uh -r /point_cloud:=/carla/ego_vehicle/lidar'].
On the forums I saw that this might be due to a TensorRT version mismatch.
How can I make this engine usable on my own PC?
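To rule out the ROS node itself, I think I can try to load the copied engine directly with trtexec from libnvinfer-bin (I believe it lives under /usr/src/tensorrt/bin on Debian installs) and also print the TensorRT version the local Python bindings report. This is only a sketch of what I plan to run; the engine path is a placeholder for wherever I copied the file:

# print the TensorRT version the local Python bindings see
python3 -c "import tensorrt as trt; print(trt.__version__)"

# try to deserialize the copied engine outside ROS; I expect the same magic-tag error here
/usr/src/tensorrt/bin/trtexec --loadEngine=/path/to/pointpillars.engine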
Installed packages on my computer:
dpkg -l | grep nvinfer
ii libnvinfer-bin 8.2.5-1+cuda11.4 amd64 TensorRT binaries
ii libnvinfer-dev 8.2.5-1+cuda11.4 amd64 TensorRT development libraries and headers
ii libnvinfer-doc 8.2.5-1+cuda11.4 all TensorRT documentation
ii libnvinfer-lean10 10.7.0.23-1+cuda12.6 amd64 TensorRT lean runtime library
ii libnvinfer-plugin-dev 8.2.5-1+cuda11.4 amd64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.2.5-1+cuda11.4 amd64 TensorRT plugin libraries
ii libnvinfer-samples 8.2.5-1+cuda11.4 all TensorRT samples
ii libnvinfer-vc-plugin10 10.7.0.23-1+cuda12.6 amd64 TensorRT vc-plugin library
ii libnvinfer8 8.2.5-1+cuda11.4 amd64 TensorRT runtime libraries
ii python3-libnvinfer 8.2.5-1+cuda11.4 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.2.5-1+cuda11.4 amd64 Python 3 development package for TensorRT
There is no TensorRT installed on the school computer, and its CUDA version is 12.0.
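If a serialized engine cannot be moved between different TensorRT versions, is rebuilding the engine on my own PC the right fix? Assuming I can copy the exported model file from the university machine, I am guessing it would look something like the command below (filenames are placeholders; if the export is an .etlt instead of an .onnx, I understand I would need tao-converter instead of trtexec):

/usr/src/tensorrt/bin/trtexec --onnx=pointpillars.onnx \
                              --saveEngine=pointpillars_fp16.engine \
                              --fp16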
I would be happy if you could help me as soon as possible.