Please provide the following info (check/uncheck the boxes after clicking “+ Create Topic”):
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
Target Operating System
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
SDK Manager Version
Host Machine Version
native Ubuntu 18.04
Dear support team,
- I’ve uploaded the TensorRT-22.214.171.124 (Linux x86) release, including /samples, to the target platform DRIVE AGX.
- Installed the necessary dependencies (CUDA v11.1) for compiling /samples.
- The NVIDIA example
TensorRT/sampleOnnxMNIST.cpp at master · NVIDIA/TensorRT · GitHub
compiled successfully for the aarch64 architecture.
After running the binary sample_onnx_mnist, I got:
Thread 1 “sample_onnx_mni” received signal SIGSEGV, Segmentation fault.
0x0000007fb3ac2278 in vtable for __cxxabiv1::__si_class_type_info ()
From my investigation, it happens in SampleOnnxMNIST::infer(), when the buffers for the created engine are being initialized.
I got a similar error with my own project, which works correctly on the host PC.
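For reference, the crash point corresponds to the top of infer() in the sample (quoted, lightly trimmed, from the sampleOnnxMNIST.cpp linked above; exact constructor arguments may differ between releases):

```cpp
bool SampleOnnxMNIST::infer()
{
    // RAII buffer manager: allocates host and device memory for every
    // engine binding. The SIGSEGV fires while this object is constructed.
    samplesCommon::BufferManager buffers(mEngine);

    auto context = SampleUniquePtr<nvinfer1::IExecutionContext>(
        mEngine->createExecutionContext());
    if (!context)
    {
        return false;
    }
    // ... input copy, execution, and output copy follow here ...
}
```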
In the Release Notes for DRIVE OS Linux 126.96.36.199, I found that the flashed version ships with TensorRT 188.8.131.52.
Could you please help me understand the options for using the TensorRT C++ API on the DRIVE AGX platform:
- It looks like, due to HW differences between the host PC (Ubuntu with a GTX 1070) and DRIVE AGX, I can’t allocate GPU buffers for inference.
Can I use the TensorRT C++ API on DRIVE AGX in the same way as on a host PC? (See the minimal allocation check after this list.)
- I’ve found the API guide for the DRIVE AGX platform.
Do I have to use only the C++ API mentioned there?
- How does user data get passed down to the GPU on the SoC? Are special libraries, drivers, etc. involved? (My current understanding is sketched after this list.)
- Can I use the DRIVE AGX environment for Yolov + ONNX inference tasks, or should that be a different platform, such as Jetson AGX Xavier?
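To narrow down the first question, I put together a minimal standalone check (my own sketch, not from the sample; it assumes only the CUDA runtime shipped with DRIVE OS). If this fails on the target, the problem is below TensorRT:

```cpp
// gpu_check.cu - minimal sanity check for device enumeration and allocation.
// Build on the target: nvcc -o gpu_check gpu_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess || deviceCount == 0)
    {
        std::printf("No usable CUDA device: %s\n", cudaGetErrorString(err));
        return 1;
    }

    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0);
    std::printf("Device 0: %s (integrated: %d)\n", prop.name, prop.integrated);

    // The same kind of device allocation the BufferManager performs.
    void* devPtr = nullptr;
    err = cudaMalloc(&devPtr, 1 << 20); // 1 MiB
    if (err != cudaSuccess)
    {
        std::printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaFree(devPtr);
    std::printf("cudaMalloc/cudaFree OK\n");
    return 0;
}
```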
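And to make the data-path question concrete, here is my current understanding written as a sketch (my own code, not from any NVIDIA sample; executeV2 exists in TensorRT 7+, older releases use execute(batchSize, bindings) instead):

```cpp
// Sketch: explicit host -> device -> host data flow around one inference.
// Assumes a built engine with one input and one output binding.
#include <vector>
#include <cuda_runtime.h>
#include <NvInfer.h>

void runOnce(nvinfer1::ICudaEngine& engine,
             const std::vector<float>& input, std::vector<float>& output)
{
    nvinfer1::IExecutionContext* context = engine.createExecutionContext();

    void* dInput = nullptr;
    void* dOutput = nullptr;
    cudaMalloc(&dInput, input.size() * sizeof(float));
    cudaMalloc(&dOutput, output.size() * sizeof(float));

    // User data goes to the GPU through a plain CUDA runtime copy...
    cudaMemcpy(dInput, input.data(), input.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    // ...and TensorRT launches the kernels on the bound device memory.
    void* bindings[] = {dInput, dOutput}; // binding 0 = input, 1 = output here
    context->executeV2(bindings);

    // Results come back the same way.
    cudaMemcpy(output.data(), dOutput, output.size() * sizeof(float),
               cudaMemcpyDeviceToHost);

    cudaFree(dInput);
    cudaFree(dOutput);
    context->destroy(); // pre-8.x API; newer TensorRT uses delete
}
```

If that mental model is wrong for DRIVE AGX (e.g. if the integrated GPU needs a different allocation path), that is exactly what I’d like to understand.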
I’ve attached the full failure log and environment information in DriveAGXlogs.tar.gz:
DriveAGXlogs.tar.gz (1.4 KB)
I appreciate your help with this issue.