What libstdc++ ABI does TensorRT build with?

The TensorRT docs say that TensorRT is built with GCC 4.8, but they do not specify which libstdc++ ABI (see Dual ABI) TensorRT uses.

That it is compiled with GCC 4.8 would suggest the old ABI. However, we’re seeing segfaults related to TensorRT’s shared libraries and symbol resolution order (and we use the old ABI).

The fact that CUDA is only supported on Ubuntu 16.04/17.04 (together with our segfaults) suggests the newer ABI.

Does TensorRT compile against the newer (GCC 5.1+) ABI?

Linkers usually pick the correct library for the ABI in use. Whether standards such as C++11 or C++14 are used in a way that makes the ABI matter, I don’t know. I’d be far less surprised by a link missing a symbol than by it resolving to the wrong library. I don’t know in this particular case, but you may be interested in using “abi-dumper”. You can then look at the “Compiler” listed for the build; e.g., if the standard was C++14 it will say something like this:

'Compiler' => 'GNU C++14 7.2.1 20170915 (Red Hat 7.2.1-2) -mtune=generic -march=x86-64 -g -std=c++14'

I’m fairly confident that our link is not missing a symbol: no matter what, our binary compiles and links successfully. The issue is that changing TensorRT’s position in the link order controls whether we get a segfault at runtime.

Thanks for pointing me to abi-dumper! I didn’t know about this tool. Unfortunately, when I ran it on libnvinfer.so (TensorRT 3.0.1) it didn’t output a “Compiler” line.

Ideally I was hoping a person familiar with the TensorRT build process could comment.

Hi,

We suppose that TensorRT is built with the default GCC compiler.
That should be GCC 5.4, since Jetson uses Ubuntu 16.04.

We are checking this issue with the internal team and will update you later.

Thanks.