Qt + TensorRT error: RTTI symbol not found for class 'nvinfer1::Builder'

I ported one of the official samples into Qt. The program builds, but when the debugger steps over the following place (i.e. IBuilder* builder = createInferBuilder(gLogger);), an error occurs.


void caffeToGIEModel(const std::string& deployFile,                 // name for caffe prototxt
                     const std::string& modelFile,                  // name for model
                     const std::vector<std::string>& outputs,       // network outputs
                     unsigned int maxBatchSize,                     // batch size - NB must be at least as large as the batch we want to run with
                     nvcaffeparser1::IPluginFactory* pluginFactory, // factory for plugin layers
                     IHostMemory** gieModelStream)                  // output stream for the GIE model
{
    // create the builder
    IBuilder* builder = createInferBuilder(gLogger);

    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();
    parser->setPluginFactory(pluginFactory);

    // ...
}
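(For context: gLogger in these samples is a global instance of a class derived from nvinfer1::ILogger, which createInferBuilder() requires. A minimal sketch, assuming the TensorRT 3.x-era API this sample targets:)

#include "NvInfer.h"
#include <iostream>

// Minimal logger for createInferBuilder(); prints everything except INFO messages.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;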

The error is as follows (repeated several times in the debugger output):

RTTI symbol not found for class ‘nvinfer1::Builder’

But in the .pro file, I have added the .so files, as shown below:

INCLUDEPATH += /usr/include/aarch64-linux-gnu
LIBS += -L/usr/lib/aarch64-linux-gnu \
        -lnvinfer -lnvparsers -lnvinfer_plugin

Since the program can find NvInfer.h and the corresponding libnvinfer.so, why do the above errors happen?

I set the build mode to release and ran the program. However, when execution reaches the following code in the function caffeToGIEModel:

const IBlobNameToTensor* blobNameToTensor = parser->parse(deployFile.c_str(),
                                                          modelFile.c_str(),
                                                          *network,
                                                          DataType::kFLOAT);

the following errors happen:

Begin parsing model…
The program has unexpectedly finished.
/home/nvidia/lxm/tensorrt_mobilenet_ssd/build-mobilenet_ssd_tensorRT-JetsonTX2-Release/mobilenet_ssd_tensorRT crashed
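(As an aside: if I recall the caffe parser API correctly, parse() returns a null pointer when parsing fails, so dereferencing blobNameToTensor without a check turns a parse failure into a hard crash. A minimal defensive sketch; the error message text is my own:)

const IBlobNameToTensor* blobNameToTensor = parser->parse(deployFile.c_str(),
                                                          modelFile.c_str(),
                                                          *network,
                                                          DataType::kFLOAT);
if (!blobNameToTensor)
{
    std::cerr << "Caffe model parsing failed" << std::endl; // hypothetical message
    return;
}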

I have tried my best to solve the problem. Now a new error appears when the TensorRT parser parses the caffemodel, i.e. when my program steps over

const IBlobNameToTensor* blobNameToTensor = parser->parse(deployFile.c_str(), modelFile.c_str(), *network, DataType::kFLOAT);

The output is:

Begin parsing model…
ERROR: Parameter check failed at: Layers.h::PluginLayer::619, condition: (inputs[0]) != NULL
Plugin layer output count is not equal to caffe output count

The function is shown below:

void caffeToGIEModel(const std::string& deployFile, const std::string& modelFile,
                     const std::vector<std::string>& outputs, unsigned int maxBatchSize,
                     nvcaffeparser1::IPluginFactory* pluginFactory, IHostMemory** gieModelStream)
{
    // create the builder
    IBuilder* builder = createInferBuilder(gLogger);

    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();
    parser->setPluginFactory(pluginFactory);

    std::cout << "Begin parsing model..." << std::endl;

    const IBlobNameToTensor* blobNameToTensor = parser->parse(deployFile.c_str(),
                                                              modelFile.c_str(),
                                                              *network,
                                                              DataType::kFLOAT); // <-- the error happens when the program steps over here

    std::cout << "End parsing model..." << std::endl;

    // specify which tensors are outputs
    for (auto& s : outputs)
        network->markOutput(*blobNameToTensor->find(s.c_str()));

    // Build the engine
    builder->setMaxBatchSize(maxBatchSize);
    builder->setMaxWorkspaceSize(16 << 20);

    std::cout << "Begin building engine..." << std::endl;
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    assert(engine);
    std::cout << "End building engine..." << std::endl;

    // the network and parser are no longer needed once the engine is built
    network->destroy();
    parser->destroy();

    // serialize the engine, then shut everything down
    (*gieModelStream) = engine->serialize();

    engine->destroy();
    builder->destroy();
    shutdownProtobufLibrary();
}
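(A note on that error: the caffe parser consults the plugin factory for every layer it cannot handle natively, and "Plugin layer output count is not equal to caffe output count" indicates that getNbOutputs() of the returned plugin does not match the number of top blobs the layer declares in the prototxt. A skeleton of the old-style factory pairing, assuming the TensorRT 3.x plugin API; the layer name "my_custom_layer" and the stub bodies are illustrative assumptions, not the actual MobileNet-SSD plugins:)

#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <cassert>
#include <cstring>
#include <cuda_runtime_api.h>

// Illustrative plugin. getNbOutputs() must equal the number of top blobs the
// corresponding layer declares in the prototxt, otherwise the parser fails
// with "Plugin layer output count is not equal to caffe output count".
class MyPlugin : public nvinfer1::IPlugin
{
public:
    int getNbOutputs() const override { return 1; } // must match the caffe layer's top count

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs, int nbInputDims) override
    {
        assert(index == 0 && nbInputDims >= 1);
        return inputs[0]; // placeholder: pass the input shape through
    }

    void configure(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, int) override {}
    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int) const override { return 0; }
    int enqueue(int, const void* const*, void**, void*, cudaStream_t) override { return 0; } // no-op stub
    size_t getSerializationSize() override { return 0; }
    void serialize(void*) override {}
};

// Factory consulted by the caffe parser for every layer it cannot parse itself.
class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* layerName) override
    {
        return std::strcmp(layerName, "my_custom_layer") == 0; // hypothetical layer name
    }

    nvinfer1::IPlugin* createPlugin(const char* layerName, const nvinfer1::Weights*, int) override
    {
        assert(isPlugin(layerName));
        return new MyPlugin; // ownership/cleanup omitted in this sketch
    }
};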

Hi,

Before integrating with Qt, could you check whether your model is fully supported by TensorRT first?

cp -r /usr/src/tensorrt/ .
cd tensorrt/samples/
make
cd ../bin/
./giexec --deploy=/path/to/prototxt --output=/name/of/output

Thanks.

Hi,

Looks like you have filed a new topic for comment #3.
Let’s track it in detail on topic 1031626:
https://devtalk.nvidia.com/default/topic/1031626

Thanks.

Hi, can you tell me how to configure Qt if I want to use TensorRT with it?
I have trouble with header files; for example, cuda_runtime_api.h can’t be found.

Hi,

cuda_runtime_api.h can be found in /usr/local/cuda/include/.
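For a Qt build, you can add the CUDA paths to the .pro file alongside the TensorRT ones, e.g. (a sketch assuming the default JetPack install locations):

INCLUDEPATH += /usr/local/cuda/include
LIBS += -L/usr/local/cuda/lib64 -lcudart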
Thanks.