TensorRT sample in Qt Creator: error: undefined reference to 'gLogInfo'

Hi all,
I tried to port the TensorRT sample (sampleMNIST) to Qt, but I get errors, most of which are [undefined reference to ‘gLogInfo’ or ‘gLogger’].

The following is my main code (I only changed the include paths):

[code]#include "/usr/include/aarch64-linux-gnu/NvCaffeParser.h"
#include "/usr/include/aarch64-linux-gnu/NvInfer.h"
#include "common/logger.h"
#include "common/common.h"
#include "common/argsParser.h"
#include "/usr/local/cuda-10.0/targets/aarch64-linux/include/cuda_runtime_api.h"

#include <cassert>
#include <cmath>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <map>

#ifdef _MSC_VER
#include <direct.h>
#endif
#include <sys/stat.h>

// stuff we know about the network and the input/output blobs
static const int INPUT_H = 28;
static const int INPUT_W = 28;
static const int OUTPUT_SIZE = 10;
samplesCommon::Args gArgs;

const char* INPUT_BLOB_NAME = "data";
const char* OUTPUT_BLOB_NAME = "prob";

using namespace nvinfer1;
using namespace nvcaffeparser1;

const std::string gSampleName = "TensorRT.sample_mnist_api";

// Load weights from files shared with TensorRT samples.
// TensorRT weight files have a simple space delimited format:
// [type]
std::map<std::string, Weights> loadWeights(const std::string file)
{
    gLogInfo << "Loading weights: " << file << std::endl;
    std::map<std::string, Weights> weightMap;

    // Open weights file
    std::ifstream input(file);
    assert(input.is_open() && "Unable to load weight file.");

    // Read number of weight blobs
    int32_t count;
    input >> count;
    assert(count > 0 && "Invalid weight map file.");

    while (count--)
    {
        Weights wt{DataType::kFLOAT, nullptr, 0};
        uint32_t type, size;

        // Read name and type of blob
        std::string name;
        input >> name >> std::dec >> type >> size;
        wt.type = static_cast<DataType>(type);

        // Load blob
        if (wt.type == DataType::kFLOAT)
        {
            uint32_t* val = reinterpret_cast<uint32_t*>(malloc(sizeof(val) * size));
            for (uint32_t x = 0, y = size; x < y; ++x)
            {
                input >> std::hex >> val[x];
            }
            wt.values = val;
        }
        else if (wt.type == DataType::kHALF)
        {
            uint16_t* val = reinterpret_cast<uint16_t*>(malloc(sizeof(val) * size));
            for (uint32_t x = 0, y = size; x < y; ++x)
            {
                input >> std::hex >> val[x];
            }
            wt.values = val;
        }

        wt.count = size;
        weightMap[name] = wt;
    }

    return weightMap;
}


// simple PGM (portable greyscale map) reader
void readPGMFile(const std::string& filename, uint8_t buffer[INPUT_H * INPUT_W])
{
    readPGMFile(locateFile(filename, gArgs.dataDirs), buffer, INPUT_H, INPUT_W);
}

// Create the engine using only the API and not any parser.
ICudaEngine* createMNISTEngine(unsigned int maxBatchSize, IBuilder* builder, DataType dt)
{
    INetworkDefinition* network = builder->createNetwork();

    // Create input tensor of shape { 1, 1, 28, 28 } with name INPUT_BLOB_NAME
    ITensor* data = network->addInput(INPUT_BLOB_NAME, dt, Dims3{1, INPUT_H, INPUT_W});

    // Create scale layer with default power/shift and specified scale parameter.
    const float scaleParam = 0.0125f;
    const Weights power{DataType::kFLOAT, nullptr, 0};
    const Weights shift{DataType::kFLOAT, nullptr, 0};
    const Weights scale{DataType::kFLOAT, &scaleParam, 1};
    IScaleLayer* scale_1 = network->addScale(*data, ScaleMode::kUNIFORM, shift, scale, power);

    // Add convolution layer with 20 outputs and a 5x5 filter.
    std::map<std::string, Weights> weightMap = loadWeights(locateFile("mnistapi.wts", gArgs.dataDirs));
    IConvolutionLayer* conv1 = network->addConvolution(*scale_1->getOutput(0), 20, DimsHW{5, 5}, weightMap["conv1filter"], weightMap["conv1bias"]);
    conv1->setStride(DimsHW{1, 1});

    // Add max pooling layer with stride of 2x2 and kernel size of 2x2.
    IPoolingLayer* pool1 = network->addPooling(*conv1->getOutput(0), PoolingType::kMAX, DimsHW{2, 2});
    pool1->setStride(DimsHW{2, 2});

    // Add second convolution layer with 50 outputs and a 5x5 filter.
    IConvolutionLayer* conv2 = network->addConvolution(*pool1->getOutput(0), 50, DimsHW{5, 5}, weightMap["conv2filter"], weightMap["conv2bias"]);
    conv2->setStride(DimsHW{1, 1});

    // Add second max pooling layer with stride of 2x2 and kernel size of 2x2
    IPoolingLayer* pool2 = network->addPooling(*conv2->getOutput(0), PoolingType::kMAX, DimsHW{2, 2});
    pool2->setStride(DimsHW{2, 2});

    // Add fully connected layer with 500 outputs.
    IFullyConnectedLayer* ip1 = network->addFullyConnected(*pool2->getOutput(0), 500, weightMap["ip1filter"], weightMap["ip1bias"]);

    // Add activation layer using the ReLU algorithm.
    IActivationLayer* relu1 = network->addActivation(*ip1->getOutput(0), ActivationType::kRELU);

    // Add second fully connected layer with 10 outputs.
    IFullyConnectedLayer* ip2 = network->addFullyConnected(*relu1->getOutput(0), OUTPUT_SIZE, weightMap["ip2filter"], weightMap["ip2bias"]);

    // Add softmax layer to determine the probability.
    ISoftMaxLayer* prob = network->addSoftMax(*ip2->getOutput(0));
    prob->getOutput(0)->setName(OUTPUT_BLOB_NAME);
    network->markOutput(*prob->getOutput(0));

    // Build engine
    builder->setMaxBatchSize(maxBatchSize);
    builder->setMaxWorkspaceSize(1 << 20);
    if (gArgs.runInInt8)
    {
        samplesCommon::setAllTensorScales(network, 127.0f, 127.0f);
    }
    samplesCommon::enableDLA(builder, gArgs.useDLACore);

    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Don't need the network any more
    network->destroy();

    // Release host memory
    for (auto& mem : weightMap)
    {
        free((void*) (mem.second.values));
    }

    return engine;
}


void APIToModel(unsigned int maxBatchSize, IHostMemory** modelStream)
{
    // Create builder
    IBuilder* builder = createInferBuilder(gLogger.getTRTLogger());
    assert(builder != nullptr);

    // Create model to populate the network, then set the outputs and create an engine
    ICudaEngine* engine = createMNISTEngine(maxBatchSize, builder, DataType::kFLOAT);
    assert(engine != nullptr);

    // Serialize the engine
    (*modelStream) = engine->serialize();

    // Close everything down
    engine->destroy();
    builder->destroy();
}


void doInference(IExecutionContext& context, float* input, float* output, int batchSize)
{
    const ICudaEngine& engine = context.getEngine();

    // Pointers to input and output device buffers to pass to engine.
    // Engine requires exactly IEngine::getNbBindings() number of buffers.
    assert(engine.getNbBindings() == 2);
    void* buffers[2];

    // In order to bind the buffers, we need to know the names of the input and output tensors.
    // Note that indices are guaranteed to be less than IEngine::getNbBindings()
    const int inputIndex = engine.getBindingIndex(INPUT_BLOB_NAME);
    const int outputIndex = engine.getBindingIndex(OUTPUT_BLOB_NAME);

    // Create GPU buffers on device
    CHECK(cudaMalloc(&buffers[inputIndex], batchSize * INPUT_H * INPUT_W * sizeof(float)));
    CHECK(cudaMalloc(&buffers[outputIndex], batchSize * OUTPUT_SIZE * sizeof(float)));

    // Create stream
    cudaStream_t stream;
    CHECK(cudaStreamCreate(&stream));

    // DMA input batch data to device, infer on the batch asynchronously, and DMA output back to host
    CHECK(cudaMemcpyAsync(buffers[inputIndex], input, batchSize * INPUT_H * INPUT_W * sizeof(float), cudaMemcpyHostToDevice, stream));
    context.enqueue(batchSize, buffers, stream, nullptr);
    CHECK(cudaMemcpyAsync(output, buffers[outputIndex], batchSize * OUTPUT_SIZE * sizeof(float), cudaMemcpyDeviceToHost, stream));
    cudaStreamSynchronize(stream);

    // Release stream and buffers
    cudaStreamDestroy(stream);
    CHECK(cudaFree(buffers[inputIndex]));
    CHECK(cudaFree(buffers[outputIndex]));
}


//! \brief This function prints the help information for running this sample
void printHelpInfo()
{
    std::cout << "Usage: ./sample_mnist_api [-h or --help] [-d or --datadir=<path to data directory>] [--useDLACore=<int>]\n";
    std::cout << "--help          Display help information\n";
    std::cout << "--datadir       Specify path to a data directory, overriding the default. This option can be used multiple times to add multiple directories. If no data directories are given, the default is to use (data/samples/mnist/, data/mnist/)" << std::endl;
    std::cout << "--useDLACore=N  Specify a DLA engine for layers that support DLA. Value can range from 0 to n-1, where n is the number of DLA engines on the platform." << std::endl;
    std::cout << "--int8          Run in Int8 mode.\n";
    std::cout << "--fp16          Run in FP16 mode.\n";
}

int main(int argc, char** argv)
{
    bool argsOK = samplesCommon::parseArgs(gArgs, argc, argv);
    if (gArgs.help)
    {
        printHelpInfo();
        return EXIT_SUCCESS;
    }
    if (!argsOK)
    {
        gLogError << "Invalid arguments" << std::endl;
        printHelpInfo();
        return EXIT_FAILURE;
    }
    if (gArgs.dataDirs.empty())
    {
        gArgs.dataDirs = std::vector<std::string>{"data/samples/mnist/", "data/mnist/"};
    }

    auto sampleTest = gLogger.defineTest(gSampleName, argc, const_cast<const char**>(argv));

    gLogger.reportTestStart(sampleTest);

    // create a model using the API directly and serialize it to a stream
    IHostMemory* modelStream{nullptr};
    APIToModel(1, &modelStream);
    assert(modelStream != nullptr);

    // Read random digit file
    uint8_t fileData[INPUT_H * INPUT_W];
    const int num = rand() % 10;
    readPGMFile(std::to_string(num) + ".pgm", fileData);

    // Print ASCII representation of digit image
    gLogInfo << "Input:\n";
    for (int i = 0; i < INPUT_H * INPUT_W; i++)
    {
        gLogInfo << (" .:-=+*#%@"[fileData[i] / 26]) << (((i + 1) % INPUT_W) ? "" : "\n");
    }
    gLogInfo << std::endl;

    // Parse mean file
    ICaffeParser* parser = createCaffeParser();
    assert(parser != nullptr);
    IBinaryProtoBlob* meanBlob = parser->parseBinaryProto(locateFile("mnist_mean.binaryproto", gArgs.dataDirs).c_str());
    parser->destroy();
    const float* meanData = reinterpret_cast<const float*>(meanBlob->getData());

    // Subtract mean from image
    float data[INPUT_H * INPUT_W];
    for (int i = 0; i < INPUT_H * INPUT_W; i++)
    {
        data[i] = float(fileData[i]) - meanData[i];
    }
    meanBlob->destroy();

    IRuntime* runtime = createInferRuntime(gLogger.getTRTLogger());
    assert(runtime != nullptr);
    if (gArgs.useDLACore >= 0)
    {
        runtime->setDLACore(gArgs.useDLACore);
    }
    ICudaEngine* engine = runtime->deserializeCudaEngine(modelStream->data(), modelStream->size(), nullptr);
    assert(engine != nullptr);
    modelStream->destroy();
    IExecutionContext* context = engine->createExecutionContext();
    assert(context != nullptr);

    // Run inference
    float prob[OUTPUT_SIZE];
    doInference(*context, data, prob, 1);

    // Destroy the engine
    context->destroy();
    engine->destroy();
    runtime->destroy();

    // Print histogram of the output distribution
    gLogInfo << "Output:\n";
    float val{0.0f};
    int idx{0};
    for (unsigned int i = 0; i < 10; i++)
    {
        val = std::max(val, prob[i]);
        if (val == prob[i])
        {
            idx = i;
        }
        gLogInfo << i << ": " << std::string(int(std::floor(prob[i] * 10 + 0.5f)), '*') << "\n";
    }
    gLogInfo << std::endl;

    bool pass{idx == num && val > 0.9f};

    return gLogger.reportTest(sampleTest, pass);
}
[/code]


and the .pro file:

[code]TEMPLATE = app
CONFIG += console c++11
CONFIG -= app_bundle
CONFIG -= qt

#sample make file
INCLUDEPATH += /usr/local/include

INCLUDEPATH += /usr/local/cuda-10.0/targets/aarch64-linux/lib
INCLUDEPATH += /usr/local/cuda-10.0/targets/aarch64-linux/include

#from Nv talk
INCLUDEPATH += /usr/include/aarch64-linux-gnu
LIBS += -L/usr/lib/aarch64-linux-gnu -lnvinfer -lnvparsers -lnvinfer_plugin ##
LIBS += -L/usr/local/cuda-10.0/targets/aarch64-linux/lib -lcudart ##

#LIBS += -L/usr/local/cuda-10.0/targets/aarch64-linux/include/cuda_runtime_api.h

SOURCES += main.cpp
[/code]

The full error report is:

/home/nvidia/wworkspace/Qt/build-trt_test-Desktop-Debug/main.o: In function `loadWeights(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)':
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:43: error: undefined reference to `gLogInfo'
/home/nvidia/wworkspace/Qt/build-trt_test-Desktop-Debug/main.o: In function `APIToModel(unsigned int, nvinfer1::IHostMemory**)':
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:182: error: undefined reference to `gLogger'
/home/nvidia/wworkspace/Qt/build-trt_test-Desktop-Debug/main.o: In function `main':
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:255: error: undefined reference to `gLogError'
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:280: error: undefined reference to `gLogInfo'
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:282: error: undefined reference to `gLogInfo'
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:283: error: more undefined references to `gLogInfo' follow
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:300: error: undefined reference to `gLogger'
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:322: error: undefined reference to `gLogInfo'
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:330: error: undefined reference to `gLogInfo'
/home/nvidia/wworkspace/Qt/trt_test/main.cpp:332: error: undefined reference to `gLogInfo'
collect2: error: ld returned 1 exit status


I have no idea where the gLog symbols come from, and I'm confused because the sample runs fine with its original Makefile. Thanks.


gLogger is declared in /usr/src/tensorrt/samples/common/logger.h; its definition lives in logger.cpp, which must be compiled and linked into your project.


In fact, I copied the entire common folder into my project.

The file tree looks like this:

common -> argsParser.h, logging.h …

data -> char-rnn, …

Even if I change the include from

#include "common/logger.h"

to

#include "/usr/src/tensorrt/samples/common/logger.h"

the same error still happens.


Have you added it to the include path of your Makefile?



I am also facing the same issue. I am using Qt Creator, have included all the dependent file paths, and included logger.h as well, but it still gives the error "undefined reference to gLogInfo". I checked the Makefile too, where the dependent folders are included, but it still gives the same error. Can anybody suggest a solution for this?



gLogInfo is declared in /usr/src/tensorrt/samples/common/logger.h.
Please add the corresponding path to your compile arguments.


I encountered the same problem as you. Did you solve it?

I solved it by adding logger.cpp:

# tensorRT
INCLUDEPATH += util/common/
SOURCES += util/common/logger.cpp
LIBS += -lnvcaffe_parser \
    -lnvinfer \
    -lnvparsers
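For reference, a fuller .pro along the same lines might look like this. This is only a sketch: it assumes a JetPack layout with the samples under /usr/src/tensorrt and the CUDA 10.0 paths from the original post, so adjust the paths to your system.

```
TEMPLATE = app
CONFIG += console c++11
CONFIG -= app_bundle qt

TRT_SAMPLES = /usr/src/tensorrt/samples

INCLUDEPATH += $$TRT_SAMPLES/common
INCLUDEPATH += /usr/include/aarch64-linux-gnu
INCLUDEPATH += /usr/local/cuda-10.0/targets/aarch64-linux/include

SOURCES += main.cpp \
    $$TRT_SAMPLES/common/logger.cpp

LIBS += -L/usr/lib/aarch64-linux-gnu -lnvinfer -lnvparsers -lnvinfer_plugin
LIBS += -L/usr/local/cuda-10.0/targets/aarch64-linux/lib -lcudart
```

The key line is the logger.cpp entry under SOURCES; that is what provides the gLogger/gLogInfo/gLogError definitions the linker was missing.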

Thanks for posting this… I was having the same issue after copying some sample code into my existing project.

I don’t know if this will help anyone else having this same issue, but I added the following line to my Makefile

logger.o: /usr/src/tensorrt/samples/common/logger.cpp

then added logger.o to the executable line.
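To make that concrete, the Makefile change could be sketched like this. The target and variable names (`trt_test`, `TRT_COMMON`) are illustrative, not from the sample's actual Makefile:

```make
TRT_COMMON := /usr/src/tensorrt/samples/common

# Compile the samples' logger.cpp into this project so the
# gLogger/gLogInfo/gLogError definitions get linked in.
logger.o: $(TRT_COMMON)/logger.cpp
	$(CXX) $(CXXFLAGS) -I$(TRT_COMMON) -c $< -o $@

# Then add logger.o alongside main.o in the link rule:
trt_test: main.o logger.o
	$(CXX) $^ -o $@ -lnvinfer -lnvparsers -lcudart
```

Note that without an explicit recipe, a bare `logger.o: …/logger.cpp` dependency line may not build anything if make's implicit rules don't match the out-of-tree source path, which is why the recipe is spelled out here.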


Hi moulok,

Thanks for sharing.