Operational precision of TensorRT

I have a question about the operational precision of TensorRT.
I downloaded TensorRT and ran some of the C++ samples on a Windows PC.

For example, I ran sampleOnnxMNIST.cpp in Visual Studio 2017.
However, the (★) parts of the model-loading code below are not executed, because I didn't pass any command-line options.

I think the operational precision is set in the (★) parts.
If the (★) parts aren't executed, which operational precision is used: FP16 or FP32?

bool SampleOnnxMNIST::constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
    SampleUniquePtr<nvinfer1::INetworkDefinition>& network, SampleUniquePtr<nvinfer1::IBuilderConfig>& config,
    SampleUniquePtr<nvonnxparser::IParser>& parser)
{
    auto parsed = parser->parseFromFile(locateFile(mParams.onnxFileName, mParams.dataDirs).c_str(), static_cast<int>(sample::gLogger.getReportableSeverity()));
    if (!parsed)
    {
        return false;
    }

    if (mParams.fp16)
    {
        (★)config->setFlag(BuilderFlag::kFP16);
    }
    if (mParams.int8)
    {
        (★)config->setFlag(BuilderFlag::kINT8);
        samplesCommon::setAllDynamicRanges(network.get(), 127.0f, 127.0f);
    }

    samplesCommon::enableDLA(builder.get(), config.get(), mParams.dlaCore);

    return true;
}

Hi,

It will be FP32 by default. In the sample, mParams.fp16 and mParams.int8 are populated from the command-line arguments (e.g. --fp16, --int8), so if neither option is passed, no precision flag is set on the builder config and the engine is built in FP32. You can specify the precision explicitly through the builder-config flags shown in the (★) lines above.
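
For reference, here is a minimal sketch of setting the precision flag on the builder config yourself, independent of the command-line parser (assuming a recent TensorRT version; configurePrecision is just an illustrative helper name):

#include "NvInfer.h"

// Minimal sketch: request FP16 kernels when building the engine.
// If no precision flag is set, TensorRT builds the engine in FP32.
void configurePrecision(nvinfer1::IBuilder* builder, nvinfer1::IBuilderConfig* config, bool useFp16)
{
    if (useFp16 && builder->platformHasFastFp16())
    {
        config->setFlag(nvinfer1::BuilderFlag::kFP16); // allow FP16 layer implementations
    }
    // With no flag set, the builder keeps the default FP32 precision.
}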

Thank you.