How does TensorRT show debug messages when parsing a model?

Dear everyone,

I want to convert my own TensorFlow model into a TensorRT model.

Some unsupported layers are replaced by custom plugins or by already-registered TensorRT plugins.

But when I try to parse the model, the parser reports this error:

ERROR: UFFParser: Parser error: g_net/enc1_1/Conv2D: Order size is not matching the number dimensions of TensorRT

So I would like to see details of the parsing process, such as the input and output dimensions of the previous layer as handled by the already-registered TensorRT plugins.

Can TensorRT output more details like these through the C++ API?