How to get the TensorRT inference graph based on Caffe?

Hi!
I implemented R-FCN on a TX2 using TensorRT. Some errors occur when I run the .exe file. The error: [Cuda failure: unspecified launch failure at line 74 Aborted (core dumped)]
I cannot figure out the cause of the error, so I want to know how to get the optimized network graph and inspect it for debugging.
Any advice is appreciated.

Hi,

.exe is a Windows executable.
Jetson is a Linux system with an aarch64 architecture.

Please use a Linux executable instead.
Thanks.

Thanks for your reply. Perhaps my description of the file was not quite accurate. The file is named inference and its type is executable (application/x-executable) for the aarch64 architecture.
I run the file from the terminal with the command "./inference" and get the following output:

Bindings after deserializing:
Binding 0 (data): Input.
Binding 1 (im_info): Input.
Binding 2 (bbox_pred): Output.
Binding 3 (cls_prob): Output.
Binding 4 (rois): Output.
Cuda failure: unspecified launch failure at line 74
Aborted (core dumped)

I found that the error occurs when the following function is called (specifically this line: context->execute(batchSize, buffers);):
void TensorNet::imageInference(void** buffers, int nbBuffer, int batchSize)
{
    // The number of device buffers must match the engine's bindings.
    assert(engine->getNbBindings() == nbBuffer);

    // Create an execution context for this inference call.
    IExecutionContext* context = engine->createExecutionContext();
    context->setProfiler(&gProfiler);

    // Synchronous inference; this is the call that fails.
    context->execute(batchSize, buffers);

    context->destroy();
}
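
To localize the failure, one option is to check the return value of execute() and synchronize the device immediately afterwards, so that an asynchronous kernel error from a plugin layer is reported at this call rather than at some later CUDA call (a minimal sketch; checkedExecute is a hypothetical helper, not part of the code above):

#include <cstdio>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

// Hypothetical debugging helper: run inference, then force a device
// synchronization so an asynchronous kernel failure surfaces here.
bool checkedExecute(nvinfer1::IExecutionContext* context, int batchSize, void** buffers)
{
    if (!context->execute(batchSize, buffers))
    {
        printf("context->execute() returned false\n");
        return false;
    }
    cudaError_t err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
    {
        printf("CUDA error after execute: %s\n", cudaGetErrorString(err));
        return false;
    }
    return true;
}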
I am not sure whether the parsed network model is correct, because I implemented two plugin layers, or maybe something else caused the error.
So I want to get the optimized network graph (the inference graph), similar to a Caffe .prototxt file, so that I can inspect it and debug.
Looking forward to your reply!

Hi,

Assuming you are using a Caffe-based model, here are the steps to get the layer information:

1. Convert the model into a TensorRT network, like this:

const nvcaffeparser1::IBlobNameToTensor* blobNameToTensor = parser->parse(
        locateFile(mParams.prototxtFileName, mParams.dataDirs).c_str(),
        locateFile(mParams.weightsFileName, mParams.dataDirs).c_str(),
        *network,
        nvinfer1::DataType::kFLOAT);

This populates the nvinfer1::INetworkDefinition (network) object.
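
For reference, here is a minimal sketch of the setup around that parse call (gLogger is an assumed nvinfer1::ILogger implementation, as in the TensorRT samples; createNetwork() applies to TensorRT 6 and earlier):

#include "NvInfer.h"
#include "NvCaffeParser.h"

// gLogger: an nvinfer1::ILogger implementation, as in the TensorRT samples.
nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::INetworkDefinition* network = builder->createNetwork();
nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();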

2. Get the total number of layers
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_network_definition.html#a191a7724fc0c03a3b6f5fd8782dcd30e

virtual int nvinfer1::INetworkDefinition::getNbLayers() const

3. Retrieve each layer by its index and check its type:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_network_definition.html#a4a81749aaa08e93ca4ae1dbb1739c7bd

virtual ILayer* nvinfer1::INetworkDefinition::getLayer(int index) const
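
Putting steps 2 and 3 together, a minimal sketch that dumps the parsed network layer by layer (dumpNetwork is a hypothetical helper; getName(), getType(), getInput() and getOutput() are standard ILayer methods):

#include <iostream>
#include "NvInfer.h"

// Hypothetical helper: print every layer of the parsed network so the
// graph can be inspected, similar to reading a .prototxt file.
void dumpNetwork(const nvinfer1::INetworkDefinition* network)
{
    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        const nvinfer1::ILayer* layer = network->getLayer(i);
        std::cout << "Layer " << i
                  << " | name: " << layer->getName()
                  << " | type id: " << static_cast<int>(layer->getType()) << std::endl;

        // Print input/output tensor names to reconstruct connectivity.
        for (int j = 0; j < layer->getNbInputs(); ++j)
            std::cout << "    input : " << layer->getInput(j)->getName() << std::endl;
        for (int j = 0; j < layer->getNbOutputs(); ++j)
            std::cout << "    output: " << layer->getOutput(j)->getName() << std::endl;
    }
}

Note that this dumps the network as parsed, before the builder applies any fusions, so it is a good place to verify that your two plugin layers were wired in correctly.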

Thanks.