Layer information after optimization?

I’m wondering if it’s possible to look at the layers’ information after TensorRT has optimized the model.
For example, in the SIDNet slide they mention that TensorRT reduced YOLO’s number of layers from 96 to 30; I’d like to see the same information for my model.

Are you using the Lite API? If so, the verbosity level should be set to INFO by default and the required information should be visible (if you’re using ERROR instead, it may not be). Look at the logs:

[TensorRT] INFO: Detecting Framework
[TensorRT] INFO: Parsing Model from caffe
[TensorRT] INFO: Parsing caffe model alexnet/deploy.prototxt, alexnet/alexnet.caffemodel
[TensorRT] INFO: Input "data":3x227x227
[TensorRT] INFO: Marking prob as output layer
[TensorRT] INFO: Output "prob":1000x1x1
[TensorRT] INFO: Building engine
[TensorRT] INFO: Original: 21 layers
[TensorRT] INFO: After dead-layer removal: 21 layers
[TensorRT] INFO: After scale fusion: 21 layers
[TensorRT] INFO: Fusing conv1 with relu1
[TensorRT] INFO: Fusing conv2 with relu2
[TensorRT] INFO: Fusing conv3 with relu3
[TensorRT] INFO: Fusing conv4 with relu4
[TensorRT] INFO: Fusing conv5 with relu5
[TensorRT] INFO: Fusing fc6 with relu6
[TensorRT] INFO: Fusing fc7 with relu7
[TensorRT] INFO: After vertical fusions: 14 layers
[TensorRT] INFO: After swap: 14 layers
[TensorRT] INFO: After final dead-layer removal: 14 layers
[TensorRT] INFO: After tensor merging: 14 layers
[TensorRT] INFO: After concat removal: 14 layers
[TensorRT] INFO: Graph construction and optimization completed in 0.000848487 seconds.

So there we have it: from 21 layers, TensorRT optimized the model down to 14.

If you are building your engine manually, you have to set up a logger at the beginning anyway; just set its severity to INFO as well and these logs should appear.
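As a rough sketch of the manual route, using the standard TensorRT Python bindings (names below follow the usual `tensorrt` API; adjust to your version):

```python
# Hedged sketch: create a TensorRT logger at INFO severity so that the
# builder's optimization passes (dead-layer removal, fusions, etc.) are
# printed during engine construction.
import tensorrt as trt

# INFO, rather than the quieter WARNING/ERROR, is what makes the
# "Original: N layers" / "After vertical fusions: M layers" lines visible.
logger = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(logger)
network = builder.create_network()
# ... populate the network (e.g. via a parser), then build the engine;
# the per-pass layer counts show up in the build log.
```

The logger passed to the builder is what controls which messages you see, so switching its severity is the only change needed.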

Thanks, I can see the layers now, but it also prints every tactic it’s trying, which is a lot of output.

Just visualize your model in TensorBoard, or print the names of the nodes:

[print(n.name) for n in trt_graph.node]
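For context, here is a hedged sketch of where a `trt_graph` like that might come from, assuming the TF 1.x TF-TRT contrib API; the model path and output node name are placeholders:

```python
# Hedged sketch (TF 1.x only): load a frozen GraphDef, convert it with
# TF-TRT, and print the node names of the optimized graph.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Hypothetical frozen-model path; substitute your own.
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["prob"],        # your output node name(s)
    precision_mode="FP16",
)

# Each remaining node in the optimized graph, including the fused
# TRTEngineOp nodes that replace the converted subgraphs.
for n in trt_graph.node:
    print(n.name)
```

Comparing the node list before and after conversion gives the same kind of layer-count picture as the builder logs above.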