GoogleNet

What is the output blob for GoogleNet?

layer {
  name: "loss3/classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  inner_product_param {
    num_output: 12
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "softmax"
  type: "Softmax"
  bottom: "loss3/classifier"
  top: "softmax"
}

I have also tried this, but I got an error saying it could not find output layer 'prob' in the engine.

If “softmax” is the last layer of your network, then the output blob name is “softmax”, i.e. the top of that last layer.
You can also refer to the TensorRT sample code, samples/sampleGoogleNet.
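For reference, the output blob name that TensorRT binds, and that the nvinfer output-blob-names setting must match, is the top of the final layer in the deploy prototxt. A minimal sketch for the prototxt posted above (assuming that prototxt is the one referenced by proto-file) would be:

output-blob-names=softmax

Alternatively, the Softmax layer’s top can be renamed to “prob” in the prototxt so that configs which expect “prob” keep working.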

I tried that and got an error saying it could not find the layer. Can you share the model you tested?

I tried
Please elaborate on how you tried.

can you share the model you tested?
As I said above, please check the TensorRT sample code, samples/sampleGoogleNet; its README gives step-by-step guidance on how to run the sample.
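On a Jetson install the sample typically lives under /usr/src/tensorrt, the same location used later in this thread; a rough sketch of the steps the README walks through, assuming that default layout:

$ cd /usr/src/tensorrt/samples/sampleGoogleNet
$ sudo make
$ cd ../../bin
$ ./sample_googlenet            # add --useDLACore=0 to offload supported layers to DLA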

Alright, we found the issue

[property]
enable-dla=1
use-dla-core=0
net-scale-factor=1
model-file=../../models/GoogleNet-ILSVRC12-subset/deploy.caffemodel
proto-file=../../models/GoogleNet-ILSVRC12-subset/deploy.prototxt
model-engine-file=../../models/GoogleNet-ILSVRC12-subset/deploy.caffemodel_b30_fp16.engine
labelfile-path=../../models/GoogleNet-ILSVRC12-subset/synset_words.txt
batch-size=30
model-color-format=1
process-mode=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
is-classifier=1
output-blob-names=prob
classifier-async-mode=1
classifier-threshold=0.51

TensorRT sample output:

machine02:/usr/src/tensorrt/bin$ ./sample_googlenet --useDLACore=0
&&&& RUNNING TensorRT.sample_googlenet # ./sample_googlenet --useDLACore=0
[00/13/2020-15:14:32] [I] Building and running a GPU inference engine for GoogleNet
[00/13/2020-15:14:33] [W] [TRT] Default DLA is enabled but layer prob is not supported on DLA, falling back to GPU.
[00/13/2020-15:14:33] [I] [TRT] 
[00/13/2020-15:14:33] [I] [TRT] --------------- Layers running on DLA: 
[00/13/2020-15:14:33] [I] [TRT] {conv1/7x7_s2,conv1/relu_7x7,pool1/3x3_s2,pool1/norm1,conv2/3x3_reduce,conv2/relu_3x3_reduce,conv2/3x3,conv2/relu_3x3,conv2/norm2,pool2/3x3_s2,inception_3a/1x1,inception_3a/relu_1x1,inception_3a/3x3_reduce,inception_3a/relu_3x3_reduce,inception_3a/3x3,inception_3a/relu_3x3,inception_3a/5x5_reduce,inception_3a/relu_5x5_reduce,inception_3a/5x5,inception_3a/relu_5x5,inception_3a/pool,inception_3a/pool_proj,inception_3a/relu_pool_proj,inception_3a/output,inception_3b/1x1,inception_3b/relu_1x1,inception_3b/3x3_reduce,inception_3b/relu_3x3_reduce,inception_3b/3x3,inception_3b/relu_3x3,inception_3b/5x5_reduce,inception_3b/relu_5x5_reduce,inception_3b/5x5,inception_3b/relu_5x5,inception_3b/pool,inception_3b/pool_proj,inception_3b/relu_pool_proj,inception_3b/output,pool3/3x3_s2,inception_4a/1x1,inception_4a/relu_1x1,inception_4a/3x3_reduce,inception_4a/relu_3x3_reduce,inception_4a/3x3,inception_4a/relu_3x3,inception_4a/5x5_reduce,inception_4a/relu_5x5_reduce,inception_4a/5x5,inception_4a/relu_5x5,inception_4a/pool,inception_4a/pool_proj,inception_4a/relu_pool_proj,inception_4a/output,inception_4b/1x1,inception_4b/relu_1x1,inception_4b/3x3_reduce,inception_4b/relu_3x3_reduce,inception_4b/3x3,inception_4b/relu_3x3,inception_4b/5x5_reduce,inception_4b/relu_5x5_reduce,inception_4b/5x5,inception_4b/relu_5x5,inception_4b/pool,inception_4b/pool_proj,inception_4b/relu_pool_proj,inception_4b/output,inception_4c/1x1,inception_4c/relu_1x1,inception_4c/3x3_reduce,inception_4c/relu_3x3_reduce,inception_4c/3x3,inception_4c/relu_3x3,inception_4c/5x5_reduce,inception_4c/relu_5x5_reduce,inception_4c/5x5,inception_4c/relu_5x5,inception_4c/pool,inception_4c/pool_proj,inception_4c/relu_pool_proj,inception_4c/output,inception_4d/1x1,inception_4d/relu_1x1,inception_4d/3x3_reduce,inception_4d/relu_3x3_reduce,inception_4d/3x3,inception_4d/relu_3x3,inception_4d/5x5_reduce,inception_4d/relu_5x5_reduce,inception_4d/5x5,inception_4d/relu_5x5,inception_4d/pool,inception_4d/pool_proj,inception_4d/relu_pool_proj,inception_4d/output,inception_4e/1x1,inception_4e/relu_1x1,inception_4e/3x3_reduce,inception_4e/relu_3x3_reduce,inception_4e/3x3,inception_4e/relu_3x3,inception_4e/5x5_reduce,inception_4e/relu_5x5_reduce,inception_4e/5x5,inception_4e/relu_5x5,inception_4e/pool,inception_4e/pool_proj,inception_4e/relu_pool_proj,inception_4e/output,pool4/3x3_s2,inception_5a/1x1,inception_5a/relu_1x1,inception_5a/3x3_reduce,inception_5a/relu_3x3_reduce,inception_5a/3x3,inception_5a/relu_3x3,inception_5a/5x5_reduce,inception_5a/relu_5x5_reduce,inception_5a/5x5,inception_5a/relu_5x5,inception_5a/pool,inception_5a/pool_proj,inception_5a/relu_pool_proj,inception_5a/output,inception_5b/1x1,inception_5b/relu_1x1,inception_5b/3x3_reduce,inception_5b/relu_3x3_reduce,inception_5b/3x3,inception_5b/relu_3x3,inception_5b/5x5_reduce,inception_5b/relu_5x5_reduce,inception_5b/5x5,inception_5b/relu_5x5,inception_5b/pool,inception_5b/pool_proj,inception_5b/relu_pool_proj,inception_5b/output,pool5/7x7_s1,loss3/classifier}, 
[00/13/2020-15:14:33] [I] [TRT] --------------- Layers running on GPU: 
[00/13/2020-15:14:33] [I] [TRT] prob, 
[00/13/2020-15:14:37] [W] [TRT] No implementation obeys reformatting-free rules, at least 3 reformatting nodes are needed, now picking the fastest path instead.
[00/13/2020-15:14:37] [I] [TRT] Detected 1 inputs and 1 output network tensors.
[00/13/2020-15:14:40] [I] Ran ./sample_googlenet with: 
[00/13/2020-15:14:40] [I] Input(s): data 
[00/13/2020-15:14:40] [I] Output(s): prob 
&&&& PASSED TensorRT.sample_googlenet # ./sample_googlenet --useDLACore=0

DeepStream output:

Using winsys: x11 
Creating LL OSD context new
0:00:03.808662151 17363   0x7f28002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 6]:log(): INVALID_ARGUMENT: Can not find binding of given name
0:00:03.808759083 17363   0x7f28002390 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 6]:checkEngineParams(): Could not find output layer 'prob' in engine

Thanks for the update!

Sorry, I changed from DLA to GPU, and we still get:

Using winsys: x11 
Creating LL OSD context new
0:00:03.808662151 17363   0x7f28002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 6]:log(): INVALID_ARGUMENT: Can not find binding of given name
0:00:03.808759083 17363   0x7f28002390 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 6]:checkEngineParams(): Could not find output layer 'prob' in engine

Did the nvinfer plugin change in the last two revisions of DeepStream?

[00/13/2020-15:14:33] [I] [TRT] --------------- Layers running on GPU: 
[00/13/2020-15:14:33] [I] [TRT] prob,

To my understanding, there is no fallback to GPU in DeepStream.

Hi RaviKiranK,
The engine you ran on DLA was built with DLA enabled. If you want to run inference with the GPU only, you need to rebuild the engine with GPU only.
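For example, one way to do this (a sketch, assuming the file names from the config posted above and that the top of the last layer in deploy.prototxt is “softmax”) is to regenerate the engine with trtexec without any DLA flags and then point model-engine-file at the result:

$ ./trtexec --deploy=deploy.prototxt --model=deploy.caffemodel --output=softmax \
            --batch=30 --fp16 --workspace=1024 \
            --saveEngine=deploy.caffemodel_b30_fp16.engine

Deleting the stale engine file and letting DeepStream rebuild it from model-file/proto-file with enable-dla=0 should also work.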

Which JetPack version are you using? I’ll try to double-check your issue.

Thanks!

I am using JetPack 4.3. I will try again and will keep you posted.

Sorry! I can’t reproduce your issue with trtexec and the GoogleNet model as shown below. Could you elaborate on how to reproduce it?

$ ./trtexec --deploy=/usr/src/tensorrt/data/googlenet/googlenet.prototxt --output=prob --batch=30 --fp16 --useDLACore=1 --allowGPUFallback --workspace=1024
$ ./trtexec --loadEngine=deploy.caffemodel_b30_fp16.engine --batch=30 --workspace=1024