generateTRTModel(): Could not find output layer 'SOFTMAX'

Dear NVIDIA,

Why does the nvinfer plugin fail to parse the following Caffe prototxt?
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}

How can I solve this? Thanks!

Error log:
Creating LL OSD context new

0:00:02.979289951 9026 0x558a447630 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 2]:generateTRTModel(): Could not find output layer 'SOFTMAX'

ERROR from element secondary1-nvinference-engine: Failed to create NvDsInferContext instance

Code snippet from nvdsinfer_context_impl.cpp:

/* Find and mark the coverage layer as output */
ITensor *tensor = blobNameToTensor->find(layerName);
if (!tensor)
{
    printError("Could not find output layer '%s'", layerName);
    return NVDSINFER_CONFIG_FAILED;
}

Jetson Nano, deepstream sdk 4.0.1

B.R.

Hi,

Would you mind updating "SOFTMAX" to "Softmax" and giving it a try?

In general, the softmax layer should look like this:

layer {
  name: "predictions/Softmax"
  type: "Softmax"
  bottom: "predictions"
  top: "predictions/Softmax"
}
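For context, the old (V1) Caffe prototxt format used `layers` blocks with bare uppercase enum types such as `type: SOFTMAX`, while the current format uses `layer` blocks with a quoted string type such as `type: "Softmax"`. A quick way to spot old-style layer types in a prototxt (a minimal sketch, not part of the SDK; the function name and regex are illustrative only) is:

```python
import re

# Old-format (V1) Caffe prototxt declares types as bare uppercase enums
# (e.g. type: SOFTMAX); the current format uses quoted strings.
OLD_TYPE = re.compile(r'^\s*type:\s*([A-Z_]+)\s*$')

def find_old_style_types(prototxt_text):
    """Return any bare uppercase layer types found in a prototxt string."""
    return [m.group(1)
            for line in prototxt_text.splitlines()
            if (m := OLD_TYPE.match(line))]

sample = '''layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}'''

print(find_old_style_types(sample))  # ['SOFTMAX'] -> the file needs upgrading
```

If this reports any types, the file is in the old format and should be converted with Caffe's upgrade tool before handing it to the parser.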

Thanks.

Thank you for your quick reply!

I used the following command to convert it to Softmax; it works now.

caffehome/.build_release/tools/upgrade_net_proto_text.bin yourold.prototxt output.prototxt

0:00:01.427603671 4051 0x3e13b80 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 172:9: Expected integer or identifier, got: "Softmax"
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: CaffeParser: Could not parse deploy file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:161 Failed while parsing caffe network: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/age_gender/deepti/testing/extra/gender_deploy.prototxt
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1039 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:872 failed to build network.
0:00:12.302726484 4051 0x3e13b80 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 2]: build engine file failed
0:00:12.302811744 4051 0x3e13b80 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 2]: build backend context failed
0:00:12.302824077 4051 0x3e13b80 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 2]: generate backend failed, check config file settings
0:00:12.302864262 4051 0x3e13b80 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:12.302874582 4051 0x3e13b80 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Config file path: dstest2_sgie1_config_gender.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(812): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: dstest2_sgie1_config_gender.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

I have tried all the mentioned solutions but am still facing the error.

@bhargavi.sanadhya Try upgrading the prototxt if you are using a caffe model.

Try the command below; it might help:

$CAFFE_ROOT/build/tools/upgrade_net_proto_text deploy_old.prototxt deploy.prototxt

bash: CAFFE_ROOT/build/tools/upgrade_net_proto_text: No such file or directory

I am running inside a Docker container.

@bhargavi.sanadhya You have to install Caffe in order to upgrade the prototxt files.
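If installing Caffe inside the container is not practical, a rough text-level conversion may be enough for a simple deploy prototxt. The sketch below is a best-effort assumption covering only the `layers` keyword and a handful of uppercase enum types; the official `upgrade_net_proto_text` tool remains the authoritative converter and handles many more V1 fields:

```python
import re

# Partial, assumed mapping from old V1 enum types to new string types.
# The official upgrade_net_proto_text tool is the authoritative converter.
TYPE_MAP = {
    "SOFTMAX": "Softmax",
    "CONVOLUTION": "Convolution",
    "POOLING": "Pooling",
    "RELU": "ReLU",
    "INNER_PRODUCT": "InnerProduct",
}

def upgrade_prototxt(text):
    """Best-effort rewrite of old-style 'layers' blocks to the new format."""
    # Rename the block keyword: layers { ... } -> layer { ... }
    text = re.sub(r'\blayers\s*\{', 'layer {', text)

    # Quote and rename bare uppercase enum types.
    def fix_type(match):
        old = match.group(1)
        # Fall back to a title-cased guess for types not in the map.
        return 'type: "%s"' % TYPE_MAP.get(old, old.title())

    return re.sub(r'type:\s*([A-Z_]+)', fix_type, text)

old = ('layers {\n'
      '  name: "prob"\n'
      '  type: SOFTMAX\n'
      '  bottom: "fc8"\n'
      '  top: "prob"\n'
      '}')
print(upgrade_prototxt(old))
```

Run the converted file through the pipeline again; if the parser still complains, the network likely uses V1 fields this sketch does not cover, and installing Caffe's upgrade tool is the safer route.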