Using a Custom Caffe Model with DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 4070 Laptop GPU
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): N/A (x86 dGPU)
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only): 535.216.01
• Issue Type (questions, new requirements, bugs): question

Trying to use a Caffe model as an SGIE, I got the following ERROR:

Starting pipeline
0:00:00.160375924 74812 0x564c68ae23e0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 4]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:164 Could not find output layer 'prob'
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:976 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:809 failed to build network.
0:00:04.490086938 74812 0x564c68ae23e0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 4]: build engine file failed
0:00:04.608419023 74812 0x564c68ae23e0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2215> [UID = 4]: build backend context failed
0:00:04.610564605 74812 0x564c68ae23e0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1352> [UID = 4]: generate backend failed, check config file settings
0:00:04.610595067 74812 0x564c68ae23e0 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:04.610599912 74812 0x564c68ae23e0 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Config file path: /home/eduardo/Devel/Models/AMFG/config_age_amfg.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(912): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary-inference3:

Here is the config file for the SGIE:
config_age_amfg.txt (772 Bytes)

The age classification Caffe model and deploy prototxt that I'm using can be downloaded from here: Age and Gender Classification Using Convolutional Neural Networks - Tal Hassner

Please first make sure that the output blob name of the model you are using is actually prob, i.e. that output-blob-names in the config matches a real output layer of the network.
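For reference, a minimal SGIE classifier config along these lines (the property names are standard nvinfer keys, but the file names and values here are assumptions based on this thread, not the attached config_age_amfg.txt):

[property]
gpu-id=0
# model files (names assumed from this thread)
proto-file=deploy_age.prototxt
model-file=age_net.caffemodel
batch-size=1
# 1 = classifier
network-type=1
# 2 = operate on objects from the primary GIE
process-mode=2
gie-unique-id=4
operate-on-gie-id=1
# must match an actual output layer name of the network
output-blob-names=prob
classifier-threshold=0.5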

@yuweiw Thank you for the quick reply.

Here is the output layer of the model, as defined in the deploy prototxt:
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}

I just used the netron tool to check the output of age_net.caffemodel; the output is loss. Please double-check the output layer of the model you are using.
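(A minimal way to run that check, assuming the netron pip package: install it and open the model file, which starts a local web UI listing the layers and their outputs.)

pip install netron
netron age_net.caffemodel   # serves a browser UI showing the network graph and output layers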

I checked the model age_net.caffemodel as you said (with netron) and I can see the "data" and "label" outputs.

Could you please guide me on how to use the model outputs "data" and "label" in the config file (output-blob-names)?
Using output-blob-names=label gives the following ERROR:
Starting pipeline
0:00:00.248391918 7826 0x5adbb91fd530 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 4]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:164 Could not find output layer 'label'
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:976 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:809 failed to build network.

Using output-blob-names=data gives the following ERROR:
Starting pipeline
0:00:00.128481144 8635 0x58cf94aa8270 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 4]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Segmentation fault (core dumped)

According to the netron tool, the final output layer of this model is loss, but from the prototxt it's prob. The output layers in the model and in the prototxt file are different. Could you check whether this model is correct? Or could you find a sample of this model being used?

@yuweiw, here is a sample of this model being used: AgeGenderDeepLearning/AgeGenderDemo.ipynb at master · GilLevi/AgeGenderDeepLearning · GitHub

You can check with the owner of the project which deploy_age.prototxt file he used. The file you're using now doesn't match the model.

Problem solved: the old-format Caffe network and model need to be upgraded to the latest format. Caffe ships with tools to convert an old-version model and network definition to the new version.
Now it is working.
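For anyone hitting the same error, a rough sketch of the conversion, assuming a standard Caffe source build (tool paths and file names may differ on your setup):

# upgrade the network definition to the current proto format
./build/tools/upgrade_net_proto_text deploy_age.prototxt deploy_age_upgraded.prototxt
# upgrade the trained weights to the current proto format
./build/tools/upgrade_net_proto_binary age_net.caffemodel age_net_upgraded.caffemodel

Then point proto-file and model-file in the SGIE config at the upgraded files and remove any stale .engine file so nvinfer rebuilds the engine.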
