I am currently working on getting an SSD Caffe model running in DeepStream. The model has been converted to a TensorRT engine for Tegra-based platforms.
• Hardware Platform: Jetson TX2/Xavier (Currently working on TX2)
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.3
• Issue Type: Error when trying to run the TensorRT engine in DeepStream on the TX2 platform.
Steps taken so far (sketches follow this list):
• Converted the Caffe SSD model into a TensorRT engine
• Compiled a new, updated version of “libnvinfer_plugin.so.7.1.3” and replaced the old version
• Compiled “libnvds_infercustomparser_tlt.so” and linked it in the nvinfer config file
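For reference, the Caffe-to-engine conversion can be done with trtexec along these lines; the file names, output blob name, and precision below are placeholders rather than our exact command:

    # sketch: all paths and the output blob name are placeholders
    trtexec --deploy=ssd.prototxt \
            --model=ssd.caffemodel \
            --output=detection_out \
            --batch=1 \
            --fp16 \
            --saveEngine=ssd_fp16.engine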
When the pipeline runs, it fails with the following error:

    Mismatch in the number of output buffers. Expected 2 output buffers, detected in the network: 1
    0:00:09.304585054 25 0x559e8cb680 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:725> [UID = 1]: Failed to parse bboxes using custom parse function
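For context, the relevant part of the nvinfer config looks roughly like this (a sketch: the paths and values are placeholders, and the parse function name is my assumption of the TLT SSD entry point in that library):

    [property]
    # placeholder path to the engine built above
    model-engine-file=ssd_fp16.engine
    # 0 = detector
    network-type=0
    num-detected-classes=21
    # assumed TLT SSD parser entry point; the actual exported name may differ
    parse-bbox-func-name=NvDsInferParseCustomNMSTLT
    custom-lib-path=/path/to/libnvds_infercustomparser_tlt.so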
SSD Output Layer (Caffe prototxt):
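A standard Caffe SSD ends in a DetectionOutput layer along these lines; this is an illustrative sketch with the usual VGG-SSD names, and our actual layer/top names may differ:

    layer {
      name: "detection_out"
      type: "DetectionOutput"
      bottom: "mbox_loc"
      bottom: "mbox_conf_flatten"
      bottom: "mbox_priorbox"
      top: "detection_out"
      include { phase: TEST }
      detection_output_param {
        num_classes: 21
        share_location: true
        background_label_id: 0
        nms_param { nms_threshold: 0.45 top_k: 400 }
        code_type: CENTER_SIZE
        keep_top_k: 200
        confidence_threshold: 0.01
      }
    }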
This is all running in a customized DeepStream container on the TX2 platform. We currently have no problems running DetectNet models on the platform, and we have completed similar steps to run YOLO on a dGPU setup. I do not see the last layer as having a “BatchedNMS” or “NMS” output like those referenced in the YOLO and SSD DeepStream sample app config files. Is there a list of available output blob names, or a way to find the appropriate one to use in this case?
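One way to check which output blobs the serialized engine actually exposes is to list its bindings with the TensorRT Python API. A minimal sketch, assuming the tensorrt Python package is installed and with “ssd_fp16.engine” as a placeholder path:

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    # Register the nvinfer plugins before deserializing, since the
    # SSD engine relies on plugin layers.
    trt.init_libnvinfer_plugins(TRT_LOGGER, "")

    # "ssd_fp16.engine" is a placeholder for the actual engine file.
    with open("ssd_fp16.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
        # Print every binding: index, name, direction, and shape.
        for i in range(engine.num_bindings):
            direction = "input" if engine.binding_is_input(i) else "output"
            print(i, engine.get_binding_name(i), direction, engine.get_binding_shape(i))

The names printed for the output bindings should match what nvinfer hands to the custom parse function, so this would show whether the engine really exposes one output or two.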