I am currently working on getting an SSD Caffe model running in DeepStream. It has been converted to a TensorRT engine for Tegra-based platforms.
• Hardware Platform: Jetson TX2/Xavier (Currently working on TX2)
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.3
• Issue Type: Error when trying to run a TensorRT engine in DeepStream on the TX2 platform.
Steps Completed:
Converted Caffe SSD model into a TensorRT engine
Compiled an updated version of “libnvinfer_plugin.so.7.1.3” and replaced the old one
Compiled “libnvds_infercustomparser_tlt.so” and linked it in the config file
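For context, the parser hookup in our nvinfer config file looks roughly like this (the paths and the parse function name here are illustrative, not our exact values):

```
[property]
model-engine-file=ssd.engine
# custom bbox parser library compiled above; function name is illustrative
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tlt.so
```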
Current Error:
Mismatch in the number of output buffers. Expected 2 output buffers, detected in the network: 1
0:00:09.304585054 25 0x559e8cb680 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:725> [UID = 1]: Failed to parse bboxes using custom parse function
This is all running in a customized DeepStream container on the TX2 platform. We currently have no problems running Detectnet models on the platform, and have completed similar steps to run YOLO on a dGPU setup. I do not see the last layer as having a “BatchedNMS” or “NMS” output like those referenced in the YOLO and SSD deepstream app config files. Is there a list of available output blob names, or a way to find the appropriate one to use in this case?
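One way to check which output blobs the engine actually exposes is to deserialize it and print its binding names; below is a minimal sketch using the standard TensorRT 7 binding APIs (file loading is simplified, and `gLogger` is assumed to be an existing `ILogger` implementation):

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>
#include <NvInfer.h>

// Assumed to exist elsewhere in the application.
extern nvinfer1::ILogger& gLogger;

void printBindings(const std::string& enginePath) {
    // Read the serialized engine from disk.
    std::ifstream f(enginePath, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);

    // Every non-input binding is an output blob name usable in the config.
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        std::cout << (engine->bindingIsInput(i) ? "input:  " : "output: ")
                  << engine->getBindingName(i) << "\n";
    }

    engine->destroy();
    runtime->destroy();
}
```

Note that if the engine uses plugin layers (like the NMS plugin), the custom `libnvinfer_plugin` build must be loaded before deserializing, otherwise deserialization will fail.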
Thanks for the update. I have decided to approach this a different way, and use the “sample_ssd” program to generate a new engine file with the required prototxt changes to include a second output tensor. I am however at a loss as to how to export the engine file once created. I’ve searched through the TensorRT C++ API docs and could not find a function to export or save the created engine file to disk. Would you have any insight as to how to go about doing that?
I was able to resolve this by adding engine serialization to the build engine function. This was added towards the end, before the engine was returned, and saves the engine to disk before continuing on to testing.
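For anyone who hits the same question later, the addition boils down to serializing the engine and writing the resulting bytes to disk. A rough sketch (the `writeEngine` helper and file name are mine; `serialize()` and `IHostMemory` are the standard TensorRT calls):

```cpp
#include <cstddef>
#include <fstream>

// Write a serialized engine blob to disk; returns true on success.
bool writeEngine(const void* data, std::size_t size, const char* path) {
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    out.write(static_cast<const char*>(data),
              static_cast<std::streamsize>(size));
    return out.good();
}

// Inside the build-engine function, just before returning the engine:
//   nvinfer1::IHostMemory* blob = engine->serialize();
//   writeEngine(blob->data(), blob->size(), "sample_ssd.engine");
//   blob->destroy();
```

The saved file can then be pointed to directly with `model-engine-file` in the DeepStream config.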
Thanks for the update!