Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) : RTX 3060
• DeepStream Version : 7.0
• TensorRT Version :
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) : question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
How can I use a custom semantic segmentation model in DeepStream 7.0?
- Semantic segmentation model input and output
- 0 INPUT kFLOAT input.1 3x512x512
- 1 OUTPUT kFLOAT 1145 1x512x512
- custom_config_infer.txt
[property]
gpu-id=0
gie-unique-id=1
interval=0
batch-size=3
net-scale-factor=0.003921569
model-color-format=0
infer-dims=3;512;512
onnx-file=../../../../tritonserver/models/customnet/1/segmentation-efficientnet-b3.onnx
model-engine-file=../../../../tritonserver/models/customnet/1/segmentation-efficientnet-b3.trt
process-mode=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# network-type: 0=Detector, 1=Classifier, 2=Segmentation, 3=Instance Segmentation
network-type=2
threshold=0.1
num-detected-classes=2
cluster-mode=4
parse-bbox-func-name=NvDsInferParseCustomDetection
parse-bbox-instance-mask-func-name=NvDsInferParseCustomSegmentation
custom-lib-path=../../gst-plugins/gst-nvinferserver/nvdsinfer_custom_impl_obstacle/obstacle_detection
scaling-filter=1
scaling-compute-hw=1
symmetric-padding=0
maintain-aspect-ratio=1
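For context, this is how I expect to read the segmentation result downstream once nvinfer produces it (a minimal pad-probe sketch; I am assuming that with network-type=2 the segmentation output is attached as NvDsInferSegmentationMeta in the frame user meta list, and the probe name and logging are my own, not from a sample app):

  // Sketch of a pad probe that walks the batch meta and prints the
  // semantic segmentation mask dimensions per frame.
  #include <gst/gst.h>
  #include "gstnvdsmeta.h"
  #include "gstnvdsinfer.h"

  static GstPadProbeReturn
  seg_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
  {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
    if (!batch_meta)
      return GST_PAD_PROBE_OK;

    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
         l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

      for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user;
           l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_SEGMENTATION_META)
          continue;

        NvDsInferSegmentationMeta *seg_meta =
            (NvDsInferSegmentationMeta *) user_meta->user_meta_data;
        // seg_meta->class_map is a width*height array of per-pixel class ids.
        g_print ("frame %d: %ux%u mask, %u classes\n",
            frame_meta->frame_num, seg_meta->width, seg_meta->height,
            seg_meta->classes);
      }
    }
    return GST_PAD_PROBE_OK;
  }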
- I found the following custom parser function prototypes in the official documentation.
- 3.1. Custom bounding box parsing function
extern "C" bool NvDsInferParseCustomDetection(
std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams,
std::vector<NvDsInferParseObjectInfo>& objectList);
- 3.2. Custom bounding box and instance mask parsing function
bool NvDsInferParseCustomInstanceMask(
std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams,
std::vector<NvDsInferParseObjectInfo>& objectList);
- 3.3. Custom semantic segmentation output parsing function
extern "C"
bool NvDsInferParseCustomSegmentation(
std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
NvDsInferNetworkInfo const& networkInfo, float segmentationThreshold,
unsigned int numClasses, int* classificationMap,
float*& classProbabilityMap);
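Based on the 3.3 prototype above, this is roughly what I plan to build into the custom-lib-path library (a minimal sketch for my single-channel 1x512x512 output; I am assuming, without having verified it, that classificationMap is pre-allocated by nvinfer with width*height entries and that classProbabilityMap may simply be pointed at the raw output buffer):

  // Sketch of a custom semantic segmentation parser: threshold the
  // single-channel probability map into a two-class (background/obstacle)
  // classification map.
  #include <vector>
  #include "nvdsinfer_custom_impl.h"

  extern "C" bool NvDsInferParseCustomSegmentation (
      std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
      NvDsInferNetworkInfo const &networkInfo, float segmentationThreshold,
      unsigned int numClasses, int *classificationMap,
      float *&classProbabilityMap)
  {
    if (outputLayersInfo.empty () || !outputLayersInfo[0].buffer)
      return false;

    float *probs = (float *) outputLayersInfo[0].buffer;
    unsigned int width = networkInfo.width;    // 512
    unsigned int height = networkInfo.height;  // 512

    // Per-pixel decision: class 1 if the probability exceeds the
    // segmentation threshold, otherwise class 0 (background).
    for (unsigned int i = 0; i < width * height; i++)
      classificationMap[i] = (probs[i] > segmentationThreshold) ? 1 : 0;

    classProbabilityMap = probs;
    return true;
  }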
- How can I register NvDsInferParseCustomSegmentation in my config_infer.txt? The official nvinfer documentation does not describe a config key for a custom semantic segmentation parser.