Custom segmentation model BiSeNet gives empty output

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6.1
• TensorRT Version 8.2.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) bug

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Tried to use a custom BiSeNet segmentation model in DeepStream. Getting an empty screen as output from nvsegvisual.

[property]
gpu-id=0
network-input-order=0
offsets=0.485;0.456;0.406
net-scale-factor=0.00392156863
infer-dims=3;320;410
model-color-format=1
onnx-file=model_semifinal_220k.onnx
#model-engine-file=saved_model.trt

batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
interval=0
network-type=2
gie-unique-id=1

num-detected-classes=19
output-blob-names=preds
segmentation-threshold=0.2
segmentation-output-order=1

model_semifinal_220k.onnx (12.8 MB)
saved_model.trt (9.9 MB)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Changed the config file in the segmentation test sample.

  1. Is your model BiSeNetV1 or BiSeNetV2? Please make sure the preprocessing parameters are correct, and verify that your TensorRT engine works.
  2. Please refer to the DeepStream sample deepstream-segmentation-test and to deepstream_tao_apps/apps/tao_segmentation at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.

hi @fanzh,
I am currently using BiSeNetV2, and TensorRT is working. I tried changing the preprocessing parameters to match the ones in BiSeNet/segment.cpp at master · CoinCheung/BiSeNet · GitHub. I cannot find any parameter for variance in nvinfer, so I changed net-scale-factor and offsets accordingly, but I am still getting the same issue.

What I did to the input:

  • took the value of the variance as 0.225
  • divided all offset values by the scale
  • multiplied the scale by the inverse of the variance

Why:

  • DeepStream preprocesses as y = net-scale-factor * (x - mean).
  • In BiSeNet, preprocessing is given as:
    array<float, 3> mean{0.485f, 0.456f, 0.406f};
    array<float, 3> variance{0.229f, 0.224f, 0.225f};
    float scale = 1.f / 255.f;
    for (int i{0}; i < 3; ++ i) {
        variance[i] = 1.f / variance[i];
    }
    vector<float> data(iH * iW * 3);
    for (int h{0}; h < iH; ++h) {
        cv::Vec3b *p = im.ptr<cv::Vec3b>(h);
        for (int w{0}; w < iW; ++w) {
            for (int c{0}; c < 3; ++c) {
                int idx = (2 - c) * iH * iW + h * iW + w; // to rgb order
                data[idx] = (p[w][c] * scale - mean[c]) * variance[c];
            }
        }
    }

I looked at the sample apps for segmentation; they work the same way as my code. The only differences are the config file and the model, since I am using BiSeNet and they use UNet.

Thanks for the help.

  1. Please set segmentation-output-order=0; see Gst-nvinfer — DeepStream 6.1.1 Release documentation.
    nvinfer is open source; you can print the number of classes in SegmentPostprocessor::fillSegmentationOutput() in nvdsinfer_context_impl_output_parsing.cpp. This model's class count should be 19.

  2. DeepStream only supports pre-processing of the form y = net-scale-factor * (x - mean); please refer to Gst-nvinfer — DeepStream 6.1.1 Release documentation.

I tried printing data from SegmentPostprocessor::fillSegmentationOutput(); it outputs the width and height as 0. Can you help me solve this?

I can't reproduce this issue after testing BiSeNetV2 on Cityscapes; here are the config and output.
dstest_segmentation_config_semantic.txt (3.1 KB)
w 2048, 1024,19

How did you generate your ONNX file?

After using your configuration too, I am getting:
out height 0 output width0
out classes 1635021889

Download model_final_v2_city.pth, then generate the ONNX file according to BiSeNet/tensorrt at master · CoinCheung/BiSeNet · GitHub.