Sysmalloc assertion failure with ensemble model in DeepStream-Triton

I’m trying to use an ensemble model in DeepStream-Triton with the PyTorch and Python backends. The pipeline is:
Pre-processing → Infer-segmentation → Post-processing

  • Pre-processing Config
name: "pre_segment_person"
backend: "python"
max_batch_size: 0

input [
  {
    name: "raw_images"
    data_type: TYPE_UINT8
    dims: [-1, 3, 128, 64]
  }
]
output [
  {
    name: "segment_inputs"
    data_type: TYPE_FP32
    dims: [-1, 3, 128, 64]
  }
]
parameters: {
  key: "EXECUTION_ENV_PATH",
  value: {string_value: <path-to-env>}
}
  • Infer Config
name: "infer_segment_person"
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [-1, 3, 128, 64]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [-1, 128, 64, 6]
  }
]

  • Post-processing Config
name: "post_segment_person"
backend: "python"
max_batch_size: 0

input [
  {
    name: "segment_outputs"
    data_type: TYPE_FP32
    dims: [-1, 128, 64, 6]
  },
  {
    name: "raw_images"
    data_type: TYPE_UINT8
    dims: [-1, 3, 128, 64]
  }
]
output [
  {
    name: "human_colors"
    data_type: TYPE_INT32
    dims: [-1, 4, 3, 3]
  }
]

parameters: {
  key: "EXECUTION_ENV_PATH",
  value: {string_value: <path-to-env>}
}
  • Ensemble Config
name: "ens_segment_person"
platform: "ensemble"
max_batch_size: 0
input [
  {
    name: "raw_images"
    data_type: TYPE_UINT8
    dims: [-1, 3, 128, 64]
  }
]
output [
  {
    name: "human_colors"
    data_type: TYPE_INT32
    dims: [-1, 4, 3, 3]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "pre_segment_person"
      model_version: -1
      input_map {
        key: "raw_images"
        value: "raw_images"
      }
      output_map {
        key: "segment_inputs"
        value: "segment_inputs"
      }
    },
    {
      model_name: "infer_segment_person"
      model_version: -1
      input_map {
        key: "input__0"
        value: "segment_inputs"
      }
      output_map {
        key: "output__0"
        value: "segment_outputs"
      }
    },
    {
      model_name: "post_segment_person"
      model_version: -1
      input_map {
        key: "raw_images"
        value: "raw_images"
      }
      input_map {
        key: "segment_outputs"
        value: "segment_outputs"
      }
      output_map {
        key: "human_colors"
        value: "human_colors"
      }
    }
    
  ]
}
  • Deepstream-triton Config
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 0
  backend {
    inputs: [ {
      name: "raw_images"
    }]
    outputs: [
      {name: "human_colors"}

    ]
    trt_is {
      model_name: "ens_segment_person"
      version: -1
      model_repo {
        root: "/deepstream/triton-server/models"
        strict_model_config: false
        log_level: 1
      }
      
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "raw_images"
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    classification {
      custom_parse_classifier_func: "NvDsInferParseCustomPersonColors"
    }
  }

  extra {
    copy_input_to_host_buffers: false
  }

  custom_lib {
      path : "/deepstream/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so"
  }

}
output_control {
  output_tensor_meta: false
  detect_control {
    default_filter {
      bbox_filter {
        min_width: 32
        min_height: 32
        }
      }
    }
}

I also built a custom classification parser named NvDsInferParseCustomPersonColors:

// Shape of the output array is [1, 4, 3, 3]: four 3x3 blocks of color values
std::string attrString;
int cc = 0;
for (int k = 0; k < 4; k++) {
    if (k != 0) attrString += ",";
    attrString += "(";
    for (int j = 0; j < 9; j++) {
        if (j != 0) attrString += ",";
        attrString += std::to_string(human_colors[cc + j]);
    }
    attrString += ")";
    cc += 9;
}

NvDsInferAttribute person_color_attr;
person_color_attr.attributeIndex = 1;
person_color_attr.attributeValue = 2;
person_color_attr.attributeConfidence = 1.0;
person_color_attr.attributeLabel = strdup(attrString.c_str());

attrList.push_back(person_color_attr);
if (log_enable != NULL && std::stoi(log_enable)) {
    std::cout << "Person Color C++ Parser: " << person_color_attr.attributeLabel
              << " attrString: " << attrString << std::endl;
}

However, I get this error after several steps that run normally:

Person colors python backend:  [[[[ 58 251  62]
   [250  57  79]
   [ 55  74  54]]
  [[ 45  64  55]
   [ 62  53 252]
   [251  45  65]]
  [[ 95  90 184]
   [245 105 119]
   [ 86 251  89]]
  [[ -1  -1  -1]
   [ -1  -1  -1]
   [ -1  -1  -1]]]]
Person Color C++ Parser: (58,251,62,250,57,79,55,74,54),(45,64,55,62,53,252,251,45,65),(95,90,184,245,105,119,86,251,89),(-1,-1,-1,-1,-1,-1,-1,-1,-1) attrString: (58,251,62,250,57,79,55,74,54),(45,64,55,62,53,252,251,45,65),(95,90,184,245,105,119,86,251,89),(-1,-1,-1,-1,-1,-1,-1,-1,-1)
Person colors python backend:  [[[[ 90 251  92]
   [109  91 247]
   [250  86 106]]
  [[247  60  95]
   [ 64 251  53]
   [ 82  64 251]]
  [[252 115 115]
   [115 116  93]
   [115  93 252]]
  [[ -1  -1  -1]
   [ -1  -1  -1]
   [ -1  -1  -1]]]]
Person Color C++ Parser: (90,251,92,109,91,247,250,86,106),(247,60,95,64,251,53,82,64,251),(252,115,115,115,116,93,115,93,252),(-1,-1,-1,-1,-1,-1,-1,-1,-1) attrString: (90,251,92,109,91,247,250,86,106),(247,60,95,64,251,53,82,64,251),(252,115,115,115,116,93,115,93,252),(-1,-1,-1,-1,-1,-1,-1,-1,-1)
Person colors python backend:  [[[[199  97  88]
   [ 80 250  92]
   [ 86  77 252]]
  [[ 66  67 252]
   [ 60 248  68]
   [248  86  68]]
  [[252  68  71]
   [ 68 252  67]
   [ 70  68 252]]
  [[246  82  81]
   [ 69 250  84]
   [ 69  67 251]]]]
Person Color C++ Parser: (199,97,88,80,250,92,86,77,252),(66,67,252,60,248,68,248,86,68),(252,68,71,68,252,67,70,68,252),(246,82,81,69,250,84,69,67,251) attrString: (199,97,88,80,250,92,86,77,252),(66,67,252,60,248,68,248,86,68),(252,68,71,68,252,67,70,68,252),(246,82,81,69,250,84,69,67,251)
deepstream-app: malloc.c:2379: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.
Person colors python backend:  [[[[243 244 242]
   [236 207 221]
   [209 226 244]]
  [[246 247 246]
   [240 214 226]
   [217 233 244]]
  [[ -1  -1  -1]
   [ -1  -1  -1]
   [ -1  -1  -1]]
  [[ -1  -1  -1]
   [ -1  -1  -1]
   [ -1  -1  -1]]]]
Aborted (core dumped)
0304 09:15:26.671987 6674 pb_stub.cc:737] Non-graceful termination detected.
0304 09:15:26.671985 6590 pb_stub.cc:737] Non-graceful termination detected.
0304 09:15:26.671994 6754 pb_stub.cc:737] Non-graceful termination detected.

But when I hard-code a shorter string, the error disappears:

std::string tmp_string = "test";
person_color_attr.attributeLabel = strdup(tmp_string.c_str());

• Hardware Platform (Jetson / GPU): A100 GPU
• DeepStream Version: 6.0
• DeepStream Container: 6.0-triton

Sorry for the late response. Is this still an issue that needs support? Thanks

Looks the same as

Thanks for your attention. I had packed some metadata and tensors into attributeLabel for easy access, as a workaround, and this causes the error when attributeLabel is longer than 127 characters.
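If packing long strings into attributeLabel is still needed, one defensive workaround (a sketch, assuming the 128-byte limit observed above; `kMaxLabelLen` and `makeBoundedLabel` are my own names, not DeepStream API) is to clamp the string before `strdup`:

```cpp
#include <cstring>
#include <string>

// Assumed limit from the observation above: labels of 128 or more
// characters trigger the sysmalloc assertion, so keep at most 127
// characters plus the terminating NUL.
constexpr std::size_t kMaxLabelLen = 127;

// Return a heap-allocated copy of `s`, truncated to kMaxLabelLen characters.
// DeepStream frees the label later, matching the strdup contract.
char* makeBoundedLabel(const std::string& s) {
    return strdup(s.substr(0, kMaxLabelLen).c_str());
}
```

A cleaner alternative is to attach the full tensor to the frame as user meta and keep attributeLabel for short class names, since it is intended for display labels.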
