nvinferserver: Data corruption while using python backend

  1. Started a standalone Triton docker container on a T4 GPU (x86).
    → The Triton server has the following models:
    a. YOLOX model (onnx-backend)
    b. Post-processing (python-backend)
    c. ensemble_yolox_postprocessing
  2. Started a standalone deepstream-triton docker container on a T4 GPU (x86).
  3. Created a pipeline which uses nvinferserver running
    "ensemble_yolox_postprocessing" in it.
  4. I attached a probe to extract the output tensor from the ensemble
    model, but it turns out the first two numbers in the extracted array are
    corrupted (a sketch of such a probe is shown below).

To further isolate the issue, I returned the hard-coded array [[1,2,3,4,5,6,7]] from model.py in the python backend. I am getting [[0,0,3,4,5,6,7]] inside the DeepStream probe function.
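
The hard-coded return looks roughly like this. It is a simplified sketch of the execute() method in model.py; the output tensor name "OUTPUT0" and the FP32 dtype are placeholders and should match the actual config.pbtxt.

```python
import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    """Post-processing model stripped down to return a constant array."""

    def execute(self, requests):
        responses = []
        for _ in requests:
            # Hard-coded output used to check data integrity end to end.
            # "OUTPUT0" is a placeholder; the real name/dtype come from config.pbtxt.
            out = np.array([[1, 2, 3, 4, 5, 6, 7]], dtype=np.float32)
            out_tensor = pb_utils.Tensor("OUTPUT0", out)
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses
```

Even with this constant output, the probe still prints [[0,0,3,4,5,6,7]].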

Could you please help me resolve this issue?
config.pbtxt (733 Bytes)
config.pbtxt (449 Bytes)
model.py (8.9 KB)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing)
• Requirement details (This is for a new requirement. Include the module name - for which plugin or for which sample application - and the function description)

Hardware Platform: GPU
DeepStream Version: Docker Image: nvcr.io/nvidia/deepstream:6.0-triton

Can you provide the YOLOX model to us? We need to reproduce the failure for debugging purposes.

Hi @Fiona.Chen ,

Thanks for your response.

We can’t share the model with you, but you can reproduce the issue using the centerface model provided by NVIDIA at this link: deepstream_triton_model_deploy/centerface at master · NVIDIA-AI-IOT/deepstream_triton_model_deploy · GitHub

If you execute the code two to three times, you will be able to see corrupted bounding boxes in the top-left corner of the frame.

Docker images used:

  1. Deepstream: nvcr.io/nvidia/deepstream:6.0.1-triton
  2. Triton Inference Server: nvcr.io/nvidia/tritonserver:21.08-py3

Hi,
We are trying to build an application with DeepStream 6.0.1 and Triton 22.02-py3 and have the same issue with an ensemble model, which consists of a YOLO preprocessing python model, a YOLOv5 onnx model and a YOLO post-processing python model. We’ve tried to hard-code the outputs in the same way and got the same results in DeepStream.
We don’t really know how to fix this for now and are ready to share all the configs and models with you, if needed.

I’ve tested with https://github.com/NVIDIA-AI-IOT/deepstream_triton_model_deploy/tree/master/centerface several times, and I can’t find any corrupted bbox in the output video.

To run the sample, the nvcr.io/nvidia/deepstream:6.0.1-triton docker alone is enough.

There has been no update from you for a while, so we assume this is not an issue anymore.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.