Failed to use custom sequence preprocess with Triton Inference Server

Please provide complete information as applicable to your setup.

• Hardware Platform – GPU
• DeepStream Version – 6.1
• GPU – NVIDIA GeForce RTX 2080 Ti
• Triton Version – nvcr.io/nvidia/tritonserver:22.05-py3

Hi, I'm using a custom sequence preprocess for my own algorithm.
Triton receives the data successfully, but the tensor is mostly zeros and inference doesn't work.
For example:
values.shape: (1, 9, 1080, 1920)
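As a quick diagnostic (a hedged sketch, not the actual server code), you can dump the received tensor to NumPy and measure how much of it is zero. The variable name `values` matches the printout above; the "only the first frame was filled" pattern is a hypothetical illustration:

```python
import numpy as np

# Hypothetical example: a received batch where only the first of three
# stacked frames was actually filled by the preprocess library.
values = np.zeros((1, 9, 1080, 1920), dtype=np.float32)
values[0, :3] = 0.5  # pretend only channels 0-2 (frame 1) carry data

zero_fraction = float(np.mean(values == 0))
print(f"values.shape: {values.shape}")
print(f"fraction of zeros: {zero_fraction:.2f}")  # 6 of 9 channels empty
```

If the zero fraction is close to 2/3, that points at the sequence batching (only one of the three frames being copied) rather than at the model itself.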

Here is my preprocess config:
[property]
enable=1
target-unique-ids=1
network-input-shape= 4;9;1080;1920
network-color-format=1
network-input-order=2
tensor-data-type=1
tensor-name=frames
operate-on-gie-id=1
processing-width=1920
processing-height=1080
scaling-pool-memory-type=0
scaling-pool-compute-hw=0
scaling-filter=0
tensor-buf-pool-size=18

custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/samples/configs/small_object_det/custom_sequence_preprocess/libnvds_custom_sequence_preprocess.so
custom-tensor-preparation-function=CustomSequenceTensorPreparation

[user-configs]
channel-scale-factors=1;1;1
channel-mean-offsets=0;0;0
stride=1
subsample=0

[group-0]
src-ids=0
process-on-roi=1
roi-params-src-0=0;0;1920;1080
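For reference, the `channel-scale-factors` and `channel-mean-offsets` entries in `[user-configs]` describe a per-channel normalization. The exact semantics are defined by the custom library; this NumPy sketch assumes the usual `(pixel - offset) * scale` form, which with the values above is an identity transform:

```python
import numpy as np

# per-channel parameters from [user-configs] above
scales = np.array([1.0, 1.0, 1.0], dtype=np.float32)
offsets = np.array([0.0, 0.0, 0.0], dtype=np.float32)

frame = np.full((3, 4, 4), 100.0, dtype=np.float32)  # dummy CHW frame
# assumed normalization: (pixel - offset) * scale, applied per channel
norm = (frame - offsets[:, None, None]) * scales[:, None, None]
print(norm[0, 0, 0])  # unchanged with identity parameters
```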

And my nvinferserver config:
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    inputs: [{
      name: "frames"
      dims: [9, 1080, 1920]
    }]
    outputs: [{
      name: "output"
    }]
    triton {
      model_name: "tracker"
      version: -1
      grpc {
        url: "localhost:8501"
        enable_cuda_buffer_sharing: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    other {}
  }
  extra {
    output_buffer_pool_size: 8
    copy_input_to_host_buffers: false
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 0
}
output_control {
  output_tensor_meta: true
}

PLEASE HELP!!!

Firstly, the nvinferserver plugin is open source in the DS 6.2 SDK; you can check the code if you are interested.
Which sample are you testing? What is the model used for? What is the whole media pipeline?

We need to narrow down this issue. Can the model work correctly with a third-party tool? Can you make sure the configuration is right? Alternatively, you can add logs in nvinferserver to check whether the preprocessing is correct.

I'm referring to apps/deepstream-preprocess-test,
deepstream-3d-action-recognition/config_preprocess_2d_custom.txt,
and config_triton_infer_primary_2d_action.txt.

My model is based on YOLOv5 and detects small objects. It needs three frames for detection.
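For context, the 9-channel input in the config above (`network-input-shape= 4;9;1080;1920`) corresponds to three RGB frames stacked along the channel axis. A minimal NumPy sketch of that layout (the data and variable names here are mine, for illustration only):

```python
import numpy as np

H, W = 1080, 1920
# three consecutive RGB frames in CHW layout (hypothetical data)
frames = [np.random.rand(3, H, W).astype(np.float32) for _ in range(3)]

# stack along the channel axis -> (9, H, W), then add the batch dimension
tensor = np.concatenate(frames, axis=0)[np.newaxis, ...]
print(tensor.shape)  # (1, 9, 1080, 1920)
```

If only one of the three frame slots is being filled by the sequence library, the other six channels stay zero, which matches the symptom above.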

pipeline:
streammux.link(preprocess)
preprocess.link(pgie)
pgie.link(tiler)
tiler.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)

I can connect to the Triton server with the Python tritonclient and get results successfully.

I also tried a single-frame preprocess successfully; only the sequence version doesn't work.

If testing with the Python tool works, it is likely a preprocessing issue in the C code. You can add logs in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess to narrow down this issue.

Noticing that the model needs three frames, what do you mean by "try single frame preprocess successfully"?

Thank you for your help.
My problem has been solved.
In "libnvds_custom_sequence_preprocess" only FP32 is supported, but my config set tensor-data-type to uint8.
After modifying the C code, it works!

THANKS!

