Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• TensorRT Version: the one shipped in the DS-7.0 docker image
• NVIDIA GPU Driver Version (valid for GPU only): 565
• Issue Type (questions, new requirements, bugs): possibly a bug
• How to reproduce the issue?
Context
I want to use nvinferserver's PGIE+SGIE to build a top-down pose estimation pipeline: the PGIE runs a YOLO11 detector and the SGIE runs a top-down pose estimator (RTMPose).
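For reference, the topology I am describing looks roughly like this (a minimal sketch; the source URI, config file names, and muxer settings are placeholders for my real setup):

```cpp
// Minimal GStreamer/C++ sketch of the PGIE -> SGIE topology described above.
// File names and stream settings are placeholders, not my real values.
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "uridecodebin uri=file:///tmp/sample.mp4 ! mux.sink_0 "
      "nvstreammux name=mux batch-size=2 width=1920 height=1080 ! "
      "nvinferserver name=pgie config-file-path=pgie_plugin.txt ! "
      "nvinferserver name=sgie config-file-path=sgie_plugin.txt ! "
      "fakesink",
      &err);
  if (!pipeline) {
    g_printerr ("Failed to build pipeline: %s\n", err ? err->message : "?");
    return -1;
  }
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg) gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```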
Details
- I attach `object_meta` data to each `frame_meta`, and also change the `num_obj_meta` member (roughly as in the sketch after this list).
- My expectation is that the SGIE input/output batch size should equal the sum of `num_obj_meta` over the frames the PGIE transfers; e.g., with 2 frames where `num_obj_meta=3` and `num_obj_meta=4`, the SGIE's `nvds_frame_meta_list` length should be 7. In my test, however, it is not.
- I use the batch meta lock and `bInferDone` to manage the synchronization; I am not sure whether this is enough to avoid a race condition.

Assumption 2 is important for me because I need to match the SGIE results to each frame.
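For concreteness, this is roughly how I attach the detections in a probe upstream of the SGIE (a minimal sketch; the box values and component id are placeholders):

```cpp
// Sketch of attaching one detection as object meta (values are placeholders).
#include "nvdsmeta.h"

static void
attach_detection (NvDsBatchMeta *batch_meta, NvDsFrameMeta *frame_meta,
                  float left, float top, float width, float height)
{
  nvds_acquire_meta_lock (batch_meta);   /* the batch meta lock mentioned above */

  NvDsObjectMeta *obj = nvds_acquire_obj_meta_from_pool (batch_meta);
  obj->unique_component_id = 1;          /* PGIE unique-id */
  obj->class_id = 0;
  obj->confidence = 0.9f;
  obj->rect_params.left = left;
  obj->rect_params.top = top;
  obj->rect_params.width = width;
  obj->rect_params.height = height;

  nvds_add_obj_meta_to_frame (frame_meta, obj, NULL);
  /* In my current code I also adjust frame_meta->num_obj_meta at this point,
     as described above; whether that is needed on top of
     nvds_add_obj_meta_to_frame is part of what I would like confirmed. */

  nvds_release_meta_lock (batch_meta);
}
```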
Other Questions
There is still something that confuses me: nvinferserver attaches all the metadata to the PGIE's `frame_meta`, but when I debug inside the SGIE's `inferenceDone()`, I notice the SGIE does not do this at all. Does the metadata in the SGIE actually work under nvinferserver's hood, especially `bInferDone`?
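To illustrate what I am checking, this is roughly how I inspect the metadata inside `inferenceDone()` (a sketch following the nvdsinferserver custom-process sample pattern; the class name is mine, and I assume the `OPTION_NVDS_*` keys exposed by the nvdsinferserver headers):

```cpp
// Sketch of my SGIE custom processor's inferenceDone(); class name is mine.
#include <vector>
#include "infer_custom_process.h"   /* nvdsinferserver custom-process API */
#include "nvdsmeta.h"

using namespace nvdsinferserver;

class RTMPoseProcessor : public IInferCustomProcessor {
public:
  void supportInputMemType (InferMemType &type) override { type = InferMemType::kCpu; }
  NvDsInferStatus extraInputProcess (const std::vector<IBatchBuffer *> &,
      std::vector<IBatchBuffer *> &, const IOptions *) override
  { return NVDSINFER_SUCCESS; }
  void notifyError (NvDsInferStatus) override {}

  NvDsInferStatus inferenceDone (
      const IBatchArray *outputs, const IOptions *inOptions) override
  {
    std::vector<NvDsFrameMeta *> frameMetaList;
    if (inOptions->hasValue (OPTION_NVDS_FRAME_META_LIST)) {
      inOptions->getValueArray (OPTION_NVDS_FRAME_META_LIST, frameMetaList);
    }
    /* Expectation under test: with 2 frames of num_obj_meta = 3 and 4,
       frameMetaList.size() should be 7 for this object-mode SGIE. */
    guint total = 0;
    for (NvDsFrameMeta *fm : frameMetaList) {
      total += fm->num_obj_meta;
      /* bInferDone is what I rely on for synchronization; whether the SGIE
         path updates it is exactly my question. */
      g_print ("frame %u bInferDone=%d num_obj_meta=%u\n",
          fm->frame_num, fm->bInferDone, fm->num_obj_meta);
    }
    g_print ("frameMetaList.size()=%zu sum(num_obj_meta)=%u\n",
        frameMetaList.size (), total);
    return NVDSINFER_SUCCESS;
  }
};
```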
• Requirement details (this is for a new requirement; include the module name, i.e., which plugin or which sample application, and the function description)
PGIE config
config.pbtxt
name: "YOLO11-Det"
platform: "tensorrt_plan"
default_model_filename: "end2end.engine"
max_batch_size: 0
input [
{
name: "input"
data_type: TYPE_FP32
dims: [ -1, 3, 640, 640 ]
}
]
output [
{
name: "dets"
data_type: TYPE_FP32
dims: [ 128, 7 ]
}
]
instance_group [
{
count: 1
kind: KIND_GPU
gpus: [ 0 ]
}
]
version_policy {
specific: { versions: [1]}
}
config for nvinferserver
config file
```
input_control {
  async_mode: false
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 0
}
infer_config {
  gpu_ids: [0]
  backend {
    triton {
      model_name: "YOLO11-Det"
      version: -1
      model_repo {
        root: "xxx"
        backend_configs: [
          {
            backend: "tensorrt_plan"
          }
        ]
        strict_model_config: false
        min_compute_capacity: 8.0
        log_level: 2
      }
    }
    inputs [
      {
        name: "input"
        dims: [ 3, 640, 640 ]
        data_type: TENSOR_DT_FP32
      }
    ]
    outputs [
      {
        name: "dets"
        max_buffer_bytes: 4096
      }
    ]
    output_mem_type: MEMORY_TYPE_CPU
  }
  preprocess {
    network_format: IMAGE_FORMAT_BGR
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 1
    frame_scaling_filter: 1
    symmetric_padding: 0
    normalize {
      scale_factor: 0.003921569
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    other {}
  }
  extra {
    custom_process_funcion: "YOLO11Det"
    output_buffer_pool_size: 128
  }
  custom_lib {
    path: "xxx.so"
  }
}
```
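For completeness, my understanding is that `custom_process_funcion` names an `extern "C"` factory symbol that nvinferserver resolves from `custom_lib.path`. My .so exports roughly the following (signature taken from the DeepStream custom-process sample; the processor class here is a no-op stand-in for my real post-processing):

```cpp
#include <cstdint>
#include <vector>
#include "infer_custom_process.h"

using namespace nvdsinferserver;

/* No-op stand-in for my real YOLO11 post-processing class. */
class YOLO11DetProcessor : public IInferCustomProcessor {
public:
  void supportInputMemType (InferMemType &type) override { type = InferMemType::kCpu; }
  NvDsInferStatus extraInputProcess (const std::vector<IBatchBuffer *> &,
      std::vector<IBatchBuffer *> &, const IOptions *) override
  { return NVDSINFER_SUCCESS; }
  NvDsInferStatus inferenceDone (const IBatchArray *, const IOptions *) override
  { return NVDSINFER_SUCCESS; }
  void notifyError (NvDsInferStatus) override {}
};

/* The exported name must match custom_process_funcion above ("YOLO11Det"). */
extern "C" IInferCustomProcessor *
YOLO11Det (const char *config, uint32_t configLen)
{
  return new YOLO11DetProcessor;
}
```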
config for plugin
```
unique-id: 1
process-mode: 1
input-tensor-meta: 0
config-file-path: xxx.txt
```
SGIE config
config.pbtxt
name: "RTMPose-m"
platform: "tensorrt_plan"
default_model_filename: "end2end.engine"
max_batch_size: 32
input [
{
name: "input"
data_type: TYPE_FP32
dims: [ 3, 384, 288 ]
}
]
output [
{
name: "simcc_x"
data_type: TYPE_FP32
dims: [ 26, -1 ]
},
{
name: "simcc_y"
data_type: TYPE_FP32
dims: [ 26, -1 ]
}
]
instance_group [
{
count: 1
kind: KIND_GPU
gpus: [ 0 ]
}
]
version_policy {
specific: { versions: [1]}
}
dynamic_batching {
preferred_batch_size: [ 32 ]
max_queue_delay_microseconds: 1500
}
nvinferserver config
config file
```
input_control {
  async_mode: false
  operate_on_gie_id: 1
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  secondary_reinfer_interval: 0
}
infer_config {
  gpu_ids: [0]
  backend {
    triton {
      model_name: "RTMPose-m"
      version: -1
      model_repo {
        root: "xxx"
        backend_configs: [
          {
            backend: "tensorrt_plan"
          }
        ]
        strict_model_config: false
        log_level: 2
      }
    }
    inputs [
      {
        name: "input"
        dims: [3, 384, 288]
        data_type: TENSOR_DT_FP32
      }
    ]
    outputs [
      {
        name: "simcc_x"
      },
      {
        name: "simcc_y"
      }
    ]
    output_mem_type: MEMORY_TYPE_CPU
  }
  preprocess {
    network_format: IMAGE_FORMAT_BGR
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 1
    symmetric_padding: 0
    normalize {
      scale_factor: 0.017352074
      channel_offsets: [ 123.675, 116.28, 103.53 ]
    }
  }
  postprocess {
    other {}
  }
  extra {
    custom_process_funcion: "RTMPose"
  }
  custom_lib {
    path: "xxx.so"
  }
}
output_control {
  output_tensor_meta: true
}
```
config for plugin
```
unique-id: 2
process-mode: 2
input-tensor-meta: 0
infer-on-gie-id: 1
config-file-path: xxx.txt
```
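Finally, since `output_tensor_meta: true` is set on the SGIE, my plan for matching results back to frames (Assumption 2 above) is to read the per-object tensor meta in a pad probe after the SGIE. A sketch of that probe (the function name is mine, and I am assuming nvinferserver attaches `NvDsInferTensorMeta` to each object's user meta the same way nvinfer does):

```cpp
// Pad probe after the SGIE: walk frames -> objects -> user meta and pick up
// the SGIE's output tensors per object ("simcc_x"/"simcc_y" decoding elided).
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"   /* NvDsInferTensorMeta, NVDSINFER_TENSOR_OUTPUT_META */

static GstPadProbeReturn
sgie_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *lf = batch_meta->frame_meta_list; lf; lf = lf->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) lf->data;
    for (NvDsMetaList *lo = frame_meta->obj_meta_list; lo; lo = lo->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) lo->data;
      for (NvDsMetaList *lu = obj_meta->obj_user_meta_list; lu; lu = lu->next) {
        NvDsUserMeta *um = (NvDsUserMeta *) lu->data;
        if (um->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;
        NvDsInferTensorMeta *tm = (NvDsInferTensorMeta *) um->user_meta_data;
        if (tm->unique_id != 2)   /* SGIE unique-id from the plugin config */
          continue;
        /* tm->output_layers_info / tm->out_buf_ptrs_host hold simcc_x and
           simcc_y for this object; keypoints decoded here, per frame. */
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

I would attach this with `gst_pad_add_probe()` (type `GST_PAD_PROBE_TYPE_BUFFER`) on the SGIE's src pad.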