Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): RTX 3060
• DeepStream Version: nvcr.io/nvidia/deepstream:6.0.1-triton
• NVIDIA GPU Driver Version (valid for GPU only): 510.73.05
• Issue Type (questions, new requirements, bugs): bug
• How to reproduce the issue: remove object_control { bbox_filter { min_width: 64 min_height: 64 } } from the SGIE config
• Plugin: nvinferserver
My PGIE (YOLOv5) detects some small objects, and I am using Triton Inference Server (ONNX backend) as the SGIE. When I remove object_control { bbox_filter { min_width: 64 min_height: 64 } } from tis_configs.txt, the pipeline stops responding and gets stuck. I ran it with log level 3 and am providing the logs below.
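For reference, here is a minimal sketch of where that block sits in an nvinferserver SGIE config (prototxt). Field names follow the DeepStream nvinferserver protobuf schema; the operate_on_gie_id and process_mode values shown are illustrative assumptions, not my exact config:

```
# Sketch of the relevant part of the SGIE config (tis_configs.txt).
# operate_on_gie_id is a placeholder for the PGIE's unique id.
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS   # run on objects cropped from the PGIE
  operate_on_gie_id: 1                      # assumed PGIE unique id
  object_control {
    bbox_filter {
      min_width: 64    # removing this whole object_control block triggers the hang
      min_height: 64
    }
  }
}
```

With the bbox_filter present, the SGIE presumably skips objects smaller than 64x64 before sending them to Triton; without it, every small YOLOv5 detection is cropped, scaled to the 3x448x448 input, and queued for inference as well.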
Warning: gst-library-error-quark: NvInferServer asynchronous mode is applicable for secondary classifiers only. Turning off asynchronous mode (5): gstnvinferserver_impl.cpp(352): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:secondary-triton-classifier
ai_core_tis | INFO: PID 1: Decodebin child added: qtdemux0
ai_core_tis | INFO: PID 1: Decodebin child added: multiqueue0
ai_core_tis | INFO: PID 1: Decodebin child added: h264parse0
ai_core_tis | INFO: PID 1: Decodebin child added: capsfilter0
ai_core_tis | INFO: PID 1: Decodebin child added: nvv4l2decoder0
ai_core_tis | INFO: PID 1: In cb_newpad
ai_core_tis | INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
ai_core_tis | 0 INPUT kFLOAT data 3x640x640
ai_core_tis | 1 OUTPUT kFLOAT prob 6001x1x1
ai_core_tis |
ai_core_tis | I0629 10:17:29.757675 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version -1
ai_core_tis | I0629 10:17:29.757783 1 infer_request.cc:524] prepared: [0x0x7f2940001610] request id: 1, model: tis_classifier, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
ai_core_tis | original inputs:
ai_core_tis | [0x0x7f2940001918] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | override inputs:
ai_core_tis | inputs:
ai_core_tis | [0x0x7f2940001918] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | original requested outputs:
ai_core_tis | output6
ai_core_tis | requested outputs:
ai_core_tis | output6
ai_core_tis |
ai_core_tis | I0629 10:17:29.757998 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version -1
ai_core_tis | I0629 10:17:29.758040 1 infer_request.cc:524] prepared: [0x0x7f2940002100] request id: 2, model: tis_classifier, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
ai_core_tis | original inputs:
ai_core_tis | [0x0x7f2940002938] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | override inputs:
ai_core_tis | inputs:
ai_core_tis | [0x0x7f2940002938] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | original requested outputs:
ai_core_tis | output6
ai_core_tis | requested outputs:
ai_core_tis | output6
ai_core_tis |
ai_core_tis | I0629 10:17:29.758141 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version -1
ai_core_tis | I0629 10:17:29.758189 1 infer_request.cc:524] prepared: [0x0x7f2940002bd0] request id: 3, model: tis_classifier, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
ai_core_tis | original inputs:
ai_core_tis | [0x0x7f2940002e58] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | override inputs:
ai_core_tis | inputs:
ai_core_tis | [0x0x7f2940002e58] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | original requested outputs:
ai_core_tis | output6
ai_core_tis | requested outputs:
ai_core_tis | output6
ai_core_tis |
ai_core_tis | I0629 10:17:29.758226 1 onnxruntime.cc:2138] model tis_classifier, instance tis_classifier, executing 1 requests
ai_core_tis | I0629 10:17:29.758234 1 onnxruntime.cc:1096] TRITONBACKEND_ModelExecute: Running tis_classifier with 1 requests
ai_core_tis | 2022-06-29 10:17:29.758325765 [I:onnxruntime:, sequential_executor.cc:157 Execute] Begin execution
ai_core_tis | 2022-06-29 10:17:29.763030358 [I:onnxruntime:, sequential_executor.cc:481 Execute] [Memory] ExecutionFrame statically allocates 88712192 bytes for Cuda
ai_core_tis | 2022-06-29 10:17:29.763041417 [I:onnxruntime:, sequential_executor.cc:486 Execute] [Memory] ExecutionFrame dynamically allocates 256 bytes for Cuda
ai_core_tis | I0629 10:17:29.809286 1 infer_response.cc:165] add response output: output: output6, type: FP32, shape: [1,15]
ai_core_tis | I0629 10:17:29.809477 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version 1
ai_core_tis | I0629 10:17:29.809583 1 onnxruntime.cc:2138] model tis_classifier, instance tis_classifier, executing 1 requests
ai_core_tis | I0629 10:17:29.809590 1 onnxruntime.cc:1096] TRITONBACKEND_ModelExecute: Running tis_classifier with 1 requests
ai_core_tis | 2022-06-29 10:17:29.809721892 [I:onnxruntime:, sequential_executor.cc:157 Execute] Begin execution
ai_core_tis | I0629 10:17:29.811013 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version -1
ai_core_tis | I0629 10:17:29.811090 1 infer_request.cc:524] prepared: [0x0x7f2940003510] request id: 4, model: tis_classifier, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
ai_core_tis | original inputs:
ai_core_tis | [0x0x7f29400037c8] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | override inputs:
ai_core_tis | inputs:
ai_core_tis | [0x0x7f29400037c8] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | original requested outputs:
ai_core_tis | output6
ai_core_tis | requested outputs:
ai_core_tis | output6
ai_core_tis |
ai_core_tis | 2022-06-29 10:17:29.856756611 [I:onnxruntime:, sequential_executor.cc:481 Execute] [Memory] ExecutionFrame statically allocates 88712192 bytes for Cuda
ai_core_tis | 2022-06-29 10:17:29.856777447 [I:onnxruntime:, sequential_executor.cc:486 Execute] [Memory] ExecutionFrame dynamically allocates 256 bytes for Cuda
ai_core_tis | I0629 10:17:29.866276 1 infer_response.cc:165] add response output: output: output6, type: FP32, shape: [1,15]
ai_core_tis | I0629 10:17:29.866433 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version 1
ai_core_tis | I0629 10:17:29.866558 1 onnxruntime.cc:2138] model tis_classifier, instance tis_classifier, executing 1 requests
ai_core_tis | I0629 10:17:29.866582 1 onnxruntime.cc:1096] TRITONBACKEND_ModelExecute: Running tis_classifier with 1 requests
ai_core_tis | 2022-06-29 10:17:29.866719860 [I:onnxruntime:, sequential_executor.cc:157 Execute] Begin execution
ai_core_tis | I0629 10:17:29.866831 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version -1
ai_core_tis | I0629 10:17:29.866913 1 infer_request.cc:524] prepared: [0x0x7f2940003b80] request id: 5, model: tis_classifier, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
ai_core_tis | original inputs:
ai_core_tis | [0x0x7f2940003e08] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | override inputs:
ai_core_tis | inputs:
ai_core_tis | [0x0x7f2940003e08] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | original requested outputs:
ai_core_tis | output6
ai_core_tis | requested outputs:
ai_core_tis | output6
ai_core_tis |
ai_core_tis | 2022-06-29 10:17:29.871263449 [I:onnxruntime:, sequential_executor.cc:481 Execute] [Memory] ExecutionFrame statically allocates 88712192 bytes for Cuda
ai_core_tis | 2022-06-29 10:17:29.871274805 [I:onnxruntime:, sequential_executor.cc:486 Execute] [Memory] ExecutionFrame dynamically allocates 256 bytes for Cuda
ai_core_tis | I0629 10:17:29.917678 1 infer_response.cc:165] add response output: output: output6, type: FP32, shape: [1,15]
ai_core_tis | I0629 10:17:29.917801 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version 1
ai_core_tis | I0629 10:17:29.917928 1 onnxruntime.cc:2138] model tis_classifier, instance tis_classifier, executing 1 requests
ai_core_tis | I0629 10:17:29.917952 1 onnxruntime.cc:1096] TRITONBACKEND_ModelExecute: Running tis_classifier with 1 requests
ai_core_tis | 2022-06-29 10:17:29.918038368 [I:onnxruntime:, sequential_executor.cc:157 Execute] Begin execution
ai_core_tis | I0629 10:17:29.919424 1 model_repository_manager.cc:638] GetInferenceBackend() 'tis_classifier' version -1
ai_core_tis | I0629 10:17:29.919491 1 infer_request.cc:524] prepared: [0x0x7f2940004030] request id: 6, model: tis_classifier, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
ai_core_tis | original inputs:
ai_core_tis | [0x0x7f29400042b8] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | override inputs:
ai_core_tis | inputs:
ai_core_tis | [0x0x7f29400042b8] input: input, type: FP32, original shape: [1,3,448,448], batch + shape: [1,3,448,448], shape: [1,3,448,448]
ai_core_tis | original requested outputs:
ai_core_tis | output6
ai_core_tis | requested outputs:
ai_core_tis | output6
ai_core_tis |
ai_core_tis | 2022-06-29 10:17:29.962154780 [I:onnxruntime:, sequential_executor.cc:481 Execute] [Memory] ExecutionFrame statically allocates 88712192 bytes for Cuda
ai_core_tis | 2022-06-29 10:17:29.962171503 [I:onnxruntime:, sequential_executor.cc:486 Execute] [Memory] ExecutionFrame dynamically allocates 256 bytes for Cuda
ai_core_tis | I0629 10:17:29.975007 1 infer_response.cc:165] add response output: output: output6, type: FP32, shape: [1,15]
classifier_tis_configs.txt (775 Bytes)
detector_configs.txt (2.1 KB)