Please provide complete information as applicable to your setup.
• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only): 525.60.13
• Issue Type: questions, new requirements, bugs
I have a Triton server running in a Docker container that serves two models:
• PeopleNet
• Multitask classification
From the DeepStream app (in a different Docker container) I am able to make gRPC calls to the PeopleNet model, which is my PGIE. After the PGIE I have a tracker, and after the tracker I want to make gRPC calls to the multitask classification model, which will be my SGIE.
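For reference, the relevant wiring in my deepstream-app source config looks roughly like this (the primary config file name here is just a placeholder; configs/config_triton_grpc_infer_secondary_model1.txt is the secondary config that appears in the logs below):

[primary-gie]
enable=1
# plugin-type=1 selects gst-nvinferserver instead of gst-nvinfer
plugin-type=1
config-file=config_triton_grpc_infer_primary_peoplenet.txt

[tracker]
enable=1
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so

[secondary-gie0]
enable=1
plugin-type=1
# run only on objects produced by the PGIE (gie-unique-id 1)
gie-unique-id=2
operate-on-gie-id=1
config-file=configs/config_triton_grpc_infer_secondary_model1.txt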
Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
It is an nvinferserver PGIE detection + SGIE classification sample.
The suggested fix works and makes sense, but I also get the following error logs at the very beginning (only once, and then it works fine):
WARNING: infer_proto_utils.cpp:271 update max_bath_size to 1 in config:configs/config_triton_grpc_infer_secondary_model1.txt
INFO: infer_grpc_backend.cpp:169 TritonGrpcBackend id:2 initialized for model: vehicletypenet_tao
0:00:00.763948703 179 0x7f39d40022f0 ERROR nvinferserver gstnvinferserver.cpp:375:gst_nvinfer_server_logger: nvinferserver[UID 2]: Error in specifyBackendDims() <infer_grpc_context.cpp:140> [UID = 2]: input tensor: is not found in model:vehicletypenet_tao from config file
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: infer_proto_utils.cpp:144 auto-update preprocess.network_format to IMAGE_FORMAT_RGB
INFO: infer_grpc_backend.cpp:169 TritonGrpcBackend id:1 initialized for model: peoplenet_tao
0:00:00.846156938 179 0x7f39d40022f0 ERROR nvinferserver gstnvinferserver.cpp:375:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in specifyBackendDims() <infer_grpc_context.cpp:140> [UID = 1]: input tensor: is not found in model:peoplenet_tao from config file
Decodebin child added: source
Can you help me understand why these errors come up, and how do I get rid of them?
Secondly, if at a later point I want a batch size > 1 for a different model, my understanding is that I will have to make changes on the Triton server side, and that max_batch_size and dims will change in the config on the client side. Is my understanding correct?
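For concreteness, I mean keeping settings like these in sync on both sides (the model name, tensor name, and dims below are placeholders, not my actual values):

# Triton server side: the model's config.pbtxt
name: "some_classifier"
platform: "tensorrt_plan"
max_batch_size: 4
input [
  {
    name: "input_1"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 224, 224 ]
  }
]

# DeepStream client side: nvinferserver config
infer_config {
  unique_id: 2
  max_batch_size: 4
  # (backend, preprocess, etc. unchanged)
}

(As far as I understand, the underlying TensorRT engine would also have to be built to support that batch size.)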
It should be that the input configuration does not match the model's config.pbtxt. You don't need to configure inputs in nvinferserver's config; please refer to the sample /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc/config_infer_secondary_plan_engine_carmake.txt
nvinferserver is open source in DeepStream 6.2, so you can add logs to debug if needed.
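For reference, a minimal gRPC secondary config in the spirit of that sample looks roughly like this; note there is no backend.inputs block (the values here are illustrative, please check the shipped file for the exact content):

infer_config {
  unique_id: 2
  max_batch_size: 16
  backend {
    triton {
      model_name: "Secondary_CarMake"
      version: -1
      grpc {
        url: "localhost:8001"
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_BGR
    tensor_order: TENSOR_ORDER_LINEAR
  }
}
input_control {
  # run as SGIE on objects from the PGIE with unique_id 1
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1
}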
@fanzh As per my understanding, input { dims: ... } is needed in my case because I am using enable_cuda_buffer_sharing: true. When I have enable_cuda_buffer_sharing: false, I don't need to specify input { dims: ... }.
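The fragment I am referring to looks like this in my secondary config (the URL and dims here are illustrative):

backend {
  # dims specified so the input shape is known up front
  inputs [
    {
      dims: [ 3, 224, 224 ]
    }
  ]
  triton {
    model_name: "vehicletypenet_tao"
    version: -1
    grpc {
      url: "triton-server:8001"
      enable_cuda_buffer_sharing: true
    }
  }
}

(I notice the inputs entry has dims but no name, which may be related to the empty tensor name printed in the error.)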
But the main question is that the app runs just fine and these errors pop up only once, at the very beginning. I still don't understand why that is the case.
/** input tensors settings, optional */
repeated InputLayer inputs = 1;
As the comment in nvdsinferserver_config.proto shows, this setting is optional.
You can also find a detailed description of enable_cuda_buffer_sharing in that file.
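For reference, the InputLayer message in that proto looks roughly like this (paraphrased from memory; check the copy shipped with your DeepStream version for the exact definition):

/** Network input layer information */
message InputLayer {
  /** input tensor name, optional */
  string name = 1;
  /** fixed inference shape, only required when the backend has wildcard dims */
  repeated int32 dims = 2;
  /** tensor data type, optional, default TENSOR_DT_NONE */
  TensorDataType data_type = 3;
}

Since name is optional as well, a dims-only entry is parsed with an empty tensor name, which would be consistent with the empty name in your "input tensor: is not found" error.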