Deepstream Triton Ensemble Model Error

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.0.1
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03

Hi,
Currently I am developing a DeepStream 6 pipeline which calls an nvinferserver component for inferencing. I have attempted to implement an ensemble model, but am running into the following issue:

Immediately before this error, the OSD output briefly appears, but quickly closes before any inference results are rendered.

Here is a larger screenshot with more information preceding the application exit:

Any help would be appreciated, as this is currently a blocker.

Attached below are all config files for the models, the ensemble, and nvinferserver:
densenet_onnx_config.pbtxt (1.9 KB)
dstest1_pgie_inferserver_config.txt (1.5 KB)
ensemble_config.pbtxt (1.5 KB)
ssd_inception_config.pbtxt (2.5 KB)

Thanks for the report!
Will check and get back.


Hi @beauy152 ,
What input resolution and batch size does the dstest1_pgie_inferserver_config.txt TRT engine support? It seems it failed with the error below because you didn’t provide the input dims in the config.
Is it possible to provide us a repro package so that we can untar it and run a command to reproduce the issue?

Hi @mchi ,
The muxer is set to a 1920x1080 stream resolution with batch_size set to 1 (though I have tested with higher batch sizes to the same result). The input dims are specified in each model’s ‘.pbtxt’ file that I attached above.

I have attached the repo below with a short Readme.
7_multi-inference.tar.xz (520.4 KB)

Thanks so much for any help with this - documentation in this area appears very limited.

Thanks again -

Hi @beauy152 ,
In your first post there are 4 configs, but 7_multi-inference.tar.xz only includes one config. Is it the same repo?
Also, it seems 7_multi-inference\ensemble_test\1 is empty. Is it enough to reproduce the issue?

Yes, you’re right, I forgot to attach the triton_model_repo. I have fixed that and adjusted the file paths this time around:
7_multi-inference.tar.xz (88.5 MB)

And yes, ‘7_multi-inference/ensemble_test/1’ should be empty, as the ‘ensemble’ is only a configuration which points to the other models to be used, in this case ‘ssd_inception_v2_coco_2018_01_28’ and ‘Primary_Detector’.
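
For reference, the ensemble’s config.pbtxt only declares the ensemble inputs/outputs and maps them onto the composing models; a rough sketch of its general shape is below. The tensor names, dims and layer mappings here are illustrative placeholders only, and the attached ensemble_config.pbtxt is the actual file:

name: "ensemble_test"
platform: "ensemble"
max_batch_size: 1
input [
  {
    name: "INPUT_IMAGE"         # placeholder ensemble tensor name
    data_type: TYPE_UINT8
    dims: [ 300, 300, 3 ]       # placeholder dims
  }
]
output [
  {
    name: "DETECTIONS"          # placeholder ensemble tensor name
    data_type: TYPE_FP32
    dims: [ 100, 7 ]            # placeholder dims
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "ssd_inception_v2_coco_2018_01_28"
      model_version: -1
      input_map {
        key: "image_tensor"     # model input layer
        value: "INPUT_IMAGE"    # ensemble tensor feeding it
      }
      output_map {
        key: "detection_boxes"  # model output layer (placeholder)
        value: "DETECTIONS"     # ensemble tensor it maps to
      }
    }
    # further step { } entries wire up 'Primary_Detector' in the same way
  ]
}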

@beauy152 ,
Regarding ensemble models: I can see there are 2 input layers, image_tensor and data_0. DS-Triton (gst-nvinferserver) supports single-layer input by default. For extra input layers, you need to follow the Gst-nvinferserver manual
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinferserver.html#custom-process-interface-iinfercustomprocessor-for-extra-input-lstm-loop-output-data-postprocess
and derive from and implement the interface class:

class IInferCustomProcessor {
public:
    virtual void supportInputMemType(InferMemType& type);  // return supported memory type for extraInputs
    virtual bool requireInferLoop() const;
    virtual NvDsInferStatus extraInputProcess(const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) = 0;
    virtual NvDsInferStatus inferenceDone(const IBatchArray* outputs, const IOptions* inOptions) = 0;
    virtual void notifyError(NvDsInferStatus status) = 0;
};

  • Preprocess: The function extraInputProcess is for processing the extra input layers, e.g. if image_tensor is the primary input, then data_0 will be taken as an extra input; you need to fill the correct data into extraInputs.
  • Postprocess: If you need postprocessing, this class also has the interface inferenceDone() to implement customized parsing and attaching. For any function you don’t need, just return NVDSINFER_SUCCESS.
  • Example: We have examples of how to do the preprocess and postprocess in the FasterRCNN source file for DS-Triton (a rough sketch also follows after the file paths below):

/opt/nvidia/deepstream/deepstream-6.1/sources/objectDetector_FasterRCNN/nvdsinfer_custom_impl_fasterRCNN/nvdsinferserver_custom_process.cpp
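
As a rough illustration only (the class name, memory type choice and fill logic below are assumptions, not the sample code; the sample file above is the reference), a derived processor for the extra data_0 layer could look like this:

// Hypothetical sketch of a processor for the extra "data_0" input layer.
// The interface comes from nvdsinferserver; the class name and the fill
// logic are placeholders showing where your own code goes.
#include <vector>
#include "infer_custom_process.h"   // assumed header location, see the DS includes

using namespace nvdsinferserver;

class EnsembleExtraInputProcessor : public IInferCustomProcessor {
public:
    // Memory type requested for the extraInputs buffers (check
    // infer_datatypes.h for the available InferMemType values).
    void supportInputMemType(InferMemType& type) override { type = InferMemType::kCpu; }

    // No LSTM-style feedback loop is needed here.
    bool requireInferLoop() const override { return false; }

    // Called after the primary input (image_tensor) has been prepared by the
    // plugin; the contents of the remaining layers (data_0) must be written
    // into the pre-allocated buffers in extraInputs.
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs,
        const IOptions* options) override
    {
        if (extraInputs.empty() || !extraInputs[0]) {
            return NVDSINFER_INVALID_PARAMS;
        }
        // TODO: query the buffer description for dims/datatype and fill the
        // data_0 tensor, as demonstrated in nvdsinferserver_custom_process.cpp
        // of the objectDetector_FasterRCNN sample.
        return NVDSINFER_SUCCESS;
    }

    // No customized output parsing in this sketch; just report success.
    NvDsInferStatus inferenceDone(const IBatchArray* outputs, const IOptions* inOptions) override
    {
        return NVDSINFER_SUCCESS;
    }

    void notifyError(NvDsInferStatus status) override {}
};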

The Gst-nvinferserver config entry file is

/opt/nvidia/deepstream/deepstream-6.1/sources/objectDetector_FasterRCNN/config_triton_inferserver_primary_fasterRCNN_custom.txt
The key is to make the config have the following lines:

infer_config {
  extra {
    custom_process_funcion: "CreateInferServerCustomProcess"
  }
  custom_lib {
    path: "/path/to/libnvdsinferserver_custom_process.so"
  }
}

Make sure the function CreateInferServerCustomProcess is implemented to create an object instance of IInferCustomProcessor, for example:
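
A minimal sketch of that factory, assuming the extern "C" entry point signature used by the FasterRCNN sample (please verify against the sample source in your installation):

// Hypothetical factory sketch: nvinferserver loads custom_lib.path and looks up
// this symbol to create the processor instance. EnsembleExtraInputProcessor is
// the placeholder class sketched earlier in this thread.
extern "C" IInferCustomProcessor*
CreateInferServerCustomProcess(const char* config, uint32_t configLen)
{
    return new EnsembleExtraInputProcessor();
}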

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.