Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.0.1
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
Currently I am developing a DeepStream 6 pipeline which calls an nvinferserver component for inferencing. I have attempted to implement an ensemble model, but am running into the following issue:
Immediately before this error, the OSD output briefly appears, but quickly closes before any inferencing results are rendered.
Here is a larger screenshot with more information preceding the application exit:
Any help would be appreciated, as this is currently a blocker.
Attached below are all config files for the models, ensemble, and inferserver:
densenet_onnx_config.pbtxt (1.9 KB)
dstest1_pgie_inferserver_config.txt (1.5 KB)
ensemble_config.pbtxt (1.5 KB)
ssd_inception_config.pbtxt (2.5 KB)
Thanks for the report!
Will check and get back.
Hi @beauy152,
What input resolution and batch size does the dstest1_pgie_inferserver_config.txt TRT engine support? It seems it failed with the error below because you didn't provide input dims in the config.
Is it possible to provide us a repro package that we can untar and run with a single command to repro the issue?
Hi @mchi,
The muxer is set to 1920x1080 stream resolution with batch_size set to 1 (though I have tested with higher batch sizes, with the same result). The input dims are specified in each model's '.pbtxt' file, attached above.
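For reference, the input dims entries in a Triton model config.pbtxt take this general form; the tensor name, type, and shape here are illustrative, not copied from the attached files:

```
input [
  {
    name: "image_tensor"
    data_type: TYPE_UINT8
    dims: [ -1, -1, 3 ]
  }
]
```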
I have attached the repro package below with a short Readme.
7_multi-inference.tar.xz (520.4 KB)
Thanks so much for any help with this - documentation in this area appears very limited.
Thanks again -
In your first post there are 4 configs, but 7_multi-inference.tar.xz only includes one config; is it the same repro?
Also, 7_multi-inference\ensemble_test\1 seems to be empty; is it enough to repro the issue?
Yes, you're right, I forgot to attach the triton_model_repo. I have fixed that and adjusted the file paths this time around:
7_multi-inference.tar.xz (88.5 MB)
And yes, '7_multi-inference/ensemble_test/1' should be empty, as the 'ensemble' is only a configuration which points to the other models to be used; in this case they are 'ssd_inception_v2_coco_2018_01_28' & 'Primary_Detector'.
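For context, a Triton ensemble model of this kind generally has a config.pbtxt shaped like the sketch below. The model names match this thread, but the tensor names, shapes, and input/output mappings are illustrative, not copied from the attached ensemble_config.pbtxt:

```
name: "ensemble_test"
platform: "ensemble"
max_batch_size: 1
input [
  { name: "INPUT_IMAGE" data_type: TYPE_UINT8 dims: [ -1, -1, 3 ] },
  { name: "INPUT_DATA" data_type: TYPE_FP32 dims: [ 3, 224, 224 ] }
]
output [
  { name: "DETECTIONS" data_type: TYPE_FP32 dims: [ 100, 7 ] },
  { name: "CLASS_PROBS" data_type: TYPE_FP32 dims: [ 1000 ] }
]
ensemble_scheduling {
  step [
    {
      model_name: "ssd_inception_v2_coco_2018_01_28"
      model_version: -1
      input_map { key: "image_tensor" value: "INPUT_IMAGE" }
      output_map { key: "detection_boxes" value: "DETECTIONS" }
    },
    {
      model_name: "Primary_Detector"
      model_version: -1
      input_map { key: "data_0" value: "INPUT_DATA" }
      output_map { key: "prob" value: "CLASS_PROBS" }
    }
  ]
}
```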
Regarding ensemble models: I can see there are 2 input layers (image_tensor and data_0). DS-Triton (gst-nvinferserver) supports single-layer input by default. For extra input layers, you need to follow the Gst-nvinferserver manual to derive and implement the interface class IInferCustomProcessor:
virtual void supportInputMemType(InferMemType& type); // return supported memory type for extraInputProcess
virtual bool requireInferLoop() const;
virtual NvDsInferStatus extraInputProcess(const std::vector<IBatchBuffer*>& primaryInputs, std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) = 0;
virtual NvDsInferStatus inferenceDone(const IBatchArray* outputs, const IOptions* inOptions) = 0;
virtual void notifyError(NvDsInferStatus status) = 0;
- Preprocess: The function extraInputProcess is for processing the extra input layers, e.g. if image_tensor is the primary input, then data_0 will be taken as an extra input, and you need to fill the correct data into the data_0 buffer inside extraInputProcess.
- Postprocess: If you need postprocessing, this class also has the interface inferenceDone() to implement customized parsing and attaching. If there is any function you don't need, just leave its implementation empty.
- Example: We have examples of how to do preprocess and postprocess for DS-Triton in the fasterRCNN source file; a minimal sketch follows below.
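For illustration only, here is a minimal sketch of a derived custom processor and its factory function. The class name EnsembleCustomProcessor and the buffer-filling logic are assumptions for this sketch, not the actual fasterRCNN sample code:

```cpp
#include <cstdint>
#include <vector>

#include "infer_custom_process.h"  // from DeepStream sources/includes/nvdsinferserver

using namespace nvdsinferserver;

// Hypothetical custom processor for an ensemble with one extra input layer.
class EnsembleCustomProcessor : public IInferCustomProcessor {
public:
    // Tell DS-Triton which memory type the extra input buffers should use.
    void supportInputMemType(InferMemType& type) override { type = InferMemType::kCpu; }

    // No dedicated inference loop needed for this simple case.
    bool requireInferLoop() const override { return false; }

    // Called per batch: primaryInputs holds the default layer (image_tensor)
    // already filled by DS-Triton; fill the extra layer (data_0) here.
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs,
        const IOptions* options) override {
        // e.g. write preprocessed tensor data into extraInputs[0] here.
        return NVDSINFER_SUCCESS;
    }

    // Optional: parse output tensors and attach custom metadata.
    NvDsInferStatus inferenceDone(
        const IBatchArray* outputs, const IOptions* inOptions) override {
        return NVDSINFER_SUCCESS;
    }

    void notifyError(NvDsInferStatus status) override {}
};

// Factory referenced from the nvinferserver config; must use C linkage.
extern "C" IInferCustomProcessor* CreateInferServerCustomProcess(
    const char* config, uint32_t configLen) {
    return new EnsembleCustomProcessor();
}
```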
- Config: In the Gst-nvinferserver config entry file, the key is to have lines that load your custom library and name the custom-process function, and to make sure the function
CreateInferServerCustomProcess is implemented to create an object instance of your derived class.
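A sketch of the relevant config lines, assuming your custom library is built as libnvds_custom_process.so (the path and library name are placeholders for your own build; note the field name is spelled custom_process_funcion in the nvdsinferserver proto):

```
infer_config {
  # ... backend, preprocess, postprocess settings ...
  custom_lib {
    path: "./libnvds_custom_process.so"
  }
  extra {
    custom_process_funcion: "CreateInferServerCustomProcess"
  }
}
```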
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.