Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.0.1
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
Hi,
I am currently developing a DeepStream 6 pipeline that calls an nvinferserver component for inferencing. I have attempted to implement an ensemble model, but am running into the following issue:
Hi @beauy152 ,
What input resolution and batch size does the TRT engine in dstest1_pgie_inferserver_config.txt support? It seems it failed with the error below because you didn't provide input dims in the config.
Is it possible to provide us a repro package so that we can untar it and run a command to reproduce the issue?
Hi @mchi ,
The muxer is set to a 1920×1080 stream resolution with batch_size set to 1 (though I have tested with higher batch sizes with the same result). The input dims are specified in each model's '.pbtxt' file, attached above.
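For reference, the input dims live in each model's Triton config.pbtxt. A minimal sketch of such a config is below; the tensor names, dims, and platform are illustrative placeholders, not taken from the attached files:

```
name: "Primary_Detector"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input_1"            # placeholder input tensor name
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 368, 640 ]      # C, H, W (batch dim excluded)
  }
]
output [
  {
    name: "output_bbox"        # placeholder output tensor name
    data_type: TYPE_FP32
    dims: [ 16, 23, 40 ]
  }
]
```

With max_batch_size > 0, Triton treats the leading batch dimension as implicit, so dims list only the per-sample shape.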
Hi @beauy152
In your first post there are 4 configs, but 7_multi-inference.tar.xz only includes one config. Is it the same repro?
Also, 7_multi-inference\ensemble_test\1 seems to be empty; is it enough to repro the issue?
Yes, you're right, I forgot to attach the triton_model_repo. I have fixed that and adjusted the file paths this time around: 7_multi-inference.tar.xz (88.5 MB)
And yes, '7_multi-inference/ensemble_test/1' should be empty, as the 'ensemble' is only a configuration that points to the other models to be used; in this case they are 'ssd_inception_v2_coco_2018_01_28' and 'Primary_Detector'.
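For context, a Triton ensemble's config.pbtxt is purely declarative, which is why its version directory carries no weights. A hedged sketch of what such a config could look like (all tensor names and dims here are illustrative, not copied from the attached repo):

```
name: "ensemble_test"
platform: "ensemble"
max_batch_size: 1
input [
  {
    name: "INPUT_IMAGE"
    data_type: TYPE_FP32
    dims: [ 3, 368, 640 ]
  }
]
output [
  {
    name: "DETECTIONS"
    data_type: TYPE_FP32
    dims: [ 100, 7 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "Primary_Detector"
      model_version: -1
      # Map the ensemble's tensors onto the composing model's tensors.
      input_map { key: "input_1" value: "INPUT_IMAGE" }
      output_map { key: "output_bbox" value: "DETECTIONS" }
    }
  ]
}
```

Each step in ensemble_scheduling names a composing model and wires its inputs/outputs to the ensemble's tensors; a second step for 'ssd_inception_v2_coco_2018_01_28' would be added the same way.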
Preprocess: the function extraInputProcess is for processing extra input layers, e.g. if image_tensor is the primary input, then data_0 will be taken as an extra input. You need to fill the correct data into extraInputs.
Postprocess: if you need postprocessing, this class also has the interface inferenceDone() to implement customized parsing and attaching. For any function you don't need, just return NVDSINFER_SUCCESS.
Example: we have an example of how to do preprocess and postprocess in the FasterRCNN source for DS-Triton:
/opt/nvidia/deepstream/deepstream-6.1/sources/objectDetector_FasterRCNN/config_triton_inferserver_primary_fasterRCNN_custom.txt
The key is to make the config contain the following lines:
infer_config {
  extra {
    custom_process_funcion: "CreateInferServerCustomProcess"
  }
  custom_lib {
    path: "/path/to/libnvdsinferserver_custom_process.so"
  }
}
Make sure the function CreateInferServerCustomProcess is implemented to create an object instance of IInferCustomProcessor.
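A skeleton of such a custom-process library might look like the following. This is only a sketch: it assumes the IInferCustomProcessor interface from the DeepStream SDK header nvdsinferserver/infer_custom_process.h, and the method signatures are paraphrased from the SDK sample, so verify them against the header you ship with before building.

```cpp
// Sketch of an nvinferserver custom processor. Assumes the
// IInferCustomProcessor interface declared in the DeepStream SDK header
// nvdsinferserver/infer_custom_process.h; signatures paraphrased from
// the SDK sample and should be checked against that header.
#include <cstdint>
#include <vector>
#include <nvdsinferserver/infer_custom_process.h>

using namespace nvdsinferserver;

class EnsembleCustomProcessor : public IInferCustomProcessor {
public:
    // Fill the extra input tensors (e.g. data_0) before inference runs.
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs,
        const IOptions* options) override
    {
        // TODO: write valid data into each buffer in extraInputs.
        return NVDSINFER_SUCCESS;
    }

    // Parse raw output tensors and attach metadata; returning
    // NVDSINFER_SUCCESS with no work is fine if no custom
    // postprocessing is needed.
    NvDsInferStatus inferenceDone(
        const IBatchArray* outputs, const IOptions* inOptions) override
    {
        return NVDSINFER_SUCCESS;
    }

    void notifyError(NvDsInferStatus status) override {}
};

// Factory referenced by custom_process_funcion in the nvinferserver
// config; it must return a new IInferCustomProcessor instance.
extern "C" IInferCustomProcessor* CreateInferServerCustomProcess(
    const char* config, uint32_t configLen)
{
    return new EnsembleCustomProcessor;
}
```

The library is then built as a shared object and referenced via custom_lib { path: ... } in the config above.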
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks