I’m trying to take one model’s output and run custom Python pre-processing logic on it before passing it as input to another model. Can this scenario be implemented on the Jetson Xavier NX with DeepStream using the Triton Python backend?
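To make the idea concrete, here is a minimal sketch of the kind of Triton Python backend model (model.py) I have in mind; the tensor names INPUT0/OUTPUT0 and the normalization step are placeholders for my real logic:

```python
# model.py - minimal Triton Python backend sketch; tensor names
# (INPUT0/OUTPUT0) and the normalization step are placeholders.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Called once when the model is loaded; args carries model config info.
        pass

    def execute(self, requests):
        # The Python backend expects exactly one response per request.
        responses = []
        for request in requests:
            in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            data = in_tensor.as_numpy()

            # Placeholder for the custom pre-processing between the two models.
            processed = (data - data.mean()) / (data.std() + 1e-6)

            out_tensor = pb_utils.Tensor("OUTPUT0", processed.astype(np.float32))
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor])
            )
        return responses

    def finalize(self):
        # Called once when the model is unloaded.
        pass
```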
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Could you elaborate on your custom Python pre-processing logic? nvinferserver supports preprocessing; you only need to modify the configuration file. Please refer to the DeepStream sample deepstream\samples\configs\deepstream-app-triton\source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt, which assembles three models: the first detects vehicles, and the following models classify vehicle type, color, and car make.
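As a rough illustration of what that configuration looks like (the model name, batch settings, and normalization values below are placeholders, not values taken from the sample), the preprocess section of an nvinferserver config file is declared like this:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  backend {
    triton {
      model_name: "my_detector"   # placeholder model name
      version: -1
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 0.0039215686   # 1/255, placeholder value
      channel_offsets: [0, 0, 0]
    }
  }
}
```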
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
The DeepStream nvinferserver plugin leverages the open-source Triton library to do inference, and Triton supports custom model architectures. Please refer to Gst-nvinferserver — DeepStream 6.2 Release documentation.
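If you do need custom Python logic between the two models, one possible sketch (under the assumption that both models and the Python backend model are served by the same Triton instance) is a Triton ensemble that routes the first model's output through the Python backend model and into the second model. All model and tensor names below are hypothetical:

```
# config.pbtxt for a hypothetical Triton ensemble; every model
# and tensor name here is a placeholder.
name: "two_stage_pipeline"
platform: "ensemble"
max_batch_size: 1
input [
  { name: "RAW_INPUT", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "FINAL_OUTPUT", data_type: TYPE_FP32, dims: [ 10 ] }
]
ensemble_scheduling {
  step [
    {
      model_name: "model_a"            # first model
      model_version: -1
      input_map { key: "INPUT", value: "RAW_INPUT" }
      output_map { key: "OUTPUT", value: "intermediate_a" }
    },
    {
      model_name: "custom_preprocess"  # Python backend model
      model_version: -1
      input_map { key: "INPUT0", value: "intermediate_a" }
      output_map { key: "OUTPUT0", value: "intermediate_b" }
    },
    {
      model_name: "model_b"            # second model
      model_version: -1
      input_map { key: "INPUT", value: "intermediate_b" }
      output_map { key: "OUTPUT", value: "FINAL_OUTPUT" }
    }
  ]
}
```

The Python backend model referenced in the middle step would have its own config.pbtxt declaring backend: "python" with matching INPUT0/OUTPUT0 tensors.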