DeepStream app with Triton server

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only) cuda 11.4
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am not able to get metadata while running inference from my custom DeepStream app. Could you provide an example?

Hi,
I think you can refer to the DeepStream sample apps/sample_apps/deepstream-user-metadata-test.

It demonstrates how to add custom or user-specific metadata to any component of DeepStream. The test code attaches a 16-byte array filled with user data to the chosen component. The data is retrieved in another component. This app uses resnet10.caffemodel for detection.

Is there an equivalent Python app? deepstream-user-metadata-test is in C.

Yes, you can refer to the DeepStream Python samples under deepstream_python_apps/apps at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub, e.g. deepstream-imagedata-multistream.

I'm doing image inference.

If I use nvinfer, I get the metadata, but if I use nvinferserver, I don't get the metadata.

I'm able to use the "Primary_Detector" model through nvinferserver, where I do get the metadata.
For custom use cases I'm not able to get it.
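When nvinferserver yields no metadata for a custom model, one thing to check is whether the plugin config asks for raw tensor output; the deepstream-ssd-parser sample enables this so that a Python pad probe can do the parsing itself. A minimal fragment of the nvinferserver config, where the model name and repo path are placeholders and exact field names vary slightly between DeepStream releases:

```
infer_config {
  unique_id: 5
  max_batch_size: 1
  backend {
    triton {            # named "trt_is" in older DeepStream releases
      model_name: "your_model"            # placeholder
      version: -1
      model_repo {
        root: "../../triton_model_repo"   # placeholder path
      }
    }
  }
}
output_control {
  # Emit raw output tensors (NvDsInferTensorMeta) so a Python probe,
  # like the one in deepstream-ssd-parser, can parse them.
  output_tensor_meta: true
}
```

Without `output_tensor_meta: true`, the probe never sees the model's output layers, which matches the symptom of getting metadata with nvinfer but not with nvinferserver.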

Can we customize the above example parser for all kinds of models, such as ONNX or TensorRT engine files?

Yes!

The 4 output layers of my ONNX model are ["output", "573", "625", "677"].

How should I fill the num_detection_layer, score_layer, class_layer, and box_layer parameters in ssd_parser.py?

```python
def nvds_infer_parse_custom_tf_ssd(output_layer_info, detection_param, box_size_param,
                                   nms_param=NmsParam()):
    """ Get data from output_layer_info and fill object_list
        with several NvDsInferObjectDetectionInfo.

        Keyword arguments:
        - output_layer_info : represents the neural network's output.
            (NvDsInferLayerInfo list)
        - detection_param : contains per class threshold.
            (DetectionParam)
        - box_size_param : element containing information to discard boxes
            that are too small. (BoxSizeParam)
        - nms_param : contains information for performing non maximal
            suppression. (NmsParam)

        Return:
        - Bounding boxes. (NvDsInferObjectDetectionInfo list)
    """
    num_detection_layer = layer_finder(output_layer_info, "num_detections")
    score_layer = layer_finder(output_layer_info, "detection_scores")
    class_layer = layer_finder(output_layer_info, "detection_classes")
    box_layer = layer_finder(output_layer_info, "detection_boxes")
```
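To adapt this to the custom ONNX outputs, only the layer names change. Which of "573", "625", "677" holds the scores, classes, or boxes depends on how the model was exported, so the mapping below is only an assumption; check the actual tensor shapes (e.g. with Netron or the Triton model config) first. A self-contained sketch with a stand-in for `NvDsInferLayerInfo`:

```python
class LayerInfo:
    """Stand-in for pyds.NvDsInferLayerInfo (only the field we need here)."""
    def __init__(self, name):
        self.layerName = name

def layer_finder(output_layer_info, name):
    """Return the output layer whose layerName matches, or None."""
    for layer in output_layer_info:
        if layer.layerName == name:
            return layer
    return None

# The 4 output layers reported for the custom ONNX model
output_layer_info = [LayerInfo(n) for n in ("output", "573", "625", "677")]

# Hypothetical assignment; verify against the real model before using it
num_detection_layer = layer_finder(output_layer_info, "output")
score_layer = layer_finder(output_layer_info, "573")
class_layer = layer_finder(output_layer_info, "625")
box_layer = layer_finder(output_layer_info, "677")

print(all(l is not None
          for l in (num_detection_layer, score_layer, class_layer, box_layer)))
# prints: True
```

If any lookup returns None, the parser should bail out early, exactly as the sample does when a TF-SSD layer name is missing.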

I'm able to get the metadata, but the bounding box is not visible. How do I draw the bounding boxes in the deepstream-ssd-parser example?
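In deepstream-ssd-parser the boxes only become visible when the probe fills obj_meta.rect_params (including a nonzero border width) before calling pyds.nvds_add_obj_meta_to_frame, and an nvdsosd element sits downstream to draw them. A minimal pure-Python sketch of the scaling step, with a hypothetical function name:

```python
def make_rect(box, frame_width, frame_height, border_width=2):
    """Scale a normalized SSD box (top, left, bottom, right in [0, 1])
    to pixel values for obj_meta.rect_params. A border_width of 0 means
    the box is parsed correctly but drawn invisibly by nvdsosd, which is
    a common cause of 'metadata present, no boxes'."""
    top, left, bottom, right = box
    return {
        "left":   max(0.0, left * frame_width),
        "top":    max(0.0, top * frame_height),
        "width":  max(0.0, (right - left) * frame_width),
        "height": max(0.0, (bottom - top) * frame_height),
        "border_width": border_width,  # must be > 0 to be visible
    }

rect = make_rect((0.1, 0.2, 0.5, 0.6), 300, 300)
print(rect["border_width"] > 0)
# prints: True
```

If the metadata is there but nothing appears on screen, check that border_width is nonzero, that a border color is set, and that nvdsosd is placed before the sink in the pipeline.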

@h9945394143 Please allow me to close this topic as the initial question is solved, feel free to submit a new topic to discuss other topics, thanks.

It's fixed, kindly close it.