Python binding example of parse-bbox-func-name

Hi, I am changing the sample deepstream_test_1.py.
I am using the Python bindings as I don't know C++.
I have an ONNX model that I trained using Custom Vision.
Do I need to supply a parse function in the config to use it as a detector?

Where can I see an example of a bounding box parser in Python?
Do you have any configs that show how to use custom ONNX models
with the deepstream_test_1.py sample?

Do I need an .so in this use case?

Thanks in advance. I can post any of the files I'm using if you need them.

Glen @ asgtech

Hi,

Yes. You will need to update the bbox parser since the output format is different.

The bounding box parser is specified in the config file rather than in the .py file.
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-test1/dstest1_pgie_config.txt#L32

[property]
...
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so

ONNX files are supported by DeepStream directly.
Updating the file path in the config file should be enough.

[property]
...
onnx-file=../../models/resnet101v2.onnx
labelfile-path=../../models/imagenet1000_labels.txt
...

Thanks.

Hi AastaLLL,

Thank you for taking the time to help me. I am new to this, so getting help is really cool.

  1. Do I implement the NvDsInferParseCustomSSD interface in Python as a function?

  2. How do I generate the libnvdsinfer_custom_impl_ssd.so file for Python?

  3. Do I need an output-blob-names entry, and how do I know what to name it?

I really appreciate your help and time, and I really want to get good at this :)

Glen

Hi,

You will need to implement the custom bbox parser in C++ and compile it into a .so for linking.
The output-blob-names value is the output layer name of your model.
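
If you are not sure what your model's output layer is called, one quick way to check is with the onnx Python package (a minimal sketch; "model.onnx" is a placeholder for the path to your own export):

import onnx

# Load the exported model and print the names of its output layers.
model = onnx.load("model.onnx")
print([output.name for output in model.graph.output])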

Thanks.

Our model comes from Custom Vision and we are exporting it as FP16.
It came with some Python samples.

We are using the Python bindings and sample code to run an ONNX model from Custom Vision.

import numpy as np

class ObjectDetection(object):
    """Class for Custom Vision's exported object detection model."""

    ANCHORS = np.array([[0.573, 0.677], [1.87, 2.06], [3.34, 5.47], [7.88, 3.53], [9.77, 9.17]])
    IOU_THRESHOLD = 0.45
    DEFAULT_INPUT_SIZE = 512 * 512

and so on.

  1. Can we use a Python detector, or is that not an option for this sample?

  2. Do we need to know what type of network was exported from Custom Vision?

  3. How do you know what custom function to use?

  4. How can I tell what type of model Custom Vision exported, and does it matter? (It is an FP16 detection model.)

  5. Do you have a Python bindings sample for running an ONNX model exported from Custom Vision as FP16?

I really appreciate your help in this matter; I am learning a lot.

Glen

Hi,

1. You will need to implement the custom bbox parser in C++.
2. This is indicated in the config file.
Ex.

/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt

[property]
...
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0

3. The library path and function name are also indicated in the config file.
Ex.

/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt

[property]
...
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so

4. This is the model inference precision. Input and output are still float32.
5. It should be similar. Just update the config to use FP16 mode, as shown below.
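
For example, following the mode comment in the SSD config above:

[property]
...
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2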

Thanks.

Hi Glen,

In our upcoming release, you should be able to do the following:
a) Get decoded frames as numpy arrays in a probe function (in RGBA format). You can then run your Python detector on those frames (see the sketch after this list).
b) Use the Triton Inference Server plugin to run inference with your ONNX model, and configure the plugin to attach the raw inference output tensors to the metadata. Then, in a probe function, you can parse those tensors in Python.
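
As a rough sketch of what option (a) could look like: the loop over frame metadata follows the existing deepstream_python_apps probe pattern, while the frame-access call (shown here as pyds.get_nvds_buf_surface) is an assumption about the upcoming binding and may differ in the actual release; run_my_detector is a placeholder for your own code.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Assumed binding: returns the decoded frame as an RGBA numpy array.
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        run_my_detector(frame)  # placeholder: your own Python detector
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK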

Thanks for your interest in DeepStream SDK!
