Hey,
We are building on the deepstream-python SSD parser example: deepstream_python_apps/apps/deepstream-ssd-parser at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
Please help me clear up the following confusions:
What is the difference between ssd_parser.py and the (.so) library object added in the config file for the pgie?
If ssd_parser.py is being used for parsing, is it mandatory to link the .so object for parsing?
And can this (.so) library object be written in Python in some manner?
We have a custom SSD model with only one class to classify. How can we create a custom (.so) object for parsing?
Can we just use ssd_parser.py without the .so object? If yes, how?
I apologize for any naive questions; I am very new to this topic.
Thanks in advance.
Hi.
1. They are roughly the same, but one implements the parser in Python while the other is a C++ library.
2. You don’t need to link the .so.
Please see how deepstream_ssd_parser.py parses the bboxes directly in Python:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-ssd-parser/deepstream_ssd_parser.py#L284
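In other words, the parsing happens in Python on the model's raw output tensors. A minimal sketch of the idea, with made-up numbers standing in for the real output layers (this is not the sample's actual tensor layout):

```python
# Sketch of what a Python bbox parser does with raw output tensors:
# threshold the scores, then discard boxes that are too small.
# All values below are made-up stand-ins for the model's real outputs.
MIN_BOX_WIDTH = 32
MIN_BOX_HEIGHT = 32
THRESHOLD = 0.5

def parse_boxes(scores, boxes):
    """scores: list of floats; boxes: list of (left, top, width, height)."""
    kept = []
    for score, (left, top, w, h) in zip(scores, boxes):
        if score < THRESHOLD:
            continue  # below the confidence threshold
        if w < MIN_BOX_WIDTH or h < MIN_BOX_HEIGHT:
            continue  # discard boxes that are too small
        kept.append((left, top, w, h, score))
    return kept

scores = [0.9, 0.3, 0.8]
boxes = [(10, 10, 100, 120), (5, 5, 200, 200), (0, 0, 10, 10)]
print(parse_boxes(scores, boxes))  # only the first box survives
```

The real sample does the same kind of filtering, then attaches the surviving boxes to the frame metadata via pyds.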
3. You can link it in the model config file, for example:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-segmentation/dstest_segmentation_config_semantic.txt#L79
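For a detector, the relevant keys in the pgie config are `custom-lib-path` and `parse-bbox-func-name`. A hypothetical fragment (the library path and function name below are placeholders for your own build, not values from the sample):

```
[property]
# Path to your compiled custom parser library (placeholder path):
custom-lib-path=/path/to/libnvds_infercustomparser.so
# Exported C function inside that .so that parses the bboxes (placeholder name):
parse-bbox-func-name=NvDsInferParseCustomSSD
```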
4. For simplicity, you can update ssd_parser.py directly.
5. Yes. Please update the parameters below:
import io
import pyds

CLASS_NB = 91
ACCURACY_ALL_CLASS = 0.5
UNTRACKED_OBJECT_ID = 0xffffffffffffffff
IMAGE_HEIGHT = 1080
IMAGE_WIDTH = 1920
MIN_BOX_WIDTH = 32
MIN_BOX_HEIGHT = 32
TOP_K = 20
IOU_THRESHOLD = 0.3
OUTPUT_VIDEO_NAME = "./out.mp4"

def get_label_names_from_file(filepath):
    """ Read a label file and convert it to string list """
    f = io.open(filepath, "r")
    labels = f.readlines()
    labels = [elm[:-1] for elm in labels]  # strip trailing newline
    f.close()
    return labels
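For a one-class model you would set CLASS_NB = 1 and shrink the label file to a single line. A small sketch of how the sample's label reader behaves in that case (the file location and the "person" label are hypothetical):

```python
import io
import os
import tempfile

def get_label_names_from_file(filepath):
    """ Read a label file and convert it to a string list
    (same one-label-per-line logic as the sample's helper). """
    f = io.open(filepath, "r")
    labels = f.readlines()
    labels = [elm[:-1] for elm in labels]  # strip trailing newline
    f.close()
    return labels

# Hypothetical single-class label file for a one-class SSD model.
path = os.path.join(tempfile.mkdtemp(), "labels.txt")
with io.open(path, "w") as f:
    f.write(u"person\n")

print(get_label_names_from_file(path))  # ['person']
```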
Thanks.
Hey @AastaLLL
I tried removing the (.so) lib from the config file so that ssd_parser.py would parse the output, but I get this error:
0:00:07.556276795 3678 0x3cd770f0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:07.556342472 3678 0x3cd770f0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:734> [UID = 1]: Failed to parse bboxes
Hi,
The error complains that the expected coverage/bbox output layers do not exist.
If your output layer names changed, please also update the sample accordingly:
        # ... tail of the previous helper in the sample ...
        return None
    return res

def nvds_infer_parse_custom_tf_ssd(output_layer_info, detection_param, box_size_param,
                                   nms_param=NmsParam()):
    """ Get data from output_layer_info and fill object_list
    with several NvDsInferObjectDetectionInfo.

    Keyword arguments:
    - output_layer_info : represents the neural network's output.
        (NvDsInferLayerInfo list)
    - detection_param : contains per class threshold.
        (DetectionParam)
    - box_size_param : element containing information to discard boxes
        that are too small. (BoxSizeParam)
    - nms_param : contains information for performing non maximal
        suppression. (NmsParam)

    Return:
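Concretely, the sample looks its output layers up by name, so a renamed layer makes the lookup return None and the parse fails. A sketch of that lookup with stand-in layer objects (the layer names below are hypothetical, taken from a typical TF SSD export, not necessarily your model's):

```python
from collections import namedtuple

# Stand-in for NvDsInferLayerInfo; only the attribute the lookup
# needs is modeled here.
Layer = namedtuple("Layer", ["layer_name"])

def layer_finder(output_layer_info, name):
    """Return the layer whose layer_name matches, else None
    (same lookup style as the sample)."""
    for layer in output_layer_info:
        if layer.layer_name == name:
            return layer
    return None

# Hypothetical output layers of a custom SSD model.
layers = [Layer("num_detections"), Layer("detection_boxes"),
          Layer("detection_scores"), Layer("detection_classes")]

print(layer_finder(layers, "detection_boxes"))  # found
print(layer_finder(layers, "old_layer_name"))   # None -> update the name
```

If any lookup returns None, check your model's actual output layer names and update the strings the parser searches for.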
Thanks.