Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson TX2
• DeepStream Version: 4.0
• JetPack Version: 4.3
• TensorRT Version: 6.0
I trained my object detector in Caffe and want to deploy the trained model with DeepStream on the TX2, but DeepStream fails to parse the model's output.
I modified the deploy.prototxt file according to these instructions:
> Edit the deploy.prototxt file and change all the Flatten layers to Reshape operations with the following parameters:
>
> ```
> reshape_param {
>   shape {
>     dim: 0
>     dim: -1
>     dim: 1
>     dim: 1
>   }
> }
> ```
>
> Update the detection_out layer by adding the keep_count output, for example, add:
>
> ```
> top: "keep_count"
> ```
>
> Rename the deploy.prototxt file to ssd.prototxt and run the sample.
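Concretely, one of my Flatten layers rewritten per the instructions above looks roughly like this (the layer names here are illustrative, taken from a typical SSD deploy.prototxt, not necessarily from my exact model):

```prototxt
# Before: a Flatten layer
# layer {
#   name: "conv4_3_norm_mbox_loc_flat"
#   type: "Flatten"
#   bottom: "conv4_3_norm_mbox_loc_perm"
#   top: "conv4_3_norm_mbox_loc_flat"
#   flatten_param { axis: 1 }
# }

# After: the same layer as a Reshape
layer {
  name: "conv4_3_norm_mbox_loc_flat"
  type: "Reshape"
  bottom: "conv4_3_norm_mbox_loc_perm"
  top: "conv4_3_norm_mbox_loc_flat"
  reshape_param {
    shape { dim: 0 dim: -1 dim: 1 dim: 1 }
  }
}

# And the detection_out layer gains a second top:
layer {
  name: "detection_out"
  type: "DetectionOutput"
  ...
  top: "detection_out"
  top: "keep_count"
}
```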
In the nvinfer config file I set:

```
parse-bbox-func-name=NvDsInferParseCustomResnet
custom-lib-path=libnvds_infercustomparser.so
```
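For context, the relevant part of my nvinfer `[property]` group looks roughly like the following; the model/proto file names are placeholders, and `output-blob-names` is my assumption about how the two output blobs should be declared:

```ini
[property]
# Placeholder file names for illustration
model-file=ssd.caffemodel
proto-file=ssd.prototxt
# Assumed: expose both outputs of the modified detection_out layer
output-blob-names=detection_out;keep_count
parse-bbox-func-name=NvDsInferParseCustomResnet
custom-lib-path=libnvds_infercustomparser.so
```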
I also modified nvdsinfer_custombboxparser.cpp (https://github.com/Kwull/deepstream-4.0.1/blob/master/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp), replacing the expected output layer names:

```
"conv2d_bbox"        -> "detection_out"
"conv2d_cov/Sigmoid" -> "keep_count"
```
The output is many rectangles with a height of 1, so they all appear as horizontal lines. I don't know how to solve this problem.