Deepstream_test_3.py using your own custom model

Hi,

I followed the steps in jetson-inference/pytorch-collect-detection.md at master · dusty-nv/jetson-inference · GitHub to create a custom model. How do I make deepstream_test_3.py use the custom model and labels instead?
I looked into dstest3_pgie_config.txt to see if I could change the path to the new custom model, but I don't know where to make the change.

Hi,

Please update dstest3_pgie_config.txt based on your customized model; deepstream_test_3.py picks the file up by name, so no script change is needed (see the snippet after the example below).

For example:

[property]
gpu-id=0
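# 0.0039215697906911373 is float32(1/255): scales 8-bit pixel values into [0,1]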
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
onnx-file=[your/onnx/file/path]
labelfile-path=[your/label/file/path]
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=[your/class/number]
interval=0
gie-unique-id=1
output-blob-names=[your/output/layer/name]
...
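
As for deepstream_test_3.py itself, no code change should be needed: the script points its nvinfer element at this config file through the element's config-file-path property. A minimal sketch of the relevant lines (element and file names as in the sample app):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# deepstream_test_3.py creates the primary inference engine and hands it the
# config file by name, so editing dstest3_pgie_config.txt is enough.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "dstest3_pgie_config.txt")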

Thanks.

I got an error message. I'm not sure what you meant by output-blob-names=[your/output/layer/name], so I kept the default output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid.
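
(To check the actual output layer names of the exported model rather than guessing, the ONNX file can be inspected directly; a minimal sketch using the onnx Python package, with the model path taken from the config below:)

import onnx

# Load the exported model and print its graph-level input/output tensor names;
# output-blob-names in the nvinfer config must match the output names exactly.
model = onnx.load("/home/tommy/jetson-inference/python/training/detection/ssd/models/personFrontYard/ssd-mobilenet.onnx")
print("inputs: ", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])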


[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
onnx-file=/home/tommy/jetson-inference/python/training/detection/ssd/models/personFrontYard/ssd-mobilenet.onnx
labelfile-path=/home/tommy/jetson-inference/python/training/detection/ssd/models/personFrontYard/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
#pre-cluster-threshold=0.2
pre-cluster-threshold=0.15
eps=0.15
group-threshold=1


Using winsys: x11
0:00:00.403910415 4521 0x7f2c002560 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files

Input filename: /home/tommy/jetson-inference/python/training/detection/ssd/models/personFrontYard/ssd-mobilenet.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.6
Domain:
Model version: 0
Doc string:

ERROR: ModelImporter.cpp:472 In function importModel:
[4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:02.473530861 4521 0x7f2c002560 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:02.473602477 4521 0x7f2c002560 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:02.473640109 4521 0x7f2c002560 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:02.473702894 4521 0x7f2c002560 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:02.473762478 4521 0x7f2c002560 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: dstest3_pgie_configFrontYardCustom1.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest3_pgie_configFrontYardCustom1.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

Hi,

Sorry for the late update.

The error is caused by the implicit vs. explicit batch handling of the ONNX parser: this version of the parser only accepts networks created with an explicit batch dimension.
Could you update the config file with the two settings below and check whether it works? (A standalone sketch of what the parser expects follows the settings.)

[property]
gpu-id=0
...
force-explicit-batch-dim=1
force-implicit-batch-dim=0
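
For reference, the assertion in the log corresponds to TensorRT's EXPLICIT_BATCH network-creation flag. A minimal standalone sketch (TensorRT Python API; model path taken from the log above) of parsing the ONNX file the way the parser expects:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(TRT_LOGGER)
# The ONNX parser only supports networks with an explicit batch dimension,
# so the network must be created with the EXPLICIT_BATCH flag set.
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("/home/tommy/jetson-inference/python/training/detection/ssd/models/personFrontYard/ssd-mobilenet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))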

Thanks.