Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version: 8.4
I have a custom TF2 model, summarized below:
```
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                              Output Shape             Param #   Connected to
==================================================================================================
 input_1 (InputLayer)                      [(None, None, None, 3)]  0         []
 random_flip (RandomFlip)                  (None, None, None, 3)    0         ['input_1[0][0]']
 random_rotation (RandomRotation)          (None, None, None, 3)    0         ['random_flip[0][0]']
 random_zoom (RandomZoom)                  (None, None, None, 3)    0         ['random_rotation[0][0]']
 resizing (Resizing)                       (None, 64, 64, 3)        0         ['random_zoom[0][0]']
 rescaling (Rescaling)                     (None, 64, 64, 3)        0         ['resizing[0][0]']
 conv2d (Conv2D)                           (None, 62, 62, 64)       1792      ['rescaling[0][0]']
 batch_normalization (BatchNormalization)  (None, 62, 62, 64)       256       ['conv2d[0][0]']
 max_pooling2d (MaxPooling2D)              (None, 20, 20, 64)       0         ['batch_normalization[0][0]']
 conv2d_1 (Conv2D)                         (None, 18, 18, 128)      73856     ['max_pooling2d[0][0]']
 max_pooling2d_1 (MaxPooling2D)            (None, 9, 9, 128)        0         ['conv2d_1[0][0]']
 conv2d_2 (Conv2D)                         (None, 7, 7, 256)        295168    ['max_pooling2d_1[0][0]']
 max_pooling2d_2 (MaxPooling2D)            (None, 3, 3, 256)        0         ['conv2d_2[0][0]']
 flatten (Flatten)                         (None, 2304)             0         ['max_pooling2d_2[0][0]']
 dense (Dense)                             (None, 128)              295040    ['flatten[0][0]']
 dense_1 (Dense)                           (None, 128)              295040    ['flatten[0][0]']
 dropout (Dropout)                         (None, 128)              0         ['dense[0][0]']
 dropout_1 (Dropout)                       (None, 128)              0         ['dense_1[0][0]']
 gender_output (Dense)                     (None, 2)                258       ['dropout[0][0]']
 age_output (Dense)                        (None, 1)                129       ['dropout_1[0][0]']
==================================================================================================
Total params: 961,539
Trainable params: 961,411
Non-trainable params: 128
```
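For context, the model is built roughly along these lines (the augmentation factors, activations and dropout rates below are my assumptions; the layer layout and output shapes match the summary above):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Rough sketch of the model above; exact factors/activations are assumed.
inputs = layers.Input(shape=(None, None, 3), name="input_1")
x = layers.RandomFlip(name="random_flip")(inputs)
x = layers.RandomRotation(0.1, name="random_rotation")(x)
x = layers.RandomZoom(0.1, name="random_zoom")(x)
x = layers.Resizing(64, 64, name="resizing")(x)
x = layers.Rescaling(1.0 / 255, name="rescaling")(x)
x = layers.Conv2D(64, 3, activation="relu", name="conv2d")(x)
x = layers.BatchNormalization(name="batch_normalization")(x)
x = layers.MaxPooling2D(3, name="max_pooling2d")(x)
x = layers.Conv2D(128, 3, activation="relu", name="conv2d_1")(x)
x = layers.MaxPooling2D(2, name="max_pooling2d_1")(x)
x = layers.Conv2D(256, 3, activation="relu", name="conv2d_2")(x)
x = layers.MaxPooling2D(2, name="max_pooling2d_2")(x)
x = layers.Flatten(name="flatten")(x)

# Two heads: gender classification and age regression.
g = layers.Dense(128, activation="relu", name="dense")(x)
g = layers.Dropout(0.2, name="dropout")(g)
gender_output = layers.Dense(2, activation="softmax", name="gender_output")(g)

a = layers.Dense(128, activation="relu", name="dense_1")(x)
a = layers.Dropout(0.2, name="dropout_1")(a)
age_output = layers.Dense(1, name="age_output")(a)

model = Model(inputs, [gender_output, age_output], name="model")
model.save("./model/")  # SavedModel directory used for the tf2onnx export below
```

Note that the RandomFlip/RandomRotation/RandomZoom and Resizing layers are part of the graph itself, so the graph input keeps dynamic height and width.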
I am able to convert this model to ONNX using:
!python3 -m tf2onnx.convert --saved-model ./model/ --output model.onnx --opset 12
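Since the Keras input is (None, None, None, 3), it is worth checking whether the exported graph also keeps the height/width dynamic. A quick check, assuming the onnx Python package is installed:

```python
import onnx

# Print the graph inputs/outputs of the exported model and their shapes.
model = onnx.load("model.onnx")
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print("input ", inp.name, dims)
for out in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
    print("output", out.name, dims)
```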
I am also able to convert the ONNX model into a TensorRT engine using:
/usr/src/tensorrt/bin/trtexec --onnx=./models/age_gender/age_gender_model.onnx --saveEngine=age_gender_model.onnx_b1_gpu0_fp32.engine --explicitBatch
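As a sanity check on the generated engine, a rough sketch like this (TensorRT Python API, still available in 8.4) prints the binding shapes, which should match what nvinfer later reports at startup:

```python
import tensorrt as trt

# Deserialize the engine built by trtexec and print each binding's shape.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("age_gender_model.onnx_b1_gpu0_fp32.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "INPUT " if engine.binding_is_input(i) else "OUTPUT"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))
```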
But while running it in DeepStream, I get this error:
```
{'input': ['file:///home/sensormatic/Videos/audience-001.mp4'], 'configfile': None, 'pgie': None, 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating nv3dsink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
0 : file:///home/sensormatic/Videos/audience-001.mp4
Starting pipeline
0:00:05.441777844 1837530 0x3f93b950 INFO nvinfer gstnvinfer.cpp:666:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 2]: deserialized trt engine from :/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-raj_age_gender/models/age_gender/raj_age_gender_model.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 1x1x3
1 OUTPUT kFLOAT age_output 1
2 OUTPUT kFLOAT gender_output 2
0:00:05.513273462 1837530 0x3f93b950 INFO nvinfer gstnvinfer.cpp:666:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 2]: Use deserialized engine model: /home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-raj_age_gender/models/age_gender/raj_age_gender_model.onnx_b1_gpu0_fp32.engine
0:00:05.513766713 1837530 0x3f93b950 ERROR nvinfer gstnvinfer.cpp:660:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:971> [UID = 2]: RGB/BGR input format specified but network input channels is not 3
ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:05.520475338 1837530 0x3f93b950 WARN nvinfer gstnvinfer.cpp:866:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:05.520604747 1837530 0x3f93b950 WARN nvinfer gstnvinfer.cpp:866:gst_nvinfer_start: error: Config file path: age_gender_sgi_tensormeta_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0:00:05.520753196 1837530 0x3f93b950 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:secondary1-nvinference-engine:sink Failed to activate pad
**PERF: {'stream0': 0.0}
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(866): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: age_gender_sgi_tensormeta_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
```
I have not specified the output layers here. If I set the output blob names to age_output/gender_output, it gives a "no output layers" error; if I remove the output-blob-names parameter and run, I get the error above.
Please find the SGIE config below:
```
[property]
gpu-id=0
net-scale-factor=1
onnx-file=/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-raj_age_gender/models/age_gender/age_gender_model.onnx
model-engine-file=./models/age_gender/raj_age_gender_model.onnx_b1_gpu0_fp32.engine
force-implicit-batch-dim=0
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# 0: Detector 1: Classifier 2: Segmentation 3: Instance Segmentation
network-type=1
#input-object-min-width=64
#input-object-min-height=64
# 1=Primary 2=Secondary
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
#is-classifier=1
#output-blob-names=age_output/gender_output
#output-blob-names=fc1
classifier-async-mode=0
classifier-threshold=0.51
#scaling-filter=0
#scaling-compute-hw=0
output-tensor-meta=1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
```
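For reference, with output-tensor-meta=1 the raw SGIE outputs are attached to each object as NVDSINFER_TENSOR_OUTPUT_META. A minimal sketch of reading them in a pad probe, following the deepstream_python_apps tensor-meta samples (the probe placement and the print are only illustrative):

```python
import ctypes

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def sgie_src_pad_buffer_probe(pad, info, u_data):
    # Walk frame -> object -> user meta and read the raw SGIE output tensors.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    for i in range(tensor_meta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                        ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                          ctypes.POINTER(ctypes.c_float))
                        print(layer.layerName, ptr[0])  # e.g. age_output / gender_output
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```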