Error converting ONNX model

• Jetson AGX Xavier
• Deepstream 6.0
• JetPack 4.6
• TensorRT 8.0.1
• NVIDIA GPU Driver 32.6.1

Hi, I’m trying to convert this model: face-recognition-resnet100-arcface-onnx (OpenVINO™ documentation)

with this config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
labelfile-path=/home/cv/Desktop/Mask-Detection/Deepstream-app/labels.txt
model-engine-file=/home/cv/Desktop/Mask-Detection/Deepstream-app/arcfaceresnet100-8.onnx_b1_gpu0_fp32.engine
onnx-file=/home/cv/Desktop/Mask-Detection/Deepstream-app/arcfaceresnet100-8.onnx
infer-dims=3;112;112
uff-input-order=0
uff-input-blob-name=input_1
output-tensor-meta=1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
network-type=1
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_cov/Sigmoid

But I got this error:

Now playing...
0:00:00.170113226 20826   0x558420f0c0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:55.668094488 20826   0x558420f0c0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/cv/Desktop/Mask-Detection/Deepstream-app/arcfaceresnet100-8.onnx_b1_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x112x112       
1   OUTPUT kFLOAT fc1             512             

ERROR: [TRT]: Cannot find binding of given name: output_cov/Sigmoid
0:00:55.790537079 20826   0x558420f0c0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 1]: Could not find output layer 'output_cov/Sigmoid' in engine
ERROR: [TRT]: Cannot find binding of given name: output_cov/Sigmoid
0:00:55.790584633 20826   0x558420f0c0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 1]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:55.803115932 20826   0x558420f0c0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/dstest2_sgie1_config.txt sucessfully
Deserialize yoloLayer plugin: yolo_93
Deserialize yoloLayer plugin: yolo_96
Deserialize yoloLayer plugin: yolo_99
0:00:55.946087226 20826   0x558420f0c0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 40001]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/model_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT yolo_93         24x80x80        
2   OUTPUT kFLOAT yolo_96         24x40x40        
3   OUTPUT kFLOAT yolo_99         24x20x20        

0:00:55.946237441 20826   0x558420f0c0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 40001]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/model_b1_gpu0_fp16.engine
0:00:55.952427215 20826   0x558420f0c0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 40001]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/dstest2_pgie_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running...
Decodebin child added: qtdemux0
Decodebin child added: multiqueue0
Decodebin child added: aacparse0
Decodebin child added: avdec_aac0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad
In cb_newpad
1
File opened
Segmentation fault (core dumped)

And after loading this model (it converted successfully), info->inferDims.numElements equals 0.

So how can I convert this model to a .engine file?
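Note that the engine actually built fine: the engine info above shows a single output binding named `fc1` (a 512-element embedding), so `output-blob-names=output_cov/Sigmoid` cannot match it, and as a pure feature extractor the model has no detector/classifier output for nvinfer to postprocess. A hedged sketch of an adjusted config (unverified on this setup; `process-mode=2` assumes the model runs as a secondary GIE on face crops, as the `secondary1-nvinference-engine` log lines suggest; the `uff-input-*` keys are dropped because they do not apply to ONNX models):

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=/home/cv/Desktop/Mask-Detection/Deepstream-app/arcfaceresnet100-8.onnx
model-engine-file=/home/cv/Desktop/Mask-Detection/Deepstream-app/arcfaceresnet100-8.onnx_b1_gpu0_fp32.engine
infer-dims=3;112;112
batch-size=1
network-mode=0
model-color-format=0
gie-unique-id=1
process-mode=2
# match the actual engine output binding (512-d embedding)
output-blob-names=fc1
# 100 = "other": skip nvinfer's built-in detector/classifier postprocessing
network-type=100
# attach the raw fc1 tensor as metadata so the app can read the embedding
output-tensor-meta=1
```

With `network-type=100` and `output-tensor-meta=1`, the app is expected to read the embedding itself from the attached tensor metadata instead of relying on nvinfer's parsing.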

Do you still need support?
1. How did you convert the model?
2. Which DeepStream app are you testing?
3. Did you modify the code? Please provide the diff.

Yes, I’m still trying to build a face recognition pipeline and am trying different models. I just wrote the directory where the ONNX model is. I’m using the same app, deepstream_infer_tensor_meta_test.

deepstream_infer_tensor_meta_test.cpp (35.4 KB)
dstest2_pgie_config.txt (3.2 KB)
Makefile (2.6 KB)

I tried it with FaceNet, but I got the same output for different people (maybe there is a problem with cropping the face?). I tried it with landmarks (you have seen that problem, and I don’t understand how landmarks can help me with recognition). And I tried this model.
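Identical embeddings for different people usually point at the crop or preprocessing rather than the model itself. One quick sanity check is to compare the cosine similarity of embedding vectors outside DeepStream: crops of the same face should score near 1, different faces noticeably lower. A minimal plain-Python sketch (the 4-d vectors here are toy stand-ins for real 512-d ArcFace embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy stand-ins for 512-d ArcFace embeddings:
same_a = [0.1, 0.9, 0.2, 0.4]
same_b = [0.12, 0.88, 0.21, 0.39]   # near-duplicate -> similarity close to 1
diff   = [0.9, -0.2, 0.7, -0.5]     # unrelated -> much lower similarity

print(round(cosine_similarity(same_a, same_b), 3))
print(round(cosine_similarity(same_a, diff), 3))
```

If every pair of crops scores near 1 with real embeddings, the network is very likely seeing the same (or an empty) input each time, which points back at the cropping step.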

1. Using your code, I can build it but can’t run it; please provide a simple piece of code to reproduce, thanks.
2. You can use deepstream-infer-tensor-meta-test to test this model; please make sure the preprocessing and postprocessing are right.
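On the preprocessing point: with the config above, nvinfer roughly computes out = net-scale-factor * (pixel - mean), and with net-scale-factor ≈ 1/255 and no offsets that maps 0–255 pixels into [0, 1]. A pure-Python sketch of the equivalent transform, for checking a reference implementation against what nvinfer feeds the engine (assumes 3×112×112 planar input and zero mean, matching the config; not DeepStream code itself):

```python
NET_SCALE_FACTOR = 0.0039215697906911373  # ~1/255, from the config
C, H, W = 3, 112, 112                     # infer-dims=3;112;112

def preprocess(pixels):
    """Map an interleaved HxWxC image of 0-255 ints to planar CHW floats
    in [0, 1], mirroring out = net-scale-factor * (pixel - mean), mean = 0."""
    return [[[NET_SCALE_FACTOR * pixels[y][x][c] for x in range(W)]
             for y in range(H)]
            for c in range(C)]

# Smoke test on a solid mid-gray image:
img = [[[128, 128, 128] for _ in range(W)] for _ in range(H)]
out = preprocess(img)
print(len(out), len(out[0]), len(out[0][0]))  # 3 112 112
print(round(out[0][0][0], 3))                 # ~0.502
```

If the exporter of the ONNX model expected a different normalization (for example mean subtraction or [-1, 1] inputs), `net-scale-factor` and `offsets` in the config would need to change to match.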

Hi, I used another model, and it worked well.
