ERROR: Failed to create network using custom network creation function

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 8.5.2
I have a pre-trained PyTorch model which I convert to .onnx and then to a TensorRT engine (.trt). I have modified the config file as below:

model-file=/./facenet.trt
model-engine-file=/./facenet.trt_b30_gpu0_fp32.engine
labelfile-path=/./labels.txt

I’m sure the paths are correct; however, no engine file is generated in the directory.
Engine file creation fails with the error below:
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:04.930114170 5694 0xaaaad18654f0 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 2]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::65] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)

Please support.

What is the “.TRT”?
Please use one of the following models:

  • Caffe Model and Caffe Prototxt
  • ONNX
  • UFF file
  • TAO Encoded Model and Key
  • Engine files generated by TAO Toolkit SDK Model converters

https://docs.nvidia.com/metropolis/deepstream/6.2/dev-guide/text/DS_plugin_gst-nvinfer.html#inputs-and-outputs
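For an ONNX model, the nvinfer config should point at the model with `onnx-file` rather than `model-file` (which is used for Caffe models), and the engine name should match what nvinfer generates. A minimal sketch of the `[property]` group — the file names follow your post, but the paths and other values are assumptions for your setup:

```
[property]
# ONNX model exported from PyTorch (path assumed from your post)
onnx-file=/./facenet.onnx
# nvinfer builds this engine on first run if it does not already exist
model-engine-file=/./facenet.onnx_b30_gpu0_fp32.engine
labelfile-path=/./labels.txt
batch-size=30
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=0
```

On the first run nvinfer will build and serialize the engine next to the model; subsequent runs load it directly from `model-engine-file`.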

I followed the official documentation for the TF-TRT conversion, but I modified the above config file to read the .onnx version of the model instead. I am using [primary-gie] for face detection (configured with a .etlt model) and [secondary-gie] for face recognition (the .onnx model). When I run deepstream-app with Face_main.txt, which contains both the primary and secondary GIE, the error below is generated:
$ deepstream-app -c Face_main.txt
** ERROR: <parse_config_file:547>: Non unique gie ids found
** ERROR: <parse_config_file:606>: parse_config_file failed
** ERROR: main:687: Failed to parse config file ‘Face_main.txt’
Quitting
App run failed
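The “Non unique gie ids found” error means the [primary-gie] and [secondary-gie*] groups carry the same `gie-unique-id`. Each GIE group needs a distinct id, and the secondary should reference the primary through `operate-on-gie-id`. The uniqueness check deepstream-app performs can be approximated with a few lines of stdlib Python — the config text below is hypothetical, only to illustrate the failing case:

```python
import configparser

# Hypothetical deepstream-app config: both GIE groups use
# gie-unique-id=1, which triggers "Non unique gie ids found".
CONFIG_TEXT = """
[primary-gie]
enable=1
gie-unique-id=1
config-file=face_detect_pgie.txt

[secondary-gie0]
enable=1
gie-unique-id=1
operate-on-gie-id=1
config-file=face_recog_sgie.txt
"""

def find_duplicate_gie_ids(text):
    """Return {gie-unique-id: [sections]} for ids used more than once."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    seen = {}
    for section in parser.sections():
        if "gie" in section:
            gid = parser.get(section, "gie-unique-id", fallback=None)
            if gid is not None:
                seen.setdefault(gid, []).append(section)
    return {gid: secs for gid, secs in seen.items() if len(secs) > 1}

dupes = find_duplicate_gie_ids(CONFIG_TEXT)
print(dupes)  # {'1': ['primary-gie', 'secondary-gie0']}
```

Changing the secondary group to `gie-unique-id=2` (keeping `operate-on-gie-id=1` pointing at the primary) resolves the duplicate.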

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

That means there are errors in your configuration files. You may refer to the DeepStream Reference Application (deepstream-app) documentation and the DeepStream SDK FAQ on the NVIDIA Developer Forums for more details.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.