Error in NvDsInferContextImpl::buildModel()

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
I am building an engine file for a Mask R-CNN application.
My application runs with deepstream-app.
When building the engine, I get the following error.

cnn.etlt_b1_gpu0_fp16.engine open error
0:00:01.852291200 11795   0x55949c5160 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/maskrcnn/resnet34/maskrcnn.etlt_b1_gpu0_fp16.engine failed
0:00:01.852616832 11795   0x55949c5160 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/maskrcnn/resnet34/maskrcnn.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:01.852751040 11795   0x55949c5160 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.327505664 11795   0x55949c5160 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
free(): invalid pointer
Aborted
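For context on this failure: "UffParser: Could not read buffer" generally means nvinfer could not read the .etlt model it was asked to rebuild the engine from, so it is worth checking the model file first. A minimal sketch of that check, assuming the paths from the log above (MODEL_DIR and the maskrcnn.etlt filename are inferred, adjust them to your config):

```shell
# Paths inferred from the log above -- adjust to your setup.
MODEL_DIR=/opt/nvidia/deepstream/deepstream-5.0/samples/models/maskrcnn/resnet34
ETLT=$MODEL_DIR/maskrcnn.etlt
ENGINE=$MODEL_DIR/maskrcnn.etlt_b1_gpu0_fp16.engine

# A missing, truncated, or unreadable .etlt typically produces the
# "UffParser: Could not read buffer" error when nvinfer tries to rebuild.
if [ -r "$ETLT" ]; then
    echo "etlt readable: $(wc -c < "$ETLT") bytes"
else
    echo "etlt missing or unreadable: $ETLT"
fi

# Remove any stale serialized engine so nvinfer regenerates it on the next run.
[ ! -f "$ENGINE" ] || rm -v "$ENGINE"
```

If the .etlt is readable and the size looks sane, the next things to verify are the tlt-encoded-model path and tlt-model-key in the nvinfer config file.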

I have already updated the plugin to 7.2.2.

Can you upgrade to DS-6.3? The DS-5.0 version is too old.

No, I can't. I can't take the device out.
It was working before.
I downloaded libnvinfer_plugin.so.7.0.0.1 from the link (https://nvidia.box.com/shared/static/ezrjriq08q8fy8tvqcswgi0u6yn0bomg.1) and I still have the same issue.
I need to build it myself.

May I know what this error is? I build with

/usr/local/bin/cmake .. -DGPU_ARCHS="53 62 72" -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out

Building for TensorRT version: 7.1.3, library version: 7
-- The CUDA compiler identification is unknown
CMake Error at CMakeLists.txt:46 (project):
  No CMAKE_CUDA_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
  path to the compiler, or to the compiler name if it is in the PATH.


-- Configuring incomplete, errors occurred!
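For what it's worth, this CMake error means CMake could not locate nvcc on its own, and the usual fix is to point it at the CUDA toolkit explicitly via the CUDACXX environment variable or the CMAKE_CUDA_COMPILER cache entry. A sketch, assuming the JetPack default CUDA install path of /usr/local/cuda (adjust if yours differs):

```shell
# Tell CMake where nvcc lives before configuring.
export CUDACXX=/usr/local/cuda/bin/nvcc

# Then re-run the same configure line, optionally passing the cache entry too:
#   /usr/local/bin/cmake .. -DGPU_ARCHS="53 62 72" \
#       -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
#       -DCMAKE_C_COMPILER=/usr/bin/gcc \
#       -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
#       -DTRT_BIN_DIR=`pwd`/out
echo "CUDACXX=$CUDACXX"
```

Either mechanism works; the cache entry takes precedence if both are set. Delete the stale CMakeCache.txt in the build directory before reconfiguring so the old failed detection is not reused.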

It is working now. I built the engine on a local device and copied the engine file to the remote device.