Failed to allocate buffer

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson) - Jetson Orin Nano
• DeepStream Version - 7.0
• JetPack Version (valid for Jetson only) - 6.0-b52
• TensorRT Version - 8.6.2.3

We are running DeepStream code based on the DeepStream-Yolo-Face repo on a Jetson Orin Nano module. When we run it with the yolov8n-face.pt model, everything works fine. But when I switch to a different ONNX model that I trained myself, I get the following error:

(base) jetson@ubuntu:~/Downloads/DeepStream-Yolo-Face-master$ sudo python3 deepstream.py -s file:///home/jetson/Downloads/DeepStream-Yolo-Face-master/f617d41f3a0e484d857cb8cf65d21ecd.mp4 -c config_infer_primary_yoloV8_face.txt
/home/jetson/Downloads/DeepStream-Yolo-Face-master/deepstream.py:201: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  streammux_sink_pad = streammux.get_request_pad(pad_name)

SOURCE: file:///home/jetson/Downloads/DeepStream-Yolo-Face-master/f617d41f3a0e484d857cb8cf65d21ecd.mp4
CONFIG_INFER: config_infer_primary_yoloV8_face.txt
STREAMMUX_BATCH_SIZE: 1
STREAMMUX_WIDTH: 1920
STREAMMUX_HEIGHT: 1080
GPU_ID: 0
PERF_MEASUREMENT_INTERVAL_SEC: 5
JETSON: TRUE

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:07.273003161 39599 0xaaab5b715870 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<pgie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/home/jetson/Downloads/DeepStream-Yolo-Face-master/best.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           8400x4          
2   OUTPUT kFLOAT scores          8400x1          
3   OUTPUT kFLOAT landmarks       8400x0          

0:00:07.686870097 39599 0xaaab5b715870 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<pgie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /home/jetson/Downloads/DeepStream-Yolo-Face-master/best.onnx_b1_gpu0_fp32.engine
0:00:07.694737548 39599 0xaaab5b715870 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<pgie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::resizeOutputBufferpool() <nvdsinfer_context_impl.cpp:1463> [UID = 1]: Failed to allocate cuda output buffer during context initialization
0:00:07.694794735 39599 0xaaab5b715870 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<pgie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::allocateBuffers() <nvdsinfer_context_impl.cpp:1595> [UID = 1]: Failed to allocate output bufferpool

0:00:07.694819728 39599 0xaaab5b715870 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<pgie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1375> [UID = 1]: Failed to allocate buffers
Segmentation fault
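One thing that stands out in the layer info above is that the landmarks output is reported as 8400x0, while boxes and scores have proper sizes; a zero-sized output dimension would explain why the output buffer pool cannot be allocated. A quick way to check what the exported ONNX actually declares is the sketch below (assuming the standard onnx Python package is installed and the model file is best.onnx):

import onnx

# Load the exported model and print every declared output with its dimensions.
model = onnx.load("best.onnx")
for out in model.graph.output:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)

If the landmarks output also shows a 0 dimension there, the problem is in the ONNX export step rather than in DeepStream.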

How can I resolve this?

How do you generate the engine file? Could you attach your config file?
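For reference, nvinfer normally builds the engine automatically from the onnx-file entry in the config on the first run (which matches the best.onnx_b1_gpu0_fp32.engine name in your log), so the interesting parts are how the ONNX was exported and what the config points to. As an independent check of the exported model, you can run it once outside DeepStream and print the output shapes. A minimal sketch, assuming onnxruntime is installed and the file is best.onnx:

import numpy as np
import onnxruntime as ort

# Run one dummy 640x640 frame through the ONNX model on the CPU
# and print the shape of every output tensor.
sess = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {inp.name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)

If the landmarks output comes back empty here as well, the model export needs to be fixed before DeepStream can use it.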