DeepStream out of memory when converting the ONNX model to an engine on Jetson Nano board

Dear friends,

I am trying to run the deepstream_pose_estimation project on a Jetson Nano board.

When I run the project for the first time, DeepStream has to convert the pose_estimation.onnx model into an engine file, and this leads to an out-of-memory issue:

deepstream-pose invoked oom-killer: gfp_mask=0x240B2c2(GFP_KERNEL…
Out of memory: Kill process 16547 (deepstream-pose) score 142 or sacrifice child

I ran the same project on a T4 GPU; the first run also takes a lot of GPU memory for the ONNX conversion, but after the engine file is generated, much less memory is required.

How can I perform this ONNX model conversion on the Jetson Nano board?


Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)


Please add some swap memory to see if it helps first:
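For example, a swap file can be added like this (a sketch only; the 4 GB size and the /swapfile path are assumptions, adjust them to the free space on your SD card):

```shell
# Create a 4 GB swap file (size is an assumption; pick what your storage allows)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile   # restrict access to root
sudo mkswap /swapfile      # format the file as swap space
sudo swapon /swapfile      # enable it immediately
free -h                    # verify the new swap shows up
```

To make the swap persist across reboots, an entry such as `/swapfile none swap sw 0 0` can be added to `/etc/fstab`.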


Thanks, my friend.
To solve this problem, I used trtexec from TensorRT to generate the engine file first.
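A sketch of such a trtexec invocation, assuming the default trtexec location on a JetPack 4.5 install and an illustrative output file name (the engine name DeepStream expects depends on your nvinfer config):

```shell
# Build the TensorRT engine offline with trtexec (ships with TensorRT under JetPack).
# Paths, output name, and the workspace size below are assumptions.
/usr/src/tensorrt/bin/trtexec \
    --onnx=pose_estimation.onnx \
    --saveEngine=pose_estimation.engine \
    --fp16 \
    --workspace=1024   # cap the builder workspace (MB) to lower peak memory use
```

The resulting engine file can then be pointed to from the DeepStream config (e.g. via the `model-engine-file` property of nvinfer), so DeepStream skips the in-process conversion that triggered the OOM.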