Dear friends,
I am trying to run the deepstream_pose_estimation project on a Jetson Nano board.
On the first run, DeepStream has to convert the pose_estimation.onnx model into a TensorRT engine file, and this conversion leads to an out-of-memory issue.
LOG:
deepstream-pose invoked oom-killer: gfp_mask=0x240B2c2(GFP_KERNEL…
Out of memory: Kill process 16547 (deepstream-pose) score 142 or sacrifice child
When I run the same project on a T4 GPU, the first run takes a lot of GPU memory for the ONNX-to-engine conversion; after the engine file has been generated, subsequent runs require much less memory.
How can I get this ONNX-to-engine conversion to complete on the Jetson Nano board?
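One workaround I am considering (a sketch only, not yet verified on my board): add swap so the TensorRT builder has more headroom, then build the engine offline with trtexec instead of letting DeepStream convert it at startup. The swap size, file paths, and workspace value below are assumptions for illustration, not tested settings.

```shell
# Assumption: trtexec ships at the default JetPack location on the Nano.

# 1) Add a swap file so the TensorRT builder does not trigger the OOM killer
sudo fallocate -l 2G /var/swapfile   # 2G is a guess; adjust to available disk
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile

# 2) Build the engine offline with a limited builder workspace (MB in TRT 7.x)
/usr/src/tensorrt/bin/trtexec \
    --onnx=pose_estimation.onnx \
    --saveEngine=pose_estimation.engine \
    --fp16 \
    --workspace=256
```

If this works, the saved pose_estimation.engine could then be referenced via the model-engine-file property in the nvinfer config so DeepStream skips the on-device conversion entirely.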
Thx
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description.)