JetPack 4.6 with 'out of memory' error

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson)
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I got an out-of-memory error on DeepStream 6.0 (see the attached screenshot).

This error occurs about every hour and a half. When it happens, the kernel prints:

Aug 22 15:18:53 localhost kernel: [619545.046219] t19x-arm-smmu 12000000.iommu: Unhandled context fault: smmu1, iova=0x4fa66d000, fsynr=0x3, cb=0, sid=86(0x56 - PCIE0), pgd=856056003, pud=856056003, pmd=452904003, pte=0
Out Of Memory

How can I deal with this?
Thank you very much.

Which Jetson platform are you using, Nano/Xavier/NX?
Please check the system's free memory and the memory used by this process; it's possible that the system memory is exhausted. A quick probe is sketched below.
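On Jetson the CPU and GPU share the same physical RAM, so one quick way to log overall memory pressure alongside the app is a small probe around the Linux sysinfo() call. A minimal sketch (the program name and output format are illustrative, not part of DeepStream):

```cpp
// mem_probe.cpp -- print free vs. total system RAM (and swap) on the device.
// Build: g++ mem_probe.cpp -o mem_probe
#include <sys/sysinfo.h>
#include <cstdio>

int main() {
    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }
    // mem_unit scales the counters to bytes; shift by 20 to report MiB.
    unsigned long long unit = si.mem_unit;
    printf("RAM: %llu MiB free of %llu MiB, swap free: %llu MiB\n",
           si.freeram * unit >> 20,
           si.totalram * unit >> 20,
           si.freeswap * unit >> 20);
    return 0;
}
```

Running it periodically shows whether free memory trends downward while the pipeline runs.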

We are using Xavier hardware.
jtop showed that there was still plenty of memory available when this error occurred.

When this issue happened, the kernel and DeepStream logs were as follows.
deepstream-app-log.txt (32.3 KB)
kernel-log.txt (2.8 KB)

Hi @huihui308, what use case are you running: our demo or your own? If it is your own, could you attach it?

OK, I will prepare it in a few days.
But could you tell me what could cause this issue?
Did I dereference a null pointer, did the device run out of memory, or is it something else?

Judging from the image you attached, your case runs out of memory on the GPU. You can monitor GPU memory before the error happens; a minimal watcher is sketched below.
Did you use our demo code or your own code?
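For watching GPU memory specifically, one option is to poll cudaMemGetInfo() from a small side process and log the values until the failure; on Jetson the reported pool is the shared system RAM. A minimal sketch (the 5-second interval and log format are assumptions):

```cpp
// gpu_mem_watch.cu -- log free/total GPU-visible memory every 5 seconds.
// Build: nvcc gpu_mem_watch.cu -o gpu_mem_watch
#include <cuda_runtime.h>
#include <cstdio>
#include <ctime>
#include <unistd.h>

int main() {
    for (;;) {
        size_t free_b = 0, total_b = 0;
        cudaError_t err = cudaMemGetInfo(&free_b, &total_b);
        long now = (long)time(nullptr);
        if (err != cudaSuccess) {
            fprintf(stderr, "%ld cudaMemGetInfo: %s\n", now,
                    cudaGetErrorString(err));
        } else {
            printf("%ld free=%zu MiB total=%zu MiB\n",
                   now, free_b >> 20, total_b >> 20);
            fflush(stdout);  // keep the log current when redirected to a file
        }
        sleep(5);
    }
}
```

Comparing the last logged values against the timestamp of the crash shows whether memory drains gradually (a leak) or disappears in one step.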

I used the stock DeepStream 6.0 code rather than my own.
To capture more information, I have set gst-debug to 4. If this issue occurs again, I will post the log here.

master.log.2022-08-30_12 (3.9 MB)
The error is at line 4980 of the log.

Could you provide your DeepStream test sample and test stream to us?

I am sorry, we need to discuss this internally first.
Could you give me your email address? I will send the sample by email if that is allowed.

We suggest you debug it yourself first. It's a GPU memory problem, so focus on your use of the CUDA API (see the sketch after this reply).
You can also run our demo code and see whether the same problem occurs.
If it really cannot be solved, you can click my icon and message me directly.
You can also refer to this link and provide us the log:
https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/13
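If your pipeline includes custom code that calls the CUDA runtime directly, a common way to focus on the CUDA API is to check the return code of every call, so the first failed allocation is reported at its source instead of surfacing later as an opaque out-of-memory. A minimal sketch, not DeepStream SDK code; the macro name is illustrative:

```cpp
// cuda_check.cu -- check every CUDA runtime call; pair each alloc with a free.
// Build: nvcc cuda_check.cu -o cuda_check
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

#define CUDA_CHECK(call)                                                  \
    do {                                                                  \
        cudaError_t err_ = (call);                                        \
        if (err_ != cudaSuccess) {                                        \
            fprintf(stderr, "%s:%d: %s failed: %s\n", __FILE__, __LINE__, \
                    #call, cudaGetErrorString(err_));                     \
            exit(EXIT_FAILURE);                                           \
        }                                                                 \
    } while (0)

int main() {
    void *buf = nullptr;
    CUDA_CHECK(cudaMalloc(&buf, 64 << 20));  // fails loudly when memory is exhausted
    // ... use buf ...
    CUDA_CHECK(cudaFree(buf));  // a cudaMalloc without a matching free is a leak
    return 0;
}
```

If the leak is in code you cannot modify, running the pipeline under cuda-memcheck (shipped with the CUDA toolkit on JetPack 4.6) can surface unchecked API errors as well.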

OK, I found a new log from when the device ran out of memory.

I have three devices, all with the same Docker image and config files. One of them never hits this issue, but the other two hit it periodically, about every two hours.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Did you run your own code and models?
You can first attach the memory log, collected as described here:
https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/13

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.