During model conversion, I found that there was not enough memory

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only): 6.1
• TensorRT Version: 10.3
• NVIDIA GPU Driver Version (valid for GPU only): 540.4.0
• Issue Type( questions, new requirements, bugs)
This machine is an Orin AGX 64G.
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I found an issue when running DeepStream in Docker. During model conversion, it ran out of memory and restarted. Below are the logs. Could you please help me check what the problem is?
deepstream-person-detect-gpu0-v103.log (1.2 MB)

Let me add the usage details:
Based on the deepstream:7.1-samples-multiarch image, I copied the locally compiled deepstream-test5-app into it, rebuilt the image, and added a local algorithm package (configuration files and model). I start it with Docker Compose.

Could you run our memory check script to capture the HW & SW memory leak log? If the memory is genuinely insufficient, we cannot optimize for that.

I’ll give it a try first and send the results tomorrow. I don’t know if it’s a usage issue, but it isn’t possible to store the executable and the algorithm package (configuration files and models) locally or in the container, so I only described the usage method this afternoon.

I tried, but `sudo python3 nvmemstat.py -p deepstream-test5-app` did not print anything or generate any logs.

Based on the deepstream:7.1-samples-multiarch image, I copied the locally compiled deepstream-test5-app into it, rebuilt the image, and added the local algorithm package (configuration files and model). It is started with Docker Compose.

And here is the log:
deepstream-person-detect-gpu0-v103.log (1.2 MB)

It only printed one line.

I don’t know whether it’s a problem with Docker or with how I’m using it. I pulled nvcr.io/nvidia/deepstream:7.1-triton-multiarch, copied the deepstream-test5-app into it to build a new image, started it with Docker Compose, and included the algorithm package.
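The image rebuild described above can be expressed as a short Dockerfile; this is a hypothetical sketch — the install path inside the image is an assumption, so place the binary wherever your compose file expects it:

```dockerfile
# Hypothetical Dockerfile matching the setup described above.
FROM nvcr.io/nvidia/deepstream:7.1-triton-multiarch
# Destination path is an assumption -- adjust to where your
# compose file / entrypoint looks for the binary.
COPY deepstream-test5-app /opt/nvidia/deepstream/deepstream/bin/
```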

person-detect-dla0-v1.1.0.zip (43.9 MB)
This is the local algorithm package


The deepstream:7.1-samples-multiarch image’s CUDA and TensorRT versions are inconsistent with my local ones. Is that a problem?

Could you run the top command to find the COMMAND and PID of your demo? Its name might not be deepstream-test5-app.

No. The problem is most likely that your device is running out of memory. You can run the top command and check the MEM column while your project is running.
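Besides top, two quick generic Linux commands cover both possibilities (disk space and RAM); this is not DeepStream-specific:

```shell
# Quick checks for storage and memory pressure on the device:
df -h /        # free disk space on the root filesystem
free -h        # RAM and swap usage
```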

Looking at the logs, the process exits as soon as it starts, so it’s a bit difficult to obtain the PID. Is there another way?
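Since the process exits almost immediately, one option is to poll for it right after launch instead of reading top by eye. A minimal sketch — the pattern and the launch command in the usage comment are assumptions (the binary inside the container may not be named deepstream-test5-app):

```shell
# Poll for a newly started process by name and print its PID plus the
# full command line (what top would show in the COMMAND column).
wait_for_proc() {
    pattern="$1"
    for i in $(seq 1 50); do           # poll for up to ~10 seconds
        if pgrep -fa "$pattern"; then  # prints "PID full-command-line"
            return 0
        fi
        sleep 0.2
    done
    echo "no process matching '$pattern' appeared" >&2
    return 1
}

# Usage (launch command is an assumption -- replace with your own):
#   ./deepstream-test5-app -c config.txt &
#   wait_for_proc deepstream-test5
```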

Yes. Could you run the sudo tegrastats command in a new terminal?

1. Run the `sudo tegrastats` command in one terminal.
2. Run your project in another terminal.
3. Attach the log.
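Once the tegrastats output has been saved to a file (for example with `sudo tegrastats --interval 1000 --logfile tegrastats.log`, stopped via `sudo tegrastats --stop`), you can check whether RAM climbs before the crash. A small parsing sketch — the sample lines below stand in for real output, and the exact `RAM used/totalMB` field format varies slightly across JetPack releases:

```shell
# Sample tegrastats-style lines standing in for a real log:
printf '%s\n' \
  'RAM 12000/62800MB (lfb 4x4MB) SWAP 0/31400MB' \
  'RAM 61500/62800MB (lfb 1x4MB) SWAP 900/31400MB' > tegrastats.log

# Extract the "used" RAM figure (MB) from each line and print the peak:
grep -o 'RAM [0-9]*/[0-9]*MB' tegrastats.log \
  | sed 's|RAM \([0-9]*\)/.*|\1|' \
  | sort -n | tail -n 1   # prints 61500 for the sample lines above
```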

Thanks, I’ll try it.

log.txt (216.1 KB)

This is the log from the previous run.

If it’s because there isn’t enough space, I don’t know whether it’s related to the directory mapping in the docker-compose file in that algorithm package.
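The directory mapping can be checked directly in the compose file. A hypothetical fragment — the service name, image tag, and paths are all assumptions, not taken from the attached package:

```yaml
# Hypothetical docker-compose.yml fragment; adjust names and paths.
services:
  person-detect:
    image: my-deepstream-test5:7.1     # the rebuilt image described above
    runtime: nvidia
    volumes:
      # Bind-mounting the algorithm package keeps models and configs on
      # the host filesystem instead of the container's writable layer:
      - ./person-detect-dla0:/workspace/algo
```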

Let’s narrow it down first. If you’re running your project directly on the Jetson outside the docker, is there any problem?

No, running the project directly on the Jetson outside Docker works normally. I just tried it again.

What should I try next?

hello

Could you try starting the container with the docker run command, run your project inside it, and see if there’s a problem?
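For reference, a minimal docker run sketch for this test — the image tag and mount paths are assumptions and should be adjusted to your setup:

```shell
# Hypothetical docker run invocation to bypass Docker Compose;
# image tag and mounted paths are assumptions.
docker run -it --rm --runtime nvidia --network host \
  -v /path/to/person-detect-dla0:/workspace/algo \
  my-deepstream-test5:7.1 /bin/bash
```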