Python Dependency Management with NVIDIA Containers and Poetry

Hi everyone, and happy New Year!

I’m exploring best practices for dependency management in NVIDIA containers and looking for community insights. I’ve typically relied on Poetry for precise control over dependencies, ensuring repeatable and robust deployments across various platforms, from embedded devices to cloud environments. This approach has proven effective in the cloud, in VMs, in standard Docker containers, and in local setups.

However, integrating Poetry with NVIDIA’s Docker containers presents a challenge. These containers come pre-optimized with specific Python package versions, like PyTorch, potentially conflicting with the dependencies I intend to manage via Poetry. This leads to a few questions for the community:

  1. How are you managing dependencies in NVIDIA containers? Have you moved away from tools like Poetry in this context?
  2. Do you reserve Poetry or similar tools solely for non-NVIDIA/CUDA dependencies?
  3. Is there a hybrid approach that has worked well for you? E.g., poetry export -f requirements.txt --output requirements.txt && pip install -r requirements.txt
  4. Has anyone successfully integrated NVIDIA and PyTorch dependencies within Poetry’s management system?
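To make the conflict in question 4 concrete: the container ships exact versions (often custom builds), and Poetry pins may disagree with them. A minimal sketch of detecting such mismatches; all package names and version strings below are made-up examples, not real NGC contents:

```python
# Compare container-preinstalled package versions against intended exact pins.
# All names and versions here are hypothetical illustrations.

def find_conflicts(installed: dict, pinned: dict) -> dict:
    """Return {package: (installed_version, pinned_version)} where exact pins disagree."""
    return {
        name: (installed[name], version)
        for name, version in pinned.items()
        if name in installed and installed[name] != version
    }

# Versions as `pip freeze` might report them inside a container (made up):
container = {"torch": "2.1.0a0+nightly", "numpy": "1.24.4"}
# Exact pins from a pyproject.toml (made up):
wanted = {"torch": "2.1.2", "numpy": "1.24.4"}

print(find_conflicts(container, wanted))
# -> {'torch': ('2.1.0a0+nightly', '2.1.2')}
```

Note that container builds of PyTorch often carry local version suffixes (like the `+nightly` tag above), which is exactly why a naive exact pin in Poetry will refuse to resolve against them.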

Hello Nicholas,

Thanks for this question, and a happy New Year to you, too.

I am also interested in learning best practices for this problem.
Since the images come with preinstalled dependencies, and in most scenarios one needs to add more, I wonder whether there is an official guideline for this. If not, I wonder why.

I thought of the following steps:

  • export the dependency versions pre-installed in the container
  • pin these versions for the project (e.g. in poetry pyproject.toml)
  • add additional required dependencies (compatible with the existing ones)
  • install the additional dependency versions in the container without touching the pre-installed dependencies
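The first two steps above (export, then pin) could be automated with a small helper that turns `pip freeze` output into the exact-pin lines one would place in the `[tool.poetry.dependencies]` section of `pyproject.toml`. A sketch; the freeze content shown is a made-up example, and real output from an NGC image will contain many more packages:

```python
# Sketch: convert `pip freeze` output into exact-pin TOML lines for
# [tool.poetry.dependencies]. Example freeze content is hypothetical.

def freeze_to_toml_pins(freeze_output: str) -> list:
    """Convert `name==version` lines into `name = "version"` TOML pins."""
    pins = []
    for line in freeze_output.splitlines():
        line = line.strip()
        # Skip blanks, comments, editable installs, and non-pinned specifiers.
        if not line or line.startswith(("#", "-e ")) or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins.append(f'{name} = "{version}"')
    return pins

example_freeze = """\
numpy==1.24.4
torch==2.1.0a0+nightly
-e /opt/some-editable-package
"""
print("\n".join(freeze_to_toml_pins(example_freeze)))
# numpy = "1.24.4"
# torch = "2.1.0a0+nightly"
```

One caveat: local version suffixes such as `+nightly` in container builds may not resolve against PyPI, so pinning them verbatim in Poetry can fail at lock time.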

I could not easily get this to work.

For now I am manually installing the required additional dependencies with pip without a pinned version. This is obviously not ideal for many reasons, so I am curious to learn about better alternatives.

Thanks in advance.

I want to try whether the following steps solve the problem:

  • Start with the NVIDIA base image
  • Mount an external folder to store the files generated in the next steps
  • Install Poetry
  • Generate a pyproject.toml: poetry init
  • Capture the current dependencies: pip freeze > requirements.txt
  • Pin the current dependencies: poetry add "PACKAGE==x.x.x"
  • Add all additional dependencies: poetry add "PACKAGE>=x.x.x"
  • Create a new lock file with all compatible dependency versions, without updating the already-pinned dependencies: poetry lock --no-update
  • Install the dependencies: poetry install
  • Put pyproject.toml and poetry.lock under version control (outside of the Docker container)
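The steps above might look roughly like this in a Dockerfile. This is an untested sketch: the base image tag is a placeholder for whichever NGC image you actually use, and it assumes pyproject.toml and poetry.lock have already been prepared and versioned outside the container:

```dockerfile
# Placeholder base image tag -- substitute the NGC image you actually use.
FROM nvcr.io/nvidia/pytorch:24.01-py3

# Install Poetry itself with pip (kept simple for the sketch).
RUN pip install --no-cache-dir poetry

WORKDIR /workspace

# Capture the preinstalled versions so they can be pinned in pyproject.toml.
RUN pip freeze > preinstalled-requirements.txt

# pyproject.toml and poetry.lock live under version control outside the
# image and are copied into the build context.
COPY pyproject.toml poetry.lock ./

# Install into the container's existing environment rather than a fresh
# virtualenv, so the preinstalled NVIDIA stack stays visible to Poetry.
RUN poetry config virtualenvs.create false \
 && poetry install --no-interaction --no-root
```

The `virtualenvs.create false` setting is the key design choice here: with the default (a separate virtualenv), Poetry would resolve and install into an environment that cannot see the container's pre-optimized packages, defeating the purpose of the NVIDIA base image.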

Any idea why this might not work? Do you see any drawbacks?

Here is a solution implementation: