I’m exploring best practices for dependency management in NVIDIA containers (e.g., nvcr.io/nvidia/pytorch:23.12-py3) and looking for community insights. I’ve typically relied on Poetry for precise control over dependencies, ensuring repeatable and robust deployments across various platforms, from embedded devices to cloud environments. This approach has proven effective in cloud, VMs, standard Docker containers, and local setups.
However, integrating Poetry with NVIDIA’s Docker containers presents a challenge. These containers come pre-optimized with specific Python package versions, like PyTorch, potentially conflicting with the dependencies I intend to manage via Poetry. This leads to a few questions for the community:
How are you managing dependencies in NVIDIA containers? Have you moved away from tools like Poetry in this context?
Do you reserve Poetry or similar tools solely for non-NVIDIA/CUDA dependencies?
Is there a hybrid approach that has worked well for you? E.g., poetry export -f requirements.txt --output requirements.txt && pip install -r requirements.txt
Has anyone successfully integrated NVIDIA and PyTorch dependencies within Poetry’s management system?
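For context, the hybrid flow I have in mind would look roughly like this as a Dockerfile (a sketch only; the file names are placeholders, and I'm assuming Poetry runs on the build host rather than inside the image):

```dockerfile
# Sketch: build on the NVIDIA-optimized base image.
FROM nvcr.io/nvidia/pytorch:23.12-py3

# requirements.txt would be produced beforehand on the host with:
#   poetry export -f requirements.txt --without-hashes --output requirements.txt
# (--without-hashes because the preinstalled NVIDIA wheels won't match PyPI hashes)
COPY requirements.txt /tmp/requirements.txt

# --no-deps keeps pip from pulling in replacements for the preinstalled,
# NVIDIA-tuned packages (torch etc.) while installing the flat list.
RUN pip install --no-deps -r /tmp/requirements.txt
```

The catch with `--no-deps` is that it also skips resolution for the packages I actually do want, so the exported list has to be complete, which is exactly where this gets fragile.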
Thanks for this question, and a happy new year to you, too.
I am also interested in learning best practices for this problem.
Since the images come with preinstalled dependencies and in most scenarios one needs to add more, I am wondering if there is an official guideline for this. If not, I wonder why.
I thought of the following steps:
export the dependency versions pre-installed in the container
pin these versions for the project (e.g. in Poetry’s pyproject.toml)
add additional required dependencies (compatible with the existing ones)
install the additional dependencies in the container at those pinned versions, without touching the pre-installed ones
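In pip terms, the steps above would look roughly like this (a sketch; the extra package name is a placeholder):

```shell
# Run inside the NVIDIA container. Sketch only; names are placeholders.

# Step 1: export the versions pre-installed in the container.
pip freeze > constraints.txt

# Steps 2-3: declare the additional dependencies you need (placeholder name).
echo "some-extra-package" > extra-requirements.txt

# Step 4: install the extras; -c constrains any shared dependencies to the
# pre-installed versions, so pip fails loudly instead of silently
# upgrading or downgrading them.
# (Commented out here because the package name is a placeholder.)
# pip install -c constraints.txt -r extra-requirements.txt
```

One wrinkle I suspect matters: pip freeze in these images can emit local version tags (e.g. torch builds with a +… suffix), which standard resolvers like Poetry's tend to reject, and that may be part of why pinning them in pyproject.toml is awkward.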
I could not easily get this to work.
For now, I am manually installing the required additional dependencies with pip, without pinned versions. This is obviously not ideal for many reasons, so I am curious to learn about better alternatives.