Shared memory between containers

Hello, I want to run n NVIDIA containers on a Jetson Xavier. All containers run the same deep learning code independently, so I need to share the model between all containers after it is loaded into GPU memory, to decrease memory usage.

So my question is: can I share a model between containers after loading it into GPU memory (I use TensorFlow, TensorRT, and Keras)?

Can I get the same inference execution time inside a container as without one?

I also wonder if there is a tutorial on NVIDIA containers on Jetson Xavier and shared-memory inference.

Thank you for your help.

On the Jetson platform, TensorFlow is available as a pip package that you install directly into the JetPack environment rather than as part of a Docker container. For installation and usage instructions, see https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html.
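After following that guide, a quick sanity check can confirm that the installed wheel actually sees the Xavier's integrated GPU. This is just a minimal sketch, assuming a TensorFlow 2.x wheel installed per the linked instructions; it is not specific to any container setup:

```python
# Minimal sanity check: verify that the Jetson-installed TensorFlow
# wheel can see the Xavier's integrated GPU. Assumes TensorFlow 2.x
# installed per the linked JetPack instructions.
import tensorflow as tf

# List the GPU devices visible to TensorFlow; on Xavier this should
# report one GPU (the integrated Volta GPU).
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

# Run a trivial matrix multiply pinned to the GPU to confirm that
# kernels actually launch on the device.
with tf.device("/GPU:0"):
    a = tf.random.uniform((256, 256))
    b = tf.random.uniform((256, 256))
    c = tf.matmul(a, b)
print("Result computed on:", c.device)
```

If the GPU list comes back empty, the wheel or JetPack version is likely mismatched, so it is worth re-checking the compatibility table in the linked installation guide.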