NVIDIA Xavier Docker: shared inference model between containers

Hello, I want to run n NVIDIA containers on a Jetson Xavier. All containers run the same deep learning code independently, so I would like to share the inference model between all containers after it is loaded into GPU memory, in order to decrease memory usage.
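For context, each container currently does roughly the following (a minimal sketch; the model path and input shape are placeholders for my real setup). Because every container runs this independently, each one loads its own copy of the model into GPU memory:

```python
# Sketch of what each container runs on its own today.
# "/models/my_model.h5" and the (1, 224, 224, 3) shape are placeholders.
import numpy as np
import tensorflow as tf

# Each container executes this line, so the model weights end up
# duplicated in GPU memory once per container.
model = tf.keras.models.load_model("/models/my_model.h5")

def infer(batch):
    # batch: numpy array matching the model's input shape
    return model.predict(batch)

if __name__ == "__main__":
    dummy = np.random.rand(1, 224, 224, 3).astype("float32")
    print(infer(dummy))
```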

So my question is: can I share the inference model between containers after loading it into GPU memory (I use TensorFlow, TensorRT, and Keras)?
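If sharing GPU memory directly between containers is not possible, the alternative I am considering is to load the model once in a single container and let the other containers call it over the Docker network, roughly like this (a sketch only; Flask, the port, the endpoint name, and the model path are my own placeholder choices):

```python
# Sketch of a single "model server" container: the model is loaded once here,
# and the other containers send requests instead of loading the model themselves.
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)

# Loaded once, in this container only (placeholder path).
model = tf.keras.models.load_model("/models/my_model.h5")

@app.route("/infer", methods=["POST"])
def infer():
    # Expects a JSON body like {"inputs": [[...], ...]} matching the model input shape.
    batch = np.array(request.get_json()["inputs"], dtype="float32")
    preds = model.predict(batch)
    return jsonify({"outputs": preds.tolist()})

if __name__ == "__main__":
    # Other containers would reach this at e.g. http://model-server:5000/infer
    app.run(host="0.0.0.0", port=5000)
```

But with this approach I am not sure about the extra latency, which leads to my next question.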

Can I get the same inference execution time inside a container as without a container?
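To compare the two cases, I would run a small timing script like this both inside and outside the container (a sketch; the model path and input shape are placeholders):

```python
# Average inference latency over repeated runs, to compare
# "in container" vs. "on the host" with the same model and input.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("/models/my_model.h5")  # placeholder path
batch = np.random.rand(1, 224, 224, 3).astype("float32")   # placeholder shape

model.predict(batch)  # warm-up so GPU / engine initialisation is not timed

runs = 100
start = time.perf_counter()
for _ in range(runs):
    model.predict(batch)
print("average inference time:", (time.perf_counter() - start) / runs, "s")
```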

I also wonder if there is a tutorial about NVIDIA containers on Jetson Xavier and shared-memory inference.

Thank you for your help.