Need clarification on NVIDIA Container Toolkit rpm install on host versus inside container image

Hello,
I need one more info clarification regarding the setup of ‘nvidia container toolkit’. I am not sure if this would be the right forum to ask this. Kindly guide appropriate forum:

My production runtime is a Linux machine that installs packages in rpm format (RHEL, Azure Linux, or Amazon Linux). That machine will have an NVIDIA GPU [example: an AWS EC2 g4dn instance, which has an NVIDIA T4]. I will install the Docker runtime on it and configure it to use the GPU via the NVIDIA Container Toolkit.
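For reference, this is roughly what I plan to run on the host, a sketch only, assuming a RHEL-style `dnf` system, that the NVIDIA driver is already installed, and that the NVIDIA yum/dnf repo has been added per the install guide:

```shell
# Install Docker (package name may differ per distro, e.g. docker-ce from Docker's repo)
sudo dnf install -y docker

# Install the container toolkit from the NVIDIA repo
sudo dnf install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Sanity check: run nvidia-smi from inside a CUDA base container
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

Please correct me if any of these steps belong somewhere else.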
On this Docker host, I intend to run an image containing a Hugging Face model along with PyTorch and its NVIDIA CUDA dependencies.
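The image I have in mind would look roughly like this, a sketch under my own assumptions: the `app.py` name is hypothetical, and I am assuming the PyTorch wheel bundles its own CUDA libraries, so no toolkit rpms would go inside the image (this assumption is exactly what I want confirmed):

```dockerfile
# Hypothetical image: PyTorch + Transformers, relying on the pip wheel's
# bundled CUDA libraries rather than any nvidia-container-* packages.
FROM python:3.11-slim

RUN pip install --no-cache-dir torch transformers

# app.py is a placeholder for my inference entrypoint
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```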

My query is: which rpm packages need to be installed on the host, and which should be installed inside the container image?

I have gone through the following documentation:

Architecture Overview:

Install guide:

I am still not clear. I connected to the NVIDIA rpm repo and downloaded all the available rpm files. I see the following:
libnvidia-container1
libnvidia-container-tools
nvidia-container-runtime
nvidia-container-toolkit
nvidia-container-toolkit-base (I think this is pulled in by nvidia-container-toolkit)
nvidia-docker2

From the above list of rpms, which should be installed on the host and which inside the container image?
Kindly guide me.