I want to run a GPU TensorRT container, but it doesn't work on my Jetson Orin

I have a Jetson Orin and flashed it with the NVIDIA SDK Manager; it is running JetPack 5.1.

I ran this to start a container:
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:21.08-py3
but it failed with:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'csv'
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime instead: unknown.
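The error text itself hints at two problems: the `--gpus` flag invokes the NVIDIA Container Runtime Hook directly (unsupported on Jetson, where the NVIDIA Container Runtime should be used instead), and `tensorrt:21.08-py3` is an amd64 image while the Orin is arm64. A sketch of what I think the corrected invocation looks like; the image tag is an assumption, so check NGC for the `l4t-tensorrt` tag that matches your JetPack release:

```shell
# Assumption: JetPack 5.1 installs nvidia-container-runtime on the host,
# and an arm64 (L4T) TensorRT image exists on NGC for this JetPack release.
# Use --runtime nvidia instead of --gpus, per the error message.
sudo docker run --runtime nvidia -it --rm nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel
```

The L4T images are built for arm64, so the platform-mismatch warning should also go away.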

How can I run a container with GPU access and TensorRT?
My basic environment is below.
Running nvidia-smi gives "command not found".
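As far as I can tell this is expected: Jetson boards have an integrated GPU and do not ship nvidia-smi. The built-in utility for watching GPU/CPU load on Jetson is tegrastats:

```shell
# nvidia-smi does not exist on Jetson (integrated GPU, no discrete driver
# stack); tegrastats is the built-in load/utilization monitor instead.
sudo tegrastats
```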

I checked for the CUDA version file with cat /usr/local/cuda/version.txt:
cat: /usr/local/cuda/version.txt: No such file or directory
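I believe the missing file is also expected: CUDA 11.x replaced version.txt with version.json, so the version can be read from there or from nvcc:

```shell
# CUDA 11.x ships version.json instead of the older version.txt.
cat /usr/local/cuda/version.json
/usr/local/cuda/bin/nvcc --version
```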

dpkg -l | grep TensorRT
ii graphsurgeon-tf 8.5.2-1+cuda11.4 arm64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 8.5.2-1+cuda11.4 arm64 TensorRT binaries
ii libnvinfer-dev 8.5.2-1+cuda11.4 arm64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 8.5.2-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.5.2-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-samples 8.5.2-1+cuda11.4 all TensorRT samples
ii libnvinfer8 8.5.2-1+cuda11.4 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 8.5.2-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvonnxparsers8 8.5.2-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 8.5.2-1+cuda11.4 arm64 TensorRT parsers libraries
ii libnvparsers8 8.5.2-1+cuda11.4 arm64 TensorRT parsers libraries
ii onnx-graphsurgeon 8.5.2-1+cuda11.4 arm64 ONNX GraphSurgeon for TensorRT package
ii python3-libnvinfer 8.5.2-1+cuda11.4 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.5.2-1+cuda11.4 arm64 Python 3 development package for TensorRT
ii tensorrt 8.5.2.2-1+cuda11.4 arm64 Meta package for TensorRT
ii tensorrt-libs 8.5.2.2-1+cuda11.4 arm64 Meta package for TensorRT runtime libraries
ii uff-converter-tf 8.5.2-1+cuda11.4 arm64 UFF converter for TensorRT package

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Sun_Oct_23_22:16:07_PDT_2022
Cuda compilation tools, release 11.4, V11.4.315
Build cuda_11.4.r11.4/compiler.31964100_0

lspci | grep -i nvidia
0001:00:00.0 PCI bridge: NVIDIA Corporation Device 229e (rev a1)

lsmod | grep nvidia
nvidia_modeset 1093632 6
nvidia 1339392 13 nvidia_modeset

/usr/local/cuda/bin$ ls
bin2c cuda-gdb cuobjdump nvdisasm
compute-sanitizer cuda-gdbserver fatbinary nvlink
crt cuda-install-samples-11.4.sh nvcc nvprune
cudafe++ cu++filt nvcc.profile ptxas

Did anyone ever figure out how to run a container with GPU acceleration?