For newer TensorRT versions, there is a development version of the Docker container (e.g. r8.4.1.5-devel).
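For reference, pulling and starting the devel image would look roughly like this (the image path assumes the l4t-tensorrt container on NGC, and the exact tag may differ for your JetPack/TensorRT release):

```bash
# Pull the TensorRT devel image (the -devel variant includes the headers and dev libraries)
sudo docker pull nvcr.io/nvidia/l4t-tensorrt:r8.4.1.5-devel

# Start it with the NVIDIA runtime so the GPU is visible inside the container
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-tensorrt:r8.4.1.5-devel
```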
Maybe you’ll have more luck starting with the l4t-ml container?
Hi @vovinsa, on JetPack 4.x you can set your default Docker runtime to "nvidia". Then, when you build your Dockerfile, the TensorRT development headers and libraries get mounted into the container from your host device (assuming you have TensorRT installed on your Nano).
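If it helps, this is a sketch of the change I mean on a stock JetPack 4.x install (the file path and restart command assume the default Docker setup on the Nano):

```bash
# Edit /etc/docker/daemon.json and add "default-runtime": "nvidia" alongside the
# existing "runtimes" entry, so that docker build also uses the NVIDIA runtime:
#
#   {
#       "runtimes": {
#           "nvidia": {
#               "path": "nvidia-container-runtime",
#               "runtimeArgs": []
#           }
#       },
#       "default-runtime": "nvidia"
#   }

# Restart the Docker daemon (or reboot) so the change takes effect
sudo systemctl restart docker

# Verify the default runtime is now "nvidia"
sudo docker info | grep 'Default Runtime'
```

With that in place, the host's TensorRT/CUDA files are mapped into the container during the build as well, not just at run time.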