Performance difference of the same task between Docker and host on AGX Orin

We want to know the performance difference when running the same task in Docker and on the Orin host. We expected no performance gap between the two, but the task's delay is 26 ms on the host and 47 ms in Docker. All tasks are run in MAXN mode.
The jtop status is shown below:

We start the container with: nvidia-docker run -it -e NVIDIA_DRIVER_CAPABILITIES=all bash

What might cause the performance gap between Docker and the host?
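One way to rule out measurement noise before comparing environments is to time the identical command repeatedly in both and compare averages. A minimal sketch (the `sleep 0.01` line is a placeholder for the actual task binary, which is not named in this thread):

```shell
#!/bin/sh
# Time N runs of the same command and report the average wall-clock delay in ms.
N=10
total=0
i=0
while [ $i -lt $N ]; do
    start=$(date +%s%N)        # nanoseconds since epoch (GNU date)
    sleep 0.01                 # placeholder: replace with the real task
    end=$(date +%s%N)
    total=$(( total + (end - start) / 1000000 ))
    i=$(( i + 1 ))
done
echo "average delay: $(( total / N )) ms"
```

Running the same script on the host and inside the container gives directly comparable numbers for the same workload.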

Hi @335789715, what processes are you running? Just as a test, I ran the following from jetson-inference, and times were the same:

$ detectnet "images/humans_*.jpg" images/test/human_%i.jpg

perf_host.txt (4.8 KB)
perf_container.txt (4.8 KB)

Hi @dusty_nv, we are running a perception pipeline composed of 4-6 models, including 2D and 3D detection models, a license-plate detection model, a vehicle color detection model, and so on. We run two channels in one process, with inputs from two cameras.

We have uploaded the jtop status. Please check it.

I’m not sure what container this is, but to use a GPU-accelerated container on Jetson, the container should use l4t-base (or a derived container like l4t-pytorch, l4t-tensorflow, l4t-ml, deepstream-l4t, etc). And you would start it like:

sudo docker run -it --rm --net=host --runtime nvidia
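For reference, a fuller invocation built from the flags above, assuming the stock l4t-base image from NGC (the release tag here is illustrative and should match your installed JetPack/L4T version):

```shell
# Start an interactive L4T base container with the NVIDIA runtime enabled,
# host networking, and automatic removal on exit.
# The r35.1.0 tag is an example; use the tag matching your L4T release.
sudo docker run -it --rm --net=host --runtime nvidia \
    nvcr.io/nvidia/l4t-base:r35.1.0
```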

OK, we'll try that container.

We have found another question. When the process runs in single-channel mode (the same configuration as two channels, except for the inputs), the delay difference between the host and the container becomes small. Why? It's confusing. 🤔

I'm closing this topic since there has been no update from you for a while; I assume the issue was resolved.
If you still need support, please open a new topic. Thanks

Hi @335789715, sorry for the delay - without more information about your pipelines, it’s hard to say what’s going on. What libraries are you using for the inferencing?