Does nvcr.io/nvidia/l4t-cuda have any performance benefit over docker.io/nvidia/cuda on Jetson, or just extra pre-installed libraries (cuDNN or OpenCV)?
I can see that l4t-cuda does:
Does it provide any performance improvement or enable Tegra-specific features?
I can also find /usr/lib/aarch64-linux-gnu/tegra in docker.io/nvidia/cuda.
If I add that path to ldconfig in docker.io/nvidia/cuda, can I get the same benefits as nvcr.io/nvidia/l4t-cuda?
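Concretely, I mean something like this (a sketch only; the base image tag is just an example, and I am assuming the NVIDIA container runtime mounts the Tegra driver libraries from the host into /usr/lib/aarch64-linux-gnu/tegra):

```dockerfile
# Sketch: replicate l4t-cuda's ldconfig setup on top of docker.io/nvidia/cuda.
# Assumption: the Tegra driver libraries appear at /usr/lib/aarch64-linux-gnu/tegra
# inside the container (mounted by the NVIDIA container runtime on Jetson).
FROM docker.io/nvidia/cuda:11.4.3-runtime-ubuntu20.04

# Register the Tegra library directory with the dynamic linker.
RUN echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf \
 && ldconfig
```

Would an image built like this behave the same as l4t-cuda, or does l4t-cuda differ in other ways too?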
What support can I expect from nvcr.io/nvidia/l4t-cuda over docker.io/nvidia/cuda? Is it the performance improvement described in this document?
I want to know whether I can write and test my CUDA program both on docker.io/nvidia/cuda on my desktop computer and on nvcr.io/nvidia/l4t-cuda on Jetson without code changes.
Are there any compatibility issues? (I am not concerned about minor performance differences at this stage.)
At the production stage, the program will be compiled and deployed on nvcr.io/nvidia/l4t-cuda on Jetson.