I want to package my TensorRT program as a Docker image, but the image is too large to run on the Jetson Nano. Has anyone done anything like this? I want to convert the UFF model into an engine and package that program into the Docker image. Does anyone have other ideas?
Hi,
May I know the engine size first?
If the platform and JetPack version are the same, you can ship just the engine file without the UFF model to save space.
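For example, here is a minimal Python sketch of loading a pre-serialized engine at runtime ("model.engine" is an assumed filename; the engine must have been built on a device with the same GPU, TensorRT, and JetPack version):

import tensorrt as trt

# Minimal sketch: deserialize a pre-built engine instead of rebuilding it
# from the UFF model inside the container. "model.engine" is an assumed name.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()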
Thanks.
I want to package the UFF-to-engine conversion program and run it in Docker, but the TensorRT image is too large. Is there any way to solve this problem, or is there another way to encapsulate and run it?
Hi,
A possible solution is to mount the libraries from the host instead of including them in the Docker image.
To do so, you will need to create a CSV file for TensorRT in the folder below:
/etc/nvidia-container-runtime/host-files-for-container.d
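Each line of the CSV is a "type, path" entry telling the NVIDIA container runtime which host files to mount into the container. A minimal sketch (the exact library names and version suffixes depend on your JetPack release, so treat these paths as examples):

lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.2.1
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so
lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.2.1
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
dir, /usr/src/tensorrt

Containers launched with the NVIDIA runtime (e.g. docker run --runtime nvidia ...) will then see these files from the host, so the libraries don't need to be baked into the image.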
Thanks.