Hi all,
I’m currently launching my application via the official TensorFlow
container (which I rebuilt for ARM64), and it works great. However, I now also need access to the camera inside the container.
What needs to be available inside the container for the camera to work? Has anybody had success with this before? So far I’ve tried the following with no luck. To test the camera I’m using OpenCV’s VideoCapture::read
method, which returns False
because it can’t grab any frames from the camera.
docker run \
-e LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/tegra:/usr/local/cuda/lib64 \
--net=host \
-v /usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu \
-v /usr/local/cuda/lib64:/usr/local/cuda/lib64 \
-v /tmp/nvcamera_socket:/tmp/nvcamera_socket \
--device=/dev/nvhost-ctrl \
--device=/dev/nvhost-ctrl-gpu \
--device=/dev/nvhost-prof-gpu \
--device=/dev/nvmap \
--device=/dev/nvhost-gpu \
--device=/dev/video0 \
--device=/dev/nvhost-vic \
--device=/dev/nvhost-dbg-gpu \
--device=/dev/nvhost-as-gpu \
-it --rm --privileged \
myrepo/tensorflow-arm64:1.9-gpu bash