I’m trying to get CSI cameras working in a container. We’re using a custom carrier board with 6 cameras, and it works fine in JetPack 4.3 with the following command:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! filesink location=test.mp4 -e
However, starting a container based on nvcr.io/nvidia/l4t-base:r32.3.1 and running that command, I get the following error:
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
(Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory
 (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 201)
(Argus) Error FileOperationFailed: Cannot create camera provider
 (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 102)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:526 Failed to create CameraProvider
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Got EOS from element "pipeline0".
Execution ended after 0:00:00.004908567
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I’ve tried both explicitly mounting the devices into the container and running it in privileged mode:
sudo docker run --device=/dev/video0:/dev/video0 --rm -it --net=host --runtime nvidia nvcr.io/nvidia/l4t-base:r32.3.1

sudo docker run --privileged --runtime nvidia --rm -it --net=host nvcr.io/nvidia/l4t-base:r32.3.1
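From the error, it looks like the Argus client inside the container can't reach the host's nvargus-daemon over its Unix socket. A variant that might be relevant, assuming the daemon listens on /tmp/argus_socket on the host (the default on stock L4T; adjust the path if your BSP differs), would be bind-mounting that socket into the container:

```shell
# Sketch only: pass the camera node and the host's nvargus-daemon socket
# through to the container, so the in-container Argus client can talk to
# the daemon running on the host.
sudo docker run --rm -it --net=host --runtime nvidia \
    --device=/dev/video0:/dev/video0 \
    -v /tmp/argus_socket:/tmp/argus_socket \
    nvcr.io/nvidia/l4t-base:r32.3.1
```

With six cameras, the other /dev/videoN nodes would presumably need to be passed through the same way (or via --privileged).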
Since we’re using a custom carrier board, we have a patched dtb and kernel in /boot/Image. Do we need to modify the l4t-base image to support our board?