Accessing the onboard camera from a container

Hi all,

I’m currently launching my application via the official TensorFlow container (which I rebuilt for ARM64) and it works great. However, I now need access to the camera inside the container as well.

What needs to be available inside the container for the camera to work? Has anybody had success with this before? So far I’ve tried the following with no luck. To test the camera I’m using OpenCV’s VideoCapture::read method, which returns False because it can’t grab frames from the camera.

docker run \
-e LD_LIBRARY_PATH=:/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/tegra:/usr/local/cuda/lib64 \
--net=host \
-v /usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu \
-v /usr/local/cuda/lib64:/usr/local/cuda/lib64  \
-v /tmp/nvcamera_socket:/tmp/nvcamera_socket \
--device=/dev/nvhost-ctrl \
--device=/dev/nvhost-ctrl-gpu \
--device=/dev/nvhost-prof-gpu \
--device=/dev/nvmap \
--device=/dev/nvhost-gpu \
--device /dev/video0 \
--device /dev/nvhost-vic \
--device /dev/nvhost-dbg-gpu \
--device=/dev/nvhost-as-gpu \
-it --rm --privileged \
myrepo/tensorflow-arm64:1.9-gpu bash
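As a quick sanity check that the --device flags actually exposed the nodes inside the container, here’s a small helper I use (my own sketch, not any NVIDIA tooling; the device list is just an illustrative subset of the flags above):

```python
import os

# Subset of the device nodes passed via --device above; adjust to taste.
CAMERA_DEVICES = [
    "/dev/video0",
    "/dev/nvhost-ctrl",
    "/dev/nvhost-vic",
    "/dev/nvmap",
]

def missing_devices(devices, exists=os.path.exists):
    """Return the nodes from `devices` that are not visible in this filesystem.

    `exists` is injectable so the check can be tested without real devices.
    """
    return [d for d in devices if not exists(d)]

if __name__ == "__main__":
    gone = missing_devices(CAMERA_DEVICES)
    if gone:
        print("Missing device nodes:", ", ".join(gone))
    else:
        print("All expected device nodes are present.")
```

Run it inside the container; if anything is listed as missing, the corresponding --device flag didn’t take effect.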

For accessing the onboard camera from OpenCV, you probably need GStreamer support enabled in your OpenCV library. You can check the build options with cv::getBuildInformation(). If it doesn’t have GStreamer support, you may need to rebuild OpenCV with GStreamer enabled.
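If you’d rather check that programmatically than eyeball the output, something like this works (my own sketch; it just parses the text cv2.getBuildInformation() returns, covering both the older per-component layout and the newer single-line one):

```python
def gstreamer_enabled(build_info):
    """Return True if OpenCV's build information reports GStreamer support."""
    lines = build_info.splitlines()
    for i, line in enumerate(lines):
        if line.strip().startswith("GStreamer"):
            if "YES" in line:        # newer OpenCV: single "GStreamer: YES" line
                return True
            if "NO" in line:
                return False
            indent = len(line) - len(line.lstrip())
            # older OpenCV: sub-components (base/video/app/...) listed below,
            # indented one level deeper, each marked YES with a version
            for sub in lines[i + 1:]:
                if sub.strip() and len(sub) - len(sub.lstrip()) <= indent:
                    break            # left the GStreamer section
                if "YES" in sub:
                    return True
            return False
    return False
```

Usage would be `gstreamer_enabled(cv2.getBuildInformation())`.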

“--device=
Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)”
(from the docker run reference)


I do have it enabled in my OpenCV library:

root@jetson-6:~# python
Python 2.7.12 (default, Dec  4 2017, 14:50:18) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> print(cv2.getBuildInformation())
[OMITTED OUTPUT]
    GStreamer:                   
      base:                      YES (ver 1.8.3)
      video:                     YES (ver 1.8.3)
      app:                       YES (ver 1.8.3)
      riff:                      YES (ver 1.8.3)
      pbutils:                   YES (ver 1.8.3)
[OMITTED OUTPUT]

That’s what I’m doing already via the --device flags when issuing docker run. When I call VideoCapture::read I see the following output, and I never get the Python shell back:

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstomx.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnveglglessink.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstvideocuda.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvcompositor.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstclutter-3.0.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvivafilter.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libnvgstjpeg.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideosink.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:468): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

Is this of any help?

I’ve noticed that the gst-omx GStreamer plugins are missing from the container, even though they are present on the host. How can I install them?

You could open a bash shell in the container and use apt-get to install the missing plugins.

You may also check if you have the correct libGL.so.

Is there any solution? I also have the same problem but do not know how to solve it.

After investing a lot of hours in this, I came to the conclusion that the onboard camera can’t be accessed in a standard way from a container. This is due to all the moving pieces (the nvcamera daemon, Jetson-specific GStreamer libraries, etc.) required for the onboard camera to work properly.

I plugged in a standard Logitech C920 via USB and was able to capture images out of the box. The following Python snippet will let you know if your camera works or not:

import cv2

cam = cv2.VideoCapture(1)        # device index 1 = the USB camera here
s, im = cam.read()               # grabs a frame; s is False on failure
cv2.imshow("Test Picture", im)   # displays the captured frame
cv2.waitKey(0)                   # imshow needs a waitKey call to render
cv2.imwrite("test.bmp", im)      # writes the frame to test.bmp
cam.release()
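For what it’s worth, a slightly more defensive version of that check can be written against anything with the VideoCapture interface (my own sketch; `camera_works` is not an OpenCV API), so the same function can be tried on a USB camera, a different index, or a v4l2loopback node:

```python
def camera_works(cam):
    """Return True if `cam` (a cv2.VideoCapture-like object) delivers a frame.

    Checks both that the device opened and that a frame was actually grabbed,
    since VideoCapture can open successfully yet still fail to read.
    """
    if not cam.isOpened():
        return False
    ok, frame = cam.read()
    return bool(ok) and frame is not None
```

With a real camera that would be `camera_works(cv2.VideoCapture(1))`; remember to call `cam.release()` afterwards.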

Hope that helps.

Thanks for the info. I also gave up on trying to use the onboard camera with Docker and just use a USB cam. I have a Raspberry Pi 3 with an onboard camera and it works fine with Docker. Maybe someday someone will figure this out and share it with us.

Sorry, I have no experience with Docker (I just know the container concept), but how are you trying to use the onboard camera from OpenCV?
cv2.VideoCapture(0) wouldn’t work, because OpenCV expects standard formats and can’t handle the 10-bit Bayer frames that the onboard OV sensor sends over CSI.

You can use a GStreamer pipeline to convert into BGR format first. Can you get gst-launch-1.0 working from your container with this pipeline?

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420' ! nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR' ! appsink

If GStreamer can do this, you can just use that pipeline from OpenCV:

cam = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), format=(string)I420 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
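If you end up tweaking resolution or framerate, it can be cleaner to build that string with a small helper. This is my own hypothetical sketch, not an OpenCV or NVIDIA API; the element chain is the one from the pipeline above, with width/height/framerate caps added (which nvcamerasrc accepts):

```python
def nvcamera_pipeline(width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for cv2.VideoCapture on the Jetson
    onboard camera: nvcamerasrc -> I420 in NVMM memory -> BGRx -> BGR -> appsink."""
    return (
        "nvcamerasrc ! "
        "video/x-raw(memory:NVMM), width=(int){w}, height=(int){h}, "
        "format=(string)I420, framerate=(fraction){f}/1 ! "
        "nvvidconv ! video/x-raw, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    ).format(w=width, h=height, f=fps)
```

Then, for example, `cam = cv2.VideoCapture(nvcamera_pipeline(1920, 1080, 30))`.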

If you can get the GStreamer pipeline working from the shell but not from OpenCV, you can try v4l2loopback: create a virtual video node fed by a gst-launch pipeline that converts into BGR8 format. (In the example below the virtual node is /dev/video2; you can adjust the number with the video_nr option when loading the module with modprobe.) So, if you can run this GStreamer pipeline from a shell:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420' ! nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR' ! tee ! v4l2sink device=/dev/video2

Then you should be able to see it as a v4l2 device from another shell:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=BGR' ! videoconvert ! xvimagesink

And if that works, you can use it from OpenCV with:

cam = cv2.VideoCapture(2)

Hi,

Could you check if this topic helps?
https://devtalk.nvidia.com/default/topic/1042434/jetson-tx2/jetson-tx2-docker-libargus-/post/5289257/#5289257

Thanks.

Hi,

I ran into the same problem this afternoon and solved it. Here is my solution:
cd /usr/sbin/
nohup ./nvargus-daemon > log 2>&1 &

Now the nvargus-daemon service is running, and you can use the command “gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! xvimagesink” to open the camera.

Thanks for sharing!