How to use the onboard camera with AWS Greengrass

Hello,

I’m trying to set up a Jetson TX2 to do ML inference with AWS Greengrass. I’ve been able to set up Greengrass and use it to execute Python code, and separately I’ve been able to use Python to open the onboard camera.

But when I execute the camera code in the Lambda, it can’t open the camera and crashes with this error:

0:00:00.051292832    15       0x46fc60 WARN      GST_PLUGIN_LOADING gstplugin.c:748:_priv_gst_plugin_load_file_for_registry: module_open failed: /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2

(gst-plugin-scanner:15): GStreamer-WARNING **: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstclutter-3.0.so': /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2
0:00:00.019066782    16       0x44ced0 WARN                     omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search dirs (searched in: /root/.config:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
NvRmPrivGetChipIdLimited: Could not read Tegra chip id/rev 
Expected on kernels without fuse support, using Tegra K1
NvRmPrivGetChipPlatform: Could not read platform information 
Expected on kernels without fuse support, using silicon
0:00:00.768247598     1      0x18912c0 WARN            GST_REGISTRY gstregistry.c:1830:gst_update_registry: registry update failed: Error writing registry cache to /root/.cache/gstreamer-1.0/registry.aarch64.bin: Permission denied
NvRmPrivGetChipIdLimited: Could not read Tegra chip id/rev 
Expected on kernels without fuse support, using Tegra K1
NvRmPrivGetChipPlatform: Could not read platform information 
Expected on kernels without fuse support, using silicon
0:00:00.778741553     1      0x18912c0 ERROR            nvcamerasrc gstnvcamerasrc.cpp:2411:gst_nvcamera_socket_connect:<nvcamerasrc0> Connecting to camera_daemon failed

OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 881
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:881: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Failed to open camera!
Here is the code running in the Lambda:

import os
import sys
import cv2
import greengrasssdk

os.environ["GST_DEBUG"] = "*:3"

# Greengrass Core SDK client used to publish status messages
GGC = greengrasssdk.client('iot-data')

def open_cam_onboard(width, height):
    gst_str = ("nvcamerasrc ! "
               "video/x-raw(memory:NVMM), width=(int)2592, height=(int)1458, format=(string)I420, framerate=(fraction)30/1 ! "
               "nvvidconv ! video/x-raw, width=(int){}, height=(int){}, format=(string)BGRx ! "
               "videoconvert ! appsink").format(width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

cap = open_cam_onboard(640, 480)

if not cap.isOpened():
    GGC.publish(topic='hello/world', payload='Error failed to open camera')
    sys.exit("Failed to open camera!")

Has anyone gotten them to work together?

Hi,

Sorry, we don’t have experience with AWS Greengrass.

A common cause is using an OpenCV library built without GStreamer support.
Have you built OpenCV with GStreamer support?

When you open the onboard camera with Python, do you launch it through GStreamer?
If not, could you give it a try?
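
For example, a rough sketch like the one below, run directly on the device (outside Greengrass), should confirm both points. The pipeline string just mirrors the one in your code and may need adjusting for your setup:

import cv2

# 1. Confirm this OpenCV build was compiled with GStreamer support
print([line for line in cv2.getBuildInformation().splitlines() if 'GStreamer' in line])

# 2. Try the same onboard-camera pipeline directly
gst_str = ("nvcamerasrc ! "
           "video/x-raw(memory:NVMM), width=(int)2592, height=(int)1458, "
           "format=(string)I420, framerate=(fraction)30/1 ! "
           "nvvidconv ! video/x-raw, width=(int)640, height=(int)480, format=(string)BGRx ! "
           "videoconvert ! appsink")
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
print('Camera opened: {}'.format(cap.isOpened()))
if cap.isOpened():
    ret, frame = cap.read()
    print('Frame grabbed: {}'.format(ret))
    cap.release()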

Thanks.

Hi,

This error came after I compiled OpenCV with GStreamer support. After compiling it, the code works fine when I run the Python directly on the OS, but not when I run it in Greengrass, where I get the above error.

So far I’m using a USB camera, which works well under Greengrass.

Hi,

Based on the log, your application crashes when launching the GStreamer pipeline with OpenCV.
Could you try opening the camera with the identical open_cam_onboard(width, height) function shared in comment #1?

By the way, does AWS Greengrass take CvMat as input?

Thanks.

Thanks for your responses,

If I use

open_cam_onboard(width, height)

outside of Greengrass it works.

For my usage, Greengrass is mostly a container management system that allows me to deploy Python functions to the devices from the cloud. It can use the same Python libraries installed on the Jetson, but I need to explicitly map into the container any directory or device I want to use there (like the GPU or the webcam). I suppose GStreamer is referencing a library or resource external to Python/OpenCV, and if I could find which one, I should be able to map it into the container.
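
As a rough sketch of what I mean (run inside the Lambda; the hello/world topic is just the one from my snippet above), I can list which device nodes the container actually sees:

import os
import json
import greengrasssdk

GGC = greengrasssdk.client('iot-data')

# Publish whatever /dev entries are visible from inside the container
visible = sorted(os.listdir('/dev'))
GGC.publish(topic='hello/world',
            payload=json.dumps({'devices_in_container': visible}))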

Not sure about CvMat, but it supports whatever is embedded in the library as long as it doesn’t depend on external folders or devices.

Hi,

Okay, your issue is clearer to us now.

We suppose that you need to map the camera device into the Greengrass container.
Could you check whether your camera is mounted at /dev/video* first?

If not, you may need to enable the camera interface for Greengrass.
Thanks.

Yes it is: /dev/video0

Hi,

Please make sure you can find the camera device mounted inside the container.
Also, please try to open it via GStreamer directly.

Thanks.

Hi,

So the solution was to mount /dev/shm in the container. :)


Good to know it works now.
Thanks for updating us. :)

I am using a Jetson TX2 (JetPack 3.1) with AWS Greengrass, and I’m experiencing the exact same problem.

@fxgsell, could you tell me what exactly you did to solve the problem?
Did you add /dev/shm as local resource ‘Volume’ in the AWS Greengrass management console?

Hello,

First you need to compile your own version of OpenCV:

sudo apt remove -y libopencv # avoid conflicts

git clone https://github.com/jetsonhacks/buildOpenCVTX2.git
cd buildOpenCVTX2/
./buildOpenCV.sh
cd $HOME/opencv/build
make
sudo make install

And then, yes, you need to add /dev/shm with r/w permission as a volume.
You also need the following r/w devices (mostly for the GPU, but at least one is required for the camera); a quick check is sketched after the list:
1. /dev/nvhost-ctrl
2. /dev/nvhost-gpu
3. /dev/nvhost-dbg-gpu
4. /dev/nvhost-ctrl-gpu
5. /dev/nvhost-prof-gpu
6. /dev/nvmap
7. /dev/nvhost-vic
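
Here is the quick check I mentioned, a rough sketch you can run from the Lambda to confirm the paths are actually mapped and readable/writable (nothing Greengrass-specific, just plain Python):

import os

# Volumes and devices from the list above that must be mapped into the container
required = [
    '/dev/shm',
    '/dev/nvhost-ctrl',
    '/dev/nvhost-gpu',
    '/dev/nvhost-dbg-gpu',
    '/dev/nvhost-ctrl-gpu',
    '/dev/nvhost-prof-gpu',
    '/dev/nvmap',
    '/dev/nvhost-vic',
]

for path in required:
    ok = os.path.exists(path) and os.access(path, os.R_OK | os.W_OK)
    print('{}: {}'.format(path, 'ok' if ok else 'MISSING'))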

Thanks fxgsell.

I used the jetsonhacks build script (master HEAD) too, and successfully installed OpenCV 3.3.0 with GStreamer 1.0.

I had all the devices you listed as 1-7, then I added the /dev/shm volume, but it did not work.
I still saw the exact same warnings and error.

Starting from the ERROR log, I used strace to look for the Unix socket access and found this line:

connect(8, {sa_family=AF_LOCAL, sun_path="/tmp/nvcamera_socket"}, 110) = -1 ENOENT (No such file or directory)

So, I mounted /tmp, then it began working.
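
For anyone hitting the same thing, here is a rough sketch of the check I ran from inside the Lambda to confirm the socket is reachable (I’m assuming the daemon uses a stream socket; the existence check is the important part either way):

import os
import socket

SOCKET_PATH = '/tmp/nvcamera_socket'  # path reported by strace

if not os.path.exists(SOCKET_PATH):
    print('{} is not visible inside the container'.format(SOCKET_PATH))
else:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)  # assuming a stream socket
    try:
        s.connect(SOCKET_PATH)
        print('Connected to the nvcamera daemon socket')
    except socket.error as exc:
        print('Socket exists but connect failed: {}'.format(exc))
    finally:
        s.close()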

Looking at the error log you posted earlier, it appears that you had the same problem connecting to camera_daemon, but you did not have to mount /tmp. Now I am wondering why it worked for you…

This is a question for Nvidia staff:
Does the nvcamera daemon always listen on the Unix socket at /tmp/nvcamera_socket? Or is there a way to configure where the daemon listens?

I was already mapping /tmp from the start to share the video output; I just didn’t know the camera also used it!

OK, that explains it. Thanks @fxgsell!

This is a question for Nvidia staff:
Does the nvcamera daemon always listen on the Unix socket at /tmp/nvcamera_socket? Or is there a way to configure where the daemon listens?

Yes. And no, there is no way to configure it, since that source is not public.

Hi,

I use the AWS IoT console instead of Python code, and I think I’m facing the same problem when adding the camera as a local resource. I tried to add /dev/shm, but that makes the console unable to deploy the group to my TX2. And I cannot add /tmp as a local resource.
Does anybody have any idea about this? Thank you!