picamera module

I’ve got a script that usually runs on a Raspberry Pi, so it tries to import the “picamera” module.

When I try to install this module, it clearly attempts to check whether the Nano is a Raspberry Pi. Are there any alternatives or workarounds for this module?

Hi,

The Nano already comes with the camera driver built in, so you can capture from it out of the box using GStreamer/V4L2/libargus.

Have you tried running a simple capture pipeline to check that your camera is detected correctly?

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! queue ! xvimagesink
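If you end up scripting that check from Python, a small helper like this can build the same launch string with the sensor, resolution, and framerate as parameters. This is purely an illustrative sketch, not part of any NVIDIA API:

```python
def argus_pipeline(sensor_id=0, width=1280, height=720, fps=30):
    """Build the nvarguscamerasrc launch string shown above.

    Illustrative helper only; the element and caps names come from the
    gst-launch-1.0 command in this thread.
    """
    return (
        f'nvarguscamerasrc sensor-id={sensor_id} ! '
        f'video/x-raw(memory:NVMM), width=(int){width}, height=(int){height}, '
        f'format=(string)NV12, framerate=(fraction){fps}/1 ! '
        'nvvidconv ! queue ! xvimagesink'
    )
```

You can then pass the result to gst-launch-1.0 via subprocess, or reuse it later with OpenCV’s GStreamer backend.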

Could you provide some more detail about what you are trying to do on the Nano:

  1. Is your script bash, Python, or something else?

  2. What are you trying to do with the camera?

The Jetson family uses kernel drivers for its cameras, so the way to use them is quite different from the RPi.

Best Regards,

It’s the classify capture demo from the Coral USB Accelerator. I also ran into it with my bird capture program, but I got around it there by importing another Python script; I’m not as motivated to fix this one.

python3 demo/classify_capture.py \
    --model test_data/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
    --label test_data/imagenet_labels.txt

# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A demo to classify Raspberry Pi camera stream."""

import argparse
import io
import time

import numpy as np
import picamera

import edgetpu.classification.engine


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model', help='File path of Tflite model.', required=True)
    parser.add_argument(
        '--label', help='File path of label file.', required=True)
    args = parser.parse_args()

    with open(args.label, 'r', encoding="utf-8") as f:
        pairs = (l.strip().split(maxsplit=1) for l in f.readlines())
        labels = dict((int(k), v) for k, v in pairs)

    engine = edgetpu.classification.engine.ClassificationEngine(args.model)

    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.framerate = 30
        _, width, height, channels = engine.get_input_tensor_shape()
        camera.start_preview()
        try:
            stream = io.BytesIO()
            for foo in camera.capture_continuous(stream,
                                                 format='rgb',
                                                 use_video_port=True,
                                                 resize=(width, height)):
                stream.truncate()
                stream.seek(0)
                input = np.frombuffer(stream.getvalue(), dtype=np.uint8)
                start_ms = time.time()
                results = engine.ClassifyWithInputTensor(input, top_k=1)
                elapsed_ms = time.time() - start_ms
                if results:
                    camera.annotate_text = "%s %.2f\n%.2fms" % (
                        labels[results[0][0]], results[0][1],
                        elapsed_ms * 1000.0)
        finally:
            camera.stop_preview()


if __name__ == '__main__':
    main()

Hi,

I am not familiar with the RPi, so there is not much I can help with there. However, based on your script, it looks like the only reason for that module is to get data frames from the camera and pass them to the classifier. If that’s the case, maybe you could try replacing it with Python GStreamer and capturing your frames that way instead:

http://brettviren.github.io/pygst-tutorial-org/pygst-tutorial.html
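As a rough sketch of that idea (assuming your OpenCV build has GStreamer support, which the JetPack image ships by default), the picamera capture loop in the demo could be replaced with an OpenCV `VideoCapture` on a GStreamer pipeline, letting `nvvidconv` do the resize that `capture_continuous(resize=...)` was doing. The pipeline string, the `capture_and_classify` helper, and `frame_to_input_tensor` are all hypothetical names for illustration, not part of the Edge TPU or NVIDIA APIs:

```python
import numpy as np


def frame_to_input_tensor(frame):
    """Flatten an RGB frame (H, W, 3) into the 1-D uint8 array that
    ClassifyWithInputTensor() expects, mirroring what the demo builds
    from the picamera byte stream."""
    return np.asarray(frame, dtype=np.uint8).reshape(-1)


def capture_and_classify(engine, width, height):
    """Hypothetical replacement for the picamera loop in the demo."""
    import cv2  # assumes OpenCV built with GStreamer support

    pipeline = (
        'nvarguscamerasrc sensor-id=0 ! '
        'video/x-raw(memory:NVMM), width=1280, height=720, '
        'format=NV12, framerate=30/1 ! '
        f'nvvidconv ! video/x-raw, width={width}, height={height}, '
        'format=BGRx ! videoconvert ! video/x-raw, format=BGR ! '
        'appsink drop=true'
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # OpenCV delivers BGR; the demo fed RGB to the classifier.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            results = engine.ClassifyWithInputTensor(
                frame_to_input_tensor(rgb), top_k=1)
            if results:
                print(results[0])
    finally:
        cap.release()
```

The `width`/`height` arguments would come from `engine.get_input_tensor_shape()` exactly as in the original script; the rest of the demo (argument parsing, label loading) should carry over unchanged.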

On the other hand, if you are looking to run inference models on the data from the camera, you might want to try GstInference.

Best Regards,