Hello guys,
So I have been working on a new library for working with the Jetson Nano in Python. It’s a simple-to-use camera interface for the Jetson Nano that supports USB and CSI cameras in Python. Although tested only with the Nano, it should work with other boards in the Jetson family, since it is based on the Accelerated GStreamer plugins.
It currently supports the following camera types:
- CSI cameras
- Various USB cameras
- IP cameras (planned for a future version)
Some of the features are:
- It is OpenCV-ready: frames can be passed directly to OpenCV’s imshow.
- Each frame is a NumPy RGB array.
- Supports different camera flip modes (counterclockwise 90 degrees, rotate 180 degrees, clockwise 90 degrees, horizontal flip, vertical flip).
- Can be used with multiple cameras
- Supports frame rate enforcement. *Only available for USB cameras.
- Frame rate enforcement uses the GStreamer videorate plugin to ensure the camera runs at the requested frame rate.
- It is based on Accelerated GStreamer Plugins
- Should work with other Jetson boards like Jetson TX1, TX2 and others (Not tested)
- Easily read images as numpy arrays with image = camera.read()
- Supports threaded read, available to all camera types. To enable fast threaded reads, set enforce_fps=True.
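For context on what the accelerated pipelines look like, here is a minimal sketch of the kind of GStreamer strings such a library builds internally. The element names (nvarguscamerasrc, nvvidconv, videorate) are the standard JetPack ones, but the exact strings NanoCamera generates are an assumption, not its real internals:

```python
# Sketch of the accelerated GStreamer pipeline strings a library like this
# builds internally before handing them to cv2.VideoCapture(..., CAP_GSTREAMER).
# Exact strings are assumptions; element names are the standard JetPack ones.

def csi_pipeline(width=640, height=480, fps=30, flip=0):
    """CSI camera via nvarguscamerasrc, converted to BGR for OpenCV."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

def usb_pipeline(device=0, width=640, height=480, fps=30, enforce_fps=False):
    """USB camera via v4l2src; videorate enforces a fixed frame rate if asked."""
    rate = f"videorate ! video/x-raw, framerate={fps}/1 ! " if enforce_fps else ""
    return (
        f"v4l2src device=/dev/video{device} ! "
        f"video/x-raw, width={width}, height={height} ! "
        f"{rate}videoconvert ! video/x-raw, format=BGR ! appsink"
    )

print(csi_pipeline())
print(usb_pipeline(enforce_fps=True))
```

Passing a string like this to cv2.VideoCapture with the cv2.CAP_GSTREAMER backend is what makes the frames come back as OpenCV-compatible NumPy arrays.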
You can install it with a simple pip call:
pip3 install nanocamera
or clone the GitHub repo.
More details here:
Cool, thanks for sharing! Will have to check it out.
Very cool! Do you have plans to support exporting the camera to a loopback device for use in browsers?
Well, I’m not sure. I never thought of that, and I’m not sure I understand how it’s supposed to work. If you can explain it in more detail, I could add that functionality.
@thehapyone: CSI cameras cannot be used by browsers directly - in my case, for joining WebRTC video meetings. The workaround is to use GStreamer to capture the CSI camera input and stream it to a virtual camera device. This virtual device can then be used by Chromium.
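The workaround described here could be sketched roughly as follows (untested; the loopback device number, caps, and v4l2loopback module options are assumptions):

```python
# Untested sketch of the virtual-camera workaround: capture the CSI camera
# with an accelerated pipeline and write it to a v4l2loopback device that
# browsers can open as a normal webcam. Device number and caps are assumptions.

# One-time setup (as root):
#   sudo modprobe v4l2loopback video_nr=2 exclusive_caps=1
VIRTUAL_DEVICE = "/dev/video2"  # assumed free loopback node

pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=YUY2 ! "
    f"v4l2sink device={VIRTUAL_DEVICE}"
)
command = "gst-launch-1.0 " + pipeline
print(command)  # run this in a terminal; Chromium then sees /dev/video2 as a webcam
```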
The current version supports this. You should check it out.
- Works with RTSP streaming cameras and videos with hardware acceleration (only supports the H.264 video codec).
- Works with IP cameras (JPEG codec) or any MJPEG streaming source (currently decoded on the CPU; TODO: hardware acceleration).
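For reference, the two pipeline shapes described above could be sketched like this (element names such as omxh264dec are typical for JetPack of that era; these are assumed shapes, not the library’s exact strings):

```python
# Sketched pipeline strings for the two new source types. Element names are
# assumptions based on common Jetson GStreamer usage, not NanoCamera internals.

def rtsp_pipeline(location, width=640, height=480):
    """RTSP H.264 source decoded with the Jetson hardware decoder."""
    return (
        f"rtspsrc location={location} ! "
        "rtph264depay ! h264parse ! omxh264dec ! "
        f"nvvidconv ! video/x-raw, width={width}, height={height}, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

def mjpeg_pipeline(location, width=640, height=480):
    """MJPEG-over-HTTP source; jpegdec runs on the CPU (hence the TODO)."""
    return (
        f"souphttpsrc location={location} do-timestamp=true is-live=true ! "
        "multipartdemux ! jpegdec ! videoscale ! "
        f"video/x-raw, width={width}, height={height} ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )
```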
I am currently looking into this, ideally without relying on external tools. I will update here if I’m successful.
Could you possibly add event observation using traitlets, so that an image-change event is triggered when a new frame is available?
This would enable callbacks to link a widget to a camera view within Jupyter notebooks.
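The requested pattern might look something like this minimal, dependency-free stand-in for what traitlets’ observe mechanism provides (the ObservableFrame class and the "value" trait name are illustrative, modeled on JetCam-style usage, and are not part of NanoCamera):

```python
# Minimal sketch of the observe/callback pattern being requested. traitlets'
# HasTraits.observe(handler, names='value') automates exactly this; the class
# below is a hypothetical stand-in so the idea is visible without dependencies.

class ObservableFrame:
    """Stand-in for a traitlets HasTraits object with a 'value' trait."""
    def __init__(self):
        self._value = None
        self._observers = []

    def observe(self, handler, names="value"):
        # mirrors traitlets' observe(handler, names='value')
        self._observers.append(handler)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for handler in self._observers:
            handler({"old": old, "new": new, "name": "value"})

frames = []
cam = ObservableFrame()
cam.observe(lambda change: frames.append(change["new"]), names="value")
cam.value = "frame-1"   # in a real camera thread, each new image triggers this
print(frames)  # -> ['frame-1']
```

In a notebook, the handler would update an ipywidgets Image widget instead of appending to a list.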
My Python isn’t good enough to modify your existing code.
Thanks for the great code.
Damian
It was actually fairly easy to hack your MJPEG GStreamer commands into a new class in a forked copy of the JetCam project (bundled with JetPack) at: GitHub - NVIDIA-AI-IOT/jetcam: Easy to use Python camera interface for NVIDIA Jetson, as follows:
mjpeg_camera.py

```python
from .camera import Camera
import atexit
import cv2
import traitlets


class MJPEGCamera(Camera):

    capture_location = traitlets.Unicode(default_value="127.0.0.1:8080/stream")
    capture_fps = traitlets.Integer(default_value=30)
    capture_width = traitlets.Integer(default_value=640)
    capture_height = traitlets.Integer(default_value=480)

    def __init__(self, *args, **kwargs):
        super(MJPEGCamera, self).__init__(*args, **kwargs)
        try:
            self.cap = cv2.VideoCapture(self._gst_str(), cv2.CAP_GSTREAMER)
            re, image = self.cap.read()
            if not re:
                raise RuntimeError('Could not read image from camera.')
        except Exception:
            raise RuntimeError('Could not initialize camera. Please see error trace.')
        atexit.register(self.cap.release)

    def _gst_str(self):
        return ('souphttpsrc location=%s do-timestamp=true is_live=true ! '
                'multipartdemux ! jpegdec ! videorate ! videoscale ! '
                'video/x-raw, width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! '
                'videoconvert ! video/x-raw, format=BGR ! appsink') % (
            "http://" + self.capture_location, self.capture_width,
            self.capture_height, self.capture_fps)

    def _read(self):
        re, image = self.cap.read()
        if re:
            return image
        raise RuntimeError('Could not read image from camera')
```
It’s a bit quick and dirty and may not have some of the niceties of your code.
One thing to note: the MJPEG IP camera source needs to be started before running the code, or the camera will not become available to JetCam.
Here’s an example call to it:
```python
from jetcam.mjpeg_camera import MJPEGCamera

camera = MJPEGCamera(capture_width=224, capture_height=224, capture_fps=10,
                     capture_location="192.168.0.100:8080/stream")
camera.running = True
print("camera created")
```
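Regarding the note above about starting the MJPEG source first: the wire format that multipartdemux expects is plain multipart HTTP, which makes a source easy to sanity-check. A sketch of the framing (the boundary name and headers here are typical assumptions, not what any particular app emits):

```python
# Sketch of the multipart/x-mixed-replace framing an MJPEG source emits and
# multipartdemux consumes. Boundary name is arbitrary (servers advertise theirs
# in the Content-Type header); the JPEG bytes below are a placeholder.

BOUNDARY = "frame"  # assumed; check the source's Content-Type header

def mjpeg_chunk(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG image as a single part of an MJPEG stream."""
    header = (
        f"--{BOUNDARY}\r\n"
        "Content-Type: image/jpeg\r\n"
        f"Content-Length: {len(jpeg_bytes)}\r\n\r\n"
    ).encode("ascii")
    return header + jpeg_bytes + b"\r\n"

chunk = mjpeg_chunk(b"\xff\xd8...fake-jpeg...\xff\xd9")
print(chunk[:9])  # -> b'--frame\r\n'
```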
I can’t seem to get this code working. What are you using to stream the camera? That’s the only thing I can think of that may be different from how I’m doing it.
Thank you! I never even thought to try a mobile app to do this; I tried about 5 different Windows and Linux utilities that all failed. I couldn’t use that app since I have an Android, but the app IP Webcam on Google Play worked like a charm. (For anyone else seeing this thread that wants to do the same, make sure to use ip:port/video to get just the MJPEG feed.)