Kind of, but not quite, and not ultimately. I'm adding an AI-based trigger to a home security system that I've already been developing for ten years. What I've done is come up with a JSON definition of a structure covering camera streams, object categories, notifications, region-of-interest polygons, regions of explicit non-interest, and a number of other parameters. Once configured, this is interpreted by a multi-threaded Python script that consumes several MJPEG streams, takes the latest image from each, pushes it up a websocket connection to a "yolo-server", and receives the detection JSON back. With that it checks whether any detections fall within the region-of-interest constraints and, if so, makes a REST call to my existing system.
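The region-of-interest check described above boils down to point-in-polygon tests on each detection. Here's a minimal sketch of that idea; the function names and detection fields are my own illustration, not the actual config format:

```python
# Illustrative sketch of the ROI / non-interest check. The detection dict
# shape ("x1".."y2" box corners) is an assumption, not the real schema.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside polygon [(x1, y1), (x2, y2), ...]?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count edge crossings of a horizontal ray from (x, y) to the right.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def detection_triggers(detection, roi, exclusions):
    """Trigger when the detection's centre lies inside the ROI polygon
    and outside every explicit non-interest polygon."""
    cx = (detection["x1"] + detection["x2"]) / 2
    cy = (detection["y1"] + detection["y2"]) / 2
    if not point_in_polygon(cx, cy, roi):
        return False
    return not any(point_in_polygon(cx, cy, ex) for ex in exclusions)
```

Testing the box centre rather than the whole box keeps it cheap, at the cost of missing objects that only partially overlap the ROI.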
My existing system then captures video and triggers rules in an event engine, which can send outgoing alerts such as Pushover and e-mail alerts containing links to the captured video snippets, control I/O ports, make outgoing HTTP calls, and quite a lot of other things.
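The hand-off from the detection script into the event engine is just a REST call. Something along these lines, purely as a sketch; the endpoint path and payload fields are hypothetical, since the real API is the author's own:

```python
# Hedged sketch of the REST trigger into the existing event engine.
# The "/api/trigger" path and the payload field names are assumptions.
import json
import urllib.request

def build_event_payload(camera, label, confidence):
    """Assemble the JSON body the event engine would receive (illustrative fields)."""
    return {"camera": camera, "object": label, "confidence": round(confidence, 2)}

def post_event(base_url, payload):
    """POST the payload as JSON to the (hypothetical) trigger endpoint."""
    req = urllib.request.Request(
        base_url + "/api/trigger",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

Keeping the detection side down to a single stateless HTTP call means the event engine owns all the alerting logic, which is what lets it fan out to Pushover, e-mail, I/O ports and so on.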
So in essence I'm making the AI do the work for you. It's very nice showing the video with the bounding boxes, and I have Python tools for that, but ultimately home security should eliminate the need for a person to stare at a screen with bounding boxes. I want to implement all of that as well, but it's a lower priority to me than making the system functional and practical and getting the software released for people to use.
I've been using YOLOv3 at 416x416 pixels on Jetson Nano based systems at about 5 sites for about a year now, and it works really well. I recently spotted one intruder intent on theft, and it also helped identify a car involved in a car theft. In the past, without the AI as a sensor input, I captured intruders on video a number of times.
I'm focusing on the practical side of using the AI in real life. To that end I want to package up the integration of YOLO (currently YOLOv4), an Apache reverse proxy, dynamic DNS, automatic Let's Encrypt certificate renewal, video transcoding, and a memory overlay file system on top of an OS running read-only from SSD. That's how my current boxes are set up. In essence it's just waiting for me to complete the ReactJS configuration screens before I can write the install scripts, and as I'm brand new to ReactJS that's holding me up at the moment. Oh, and on the Xavier I also need to create the root pivot scripts in a manner that provides a safe fallback if the writeable partition of the disk ever gets corrupted, which can happen if the power cuts out while writing video. That's a different partition than the read-only one the OS runs from, so a fallback solution would be great, particularly as there's no SD card in the AGX and a full reset implies a complete software reinstall, which is a bit brutal.
The YOLOv4-based system I've been using over the past year processes an image every 0.7s. The Jetson Xavier AGX that I've been testing with now takes 0.165s using the complete full model. I also have a Xavier NX but haven't tested that yet.
If I get a match I also do an additional test to be sure it's genuine. Over the past year I've gained a lot of insight into what kinds of things can cause false positives and how to deal with them.
The current configuration format also makes it easy to trigger on loitering, a crowd forming, or a loitering crowd forming.
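Loitering-style triggers like those fall out naturally once you have per-frame detections: fire when matching objects have stayed in a region past a dwell threshold. A minimal sketch of the idea, with invented names and thresholds, not the actual configuration format:

```python
# Illustrative loitering / crowd-forming trigger built on per-frame counts.
# Class name, parameters and thresholds are my invention for the example.
class LoiterDetector:
    def __init__(self, dwell_seconds=60, min_count=1):
        self.dwell_seconds = dwell_seconds
        self.min_count = min_count   # e.g. 3+ models "crowd forming"
        self.first_seen = None       # when presence started, None if region clear

    def update(self, timestamp, count):
        """Feed the number of matching objects in this frame's region.
        Returns True once presence has persisted past the dwell time."""
        if count >= self.min_count:
            if self.first_seen is None:
                self.first_seen = timestamp
            return timestamp - self.first_seen >= self.dwell_seconds
        self.first_seen = None  # region cleared, timer resets
        return False
```

Combining the two knobs gives the variants mentioned above: `min_count=1` for loitering, a higher `min_count` with a short dwell for a crowd forming, and both raised for a loitering crowd.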
The main system that this triggers also supports a websocket server, so other consumers can hang off it.
I serve up YOLO detections via a websocket server that takes binary images thrown at it. This makes it easy to use a separate, more powerful GPU if you like. With a Jetson Nano I can support good detection over 4 cameras, but I myself use an RTX 2080 and I'm comfortably monitoring more than 15 cameras.
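A quick back-of-envelope check on those camera counts, assuming a single inference worker polling cameras round-robin (a simplification; the real script is multi-threaded):

```python
# Rough capacity estimate: with one shared YOLO worker, each camera gets
# re-checked once per full round-robin pass over all cameras.
def revisit_interval(inference_seconds, num_cameras):
    """Seconds between successive YOLO checks of the same camera."""
    return inference_seconds * num_cameras

# Jetson Nano at ~0.7 s per frame across 4 cameras:
# each camera is re-checked roughly every 2.8 s.
nano_interval = revisit_interval(0.7, 4)
```

At the Xavier AGX's 0.165 s per frame, the same few-second revisit interval stretches over many more cameras, which is consistent with a desktop-class GPU handling 15+.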
For 10 1/2 years I developed my security system for myself, so it’s already well developed and tested. I’d like to add the AI support and then see if other people can benefit from it as well.
In addition to AI-based sensing I have also developed LoRaWAN PIR-based sensors that I'll be open sourcing as well. There are some use cases, such as large farms, where you may not be able to easily get cameras to the edge, so I guess there's still some use for the older-style sensors, but the AI has pretty much obsoleted them in my experience over the last year.