How do I identify every UFO scene in my video library?

Hi all,
I have about 2000 videos on UFOs in my video library. 30 years' worth.
I'd like to be able to process each video (mostly MP4 or MKV), record the timestamps where a new scene with a UFO starts, and then continue like this for the rest of the playlist.
Then, use Adobe Premiere scripting to extract 5–10 second clips using the timestamps marked in the previous step. No problems on the Premiere side for me.
Is that possible with a Jetson Nano?
Can the system be taught what UFOs look like by training it with 10, 20, … stills?
Can it be done in a year by a systems programmer of 30 years, proficient in C, C++, and APIs?
What I want to do is find the location of each sighting in the video and manually record it on a map, to see if there are any patterns to their movements. For example, do they appear more often around seismic fault lines?
What I don’t want is to sit there and watch the entire library over again. That would be too painful.
Thanks in advance for any idea you might have and care to share.

I couldn’t tell you all of the details of setting up for training, but you could train on a PC or in the cloud and then deploy the model on a Nano. It won’t matter what the object is (dogs, cars, jets); if you have a good set of samples, then detection should work based on what is learned from them.
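To make the deployment step concrete, here is a minimal sketch of the timestamp-logging pass the original poster describes: run a detector over every Nth frame and record the time at which a detection *starts*. `detect_ufo` and the toy frame list are hypothetical stand-ins; a real pass would read frames with something like OpenCV and call a trained model.

```python
def detect_ufo(frame):
    """Placeholder: a real deployment would run a trained detector here."""
    return frame.get("has_ufo", False)

def scan_video(frames, fps, sample_every=5):
    """Return the timestamps (in seconds) where a UFO scene begins."""
    timestamps = []
    previously_seen = False
    for index in range(0, len(frames), sample_every):
        seen = detect_ufo(frames[index])
        if seen and not previously_seen:
            timestamps.append(index / fps)  # frame index -> seconds
        previously_seen = seen
    return timestamps

# Toy "video": 30 fps, a UFO appears at frame 60 and again at frame 150.
frames = [{"has_ufo": 60 <= i < 90 or 150 <= i < 170} for i in range(300)]
print(scan_video(frames, fps=30))  # [2.0, 5.0]
```

The resulting list of seconds is exactly what the Premiere-side scripting would consume to cut the 5–10 second clips.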

You will need 1000+ stills of various UFOs, plus augmentation, and then another 5,000+ stills (plus augmentation) of scenes without UFOs, for the network to learn what a UFO looks like.
The good news is that you really only need a single signal: “interesting” or “not interesting,” so at least labeling will be simple.
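On the augmentation side, each labeled still can be multiplied into several training samples with cheap transforms. A minimal numpy-only sketch (real pipelines would add rotations, crops, noise, etc.):

```python
import numpy as np

def augment(image):
    """Yield simple variants of an HxWx3 uint8 image."""
    yield image
    yield image[:, ::-1]                     # horizontal flip
    yield image[::-1, :]                     # vertical flip
    brighter = np.clip(image.astype(np.int16) + 40, 0, 255)
    yield brighter.astype(np.uint8)          # brightness shift

still = np.zeros((64, 64, 3), dtype=np.uint8)
variants = list(augment(still))
print(len(variants))  # 4 training samples from 1 still
```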

Trying to train on the Nano will be very slow. You’ll probably want a high-end desktop card with plenty of RAM for training (GTX 1080 or better), or perhaps rent time on Amazon EC2 GPU instances.

I would first start with what is a flying object, and what isn’t.

So effectively you’d have a neural network to spot things flying. I imagine an algorithmic approach could also work here, since you’re essentially detecting changes in a scene.
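The algorithmic route can be surprisingly simple against a mostly static sky: plain frame differencing flags moving objects with no training at all. A numpy-only sketch (the threshold values are illustrative guesses):

```python
import numpy as np

def moving_pixels(prev_frame, frame, threshold=25):
    """Return a boolean mask of pixels that changed noticeably."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def has_motion(prev_frame, frame, min_pixels=20):
    """True if enough pixels changed to suggest a moving object."""
    return int(moving_pixels(prev_frame, frame).sum()) >= min_pixels

sky = np.zeros((120, 160), dtype=np.uint8)   # static background
with_object = sky.copy()
with_object[50:60, 70:85] = 200              # a bright moving blob
print(has_motion(sky, with_object))  # True
print(has_motion(sky, sky.copy()))   # False
```

This would only serve as a cheap pre-filter; clouds, camera shake, and compression noise are why the neural network pass is still needed downstream.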

I’d use a general object-detection neural network that simply captures a region of interest around the flying object.

I’d let that run on lots of videos of things flying, so you end up with a huge database of flying objects.

Then I’d use Amazon Mechanical Turk to crowd-source the task of identifying the flying objects: things like bird, airplane, etc.
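The crop-and-label step above might look like this sketch: cut each detected box out of the frame and write a manifest that a crowdsourcing service could serve to human labelers. The (x, y, w, h) boxes here are hypothetical detector output.

```python
import csv
import io
import numpy as np

def crop_rois(frame, detections):
    """Crop each (x, y, w, h) box out of an HxWx3 frame."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in detections]

def write_manifest(out, video_id, detections):
    """One row per crop; the label column is left blank for humans."""
    writer = csv.writer(out)
    writer.writerow(["video", "x", "y", "w", "h", "label"])
    for (x, y, w, h) in detections:
        writer.writerow([video_id, x, y, w, h, ""])

frame = np.zeros((480, 640, 3), dtype=np.uint8)
boxes = [(100, 50, 32, 32), (300, 200, 64, 48)]
crops = crop_rois(frame, boxes)
buf = io.StringIO()
write_manifest(buf, "sighting_0001.mp4", boxes)
print([c.shape for c in crops])  # [(32, 32, 3), (48, 64, 3)]
```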

I’d use that dataset to train a separate classification network. After that was done and proved out, I’d combine the two neural networks. I’m a big fan of stringing neural networks together.
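Stringing the two networks together could be sketched like this: stage one proposes regions that contain *something* flying, stage two classifies each one, and anything the classifier can’t explain is kept as interesting. Both stages are stubs standing in for the real trained models.

```python
def detect_flying_objects(frame):
    """Stage 1 stub: return candidate object crops from a frame."""
    return frame["candidates"]      # e.g. produced by a YOLO-style net

def classify(crop):
    """Stage 2 stub: return a label for one cropped object."""
    return crop["true_label"]       # a real classifier would infer this

def pipeline(frame, interesting=frozenset({"unknown"})):
    """Run both stages; keep only objects the classifier can't explain."""
    labels = [classify(c) for c in detect_flying_objects(frame)]
    return [label for label in labels if label in interesting]

frame = {"candidates": [{"true_label": "bird"},
                        {"true_label": "airplane"},
                        {"true_label": "unknown"}]}
print(pipeline(frame))  # ['unknown']
```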

Then I’d use a system that retrains the classification network based on human feedback, such as when it misidentifies something.
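A minimal sketch of that feedback loop: misidentified samples go into a queue, and once enough accumulate they are folded into the next retraining round. The threshold and the retraining step itself are placeholders.

```python
class FeedbackLoop:
    def __init__(self, retrain_after=100):
        self.corrections = []           # (sample, correct_label) pairs
        self.retrain_after = retrain_after
        self.retrain_count = 0

    def report_mistake(self, sample, correct_label):
        """Called whenever a human corrects a misidentification."""
        self.corrections.append((sample, correct_label))
        if len(self.corrections) >= self.retrain_after:
            self.retrain()

    def retrain(self):
        """Placeholder: a real system would fine-tune the classifier here."""
        self.retrain_count += 1
        self.corrections.clear()

loop = FeedbackLoop(retrain_after=3)
for i in range(7):
    loop.report_mistake(f"clip_{i}", "drone")
print(loop.retrain_count)  # 2 retraining rounds after 7 corrections
```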

I’d try to convince people to buy this “flying object” detector. I think it would actually be a pretty cool product: you could see what kinds of strange things flew around your house (like drones). Like a security system, but for the sky above you.

Please allow me to chime in: I live in the entry lane of the local airfield and am thinking of installing a webcam and then training an NN to identify/classify the incoming and outgoing machines, e.g. “single engine”, “twin engine”, or even better “Cessna 172”, “Piper PA-34”. Any hints on whether there are already existing models, or which AI framework would fit best (Yolo, VisionWorks, OpenCV)?

OpenCV isn’t all that great for deep convolutional models yet, but it could conceivably be used as a driver for other models (e.g., to open a window, read from a camera, and then actually DO SOMETHING when it sees a match).

Yolo is a specific network topology (“you only look once”), which gives you both “the object is here” and “this is the kind of object it is” in a single pass. As far as I can tell, multiple frameworks implement the Yolo model.

To actually run a classifier on the Nano with good performance, you’ll likely want to use the NVIDIA-supported runtimes, of which TensorRT seems to be the best for the Jetson series: https://developer.nvidia.com/tensorrt
There’s a code sample for using Yolo on TensorRT here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#yolov3_onnx
No idea how well it will run on the Nano (or if at all).

Separately, I don’t know of a good dataset with labeled images of various aircraft that’s publicly available and suitable for the particular application you suggest. (There are some airplanes in general-purpose data sets like ImageNet.)
Maybe the government/military developed one, and maybe that would be available through some freedom of information request, but that seems like an uphill march to try to chase down …