Foreign Object Debris (FOD) Detection System

Hello everyone! I am working on a project to build an automated FOD detection system that can be installed at airport runways. FOD items are very small: screws, bolts, nuts, stones, small pebbles, etc.
I would like help with two things: which camera is best for long-range detection of small FOD, and how I can build such an automated system.

What I have in mind is a system of static cameras installed along the runway, used to perform real-time FOD detection with YOLOv4. Any help would be appreciated!

You may need to be more specific about what kind of help you need.

I mean that currently I am using a Jetson Nano with a Logitech C920 Pro USB camera to do real-time FOD detection with TensorRT at 4.6 FPS. I followed @jkjung13's GitHub repo. Thanks to @jkjung13.

Now the problem is that my current system (the Nano plus the Logitech cam) is used in my college lab, but I want to build a system that can be used on an actual runway. What are the necessary modifications? One major change is obviously the camera, since the runway is 2700 by 40 feet, so a very good camera is needed. What other changes are required?
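To see why the camera choice matters so much, here is a back-of-envelope estimate of how many fixed-camera footprints it would take to resolve the smallest FOD on a runway of these dimensions. The runway size and 1 mm minimum FOD size come from this thread; the "3 pixels across the smallest object" requirement and the 1920-pixel sensor width are assumptions for illustration only.

```python
import math

# Back-of-envelope camera coverage estimate (values marked "assumed"
# are illustrative, not requirements from any standard).
FT_TO_M = 0.3048

runway_length_m = 2700 * FT_TO_M   # ~823 m, from the thread
runway_width_m = 40 * FT_TO_M      # ~12.2 m, from the thread

min_fod_m = 0.001                  # 1 mm, smallest FOD mentioned in the thread
pixels_on_target = 3               # assumed minimum pixels across an object
sensor_width_px = 1920             # assumed Full HD sensor

# Ground sample distance (metres per pixel) needed to put
# `pixels_on_target` pixels across the smallest object.
gsd_m = min_fod_m / pixels_on_target

# Width of ground one camera footprint can cover at that GSD.
footprint_m = sensor_width_px * gsd_m

# Side-by-side footprints needed to cover the runway length.
n_footprints = math.ceil(runway_length_m / footprint_m)

print(f"GSD needed: {gsd_m * 1000:.2f} mm/px")
print(f"One camera footprint: {footprint_m:.2f} m")
print(f"Footprints along runway: {n_footprints}")
```

With these assumed numbers, one 1920-pixel camera covers only about 0.64 m of runway at the resolution needed for 1 mm objects, which is why a single static camera cannot do the job and why many installations relax the minimum FOD size.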
More specifically, the code I am using is from @jkjung13's repo, as attached below: (1.6 KB)

I run this code in the Nano's terminal with this command:
python3 trt_yolo.py --usb --vid 0 --width 1280 --height 720 --category_num 1

But this is a manual process, so what if I want to make it automated? These are some of the important points I have highlighted; beyond them, any help in solving this problem efficiently is welcome!

I’d say you need many cameras. You would not necessarily need all cameras active at the same time, and could simply sequence through them. I don’t think a single “reasonably and not astronomically priced” camera could do the job without being mobile (and it is much simpler and cheaper, and probably more accurate, to use multiple fixed cameras and sequence through them).
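The sequencing idea above can be sketched as a simple round-robin poll over the camera list. This is a minimal sketch: `grab_and_detect` is a hypothetical stand-in for one capture-plus-inference pass (e.g. one TensorRT YOLO run on a frame from that camera), so the example below exercises the scheduling logic with a stub detector rather than real hardware.

```python
from typing import Callable, Iterable, List, Tuple

def sequence_cameras(camera_ids: Iterable[int],
                     grab_and_detect: Callable[[int], List[str]],
                     rounds: int) -> List[Tuple[int, List[str]]]:
    """Poll each camera in turn for `rounds` full passes.

    `grab_and_detect` is a placeholder for whatever capture +
    inference call the real system uses; it returns the list of
    detections for that camera. Only non-empty results are kept.
    """
    results = []
    ids = list(camera_ids)
    for _ in range(rounds):
        for cam in ids:
            detections = grab_and_detect(cam)
            if detections:
                results.append((cam, detections))
    return results

# Stub detector: camera 2 "sees" a bolt on every pass.
def fake_detector(cam_id: int) -> List[str]:
    return ["bolt"] if cam_id == 2 else []

hits = sequence_cameras([0, 1, 2, 3], fake_detector, rounds=2)
print(hits)  # [(2, ['bolt']), (2, ['bolt'])]
```

Because only one camera is active at a time, a single inference device can serve many fixed cameras, at the cost of a longer revisit time per runway segment.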

I am a bit intrigued by the thought of perhaps using an infrared laser to scan the surface and provide distance measurements (since thermal won’t blind pilots or interfere with electronics). If you have a map of the field (runway), then the infrared could be used to guide where the camera spends extra time.

Night and bad weather would also be a problem, and near infrared would probably help there.

Sorry, I’m not much help on the software side, but I am guessing existing LIDAR sample code could be adapted.

Thanks @linuxdev. You are right, we need multiple cameras, but what kind of system is suitable? Currently I have a Jetson Nano, and in my lab I am doing FOD detection with YOLOv4 and a Logitech USB cam. But for an actual runway I do not know which specific camera, which specific algorithm, etc., so I need help in that regard.
As far as LIDAR is concerned, I am not currently using it, but your idea is good.
For the time being my focus is just a camera-based system.

I would suggest using a TX1/TX2 or Xavier, since those platforms can support 6 to 16 cameras.

Thanks @ShaneCCC. But what about the cameras? Which camera is best suited, given that I have to place the camera at a distance from the runway and FOD on the runway are very small, i.e. roughly 1 mm to 10 inches in size?
So FODs are very small. And furthermore, what about this issue:

For the camera, you can consult with a camera partner.

Thanks @ShaneCCC. OK, the point about cameras is clear; now what about this issue:


What kind of automation are you trying to achieve?
In general, you can run the detector periodically.
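Running the detector periodically can be sketched as a fixed-interval loop. This is a minimal sketch: `detect_once` is a hypothetical stand-in for one full detection pass over the cameras, and `max_runs` exists only so the example terminates (a deployed loop would run until stopped).

```python
import time
from typing import Callable

def run_periodically(detect_once: Callable[[], None],
                     interval_s: float,
                     max_runs: int) -> int:
    """Call `detect_once` every `interval_s` seconds, `max_runs` times.

    Scheduling by a monotonic deadline (rather than sleeping a fixed
    amount after each pass) keeps the period stable even when a
    detection pass itself takes noticeable time.
    """
    runs = 0
    next_due = time.monotonic()
    while runs < max_runs:
        now = time.monotonic()
        if now >= next_due:
            detect_once()
            runs += 1
            next_due += interval_s
        else:
            time.sleep(next_due - now)
    return runs

# Example with a stub pass that just records when it ran.
calls = []
n = run_periodically(lambda: calls.append(time.monotonic()),
                     interval_s=0.01, max_runs=3)
print(n)  # 3
```

The same structure works whether one pass means a single camera frame or a full sweep through all runway cameras.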

Please check our DeepStream SDK to see if it can meet your requirement:

Reference test application: Yolo detector
Path inside sources directory: sources/objectDetector_Yolo
Description: Configuration files and custom library implementation for the Yolo models, currently Yolo v2, v2 tiny, v3, and v3 tiny.


Here is a post about starting a service at boot:
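For reference, starting the detector at boot on the Nano is usually done with a systemd unit along these lines. This is a sketch only: the unit name, `User`, `WorkingDirectory`, and the script path are assumptions you would adapt to your own setup, and the `ExecStart` command mirrors the one used earlier in the thread.

```ini
# /etc/systemd/system/fod-detector.service  (file name and paths are assumptions)
[Unit]
Description=FOD detection at boot
After=network.target

[Service]
Type=simple
# Assumed checkout location of the detection code:
WorkingDirectory=/home/nano/tensorrt_demos
ExecStart=/usr/bin/python3 trt_yolo.py --usb --vid 0 --width 1280 --height 720 --category_num 1
Restart=on-failure
User=nano

[Install]
WantedBy=multi-user.target
```

After saving the file, `sudo systemctl enable --now fod-detector.service` starts it immediately and on every boot, which removes the manual terminal step.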


As far as DeepStream is concerned, my model is YOLOv4 (YOLOv3 was not giving me good results), so I cannot use DeepStream!


Have you tried your customized YOLOv4 model with DeepStream yet?
It needs some updates but should work, since the backend is also TensorRT.
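Community YOLOv4-to-DeepStream ports typically wire a custom model into the pipeline through a Gst-nvinfer configuration file along these lines. This is a sketch under assumptions: the file names, the engine path, the parser function name, and the custom library name all come from community ports rather than the stock SDK, so they must match whatever custom bbox-parsing library you build.

```ini
# config_infer_primary_yolov4.txt -- a sketch; paths, the parser function
# name, and the custom library name are assumptions from community ports.
[property]
gpu-id=0
net-scale-factor=0.00392156862745098   ; 1/255, assuming input scaled to [0,1]
model-engine-file=yolov4-416.engine    ; TensorRT engine built from the custom model
labelfile-path=labels.txt              ; one class, matching --category_num 1 above
batch-size=1
network-mode=2                         ; 0=FP32, 1=INT8, 2=FP16
num-detected-classes=1
interval=0
; Custom YOLOv4 output parsing (both names are assumptions):
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
pre-cluster-threshold=0.25
```

The key point is that nvinfer runs whatever TensorRT engine you give it; the YOLOv4-specific part is only the output-parsing function supplied via `custom-lib-path`.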


No @AastaLLL, I have not tried my custom YOLOv4 yet. Can you please point me to a good, easy tutorial I can follow to implement my customized YOLOv4 model with DeepStream?


You can check this blog for some information: