Drones have become an integral part of the modern world. Alongside legitimate uses such as aerial photography and agriculture, they can also be turned into weapons for terrorist attacks.
For this reason, technologies that can track them are of great interest.
Many effective vision-based tracking systems already exist, but they share two drawbacks: they require good illumination, and they cannot detect drones hidden behind obstacles such as buildings or trees.
A team of researchers from the University of Texas at Arlington has developed a new type of drone tracking device that uses both visual and acoustic signals. The DroneChase system is designed to be mounted on vehicles to continuously track fast-moving drones. DroneChase uses a machine learning algorithm that learns the correspondence between visual and acoustic information.
To detect drones visually, the team retrained a YOLOv5 model on a dataset of 10,000 drone images. Fed a video stream, the model detects drones and labels them in each frame. These labels then serve as training targets for a neural network that learns to locate the drone from the sound it produces.
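The cross-modal idea above can be sketched in a few lines: visual detections (e.g. from YOLOv5) act as pseudo-labels for training an acoustic localizer. Everything here is illustrative, assuming a simple timestamp-indexed pairing of detections and audio clips; it is not the published DroneChase code.

```python
# Hypothetical sketch: bounding boxes from a camera model serve as
# pseudo-labels for audio recorded at the same moment. The function
# names and data layout are assumptions made for illustration.

def make_training_pairs(video_detections, audio_clips):
    """Pair each timestamped bounding box with the audio clip recorded
    at the same timestamp, yielding (audio, target) training pairs."""
    pairs = []
    for t, bbox in video_detections:       # bbox = (x, y, w, h) in pixels
        if t in audio_clips:               # audio indexed by timestamp
            cx = bbox[0] + bbox[2] / 2     # target: box centre, a proxy
            cy = bbox[1] + bbox[3] / 2     # for the drone's direction
            pairs.append((audio_clips[t], (cx, cy)))
    return pairs

# Toy usage: two frames with detections and matching audio samples.
dets = [(0, (100, 50, 40, 40)), (1, (120, 60, 40, 40))]
audio = {0: [0.1, 0.2], 1: [0.3, 0.1]}
pairs = make_training_pairs(dets, audio)
print(pairs[0])  # ([0.1, 0.2], (120.0, 70.0))
```

Because the labels come from the camera rather than from human annotators, the acoustic model can keep learning from ordinary driving footage, which is what makes this kind of self-supervision attractive.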
DroneChase's algorithms are efficient enough to run on a Raspberry Pi single-board computer. Paired with a low-cost camera and a Seeed ReSpeaker microphone, the system remains affordable.
Tests have shown that vision and acoustics together locate the drone with high accuracy. When the drone is hidden behind an object or the lighting is poor, the acoustic channel takes over and still determines the drone's location reliably.
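One simple way to realize this fallback behaviour is a late-fusion rule: trust the camera when its detection confidence is high, and otherwise use the acoustic estimate. This is an assumption about how such a fusion could work, not the published DroneChase algorithm.

```python
# Illustrative late-fusion rule (an assumption, not DroneChase's
# actual method): prefer the visual position estimate when the
# detector is confident; fall back to the acoustic estimate when
# the drone is occluded or the scene is too dark for the camera.

def fuse(visual_est, visual_conf, acoustic_est, conf_threshold=0.5):
    """Return (position, source): visual if confident, else acoustic."""
    if visual_est is not None and visual_conf >= conf_threshold:
        return visual_est, "visual"
    return acoustic_est, "acoustic"

# Drone visible and well lit: the camera estimate wins.
print(fuse((120, 70), 0.9, (118, 74)))  # ((120, 70), 'visual')
# Drone behind a tree: no visual detection, acoustics take over.
print(fuse(None, 0.0, (118, 74)))       # ((118, 74), 'acoustic')
```

The threshold here is a tunable parameter; in practice it would be chosen to balance false visual detections against unnecessarily falling back to the noisier acoustic estimate.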
The team plans to expand the system so that it can track more than one drone at a time. There are also plans to test DroneChase in more challenging environmental conditions to make it more reliable.