Defence
DARPA advances on Fast Lightweight Autonomy programme

Staff writer | 504 words


The U.S. Defense Advanced Research Projects Agency (DARPA) has tested its latest quadcopter software in simulated urban environments, where the aircraft navigated and performed real-world tasks without human assistance.

DARPA has completed a second phase of testing under its Fast Lightweight Autonomy (FLA) programme, demonstrating advanced algorithms designed to turn small air and ground systems into team members that could autonomously perform tasks dangerous for humans — such as pre-mission reconnaissance in a hostile urban setting or searching damaged structures for survivors following an earthquake.

Building on Phase 1 flight tests in 2017, researchers have refined the software and adapted commercial sensors to achieve greater performance with smaller, lighter quadcopters. The tests showed significant progress in both outdoor urban and indoor autonomous flight scenarios, including:

  • flying at increased speeds between multi-story buildings and through tight alleyways while identifying objects of interest;

  • flying through a narrow window into a building and down a hallway searching rooms and creating a 3-D map of the interior; and

  • identifying and flying down a flight of stairs and exiting the building through an open doorway.

Jean-Charles Ledé, DARPA programme manager, explains: “Unmanned systems equipped with FLA algorithms need no remote pilot, no GPS guidance, no communications link, and no pre-programmed map of the area – the onboard software, lightweight processor, and low-cost sensors do all the work autonomously in real-time.”

A key part of the team’s task was for the air vehicle to build not only a geographically accurate map as it traversed the cityscape but also a semantic one.

“As the vehicle uses its sensors to quickly explore and navigate obstacles in unknown environments, it is continually creating a map as it explores and remembers any place it has already been so it can return to the starting point by itself,” said Jon How, co-leader of the MIT/Draper Laboratory team focusing on reducing the number of onboard sensors to lighten the air vehicle for higher speed.
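To make the explore-and-return idea concrete, the sketch below shows, in simplified Python, how a vehicle might remember where it has been and plan a route back to its starting point using only the map it has built. The grid representation, cell coordinates, and breadth-first return planner are illustrative assumptions, not details published by the FLA programme.

```python
# Minimal sketch (not the FLA implementation): an occupancy-grid mapper that
# remembers visited cells and plans a route back to the start over known space.
from collections import deque

class ExplorationMap:
    def __init__(self):
        self.occupied = set()          # cells sensed as obstacles
        self.visited = set()           # cells the vehicle has flown through
        self.start = None              # first visited cell, i.e. the launch point

    def observe(self, cell, obstacle=False):
        """Record a sensed cell; non-obstacle cells count as visited."""
        if obstacle:
            self.occupied.add(cell)
        else:
            self.visited.add(cell)
            if self.start is None:
                self.start = cell

    def route_home(self, current):
        """Breadth-first search back to the start, using only cells already explored."""
        frontier = deque([(current, [current])])
        seen = {current}
        while frontier:
            cell, path = frontier.popleft()
            if cell == self.start:
                return path
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in self.visited and nxt not in seen and nxt not in self.occupied:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None  # no known route home yet

# Example: explore a short corridor, note an obstacle, then ask for the way back.
m = ExplorationMap()
for step in [(0, 0), (1, 0), (2, 0), (2, 1)]:
    m.observe(step)
m.observe((1, 1), obstacle=True)
print(m.route_home((2, 1)))   # [(2, 1), (2, 0), (1, 0), (0, 0)]
```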

Neural nets

Using neural nets, the onboard computer recognizes roads, buildings, cars, and other objects and identifies them as such on the map, providing clickable images as well. The human team member could download the map and images from the onboard processor after the mission is completed.
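A simplified sketch of that semantic layer is given below: detections from an onboard recogniser are attached to map locations together with the snapshot a human could later click through. The detector call, class names, record format, and confidence threshold are assumptions made for illustration; the article does not describe the team's actual data structures.

```python
# Illustrative sketch only: fusing detector outputs into a semantic map layer
# that sits alongside the geometric map and can be reviewed after the mission.
from dataclasses import dataclass, field

@dataclass
class SemanticMap:
    # map coordinate -> list of (label, confidence, image_path)
    annotations: dict = field(default_factory=dict)

    def add_detection(self, map_xy, label, confidence, image_path):
        """Attach a recognised object and its snapshot to a map location."""
        self.annotations.setdefault(map_xy, []).append((label, confidence, image_path))

    def objects_of_interest(self, min_confidence=0.5):
        """What a human team member might review once the map is downloaded."""
        return {xy: [d for d in dets if d[1] >= min_confidence]
                for xy, dets in self.annotations.items()}

smap = SemanticMap()
# In flight, a neural-net detector (hypothetical here) would feed results like:
smap.add_detection((12.4, 3.1), "car", 0.91, "frames/000214.jpg")
smap.add_detection((15.0, 7.8), "building", 0.88, "frames/000377.jpg")
print(smap.objects_of_interest())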

Additionally, the MIT/Draper team incorporated the ability to sync data collected by the air vehicle with a handheld app called the Android Tactical Assault Kit (ATAK), which is already deployed to military forces. Using an optional Wi-Fi link from the aircraft (that the human team member could turn on or off as desired), the air vehicle can send real-time imagery of objects of interest.
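The "optional link" concept can be sketched as imagery that is queued onboard and transmitted only while the operator has enabled the radio. The class and method names below (including the stand-in transport) are hypothetical and do not represent the ATAK interface or the team's software.

```python
# Hedged sketch: hold imagery onboard while the link is off; push it in real
# time once the human team member switches the Wi-Fi link on.
class ImageryRelay:
    def __init__(self, datalink):
        self.datalink = datalink      # hypothetical transport (e.g. a Wi-Fi socket)
        self.enabled = False          # operator-controlled; off by default
        self.queue = []               # imagery held onboard while the link is off

    def set_link(self, enabled):
        self.enabled = enabled
        if enabled:
            self.flush()

    def capture(self, image, label):
        self.queue.append((image, label))
        if self.enabled:
            self.flush()

    def flush(self):
        while self.queue:
            image, label = self.queue.pop(0)
            self.datalink.send(image, label)   # real-time push when allowed

class _PrintLink:                      # stand-in transport for the example
    def send(self, image, label):
        print(f"sent {label}: {image}")

relay = ImageryRelay(_PrintLink())
relay.capture("frames/000512.jpg", "vehicle")   # held onboard: link is off
relay.set_link(True)                            # operator enables Wi-Fi; queue flushes
```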

According to DARPA, FLA’s algorithms could lead to effective human-machine teams on the battlefield, where a small air or ground vehicle might serve as a scout autonomously searching unknown environments and bringing back useful reconnaissance information to a human team member.

Without needing communications links to the launch vehicle, the chances of an adversary detecting troop presence based on radio transmissions are reduced, which adds further security and safety, Ledé said. This could be particularly important in a search-and-rescue scenario, where an FLA-equipped platform could search in radio silence behind enemy lines for a downed pilot or crew member.

