Intelligent drones: Governance and Responsible AI

Neelanshi Varia
6 min read · Feb 6, 2021

Learning from the mistakes and loopholes of earlier artificial intelligence technologies, it is high time we paid attention to developing robust AI algorithms. We must also put laws and policies in place to deploy and govern intelligent drones responsibly before they get out of control.

Photo by Miguel Ángel Hernández on Unsplash

What are “intelligent” drones?

An Unmanned Aerial Vehicle (UAV) can be defined as a “smart” system that can be controlled remotely or by an onboard computer system. [1] Intelligent drone technology goes a step further: these drones are not only smart but also capable of making their own decisions, such as recognizing faces, planning their paths, and communicating with each other, without a human in the loop. The technology can be imagined as the aerial counterpart of autonomous driving vehicles, only with a wider range of applications and looser regulations. From delivery to the military, industries have shown great interest in the development of such drones.

Where do we find these drones?

With Amazon getting permission for Prime Air, it will not be long before we start seeing drones peeping outside our windows on a daily basis. Large real estate groups already have intelligent drones surveying buildings for quality monitoring. But these are not the only areas where intelligent drones are deployed. Many tasks previously performed by satellites can now be performed more precisely with UAVs, especially drones. Tasks like early forest-fire detection, wildlife monitoring and harvest monitoring can now be done at ground level in an automated fashion. Food chains like Domino’s have already delivered pizzas with drones, and delivery players like Amazon and UPS are looking forward to a more intelligent, cheaper and faster version of delivery.

Security and surveillance is another major area of application for these intelligent drones. Drones developed for police departments that can follow and find a person are expected to be flying sooner than we imagine. Various militaries have developed drones that can kill targets on the battlefield without human approval, and patrol drones that perform indoor navigation on the enemy frontier and guide a soldier through unknown territory. It was a concerning sight when Russia’s Khmeimim airbase in western Syria was attacked with a swarm of drones. [2] Swarm robotics essentially gives the members of a swarm a single distributed brain with which to make decisions and adapt to each other. These drones can analyze targets, distribute tasks and deploy weapons with almost zero human interaction.
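To make the “single distributed brain” idea concrete, here is a minimal, purely illustrative sketch of a consensus protocol, one classic building block of swarm robotics: each drone repeatedly averages its own estimate of some quantity (here, a target altitude) with its neighbours’ estimates, so the swarm converges on a shared value with no central controller. The function and variable names are hypothetical, not taken from any real drone system.

```python
# Illustrative consensus sketch: each drone averages its estimate with its
# neighbours' estimates each round, so the swarm converges on a common value
# without any central coordinator.

def consensus_step(estimates, neighbours):
    """One synchronous round: every drone moves to the mean of its neighbourhood."""
    new = []
    for i, own in enumerate(estimates):
        peers = [estimates[j] for j in neighbours[i]] + [own]
        new.append(sum(peers) / len(peers))
    return new

# Three drones with different initial target-altitude estimates (metres),
# connected in a line topology: 0 <-> 1 <-> 2.
estimates = [100.0, 120.0, 80.0]
neighbours = {0: [1], 1: [0, 2], 2: [1]}

for _ in range(50):
    estimates = consensus_step(estimates, neighbours)

print([round(e, 1) for e in estimates])  # → [102.9, 102.9, 102.9]
```

Real swarms run asynchronously over lossy radio links, which is exactly where the robustness concerns discussed later come in.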

Are these drones part of the AI world that will wipe out humankind?

Well, whether or not AI will wipe out humankind is another unending debate, but these drones are definitely part of the AI ecosystem that is growing today and becoming part of everyday life. Intelligent drones embedded with Machine Learning, Computer Vision and other technologies under the umbrella of AI come with the advantages and loopholes that the AI community is actively discussing. Facial recognition technology, which can be used in various surveillance, police and military drones, is probably helpful in finding missing people, but at the same time inherits bias when assessed in terms of gender, race and other demographics. The object detection and segmentation failures that have caused a few accidents with autonomous cars are highly likely to recur with drones, especially given the little existing research on robustness and awareness in the area.
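The demographic bias mentioned above can be surfaced with a simple audit: compare a model’s accuracy across demographic groups and flag large gaps. The sketch below is a hypothetical example with made-up evaluation records, not a real face-recognition benchmark.

```python
# Hypothetical bias audit: compare a recognizer's accuracy per demographic group.
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, predicted_id, true_id) tuples -> {group: accuracy}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, true in records:
        totals[group] += 1
        hits[group] += int(predicted == true)
    return {g: hits[g] / totals[g] for g in totals}

# Toy evaluation records (invented for illustration).
records = [
    ("group_a", "alice", "alice"), ("group_a", "bob", "bob"),
    ("group_a", "carol", "carol"), ("group_a", "dan", "dan"),
    ("group_b", "erin", "erin"), ("group_b", "frank", "grace"),
    ("group_b", "heidi", "ivan"), ("group_b", "judy", "judy"),
]

acc = accuracy_by_group(records)
gap = max(acc.values()) - min(acc.values())
print(acc, "gap:", gap)  # group_a: 1.0, group_b: 0.5, gap: 0.5
```

A gap this large between groups is exactly the kind of disparity that should block deployment of a surveillance drone until the model is fixed.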

Another major aspect of these intelligent drones is route planning, which is not scripted on GPS coordinates but involves intelligent path mapping based on traffic, time of day and other such factors. Amazon’s algorithm excluded some minority neighbourhoods from next-day delivery, and the same bias can carry over to aerial e-commerce systems. Swarm robotics depends on multiple external factors that can behave randomly and produce unpredictable outcomes if the system isn’t robust. The list can go on, but it is important to realize that intelligent drones can have as much impact, and cause as much harm, as other AI systems have in the past, and it is necessary to follow principles of responsible AI implementation, law and governance.
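One way to picture this kind of route planning is as a shortest-path search whose edge costs depend on external factors such as congestion at the current hour. The sketch below uses Dijkstra’s algorithm over a tiny invented waypoint graph; it is not any vendor’s actual algorithm, and all names and numbers are hypothetical.

```python
# Illustrative route planner: Dijkstra over waypoints, where each leg's cost
# is its distance scaled by a congestion multiplier for the destination waypoint.
import heapq

def plan_route(graph, start, goal, congestion):
    """Return (total_cost, path) for the cheapest route from start to goal."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + dist * congestion.get(nxt, 1.0), nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical waypoint graph: distances in km.
graph = {
    "depot": [("corridor_a", 2.0), ("corridor_b", 3.0)],
    "corridor_a": [("customer", 2.0)],
    "corridor_b": [("customer", 1.0)],
}
congestion = {"corridor_a": 2.0}  # corridor A is congested, so its cost doubles

cost, path = plan_route(graph, "depot", "customer", congestion)
print(cost, path)  # → 4.0 ['depot', 'corridor_b', 'customer']
```

The point of the sketch is that the chosen route shifts with the congestion inputs, which is precisely where a biased or manipulated input can silently redirect service away from some neighbourhoods.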

But do we have the law in place for these drones?

Currently, the laws in the US for flying Unmanned Aircraft Systems (UAS) such as drones are set by the Federal Aviation Administration (FAA). [3] It supports Recreational Flyers, Modeler Community-Based Organizations, Certificated Remote Pilots and Educational Users. The FAA’s Law Enforcement Assistance Program (LEAP) assists with drone-related incidents, but nowhere does it spell out at length the rules for flying a drone (let alone the ownership responsibilities of an intelligent drone) or how cases are handled. In conjunction with NASA, the FAA is carrying out an Unmanned Aircraft System Traffic Management (UTM) plan, which was last updated four years ago! The laws aren’t uniform even across towns in the same county. In the city of Evanston, where I live, the law states a moratorium on drone use until reasonable state and federal regulations are enacted, which is indefinite and ambiguous.

The European Union Regulations 2019/947 and 2019/945 set a framework for the safe operation of drones in EU and EASA Member States. It is a 300-page document thoroughly describing guidelines, responsibilities, laws and regulations. But none of these laws says anything about intelligent drones and their operation. A US policy (2012) says: “Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” [4] But compliance with “appropriate levels” can simply be certified, so using such drones isn’t actually forbidden. Hence, the hour calls for establishing Responsible AI guidelines for every autonomous technology we use, i.e., aiming for more robust systems and even better laws to govern these technologies.

What can be done?

Who is to blame when a self-driving car kills someone? Such questions should be asked not only about autonomous vehicles but also about drones, bots, robots, and even simpler AI systems. It is important to assimilate and apply the idea of Responsible AI alongside its development. With the advent of these intelligent drones, proper governance also requires establishing tiers of autonomy for intelligent drones, laws granting the respective permissions to different categories of users, in-depth air traffic rules, and a list of AI technologies prohibited from use. More importantly, we must establish who bears responsibility when these drones go rogue or intrude on privacy “unintentionally”, and what usage is allowed in the military, for they can be as lethal as biological weapons. We should also persevere to make Computer Vision, Machine Learning, Swarm Tech and other algorithms and applications more robust in their drone applications, for the adversarial conditions in the air are far more unpredictable and unknown than those on the terrain we reside on. And we are better off understanding that prevention is better than cure, given the mishaps that have resulted from neglect of robustness and the lack of governing rules for every revolutionary technology.

I am a recent MS in Artificial Intelligence graduate from Northwestern University. This blogpost was written as a part of my coursework for the class ‘Law and the Governance of Artificial Intelligence’ and I am extremely thankful to Prof. Daniel W. Linna Jr. for his valuable inputs. I would love to hear any thoughts/suggestions/additions/criticisms that you may have!

References:

  1. Nayyar, A., Nguyen, B. L., Nguyen, N. G. (2020). The Internet of Drone Things (IoDT): Future Envision of Smart Drones. In First International Conference on Sustainable Technologies for Computational Intelligence (pp. 563–580). Springer, Singapore.
  2. Are drone swarms the future of aerial warfare? (https://www.theguardian.com/news/2019/dec/04/are-drone-swarms-the-future-of-aerial-warfare)
  3. Federal Aviation Administration, Unmanned Aircraft Systems (https://www.faa.gov/uas/)
  4. US Department of Defense (2012). “Directive 3000.09, Autonomy in weapon systems” (PDF). p. 2.
