There has been a great deal of interest in autonomous systems in the research community. This interest stems from the increased use of unmanned systems such as unmanned air vehicles (UAVs), unmanned ground vehicles (UGVs), and unmanned underwater vehicles (UUVs). The number of unmanned vehicles has grown dramatically in recent years: in 2008 the U.S. Air Force had 5,331 unmanned aircraft, twice as many as manned. Unmanned systems, however, have been around for a very long time, with the earliest attempts occurring in 1916 with A.M. Low’s “Aerial Target.” Most of the time these unmanned vehicles are driven remotely using a joystick and a cockpit-like display, but the demand for increased “autonomy” has grown alongside the fields of robotics and artificial intelligence (AI).
But what do people mean by “autonomy”? A lay person’s definition is inadequate:
- independence or freedom, as of the will or one’s actions: the autonomy of the individual.
- the state or condition of having independence or freedom, or of being autonomous; self-government, or the right of self-government.
- a self-governing community. (Dictionary.com)
Do we really want systems to have independence or freedom? Do we want them to be self-governing? While these are great topics for research, I don’t believe the user community envisions autonomy as systems that are “free” or “independent.” Instead, they are looking for systems that don’t require constant attention or intervention. There is a very important difference between a freely acting intelligent system and a system that simply isn’t a constant attention sink, and this is not a quibble over definitions. It turns out that there are very few domains (possibly none) in which true autonomy is desired. For instance, it would be extremely interesting from a research perspective to create a robotic bird that decides for itself what to do and can refuel itself (perhaps through solar power), but what would it do for the user? You might say it could deliver messages, collect video, or drop payloads, but then is it truly autonomous? Is it deciding for itself what it should do? Would it be acceptable for it to decide that finding a nice sunny spot is better than delivering a payload or shooting the video? These questions may seem silly, because one assumes that decisions of this sort are built into the mission of the autonomous entity. Giving the entity a goal or mission, however, is one way of limiting its autonomy. The user wants the “autonomous” system to do something for them, so they set constraints that limit its freedom to decide.
Yes … I hear your immediate argument: “the system could have a great deal of freedom (i.e., autonomy) in how it carries out its mission.” Perhaps, but this is where research hits reality head-on. Every real-world situation is different and nuanced. Even humans are briefed extensively before taking on a mission, and once that mission starts it can change quickly, requiring fast reactive thinking and communication with leadership. In a real-world situation most humans still have to weigh survival against mission objectives. There also remain decisions that no one wants to cede to autonomous systems. Consider an autonomous Predator UAV (they are not autonomous today) or an autonomous UUV. Do users want the “autonomous” entity to make the decision to fire the weapon (missile or torpedo)? Such decisions are made fairly high up in the chain of command.
So for most practical purposes autonomy is limited and should be constrained. I suggest that we redefine autonomy as something closer to what users can actually use: an autonomous system is one that does not require constant user attention, follows a detailed mission or plan, is aware of its situation, can react to that situation, and communicates with decision makers when reality diverges from the plan. Discovery Machine provides a methodology and tools that enable users to rapidly define the constraints for systems that need to act autonomously in this limited sense. We also provide these systems with communication and reactive capabilities for when autonomy breaks down. This lets users reduce their attention and limit their intervention. And since every mission is unique, the system enables users to tailor the decision making (including reactions) to deal with situations specific to that mission.
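To make that definition concrete, here is a minimal, hypothetical sketch of such a limited-autonomy loop: the system follows a user-defined plan, checks its situation at each step, and escalates to a human decision maker rather than deciding alone. The names (`Supervisor`, `MissionStep`, `situation_ok`) are invented for illustration and do not represent Discovery Machine’s actual tools.

```python
from dataclasses import dataclass, field

@dataclass
class MissionStep:
    name: str
    # A constraint the user sets in advance, limiting the system's freedom.
    requires_human_approval: bool = False

@dataclass
class Supervisor:
    plan: list
    log: list = field(default_factory=list)

    def escalate(self, step, reason):
        # Communicate with decision makers instead of deciding alone.
        self.log.append(f"ESCALATE {step.name}: {reason}")

    def execute(self, situation_ok):
        # Follow the detailed plan; react when the situation diverges.
        for step in self.plan:
            if step.requires_human_approval:
                self.escalate(step, "approval required")
            elif not situation_ok(step):
                self.escalate(step, "plan diverged")
            else:
                self.log.append(f"DONE {step.name}")
        return self.log

plan = [
    MissionStep("transit"),
    MissionStep("collect video"),
    # Weapon release is never delegated to the system.
    MissionStep("fire weapon", requires_human_approval=True),
]
log = Supervisor(plan).execute(situation_ok=lambda step: True)
for entry in log:
    print(entry)
```

The point of the sketch is the division of labor: routine steps run without constant attention, while anything outside the user-set constraints is pushed back up the chain of command.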