We use technologies to interact with our physical environment, essentially to obtain energy and raw materials and to transform them into “whatever it is that we need”. Until recently, technologies were firmly “under our thumb”: they served our immediate purpose. Devised and employed as ‘dumb’ tools, they extended their user’s agency; but they did not have agency of their own. As we introduce increasingly ‘smart’ technologies and tools, that historic certainty is fading. It is fading fast, though not in any controlled or predictable way. In that emerging story, robots, algorithms, and Artificial Intelligence feature as prominent examples.
Autonomous systems
Robots and autonomous systems in general are the poster children of artificial agents. They are designed with the explicit goal of giving them “some freedom of action, within certain constraints”. We positively want them to take over the dirty, dull, and dangerous tasks. Hence we deliberately equip them with sensors (for them to read their surroundings), with motors, joints, and actuators (for them to interact with and manipulate their environment), and with a little electronic brain that is pre-programmed for any conceivable situation (for them to select and initiate the most appropriate action). Take an early example: the manufacturing robots in the automotive industry. They were stationary and carefully segregated from their human co-workers. Without direct contact with the human world, the robot “worked in its own bubble”. That isolation placed a firm limit on the robot’s artificial agency, and at the same time it insulated humans from potential accidents.
Today, mobility is key, including for autonomous systems. Navigating the real world safely and reliably requires many more sensors, additional actuators to move around and manoeuvre through congested spaces, and much more processing power for the plethora of unpredictable situations to be mastered. Now we have to come to terms with one another: robots must ‘learn’ humans; humans must ‘learn’ robots. We have two principal approaches to that learning, as the case of autonomous cars illustrates. The ‘out in the wild’ scenario releases the car into everyday traffic and expects it to navigate any possible situation relying solely on onboard means. This scenario does not constrain the road to travel; but in return for this liberty it requires highly sophisticated onboard sensing and information processing capacities: the beast is free to roam to its heart’s desire, but it is left to its own devices. In contrast, the ‘play it safe’ scenario allows the car to operate only within a limited environment (consider a test site or a smart city), where additional sensors and transmitters fixed along the roadside provide crucial navigation support. This scenario imposes severe limitations on the roaming range of the car, offering considerable offboard support in return: the demand on the car is reduced, as is its ‘freedom of movement’.
By comparing these two idealised scenarios, we can draw further insights into the nature and characteristics of agency.
- There are different levels of agency. The ‘wild’ vehicle aspires to the agency of an independent human driver. This vehicle has more built-in autonomy than the ‘safe’ vehicle, which depends on external support. In technical jargon, paradoxically, the ‘safe’ vehicle is said to show the higher degree of autonomy. To be precise, in the ‘safe’ scenario, the vehicle succeeds only with the support of the surrounding infrastructure. While the ‘safe’ system as a whole (vehicle plus infrastructure) is superior to the ‘wild’ vehicle, the ‘safe’ vehicle on its own is inferior. This points to a second observation.
- There are lines to be drawn and trade-offs to be made when developing artificial agents. In the case of autonomous vehicles, the location of critical functions (onboard or offboard) decides the overall system layout and how much agency resides in the vehicle itself; this in turn is driven by considerations like sensing capacity, required processing power, data link bandwidth, energy constraints, size, weight, and cost (the sketch after this list makes the split concrete). These lines must be reviewed and adjusted over time, as technologies evolve and mature: even the ‘wild’ vehicle today relies on external support for satellite navigation, and cloud-based data processing and decision-making are an area of intense research and development. Such lines exist not only within the artificial agent, but also between the artificial agent and the human, which leads to a third observation.
- Ultimately, artificial agents are devised to serve a human purpose. Hence the success of human-machine interaction crucially depends on the trade-off between human and artificial agency. This involves designers and engineers of course, but in equal measure the regulators who define the overall safety regime. In the ‘safe’ setting, humans give away some of their freedom for the technology to deliver to its full potential. Humans adapting to technology, as an exception to the default of technology being adapted to human needs, should always be a deliberate choice, following conscious consideration of pros and cons. After all, we give up a part of our own agency and turn it over to an artificial agent. Neither simple convenience nor sheer negligence should count as sufficient justification. Neither in the design nor in the use of technology.
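To make the onboard/offboard split tangible, here is a minimal sketch in Python. It is purely illustrative and not drawn from any real vehicle architecture; all class and field names (OnboardSensors, RoadsideInfrastructure, and so on) are invented for this example. The point is structural: the ‘wild’ decision function depends only on what the vehicle carries, while the ‘safe’ decision function cannot complete without the infrastructure.

```python
# Illustrative sketch only: where a driving decision draws its inputs from
# determines how much agency resides in the vehicle itself.
# All names and values here are hypothetical.

class OnboardSensors:
    def read(self):
        # Cameras, lidar, radar: everything the 'wild' vehicle must rely on.
        return {"obstacle_ahead": False}

class RoadsideInfrastructure:
    def read(self):
        # Extra data the 'safe' vehicle receives from fixed roadside sensors.
        return {"intersection_clear": True}

def decide_wild(onboard):
    # 'Out in the wild': the vehicle masters every situation on its own.
    data = onboard.read()
    return "brake" if data["obstacle_ahead"] else "proceed"

def decide_safe(onboard, roadside):
    # 'Play it safe': offboard support reduces the demand on the vehicle,
    # but the vehicle alone can no longer make the full decision.
    data = {**onboard.read(), **roadside.read()}
    if not data["intersection_clear"]:
        return "wait"
    return "brake" if data["obstacle_ahead"] else "proceed"

print(decide_wild(OnboardSensors()))                            # self-sufficient
print(decide_safe(OnboardSensors(), RoadsideInfrastructure()))  # infrastructure-dependent
```

Note that the two decision functions could implement identical driving logic; what differs is where the agency sits, which is exactly the line designers and regulators have to draw.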
Algorithms
Yet that is exactly what happens in our growing affection for algorithms. Whether simple spreadsheets or massive computer programmes, algorithms should help us digest available data and offer recommendations that assist our human decision-making. However, as the mathematician Hannah Fry illustrates in ‘Hello World: How to Be Human in the Age of the Machine’ [Transworld Publishers, 2018], we harbour a dangerous readiness to ascribe authority to our algorithmic tools. She presents a staggering set of examples, ranging from mildly amusing to plain shocking, that showcase how readily we accept an algorithm’s recommendation and implement it directly, without any further scrutiny. De facto, we let the algorithm take the decision and lend it our own agency to execute that decision. We hand over the keys to the kingdom without even blinking. Through this blind faith, we do not simply endow an algorithm with some degree of artificial agency, we hand it our full human agency. We emasculate ourselves and become the algorithm’s obedient tool. If that sounds to you like the tail wagging the dog, I will not disagree.
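The difference between an algorithm that assists and an algorithm that effectively decides can be shown in a few lines. Here is a minimal sketch, with invented function names and a made-up scoring rule; it is not taken from Fry’s book. The algorithm is identical in both cases; what changes is whether a human reviews its output before it is acted upon.

```python
# Illustrative sketch: the same recommendation, with and without human scrutiny.
# 'recommend' and 'execute' are hypothetical placeholders.

def recommend(data):
    # Some scoring rule distilled from the data; the details do not matter here.
    return "approve" if data["score"] > 0.5 else "reject"

def execute(action):
    print(f"executing: {action}")

def assisted_decision(data):
    # The algorithm advises; a human retains agency over the decision.
    suggestion = recommend(data)
    answer = input(f"Algorithm suggests '{suggestion}'. Accept? [y/n] ")
    if answer.strip().lower() == "y":
        execute(suggestion)

def blind_decision(data):
    # The pattern warned about above: the recommendation is executed
    # directly, and the algorithm has de facto been handed our agency.
    execute(recommend(data))

blind_decision({"score": 0.7})
```

The two functions differ by a single `input()` call, which is precisely the point: removing that one checkpoint quietly converts a tool into a decision-maker.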
Artificial Intelligence
Looking into the near future, Artificial Intelligence (AI) is the obvious elephant in the room. What agency will it have? How much agency can it have? Judging by the prophecies of the imminent arrival of Artificial General Intelligence (AGI), we are bound for a future in which AGI overlords will decide humanity’s fate. I challenge the inevitability assumed in such bold claims. By itself, AI is just another piece of information technology, without direct recourse to the physical world. Of course AI is a powerful tool: it can find patterns in large data sets, it can extrapolate trends from available data to make predictions, it can recommend decisions based on its reading of the data. Still, it needs somebody or something else to take action: AI must play in a team to have impact. By itself, AI is like a brain without a body: unable to do anything. Granted, AI will lift autonomous systems to higher levels of agency; AI will be a vital ingredient of more effective human-machine teams; the Internet of Things will tremendously increase the amount of available data; all of that will happen. Nevertheless, how much agency we endow AI-enabled tools with is our own choice: as designers, and as users.
Cyber
Then there is the notoriously difficult-to-grasp cyber domain: pure code, electrons zipping around, ’living’ in our information technology infrastructure. Its all-pervading nature seems to defy our three-dimensional existence, and its lightning-speed action is staggering, if not downright overwhelming. But we do not have to dive into the deeper structures of internet hubs and underwater cables to understand what agency can emanate from this essentially bodiless domain. A fairly simple concept will suffice: a smart contract. Once written and encoded on a blockchain, it operates fully automatically, without moderation, without hesitation: when conditions are met, execution will follow; plain if-this-then-that, no second ‘thought’, no mercy. A smart contract acts without any external interference, supervision, control, or override option. Once started, it cannot be stopped, even if the signatories change their minds. We load the gun, aim, and fire in one single moment, and then hope that the target won’t move later. This razor-sharp sword is truly double-edged: we give it agency over us and we deprive ourselves of any agency over it. Because we exclude later corrections, we force ourselves to get our creation right the first time. This raises the vital question of which types of decisions should be entrusted to such an ‘unforgiving executioner’.
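The ‘if-this-then-that, no override’ character of a smart contract can be caricatured in a few lines. This is a conceptual sketch in Python, not actual blockchain code (real smart contracts are typically written in languages such as Solidity and deployed on-chain); the class and method names are invented for this illustration. What matters is what is deliberately missing: there is no cancel method and no owner who could intervene once the contract is deployed.

```python
# Conceptual caricature of a smart contract: once 'deployed', it only
# checks its condition and executes. There is intentionally no cancel()
# and no override; all names and logic here are illustrative.

class SmartContract:
    def __init__(self, condition, action):
        self._condition = condition   # fixed at 'deployment'
        self._action = action         # fixed at 'deployment'
        self._executed = False

    def on_new_block(self, state):
        # Called automatically as the chain advances; no human is consulted.
        if not self._executed and self._condition(state):
            self._action(state)       # plain if-this-then-that, no mercy
            self._executed = True

contract = SmartContract(
    condition=lambda s: s["price"] >= 100,
    action=lambda s: print("transfer funds"),  # runs even if we now regret it
)
contract.on_new_block({"price": 105})
```

Everything the signatories will ever be able to say is said in the constructor; after that, the code alone holds the agency.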
Synthetic biology
One last field to tackle is biology; more specifically, how we put natural agents to human use. It started with the Agricultural Revolution and the domestication of plants and animals to supply food, raw material, and labour. We have been using more sophisticated biotechnology for millennia, exploiting micro-organisms to process food (dairy products come to mind, or beer brewing). In the 20th century we began to tailor the capabilities of such micro-organisms through genetic manipulation, e.g., to produce human insulin. Today, synthetic biology promises to take the design of organisms to an entirely new level. Applications include medicine, biofuels, waste recycling, and self-healing materials; the list goes on, and the potential benefits are assumed to be enormous, while serious ethical concerns are yet to be addressed.
Convergence
The development of bio-inspired agents occurs at the intersection of nanoscience, molecular biology, information technology, and cognitive science. This nano-bio-info-cogno convergence will not produce challenges that are new in principle. But with their microscopic size, as-yet unseen levels of functional integration, and superior energy efficiency, bio-inspired agents will make our current mechanics-inspired designs look unbelievably crude. These characteristics, together with questions about biological proliferation processes, will aggravate the already existing challenges of meaningful control over our artificial agents. Therefore bio-inspired artificial agents, even more so than AI-enabled tools, will put our collective resolve to the test: can we (as designers, as users, as regulators, as societies) clearly define the limits of acceptable use? And are we willing and able to guard that perimeter unrelentingly?
This is the fourth in a series of posts on agency and how it matters to innovation.