The path(s) to artificial agency

Across the categories of organizations, technologies, and socio-technical systems, we can identify three principal paths to artificial agency. Ultimately, they all originate from human agency; but the degree of human deliberation varies considerably.

Willing consent — There are situations in which we positively intend to create an artificial agent to execute a task on our behalf. These deliberate cases include founding a company, building a robot, or entering into a smart contract. Here, we knowingly endow an artificial entity with a defined level of agency: we have a clear purpose in mind, we consider the range of decisions it could make, and we carefully evaluate the actions it could take and the impact those might have. Yet even in these cases, we do not retain full control over all possible consequences, many of which are unforeseeable anyway.

Blind faith — The designers’ plans are not always fit for the users’ reality. Or rather, the users will not always act according to the designers’ explicit predictions and implicit assumptions. Take our unhealthy relationship with algorithms as a case in point. The users render themselves the willing executioners of algorithmic recommendations, in utter disregard of the designers’ assumption that the users would critically evaluate the algorithm’s proposals before acting on them. The users’ accidental failure (or convenient refusal) to exercise their oversight responsibilities empowers the algorithm. Through the users’ blind faith, the artificial agent obtains direct access to the full scope of human agency.

Runaway complexity — Our socio-technical systems are complex, blurry blends of many agents; they comprise people, organizations, and technologies. The multitude of interactions and feedback loops between them catalyses the dynamic ‘meeting and mating’ of people’s ideas, established organizations, and novel technologies. As existing interactions and feedback loops strengthen and new ones form, the level of agency can increase; as new components are integrated, entirely novel agency can emerge. Struggling with the unforeseen consequences, we issue regulations after the fact to limit the further proliferation of these novel agents. Alas, we cannot ‘un-create’ our own creations; hence our containment strategies can only have limited success.

The powers that we entrust to our artificial agents can be, or become, enormous; yet our attention to that fact is critically underdeveloped. To sum it up in a single phrase: “We invest little concrete, intentional planning now, and resort to much haphazard improvisation later.” First, there is intent: the designer’s choice to deliver a tool that provides maximum utility. Then, there is neglect: the user’s choice to ignore their responsibilities in order to maximise personal convenience. Finally, there is emergence: driven by unexpected connections, applications, and exploitations, novel agency can arise beyond our immediate choices as designers or users, and it is often difficult to rein in once it has taken shape. Nevertheless, these unplanned outcomes do not relieve us of the fundamental responsibility we bear for our creations.

This is the sixth in a series of posts on agency and how it matters to innovation.

What's your view?
