The agent’s decision-making

With that clarification of its relations with the outside world, we can turn to the agent’s inner workings: What happens between those inputs and outputs? How is sensing translated into action?

What is needed is some information processing that leads to a decision. To that end, the agent must first compare the outside world (the sensor input about the context) with its purpose, and then discern, from that comparison, an appropriate response.

Taking our anthropocentric selves as the example, we expect that this information processing and decision-making necessarily requires higher brain functions and consciousness. We assume that possible actions are assessed internally and the optimal decision is made before any action is taken. We see the exploitation of pre-existing concepts and knowledge as a natural and essential precondition to being an agent. Counter to that intuition of ours, it is important to acknowledge how little of “what makes us human” is actually required to make decisions. For an agent to comply with the above definition of agency, it does not need to understand the context, or have a sense of self. It does not even need knowledge of the consequential effects that possible actions would have. Rather, it is sufficient to evaluate an action based on its outcome, through new sensor input, and then to adjust the action accordingly. Obviously, such exploration is a risky approach in an unknown, potentially dangerous setting. But it is the only thing you actually can do when you cannot predict future outcomes with some certainty, i.e., when you don’t have a crystal ball. If you don’t like the term ‘improvisation’, try ‘real-time mission planning’ instead.

Exploration and exploitation both deliver an answer to the same question: What is the ‘right’ action to take? They both help make the crucial decision of “What to do next?” But there are fundamental differences between the two.

  • Exploration is the older approach, tried and tested since the beginning of life on Earth. Basically, an explorer acts first and then looks at the outcomes: a good action brought the agent into a better position (relative to its starting point); the agent got closer to achieving its purpose. In this case, the assessment of an action occurs after the fact, and it is external to the agent. Exploration has the significant advantage that it does not require any sophistication of the agent; hence it is rather easy to achieve. But it carries the considerable risk of a potentially lethal next step.
  • Exploitation seeks to minimise such risk by evaluating possible actions in advance, comparing their probable outcomes, and then selecting the best option. Here, the assessment of the action is internal, and it occurs prior to the action. Such an analysis can de-risk decision-making by eliminating potentially fatal options. However, it comes at a price. Exploitation requires massive resources to build, run and maintain a serious information processing capacity, like the vertebrate brain that evolved around 500 million years ago. Such machinery must represent abstract models of the agent, of its environment, and of the interactions between them; what is more, it needs to project those into the future in order to discern viable courses of action. In other words, an exploiter must internalise the world in order to perform this a priori internal assessment of possible actions (a minimal sketch of both modes follows after this list).

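To make the contrast concrete, here is a minimal sketch, purely illustrative: the toy ‘move towards a target’ setting, the names explore_step and exploit_step, and the numbers are my assumptions, not part of the argument above. The explorer acts first and assesses the outcome afterwards through new ‘sensor input’; the exploiter consults an internal model of the world before committing to an action.

```python
import random

TARGET = 10  # the agent’s purpose, reduced to a toy goal (assumed for illustration)

def explore_step(position, actions):
    """Exploration: act first, assess afterwards via the outcome."""
    action = random.choice(actions)       # no internal evaluation beforehand
    new_position = position + action      # the world responds
    improved = abs(TARGET - new_position) < abs(TARGET - position)
    return new_position, improved         # assessment is external and after the fact

def exploit_step(position, actions, model):
    """Exploitation: simulate options internally, then act on the best one."""
    best = min(actions, key=lambda a: abs(TARGET - model(position, a)))
    return position + best                # assessment happened before acting

if __name__ == "__main__":
    actions = [-2, -1, 1, 2]
    model = lambda pos, a: pos + a        # the (costly) internalised world model
    pos_explore, pos_exploit = 0, 0
    for _ in range(12):
        pos_explore, _ = explore_step(pos_explore, actions)
        pos_exploit = exploit_step(pos_exploit, actions, model)
    print("explorer ended at", pos_explore, "| exploiter ended at", pos_exploit)
```

The point of the sketch is not the arithmetic but the placement of the assessment: before the action (inside the agent) or after it (out in the world).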
One might simply conclude that exploration is cheap but risky, whereas exploitation is safe yet expensive. But that summary only passes superficial scrutiny; reality turns out to be a little less simple. Consider the following three exemplary cases:

  • Simple agents explore — The first agents, i.e., the unicellular organisms, only had this one approach: to assess their actions after the fact, externally. This approach served them very well for the first few billion years of the evolution of life on Earth.
  • Complex agents exploit — Vertebrate animals were the first agents equipped with information processing powerful enough to evaluate actions a priori, internally. The vertebrates became the first exploiters.
  • Complex agents explore — Sometimes, even complex agents use exploration. They take that approach when “they cannot think of a solution”, or rather, in less anthropocentric words, when the context’s complexity exceeds the agent’s information processing capacity.

These examples reveal a striking similarity across the two cases of exploration. Notwithstanding the significant differences between the agents themselves, the agents’ relation to the context is the same. In both cases, the agent wrestles with an environment or situation where the range of possible options exceeds the exploitable knowledge and understanding. In both cases, the context’s complexity is beyond the agent’s grasp. This leaves us with only two fundamental cases, based on the context’s relative complexity:

  1. Exploration is the natural approach —without any precondition— when the context overwhelms the agent’s capacity.
  2. Exploitation is the safer approach, provided that the agent is equipped with sufficient information processing capacity to cope with the context’s complexity; however, such capacity is costly to develop, operate and sustain (a crude switching rule between the two cases is sketched below).

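These two cases can be compressed into a single, admittedly crude, switching rule. The sketch below assumes that the context’s complexity and the agent’s processing capacity can be put on a comparable scale, which is a strong simplification of my own, not a claim made above.

```python
def choose_approach(context_complexity: float, processing_capacity: float) -> str:
    """Pick exploitation only when the agent can actually model the context."""
    if context_complexity > processing_capacity:
        return "explore"   # the context overwhelms the agent: act, observe, adjust
    return "exploit"       # the agent can internalise the context: evaluate first

# Illustrative values only:
print(choose_approach(context_complexity=8.0, processing_capacity=3.0))  # explore
print(choose_approach(context_complexity=2.0, processing_capacity=3.0))  # exploit
```
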
Why does this matter? Ever since the Enlightenment, we have cherished rationality. We give preference to the solid evaluation of options prior to making a decision. We became formidable exploiters, but at the same time we came to despise exploration as mere improvisation. Even though we admire successful entrepreneurs and explorers for prevailing against all odds, we maintain considerable reservations about the exploratory approach. It is never our first choice; we do not usually assume that it could work; we need to be pushed hard to even take it into consideration.

This singular focus on exploitation has two mutually reinforcing effects. First, the focus on exploitation promotes a mechanistic worldview: it lets us assume that everything should be knowable, and could come under our control, if only we invested enough time and effort to think harder. Second, by assuming that exploitation is our only approach, we entrench our thinking and try to force the world around us into that narrow mould. As a result, we invest ever more resources in futile attempts at gaining the upper hand over external events beyond our control, instead of building sufficient resilience into our systems and organizations, which would allow us to cope with unavoidable shocks. For organizations in particular, it is vital to judge the evolving context correctly, and to shift from exploitation to exploration swiftly as soon as (ideally before) the external complexity overwhelms an organization’s coping capacity.

We should consider ourselves privileged: We are equipped to choose between exploitation and exploration. We only need to be aware of that choice, and exercise it wisely. We cannot always play it safe; sometimes it is necessary to accept risk.

This is the twelfth in a series of posts on agency and how it matters to innovation.

4 thoughts on “The agent’s decision-making”

  1. Suggesting a higher level of exploration and accepting risk, what would be the threshold or limit of that exploration level and risk to be accepted? AI could be set up to keep “exploring” and take decisions even if there is only a 1% chance of success. What would be the acceptance criteria, what would be the cutoff time, and, last but not least, who makes the call (is it up to the entrepreneur and private sector, or the government setting the governance policy)? Thanks for the great post!

    1. Many thanks for the inspiration, that’s definitely a great set of follow-up questions.

      In general, exploration should be more common, in particular when the risks are low. But with increasing risk (and cost), we need to make ever more deliberate choices. For example, an AI contained in a proper sandbox should be really free to explore, and to make recommendations to a human operator.

      However, an AI actually making decisions is an altogether different thing. If those decisions are ‘only’ about the shuffling sequence in your Spotify playlist, I really don’t mind if that AI is ‘inventive’. But if a decision is about credit ratings, probation periods, or other things of potentially life-changing nature, I want to be sure that such AI is properly trained, thoroughly governed, and carefully employed to deliver a recommendation to a human who makes the ultimate decision.

      All of this needs whole-of-society discussions to build common, widely shared understanding of the risks, the opportunities, and the choices we all need to make.

  2. Very interesting intersection of controversial thoughts. We agree that for decisions of a potentially life-changing nature we want AI to be properly trained and governed according to our standards, policies or ethics, but at the same time we have also agreed that humans as agents rely on five senses and (as per the previous post) that it’s important to recognize that much more is possible than our anthropocentric approach. Since for AI the view of the world is data-centric with so many different parameters, how can we define what is really responsible and ethical in innovation & disruptive technology, if we really want to dare think the unthinkable that will inevitably change the decision-making process as we know it? Playing the devil’s advocate, we want to be in charge because we think we know what is best for us, the “natural agents”, but we also acknowledge the superior capacity of artificial agents to suggest solutions for our benefit based on a different set of senses or criteria. My logic here leads me to a need for a type of independent verification & validation system that is disconnected from the primary artificial agent, and advises on the validity of the approach of the suggested solution or decision, based on preset but adjustable ethics or standards; but this is just a thought. Something that was considered ethical & responsible 100 years ago is perhaps not anymore today, and what we think of as ethical today may be obsolete in our society a few decades from now (or sooner, if we consider the exponential nature of AI application).
    But as you suggest, a whole-of-society discussion is needed for a better common understanding, although again, looking at cryptocurrency, blockchain and Wall Street transactions, I wonder if “risks and opportunities” might have a slightly different meaning in the near future as well.

    1. Many thanks again. These are the types of discussions that need to occur, across the many different stakeholders, across the entire society.

      You are entirely correct that we judge, and can only judge, from the perspective of our own agency. And I agree that it is plausible for AI to have some capacity that is superior to us humans (as is already the case for pattern identification). My question then is: what exactly is the AI superior in? And the answer is: a tiny subset of what makes up human intelligence (which is its own topic of considerable debate). I argue that AI’s partial superiority today should not lead us to assume that AI is already, or could at all become, fully superior to human intelligence. From that vantage point, I draw two conclusions: we continue to bear the responsibility for our own decisions (whether AI-assisted or not), and the limits of AI’s future capacities are the result of our choices (there is nothing inevitable about general AI).

      I agree that some verification & validation will be useful, and your proposal for a ‘four eyes principle’ in AI-assisted decisions is worth some further elaboration.

      And ultimately, ‘risks and opportunities’ are evolving yardsticks, never carved in stone. I would just want to avoid a situation where the ‘opportunities for a few’ are pursued despite the ‘risks for the many’. I.e., BigTech’s profit interest is not good enough to justify undermining civil liberties or democratic institutions.
      The same applies to ethics and values: they are subject to cultural changes. But I would want technology that complies with today’s known values, rather than adjusting our values to comply with tomorrow’s technologies. Which leads to an interesting observation that Henry Kissinger offered in ‘How the Enlightenment Ends’, where he mused about the impact of AI: “The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction.” We are developing a dominant technology that is still looking for a solid philosophical anchor. High time that that discussion gets started.
