Rethinking trust – in technology

We are engulfed in technology in our homes, at our workplaces, in our spare time: every moment of our everyday lives. We rely on technology so much that the question of whether we trust it doesn’t easily come to mind. And there was little need to ask that question – until recently. But with information technology pervading literally every aspect of our lives today, we cannot evade that question any longer.

Our relationship with technology evolved gradually, and even today it is still largely shaped by the Industrial Revolution. In this mechanistic view, technology is a simple tool that serves a single purpose; you can touch it, you can own it. Think about a hammer, a bicycle, a car, or even a pocket calculator. What these examples have in common is a direct physicality: you use them directly with your hands to achieve a specific goal, and you still have a chance to understand the basic physics that make these tools do their job. In this view of technology – and for these types of technologies – we didn’t have to think about trust; precisely because of that very direct relation, we knew essentially how the tool worked, so trust was never in question.

We have to ask ourselves, however, whether this idea of technology is still sufficient in the 21st century. For example, what does it mean if Google Translate develops its own internal meta-language? That’s no doubt a fascinating technological achievement, something that no human could ever master. But it’s also something that no human ever asked the machine to do. Hence it raises some fascinating questions about our future relation with technology: Why do we need to change our perspective? And how do we then deal with technology?

So what is different today compared to, say, thirty years ago? Up until the mid-eighties, the majority of technologies were of the directly physical kind I described above. But then information technology entered the mainstream, and a new class of tools gradually took over ever larger parts of our daily lives. For these multi-purpose, reprogrammable tools, the tangible components (keyboard, mouse, and screen) are only the interfaces through which you interact with them, while most of the underpinning technology is removed from sight (and touch). The internet is certainly the prime example of this type of technology.

This increasingly indirect use goes hand in hand with the unprecedented complexity of technology: component parts can be recombined for new tasks, and can even recombine themselves, as Google Translate so impressively illustrated. Our big challenge is that we lost sight of technology at exactly the time that its fundamental characteristics shifted from simple to complex, from single-purpose to reprogrammable, from tangible to difficult to grasp. If we want to maintain some control over the impact of technology use, we need to overcome that outdated mechanistic view; we need to comprehend – and embrace – the full complexity of human-made technologies.

I’ll sketch four types of interactions between humans and technology, which gradually lead us away from our conventional mechanistic comfort zone and into the uncharted territory of trust in technology. These four types may well coexist in reality; they do not represent discrete steps of an evolution, in which each step must be completed before “the next level” becomes accessible. But for illustrative purposes it’s easier to address them one by one, introducing each through a guiding question.

Does it do what it is designed to do?

This initial question is simply about complexity (no pun intended). As technology becomes ever more complex, how do we make sure that a novel product or service does what it is designed to do? How can we validate and verify that it will work according to plan, in all its interactions with users and with other existing products and services? In the mechanistic view, technology was at best complicated, and that still allowed us to test a new product thoroughly, under any conceivable circumstances, to ensure its desired functionality.

But for truly complex technology? Take the example of complex software code, which is the heart, the soul, and the backbone of most of Silicon Valley’s products. It seems that we’ve already given up on thorough testing in the lab, and instead leave it to the customers to report bugs, and ideally to suggest or even develop fixes. That might be a viable business model for companies; it might even be acceptable for most customers (though not for me); but whatever the rationale, this certainly marks a fundamental change in technology philosophy on two fronts. As a producer, you release a product to your customers even though you know that it does not meet all their requirements. As a customer and user, you no longer expect complete, guaranteed functionality. That objective loss in reliability causes a subjective loss in trust; that new producer mindset instills user doubt in technology. Is that what we want?

Who decides what it does?

You might expect that once a product is manufactured, its user decides what it does, within the immutable design specifications. At least, that was the case in the past. Now, the example of Tesla and Hurricane Irma tells a very different story: Tesla provided owners of their Models S and X with temporarily increased battery capacity – via a remote software update – targeted specifically at residents in the Florida evacuation area. The additional range allowed these drivers to leave the Sunshine State more easily, before Irma made landfall.

Ignore the hype and the positive press for a second, and think about what Tesla really demonstrated with their decision: a manufacturer retains far-reaching control over essential performance characteristics, even after the product is sold, and is ready to exert that power without even asking the users. In this specific example, the intent was undoubtedly positive. But what if this power is used under contested circumstances, e.g., if user and producer disagree about terms of use? Is the producer then simply going to lock out the user (usually also the legal owner)? That could easily qualify as blackmail, but it is – technically – possible, thanks to the connectivity, software power, and software dependence of modern cars. And it’s not limited to the automotive sector. Think about smart homes and all the other hyper-connected services and applications that are being promoted as we speak.
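To make that mechanism a little more concrete, here is a minimal sketch of what a software-limited product characteristic might look like from the manufacturer’s side. It is purely illustrative: the class, the method names, and the numbers are invented, not Tesla’s actual implementation.

```python
# Minimal, invented sketch of a software-limited product characteristic.
# Nothing here reflects Tesla's real systems; names and numbers are hypothetical.

class BatteryController:
    """Exposes only as much capacity as the current software policy permits."""

    def __init__(self, physical_kwh: float, licensed_kwh: float):
        self.physical_kwh = physical_kwh    # what the hardware can actually deliver
        self.licensed_kwh = licensed_kwh    # what the software currently permits

    def usable_kwh(self) -> float:
        # The driver never sees more than the licensed share.
        return min(self.physical_kwh, self.licensed_kwh)

    def apply_remote_update(self, new_licensed_kwh: float) -> None:
        # Invoked from the manufacturer's backend, not by the owner:
        # the same channel can raise the limit (as before Irma)
        # or lower it (say, in a contested terms-of-use dispute).
        self.licensed_kwh = new_licensed_kwh


car = BatteryController(physical_kwh=75.0, licensed_kwh=60.0)
print(car.usable_kwh())          # 60.0 – the capacity the owner bought
car.apply_remote_update(75.0)    # pushed over the air, without asking the owner
print(car.usable_kwh())          # 75.0 – the full physical capacity, for a while
```

The point of the sketch is not the few lines of code but the asymmetry they encode: the update method is reachable by the manufacturer, not by the user.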

In general, we witness the emergence of a new relation between manufacturer and user. Previously, you engaged with the manufacturer only once, when you paid for the product before you drove or carried it home. Now, you enter into an enduring relation with the manufacturer, who may roll out updates, upgrades, or additional services over time. That raises many questions. What’s the cost model – do I pay in advance as a flat rate, per update, or per use? What’s the customer’s choice – can I reject such updates, and what would be the consequences? What’s the customer’s influence – can I ask for specific updates or improvements? And which access and control does the manufacturer retain – how much of the product do I own and control as a user?

It’s too early for definitive answers. What is clear already is this: the technology a customer acquires is not fixed or final at the time of purchase; it can change over time – based on manufacturer decisions and actions. Whether that’s a revolution, an evolution, or a reversion: time will tell. As customers, owners, and consumers, we should carefully watch the development and make conscious choices: not every upgrade is necessary, or useful, or worth its price.

Does it work for me?

Autonomous agents are a new class of tools that is spreading fast and wide. Think about computer algorithms, essentially software, devised to support humans in making decisions. That sounds innocent enough, until you get to the kind of decisions that are supported by these tools. As an everyday consumer, you will rarely use these autonomous agents yourself. But it is increasingly likely that they are used on you, that you are judged by them, often without your knowledge, let alone your consent.

There are already many examples of autonomous agents supporting human decision making; not just in the labs, but in actual use. Banks and insurers employ them in granting loans, offering insurance coverage, or determining premiums. Companies use automated analysis of video-recorded interviews for hiring new talent. Security forces rely on them for facial recognition in CCTV imagery, or for identifying the neighbourhoods where police presence should be increased (predictive policing). Judges obtain advice on whom to grant parole. And medical diagnostics is just emerging as another promising field of application.

Despite the broad range of tools and purposes, and whether you like them or not, these autonomous agents have many characteristics in common. They are employed without you being asked, and often without you knowing, to help somebody else take a decision about you. Usually that’s an assessment of a risk that you might pose (banking, insurance, security), an opportunity you might present (human resources), or a risk you might be under (medical). All these decisions are marked by complexity and uncertainty. Hence they are naturally considered difficult, and it’s only logical that humans seek advice. And that’s where Artificial Intelligence (AI) is attracting ever more attention.

However, AI poses an entirely novel problem. AI systems are adaptive, in part they programme themselves, and they have evolved beyond the understanding of their designers. That’s the “dark secret at the heart of AI”, as Will Knight phrases it:

We’ve never before built machines that operate in ways their creators don’t understand.

The widespread use of AI presents a host of serious questions about our relation with this type of technology; and these are even more pressing due to our lack of insight into AI’s inner workings. These questions broadly fall into two categories:

  • Taking decisions and taking responsibility – It’s only a small step from the theory of algorithmic decision support to the reality of algorithms taking actual decisions. All that’s required is an uncritical or stressed-out operator who stops exercising their own judgment and simply executes the algorithm’s proposal. If your job is to take one such decision a week, you still have the time to make up your own mind. But once you have fifty cases a day, you will feel forced to rely on the recommendations you get. Are we comfortable with that? Out of convenience? Only in specific circumstances? And who is held accountable? The designer of the algorithm? The company integrating the AI system? The human operator of the tool? The organisation using and owning it all?
  • Knowing how AI works and knowing when it’s used – Current AI was developed to “get the job done”, but not to “explain how it does the job”. I firmly believe that we need to be able to peek inside the AI black box, to understand how an AI system arrived at its conclusions (the sketch after this list illustrates the kind of explanation I mean). That’s what John Weaver expresses in “AI owes you an explanation”. Take for example the European General Data Protection Regulation (GDPR) that will come into force on 25 May 2018. This regulation constitutes a binding law that enshrines the consumer’s right to know when, where, and how personal data are used by autonomous agents. The GDPR covers the personal data of all residents of the EU, regardless of where the actual data processing takes place. Compliance with this regulation will force developers of AI around the globe to make the inner workings of their AI transparent. That’ll be a significant step forward – at least for European consumers.
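To make both bullets a bit more tangible, here is a minimal sketch of an algorithmic decision aid: a weighted score yields a recommendation, and the per-feature contributions are the raw material for the kind of explanation the GDPR debate is about. The features, weights, and threshold are invented purely for illustration; no real bank or vendor works exactly like this.

```python
# Invented sketch of algorithmic decision support plus a simple explanation.
# Features, weights, and the threshold are illustrative, not any real scoring model.

WEIGHTS = {"missed_payments": 0.5, "debt_ratio": 0.3, "years_employed": -0.2}
THRESHOLD = 0.4   # above this score, the tool recommends rejecting the application

def recommend(applicant: dict):
    # Per-feature contributions: the raw material for an "explanation".
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "reject" if score > THRESHOLD else "approve"
    return decision, contributions

decision, why = recommend({"missed_payments": 2, "debt_ratio": 0.8, "years_employed": 3})
print(decision)   # the recommendation the human operator sees
print(why)        # which factors drove it – whether anyone still reads this line
                  # is exactly the "fifty cases a day" question from the first bullet
```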

All of the above considerations tackle single autonomous systems and their interactions with humans. But that is still a fairly simple case. A group of autonomous systems, a team or swarm, is something entirely different.

Do they work together for me?

What if autonomous agents interact directly and the result of their collaboration affects you? That’s obviously a fundamentally different situation from an AI system interacting with you in person, where you could still expect to maintain some influence on the outcome. How comfortable would you be knowing that a “negotiation” between AIs influences your life – and that they didn’t really ask your opinion?

That’s probably too abstract a question. Looking for a more tangible example, think about negotiation bots: the idea that two or more (human) parties task their respective autonomous agents with negotiating a deal that is then binding for the humans. That’s technically already achievable, and it seems compelling. Think about two cars quickly negotiating the right of way at an intersection, e.g., when the traffic lights are out of order. If your autonomous car took that burden from you, you would probably welcome it. Or think about the smart grid, where many providers and many consumers need to negotiate the price of electricity. For humans it’s practically impossible to grasp and take into account the supply and demand at any specific moment in time. Negotiation bots could achieve that and agree on a fair price even in such a complex, high-speed market.
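To illustrate how little of such a deal the humans would actually see, here is a minimal sketch of two negotiation bots settling a price by alternating concessions. The reservation prices and the concession rule are invented for illustration; real negotiation agents are far more sophisticated.

```python
# Invented sketch of two negotiation bots agreeing on a price.
# Reservation prices and the concession rule are illustrative only.

def negotiate(buyer_limit: float, seller_limit: float, step: float = 1.0):
    """Alternating concessions: each side moves by a fixed step until the offers cross."""
    buyer_offer, seller_offer = 0.0, 2 * seller_limit
    while True:
        buyer_offer = min(buyer_offer + step, buyer_limit)       # buyer raises its bid
        seller_offer = max(seller_offer - step, seller_limit)    # seller lowers its ask
        if buyer_offer >= seller_offer:                           # offers crossed: deal
            return (buyer_offer + seller_offer) / 2
        if buyer_offer == buyer_limit and seller_offer == seller_limit:
            return None                                           # limits never overlap: no deal

# The humans only see the outcome, not the haggling in between:
print(negotiate(buyer_limit=30.0, seller_limit=25.0))   # 25.0 – a price both bots accept
print(negotiate(buyer_limit=20.0, seller_limit=25.0))   # None – no agreement possible
```

Scale the same pattern up to thousands of smart-grid participants negotiating every few seconds, and it becomes clear why no human could follow, let alone veto, each individual agreement.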

But what about complex political debates like the Paris Climate Accord? Is that something we could imagine? Try and picture that situation: individual nation states would entrust their future fate to the skills and competences of an AI system, and those states would then necessarily abide by the outcome negotiated by their bots on their behalf. Is that an application for negotiation bots that we should desire? At the very least, that presents the ultimate test for our trust in technology. And let’s not forget that the above questions about taking decisions, taking responsibility, and knowing how AI works, still need to be resolved.

What’s next? 

The big question about our trust in technology is: How will it evolve in light of the transformative power of the digital revolution? Given that key characteristics of technology itself have changed from statically defined and tangible-physical to reprogrammable and information-driven, we must adjust to a new, and still emerging, reality. But neither techno-hype nor technophobia is justified; neither blind faith nor irrational fear will give us reasonable orientation.

And we cannot simply close our eyes and hope for fair winds and smooth sailing, either. Rather, if we continue our exploratory journey pursuing the opportunities of digital technologies such as Artificial Intelligence, we’ll have to be mindful of currents, of undersea rocks, and of tempestuous weather — and navigate them carefully. For such risk management, societies around the globe have traditionally resorted to either an a priori approach or an a posteriori approach.

  • The a priori approach seeks to avoid potential risks. This precautionary principle demands evidence that a new product or service complies with existing regulations before that product or service enters the market. Such regulations are intended to ensure that no harm could arise to individual customers, to society, or to the environment. This approach is preferred in legal systems based on codified law, and is widely applied in continental Europe and the European Union.
  • The a posteriori approach accepts potential risks and deals with consequences as they arise. It assumes that any new product or service will be designed and delivered in such a way that harm to individuals, society, or the environment is effectively avoided. If concrete damage does arise and can be attributed to a specific product, the originator of that product is held liable and charged to compensate for the damage. This approach is more compatible with a case-law system, as in the United States.

Today, the Digital Revolution is driven by technology and the industry promoting it. We let technological developments extend the art of the possible and push the boundaries, for example of AI applications. The role of society is reduced to that of a collective of consumers. In the absence of any evident catastrophic failure of digital technologies, society has so far not exercised its regulatory and law-making function. That approach seems entirely coherent in Silicon Valley, where can-do technology culture and case law go hand in hand. However, from a European perspective, such an unconstrained technology push presents a challenge, to say the least.

In this specific regard, the Musk–Zuckerberg controversy on AI is remarkable. Elon Musk is quite outspoken about the potential risks of unbridled AI, and in his call for regulatory control of AI he takes a position that is a lot closer to European than to American sensibilities. Granted, the European GDPR might appear excessive and an over-reaction from a Silicon Valley perspective; on the other hand, the hope that industry all by itself will generate only beneficial results for society strikes many Europeans as nonchalant or even naïve.

Either way, each of these extremes is equally unreasonable and irresponsible. We’ve got to discuss all aspects of the Digital Revolution and the emerging socio-econo-techno-sphere: not only the technological developments and the economic expectations, but also the cultural implications, the societal effects, and the appropriate legal action to define the confines within which we trust digital technologies in the future.

 


This post is the third of a short series on how we might rethink society, or rather, rethink some of the core concepts our society is built upon. Previous posts addressed ownership and trust in humans; the following post will focus on value. The need and opportunity for such a fundamental re-orientation arise from a cluster of disruptions emerging all around us: in politics, economics, Big Business, energy, and production. These disruptions are themselves driven by megatrends like urbanization, globalization, digitization, and decentralization.

Comments

  1. Thanks a lot for the great post. On the question “how do we make sure that a novel product or service does what it is designed to do?”: in my view, when we are talking about high-tech products, we simply cannot make sure anymore. Digital technology in particular has advanced so far beyond our capacity as human beings to critically assess the reasoning and the consequences of the end result in a timely manner that it is impossible for us to be an active part of this process anymore. In today’s modern cars, for instance, the autonomous systems and AI perform processes reliably at such speed, and then self-evaluate their performance to make sure it stays within the limits set by the factory, that neither the driver nor even the car engineer can compete with the machine in testing it and making sure it does what it is designed to do. Almost a decade ago, as I was troubleshooting the electrical/armament system of the Apache helicopter, I was able to plug in a computer and interact with the aircraft’s data-bus system through encoded messages; I had to convert a hexadecimal number into binary code, which I then cross-referenced against a list in the technical manual to evaluate the performance of the system or a possible malfunction. Today I read that an F-35 test pilot struggles to carry out, as a human, an objective evaluation of a system that recreates its own knowledge and takes decisions autonomously based on calculations that far exceed his capacity.
    So we end up using, and consequently trusting, machines to test and evaluate other machines, because the technological complexity and digital processing power leave us (the creators) so poorly placed to ensure the scope and purpose they were designed to serve. Inevitably, the questions “who decides what it does” and “does it work for me” have equally difficult answers. Therefore, I would start with a more fundamental question related to technological ethics and responsible innovation: How do we set the social, humanistic, and materialistic criteria for a certain design of a novel product? Because once it is designed and produced, it is rather late to ask, and then we can hardly be ahead of the game as creators.

    • Thanks for sharing your thoughts, and for another very good example of how far we’ve already come: we simply “need to trust” today’s complex technology. It seems that we have already given up on understanding the tools we have created. That is deeply concerning and leads to a whole heap of challenging questions.

      Of course we cannot change the past; but we can and must learn from it to do better in the future. The critical consideration is technology’s path-dependence: today’s technology is built on the past (at least in part), and will inevitably propagate into the future (again, in some part). These far-reaching consequences should force us to make our choices consciously and carefully.

      Let’s assume that we are beyond the point of knowing how our technology works (taking complex IT systems as an example). Then – as you suggest – the question is: How do we still exert some level of control over that technology? In the past, we did that through closed-loop control inside the technology. Today, having lost that option, we are left with open-loop control: hoping that the designer gets it about right, and trusting that the technology will only act according to plan. Which means we’ve lost the ability to control technology from within, and have not yet created effective alternative means. That’s troublesome, to say the least.
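      Just to make the control-theory vocabulary concrete, here is a toy sketch (purely illustrative, not tied to any specific technology): closed-loop control measures the outcome and corrects for deviations, while open-loop control applies the plan and trusts that it was right.

```python
# Toy illustration of closed-loop vs. open-loop control (nothing product-specific).

def open_loop(setpoint: float, steps: int) -> float:
    """Apply the planned input and hope: no measurement, no correction."""
    state = 0.0
    for _ in range(steps):
        state += (setpoint / steps) * 0.9   # an unmodelled 10% loss goes unnoticed
    return state

def closed_loop(setpoint: float, steps: int) -> float:
    """Measure the remaining error after every step and correct for it."""
    state = 0.0
    for _ in range(steps):
        error = setpoint - state            # feedback: compare outcome with intent
        state += (0.5 * error) * 0.9        # the same 10% loss, but feedback compensates
    return state

print(open_loop(100.0, 10))    # ~90: the deviation is never detected
print(closed_loop(100.0, 10))  # ~99.8: the deviation is corrected step by step
```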

      As an optimist, I see the glass as half-full, though. And you allude to that as well: we’ve got to strengthen the ethical, cultural (and I would definitely add: legal) elements of control. That’s for the facet of society that goes beyond the simple market participant, industry customer, consumer of technology: that is for civil society in its entirety to discuss. I’m just concerned that we don’t have the mechanisms in place to facilitate that highly complex discussion. Today, most debate about technology is driven by industry considerations, along two lines:
      (1) opportunities for consumers — Buy this cool gadget and you’ll live forever!
      (2) employment — Jobs are in jeopardy if we cannot do what we propose!
      Of course the first is attractive to the individual citizen (more Silicon Valley style), whereas the second pressures governments (more German automotive industry style). And both lines of argument effectively suggest to citizens and their governments that they’d better accept what industry has to offer. Now, that would be entirely acceptable if industry was altruistic at its core. But we all know the concept of shareholder value and the myopic self-interest it fosters. In essence, today’s discussions about technology are dominated by a short-term micro-economic voice.

      For the future, I would want to hear several voices exchanging arguments. The short-term micro-economic voice is one for sure, but I’d need to hear long-term voices as well: What are the implications – e.g., hidden costs for future generations, environmental impacts, effects on society? And in all of those discussions, I want civil society to stand its ground: not just as a collective of anonymous consumers, but ultimately as the electorate that (indirectly, but still) defines politics and makes laws. For in the end, industry only exists within the constraints of political and legal frameworks.

      Which then gets us very directly to questions about our value system: What’s our purpose, and how does innovation help? Is technology a tool for problem-solving? Is making money the main driver for human progress? I’m structuring my thoughts on those and related questions – for the upcoming post 🙂

      • Totally agree that today’s discussions about technology are dominated by a short-term micro-economic voice. The decision to invest in new technology should be driven by a vision and objectives related to our well-being, our family’s well-being, and our sustainable future, regardless of any ephemeral economic gains. The Norwegian government, for example, has hired a philosopher as a moral compass, to help decide how to spend the country’s enormous oil revenue. Try mentioning this idea to the European Central Bank or the Federal Reserve. 🙂
        Still, very glad to hear that you see the glass half-full.
