As innovators seeking to reach further and farther, we always stand on the shoulders of our many predecessors. We build our future progress on their invaluable past achievements. We literally don’t have to re-invent the wheel (or the steam engine, or double-entry book-keeping, or the integrated circuit, you name it), thanks to humankind’s unique capability of social learning, of sharing experiences and ideas over time and over space.
Biology, culture, technology
Initially our ancestors had only gestures and facial expressions, until the ability evolved to communicate ideas through articulate speech. The advent of spoken language marks the beginning of our cultural evolution, which turned out to progress far faster than our biological evolution ever could. Even more stunning than cultural evolution’s pace is its relentless acceleration. We invented script about 5,000 years ago, and movable-type printing in the 15th century. We created the first integrated circuits only in the 1960s, and modern information technology seems to breed new generations of smartphones every year, while apps are already updated monthly.
Today, we rely heavily on computers, databases, and the internet to underpin, facilitate, reinforce, and accelerate nearly everything we do: we use our technology to augment our biology. And that includes our social learning capabilities and our creative skills as well. There’s nothing wrong with that in principle, as long as we recognise the hidden feedback loop: social learning and individual creativity give rise to novel technology, and at the same time they are directly affected by that technology, in particular by our modern information technology. Further, we must acknowledge the tremendous speed of technological change as well as its continued acceleration. The pace imposed by the augmenting technology challenges our biologically defined ability to adjust.
For these two reasons, it is high time for us to critically assess technology’s impact on our innovative endeavours: How do we employ technology to support innovation? What are the benefits we seek to realise? And what are the potential detriments we should mitigate, if not avoid? This is the time to formulate constraints and expectations: What do we demand? What can we realistically hope for? As I go through these questions, the concept of the adjacent possible (see my earlier explorations here and here) will help to anchor the evolving storyline.
Teaming up
There’s no reason to assume that technology could automatically, all by itself, and somewhat miraculously ‘get it right’ and simply deliver what we’ll need. As Brian Arthur discussed in The Nature of Technology, technology areas co-evolve with society in a process of mutual adaptation. We cannot help but shape this adaptation, beyond the initial purpose of a given technology, and through all its later applications. It is a responsibility borne not only by the narrow set of technology developers; it is shared widely by all users. Hence all of us have to invest conscious effort to shape emerging technologies and employ them wisely, in order to maximise the benefit they will yield while minimising the harm they could cause.
However, this mutual adaptation does not happen overnight. To take a long-range historical example, just consider how we gradually augmented human muscle: first we domesticated animals and learned to use wind and water power, then steam and coal, then oil and gas, and finally nuclear power. As we gained control over ever more powerful sources of energy, and as the size and density of our settlements kept increasing, we had to find new ways of managing ever larger groups of people, be it politically, economically, or socially. All of that we learned one step at a time. Growing and progressing gradually, we kept venturing into the adjacent possible. Today, we confront a comparable challenge, except that we now seek to augment human brains, or more precisely, human cognitive and intellectual capacities.
But here’s the catch! The hidden feedback loop between social learning and creative thinking on one side and modern information technology on the other accelerates technological development as well as societal change. Unlike in the past, when technology influenced society gradually and mainly in the long run, today’s technology impacts society instantaneously, in the here and now. This reality forces us to rethink our relation with technology. We must go beyond the traditional concept of technology as a simple, mechanical, unchangeable tool. We must understand technology as an agent that responds to our ways of using it. And we must come to grips with the tremendous implications of this new paradigm of making progress.
I’d suggest that we consider it a team sport: humans, aware of their strengths and weaknesses, teaming up with technology. Following that route, we find two basic strategies to pursue. We could use technology to compensate for human weaknesses, or to support human strengths. In reality, these may well overlap, yet for the sake of clarity and simplicity, let’s address each of them in turn.
Compensating human weakness
Let’s start with an honest look in the mirror: What are we good at? And more importantly: What are we less good at? What are our limitations and shortcomings? In the historic case of human muscle, that problem was clearly framed and easily addressed: We simply needed more power and greater endurance. But for human brains, for our cognitive capacity, and in particular for social learning and creative skills, the answer is far from simple. Still, I’ll suggest two targets for the technological augmentation we should seek: our imagination and our cognitive biases.
We take pride in our imaginative powers. However, human imagination is not unlimited. In fact, as Vittorio Loreto points out, when asked to think creatively or to be innovative, it is impossible for us to conceive of all the potential options out there. While we will always come up with a few ideas, the vast majority of possible solutions won’t even cross our minds. Therefore, we should think about technology that helps identify and make visible more of those possibilities, in order to expand the range of options humans can take into consideration. The approach would be, at least conceptually, rather simple: first, map the actual; second, sketch the possible, based on that map of the actual; and third, indicate the adjacent possible, the nearby part of the possible. The objective for such technology is not an exhaustive depiction of the entire adjacent possible, but a wider overview than human skills alone could readily provide. Such technology would offer suggestions for humans to consider. And humans would validate, evaluate, and assess the suggestions, and, if they are found worthwhile, pursue them further. In this scenario, technology would take the role of a scout, advising us as we navigate yet uncharted territory.
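To make the three steps a little more concrete, here is a minimal sketch, assuming that ideas can be represented as nodes in a graph whose edges express what each already realised idea could directly enable. The function name, the example data, and the graph representation are purely illustrative assumptions, not a description of any existing tool.

```python
# A toy model of the three steps: the dictionary is the sketch of the possible,
# the set is the map of the actual, and the function points out the adjacent
# possible. All names and data are hypothetical, for illustration only.
from typing import Dict, Set

def adjacent_possible(possible: Dict[str, Set[str]], actual: Set[str]) -> Set[str]:
    """Ideas that are one step away from what has already been realised."""
    frontier: Set[str] = set()
    for idea in actual:
        frontier |= possible.get(idea, set())
    return frontier - actual

# Hypothetical map: each realised idea and what it could directly enable.
possible = {
    "wheel": {"cart", "potter's wheel"},
    "cart": {"carriage"},
    "steam engine": {"locomotive", "steamship"},
}
actual = {"wheel", "steam engine"}

print(adjacent_possible(possible, actual))
# -> {'cart', "potter's wheel", 'locomotive', 'steamship'}, but not yet 'carriage'
```

The point of such a sketch is only to show that the scout role is computationally plausible; the hard part, mapping the actual in the first place, remains with us.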
The second target is not so much a shortcoming; it is rather a challenge of ‘too much’ or ‘too quick’: our notorious oversights, our shortsightedness and prejudices. Once we acknowledge that cognitive biases contaminate our thinking and taint our judgement, the question is: Could technology warn us of our own biases? The origins of those biases are deeply rooted in the basics of our very thought processes, which are well described in Daniel Kahneman’s seminal Thinking, Fast and Slow. He delivers a marvellous explanation of our thinking patterns, why they served us well throughout our evolution, and where they can lead us astray; the title of one chapter says it all: ‘A machine for jumping to conclusions’. Building on Kahneman’s in-depth analysis, Hans Rosling offers an overview of the resulting biases in Factfulness. He identifies the triggers (what he calls our ten dramatic instincts) that lead us down the wrong path of hasty judgement based on flawed interpretations and incomplete assumptions. Could we devise a technology that checks decision situations for those triggers and alerts humans to their presence? Here, technology would serve as a counsellor, reminding us of our penchant for intellectual shortcuts and the risk of heading into dead ends. As in the previous case, technology would complement human skills, not replace them.
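As a thought experiment, the counsellor role could start from something as simple as the following sketch: scan a decision brief for wording that tends to accompany some of Rosling’s dramatic instincts and flag it for human review. The instinct names are Rosling’s; the trigger phrases, and the idea that plain phrase matching would suffice, are my own illustrative assumptions.

```python
# A deliberately naive bias 'counsellor': flag wording in a decision brief that
# often accompanies hasty, instinct-driven judgement. The phrase lists are
# illustrative assumptions; a real system would need far richer signals.
TRIGGERS = {
    "fear instinct": ["catastrophic", "terrifying", "crisis"],
    "urgency instinct": ["act now", "last chance", "before it is too late"],
    "single-perspective instinct": ["the only solution", "the one cause"],
}

def flag_triggers(text: str) -> dict:
    """Return each instinct whose trigger phrases appear in the text."""
    lowered = text.lower()
    return {
        instinct: [p for p in phrases if p in lowered]
        for instinct, phrases in TRIGGERS.items()
        if any(p in lowered for p in phrases)
    }

brief = "We must act now; this terrifying crisis leaves us one last chance."
for instinct, hits in flag_triggers(brief).items():
    print(f"Possible {instinct} ({', '.join(hits)}): slow down and re-check the evidence.")
```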
Supporting human strength
The capacity for social learning is unique to humans. It is the reason why our cultural evolution has accelerated beyond the speed of biological evolution, and why it keeps accelerating. Social learning is certainly the most distinct human strength, and still we could add to that strength by teaming up with technology. To understand how that might work, we can simply split the term ‘social learning’ into its two literal components: ‘social’ and ‘learning’. While ‘social’ emphasises the need for connecting people, ‘learning’, in its wider sense of developing understanding, consists of two strands: connecting ideas and making sense. Let’s address these three elements one by one.
Connecting ideas is a way of generating new ideas and developing new understanding. The story of how such insights arise is usually told in a straight line of ‘connecting the dots’. However, as Gary Klein observes in Seeing What Others Don’t, that storyline is a gross simplification. Constructed in hindsight, it only includes the elements that are part of the final story, presenting them in their final logical order. That storyline misses all the distractions and challenges en route to the new idea: sorting out the non-dots, refuting the anti-dots, and clarifying the ambiguous dots. And it ignores the loops and iterations that inevitably occur throughout the gestation of the new idea. Hence we have to think about technology that can help us cut through the noise of the myriad ideas buzzing around in our brains, giving them some structure, amplifying the weak signals, and focusing our creative and innovative efforts. In Where Good Ideas Come From, Steven Johnson shares some of his experience with using commercially available software for that purpose. And even though his observations date back to 2010, his approach is still informative, as it offers clear orientation for the team play between humans and machines that we should strive for.
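To illustrate what ‘amplifying the weak signals’ could mean in practice, here is a toy sketch: given a small pile of notes, it surfaces the pairs that share rare vocabulary, so that unexpected connections are not drowned out by the obvious ones. The notes, the scoring by word rarity, and the function names are all illustrative assumptions; this is not the software Johnson describes.

```python
# A toy 'idea connector': rank note pairs by shared words, weighted by how rare
# those words are across the whole collection, so unusual overlaps stand out.
import math
import re
from itertools import combinations

notes = {
    "n1": "coral reefs bleach when ocean temperature rises",
    "n2": "urban heat islands raise city temperature at night",
    "n3": "reflective roof coatings reduce heat absorption",
}

def words(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

tokens = {name: words(text) for name, text in notes.items()}
doc_freq: dict = {}
for toks in tokens.values():
    for w in toks:
        doc_freq[w] = doc_freq.get(w, 0) + 1

def link_score(a: str, b: str) -> float:
    """Shared words, each weighted by its rarity (inverse document frequency)."""
    return sum(math.log(len(notes) / doc_freq[w]) for w in tokens[a] & tokens[b])

for a, b in sorted(combinations(notes, 2), key=lambda p: link_score(*p), reverse=True):
    print(a, b, round(link_score(a, b), 2))
```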
The second element of learning is making sense, i.e., reasoning and causal inference. Up until now, that has been solely the domain of human prowess, despite the considerable hype about the promise and potential impact of Artificial Intelligence (AI). Cutting through the maze of claims, hopes, and misconceptions, Judea Pearl wrote The Book of Why to deliver what he calls ‘the new science of cause and effect’. A computer scientist as well as a philosopher, he offers a thorough analysis of the inner workings of causation, together with an honest stocktake of the achievements, and limitations, of current technologies. Built on the Bayesian networks that Pearl helped develop earlier in his career, today’s machine learning and deep learning approaches remain stuck at the level of identifying correlations. And Big Data, based on those same principles, will not magically break out of that mould. On the other hand, Pearl’s more recent work on causal diagrams opens a new path towards technology that could be truly capable of causal reasoning: machines that can reason and make sense.
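The difference between correlation and causation that Pearl insists on can be shown with a small numerical illustration. Assume a simple causal diagram with a confounder Z that influences both a treatment X and an outcome Y: a purely correlational comparison overstates the effect of X, while stratifying on Z (a simple case of Pearl’s back-door adjustment) recovers it. The numbers and the simulation are made up for illustration.

```python
# Simulate Z -> X, Z -> Y, X -> Y and compare a naive correlational estimate of
# X's effect on Y with a back-door adjusted one. Parameters are illustrative.
import random

random.seed(0)
TRUE_EFFECT = 1.0  # the causal effect of X on Y built into the simulation

samples = []
for _ in range(100_000):
    z = random.random() < 0.5                        # binary confounder
    x = random.random() < (0.8 if z else 0.2)        # Z makes 'treatment' X more likely
    y = TRUE_EFFECT * x + 2.0 * z + random.gauss(0, 1)
    samples.append((z, x, y))

def mean(vals):
    return sum(vals) / len(vals)

def diff(rows):
    """Mean outcome for X=1 minus mean outcome for X=0 within the given rows."""
    return mean([y for _, x, y in rows if x]) - mean([y for _, x, y in rows if not x])

naive = diff(samples)                                # ignores the confounder
z1 = [s for s in samples if s[0]]
z0 = [s for s in samples if not s[0]]
adjusted = (len(z1) * diff(z1) + len(z0) * diff(z0)) / len(samples)

print(f"naive (correlation only): {naive:.2f}")      # roughly 2.2
print(f"back-door adjusted:       {adjusted:.2f}")   # close to the true 1.0
```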
The social component of social learning is all about connecting people, which is the prime approach for connecting ideas. Humans are formidable carriers of ideas and knowledge, so that whenever people meet to establish new connections (or strengthen existing ones), their ideas do exactly the same. Over centuries, we devised different media to facilitate direct human connections. Apart from public meeting venues like the agora, the salon, or the coffee house, these include remote communication through mail, telegraph, and telephone. Most recently, social media were launched with the express claim to connect more people more easily than any other medium ever before. However, they did so without replicating the manifold and multilayered types of personal relations that exist in the real world. You could only pick and choose between friend or not, follower or not. Thus, social media users are forced to connect without protection, without barriers: whatever you share online is shared widely. Social media’s strong emphasis on the unconstrained exchange of ideas sacrifices the essence of human privacy. As Bruce Schneier argues in Surveillance Kills Freedom by Killing Experimentation, the creative, eccentric, challenging ideas are only shared within trusted circles. Where privacy is not guaranteed, humans weigh their words carefully; when in doubt, they would rather self-censor than share their ideas. Unintentionally, today’s social media promote conformity at the expense of creativity and originality, thereby eliminating the very differences of ideas and perspectives that are the fuel for innovation.
Privacy’s paramount importance for connecting humans and their ideas has long been contested by social media advocates. You might recall Facebook founder Mark Zuckerberg’s proclamation in 2010 that privacy is no longer a social norm. With that statement in mind, it is astonishing to read his recent privacy-focused vision for social networking, which indicates a fundamental change of mind. Only time will tell whether Facebook truly changes course in that new direction. But we, as users, as citizens, and most importantly as humans, shouldn’t wait. We should push for serious privacy protection on social media. We need more than public announcements; we need effective action. Only then will social media become a genuine asset supporting our greatest strength: social learning.
Questions arising
Based on the ideas outlined above, I’ll offer a couple of guiding questions. They should help us maintain clarity and focus throughout the necessary and continued discussion: How do we best team up with technology in support of our innovative endeavour?
What’s our objective?
We must acknowledge that we cannot predict the co-evolution of society and technology with any certainty. At best, we might partially anticipate some developments or trends. Nevertheless, we will inevitably shape this co-evolution through our choices and decisions. So let our choices be conscious. Let our decisions be informed. And let them be robust. That’s what David Collingridge proposed back in 1980 in The Social Control of Technology: decisions are robust when their outcome is reversible, when they don’t send you down a steep, narrow, one-way path.
Similarly, technology must remain open to adjustments and course corrections. Of course, technological lock-in is very much in the interest of business corporations, in particular if the technology in question is proprietary. Yet the resulting dependency on monopoly providers is directly opposed to society’s need for technological variety, options, and choice. Monopoly and innovation are mutually exclusive; hence we must strive for a sound balance between the interests of business and the needs of society. Beyond the shadow of a doubt, society must come first.
We cannot leave the discussion to business alone!
How do we want the team to work?
The two recent incidents with the Boeing 737 MAX vividly illustrate the potentially dramatic outcomes of failed or flawed teamwork. While this case comes from advanced flight control systems, it points to the very basic, yet essential, questions we should ask ourselves about any technology: How can we ensure that technology is clear, unambiguous, predictable, and transparent to humans? And if that’s not the case, who has the power to overrule it? How do we effectively prevent the teamwork itself from creating new weaknesses?
Now think for a second about today’s AI systems. They are already deployed to support many vital decisions, for example in our legal system (criminal sentencing as well as parole), in banking (who gets a loan and who doesn’t), and in insurance pricing and coverage. Yet they all operate as black boxes: you cannot peek ‘under the hood’ to understand how the AI arrived at its recommendation. Still, the human operators are all too willing to follow the suggestion in blind faith. And if you are subject to such a decision and seek to appeal it, your chances are slim, because the operators can only cite their trust in the AI system.
I believe that AI must be transparent, that AI must be able to ‘explain’ its ‘reasoning’ (see the paragraph on making sense, above) and give an indication of the validity of the proposed course of action. Some will now object that today’s AI cannot do that, because it isn’t designed for that purpose; and I must accept that reality. But things don’t have to stay like that. In fact, I will argue, we must design future AI to master this challenge. That’s a decisive precondition for our effective teaming with technology.
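What such an ‘explanation’ might minimally look like can be sketched with a deliberately simple, transparent scoring model: a weighted sum of named factors whose individual contributions are reported alongside the recommendation. The factor names, weights, and threshold below are illustrative assumptions, not a real decision system; they only show that a recommendation can be made inspectable and contestable.

```python
# A transparent recommendation sketch: every factor's contribution to the score
# is visible, so the outcome can be questioned and appealed. Purely illustrative.
WEIGHTS = {"income_stability": 0.5, "repayment_history": 0.4, "existing_debt": -0.6}
THRESHOLD = 0.3

def recommend(applicant: dict) -> dict:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "recommendation": "approve" if score >= THRESHOLD else "refer to a human",
        "score": round(score, 2),
        # The 'reasoning': each factor's contribution, largest first.
        "explanation": dict(sorted(((f, round(c, 2)) for f, c in contributions.items()),
                                   key=lambda kv: abs(kv[1]), reverse=True)),
        # A crude validity indicator: distance of the score from the threshold.
        "confidence": round(abs(score - THRESHOLD), 2),
    }

print(recommend({"income_stability": 0.9, "repayment_history": 0.7, "existing_debt": 0.4}))
```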
In parallel to such a dedicated effort to make the technology a better team player, we’ll have to work on our human mindset as well. All too often, recommendations made by ‘the AI’ or ‘the system’ are blindly executed. In such cases, operators transfer their responsibility to the machine; for no good reason other than convenience. Yet the inconvenient truth is: We remain in charge, be it as developers of technology or as its users. Let’s not hide behind technology.
We cannot leave the solution to technology alone!
Who is in charge of developing the technology?
The call for transparent AI might be obvious for citizens and users, but it’s not necessarily self-evident for researchers and coders. Judea Pearl offers an interesting insight in The Book of Why: AI research is divided between what he calls ‘neats’ and ‘scruffies’. While the ‘neats’ favour transparent AI, the ‘scruffies’ just want something that works. And currently, he concedes, the ‘scruffies’ seem to dominate the field. From a techie point of view, ‘good enough’ might well be just that: good enough. But what does that mean for society? Do we really want to accept the risks associated with AI that gets its core job roughly done, but has never been tested or evaluated for side effects and collateral impact? How reliable, how trustworthy could ‘scruffy’ AI actually be?
And even if the ‘neats’ were to take the lead, the problem of algorithmic bias looms large. The tech industry is moving swiftly to establish AI ethics boards that advise AI developers on the moral and ethical implications of their systems. However, those boards partially suffer from the same problem they are expected to solve: their composition does not reflect the population that the AI systems will work on, for, and with. Silicon Valley is not representative of California, and California is not representative of the world.
Still, the challenge runs a lot deeper than techies and ethics boards; it originates directly from human nature and our inability to shed our own biases. Reflecting on her experience at the CIA and at Facebook, Yaël Eisenstat concludes that the very skills required for observing and counteracting our various cognitive biases are antithetical to the can-do spirit and the move-fast culture of the tech industry. Tasking techies with solving this problem is therefore at least naïve, if not downright insincere. The solution to the technological problem of algorithmic bias is certainly not found in more data. The answer is not more technology; the answer is more humanity. And that is a collective task for all of us: we must learn to understand our own biases and draw the right lessons from Daniel Kahneman’s work.
We cannot leave the development to the techies alone!
How do we go forward?
We must acknowledge a fundamental shift in our relation with technology. Modern information technology, the internet, and not least AI have changed the role of technology in our lives: from passive, stable tools that are at our disposal and command, to active, adaptable agents that respond to our ways of using them. Technology is no longer our slave; making progress ultimately becomes a team sport of humans and technology.
To come to terms with this shift, we need a serious conversation across the whole of society. It must tackle questions of desirability and achievability, as well as risks and rewards. I’d split that conversation into five perspectives that collectively cover societal, business, and technology facets:
- What outcome is socially desirable?
- What risk is socially acceptable?
- What approach is macro-economically viable?
- Which technology is micro-economically promising?
- What solution is technically feasible?
With this entire context in mind, and acknowledging the many trade-offs between those facets, we can then define and implement innovation policies that promote the benefits of novel technologies, while limiting and mitigating their potential detriments.
To team up with technology successfully, we cannot simply hope for the best. There’s no good reason to have blind faith in a technology-empowered bright future. On the contrary, we must develop technologies that deserve our mature trust. We must engage in that development consciously and actively. Techies, business people, users, policy makers. As citizens. As humans. All of us.
We are in it together!
What's your view?