Artificial Intelligence (or AI) is hotly debated for all the promise it holds and the concerns it raises. Opinions abound and range widely, from hailing AI as the harbinger of an entirely new level of human development to cursing it as ushering in the end of all human civilisation. Across all that discussion, you’ll find three common strands:
- the impact of AI on society will be transformational,
- the exact outcome is uncertain, and
- the time to act is now.
That is exactly the Collingridge Dilemma: we must make decisions and devise policy while a technology’s impacts are still deeply uncertain, yet by the time those impacts become clear, the technology may already be too entrenched to steer. Two recent contributions to the AI discussion each present a particular angle for dealing with this dilemma, one very wide, the other fairly narrow. Each has its merit, each is relevant, and we definitely need both. But only taken together do they reveal the wide-open space between them: the option space for policies that can shape the AI we want to have in the future. That is the space we should explore, and I’m glad to see another concrete step being taken on this exploratory journey. But first, the two angles.
The very wide angle
As a historian and politician, former U.S. Secretary of State Henry Kissinger presents a very broad view of the implications of AI in How the Enlightenment Ends, framing his argument against the historical development of human culture and philosophy.
While acknowledging the extraordinary benefits that AI can bring about, he expresses three major concerns: AI may achieve unintended effects due to its inherent lack of context; AI may change human thought processes and human values; AI may be unable to explain the rationale for its conclusions. All of those potential outcomes of AI could undermine the relevance and impact of human reasoning: the search for empirical knowledge and causalities that has guided human development since the Age of Enlightenment.
For Kissinger, the most pressing challenge at this stage is our lack of a vision for the future with AI. He calls for a group of eminent thinkers from technology, philosophy, the humanities, business, and government to develop such a vision.
The narrow angle
A lawyer and technologist, Microsoft president Brad Smith zooms in on a particular application of AI in his call, published just a week ago, for regulating facial recognition technology.
Concerned over the potential abuse of this technology, he does not want to leave its further development and implementation to the commercial interests of industry. Instead, he lays out six principles for industry to adopt: fairness; transparency; accountability; nondiscrimination; notice and consent; and lawful surveillance. Based on these principles, he makes concrete recommendations for governmental regulations that would minimise the risks of discrimination, intrusion into people’s privacy, and encroachment on democratic freedoms.
Smith advocates an incremental approach for both: industry adopting the six principles, and government developing regulation. We’ll need to adjust and revise as facial recognition technology evolves and our understanding of its implications advances over time.
A concrete step forward
Also last week, Canada and France jointly announced their plans for the International Panel on Artificial Intelligence (IPAI), seeking to establish “an international place where we can discuss all the impacts of AI in the transformation of society.”
Openly inviting participation from all interested nations, the IPAI should bring together policy experts with researchers in AI, the humanities, and the social sciences to guide the development of policies that keep AI technology grounded in human rights.
This heartening step picks up Kissinger’s general recommendation and takes it straight to the international level, while providing much wider context for Smith’s suggestions. I’m looking forward to seeing the IPAI gain traction as an inclusive forum for much-needed debate, and as a credible and trusted source of advice to national and international decision makers.
What's your view?