Trends to consider in policy-making – What to pursue

The ongoing Digital Revolution reshapes our world and challenges policy makers in many ways. The emergence of the platform economy has created powerful non-state actors that have far-reaching influence on communities around the globe. In this new nexus of business, technology, and society, we need to rethink policy-making to blend innovation policy with foreign policy from the start.

We need to think about potential dangers ahead, about necessary course corrections and plausible alternatives, and about the public discourse we should promote. In the previous post, I cast some light on four potential downsides of the Digital Revolution, on the outcomes we should seek to avoid. In this post, I’ll look at four forward-looking proposals that could help us shape our digital future for the greater good of society.

Technology to the rescue?

Social media today are free of charge for their users, but at immense cost. The predominant business model of social media platforms is driven by micro-targeted advertising: advertisers pay for user attention, and the platforms have become masters of managing it. First, they developed algorithms that sift through our online activities to identify our preferences. Second, they entice us to spend ever more time on their platforms to gather ever more data about us, in order to feed and improve those algorithms. User data have become a new raw material, much as coal was in the Industrial Revolution, with users cast as the miners. But the forgotten truth is: personal data are the users’ own property. Users are expropriated by the social media platforms for ridiculously little compensation: free-of-charge access. Is that good enough? Is that the best we can imagine?

The big question is: Can we change the technological underpinnings of today’s digital economy? Internet pioneer Tim Berners-Lee offers an alternative approach in One Small Step for the Web. To that end, he developed Solid, a new platform built entirely around the principle that data owners should remain in full control of their personal data at all times. The core of this concept is the personal online data store (POD). The user-owner has full control over her POD regarding its location, content, and access: Which service or app gets access? To which portion? For how long? Service providers and app developers must obtain user consent before working with user data, and users know at all times who is doing what with their data.
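The consent model described above can be sketched in a few lines of code. To be clear, this is a hypothetical illustration of the idea, not Solid’s actual API; the names `Pod`, `Grant`, and `grant_access` are invented for this sketch:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Grant:
    app: str              # which app the owner has consented to
    portion: str          # which part of the POD it may read
    expires: datetime     # how long the consent lasts

@dataclass
class Pod:
    """A personal online data store: the owner decides who reads what, and for how long."""
    owner: str
    data: dict = field(default_factory=dict)
    grants: list = field(default_factory=list)
    log: list = field(default_factory=list)  # the owner can audit every access attempt

    def grant_access(self, app: str, portion: str, days: int) -> None:
        self.grants.append(Grant(app, portion, datetime.now() + timedelta(days=days)))

    def read(self, app: str, portion: str):
        ok = any(g.app == app and g.portion == portion and g.expires > datetime.now()
                 for g in self.grants)
        self.log.append((datetime.now(), app, portion, ok))  # record the attempt either way
        if not ok:
            raise PermissionError(f"{app} has no valid consent for '{portion}'")
        return self.data[portion]

pod = Pod(owner="alice", data={"contacts": ["bob"], "photos": ["cat.jpg"]})
pod.grant_access("calendar-app", "contacts", days=30)
print(pod.read("calendar-app", "contacts"))  # consented: returns ["bob"]
```

The decisive design choice is that the access check and the audit log live with the data owner, not with the service provider: an app that was never granted consent simply cannot read, and every attempt is visible to the user.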

Solid definitely has the potential to become a viable alternative to the current platforms. Users would regain control over their data, at a transparent price that they’d have to pay for their POD. That price will be paid in conventional currency, in Dollars, Euros, Yen, and it will be borne by the users directly. I am hopeful that this price will be comparatively modest. Time will tell whether sufficient numbers of users are willing to subscribe to this decisive system change. The even bigger question is how industry will respond. For sure, user control over user data comes across as a frontal attack on the foundation of their current business models: the generation and subsequent expropriation of massive streams of user data. However, business models are not carved in stone, whereas first principles of civil rights are.

Accredited algorithms only?

For many products, we expect that they have been thoroughly tested before they are introduced to the market. Think about cars, airplanes, or medicine: We’d just like to be sure that the risk of them causing any harm is within acceptable limits. And in many instances, we rely on government agencies (such as the U.S. Food and Drug Administration, FDA) to define appropriate safety standards and test regimes, and to ensure industry’s compliance with those standards.

Mathematician Hannah Fry suggests exactly that mechanism for the algorithms that drive the digital future: We need an FDA for algorithms. Why? Because today, algorithms are sketched out in the lab and then tested in the real world only for their technical functionality. Such beta-testing directly on the consumer is considered a vital advantage for getting the product out faster, but it spares no thought for potential negative implications. And industry will certainly tell you that more tests would cost valuable time and money, that they would be either impossible or ineffective, but either way with utmost certainty useless. Well, that sounds very much like a seller of snake oil proclaiming all the good his medicine will do for you, if you just trust him.

There are many examples of products that became subject to safety standards and regulations only after the harm they could inflict had become evident, including cars, airplanes, and medicine. When those standards were introduced and enforced, the respective industries of course cited the detrimental effect on profit, the cost increase that would drive prices up, the destruction of jobs, and so on. And still, society benefited mightily from implementing safety standards and holding industry accountable. Why should algorithms be treated any differently? Why not establish a government agency for the accreditation of algorithms? Why not permit only those algorithms into the real world that have been thoroughly tested and validated, by neutral experts without business interest?
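Fry’s proposal amounts to a gatekeeping step before deployment. Purely as a hypothetical illustration (no such agency, registry, or certificate format exists; all names here are invented), the mechanism might look like this:

```python
# Invented registry of accreditation certificates issued by a neutral agency.
# In Fry's proposal, an algorithm without such a certificate may not be deployed.
APPROVED = {"credit-scoring-v2": "AAA-2019-0042"}

def deploy(algorithm_id: str) -> str:
    """Refuse to deploy any algorithm that lacks a valid accreditation certificate."""
    cert = APPROVED.get(algorithm_id)
    if cert is None:
        raise RuntimeError(f"'{algorithm_id}' is not accredited; deployment blocked")
    return f"deploying {algorithm_id} under certificate {cert}"

print(deploy("credit-scoring-v2"))  # accredited: deployment proceeds
```

The point of the sketch is the direction of the default: today, deployment is permitted unless harm is proven afterwards; under an accreditation regime, deployment is blocked unless safety has been validated beforehand.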

The future of journalism?

For a long time, we have relied on journalism as the fourth estate, as a check and control to keep our political system reasonably transparent and honest. A key role in traditional newspaper journalism was held by the editor, who decided which stories to print and where to position them in the paper. The editor thus served as the content curator for the paper’s audience, stimulating and fostering public debate. But that role is disappearing fast as the online world draws away most of the advertising money that helped sustain the printed media. What is more, social media have taken a large portion of the audiences’ attention, and their algorithmic toolbox (e.g., Facebook’s newsfeed) serves streams of news tailored to any individual reader’s personal interest. As a result, the seemingly same source of news delivers different news to different people, thereby fostering the polarisation of audiences and stifling meaningful public discourse. This influencing power of social media platforms is not subject to any control, let alone to public debate.

The question for our digital future then is how we could best combine the strengths of traditional journalism with the opportunities of modern information technology. To deliver an answer, the journalists Sue Gardner, Julia Angwin, and Jeff Larson founded The Markup as a non-partisan, non-profit newsroom. Its core tenet is the teaming of journalists with data scientists, for every story, from the very onset of every investigative research project. Both perspectives are equally relevant to understanding the challenges and opportunities that digital technologies present for society, and The Markup’s founders are committed to bringing independent, in-depth analysis to the public.

The Markup is going to be a live experiment about journalism and technology alike: It’s journalism about digital technology and its implications for society; at the same time it uses digital technology to empower investigative journalism. It is planned to launch early in 2019, and I’m curious to see it evolve and flourish.

Align disruptive technology with public purpose?

Every technology serves a human purpose. But what is helpful and desirable for the individual user is not automatically good for the whole of society. Take the automobile, for example. Initially just a toy for the well-to-do and the gentry, it became the key to mass mobility only when mass production made private ownership affordable for many. And with increasing traffic, severe problems surfaced that needed to be addressed. Reliability was low, road conduct was miserable, and accident rates were atrocious. So over time, governments devised and enforced regulations to reduce the damaging side effects: traffic regulations defined the rules of the road, driving licenses ensured that drivers had a minimum knowledge of those rules, safety standards helped reduce the number of fatal accidents, and so on. The automobile is only one instance of governments and technologists working together to minimise the harmful side effects of a widely adopted new technology, of society and technology co-evolving in a process of mutual adaptation: it takes time, it is a struggle, but it is necessary.

In a recent speech at the Aspen Institute, the former U.S. Secretary of Defense Ash Carter (a theoretical physicist by training) laid out his ideas about Shaping Disruptive Technological Change for Public Good. His narrative is the story of technological disruptions, the ethics of technologists, and their interaction with politics. Contrasting the earliest with the most recent experiences of his professional career, he draws insightful comparisons between nuclear weapons and digital technology. He is firmly convinced that meaningful policies can only be devised if the technologists (the scientists, researchers, engineers, and programme managers with deep understanding of the subject matter) can and do make strong inputs to political decision-making, and he observes fundamental changes in the quality of those necessary interactions.

Throughout the Cold War, the nuclear weapons technologists worked closely with government to keep the arsenal safe, to avoid technology proliferation, to establish arms control, and later on to promote disarmament. After the formative experience of the Second World War, these technologists felt the moral duty to ensure that the technology they had helped create remained under tight governmental control. For them, trusted partnership with government was a natural fit, with a clear understanding of their division of labour: technologists to provide their knowledge and insights as advice, government to take informed decisions and to devise appropriate policies.

That’s different with the proponents of digital technologies at today’s cutting edge of disruption. Growing up in post-war societies, the digital technologists embraced a neutral, often skeptical view of government. The hacker culture that has been driving the tech industry so far is traditionally wary of government and tends to follow its own guiding star towards a better society. But whatever shape that “future better place” would take, whatever obligation technologists feel towards society, no matter how noble the intent: They are not legitimised by the general public to make policies, regulations, or laws. And they are of course not held accountable for policy-making either.

Shaping the arc of disruption towards public good, i.e., towards beneficial outcomes for society, requires the tech community and government to work together. That’s Carter’s unambiguous message. Government alone might define overly narrow regulations that prevent detriment but suffocate benefit alike, while the tech industry left to itself could end up in a race to the bottom for sheer profit considerations. Neither of these outcomes is in the interest of the general public. So we’ll have to establish an in-depth dialogue between government and technologists, a dialogue that provides unfettered, cutting-edge technology advice to political decision makers. To be credible, such advice needs to remain untainted by commercial interest. Re-igniting and sustaining that dialogue is a challenge, for sure. In fact, both sides will have to go beyond their current comfort zones. But acknowledging a challenge is the first step to overcoming it. What is more, Carter’s speech offers numerous examples of government-to-tech outreach initiatives, in particular in the U.S. And keeping in mind the role of the Danish Technology Ambassador, I see promising ideas to strengthen the dialogue and foster the much-needed cooperation between technologists and government.

What else?

What I’ve offered here can only serve as a selective overview of trends for policy-making to consider. In the previous post, I presented four potential downsides we should try to avoid, while this post summarised four potential upsides for us to pursue. Taken from a range of professional fields (history, sociology, economics, law, technology, mathematics, journalism, and politics), these ideas collectively illustrate how complex the challenges ahead really are. At the same time, this diversity should inspire us: There are many more good ideas than any single one of us could come up with, and many more sources than each of us might think.

It’ll be worthwhile to follow these trends, their evolution, the path they’ll take. Will they meet resistance and stall, or will they gain further momentum? Or will they blend and merge to take off in yet unforeseen directions? We live in truly interesting times, presenting unprecedented opportunities as well as entirely novel challenges. And once again, it is up to us to shape our future, to avoid looming detriment and to boost beneficial developments.

Over to you: Which important idea deserves more attention in policy-making?  What’s the essential thought, and why does it matter? But also: Who’s the originator? What’s the professional field?  And is there a source for us to read more?

