A Magna Carta for Inclusivity and Fairness in the Global AI Economy
"Should your self-driving Uber be allowed to break traffic regulations in order to safely merge onto the highway? To what extent are algorithms as prone to discriminatory patterns of thinking as humans – and how might a regulatory body make this determination? More fundamentally, as more tasks are delegated to intelligent machines, to what extent will those of us who are not directly involved in the development of these technologies be able to influence their decisions? It is with these questions in mind that we are pleased to have adapted the following article for publication at TDL. – Andrew Lewis, Editor-in-Chief"
We stand at a watershed moment for society’s vast, unknown digital future. A powerful technology, artificial intelligence (AI), has emerged from its own ashes, thanks largely to advances in neural networks modeled loosely on the human brain. AI can find patterns in massive unstructured data sets and improve its own performance as more data becomes available. It can identify objects quickly and accurately, and make ever more and better recommendations — improving decision-making, while minimizing interference from complicated, political humans. This raises major questions about the degree of human choice and inclusion for the decades to come. How will humans, across all levels of power and income, be engaged and represented? How will we govern this brave new world of machine meritocracy?
Machine meritocracy
To find perspective on these questions, we must travel back 800 years. It was January 1215, and King John of England, having just returned from France, faced angry barons who wished to end his unpopular vis et voluntas (“force and will”) rule over the realm. In an effort to appease them, the king and the Archbishop of Canterbury brought 25 rebellious barons together to negotiate a “Charter of Liberties” that would enshrine a body of rights to serve as a check on the king’s discretionary power. By June they had an agreement that provided greater transparency and representation in royal decision-making, limits on taxes and feudal payments, and even some rights for serfs. The famous Magna Carta was an imperfect document, teeming with special-interest provisions, but today we tend to regard it as a watershed moment in humanity’s advancement toward an equitable relationship between power and those subject to it. It eventually set the stage for the Renaissance, the Enlightenment, and democracy.
Balance of power
It is that balance between the ever-increasing power of the new potentate — the intelligent machine — and the power of human beings that is at stake. Increasingly, our world will be one in which machines create ever more value, producing more of our everyday products. As this role expands, and AI improves, human control over designs and decisions will naturally decrease. Existing work and life patterns will be forever changed. Our own creation is now running circles around us, faster than we can count the laps.
Machine decisions
This goes well beyond jobs and economics: in every area of life machines are starting to make decisions for us without our conscious involvement. Machines recognize our past patterns and those of (allegedly) similar people across the world. We receive news that shapes our opinions, outlooks, and actions based on inclinations we’ve expressed in past actions, or that are derived from the actions of others in our bubbles. While driving our cars, we share our behavioral patterns with automakers and insurance companies so we can take advantage of navigation and increasingly autonomous vehicle technology, which in return provide us new conveniences and safer transportation. We enjoy richer, customized entertainment and video games, the makers of which know our socioeconomic profiles, our movement patterns, and our cognitive and visual preferences to determine pricing sensitivity.
As we continue to opt in to more and more conveniences, we choose to trust a machine to “get us right.” The machine will come to know us in perhaps more honest ways than we know ourselves — at least from a strictly rational perspective. But the machine will not readily account for cognitive disconnects between what we purport to be and what we actually are. Reliant on real data from our real actions, the machine constrains us to what we have been, rather than what we wish we were or what we hope to become.
Personal choice
Will the machine eliminate that personal choice? Will it do away with life’s serendipity — planning and plotting our lives so we meet only people like us, thus depriving us of the encounters and friction that force us to evolve into different, perhaps better human beings? Yet there is also tremendous potential here: personal decisions are inherently subjective, but many could be improved by including more objective analyses. For instance, weighing the carbon footprint of different modes of transportation and integrating it with our schedules and pro-social proclivities may lead us to make more eco-friendly decisions; getting honest pointers on our more and less desirable characteristics, along with insight into the traits we consistently find appealing in others, may improve our partner choices; and curricula for large, diverse student bodies could become more tailored to the individual, drawing on a growing body of information about what has worked in the past for similar profiles.
Polarization
But might it also polarize societies by pushing us further into bubbles of like-minded people, reinforcing our beliefs and values without the random opportunity to check them, defend them, and be forced to rethink them? AI might be used for “digital social engineering” to create parallel micro-societies. Imagine digital gerrymandering, with political operatives using AI to lure voters of certain profiles into certain districts years ahead of elections, or Airbnb micro-communities renting only to and from certain socio-political, economic, or psychometric profiles. Consider companies being able to hire in a much more surgically targeted fashion, at once increasing their success rates and compromising their strategic optionality with a narrower, less multi-faceted employee pool.
Who makes judgments?
A machine judges us on our expressed values — especially those implicit in our commercial transactions — yet overlooks other deeply held values that we have suppressed or that are dormant at any given point in our lives. An AI might not account for newly formed beliefs or changes in what we value outside the readily-codified realm. As a result, it might, for example, make decisions about our safety that compromise the wellbeing of others — doing so based on historical data of our judgments and decisions, but resulting in actions we find objectionable in the present moment. We are complex beings who regularly make value trade-offs within the context of the situation at hand, and sometimes those situations have little or no codified precedent for an AI to process. Will the machine respect our rights to free will and self-reinvention?
Discrimination and bias
Similarly, a machine might discriminate against people in poorer health or of lower social standing because its algorithms rest on pattern recognition and broad statistical averages. Uber has already faced an outcry over racial discrimination when its algorithms relied on zip codes to identify the neighborhoods where riders were most likely to originate. Will the AI favor survival of the fittest, the most liked, or the most productive? Will it make those decisions transparently? What will our recourse be?
Moreover, a programmer’s personal history, predispositions, and unseen biases — or the motivations and incentives of their employer — might unwittingly influence the design of algorithms and the sourcing of data sets. Can we assume an AI will work with objectivity all the time? Will companies develop AIs that favor their customers, partners, executives, or shareholders? Will, for instance, a healthcare AI jointly developed by technology firms, hospital corporations, and insurance companies act in the patient’s best interest, or will it prioritize a certain financial return?
The AI Governance Challenge
We can’t put the genie back in the bottle, nor should we try — the benefits will be transformative, leading us to new frontiers in human growth and development. We stand at the threshold of an evolutionary explosion unlike anything in the last millennium. And like all explosions and revolutions, it will be messy, murky, and fraught with ethical pitfalls.
A new charter of rights
Therefore, we propose a Magna Carta for the Global AI Economy — an inclusive, collectively developed, multi-stakeholder charter of rights that will guide our ongoing development of artificial intelligence and lay the groundwork for the future of human-machine co-existence and continued, more inclusive, human growth. Whether in an economic, social, or political context, we as a society must start to identify rights, responsibilities, and accountability guidelines for inclusiveness and fairness at the intersections of AI and human life. Without such a charter, we will not establish enough trust in AI to capitalize on the amazing opportunities it can and will afford us.
* adapted from the forthcoming book “Solomon’s Code: Power and Ethics in the AI Revolution” (working title) copyright © 2017 Olaf Groth & Mark Nitzberg.
About the Authors
Olaf Groth
Dr. Olaf Groth is co-author of “Solomon’s Code” and CEO of Cambrian.ai, a network of advisers to executives and investors on global innovation and disruption trends such as AI, IoT, autonomous systems, and the Fourth Industrial Revolution. He serves as Professor of Strategy, Innovation & Economics at Hult International Business School, Visiting Scholar at UC Berkeley’s Roundtable on the International Economy, and a member of the Global Expert Network at the World Economic Forum.
Mark Nitzberg
Dr. Mark Nitzberg is co-author of “Solomon’s Code” and Executive Director of the Center for Human-Compatible AI at the University of California, Berkeley. He also serves as Principal & Chief Scientist at Cambrian.ai and as an advisor to a number of startups, leveraging his combined experience as a globally networked computer scientist and serial social entrepreneur.
Mark Esposito
Mark Esposito is a member of the teaching faculty at Harvard University’s Division of Continuing Education and a professor of business and economics with an appointment at Hult International Business School. He is an appointed Research Fellow at the Circular Economy Centre at the University of Cambridge’s Judge Business School. He is also a Fellow of the Mohammed Bin Rashid School of Government in Dubai.