19/7: Intelligent automation, ethics and jobs

We wondered what course a conversation might take among people who know what they are talking about on the subject of the likely impact of Intelligent Automation (IA) on society and the ethical dimensions for all involved. The people engaged in this conversation are KCP partners. Their views are their own, independently held, without the guiding hand of corporate interest, except for our principle at KCP that anything we say should be based on thoroughly learned, rigorously considered case information.

Craig Mindrum: Let me begin as provocatively as I think I must on this subject: The possibility that public companies will ever voluntarily do something that is good for the world, workers, or the general public but not good for themselves is, sorry to be blunt, laughable.

The late economist Milton Friedman once wrote, famously, that “the social responsibility of business is to increase profits.” I see little evidence that this sentiment is not still driving business strategy and operations—and, especially, attitudes toward the workforce. So, the answer to the question of what companies feel their responsibility is to employees in the face of technology advancements is: Nada.

But efforts will continue to paint rosy pictures. Economics columnist Robert J. Samuelson published a kind of “don’t worry your pretty little heads about it” column about robotics and jobs in April 2019. (All it takes is good retraining!)

And we’re also getting Orwellian “newspeak” (using language to confuse and control) about the matter. One example is when service providers describe their efficiency improvements as “redeploying people to other, more value-added tasks” when in fact the provider is being measured quite plainly on reductions in FTEs (full-time equivalents). And that’s all the Street cares about, too.

What will be the impact on jobs? On my most hopeful days, I feel that, one way or another, we will adapt. But what if it’s catastrophic, or nearly so? What are people actually going to *do* in the future? I imagine the highly educated will thrive. But it’s always important to keep in mind a sobering statistic: In the US, less than one-third of the population has a four-year university degree.

Here is one commentary: “A two-year study from McKinsey Global Institute suggests that by 2030, intelligent agents and robots could eliminate as much as 30 percent of the world’s human labor. McKinsey reckons that, depending upon various adoption scenarios, automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely.”

As I said, I’m being deliberately provocative. There’s another, more positive side to this, of course. Leslie, you have recently written about this topic in your short paper, “It Was the Robot’s Fault.” Are you thinking along the same lines, or are you more hopeful?

Leslie Willcocks: I think the clue is in your phrase about companies never voluntarily doing anything that is not good for them. I agree that most companies have a mostly tacit, but very limited, view of their social and public responsibilities and their own interests. Then something like the Boeing 737 MAX 8 crashes comes along.

Reading the reports so far suggests that part of the story there, the pursuit of cost efficiencies (for example, reducing pilot training time for Boeing and the airlines, and treating the software update as a minor change), has led to the tragic deaths of customers and a possible $1 billion loss on sales. And this is not even advanced AI. Driverless vehicles and their safety dimensions also loom large in the public and practitioner imagination. I think stories like this will put other companies on red alert about safety, reputation, the interconnectedness of automation, and the probability of unintended consequences.

I also think that we are seeing a typical pattern: new technology tends to run seven or so years ahead of society, government and business, which then have to catch up with its downsides and regulate it. Add in the growing backlash over social responsibility and AI, visible in many government committee reports, some legislation, and widespread publications by scientists, AI experts, and interested bodies, and you have accumulating pressure on corporations, already evident in some changing behaviours at, for example, Google, Facebook and Apple. It’s early days, but I am more hopeful than Craig.

John Hindle: Right! The backlash is growing, Leslie, and from within. Chris Hughes, co-founder of Facebook, has just called for the company to be broken up, arguing it’s too big and powerful for regulatory remedies. Its algorithms are too inscrutable, its de facto social media monopoly stifles innovation, and its goal of worldwide domination encourages bad behavior (Cambridge Analytica, anyone?).

And to your point, Craig, the economic impulse to grow and dominate in tech is effectively irresistible—achieving scale is organic to the domain (and relatively easy), and good behavior depends totally on the motives, incentives and decisions of executives and owners.

But the race is well and truly on: a recent McKinsey Global Institute study projects that companies deploying artificial intelligence capabilities early and widely stand to gain 12x the economic benefits that followers will realize over a 10-year period, while non-adopters will actually decline from current levels. Who can risk not joining the battle?

So there are clearly economic incentives and unique properties in intelligent automation that can’t be ignored or wished away, and some kind of governance is needed to mitigate harmful outcomes. But what’s the underlying moral framework that should guide this arms race? Peaceful Co-existence or Mutually Assured Destruction?

A starting point might be a kind of Hippocratic Oath for the industry: Do No Harm. But how do we build purpose into intelligent automation when it has no inherent moral framework? As Stuart Russell, UC Berkeley Professor of Computer Science, puts it, machines may be better at making decisions, but that’s not the same as making better decisions.

So are we smarter than the machines, or will they win in the end, aided and abetted by our inattention and collective greed? For the foreseeable future, I tend to agree with Russell that millions of years of natural selection give humans an inimitable advantage—a complete mental “blueprint” of reality—enabling us to plan and learn, to ask and answer “what if?” questions involving an infinitely diverse set of variables. With no equivalent representation of reality and a much more limited set of variables, the old axiom still applies to intelligent automation: garbage in, garbage out.

All the more reason we need effective governance.

Leslie Willcocks: Indeed, but then you have to take into account some practical difficulties with governance and regulation! Let me point to a few. A regulation requires a clear definition that is widely recognised. But what Intelligent Automation is—surely a set of technologies—and what it can and cannot do depends on the context, and is difficult to pin down. How it works is often opaque, as well. Furthermore, regulation might work for large organizations, but IA research and development can happen online, and the software is easily accessible. People working on IA and with IA can be in different legal jurisdictions, even in the same team. Which set of regulations prevails, and can the stricter regulation simply be evaded? Then again, people made legally responsible for AI might lose control over its consequences and how it is subsequently used.

One-size-fits-all regulations just might be too simplistic for complex phenomena. For example, would a general rule to disclose all source code or how an algorithm arrives at its conclusions be applicable and useful in every situation?

I think we need to move on several fronts at the same time. For example, companies can develop their own regulatory and ethical guidelines in line with industry standards, and these could be aligned with legally binding human rights, including privacy, safety, and non-discrimination. In the US, the Food and Drug Administration and the Defense Department are working towards creating their own policies. From within the IT sector, the Software and Information Industry Association (SIIA) and the Information Technology Industry Council (ITI) have proposed guidelines. The field has also come together in other ways, such as in the development of the Asilomar AI Principles by scientists, philosophers and researchers. The Partnership on Artificial Intelligence to Benefit People and Society was formed by tech giants Google, Facebook, Amazon, IBM and Microsoft and now has more than 80 partners, including academics, scientists and non-profit organisations. And OpenAI was co-founded in 2015 by Elon Musk and others to develop safe artificial general intelligence.

John Hindle: All interesting developments, but, following Craig’s skepticism, there are still enough examples emerging to leave open the question of whether companies can be relied on to put self-regulation ahead of shareholder interests and their bottom line. In addition, these attempts at self-regulation are not legally binding and may lead to a monopolization of data and the technology rather than its application for social good.

Leslie Willcocks: Well, yes—any attempt at regulating Intelligent Automation needs to ensure that civilian data rights are protected and secured from innovations that have an ever-increasing ability to sense and compute everything they “see” and “hear.”

This would suggest that, with Intelligent Automation, governments really do have a hyper-active role to play. The AI Now Institute at New York University, an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence, has done some good thinking on this and suggested (in 2018) eight roles for government:

  • Enable existing sector and industry organisations to regulate the development and deployment of AI technologies rather than attempting a wholesale approach to overseeing the technology.

  • Determine the data protection and rights to privacy of citizens, especially in the use of facial recognition technology.

  • Ensure that companies protect whistle-blowers.

  • Develop advertising standards for AI developers and service providers to limit the hype.

  • Ensure that the benefits of AI technology are experienced more widely, and not monopolised by the owners of the large tech companies.

  • Ensure clear communication about who is responsible, and the reasons why, if things go wrong with the technology.

  • Ensure that companies are responsible to more than just their shareholders—for example, audits should be conducted by external ethics boards and monitoring should be carried out by independent industry organisations.

  • Task technology companies with responsibility to promote clearer accountability in the development and deployment of algorithms.

That sounds like a lot of regulation, but we are in catch-up mode and technology development is accelerating. The fundamental question with any powerful technology is always: does “can” translate into “should”? Enter ethics and social responsibility. In the case of AI, belatedly. But taking meaningful action to limit this set of technologies to delivering on humane and socially acceptable goals now has to be ingrained in the very ways we think about, design, develop and deploy AI.

Conversation to be continued...

About the Authors:

Leslie Willcocks is Professor of Technology, Work and Globalization and Director of the Outsourcing Unit in the Department of Management at the London School of Economics and Political Science.

John Hindle has an extensive international business background. He currently serves as Vice Chair of the IEEE P2755 Intelligent Process Automation Working Group, a multilateral standards initiative for the emerging Intelligent Process Automation industry.

Craig Mindrum is a management and strategic communications consultant and writer. He taught business ethics for 15 years at DePaul University.