19/7: Intelligent automation, ethics and jobs

We wondered what course a conversation might take, among people who know what they’re talking about, on the likely impact of Intelligent Automation (IA) on society and the ethical dimensions for all involved. The people engaged in this conversation are KCP partners. Their views are their own, independently held, without the guiding hand of corporate interest – except for our principle at KCP that anything we say should be based on thoroughly learned, rigorously considered case information.

Craig Mindrum: Let me begin as provocatively as I think I must on this subject: The possibility that public companies will ever voluntarily do something that is good for the world or workers or the general public but not good for them personally is, sorry to be blunt, laughable.

The late economist Milton Friedman once wrote, famously, that “the social responsibility of business is to increase profits.” I see little evidence that this sentiment is not still driving business strategy and operations—and, especially, attitudes toward the workforce. So, the answer to the question of what companies feel their responsibility is to employees in the face of technology advancements is: Nada.

But efforts to paint rosy pictures will continue. In April 2019, economist Robert J. Samuelson published a kind of “don’t worry your pretty little heads about it” column about robotics and jobs. (All it takes is good retraining!)

And we’re also getting Orwellian “newspeak” (using language to confuse and control) about the matter. One example is service providers describing their efficiency improvements as “redeploying people to other, more value-added tasks” when in fact the provider is being measured quite plainly on reductions in FTEs (full-time equivalents). And that’s all the Street cares about, too.

What will be the impact on jobs? On my most hopeful days, I feel that, one way or another, we will adapt. But what if it’s catastrophic, or nearly so? What are people actually going to *do* in the future? I imagine the highly educated will thrive. But it’s always important to keep in mind a sobering statistic: In the US, less than one-third of the population has a four-year university degree.

Here is one commentary: “A two-year study from McKinsey Global Institute suggests that by 2030, intelligent agents and robots could eliminate as much as 30 percent of the world’s human labor. McKinsey reckons that, depending upon various adoption scenarios, automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely.”

As I said, I’m being deliberately provocative. There’s another, more positive side to this, of course. Leslie, you have recently written about this topic in your short paper, “It Was the Robot’s Fault.” Are you thinking along the same lines, or are you more hopeful?

Leslie Willcocks: I think the clue is in your phrase about companies not doing anything that is not good for them. I agree that most companies have a mostly tacit, but very limited, view of their social and public responsibilities relative to their own interests. Then something like the Boeing 737 MAX 8 crashes comes along.

Reading the reports so far suggests that part of the story there – the pursuit of cost efficiencies in, for example, reducing pilot training time for Boeing and the airlines, and treating the software update as a minor change – led to the tragic deaths of its customers and a possible $1 billion loss in sales. And this is not even advanced AI. Driverless vehicles and their safety dimensions also loom large in the public and practitioner imaginations. I think stories like this will put other companies on red alert about safety, reputation, the interconnectedness of automation, and the probability of unintended consequences.

I also think that we are experiencing a typical pattern with new technology, which tends to run seven or so years ahead of society, government and business before they catch up with its downsides and move to regulate it. Add in the growing backlash over AI and social responsibility – visible in many government committee reports, some legislation, and widespread publications by scientists, AI experts and interested bodies – and you have accumulating pressure on corporations, as can be seen in some changing behaviours at, for example, Google, Facebook and Apple. It’s early days, but I am more hopeful than Craig.

John Hindle: Right! The backlash is growing, Leslie, and from within. Chris Hughes, co-founder of Facebook, has just called for the company to be broken up, arguing it’s too big and powerful for regulatory remedies. Its algorithms are too inscrutable, its de-facto social media monopoly stifles innovation, and its goal of worldwide domination encourages bad behavior (Cambridge Analytica, anyone?).

And to your point, Craig, the economic impulse to grow and dominate in tech is effectively irresistible—achieving scale is organic to the domain (and relatively easy), and good behavior depends totally on the motives, incentives and decisions of executives and owners.

But the race is well and truly on: a recent McKinsey Global Institute study projects that companies deploying artificial intelligence capabilities early and widely stand to gain 12x the economic benefits that followers will realize over a 10-year period, while non-adopters will actually decline from current levels. Who can risk not joining the battle?

So there are clearly economic incentives and unique properties in intelligent automation that can’t be ignored or wished away, and some kind of governance is needed to mitigate harmful outcomes. But what’s the underlying moral framework that should guide this arms race? Peaceful Co-existence or Mutually Assured Destruction?

A starting point might be a kind of Hippocratic Oath for the industry: Do No Harm. But how do we build purpose into intelligent automation when it has no inherent moral framework? As Stuart Russell, UC Berkeley Professor of Computer Science puts it, machines may be better at making decisions, but that’s not the same as making better decisions.

So are we smarter than the machines, or will they win in the end, aided and abetted by our inattention and collective greed? For the foreseeable future, I tend to agree with Russell that millions of years of natural selection give humans an inimitable advantage—a complete mental “blueprint” of reality—enabling us to plan and learn, to ask and answer “what if?” questions involving an infinitely diverse set of variables. With no equivalent representation of reality and a much more limited set of variables, the old axiom still applies to intelligent automation: garbage in, garbage out.

All the more reason we need effective governance.

Leslie Willcocks: Indeed, but then you have to take into account some practical difficulties with governance and regulation! Let me point to a few. A regulation requires a clear definition that is widely recognised. But what Intelligent Automation is—surely a set of technologies—and what it can and cannot do depend on the context, and are difficult to pin down. How it works is often opaque, as well. Furthermore, regulation might work for large organizations, but IA research and development can happen online, and the software is easily accessible. People working on IA and with IA can be in different legal jurisdictions, even in the same team. Which set of regulations prevails, and can the stricter regulation simply be evaded? Then again, people made legally responsible for AI might lose control over its consequences and how it is subsequently used.

One-size-fits-all regulations just might be too simplistic for complex phenomena. For example, would a general rule to disclose all source code or how an algorithm arrives at its conclusions be applicable and useful in every situation?

I think we need to move on several fronts at the same time. For example, companies can develop their own regulatory and ethical guidelines in line with industry standards, and these could be aligned with legally binding human rights—including issues of privacy, safety, and non-discrimination, for example. In the US, the Food and Drug Administration and the Defense Department are working towards creating their own policies. From within the IT sector, the Software and Information Industry Association (SIIA) and the Information Technology Industry Council (ITI) have proposed guidelines. Industries have also come together in other ways, such as in the development of the Asilomar AI Principles by scientists, philosophers and researchers. The Partnership on Artificial Intelligence to Benefit People and Society was formed by tech giants Google, Facebook, Amazon, IBM and Microsoft, and now has more than 80 partners including academics, scientists and non-profit organizations. OpenAI was co-founded in 2015 by Elon Musk and others to develop artificial general intelligence that is safe and broadly beneficial.

John Hindle: All interesting developments, but, following Craig’s skepticism, enough examples are still emerging to leave open the question of whether companies can be relied on to put self-regulation ahead of shareholder interests and the bottom line. In addition, these attempts at self-regulation are not legally binding, and may lead to a monopolization of data and the technology rather than its application for social good.

Leslie Willcocks: Well, yes—any attempt at regulating Intelligent Automation needs to ensure that civilian data rights are protected and secured from innovations that have the increasing ability to sense and compute everything they “see” and “hear.”

This would suggest that, with Intelligent Automation, governments really do have a hyper-active role to play. The AI Now Institute at New York University, an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence, has done some good thinking on this, suggesting (in 2018) eight roles for government:

  • Enable existing sector and industry organisations to regulate the development and deployment of AI technologies rather than attempting a wholesale approach to overseeing the technology.

  • Determine the data protection and rights to privacy of citizens, especially in the use of facial recognition technology.

  • Ensure that companies protect whistle-blowers.

  • Develop advertising standards for AI developers and service providers to limit the hype.

  • Ensure that the benefits of AI technology are experienced more widely, and not monopolised by the owners of the large tech companies.

  • Ensure clear communication about who is responsible, and the reasons why, if things go wrong with the technology.

  • Ensure that companies are responsible to more than just their shareholders—for example, audits should be conducted by external ethics boards and monitoring should take place by independent industry organisations.

  • Task technology companies with responsibility to promote clearer accountability in the development and deployment of algorithms.

That sounds like a lot of regulation, but we are in catch-up mode and the technology development is accelerating. The fundamental question with any powerful technology is always: does “can” translate into “should”? Enter ethics and social responsibility. In the case of AI, belatedly. But taking meaningful action to steer this set of technologies toward humanly and socially acceptable goals now has to be ingrained in the very ways we think about, design, develop and deploy AI.

Conversation to be continued...

About the Authors:

Leslie Willcocks is Professor of Technology Work and Globalization and Director of the Outsourcing Unit at the Department of Management at London School of Economics and Political Science.

John Hindle has an extensive international business background. He currently serves as Vice Chair of the IEEE P2755 Intelligent Process Automation Working Group, a multilateral standards initiative for the emerging Intelligent Process Automation industry.

Craig Mindrum is a management and strategic communications consultant and writer. He taught business ethics for 15 years at DePaul University.

19/6: AUTOMATION AND THE FUTURE OF WORK – WHAT STORY DO YOU WANT TO FEAR?

By Leslie Willcocks, Professor, Department of Management, London School of Economics and Political Science

‘Automation threatens 1.5 million workers in Britain, says ONS’ (The Guardian headline, 25th March 2019)

You probably now feel distressed and anxious. You certainly want to read on. Are you one of those threatened? What sort of things can these machines do to me, to us, to business, to life itself? But you have seen this storyline many times before. I debunked it in our 2018 book Robotic and Cognitive Automation: The Next Phase (see chapter 9), and will be publishing a monograph on all the reports, ‘Robo-Apocalypse Cancelled? Reframing The Automation and Future of Work Debate’, in a few months’ time. Meanwhile, let me provide some relief. This column is going to tell you what the report actually said, and put it into the bigger context it deserves.

Here is the basic statement from the report itself: ‘Around 1.5 million jobs in England are at high risk of some of their duties and tasks being automated in the future, Office for National Statistics (ONS) analysis shows… The ONS has analysed the jobs of 20 million people in England in 2017, and has found that 7.4% are at high risk of automation’.

Firstly, it is useful to analyse the headline. The word ‘threaten‘ is used and attributed to the ONS, yet what the ONS actually said – quoted above – is different. The report talks of England, while the headline talks of ‘Britain’. Well, yes, England is in Britain, but the sample does not cover the rest of the country, a distinction the ONS itself is no doubt careful about. The implicit assumption is that 1.5 million workers will be displaced by automation (7.4% of the 20 million jobs analysed is roughly 1.5 million, hence the headline figure). The actual report says that ‘high risk’ means 70% or more of a job is likely to be automated, i.e. not elimination of the whole set of tasks making up the job. Not quite the whole job, then, and not all of the time – and no room in the newspaper analysis (as opposed to the ONS report) for the fact that jobs consist of multiple tasks that can be recombined, and jobs restructured, as technology takes over repetitive routine work.

The figure of 7.4% of jobs being highly impacted by automation is, in fact, way off the Frey and Osborne scare figure of 35% of UK jobs lost through automation – even though the ONS partly uses the Frey and Osborne methodology. One limitation that the newspaper analysis inherits from both Frey and Osborne (2013) and the ONS 2019 report is that none of them seriously stipulates the time horizon, i.e. WHEN and, indeed, even IF automation will occur. Frey and Osborne perfunctorily mention 10, maybe 20, years, but were reporting statistical probabilities based on the characteristics of jobs and automation tools, rather than on the practical challenges of adoption.

Several media articles failed to report the ONS finding that between 2011 and 2017 the number of jobs at high risk of automation actually FELL, from 8.1% to 7.4% of the representative working population – was this too much like good news?!

In actual fact, the headline news here is not that interesting. Multiple reports in 2018 and 2019 are showing that the headline net job loss figure is going to be quite low over the next 12 years. The media follow Frey and Osborne and the ONS in not looking at the job gains – some reports suggest these will be considerable in the medium term – that could be extrapolated forward and set against the 7.4% of jobs extrapolated as adversely and highly impacted.

In practice, the skills shift is the worrying and transformative challenge and storyline, not the job loss figure. The ONS report does pick up on this and has some interesting data. Many repetitive, routine, low-skilled tasks are under threat, but the ONS shows that 69.9% of all jobs at high risk are part-time, and 70.2% of high-risk roles are carried out by females. Age also has a bearing, with the younger (20-29 years) and older (40-65 years) age groups more at risk than 29-40 year olds. The South East region of England is likely to experience lower probabilities of job loss through automation, partly because more job profiles there involve higher, less automatable skills.

In a broader context, the present study provides useful data but is limited by not looking at potential job gains, how fast the technology will actually develop, the economic feasibility of automation in specific labour markets, regions and sectors, organizational readiness and absorption capacity for automation, and where skills shortages will slow automation. Nor does it look at how far the exponential data explosion, bureaucracy and regulation are already creating a dramatic increase in the amount of work to be done, turning automation into a coping mechanism rather than the job killer so beloved by the media. Of course, the study does not seek to engage with more macro factors like ageing populations, birth rates, productivity and economic growth targets, which can all affect the speed of deployment and levels of employment, as well as the skills required.

My takeaways are that the figures are much lower than in the earlier studies; that it is interesting that the number of high-risk jobs decreased between 2011 and 2017; and that the differing effects by gender, age and region are perhaps the most important findings going forward.

19/5: FROM EFFICIENCY TO ENABLEMENT: THE POWER OF CONNECTING TECHNOLOGIES

Dr. Leslie Willcocks & Dr. John Hindle

Effective RPA Centers of Excellence invariably do project governance well. But to achieve corporate coherence on multiple related technologies, RPA CoE activities need to be linked to wider IT, cognitive automation and digital transformation developments arising elsewhere in the organization, requiring decisions in five major areas:

• Automation Principles – Clarifying the business role of automation technologies

• Automation and IT/digital Architecture – Defining integration and standardization requirements

• Automation and IT/digital Infrastructure – Determining shared and enabling services

• Business Application Needs – Specifying the business need for purchased or internally developed automation applications

• Automation Investment and Prioritization – Choosing which initiatives to fund and how much to spend.

To achieve these ends, the RPA CoE mission and scope must be expanded and redefined to become a comprehensive Automation Center of Excellence, with authority to manage and coordinate all automation projects and the overall program. Business and service unit managers become primarily responsible for decisions on business and process applications, with expert support and some mandates from the CoE. Technology decisions are primarily made by a combination of the CoE and IT, with the CoE responsible for design, development, delivery, operations and maintenance, and IT for integration challenges, IT architecture/infrastructure, and IT trajectory issues.

Ultimately, however, mature RPA users are looking to move beyond operational excellence to continuous innovation, and to accelerate their digital transformation programs. As organizations realize that both the RPA and CA (cognitive automation) realms enable new business strategies, and that together they can complement and magnify value, we are seeing the rise of service automation Centers of Enablement, bringing the full force of the service automation landscape together under a single center. We think this center will report to a Chief Digital Strategy Officer or other C-suite executive. By early 2019 we saw several financial services companies moving in this direction.

One approach is to expand and uplift the existing Automation CoE mission, supported by additional skills and resources. Another is to bring several different centers together – e.g. R&D, innovation, digital, RPA, cognitive – and co-locate, integrate and scale their efforts. In practice the specific structure adopted is less important initially than introducing the extra capabilities needed for continuous innovation, expressed as future-focused roles:

  • Innovation leader. Business-focused, executive-level. Devising and engaging in organizational relationships and arrangements supporting innovation. Tracking emerging technologies, identifying where the business value might be, and aligning the strategy, structure, process, technology and people required to migrate the organization to new sources of business value.

  • Technical architect. Technology-focused. Future-proofing the 3-5 year technology trajectory through architecture planning and design for an efficient, effective, enabling technology platform.

  • Relationship builder. Business- and technology-focused. An integrating, operational role building understanding, trust and cooperation with business users, and identifying and helping delivery of valuable business innovations.

  • Supplier/partner developer. Service-focused. Understanding and benchmarking the external market for automation technologies and services. Engaging with external parties and in-house service staff to release combined innovation potential in order to gain mutual business value.

  • Innovation monitor. Value-focused. Developing and auditing metrics on efficiency, effectiveness, and enablement. Looking for continuous improvement and innovation. Reviewing progress, anticipating problems, driving out business value. In our previous blog we discussed our Total Value of Ownership (TVO) framework, specifically designed to help value and innovation monitoring.

This will be an accelerating trend, in our view, because increasingly, organizations will create competitive advantage by connecting a portfolio of technology innovations that we call SMAC/BRAID, including Social media, Mobile technologies, Analytics and Big Data, Cloud services, Blockchains, Robotics, Automation of knowledge work (RPA and CA), the Internet-of-Things, and Digital Fabrication (i.e., 3-D printing), to transform service delivery. Organizations usually experiment with new technologies in innovation labs, but getting vetted technologies out of digital labs and into production environments, via Centers of Enablement focused on rapid delivery, will become a competitive differentiator.

19/4: Beyond ROI: Towards Total Value of Ownership

Dr. Leslie Willcocks & Dr. John Hindle, Knowledge Capital Partners

Initial adoption of Robotic Process Automation (RPA) typically starts with efficiency – using new technology to improve accuracy, quality, and speed while reducing costs. And the most common business case metric is some variant of Return on Investment (ROI), with the reference benchmark being the cost of a Full-Time Employee (FTE). But we know from decades of research that traditional ROI measures and cost/benefit analyses typically understate ‘soft’ and strategic benefits when applied to IT investments, and don’t account for many operational and maintenance costs, as well as human and organizational costs, which can exceed technical costs by 300-400%.

One remedy, at least on the cost side, has been to focus on Total Cost of Ownership (TCO), defined as the total technical, project, human and organizational acquisition and operating costs, as well as costs related to replacement or upgrades at the end of the life cycle. TCO adds up all resource costs across all the activities comprising the automation life cycle, flushing out hidden costs so often missed when using ROI. By mid-2018, some 67% of Blue Prism clients had a TCO model. Of these, 40% started with a TCO model, while 60% developed it over time.
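
To make the TCO arithmetic concrete, here is a minimal sketch of a life-cycle cost roll-up. The category names and figures are illustrative assumptions only, not KCP or Blue Prism data; the point is simply that TCO sums every resource cost across the life cycle, not just the technical line items a narrow ROI case would count.

```python
# Illustrative TCO roll-up for an RPA deployment.
# All category names and figures are hypothetical, for illustration only.

lifecycle_costs = {
    "technical":      {"licences": 70_000, "infrastructure": 30_000},
    "project":        {"process_redesign": 70_000, "configuration": 80_000},
    "human":          {"training": 40_000, "change_management": 50_000},
    "organizational": {"governance_coe": 45_000, "maintenance": 35_000},
    "end_of_life":    {"upgrade_or_replacement": 30_000},
}

# TCO sums every resource cost across the automation life cycle.
tco = sum(cost for category in lifecycle_costs.values() for cost in category.values())

# A narrow business case often counts only the technical line items.
technical_only = sum(lifecycle_costs["technical"].values())
hidden = tco - technical_only

print(f"Technical costs only:    {technical_only:,}")  # 100,000
print(f"Total cost of ownership: {tco:,}")             # 450,000
# Here the non-technical costs are 3.5x the technical ones, consistent
# with the 300-400% range noted above.
print(f"Costs hidden from a narrow view: {hidden:,}")  # 350,000
```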

The real limitation so far in RPA assessment, however, has been in establishing benefits. Inherited from IT evaluation practice, one tendency has been to understate total costs in order to balance them against only the hard, financial benefits allowable under traditional ROI or TCO measurement regimes. But this does not lead to gaining strategic value from RPA, nor does it treat RPA as strategic. The truth is that a new measure of net benefits is needed in order to drive strategic behavior and gains.

Based on extensive research at Knowledge Capital Partners, we have developed a new measurement framework for service automation investments we call Total Value of Ownership (TVO). With this concept, the objective is to ensure that business cases for service automation are driven by:

  • total costs (both explicit and hidden costs)

  • multiple expected business benefits, and the strategic returns from future business and technical options made possible by RPA (hidden value).

Our TVO Framework is shown below, with Total Costs on the left-hand side of the equation, matched against Total Benefits on the right.

[Figure: Total Value of Ownership framework (Source: Knowledge Capital Partners. © All Rights Reserved)]
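
As a companion sketch, the TVO logic can be expressed as total benefits across the Three E’s set against total costs, explicit and hidden. Again, every category and figure below is a hypothetical illustration of the framework’s structure, not client data.

```python
# Illustrative Total Value of Ownership (TVO) calculation.
# All figures are hypothetical; the structure mirrors the framework above:
# total costs (explicit and hidden) set against Three E's benefits.

total_costs = {
    "explicit": 300_000,  # costs a traditional ROI case would capture
    "hidden": 150_000,    # operational, human and organizational costs
}

total_benefits = {
    "efficiency":    {"fte_savings": 400_000, "error_reduction": 60_000},
    "effectiveness": {"faster_cycle_times": 120_000, "better_compliance": 50_000},
    "enablement":    {"new_services": 150_000, "future_platform_options": 100_000},
}

costs = sum(total_costs.values())
benefits = sum(v for e in total_benefits.values() for v in e.values())

print(f"Total costs:    {costs:,}")             # 450,000
print(f"Total benefits: {benefits:,}")          # 880,000
print(f"TVO (net):      {benefits - costs:,}")  # 430,000

# A narrow ROI case compares explicit costs with efficiency benefits only,
# missing both hidden costs and the Effectiveness/Enablement value.
narrow_roi_view = sum(total_benefits["efficiency"].values()) - total_costs["explicit"]
print(f"Narrow ROI view: {narrow_roi_view:,}")  # 160,000
```

The design point is that the narrow ROI view is a strict subset of the TVO view: it is not wrong, merely incomplete.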

On the cost side we include all relevant activities and resources, not just traditional ROI inputs. On the benefits side, we have already found very strong empirical evidence amongst Blue Prism and other clients using RPA to achieve a ‘triple win’ for shareholders, customers and employees. Our “Three E’s” framework is designed to capture all these, but also locates further hidden value frequently omitted from clients’ business cases.

Much hidden value resides in the potential from applying process analytics for further Efficiency gains. Additional hidden value is located in the Effectiveness area (‘doing things right/differently’) by using automation and analytics to change how business is done, or to extend its capabilities. Meanwhile when we come to Enablement, we have already found multiple examples of enhanced customer journeys, new services, and increased profit/revenue.

Furthermore, we need to capture the hidden strategic value of the future options enabled where RPA creates a powerful digital business platform, operating something like a ‘bus’ on a computer motherboard but here connecting and integrating a growing range of innovative cognitive and AI solutions, supporting a blended human and digital workforce. Discounting the source of such major hidden, future value is a serious mistake.

Identifying and capturing these business-driven Effectiveness and Enablement value opportunities requires evolving the initial RPA Center of Excellence – first to become an Automation Center of Excellence, and ultimately to become what we call a Center of Enablement. We discuss how to do this in the next blog.

19/3: The critical role of communications in RPA change management

Words by John Smith

All our evidence gathering at KCP has made absolutely clear to us that the quality of change management is critical to successful outcomes in RPA. Quality is the operative word here, because of course everyone will tick the box “change management”. The issue is, how good is it? Too often there is not enough focus on getting the answer to that question. Plenty of energy is spent investigating whether the technology will live up to what is claimed for it. There is also increasing understanding and focus in the user community on finding the skills needed to redesign processes to optimise the benefit of RPA – rather than plugging it into established process designs. But then, regrettably, the people part of the equation can get a bit fuzzy. Good change management practice puts people at the heart of success, mobilises colleagues to get the most from the technology, and presents a credible narrative that everyone engaged in the enterprise can buy into. It doesn’t obscure the uncomfortable truths about the need for change, and it explains all outcomes for stakeholders – challenging and positive alike – equally. Change management isn’t simply about the mechanics of implementation; it has strong, motivating, credible, intelligent and farsighted communications built into it as a vital means of propulsion.

In our case study work at KCP we have seen less communication of this calibre than might be expected in companies starting out on a path offering strategic gain. It leads us to wonder whether tentativeness in change management communications in user organisations is caused by the climate of debate about intelligent automation in society. Alarm dominates so much of the analysis and conversation in the public sphere about how this new age of technology will shape our lives. Maybe that is inevitable. And perhaps one of the biggest challenges any single organisation now faces when it decides on RPA investment is to chart its own course for change management communications in a turbulent sea of loud opinion that affects every individual away from their work. The real-life context in which we individually translate change into impact on our own existence, and make a judgement about whether we want it to happen or not, has many inputs. What a partially informed politician might say, or what a journalist might comment online or on air with little time to prepare, will be weighed against what our CEO tells us.

For our partners at KCP, evidence is always king. What the evidence from the now hundreds of RPA deployments we have researched shows is that the companies succeeding with RPA, and on track to achieve the value they envisioned, have communicated effectively both the big picture – the why, the goal, the how – and the realities of the impact on their people. They have communicated early in their adoption of the technology, and continued with clear, consistent and regular messages and engagement with colleagues. They have focused on the value to employees, including less repetitive, boring work; co-working for higher productivity; learning new skills and roles; being recognized as innovators; and being able to focus more on customer service. They have faced up to the fact that we all hate uncertainty, and have confronted the inevitable potential for downsizing that intelligent automation brings. They have examined the implications with the individuals and groups involved, with joined-up human resources strategies in place, and have had the courage to set out and adopt all the options available – natural attrition, redeployment, reduced dependency on outsourcing – with redundancy or early retirement as a last option. It’s worth emphasising that in nearly half the user companies we interviewed in a recent survey, communications highlighted the opportunities presented by automation to take on more work and grow the organisation.

In the Insight section of this site you will find a briefing paper that examines the whole role of change management in RPA (Keys To RPA Success – Part Four: Change Management & Capability Development). Change management has a vital role, and communications is one of its most critical components – especially as the debate about the impact of intelligent automation captures the imagination of so many people, for better or worse.

19/1: Is RPA ‘The Real Deal?’

Dr. John Hindle, Dr. Leslie Willcocks, Dr. Mary Lacity

Robotic Process Automation increasingly looks like a game-changer. Why? And why now? We discover that you need to look at the past, as much as the present, for the real answers.

As professional researchers, we’ve been studying IT-enabled business transformations for over 25 years, taking a long view of the sector through multiple generations of technology. We’ve been especially fascinated with the recent performance of Robotic Process Automation (RPA) and the impressive results customers have achieved over the 5 years we’ve been researching it. Our service automation research base includes dozens of case studies and multiple quantitative surveys, analyzed in several books, articles and conference presentations.

Is RPA simply the newest “new thing” in enterprise technology? Or is something deeper going on? We’ve been asking the same questions of ourselves that others have been asking: “What’s different about RPA? Why is everybody so excited? Why is the value realized so high?” The answer, our research suggests, has a lot to do with context.

For the past 30 years, companies in all industries have undertaken a succession of large-scale initiatives to improve operational effectiveness and competitiveness, applying the 3 classic enterprise transformation levers: People, Process, and Technology.

As our summary table below suggests, all these initiatives, from ITO and BPO to BPR, ERP, TQM, BPM and all the other acronyms, share some common features: they’re expensive, they involve long-term consultancy services, standalone “strategic” budgets and dedicated deployment teams, and they take a long time to implement. And oh, they’re all disruptive to the business.

[Summary table: common features of enterprise transformation initiatives]

Looking at this picture, one might well pity the Operations and IT executives charged with leading and managing these interventions – it’s hard, painful work, with expensive, inflexible, long-cycle tools, for uncertain and often hard-to-quantify gains, with limited recognition and reward.

Enter RPA, which emerges in this context as a surprisingly winning solution. RPA technologies are available in three main models: autonomous, enterprise-grade RPA, delivered on a server or cloud infrastructure; local, “assistive” desktop versions, known as Robotic Desktop Automation (RDA); and a third, more IT-intensive model resembling a software development kit (SDK). While all 3 models have their value, enterprise-grade RPA offers the business a code-free, flexible, general purpose toolset. While not problem-free or without challenges, RPA is relatively easy to configure, offers rapid implementation, high ROI, and early benefit realization, with minimal pain and mostly happy users.

For business leaders, it eliminates multi-year waits on the IT work queue, enables control over configuration to meet changing process demands, and allows the workforce to pursue more rewarding, revenue-generating activities while increasing productivity. Conversely, RPA also enables the IT function to focus on core enterprise infrastructure and relieves pressure on shrinking IT resources (people and budgets), while maintaining security and governance. Because RPA software operates at the presentation layer, moreover, it doesn’t disturb or compromise underlying systems of record.
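
To illustrate what operating at the presentation layer means in practice, here is a minimal sketch using the open-source pyautogui library rather than any particular RPA vendor’s toolset; the function name, screen coordinates and field values are placeholders we have invented for illustration. The bot drives the application’s existing user interface exactly as a human user would, which is why the underlying system of record is left untouched.

```python
# Minimal illustration of presentation-layer automation: the "bot" drives
# the existing application's user interface exactly as a human would, so
# the underlying system of record is never modified directly.
# Uses the open-source pyautogui library; the function name, coordinates
# and field values are invented placeholders.
import pyautogui

def copy_invoice_to_erp(invoice_number: str, amount: str) -> None:
    # Click into the ERP entry form's first field (placeholder coordinates).
    pyautogui.click(x=420, y=310)
    pyautogui.typewrite(invoice_number, interval=0.05)

    # Tab to the next field and type the amount, as a user would.
    pyautogui.press("tab")
    pyautogui.typewrite(amount, interval=0.05)

    # Submit via the same UI control a human would use: no database
    # connection, no API call, no change to the system of record itself.
    pyautogui.press("enter")

copy_invoice_to_erp("INV-10342", "1250.00")
```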

Is RPA a magic elixir? Given the challenges noted earlier, and the need for specialist implementation expertise, RPA isn’t a panacea for all enterprise ills. But it clearly changes earlier paradigms of enterprise transformation and greatly boosts productivity. In autonomous server- and cloud-based models, enterprise RPA creates an agile, flexible technology platform on which to build a truly digital business – a new, business-driven and business-managed layer on the IT “stack,” if you will. For all those unsung operations leaders, our research to date offers abundant evidence why they’re so excited about the opportunity and the benefits.

Over the coming weeks and months, we’ll be publishing a series of reports, sponsored by Blue Prism, on “The Keys to RPA Success” – check back here regularly for updates, or download our findings from our Reports page here.