Artificial intelligence (AI) and machine learning (ML) are the next set of transformative technologies in the tech sector. And while much has been prophesied about their impact, data points are few and far between. 451 Research's Voice of the Enterprise (VotE): AI & Machine Learning survey is a new biannual offering that addresses the need for quantitative metrics around these emerging technologies. It provides insight into the adoption patterns, benefits, barriers and applications of these critical technologies. Similar to recent reports on industry-specific use cases and AI adoption strategies, this Spotlight leverages the results of the most recent survey to detail the critical challenge of measuring the impact of AI initiatives. The data comes from a cohort of more than 1,600 line-of-business and IT professionals in North America, EMEA and Asia-Pacific.

The 451 Take

There is no point in enterprises adopting technology just because it's new and interesting; there must be some measurable benefit to be gained from it. However, at this early stage in the development and deployment of a technology, quantifying that benefit can be very difficult. According to new data from 451 Research's VotE: AI and Machine Learning survey, fewer than half of early adopters have defined key performance indicators (KPIs) around their AI and machine learning initiatives, although the vast majority plan to do so within a year. Clearly, an 'adopt now, ask questions later' mindset pervades this first stage of enterprise AI adoption. Major obstacles include defining the right success metrics, technical limitations and operationalizing data. Given the substantial benefits of well-implemented AI systems and the significant risks of poorly constructed AI, organizations plan to focus more on success metrics as the industry moves further along the adoption curve.


Adopt Now and Ask Questions Later

Although AI – and machine learning in particular – is still very early in its enterprise deployment, 451 Research survey data provides insight into how adopters are measuring the impact of these new technologies. According to the survey results presented in Figure 1 below, only 46% of AI and machine learning adopters currently have defined KPIs to measure their initiatives. Another 42% of respondents say they plan to develop KPIs within the next year, and a 12% minority say they have no plans to produce KPIs.

Figure 1: Are early adopters of AI and machine learning defining KPIs?

Source: 451 Research Voice of the Enterprise: AI and Machine Learning, 2H 2018

The data suggests that enterprises are approaching AI and machine learning with an 'adopt now, ask questions later' mindset. For many respondents, the absence of defined KPIs does not preclude the implementation of this emerging technology. This approach is, of course, inherently high-risk. What if the 'intelligent' system performs worse than its predecessor? What if it makes one disastrous decision? The data shows many enterprises are prioritizing putative rewards over potential risks.

Digging a little deeper into the data reveals further insights about KPI definition:
  • Larger companies (those with more than 1,000 employees) are more likely than smaller companies (1-999 employees) to have already defined KPIs, 52% to 43%.
  • Among the smallest companies in our survey – those with annual revenue of less than $1m – a little over a quarter (26%) say they have no plans to define KPIs around their AI initiatives, compared with 14.9% of companies with revenue between $1m and $9.99m and 11% of respondents with revenue over $1bn.
  • 62% of enterprises pursuing a primary AI strategy of vendor-provided machine learning-specific tools say they have defined KPIs.
  • 65% of the most tech-savvy users have defined KPIs, whereas 24% of conservative respondents say they have no plans to do so.

The Various Barriers to Defining KPIs

Of those organizations that have yet to define KPIs around AI and machine learning, what is holding them back? The answer is, not surprisingly, a lot. Figure 2 below displays the most prominent barriers cited by respondents who had yet to implement KPIs around their AI initiatives.

Figure 2: Most prominent barriers to defining KPIs

Source: 451 Research Voice of the Enterprise: AI and Machine Learning, 2H 2018

A plurality of respondents – 38% – say they face difficulties with the initial step of the KPI creation process: defining which metrics should be collected. This statistic is a bit worrisome. The best AI initiatives are those that address a narrow business outcome, and the corresponding KPIs should directly relate to that goal. For example, in the case of a customer service chatbot, a valuable KPI would be whether the system solves customer issues more efficiently than a human representative. Admittedly, there are additional interacting variables that generate complexity, but it is still disconcerting that the largest percentage of respondents cite metric definition as a challenge.
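To make the chatbot example concrete, a KPI of this kind can be reduced to a simple calculation over support-ticket records. The sketch below is purely illustrative – the `Ticket` structure, field names and sample figures are assumptions for demonstration, not survey data – but it shows how 'solves issues more efficiently than a human representative' decomposes into two measurable numbers: resolution rate and average time to resolution.

```python
# Illustrative sketch of a chatbot-vs-human KPI comparison.
# All data structures, field names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Ticket:
    handled_by: str   # "bot" or "human"
    resolved: bool    # was the customer's issue solved?
    minutes: float    # time from open to close

def kpi_summary(tickets, channel):
    """Return (resolution rate, avg minutes to resolution) for one channel."""
    subset = [t for t in tickets if t.handled_by == channel]
    resolved = [t for t in subset if t.resolved]
    rate = len(resolved) / len(subset)
    avg_minutes = sum(t.minutes for t in resolved) / len(resolved)
    return rate, avg_minutes

# Hypothetical sample of six tickets
tickets = [
    Ticket("bot", True, 4.0), Ticket("bot", False, 2.0),
    Ticket("bot", True, 3.0), Ticket("human", True, 12.0),
    Ticket("human", True, 9.0), Ticket("human", False, 20.0),
]

bot_rate, bot_time = kpi_summary(tickets, "bot")
human_rate, human_time = kpi_summary(tickets, "human")
```

In this toy sample both channels resolve the same share of tickets, but the bot resolves them faster – exactly the kind of head-to-head comparison a well-defined KPI makes possible, and exactly what is impossible to demonstrate when, as for 38% of respondents, the metrics themselves have never been pinned down.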

The next two leading barriers – technical limitations (31%) and operationalization (29%) – are downstream of metric definition, so it makes sense that fewer respondents report problems with these steps. Organizational inertia (26%) is a parallel, and admittedly opaque, issue that hampers many business initiatives, so it is not surprising to find that it inhibits the implementation of KPIs around AI projects.

Finally, 13% cite a completely different reason for not defining KPIs: they are so confident in the success of their AI initiatives that metrics are superfluous. Even the laziest business analysts will admit that confidence is no substitute for evidence; however, digging into the data provides some context for this statistic. Respondents whose initiatives are security-related, or who employ a systems integrator, are more likely to say they don't need to define KPIs. Why would that be? Some AI applications are categorical upgrades from the systems they replace. If your business is going from an entirely manual SOC to one with AI-assisted remediation tools, you probably feel that you don't need KPIs to tell you that it's an improvement.

Toward a KPI-first Future?

It is important to underscore that this data is a snapshot of the current – and relatively early – state of AI and machine learning adoption in the enterprise. As these technologies become more commonplace, business processes and workflows will standardize. It's thus hard to imagine that the 'adopt now, ask questions later' mindset will persist into the future, especially as enterprises look to imbue machine intelligence into their critical systems.

Nick Patience
Founder & Research Vice President

Nick Patience is 451 Research’s lead analyst for AI and machine learning, an area he has been researching since 2001. He is part of the company’s Data, AI & Analytics research channel but also works across the entire research team to uncover and understand use cases for machine learning. Nick is also a member of 451 Research’s Center of Excellence for Quantum Technologies.

Jeremy Korn
Senior Research Associate

Jeremy Korn is a Senior Research Associate at 451 Research. He graduated from Brown University with a BA in Biology and East Asian Studies and received a MA in East Asian Studies from Harvard University, where he employed quantitative and qualitative methodologies to study the Chinese film industry.

Keith Dawson
Principal Analyst

Keith Dawson is a principal analyst in 451 Research's Customer Experience & Commerce practice, primarily covering marketing technology. Keith has been covering the intersection of communications and enterprise software for 25 years, mainly looking at how to influence and optimize the customer experience.
