Introduction

Over the past year, AI has continued to make inroads in the enterprise. According to the new 451 Research Voice of the Enterprise: AI and Machine Learning Use Cases 2020 survey, 29% of enterprises have deployed machine learning in some capacity. As an increasing number of these AI projects move from the whiteboard, through proof of concept, to final deployment, adoptees (and the market in general) are beginning to face a series of second-order matters. MLOps, accountable AI, AI infrastructure, sustainable AI, new use cases and overall market maturity are topics that need to be addressed to ensure that the technology continues to deliver benefits in an enterprise context.

The 451 Take

AI is a transformative technology with the potential to impact almost every enterprise process, but in this still-nascent stage of adoption there are many open questions about how it should be implemented and for which use cases. Until now, many adoptees have focused on rolling out AI applications into production rather than on developing standards and procedures to ensure the technology is safe, manageable, robust, explainable and sustainable. As these downstream challenges become more pressing in 2020, we expect the industry to formalize frameworks, products and services around these key areas.


MLOps

MLOps (machine learning operations) broadly describes the practices and technologies that govern the deployment, monitoring and management of machine learning models. As its name suggests, one component of MLOps is the extension of the core principles of DevOps – automation, agility and collaboration – to the lifecycle of machine learning models.

These assets are categorically different from traditional applications and thus require additional considerations. For example, unlike software code, predictive models degrade over time – a phenomenon known as model drift – so adoptees must monitor model performance after deployment. In deciding how frequently to sample and retrain deployed models, organizations must weigh many factors, such as data scientists' time, the compute costs of retraining and the business risk of a wayward prediction. This simple example, which covers only a single deployed model, hints at how quickly the complexity of MLOps compounds. Many organizations we have spoken with recently have 10 or 20 models in production now, but expect that to grow to hundreds within a year or so.
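
The monitoring trade-off described above can be reduced to a simple decision rule. The sketch below is purely illustrative – the function name, the accuracy metric and the tolerance value are our own assumptions, not any vendor's API:

```python
# Hypothetical drift check: compare a model's accuracy on a recent
# labeled sample against the accuracy it had at deployment time.

def should_retrain(baseline_accuracy, recent_accuracy, tolerance=0.05):
    """Flag a deployed model for retraining when its accuracy on fresh
    data falls more than `tolerance` below its deployment baseline."""
    return (baseline_accuracy - recent_accuracy) > tolerance

# A model deployed at 92% accuracy but scoring 85% on recent data has
# drifted past a five-point tolerance and would be flagged.
flagged = should_retrain(0.92, 0.85)
```

In practice the tolerance itself encodes the business trade-off: a tighter tolerance consumes more data-science time and retraining compute, while a looser one accepts more risk of wayward predictions.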

In 2020 we expect MLOps to move from an industry buzzword to a set of formalized best practices. Additionally, we suspect many providers of machine learning platforms will roll out additional features or tools targeting the management of models.


Accountable AI

The transformative potential of AI technology is awesome – it inspires both excitement and a healthy degree of fear, especially as it becomes integrated into more facets of society. One common refrain across the industry is the imperative of ensuring that these proliferating AI systems do not harm those subject to their decision-making. In other words, the technology must be accountable to the humans it impacts.

This goal is not simple. One area of complication is philosophical. What does it mean for an AI system to be accountable? How much more accountable must an application embedded with machine learning models be than a traditional system? A second issue is technological. Prying open the black box of artificial intelligence is difficult, a problem compounded by the popularity of deep learning techniques, which lead to particularly opaque decision-making systems.
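
One widely used family of techniques for prying open that black box is permutation importance: shuffle one input feature and see how much the model's error grows. The sketch below is a minimal, self-contained illustration with a synthetic "model" and data of our own invention, not a reference to any particular explainability product:

```python
import random

# Toy illustration of permutation importance on a synthetic "black box".

def mse(model, X, y):
    """Mean squared error of the model's predictions."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Shuffle one feature column and return the resulting growth in
    error; a large increase means the model leans on that feature."""
    rng = random.Random(seed)
    col = [x[feature_idx] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature_idx] = v
    return mse(model, X_perm, y) - mse(model, X, y)

model = lambda x: 3.0 * x[0]          # toy model: ignores feature 1 entirely
X = [[float(i), float(10 - i)] for i in range(10)]
y = [model(x) for x in X]

# Shuffling feature 0 degrades the error; shuffling feature 1 does not.
```

Techniques like this only approximate a model's reasoning, which is part of why explainability remains an open technological problem rather than a solved one.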

Over the last few years, technology companies have produced sets of principles to guide their AI development and implementation, and many of these organizations are starting to release tools to improve the accountability of their AI products and systems. At the same time, public institutions have begun to debate legal frameworks for regulating the technology. In the coming year, we expect these developments to progress in tandem, with more feature releases from the industry and more defined policy from the public sector.


AI Infrastructure

While much of the AI buzz focuses on use cases, the underlying infrastructure to build these applications is also critical. Each stage of the machine learning process – data management and preparation, model training or inferencing – places unique demands on enterprise IT systems. Additionally, the magnitude of data feeding an AI system can reach petabyte scale, and given the experimental nature of developing machine learning models, AI workloads are not always predictable. Most enterprise adoptees of AI have already begun to develop strategies around their AI infrastructure. They know that the success or failure of this critical technology depends, at least in part, on having adequate (or even superior) infrastructure.

Therefore, in 2020, we expect machine learning practitioners and IT stakeholders to engage in even more concerted decision-making around their AI infrastructure. Budgets will likely expand as enterprises look to procure products and services that turbocharge their IT environments dedicated to AI. At the same time, we expect vendors to introduce new hardware products – storage, networking and particularly servers – targeting on-premises AI workloads, while the hyperscale cloud providers continue to invest heavily in their own infrastructure. In particular, we expect a series of new accelerators – from both incumbent vendors and startups – to hit the market. We also foresee the emergence of new edge devices that address processing needs for low-latency, low-power environments.


Sustainable AI

As more enterprises adopt the technology, AI workloads are increasing in both size and frequency, resulting in more compute resources being used for AI. As a research study from OpenAI demonstrates, this trend is particularly acute in the area of cutting-edge research, where after decades of alignment with Moore's Law (which predicted a doubling of transistor counts – and thus, roughly, processing power – every two years), the compute used for landmark AI experiments is now doubling every few months. One of the main reasons for this change is deep learning. Another driver of power consumption will be the automation of key stages of machine learning development, which makes it easier to consume vast numbers of processor cycles with the click of a button. However, some vendors claim their automated ML is in fact extremely efficient; a lot depends on the chip architecture being used, such as CPUs, GPUs, FPGAs or TPUs.
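
The gap between those two doubling rates is easy to underestimate. A back-of-envelope calculation – using a 24-month doubling period for Moore's law and the roughly 3.4-month doubling period OpenAI reported for landmark training runs – makes the divergence concrete:

```python
# Compare compute growth under two doubling periods over the same window.

def growth_factor(months, doubling_period_months):
    """Multiplicative growth after `months` given a doubling period."""
    return 2 ** (months / doubling_period_months)

# Over two years:
moore = growth_factor(24, 24)        # Moore's law pace: 2x
ai_compute = growth_factor(24, 3.4)  # OpenAI's measured pace: well over 100x
```

In other words, two years of the current research trajectory demands two orders of magnitude more compute than hardware improvement alone supplies – the shortfall is made up with more chips and more power, which is precisely the sustainability concern.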

The growing carbon footprint of this technology is galvanizing a discussion in the industry about the sustainability of AI. One problem is that the research community almost exclusively privileges advances as measured by gains in accuracy, with little or no emphasis on financial cost or computational efficiency. In 2020 we anticipate that a growing segment of AI thought leaders and researchers will push for an environmentally friendly approach to this technology, and that new techniques privileging efficiency will emerge.

At the same time, we expect the market to react to this discussion around sustainable AI. Purveyors of AI applications could begin to promote the sustainability of their products. Hardware vendors will likely highlight the greenness of their offerings, alongside their speed. It is even possible that the market will begin to see eco-friendly hardware products targeting the AI market.


New, Innovative Use Cases

It may seem obvious to say there will be new use cases for AI in 2020, since we see new use cases emerging every day as it is. But we expect organizations to take on increasingly complex problems as they move from applying AI where deterministic, rules-based software was previously used to use cases that were not previously solvable with software at all. We saw evidence of this in our 2019 Voice of the Enterprise AI & Machine Learning Use Cases survey, where 37% of financial services companies said they intend to use AI to solve compliance problems in the future, compared with 28% today. Other prominent examples include drug discovery, the perennial problem of enterprise search and even the design of chips for AI workloads.

These emerging use cases may draw on algorithms at the more adventurous end of AI, such as reinforcement learning, whereby a software agent performs tasks and is rewarded for certain results and not for others, thus learning what works and what does not.
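
The reward loop described above can be shown in a few lines of tabular Q-learning. The environment below – an agent on a five-cell line that earns a reward only for reaching the right end – and all hyperparameters are our own toy construction, chosen purely to illustrate the technique:

```python
import random

# Minimal tabular Q-learning: the agent is rewarded only at the goal
# state and gradually learns that moving right "works".

N_STATES, ACTIONS = 5, (-1, +1)   # five cells; actions: move left or right
ALPHA, GAMMA = 0.5, 0.9           # learning rate and discount factor

def train(episodes=300, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s < N_STATES - 1:
            a = rng.choice(ACTIONS)                      # explore at random
            s2 = min(max(s + a, 0), N_STATES - 1)        # clamp to the line
            r = 1.0 if s2 == N_STATES - 1 else 0.0       # reward only at goal
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

# After training, "move right" outscores "move left" in every
# non-terminal state, even though only the final step is ever rewarded.
q = train()
```

The point of the example is the one made above: nobody tells the agent which moves are good – the sparse reward alone, propagated backward through the value table, teaches it a policy.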


Market Maturation

There will always be a need for general-purpose AI platforms – a set of tools that can be used to build and deploy any sort of AI application. But this isn't something that can be attempted by every AI vendor, although many try. We expect a dose of realism to take hold among the AI startup community – the vendors themselves, as well as their investors – as they realize that there will not be a sustainable market for hundreds of general-purpose AI platforms, and there may eventually be room for only a handful. The winnowing of the public cloud market is an example of this, and the three leading providers there – AWS, Google and Microsoft – all offer general-purpose AI platforms.

We expect more startups to focus on specific problems in specific industries and, where they need commodity capabilities such as speech-to-text or image recognition, to lean on the major AI platforms for them. In 2020 we believe the major cloud platforms will further consolidate their market share by purchasing smaller entrants in the space, particularly those that offer complementary features. The horizontal, cross-industry approach of these big players will leave ample room for smaller companies to tackle niche problems.

Nick Patience
Founder & Research Vice President

Nick Patience is 451 Research’s lead analyst for AI and machine learning, an area he has been researching since 2001. He is part of the company’s Data, AI & Analytics research channel but also works across the entire research team to uncover and understand use cases for machine learning. Nick is also a member of 451 Research’s Center of Excellence for Quantum Technologies.

Jeremy Korn
Associate Analyst

Jeremy Korn is an Associate Analyst at 451 Research. He graduated from Brown University with a BA in Biology and East Asian Studies and received a MA in East Asian Studies from Harvard University, where he employed quantitative and qualitative methodologies to study the Chinese film industry.

Keith Dawson
Principal Analyst

Keith Dawson is a principal analyst in 451 Research's Customer Experience & Commerce practice, primarily covering marketing technology. Keith has been covering the intersection of communications and enterprise software for 25 years, mainly looking at how to influence and optimize the customer experience.
