Meaningful Artificial Intelligence (AI) deployments are just beginning to take place, according to Gartner. Gartner's 2018 CIO Agenda Survey shows that 4% of CIOs have implemented AI, while a further 46% have developed plans to do so. "Despite huge levels of interest in AI technologies, current implementations remain at quite low levels," said Whit Andrews, research vice president and distinguished analyst at Gartner.
"However, there is potential for strong growth as CIOs begin piloting AI programs through a combination of buy, build and outsource efforts."
As with most emerging or unfamiliar technologies, early adopters are facing many obstacles to the progress of AI in their organizations. Gartner analysts have identified the following four lessons that have emerged from these early AI projects.
1. Aim Low at First
"Don’t fall into the trap of primarily seeking hard outcomes, such as direct financial gains, with AI projects," said Mr. Andrews. "In general, it’s best to start AI projects with a small scope and aim for 'soft' outcomes, such as process improvements, customer satisfaction or financial benchmarking."
Expect AI projects to produce, at best, lessons that will help with subsequent, larger experiments, pilots and implementations. In some organizations, a financial target will be a requirement to start the project. "In this situation, set the target as low as possible," said Mr. Andrews. "Think of targets in the thousands or tens of thousands of dollars, understand what you’re trying to accomplish on a small scale, and only then pursue more-dramatic benefits."
2. Focus on Augmenting People, Not Replacing Them
Big technological advances have historically been associated with reductions in staff head count. While reducing labor costs is attractive to business executives, it is likely to create resistance from those whose jobs appear to be at risk. Organizations that pursue this way of thinking can miss out on real opportunities to use the technology effectively. "We advise our clients that the most transformational benefits of AI in the near term will arise from using it to enable employees to pursue higher-value activities," added Mr. Andrews.
Gartner predicts that by 2020, 20% of organizations will dedicate workers to monitoring and guiding neural networks.
"Leave behind notions of vast teams of infinitely duplicable 'smart agents' able to execute tasks just like humans," said Mr. Andrews. "It will be far more productive to engage with workers on the front line. Get them excited and engaged with the idea that AI-powered decision support can enhance and elevate the work they do every day."
3. Plan for Knowledge Transfer
Conversations with Gartner clients reveal that most organizations aren't well prepared to implement AI. Specifically, they lack internal data science skills and plan to rely heavily on external providers to fill the gap. In the CIO survey, 53% of organizations rated their own ability to mine and exploit data as "limited," the lowest level. Gartner predicts that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.
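As a simple illustration of what bias in data can look like before a model is ever trained, the sketch below flags severe class imbalance in a training set. It is not a method prescribed by Gartner; the pandas-based approach, the column name and the threshold are hypothetical choices made only for illustration.

```python
# Minimal, illustrative bias check: flag classes that are badly
# under-represented in the training data. Names and threshold are hypothetical.
import pandas as pd

def flag_imbalance(df: pd.DataFrame, label_col: str, threshold: float = 0.10) -> bool:
    """Return True if any class in `label_col` makes up less than `threshold` of the rows."""
    shares = df[label_col].value_counts(normalize=True)
    under_represented = shares[shares < threshold]
    if not under_represented.empty:
        print("Under-represented classes:", under_represented.to_dict())
        return True
    return False

# Tiny, made-up example: loan decisions with very few "no" outcomes.
data = pd.DataFrame({"approved": ["yes"] * 95 + ["no"] * 5})
flag_imbalance(data, label_col="approved")  # flags "no" at a 5% share
```

A check this simple will not catch algorithmic or team-level bias, but it illustrates the kind of routine data work that in-house staff need to own rather than outsource.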
"Data is the fuel for AI, so organizations need to prepare now to store and manage even larger amounts of data for AI initiatives," said Jim Hare , research vice president at Gartner. "Relying mostly on external suppliers for these skills is not an ideal long-term solution. Therefore, ensure that early AI projects help transfer knowledge from external experts to your employees, and build up your organization’s in-house capabilities before moving on to large-scale projects."
4. Choose Transparent AI Solutions
AI projects will often involve software or systems from external service providers. It’s important that some insight into how decisions are reached is built into any service agreement. "Whether an AI system produces the right answer is not the only concern," said Mr. Andrews. "Executives need to understand why it is effective, and offer insights into its reasoning when it’s not."
Although it may not always be possible to explain all the details of an advanced analytical model, such as a deep neural network, it’s important to at least offer some kind of visualization of the potential choices. In fact, in situations where decisions are subject to regulation and auditing, it may be a legal requirement to provide this kind of transparency.
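One way to provide that kind of insight, sketched below, is to report which inputs most influence a model's predictions. This is an illustrative example rather than a Gartner recommendation: the use of scikit-learn's permutation importance, the sample dataset and the random-forest model are all assumptions chosen for the sketch.

```python
# Illustrative sketch: surface which inputs drive a black-box model's decisions
# using permutation importance. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by how much shuffling each one degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Even a simple ranking like this gives executives and auditors a starting point for asking why the system behaves the way it does.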