Three questions to ask vendors before investing in artificial intelligence.
In the late 1990s, people joked that someday even toothbrushes would have internet capabilities. Today, not only are there internet-enabled toothbrushes, but 2017 also brought a vendor claiming to have AI in its toothbrush.
Virtually every technology provider today is pushing artificial intelligence (AI) into their product strategy, with more than 1,000 vendors now claiming to have AI capabilities.
But businesses are not simply buying it.
Although AI offers exciting possibilities, the huge increase in startups and established vendors all claiming to offer AI products without any real differentiation has confused companies and obfuscated the value of more straightforward, proven approaches.
“For example, a vendor showed us a chatbot that was intended to provide a useful dialogue between a customer and a retail company regarding the products it has on consignment,” says Whit Andrews, research vice president at Gartner. “However, when we inquired how the chatbot would improve its own conclusions from subsequent data, or from the customers’ choices, the vendor indicated that the system was based entirely on its own rules, which were regularly updated manually.” While this is not accurately described as an AI product, it might resolve a business challenge admirably.
The different definitions of AI
Part of the problem is that vendors claim to offer AI when they are actually using classic machine learning (ML) solutions rather than more modern techniques such as deep learning.
Most AI products sold today are built on quantitative statistical techniques that can adjust their behavior based on new data, and many vendors offer this kind of product.
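To make that concrete, here is a minimal sketch of what "statistical techniques that adjust behavior based on new data" typically means in practice: an online linear regression that nudges its parameters with each new observation. This is an illustrative example, not any particular vendor's product.

```python
# Sketch of a classic statistical ML model: online linear regression
# trained by stochastic gradient descent. Each new data point nudges
# the learned weight and bias -- the behavior change driven by new
# data that many products labeled "AI" actually rely on.

def make_online_regressor(learning_rate=0.05):
    state = {"w": 0.0, "b": 0.0}

    def update(x, y):
        # Predict with the current parameters, then take a gradient
        # step that reduces the squared prediction error.
        error = state["w"] * x + state["b"] - y
        state["w"] -= learning_rate * error * x
        state["b"] -= learning_rate * error

    def predict(x):
        return state["w"] * x + state["b"]

    return update, predict

update, predict = make_online_regressor()
# Stream observations drawn from the rule y = 2x + 1.
for _ in range(200):
    for x, y in [(0, 1), (1, 3), (2, 5), (3, 7)]:
        update(x, y)

print(round(predict(4), 1))  # close to 9.0
```

Note that the model improves automatically as data arrives, unlike the manually maintained rule system in the chatbot example above.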
Deep learning delivers new options under the AI umbrella with systems that can replace humans in many routine tasks that involve pattern recognition. These systems can, for example, convert speech to text, classify objects in images, catalog faces and automate the driving of a car.
But deep learning is not always the best solution to a problem. First, deep learning products do not produce quantitative, measurable outputs; instead, they identify the likelihood that something belongs to a class by grouping complex patterns. Second, training is a critical aspect of deep learning systems, and it is a compute-intensive process that can take a lot of time. Once training is complete, only small modifications are possible; to accommodate new data, the entire training process must typically be rerun.
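The "likelihood of belonging to a class" contrast above can be illustrated: the final layer of a typical deep classifier emits a raw score per class, and a softmax function turns those scores into probabilities. The sketch below uses hypothetical scores rather than an actual trained network.

```python
import math

def softmax(scores):
    # Convert raw per-class scores (logits) into probabilities.
    # Subtracting the max keeps exp() numerically stable.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits an image classifier might emit for one photo.
logits = {"toothbrush": 3.1, "hairbrush": 1.2, "comb": 0.4}
probs = softmax(list(logits.values()))
for label, p in zip(logits, probs):
    print(f"{label}: {p:.2f}")

# The model does not report a measurement; it reports that the image
# most likely belongs to the "toothbrush" class.
```

This is why the output of a deep learning product is a ranked set of class likelihoods, not a quantitative measurement, and why a buyer should ask how those likelihoods were validated.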
Many vendors oversell the capability of their deep learning systems, glossing over the challenges of training and the need for retraining. Others market their products as deep learning systems, but that is often an overstatement of their capabilities.
As AI accelerates up the Hype Cycle with the promise to change business forever, companies have to distinguish between faux and real AI offerings. One way to do this is by asking vendors to describe the analytical model used in their AI solutions and, from there, infer how well it might perform in a given situation.
3 questions to ask vendors
Companies need to determine three things when questioning a vendor:
➢ What AI method it is proposing to use in its solution
➢ How robust or brittle the implementation will be in terms of the resources needed to deploy and manage it
➢ How much training data is needed to “prime” the solution, and how often it will need to be retrained
The answers to these questions go well beyond the traditional “demo.” Companies must understand how a vendor’s product uses AI and whether it would work well with the data and processes that they already possess.
Another factor companies have to consider is why the product includes AI at all, since AI introduces risks, complexity and costs.
Consequently, any vendor claiming that their product includes AI should also be able to explain how it will benefit the end user more than versions without AI. But in a world where AI products become commonplace, companies will want to go beyond just verifying that AI makes the product better, and instead get a sense of how a given vendor’s AI-enabled product is better than others in the market.
When comparing different AI products, companies must ask vendors how they manage risk with their AI products, and how that is superior to their competitors’ means of doing so. This is particularly important, as many vendors themselves do not understand the risk involved in using AI. For example, the Tesla driver who was killed because his autopilot mistook a container trailer for scenery is a tragic indicator of how an AI system can get it wrong.
AI systems are unlike other technology products in that they are not static; they require vendors to be fully invested in improving their flexibility and resilience. So companies need to find out what vendors are doing to improve their offerings, whether that means collaborating with independent data scientists or being an active player in the industry.
Two factors have contributed to AI’s seemingly sudden appearance in modern business software. The first is the availability of cloud computing and changes in computing architectures, including prevailing chip designs. The second is data itself, which is unprecedented in both its volume and clarity.
But with this large volume of data that AI can use for improved insights comes the greater challenge of understanding what is in the data, and what it shows or doesn’t show. Many cautionary tales of AI include stories of misinterpretation of new data based on misinterpretation of old data. So companies must know what data an AI system uses to develop and improve its performance, and how their vendors will help them address the risks.