I have often said that the most valuable thing I have built over my years in Analytics Consulting is a ‘failure portfolio’. Each failed project has taught me a lot, and the lessons come down to a few foundational issues:

1. Are you solving the right problem? Call Center Operations are always trying to cut down call time (Average Handle Time), and needless to say, there are multiple improvement opportunities across the process. A telecom company wanted to use AI to auto-classify calls, with the idea of shaving a few seconds off an agent’s workflow on every call. This required transcribing the call by converting audio to text, extracting features using text mining, and then combining those with other call-related data to classify the call against a pre-defined taxonomy. Several thousand dollars later, they had an AI engine with an acceptable level of accuracy, and it did manage to cut a few seconds of agent time at the end of each call. But when the solution was demonstrated to the call center agents, they offered a much simpler alternative: training and small tweaks to the workflow to route calls to the right agents. As it turns out, agents are already organized by problem area (billing, upgrade options, device management, etc.), and a few simple training sessions would get them to further classify calls within their domain. In the end, the AI engine was shelved. The moral of the story: it is important to focus on the right problem. The choice at origin matters; pick the wrong problem and it is easy to go down a rabbit hole.
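To make the pipeline concrete, here is a minimal sketch of the kind of transcript-classification step described above, using scikit-learn. It assumes a speech-to-text stage has already produced transcripts; the taxonomy labels and training examples are purely illustrative, not the telecom company’s actual data.

```python
# Minimal sketch of the call-classification pipeline described above.
# Assumes transcripts have already been produced by a speech-to-text step;
# the taxonomy labels and training data here are purely illustrative.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: (transcript, taxonomy label)
transcripts = [
    "I was charged twice on my last bill",
    "I want to upgrade to the new phone plan",
    "My handset will not power on",
]
labels = ["billing", "upgrade", "device"]

classifier = Pipeline([
    ("features", TfidfVectorizer(ngram_range=(1, 2))),  # text-mining features
    ("model", LogisticRegression(max_iter=1000)),       # taxonomy classifier
])
classifier.fit(transcripts, labels)

# Likely ['billing'] on this toy data
print(classifier.predict(["the bill amount looks wrong"]))
```

Note that the modelling itself is the easy part; the hard question, as the agents pointed out, was whether this classification needed a machine at all.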

2. Have you thought through the overall Business Process? One problem automobile manufacturers have long struggled with is parts re-use. As multiple engineering teams work on different vehicle platforms, they tend to create new part designs rather than re-use existing parts. The downstream effects are obvious: part proliferation drives up inventory holding and procurement costs. Engineering teams are very good at capturing part specifications, both detailed designs and attributes, except that most of these are drawings, from scanned documents (PDFs, TIFFs, et al.) to CAD files. There is clearly an opportunity to use AI, more specifically computer vision, to extract features from these documents and build a matching solution that, with a few simple questions about the engineer’s intent, would suggest a list of matching parts. A Tier-1 auto manufacturer invested in exactly that and developed a solution that would do any Data Science team proud. Then came the next step: how does this fit into the overall business process? How do you make it part of the engineer’s workflow? And then there was the issue of systems: engineers work in CAD/CAE and PLM systems, so how does this solution fit into that ecosystem? None of these questions were thought through fully at the outset. Too often, we forget that AI solutions more often than not solve a very specific problem, and unless they are pieced together with the relevant process tweaks, chances are the AI solution will end up as a proof of concept.
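For illustration, here is a hedged sketch of just the matching step. It assumes each drawing has already been reduced to a fixed-length feature vector by a computer-vision model (the random vectors and part numbers below are placeholders, not the manufacturer’s data), and uses a simple nearest-neighbour search to suggest candidate parts.

```python
# Sketch of the parts-matching step, assuming each drawing has already been
# reduced to a fixed-length feature vector by a computer-vision model
# (e.g. embeddings from a CNN). Vectors and IDs below are placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
part_ids = [f"PART-{i:05d}" for i in range(1000)]  # hypothetical part numbers
embeddings = rng.normal(size=(1000, 128))          # stand-in for CV features

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(embeddings)

def suggest_parts(query_embedding):
    """Return the five most similar existing parts to a new design."""
    _, idx = index.kneighbors(query_embedding.reshape(1, -1))
    return [part_ids[i] for i in idx[0]]

print(suggest_parts(rng.normal(size=128)))
```

The sketch also makes the gap visible: nothing here touches the CAD/CAE or PLM systems the engineer actually lives in, which is exactly where the project stalled.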

3. Have you engineered the Solution to work at scale? Every retailer is always on the hunt to extract cost savings from the system, and one big area of focus is safety stock. Retailers have typically lived with a normative method of computing safety stock (i.e., a formula that makes a lot of theory-driven assumptions). Along came Big Data and AI, and the idea was to develop an empirical method to compute safety stocks using reinforcement learning. The solution worked beautifully: there were significant improvements in safety stock levels in test after test. Then came the issue of scaling. To make a real dent of even a few basis points in the bottom line, the solution had to work across more than 2,000 stores, each carrying 50,000 SKUs on average, roughly 100 million store-SKU combinations. It is no secret that AI solutions are compute and storage intensive, and the solution, elegant though it was for a small subset of SKUs, was just not designed to operate at that scale.
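For context, the normative method is typically a closed-form formula. A common textbook version, sketched below with purely illustrative parameter values, combines a service-level z-score with demand and lead-time variability; contrast its trivial compute cost with the roughly 100 million store-SKU combinations a learned method would have to cover.

```python
# The normative (formula-driven) baseline the retailer started from.
# A common textbook form: SS = z * sqrt(L * sigma_d^2 + d_bar^2 * sigma_L^2),
# which assumes normally distributed demand and lead time.
from math import sqrt
from scipy.stats import norm

def normative_safety_stock(service_level, d_bar, sigma_d, lead_time, sigma_l):
    """Safety stock under classical normality assumptions.

    service_level: target cycle service level, e.g. 0.95
    d_bar, sigma_d: mean and std-dev of daily demand
    lead_time, sigma_l: mean and std-dev of lead time (days)
    """
    z = norm.ppf(service_level)
    return z * sqrt(lead_time * sigma_d**2 + d_bar**2 * sigma_l**2)

# Illustrative numbers only
print(normative_safety_stock(0.95, d_bar=20, sigma_d=5, lead_time=7, sigma_l=2))

# The scale any learned alternative has to match:
print(2_000 * 50_000)  # ~100 million store-SKU combinations
```

The formula’s assumptions are exactly what the reinforcement-learning approach set out to replace, but it runs in microseconds per SKU; the empirical method had no comparable path to 100 million combinations.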

4. Are you trying to fit a technique into a use-case? Those of us who have seen technology hype cycles are painfully aware of their early stages: the temptation to take the new hammer out looking for a nail is too strong to pass up. And so it was that the Customer Care function in a technology firm took it upon itself to leverage ‘cutting edge AI’. The idea was to go where no one had (yet) chosen to go, and as we all know, unstructured data is the new frontier. The best minds got together and invented a use-case: using speech-to-text, voice modulation features, and NLP to assess the mood of a caller in real time. The idea: instead of relying on the call center representative to judge the need to escalate a call, why not let machines make the recommendation in real time? By now it should be obvious where this all landed. In hindsight, it seems almost laughable that we could dream up such a use-case: machines listening in on a human-to-human interaction and intervening in real time if the conversation was not likely to result in a desirable outcome. But that is exactly what happened; there is a thin line separating an innovative idea from a ludicrous one.
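To see what the ambition implied, here is a purely illustrative sketch of the scoring logic such a system would need: acoustic signals blended with text sentiment into a single escalation score. Every feature name, weight, and threshold below is invented for illustration; none of it comes from the actual project.

```python
# Illustrative sketch of the real-time escalation idea: blend acoustic
# signals with text sentiment into one score. All feature names, weights,
# and the threshold here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class CallWindow:
    pitch_variance: float   # from voice-modulation analysis, normalized 0..1
    speech_rate: float      # words per second, normalized 0..1
    text_sentiment: float   # from NLP on the live transcript, -1 (angry) to 1 (calm)

def escalation_score(w: CallWindow) -> float:
    """Higher score means the caller is more likely to need escalation."""
    # (1 - sentiment) / 2 maps [-1, 1] onto [1, 0], so anger raises the score
    return 0.4 * w.pitch_variance + 0.2 * w.speech_rate + 0.4 * (1 - w.text_sentiment) / 2

def should_escalate(w: CallWindow, threshold: float = 0.6) -> bool:
    return escalation_score(w) >= threshold

print(should_escalate(CallWindow(pitch_variance=0.9, speech_rate=0.8, text_sentiment=-0.7)))
```

Writing it out this way makes the problem plain: the hard part was never the arithmetic, it was the premise that a machine should second-guess a live human conversation.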

And here’s the interesting thing: you may have noticed that these are not necessarily Big Data or AI-specific issues. They are fundamental issues relevant to any transformation initiative. And that’s the good news.

So does this mean that AI is all hype? Of course not. There is absolutely no doubt that AI and Big Data present a tremendous opportunity to drive game-changing value in organizations. And to be sure, we will have many more such failures. But so long as we approach this thoughtfully, start with outcomes in mind, move with ‘deliberate speed’, and are always willing to learn, we can truly unlock the potential of AI and Big Data.
