Incedo Inc.

In recent years, promoters of blockchain have pushed the technology as a major disruptor of existing digital payments and transaction systems. Indeed, it offers tremendous promise to become a key building block of the digital economy, but the technology has fallen victim to massive hype and irrational exuberance in the past, driven largely by a Bitcoin-buying frenzy.

The talent gap is a frequent talking point in the industry. To discuss the typical analytics hiring scenario in India and the steps that can be taken to bridge the talent gap, Analytics India Magazine caught up with Nitin Seth, CEO of Incedo Inc., who shared that the gap is primarily driven by the sharp rise in demand for analytics and AI-based solutions across industries. “The supply side has not been able to cope up,” he said.

Customer expectations have reached an all-time high and industry competition is ever-increasing — putting businesses under constant pressure to increase efficiency and improve results.

Step outside the digital natives of Silicon Valley and Seattle, and AI as a source of competitive advantage begins to look like smoke and mirrors. In our conversations with multiple Fortune 100 executives, we see increasing levels of frustration. A not uncommon refrain: “We are being asked to spend millions on AI initiatives: is this the best way to allocate capital?” We believe this is the right question to ask – after all, there is no dearth of investments driven by technology hype cycles. Why should AI be any different?

While the pundits talk breathlessly about AI being responsible for the 4th Industrial Revolution, we believe that the reality is far more nuanced – and a good place to start is to ask the right questions to better understand the current state of AI in your enterprise. So here goes – 10 Questions. And like all good questions, these are meant to provoke a dialogue within your organization and, through that, a better assessment of whether AI is ‘real’ and, more importantly, the journey that you and your organization need to embark on to make AI real.

We follow that up with a strawman Manifesto on what it will take to make AI real: you should create one for your own organization.

10 Questions

AI for the sake of AI
1. Are the AI projects focused on delivering measurable business outcomes?
2. Do you have the right instrumentation integrated to monitor the impact of AI projects?

Nurturing AI Talent
3. Is there a core AI capability under a CDO/CTO? Or is it a bolt-on as part of the CIO org?
4. Are there long-term career paths for AI/ML Data Scientists & Engineers?

Data as a Core Asset
5. Is there a Data Governance team with a CXO commitment to truly enable Data Democratization?
6. Is the legacy BI/EDW environment the main data platform for AI projects?

The Legacy of Deterministic thinking
7. Does the organization have an appetite for Experimentation across the Enterprise: not just cosmetic website changes?
8. Does the business accept the idea of Probabilistic Recommendations?

Crossing the AI Chasm
9. Do you have an enterprise AI platform infrastructure?
10. Have AI projects been integrated into transaction systems (e.g., ERP, RPA) in the last 12 months?

The Manifesto

To make AI truly real in your organization, you need to spark some kind of a revolution. And revolutions, obviously (!), need manifestos: a series of bold, declarative statements that set the tone for the entire organization. Only then do you have a shot at genuinely making data-driven decision making real and driving competitive advantage.

Here’s the Manifesto:

1. Be Ruthless About Outcomes: Quantified outcomes should drive AI project prioritization – not the other way around; mandate the instrumentation that can be linked back to an outcome KPI as part of each AI project.

2. Invest in Building Organizational Capability: Invest in a centralized AI/ML Data Science and Engineering capability; balance that with an ecosystem of ‘Citizen Data Scientists’ who can provide capability at the edges in an organization. Create a career path that encourages mobility between the edges and the centralized teams.

3. Elevate Data to be a First-Class Citizen: Data is an Asset. Treat it like one: it deserves a governance structure; invest in a ‘Data as a Service’ architecture that goes beyond just data provisioning.

4. Integrate Probabilistic Systems into Operating Processes: Get the organization comfortable with the idea of probabilistic recommendations; ensure AI systems get better over time – where you don’t have enough observations to learn from, use experiments.

5. Invest in ‘AI Platform as a Service’: Invest in an AI@Scale platform that standardizes AI model lifecycle management; move away from monolithic systems to a marketplace of modular ‘code blocks’ that can be used to assemble solutions.

Two key points are clear:

1. AI is here to stay – it is no longer about the Why or What, but increasingly about the How.

2. AI, like all technology-driven transformation, is not a one-size-fits-all strategy.

Our end-to-end suite of AI services includes AI/ML implementation, business process & digital integration, customer 360 view, and continuous A/B testing, among others.

Management jargon like ‘Extreme Experimentation’ and ‘Fail Fast’ has been around for some time now. Much of this thinking, and consequently the success, has come from the software industry. But step outside Silicon Valley and you will be hard pressed to find successful instances of experimentation translating to actual shareholder value. In my years of working with Fortune 500 leaders, I have found a stubborn chasm between desire and execution – one that goes beyond systems and processes.

It is clear that this is, above all, a challenge of cultural transformation – one that moves away from trusting expert judgment to a more incremental approach informed by faster customer feedback. This journey, if executed well, will shape not just the data and system architecture but also the organization structure and KPIs. In other words, it should trigger a wholesale cultural change. And like all transformation ideas, there needs to be a series of initiatives.

  1. Invest in a Cross-functional Design of Experiments Team: Most organizations have digital platforms, and many of them run basic experiments limited to testing multiple website changes (e.g. A/B testing). This thinking needs to expand beyond these ‘cosmetic’ changes to experiments with deeper changes – e.g. pricing and product offering changes. Such initiatives require changes not just to the website – here are just a couple of examples:
    1. Product offering experiments: This will require a change in how the product structures are created – instead of individual SKU Bills of Material (BOMs), you will need to create option BOMs, with dynamic optional combinations
    2. Pricing experiments: This will require a change in pricing methods – instead of overall product price, you will need to set up a line structure that prices individual feature combinations
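As a concrete illustration of the product-offering example above, an option BOM can be sketched as a small data structure that expands option groups into sellable combinations. This is a minimal sketch; all group and part names are hypothetical:

```python
from itertools import product

# A hypothetical option BOM: each option group lists interchangeable choices,
# instead of one fixed bill of materials per SKU.
option_bom = {
    "frame": ["frame-std", "frame-premium"],
    "finish": ["finish-matte", "finish-gloss"],
    "hardware": ["hw-basic", "hw-deluxe"],
}

def enumerate_variants(bom):
    """Expand the option groups into every dynamic option combination."""
    groups = sorted(bom)  # stable order for reproducibility
    return [dict(zip(groups, combo)) for combo in product(*(bom[g] for g in groups))]

variants = enumerate_variants(option_bom)
print(len(variants))  # 2 * 2 * 2 = 8 candidate offerings to experiment with
```

Each entry is one offering a product-offering experiment could test, which is exactly what a per-SKU BOM structure cannot express.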

This will require a cross-functional team with a mandate to build this capability. An initial manifesto could look something like this:

  • Design multiple experiments in line with business goals. This requires a heavy dose of Data Science (see below)
  • Implement process changes – from changing say, how pricing gets done to how product changes can be deployed across physical and digital channels
  • Design the right set of KPIs to track not just the lift from individual experiments, but also to track the impact of these changes through implementation
  • Orchestrate the IT infrastructure to deploy these experiments

In our opinion, this is best owned and orchestrated through the Marketing Strategy or Corporate Strategy function. A leading home improvement retailer invested in this capability within the CFO organization – and used this function to drive experiments across channels, from in-store experiments (e.g. store-level promotions) to experiments with omni-channel implementation scenarios (e.g. Buy Online, Pick up in Store).

  2. Not all Learning needs to come from field experiments: The proverbial data haystack has many needles. To begin with, historical product and pricing changes can provide signals on customers’ stated preferences – i.e. the traditional lift from these changes. Even more, data provides the opportunity to tease out revealed preferences, essentially signals that customers communicate through indirect mechanisms – e.g. a relative preference for specific attributes (e.g. storage capacity vs. processing power) expressed through features (response to memory upgrades vs. RAM upgrades). Discrete Choice Models often help understand the value customers assign to product attributes (i.e. decompose a product price into its individual attributes). This can be a good starting point to understand the price-value of product features, and then to abstract the features out to attributes – which can in turn be imputed back to new features. A B2B tech manufacturer used this strategy to understand the feature-level price-value of its Server product portfolio. This formed the basis of option pricing for the next generation of Server products. Needless to say, this was the only viable approach, given that it was not possible to run field pricing tests in a highly competitive market.
  3. Build Data Science Capability to extract value from data: It is clear that both of the above require Data Science capability. And this capability becomes even more important given multiple challenges around not just the quality but, often surprisingly, the quantity of data.
    • Data Quality: Experiment data is, more often than not, notoriously noisy. There are often multiple factors at play – both external (e.g. competitor launches, market dynamics) and internal (e.g. marketing promotion calendars, supply chain considerations around availability). Solving for these truly requires a blend of Art and Science:
      • Experiment design: Design the right test/control methods and the right measurement approach, from A/B testing to sophisticated methods like MAB (Multi-Armed Bandit).
      • Attribution modeling: Deploy Machine Learning models to tease out the attribution of the lift to a specific set of experiments from all other factors.
    • Data Quantity and Context: Most companies do not have the luxury of massive data sets the way Facebook, Amazon or Google do. More often than not, experiments need to deal with sparse datasets (e.g. small samples, poor response rates). And in some cases, there is not enough information in the incoming data to execute experiments easily. For instance, without any prior information about a website visitor, how do you decide the right page to serve in an A/B test?
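As a sketch of the Multi-Armed Bandit idea mentioned above (serving page variants with no prior information about the visitor), here is a minimal Thompson-sampling loop. The conversion rates are simulated and all names are illustrative, not a production design:

```python
import random

random.seed(42)

# Simulated true conversion rates for three page variants (unknown to the bandit).
true_rates = {"A": 0.04, "B": 0.06, "C": 0.05}

# Beta(1, 1) priors: with no information about visitors, start uniform.
successes = {v: 1 for v in true_rates}
failures = {v: 1 for v in true_rates}

def choose_variant():
    """Thompson sampling: draw from each variant's posterior, serve the best draw."""
    samples = {v: random.betavariate(successes[v], failures[v]) for v in true_rates}
    return max(samples, key=samples.get)

for _ in range(20000):
    v = choose_variant()
    if random.random() < true_rates[v]:  # simulated visitor conversion
        successes[v] += 1
    else:
        failures[v] += 1

served = {v: successes[v] + failures[v] - 2 for v in true_rates}
print(served)
```

Unlike a fixed A/B split, the posterior draws shift traffic toward better-converting variants as evidence accumulates, which is what makes the approach workable with sparse data and no visitor priors.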

As companies across industries try to improve engagement with their end consumers, building a Design and Execution of Experiments capability is no longer a nice-to-have restricted to a company’s website changes. We believe the time is right to invest in building the right Organization, Data and Technology eco-system that can create, launch and sustain this process across the enterprise – Product Design and Launch and Pricing are two areas with an immediate opportunity to invest in building this capability.

I have often said that the most valuable thing I have built over my years in Analytics Consulting is a ‘failure portfolio’. Each failed project has taught a lot, and it comes down to a few foundational issues:

1. Are you solving the right problem? Call Center Operations are always trying to cut down call time (Average Handle Time). Needless to say, there are multiple improvement opportunities in the entire process. A telecom company wanted to solve the problem of auto-classifying calls using AI. The idea was to shave a few seconds from an agent’s workflow on every call. This required transcribing the call by converting the audio to text, extracting features using text mining and then combining them with other call-related data to classify the call against a pre-defined taxonomy. Several thousand dollars later, they had an AI engine with an acceptable level of accuracy. At the end of this exercise, they had managed to cut a few seconds of agent time at the end of each call. When the solution was demonstrated to the call center agents, they had a much simpler alternative – training and simple tweaks to the workflow for better routing of calls to the right agents. As it turns out, agents are already organized by problem area (billing, upgrade options, device management etc.), and a few simple training sessions would get them to further classify calls within their domain area. In the end, the AI engine was shelved. The moral of the story: it is important to focus on the right problem. The choice at origin matters – pick the wrong problem and it is easy to go down a rabbit hole.

2. Have you thought of the overall Business Process? One of the problems that automobile manufacturers have long struggled with is parts re-use. As multiple engineering teams work on different vehicle platforms, they tend to create new part designs instead of re-using existing parts. The downstream effects are obvious – part proliferation drives up inventory holding and procurement costs. Engineering teams are very good at capturing part specifications – both detailed designs and attributes. The catch is that most of them are drawings – from scanned documents (PDFs, TIFFs et al.) to CAD files. There is clearly an opportunity to use AI – more specifically, computer vision – to extract features from these documents and create a matching solution that would, with a few simple questions about the engineer’s intent, suggest a list of matching parts. A Tier-1 auto manufacturer invested in exactly that and developed a solution that would do any Data Science team proud. Then came the next step – how does this fit into the overall business process? How do you make it part of the engineer’s workflow? And then there was the issue of systems – engineers work in CAD/CAE and PLM systems – how does this solution fit into that eco-system? None of these questions had been thought through fully to begin with. Too often, we forget that AI solutions, more often than not, solve a very specific problem. Unless they are pieced together with the relevant process tweaks, chances are the AI solution will end up as a proof of concept.

3. Have you engineered the Solution to work at scale? Every retailer is always on the hunt to extract cost savings from the system – and one big area of focus is safety stock. Retailers have typically lived with a normative method (i.e. a formula that makes a lot of theory-driven assumptions) for computing safety stocks. Along came Big Data and AI. The idea was to develop an empirical method to compute safety stocks using reinforcement learning. The solution worked beautifully – there were significant improvements in safety stock levels in test after test. Then came the issue of scaling – to make a real dent of even a few basis points in the bottom line, the solution had to work for over 2,000 stores, with each store carrying 50,000 SKUs on average. It is no secret that AI solutions are compute and storage intensive. Despite that, the solution, elegant though it was for a small subset of SKUs, was just not designed to operate at scale.
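For context, the normative method referenced above is typically a closed-form textbook formula resting on normality assumptions, which is exactly why it is cheap to run across thousands of stores and SKUs. A minimal sketch, with illustrative numbers:

```python
from math import sqrt

def safety_stock(z, sigma_d, lead_time):
    """Normative safety stock under the usual textbook assumptions:
    per-period demand ~ Normal with std dev sigma_d, a lead time of
    lead_time periods, and a service level expressed as a z-score
    (e.g. z = 1.65 for roughly 95% service).
    Safety stock = z * sigma_d * sqrt(lead_time)."""
    return z * sigma_d * sqrt(lead_time)

# Illustrative SKU: daily demand std dev of 12 units, 4-day lead time, ~95% service.
print(round(safety_stock(1.65, 12.0, 4.0), 1))  # 39.6
```

The contrast with the reinforcement learning approach is the point: this formula evaluates in microseconds per SKU, while a learned policy has to be engineered deliberately to cover 2,000 stores times 50,000 SKUs.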

4. Are you trying to fit a technique into a use-case? Those of us who have seen technology hype cycles are painfully aware of their early stages – the temptation to take the new hammer out and look for a nail is too strong to resist. And it was thus that the Customer Care function in a technology firm took it upon itself to leverage ‘cutting edge AI’. The idea was to go where no one had chosen (yet) to go – and as we all know, unstructured data is the new frontier. The best minds got together and invented a use-case: using speech-to-text, voice modulation features and NLP to assess, in real time, the mood of a caller. The idea: instead of relying on the Call Center representative to judge the need to escalate a call, how about letting machines make the recommendation in real time? By now it should be obvious where this all landed. In hindsight, it seems almost laughable that we could dream up such a use-case: machines listening in on a human-to-human interaction and intervening in real time if the conversation was not likely to result in a desirable outcome. But that is exactly what happens – there is a thin line separating an innovative idea from a ludicrous one.

And here’s the interesting thing – you may have noticed that these are not necessarily Big Data or AI specific issues; they are fundamental issues relevant to any transformation initiative. And that’s the good news.

And does this mean that AI is all hype? Of course not – there is absolutely no doubt that AI and Big Data present a tremendous opportunity to drive game-changing value in organizations. And to be sure, we will have many such failures – but so long as we approach this thoughtfully, start with outcomes in mind, move with ‘deliberate speed’ and are always willing to learn, we can truly unlock the potential of AI and Big Data.

Innovation likely ranks in the top 10 of the most overused words in our industry today. But what drives the need for Innovation — cost, new products, competitors, or something else? How does one execute — run experiments, launch new pilots, set up Innovation ventures?

I found that Harvard Business School Prof. Sunil Gupta’s book “Driving Digital Strategy” brings this issue to the forefront in a compelling way. Prof. Gupta describes the need to fundamentally rethink a business through the lens of its customers, not around products or competitors. One of my favorite quotations from the book is “Starting an Innovation unit in a large company is like launching a speedboat to turn around a large ship; often the speedboat takes off but does little to change the course of the ship.”

So, how does one succeed? I reflected on my own experiences helping customers in their innovation journeys. Innovation should first focus on answering a simple question: What is the compelling pain OR gain I can deliver to my customers? Answering this question begins the process of rethinking how a product or service should position itself for growth by specifically addressing customer problems and taking advantage of data and new digital technologies.

Here is an example from my own experience, where we assisted a Medical Devices & Diagnostics manufacturer in addressing two key questions:

– How to shift value creation from being an equipment manufacturer to a full-service provider?
– How to increase the share of wallet in my customer base (hospitals)?

This manufacturer historically focused on the strengths of its equipment, the efficacy and clinical value, delivering differentiation through its hardware. That left a big gap: third-party software and service providers were offering surrounding solutions that leveraged the data from this manufacturer’s diagnostic equipment, along with other data assets, to solve specific customer problems (diagnosis aids, improving care workflows, improving the patient experience and improving clinician productivity).
Consequently, the manufacturer was leaving untapped value on the table, operating as a participant instead of owning a larger share of its customer ecosystem. The message was clear — the company had to Innovate or risk getting left behind.

Clarity of purpose for Innovation, i.e. knowing what problems to solve, is an essential first step, but it does not guarantee success!

That brings us to the next step — Execution. Often, a lack of effective execution is the reason Innovation efforts fail.

Let’s look at some of the reasons for poor execution:

  • Jumping too quickly into what to innovate, with a poor definition of the specific business problem (use case) to solve for
  • Getting carried away by Technology — Digital technologies, Data, AI and Machine Learning become the focal point of identifying new capabilities, rather than the customer need (In my Diagnostics example, adding a voice capability to clinical diagnostic equipment had excellent marketing appeal and sounded different, but it did not go far in the absence of a compelling pain or gain to solve.)
  • Starting with a platform strategy too early and investing too much time and money in building platform capabilities (This introduces confirmation bias before market validation and slows down execution.)
  • Not thinking upfront of downstream changes that will need to occur if an Innovation pilot succeeds (In my Diagnostics example, the pilot use case excited everyone, but an early assessment that it may introduce changes to the regulatory framework and new security considerations led to a more informed, better thought-out approach.)
  • Lack of a clear business impact framework and measurement mechanisms to continually drive alignment between the Innovation strategy and execution (It is not sufficient to come up with top-level goals! Developing a framework to track granular level execution metrics brings tight alignment with the business problem.)

To avoid these pitfalls, we should address vital questions upfront:

  • Are we solving a real customer need, and are we able to define it clearly?
  • How well do we understand the end-user journey? Often, there is a low tech but high impact answer to the problem if we can humanize the experience.
  • How will we measure success? What are the Key Performance Indicators (KPIs) that will be lead indicators of positive change, and how will we track them?
  • What technologies and capabilities will we need to execute?
  • What changes will have to occur in different parts of the value chain to commercialize such a new product or service?
  • How to simultaneously demonstrate value in the short term while building for scale in the long run?

What does Innovation mean for you, and what determines its success? I’d love to get your comments and learn from your experiences!