Aashutosh Mishra
Optimizing business outcomes with Human-centered AI
Published Jun 30, 2022
A landmark BCG & MIT survey completed in 2020 showed that more than half of respondents were deploying AI. The subject group consisted of 3,000 global executives who answered a panel of questions about their companies' engagement with AI. Six out of ten had an AI strategy in 2020, up from four out of ten in 2018. Clearly, AI adoption was already gathering pace in the pre-pandemic world, and Covid-19 has only accelerated digitization and businesses' openness to new technologies like AI worldwide.
Despite this acceleration and concerted efforts to hire data scientists and develop algorithms, most companies have yet to realise a return on their investments. With only one in ten companies seeing significant benefits from AI (BCG, 2020), it is clear that we are still in the early days of AI use in business.
This is a developing and highly technical industry that currently suffers from numerous bottlenecks. We know that data availability, data quality, and the number of employable data scientists are the limiting factors most often cited for successful AI building. However, the subjects of the BCG study are companies that have already built and deployed AI. They have the data and they have the expertise, yet some still fail to generate substantial business value from their efforts.
Maybe it is the lack of ongoing model monitoring, I hear you say: MLOps, or the fancier AIOps. However, these practices deal with model and decision quality over time, so the secret sauce for substantial benefits from AI implementation must be something more fundamental.
The study concludes that companies who succeed with AI take a completely different approach. They do not see AI as a transactional, point-to-point technology solution but rather as an opportunity to fundamentally reshape their business model, helping them reimagine their business processes and almost repaint their entire customer operations with AI-led decision making.
It sounds like an insurmountable task, but there is a way to adopt this worldview in practice. Instead of thinking too narrowly in 'use cases' or too broadly in the 'business model', one can partition the entire business operation into 'domains'. For example, sales & marketing is one domain; customer operations is another.
Once these domains have been identified and prioritized, imagine an iterative game in which, through reciprocal learning between domain experts, data scientists and AI, each domain is slowly reshaped into a data-led, agile, AI-first process. For many businesses this may sound like an uphill task, but the rewards and the chances of success are huge too. The BCG study concludes that businesses following an extensive approach to AI-led process change are likely to be five times more successful than those making small incremental changes to business processes.
With this understood, the elephant in the room is what happens to the data scientist's role in such a world. Aren't companies rushing to establish centers of analytics excellence to house these expert data scientists and to 'push' AI across the organization? I have personally headed multinational COEs and understand that this approach has many advantages. First, the AI build process gets standardized, as teams can use consistent model-building methodologies and definitions. There is also a great opportunity for skill enhancement and continuous learning alongside other data scientists. COE leaders also find that this approach improves retention of the coveted data scientists. However, what gets missed is that it takes double the effort to engage other departments across the organization and rally them around AI-driven change. If the full potential of AI lies in domain-based, iterative, reciprocal learning between humans and algorithms, the centralized analytics model is perhaps a rate-determining step.
To answer what the data scientist's role becomes in this new world, let us first break down the task of reciprocal learning with AI. Everything starts with data, and if we don't have relevant data, we had better put our efforts into getting that first. A few emerging technologies around synthetic data and NLG could perhaps help augment limited data availability. Data wrangling comes next: how do we standardize the data and make it ready to feed into whatever maths we want to apply to it? After this, one can perform a variety of algorithmic learning from the data, e.g. which factors drive which outcomes via causal analysis, or which data points belong together as a natural cluster via unsupervised learning. This is a key step in learning back from algorithms, and in many cases users may want to test hypotheses they already hold. Then come the options for building predictive models to forecast future behaviours or values; as an example, predicting a customer's likely behaviour can help draft an operational strategy. Finally, we have decision optimization, outcome tracking, and learning from those outcomes to close the feedback loop back into data, algorithms and model designs.
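To make that loop a little more concrete, here is a minimal sketch in Python of the steps above, using pandas and scikit-learn. The churn use case, the synthetic customer table and all column names are illustrative assumptions of mine, not something taken from the BCG study.

```python
# A minimal, illustrative walk through the loop: data -> wrangling ->
# clustering -> prediction -> decision -> feedback. All data is synthetic.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 1. Data: a small synthetic customer table standing in for real domain data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, 500),
    "monthly_spend": rng.gamma(2.0, 50.0, 500),
    "support_tickets": rng.poisson(1.5, 500),
})
df["churned"] = (rng.random(500) < 0.1 + 0.05 * df["support_tickets"]).astype(int)

# 2. Wrangling: standardize features so they are comparable.
features = ["tenure_months", "monthly_spend", "support_tickets"]
X = StandardScaler().fit_transform(df[features])
y = df["churned"]

# 3. Learning from data: which customers naturally cluster together?
df["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 4. Predictive modelling: estimate each customer's churn risk.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
df["churn_risk"] = model.predict_proba(X)[:, 1]

# 5. Decision and feedback: act on high-risk customers, then track outcomes
#    so the next iteration can reshape the data, segments and models.
df["retention_offer"] = df["churn_risk"] > 0.5
print(df.groupby("segment")[["churn_risk", "retention_offer"]].mean())
```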
You can see where I am going with this!
A data scientist's job will change from making machines learn (machine learning) to learning with machines (something I would like to call 'reciprocal' learning). There is still scope for data scientists to get deep into the mathematical details of algorithms, but the business value game will be played at a more holistic level. How do they design the AI test cases? Which KPIs do they target so that business value is conclusively delivered by the AI test case? Once the model outcomes are available, how do they trade off optimality against the risk associated with different outcomes? What degree of machine-led recommendation do they allow for which use cases? How do they design operational tests that deploy these recommendations, and how do they learn from the results of those tests? Do they iterate over the degree of reliance on machine recommendations versus the domain expert's recommendations until they find the right balance of business value and associated risk? And let us not forget the necessity of trust in the outcomes and explainability of the AI steps.
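As a toy illustration of that last iteration, the sketch below blends a hypothetical machine score with a hypothetical expert score at different weights and compares the resulting value and risk. The data, scoring rules and thresholds are all invented for illustration, not a prescription.

```python
# Illustrative only: how much weight to give the machine's recommendation
# versus the expert's, judged on average value and share of loss-making
# decisions. All numbers below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
latent = rng.random(n)                                    # unobserved "true" attractiveness
machine_score = np.clip(latent + rng.normal(0, 0.2, n), 0, 1)
expert_score = np.clip(latent + rng.normal(0, 0.2, n), 0, 1)
true_value = 120 * latent - 40 + rng.normal(0, 15, n)     # realised value per case

def evaluate(weight):
    """Blend the two scores, act on the top 20%, report value and risk."""
    blended = weight * machine_score + (1 - weight) * expert_score
    act = blended >= np.quantile(blended, 0.8)
    value = true_value[act].mean()           # average value of chosen cases
    risk = (true_value[act] < 0).mean()      # share of loss-making decisions
    return value, risk

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    value, risk = evaluate(w)
    print(f"machine weight {w:.2f}: value {value:6.1f}, risk {risk:.1%}")
```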
Handling bias and explainability is another big emerging theme that will require a data scientist's focus. Both cut across data, models, decisions and outcomes. For example, explainability is not just about why a decision was made; once the decision has been made and the outcome registered, we also need to explain the entire chain from data to business outcome. The collected dataset could itself be biased, so the AI built on it needs checks to ensure it does not carry (and amplify) those biases.
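One simple form such a check can take is comparing how often the AI-assisted decision favours each group in the data. The tiny example below, with invented groups, decisions and threshold, sketches that kind of audit.

```python
# A toy bias audit: compare approval rates across groups. The groups,
# decisions and the 20% threshold are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   1,   0],
})

approval_rate = decisions.groupby("group")["approved"].mean()
print(approval_rate)

# Demographic-parity gap: a large gap suggests the data or the model may be
# carrying (and amplifying) a bias that needs investigating and explaining.
gap = approval_rate.max() - approval_rate.min()
if gap > 0.2:  # the threshold here is a judgment call
    print(f"Warning: approval rates differ by {gap:.0%} across groups")
```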
Finally, continuous post-analysis to understand where the risks and opportunities lie will need a data scientist's attention. What is the feedback from outcomes into improving data, modelling and decision making? This is how the data scientist will metamorphose within businesses, playing a pivotal role in the transition to Human-centered AI: an approach that manages the risks of AI and automation more ethically and efficiently, for business and for human society.