Five Strategies for Putting AI at the Center of Digital Transformation
May 18, 2020
Across industries, companies are applying artificial intelligence to their businesses, with mixed results. “What separates the AI projects that succeed from the ones that don’t often has to do with the business strategies organizations follow when applying AI,” writes Wharton professor of operations, information and decisions Kartik Hosanagar in this opinion piece. Hosanagar is faculty director of Wharton AI for Business, a new Analytics at Wharton initiative that will support students through research, curriculum, and experiential learning to investigate AI applications. He also designed and instructs Wharton Online’s Artificial Intelligence for Business course.
While many people perceive artificial intelligence to be the technology of the future, AI is already here. Many companies across a range of industries have been applying AI to improve their businesses — from Spotify using machine learning for music recommendations to smart home devices like Google Home and Amazon Alexa. That said, there have also been some early failures, such as Microsoft’s social-learning chatbot, Tay, which turned anti-social after interacting with hostile Twitter users, and IBM Watson’s inability to deliver results in personalized health care. What separates the AI projects that succeed from the ones that don’t often has to do with the business strategies organizations follow when applying AI. The following strategies can help business leaders not only apply AI effectively in their organizations, but also use it to innovate, compete and excel.
1. View AI as a tool, not a goal.
One pitfall companies might encounter in the process of starting new AI initiatives is that the concentrated focus and excitement around AI might lead to AI being viewed as a goal in and of itself. But executives should be cautious about developing a strategy specifically for AI, and instead focus on the role AI can play in supporting the broader strategy of the company. A recent report from MIT Sloan Management Review and Boston Consulting Group calls this “backward from strategy, not forward from AI.”
As such, instead of exhaustively looking for all the areas AI could fit in, a better approach would be for companies to analyze existing goals and challenges with a close eye for the problems that AI is uniquely equipped to solve. For example, machine learning algorithms bring distinct strengths in terms of their predictive power given high-quality training data. Companies can start by looking for existing challenges that could benefit from these strengths, as those areas are likely to be ones where applying AI is not only possible, but could actually disproportionately benefit the business.
The application of machine learning algorithms for credit card fraud detection is one example of where AI’s particular strengths make it a very valuable tool in assisting with a longstanding problem. In the past, fraudulent transactions were generally only identified after the fact. However, AI allows banks to detect and block fraud in real time. Because banks already had large volumes of data on past fraudulent transactions and their characteristics, the raw material from which to train machine learning algorithms is readily available. Moreover, predicting whether particular transactions are fraudulent and blocking them in real time is precisely the type of repetitive task that an algorithm can do at a speed and scale that humans cannot match.
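The fraud-detection pattern described above — train on labeled historical transactions, then score each new transaction in real time — can be sketched in a few lines. This is a toy illustration on synthetic data, not a production system; the features (amount, hour, whether the transaction is foreign), the labeling rule, and the blocking threshold are all assumptions made here for demonstration.

```python
# Hypothetical sketch: fit a classifier on historical labeled transactions,
# then score incoming transactions in real time. Features, labels, and the
# blocking threshold are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical data: [amount_usd, hour_of_day, is_foreign]
n = 1000
amount = rng.exponential(50, n)
hour = rng.integers(0, 24, n)
foreign = rng.integers(0, 2, n)
X = np.column_stack([amount, hour, foreign])

# Toy labeling rule standing in for past fraud investigations:
# large foreign transactions were found to be fraudulent.
y = ((amount > 60) & (foreign == 1)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def score_transaction(amount_usd, hour_of_day, is_foreign, threshold=0.5):
    """Return (fraud_probability, block_decision) for one incoming transaction."""
    p = model.predict_proba([[amount_usd, hour_of_day, is_foreign]])[0, 1]
    return p, p >= threshold
```

Scoring a single transaction takes microseconds, which is what makes the real-time blocking described above feasible at a scale no human review team could match.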
2. Take a portfolio approach.
Over the long term, viewing AI as a tool and finding AI applications that are particularly well matched with business strategy will be most valuable. However, I wouldn’t recommend that companies pool all their AI resources into a single, large, moonshot project when they are first getting started. Rather, I advocate taking a portfolio approach to AI projects that includes both quick wins and long-term projects. This approach will allow companies to gain experience with AI and build consensus internally, which can then support the success of larger, more strategic and transformative projects later down the line.
Specifically, quick wins are smaller projects that involve optimizing internal employee touch points. For example, companies might think about specific pain points that employees experience in their day-to-day work, and then brainstorm ways AI technologies could make some of these tasks faster or easier. Voice-based tools for scheduling or managing internal meetings or voice interfaces for search are some examples of applications for internal use. While these projects are unlikely to transform the business, they do serve the important purpose of exposing employees, some of whom may initially be skeptics, to the benefits of AI. These projects also provide companies with a low-risk opportunity to build skills in working with large volumes of data, which will be needed when tackling larger AI projects.
The second part of the portfolio approach, long-term projects, is what will be most impactful and where it is important to find areas that support the existing business strategy. Rather than looking for simple ways to optimize the employee experience, long-term projects should involve rethinking entire end-to-end processes and potentially even coming up with new visions for what otherwise standard customer experiences could look like. For example, a long-term project for a car insurance company could involve creating a fully automated claims process in which customers can photograph the damage of their car and use an app to settle their claims. Building systems like this that improve efficiency and create seamless new customer experiences requires technical skills and consensus on AI, which earlier quick wins will help to build.
3. Reskill and invest in your talent.
In addition to developing skills through quick wins, companies should take a structured approach to growing their talent base, focusing on both reskilling internal employees and hiring external experts. Growing the talent base is particularly important because most engineers in a company will have been trained in computer science before the recent surge of interest in machine learning. As such, the skills needed for embarking on AI projects are unlikely to exist in sufficient numbers in most companies, making reskilling particularly important.
In its early days of working with AI, Google launched an internal training program in which employees were invited to spend six months working in a machine learning team with a mentor. At the end of this time, Google distributed these experts into product teams across the company to ensure that the entire organization could benefit from AI-related reskilling. Many new online courses now make it economical to reskill employees in AI.
The MIT Sloan Management Review-BCG report mentioned above also found that, in addition to developing talent in producing AI technologies, an equally important area is that of consuming AI technologies. Managers, in particular, need the skills to interpret the outputs of AI tools and act on their recommendations or insights. This is because AI systems are unlikely to automate entire processes from the get-go. Rather, AI is likely to be used in situations where humans remain in the loop. Managers will need basic statistical knowledge in order to understand the limitations and capabilities of modern machine learning and to decide when to lean on machine learning models.
4. Focus on the long term.
Given that AI is a new field, it is largely inevitable that companies will experience early failures. Early failures should not discourage companies from continuing to invest in AI. Rather, companies should be aware of, and resist, the tendency to retreat after an early failure.
Historically, many companies have stumbled in their early initiatives with new technologies, such as when working with the internet and with cloud and mobile computing. The companies that retreated, that stopped or scaled back their efforts after initial failures, tended to be in a worse position long term than those that persisted. I anticipate that a similar trend will occur with AI technologies. That is, many companies will fail in their early AI efforts, but AI itself is here to stay. The companies that persist and learn to use AI well will get ahead, while those that avoid AI after their early failures will end up lagging behind.
5. Address AI-specific risks and biases aggressively.
Companies should be aware of new risks that AI can pose and proactively manage these risks from the outset. Initiating AI projects without an awareness of these unique risks can lead to unintended negative impacts on society, as well as leave the organizations themselves susceptible to additional reputational, legal, and regulatory risks (as mentioned in my book, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control).
There have been many recent cases where AI technologies have discriminated against historically disadvantaged groups. For example, mortgage algorithms have been shown to have a racial bias, and an algorithm created by Amazon to assist with hiring was shown to have a gender bias, though this was actually caught by Amazon itself prior to the algorithm being used. This type of bias in algorithms is thought to occur because, like humans, algorithms are products of both nature and nurture. While “nature” is the logic of the algorithm itself, “nurture” is the data that algorithms are trained on. These datasets are usually compilations of human behaviors — oftentimes specific choices or judgments that human decision-makers have previously made on the topic in question, such as which employees to hire or which loan applications to approve. The datasets are therefore made up of biased decisions from humans themselves that the algorithms learn from and incorporate. As such, it is important to note that algorithms are generally not creating wholly new biases, but rather learning from the historical biases of humans and exacerbating them by applying them on a much larger, and therefore even more damaging, scale.
AI shouldn’t be abandoned given that the alternative, human decision-making, is biased too. Rather, companies should be aware of the kinds of social harms that can result from AI technologies and rigorously audit their algorithms to catch biases before they negatively impact society. Proceeding with AI initiatives without an awareness of these social risks can lead to reputational, legal, and regulatory risks for firms, and most importantly can have extremely damaging impacts on society.
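One concrete form such an audit can take is comparing a model's decision rates across demographic groups, a check often called demographic parity. The sketch below is a minimal illustration under stated assumptions: the decisions, group labels, and the 5-percentage-point tolerance are hypothetical, and real audits use richer metrics and legal standards.

```python
# Illustrative bias audit: compare approval rates across groups and flag
# large gaps for human review. Data and tolerance are assumptions made
# for this sketch, not a recommended legal or regulatory standard.

def approval_rates(decisions, groups):
    """Approval rate per group, given 0/1 decisions and group labels."""
    rates = {}
    for g in set(groups):
        in_group = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(in_group) / len(in_group)
    return rates

def parity_gap(decisions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Example: audit 8 hypothetical loan decisions tagged by applicant group.
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = parity_gap(decisions, groups)  # group A approves 3/4, group B 1/4
flagged = gap > 0.05  # escalate to human review if gap exceeds tolerance
```

Running such a check routinely, before and after deployment, is one practical way to catch the kind of learned historical bias described above before it operates at scale.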