Deploying Artificial Intelligence

ers that foster success. Quantifying the success criteria while defining the use case is critical, as is evaluating the organization's available resources and its ability to manage uncertainty.41 This also means that the value proposition should lie within the firm's core competence areas; straying will tend to frustrate employees and send confounding signals to the market about the firm's direction, while also wasting resources, including time. Data and AI are still niche activities for most firms, and when venturing into non-core competence areas, incumbents tend to show signs of fatigue due to unmet expectations. Bayer, the German pharma company, humorously identified its habit of continuous piloting as a disease called "Pilotitis."42

Pitfall #6: Jumping in without appraising the risks of irresponsible AI. Since machines "learn" from the data we feed them, if we give them biased data, they will learn the biases too and magnify the bias pattern in the data. The subsequent repercussions for incumbent firms can be very expensive and can damage brand image considerably. When used in the public domain, AI also becomes a social endeavor with reputational risks. Microsoft's Tay chatbot experiment is one example. In pursuit of conversational understanding, its ML and NLP internalized abusive language on Twitter and magnified it.43 Similarly, many initial algorithms from

41 Rita Gunther McGrath, "A Real Options Logic for Initiating Technology Positioning Investments," Academy of Management Review 22, no. 4 (October 1997): 974–996, http://www.jstor.org/stable/259251.

42 Ulla Kruhse-Lehtonen and Dirk Hofmann, "How to Define and Execute Your Data and AI Strategy," Harvard Data Science Review, July 2020, https://doi.org/10.1162/99608f92.a010feeb.
43 Oscar Schwartz, "In 2016, Microsoft's Racist Chatbot Revealed the Dangers of Online Conversation," Tech Talk: Artificial Intelligence: Machine Learning (blog), IEEE Spectrum, November 25, 2019, https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation. Microsoft's engineers fed the algorithm with data from professional comedians and let it discover patterns of language through its interactions to emulate in subsequent conversations. Within hours of its release Tay tweeted more than 95,000 times, but a majority of the messages were racist and abusive. Microsoft took the bot offline within a day.