Deploying Artificial Intelligence

Speaking to the often-cited conundrum of choosing interpretability over accuracy in a model (also called the black-box problem): experimenting with an uninterpretable algorithm is highly risky, so it is often better to exchange it for an interpretable, albeit less accurate, solution to avoid such disasters.46 When applied to the criminal justice system, the repercussions of such mistakes are grave.47 So understanding how the model behaves, validating the predictability of the model’s output, and confirming that the model’s reasoning aligns with the stakeholders’ mental model are critical.

Teams often work in silos yet apply their models for large-scale effect, which makes it difficult to ensure that solutions are ethical and consistent with user expectations, organizational values, and societal norms. It is not necessary to hire an expensive AI ethicist for this, but it is imperative to designate someone in the organization to ensure alignment with the bigger picture. An AI governance structure informed by guidance from the OECD or WEF, as mentioned in pitfall #2 above, can help in this endeavor.

pitfall #7: thinking “big data” means “ai-ready.” Data requirements for AI are substantially greater than for any other form of analytics. Data must be known, understood, available, fit for purpose, and secure.48 But if the data is not representative enough for scaling, or the firm lacks sufficient capabilities for larger-scale implementation, the solutions may remain limited to pilots and projects within existing silos.49

46 Matt Turek, “Explainable Artificial Intelligence (XAI),” Defense Advanced Research Projects Agency (DARPA), retrieved on December 18, 2020, https://www.darpa.mil/program/explainable-artificial-intelligence.
The black-box problem: the functions used in ML models may be too complicated for humans to understand, and understandable functions are often less accurate.

47 Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Amazon’s Rekognition software misidentified members of Congress as criminals.

48 Michael Chui et al., “Notes from the AI frontier: Applications and value of deep learning,” McKinsey, retrieved on December 1, 2020, https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning.

49 Michael Sadowski and Aaron Roth, “Technology Leadership Can Pay Off,” Research-Technology Management 42, no. 3 (1999): 32–33, https://doi.org/10.1080/08956308.1999.11671315.
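The representativeness concern raised in pitfall #7 can be made concrete with a simple check before scaling a pilot. The sketch below is a minimal illustration, not a method from this report: the function name, the choice of total variation distance as the metric, and the 0.1 threshold are all assumptions made here for demonstration.

```python
from collections import Counter

def representativeness_gap(pilot_labels, population_labels):
    """Total variation distance between the class distributions of a
    pilot dataset and the wider population it is meant to represent.
    0.0 means identical distributions; 1.0 means no overlap."""
    def distribution(labels):
        counts = Counter(labels)
        total = sum(counts.values())
        return {cls: n / total for cls, n in counts.items()}

    p = distribution(pilot_labels)
    q = distribution(population_labels)
    classes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in classes)

# A pilot built on a convenient data silo can look fine locally
# while under-representing outcomes that dominate at scale.
pilot = ["approve"] * 90 + ["deny"] * 10
population = ["approve"] * 60 + ["deny"] * 40

gap = representativeness_gap(pilot, population)
if gap > 0.1:  # threshold chosen here purely for illustration
    print(f"pilot is not representative (TV distance = {gap:.2f})")
```

A check like this is cheap relative to discovering, post-deployment, that a model trained on one silo's data does not generalize; the same idea extends to distributions over features, regions, or demographic groups.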
