In 2019, Apple's credit card business came under fire for offering a woman one-twentieth the credit limit offered to her husband. When she complained, Apple representatives reportedly told her, "I don't know why, but I swear we're not discriminating. It's just the algorithm."
Today, more and more decisions are made by opaque, unexplainable algorithms like this one, often with similarly problematic results. From credit approvals to customized product and promotion recommendations to resume screeners to fault detection for infrastructure maintenance, organizations across a wide range of industries are investing in automated tools whose decisions are often acted upon with little to no insight into how they are made.
This approach creates real risk. Research has shown that a lack of explainability is one of executives' most common concerns related to AI and has a substantial impact on users' trust in and willingness to use AI products, not to mention their safety.
And yet, despite the downsides, many organizations continue to invest in these systems, because decision-makers assume that unexplainable algorithms are intrinsically superior to simpler, explainable ones. This perception is known as the accuracy-explainability tradeoff: Tech leaders have historically assumed that the better a human can understand an algorithm, the less accurate it will be.
Specifically, data scientists draw a distinction between so-called "black-box" and "white-box" AI models: White-box models typically include just a few simple rules, presented, for example, as a decision tree or a simple linear model with a limited number of parameters. Because of the small number of rules or parameters, the processes behind these algorithms can typically be understood by humans.
In contrast, black-box models use hundreds or even thousands of decision trees (known as random forests), or billions of parameters (as deep learning models do), to inform their outputs. Cognitive load theory has shown that humans can only comprehend models with up to about seven rules or nodes, making it functionally impossible for observers to explain the decisions made by black-box systems. But does their complexity necessarily make black-box models more accurate?
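The contrast can be sketched with a toy credit-approval example. The rules and thresholds below are invented for illustration, not drawn from any real lending model:

```python
# A white-box credit model: three human-readable rules a loan officer
# could recite. (All thresholds here are hypothetical.)
def white_box_approve(income, debt_ratio, years_employed):
    """Approve if income is sufficient, debt is low, and employment is stable."""
    if income < 30_000:
        return False
    if debt_ratio > 0.4:
        return False
    return years_employed >= 2

# A black-box model would replace these three rules with, say, an ensemble
# of hundreds of trees or millions of learned weights -- no single rule a
# human could point to as "the reason" for any one decision.

print(white_box_approve(income=45_000, debt_ratio=0.3, years_employed=5))  # True
print(white_box_approve(income=45_000, debt_ratio=0.6, years_employed=5))  # False
```

Each rule in a model like this maps directly to an explanation a rejected applicant could be given, which is exactly what the many-parameter alternative cannot offer.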
To explore this question, we conducted a rigorous, large-scale analysis of how black-box and white-box models performed on a broad array of nearly 100 representative datasets (known as benchmark classification datasets), spanning domains such as pricing, medical diagnosis, bankruptcy prediction, and purchasing behavior. We found that for almost 70% of the datasets, the black-box and white-box models produced similarly accurate results. In other words, more often than not, there was no tradeoff between accuracy and explainability: A more-explainable model could be used without sacrificing accuracy.
This is consistent with other emerging research exploring the potential of explainable AI models, as well as our own experience working on case studies and projects with companies across diverse industries, geographies, and use cases. For example, it has been repeatedly demonstrated that COMPAS, the complicated black-box tool that's widely used in the U.S. justice system to predict the likelihood of future arrests, is no more accurate than a simple predictive model that considers only age and criminal history. Similarly, a research team created a loan-default prediction model that was simple enough for average banking customers to easily understand, and found that it was less than 1% less accurate than an equivalent black-box model (a difference that was within the margin of error).
Of course, there are some cases in which black-box models are still beneficial. But in light of the downsides, our research suggests several steps companies should take before adopting a black-box approach:
As a rule of thumb, white-box models should be used as benchmarks to assess whether black-box models are necessary. Before choosing a type of model, organizations should test both and if the difference in performance is insignificant, the white-box option should be selected.
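This selection rule can be made concrete. A minimal sketch of the decision logic, assuming you have already measured both models' accuracy on a held-out test set (the one-percentage-point tolerance below is a hypothetical choice; in practice it should reflect the margin of error on your data):

```python
def choose_model(white_box_acc, black_box_acc, tolerance=0.01):
    """Prefer the explainable model unless the black box is meaningfully better.

    `tolerance` is the accuracy gap you are willing to treat as insignificant
    (here one percentage point, an illustrative default).
    """
    if black_box_acc - white_box_acc > tolerance:
        return "black-box"
    return "white-box"

print(choose_model(0.86, 0.865))  # white-box: a 0.5-point gap is within tolerance
print(choose_model(0.80, 0.90))   # black-box: a 10-point gap is significant
```

The key design choice is that the default is the explainable model; the burden of proof sits with the black box.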
One of the main factors that will determine whether a black-box model is necessary is the data involved. First, the decision depends on the quality of the data. When data is noisy (i.e., when it includes a lot of erroneous or meaningless information), relatively simple white-box methods tend to be effective. For example, we spoke with analysts at Morgan Stanley who found that for their highly noisy financial datasets, simple trading rules such as "buy stock if the company is undervalued, has underperformed recently, and is not too large" worked well.
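A rule like that one reads almost directly as code. The thresholds below are invented for illustration; the article does not disclose the actual criteria the analysts used:

```python
def buy_signal(price, fair_value, recent_return, market_cap):
    """Buy if undervalued, recently underperforming, and not too large.

    All three cutoffs are hypothetical stand-ins, shown only to illustrate
    how a white-box trading rule stays auditable at a glance.
    """
    undervalued = price < fair_value * 0.9     # trades >10% below fair value
    underperformed = recent_return < 0.0       # negative recent return
    not_too_large = market_cap < 50e9          # under $50B market cap
    return undervalued and underperformed and not_too_large

print(buy_signal(price=80, fair_value=100, recent_return=-0.05, market_cap=10e9))  # True
print(buy_signal(price=95, fair_value=100, recent_return=-0.05, market_cap=10e9))  # False: not undervalued
```

On noisy data, a rule this coarse can match far more elaborate models precisely because it has almost nothing to overfit.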
Second, the type of data also affects the decision. For applications that involve multimedia data such as images, audio, and video, black-box models may offer superior performance. For instance, we worked with a company that was developing AI models to help airport staff predict security risk based on images of air cargo. They found that black-box models had a higher chance of detecting high-risk cargo items that could pose a security threat than equivalent white-box models did. These black-box tools enabled inspection teams to save thousands of hours by focusing more on high-risk cargo, substantially boosting the organization's performance on security metrics. In similarly complex applications such as face detection for cameras, vision systems in autonomous vehicles, facial recognition, image-based medical diagnostic devices, illegal/toxic content detection, and, most recently, generative AI tools like ChatGPT and DALL-E, a black-box approach may be advantageous or even the only feasible option.
Transparency is always important for building and maintaining trust, but it's especially critical for particularly sensitive use cases. In situations where a fair decision-making process is of utmost importance to your users, or in which some form of procedural justice is a requirement, it may make sense to prioritize explainability even if your data might otherwise lend itself to a black-box approach, or if you've found that less-explainable models are slightly more accurate.
For instance, in domains such as hiring, allocation of organs for transplant, and legal decisions, opting for a simple, rule-based, white-box AI system will reduce risk to both the organization and its users. Many leaders have discovered these risks the hard way: In 2015, Amazon found that its automated candidate screening system was biased against female software developers, while a Dutch AI welfare fraud detection tool was shut down in 2018 after critics decried it as "a large and non-transparent black hole."
An organization's choice between white- and black-box AI also depends on its own level of AI readiness. For organizations that are less digitally developed, in which employees tend to have less trust in or understanding of AI, it may be best to start with simpler models before progressing to more complex solutions. That typically means implementing a white-box model that everyone can easily understand, and only exploring black-box options once teams have become more accustomed to using these tools.
For example, we worked with a global beverage company that launched a simple white-box AI system to help employees optimize their daily workflows. The system offered limited recommendations, such as which products should be promoted and how much of different products should be restocked. Then, as the organization matured in its use of and trust in AI, managers began to test out whether more complex, black-box alternatives might offer advantages in any of these applications.
In certain domains, explainability might be a legal requirement, not a nice-to-have. For instance, in the U.S., the Equal Credit Opportunity Act requires financial institutions to be able to explain why credit has been denied to a loan applicant. Similarly, Europe's General Data Protection Regulation (GDPR) suggests that employers should be able to explain how candidates' data has been used to inform hiring decisions. When organizations are required by law to be able to explain the decisions made by their AI models, white-box models are the only option.
Finally, there are of course contexts in which black-box models are both undeniably more accurate (as was the case in 30% of the datasets we tested in our study) and acceptable with respect to regulatory, organizational, or user-specific concerns. For example, applications such as computer vision for medical diagnoses, fraud detection, and cargo management all benefit greatly from black-box models, and the legal or logistical hurdles they pose tend to be more manageable. In cases like these, if an organization does decide to implement an opaque AI model, it should take steps to address the trust and safety risks associated with a lack of explainability.
In some cases, it is possible to develop an explainable white-box proxy to clarify, in approximate terms, how a black-box model has reached a decision. Even if this explanation isn't fully accurate or complete, it can go a long way toward building trust, reducing bias, and increasing adoption. In addition, a greater (if imperfect) understanding of the model can help developers further refine it, adding more value for these businesses and their end users.
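A deliberately minimal sketch of the proxy idea: treat the black box as an oracle, label some inputs with it, and fit the simplest possible rule (a single threshold) that best reproduces its labels. Real surrogates fit shallow trees or sparse linear models instead, but the one-threshold version shows the mechanism:

```python
def fit_surrogate(black_box, inputs):
    """Fit a one-threshold white-box proxy that mimics a black-box classifier.

    `black_box` is any opaque function mapping a numeric score to 0/1.
    The proxy searches for the single cutoff that best reproduces its labels,
    and reports fidelity: the fraction of inputs where proxy and box agree.
    """
    labels = [black_box(x) for x in inputs]
    best_cut, best_agree = None, -1
    for cut in inputs:
        agree = sum((x >= cut) == bool(y) for x, y in zip(inputs, labels))
        if agree > best_agree:
            best_cut, best_agree = cut, agree
    return best_cut, best_agree / len(inputs)

# A stand-in "black box" (in practice: a random forest or neural net).
opaque = lambda x: int(x * x > 25)

cut, fidelity = fit_surrogate(opaque, list(range(11)))
print(f"Proxy rule: positive if score >= {cut} (matches black box {fidelity:.0%})")
```

The fidelity number matters: it tells stakeholders how much of the black box's behavior the simple explanation actually captures, which is the honest way to present an approximate proxy.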
In other cases, organizations may truly have very limited insight into why a model makes the decisions it does. If an approximate explanation isn't possible, leaders can still prioritize transparency in how they talk about the model, both internally and externally, openly acknowledging the risks and working to address them.
***
Ultimately, there is no one-size-fits-all solution to AI implementation. All new technology comes with risks, and the choice of how to balance those risks with the potential rewards will depend on the specific business context and data. But our research demonstrates that in many cases, simple, interpretable AI models perform just as well as black-box alternatives, without sacrificing users' trust or allowing hidden biases to drive decisions.
The authors would like to acknowledge Gaurav Jha and Sofie Goethals for their contribution.
AI Can Be Both Accurate and Transparent - HBR.org Daily