Responsible AI in the Enterprise

By Adnan Masood, Heather Dawe

Overview of this book

Responsible AI in the Enterprise is a comprehensive guide to implementing ethical, transparent, and compliant AI systems in an organization. With a focus on understanding key concepts of machine learning models, this book equips you with techniques and algorithms to tackle complex issues such as bias, fairness, and model governance. Throughout the book, you’ll gain an understanding of FairLearn and InterpretML, along with the Google What-If Tool, ML Fairness Gym, the IBM AI Fairness 360 toolkit, and Aequitas. You’ll uncover various aspects of responsible AI, including model interpretability, monitoring and management of model drift, and compliance recommendations. You’ll gain practical insights into using AI governance tools to ensure fairness, bias mitigation, explainability, and privacy compliance in an enterprise setting. Additionally, you’ll explore interpretability toolkits and fairness measures offered by major cloud AI providers such as IBM, Amazon, Google, and Microsoft, while discovering how to use FairLearn for fairness assessment and bias mitigation. You’ll also learn to build explainable models using global and local feature summaries, local surrogate models, Shapley values, anchors, and counterfactual explanations. By the end of this book, you’ll be well equipped with tools and techniques to create transparent and accountable machine learning models.
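As a taste of the fairness assessment workflow the book covers, the following is a minimal sketch of a FairLearn group-metric check; the toy labels, predictions, and the "sex" sensitive feature are hypothetical and used purely for illustration.

    # Minimal sketch of a FairLearn fairness assessment (illustrative only).
    # The labels, predictions, and "sex" sensitive feature below are hypothetical.
    from fairlearn.metrics import MetricFrame, demographic_parity_difference
    from sklearn.metrics import accuracy_score

    y_true = [0, 1, 1, 0, 1, 0, 1, 1]               # ground-truth labels (toy data)
    y_pred = [0, 1, 0, 0, 1, 1, 1, 0]               # model predictions (toy data)
    sex = ["F", "F", "F", "F", "M", "M", "M", "M"]  # sensitive feature (toy data)

    # Accuracy computed per group, plus the largest gap between groups
    mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                     sensitive_features=sex)
    print(mf.by_group)      # per-group accuracy
    print(mf.difference())  # maximum accuracy difference across groups

    # Demographic parity difference: gap in selection rates across groups
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))

MetricFrame slices any scikit-learn-style metric by a sensitive feature, making per-group disparities visible before any mitigation is applied.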
Table of Contents (16 chapters)

Part 1: Bigot in the Machine – A Primer
Part 2: Enterprise Risk Observability Model Governance
Part 3: Explainable AI in Action

References and further reading

  1. https://fairmlbook.org/tutorial2.html
  2. https://fairmlbook.org/tutorial2.html
  3. Non-functional requirements (list of system quality attributes): https://en.wikipedia.org/wiki/List_of_system_quality_attributes
  4. https://www.merriam-webster.com/thesaurus/explainable
  5. Ethics guidelines for trustworthy AI. The umbrella term implies that the decision-making process of AI systems must be transparent, and the capabilities and purpose of the systems must be openly communicated to those affected. Even though it may not always be possible to provide an explanation for why a model generated a particular output or decision, efforts must be made to make the decision-making process as clear as possible. When the decision-making process of a model is not transparent, it is referred to as a “black box” algorithm and requires special attention. In these cases, other measures such as traceability, auditability, and transparent communication on system capabilities may be required.
  6. Even though the terms might sound similar, explicability refers to a broader concept of transparency, communication, and understanding in machine learning, while explainability is specifically focused on the ability to provide clear and understandable explanations for how a model makes its decisions. While explainability is a specific aspect of explicability, explicability encompasses a wider range of measures to ensure the decision-making process of a machine learning model is understood and trusted.
  7. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images: https://arxiv.org/abs/1412.1897
  8. https://www.youtube.com/watch?v=93Xv8vJ2acI
  9. https://fairmlbook.org/tutorial2.html
  10. https://fairmlbook.org/tutorial2.html
  11. https://blogs.partner.microsoft.com/mpn/shared-responsibility-ai-2/
  12. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  13. https://en.oxforddictionaries.com/definition/ethics
  14. https://hbswk.hbs.edu/item/minorities-who-whiten-job-resumes-get-more-interviews
  15. Interpretability is necessary for Machine Learning: https://www.youtube.com/watch?v=93Xv8vJ2acI
  16. https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/
  17. Geoff Hinton Dismissed The Need For Explainable AI: 8 Experts Explain Why He’s Wrong: https://www.forbes.com/sites/cognitiveworld/2018/12/20/geoff-hinton-dismissed-the-need-for-explainable-ai-8-experts-explain-why-hes-wrong
  18. In defense of the black box: https://pubmed.ncbi.nlm.nih.gov/30948538/
  19. https://dictionary.cambridge.org/us/dictionary/english/ymmv
  20. Interpretability is necessary for Machine Learning: https://www.youtube.com/watch?v=93Xv8vJ2acI
  21. Interpretable Machine Learning by Christoph Molnar: https://christophm.github.io/interpretable-ml-book/
  22. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning by Wojciech Samek et al.: https://books.google.co.in/books?id=j5yuDwAAQBAJ
  23. Fairness and Machine Learning by Solon Barocas, Moritz Hardt, and Arvind Narayanan: https://fairmlbook.org/
  24. The Ethics of Artificial Intelligence by Nick Bostrom and Eliezer Yudkowsky: https://intelligence.org/files/EthicsofAI.pdf
  25. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil: https://www.goodreads.com/book/show/29981085-weapons-of-math-destruction
  26. Explainable AI (XAI) by Defense Advanced Research Projects Agency (DARPA): https://www.darpa.mil/program/explainable-artificial-intelligence