Ethical AI frameworks, tool kits, principles, and certifications - Oh my!

By: Kathy Baxter
Image source: pixabay.com

Last updated: August 3, 2021

Originally created: January 14, 2019

Although it may appear that the topic of ethics in AI is brand new, tech ethicists have been around for decades, mostly in academia and non-profits. As a result, dozens of ethical tools have been created. In fact, doteveryone has a 39-page (and growing) alphabetized directory of resources. If you are thinking about incorporating ethics into your company's culture or product development cycle, check these out before you try reinventing the wheel.

Frameworks

  • Ethical OS Framework by IFTF and Omidyar Network
    "The Ethical Operating System can help makers of tech, product managers, engineers, and others get out in front of problems before they happen. It’s been designed to facilitate better product development, faster deployment, and more impactful innovation. All while striving to minimize technical and reputational risks. This toolkit can help inform your design process today and manage risks around existing technologies in the future."
  • Responsible AI in Consumer Enterprise by Integrate.ai
    A framework to help organizations operationalize ethics, privacy, and security as they apply machine learning and artificial intelligence.
  • UK Data Ethics Framework by UK gov
    Includes principles, guidance, and a workbook to record decisions made.
  • A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity - paper by ETH Zurich
  • An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations by AI4People
    "We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations to assess, to develop, to incentivize, and to support good AI, which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society"
  • Ethics Canvas by ADAPT Centre
    "The Ethics Canvas helps you structure ideas about the ethical implications of the projects you are working on, to visualise them and to resolve them. The Ethics Canvas has been developed to encourage educators, entrepreneurs, engineers and designers to engage with ethics in their research and innovation projects. Research and innovation foster great benefits for society, but also raise important ethical concerns."
  • A Proposed Model Artificial Intelligence Governance Framework by Singapore Personal Data Protection Commission
    "The PDPC presents the first edition of A Proposed Model AI Governance Framework (Model Framework) - an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way. The Model Framework translates ethical principles into practical measures that can be implemented by organisations deploying AI solutions at scale. Through the Model Framework, we aim to promote AI adoption while building consumer confidence and trust in providing their personal data for AI."
  • WEFE: The Word Embeddings Fairness Evaluation Framework by Pablo Badilla, Felipe Bravo-Marquez, Jorge Pérez
    "Word Embedding Fairness Evaluation (WEFE) is an open source library for measuring bias in word embedding models. It generalizes many existing fairness metrics into a unified framework and provides a standard interface..."
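To make the kind of measurement WEFE unifies more concrete: many word-embedding bias metrics (such as WEAT, one of the tests WEFE generalizes) compare how strongly two target word sets associate with two attribute word sets via cosine similarity. Below is a minimal plain-Python sketch of a WEAT-style score; the toy 2-D vectors and word sets are invented for illustration, and WEFE's actual API differs.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, A, B):
    # Mean similarity of word vector w to attribute set A,
    # minus its mean similarity to attribute set B.
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def weat_score(X, Y, A, B):
    # WEAT-style test statistic: summed association of target set X
    # minus target set Y; positive means X leans toward attribute set A.
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

# Toy 2-D "embeddings" (illustrative only, not real word vectors)
career = [[1.0, 0.1], [0.9, 0.2]]   # target set X
family = [[0.1, 1.0], [0.2, 0.9]]   # target set Y
male   = [[1.0, 0.0]]               # attribute set A
female = [[0.0, 1.0]]               # attribute set B

print(weat_score(career, family, male, female))  # positive: bias detected
```

A library like WEFE wraps this pattern behind a standard query interface so the same word sets can be scored with many metrics and many embedding models.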

Tools and Toolkits

  • People + AI Research Guidebook by Google
    "A friendly, practical guide that lays out some best practices for creating useful, responsible AI applications."
  • Model Card Toolkit by Google
    "The Model Card Toolkit (MCT) streamlines and automates generation of Model Cards [1], machine learning documents that provide context and transparency into a model's development and performance. Integrating the MCT into your ML pipeline enables the sharing of model metadata and metrics with researchers, developers, reporters, and more."
  • Playing with AI Fairness: What-If Tool by Google
    "Google's new machine learning diagnostic tool lets users try on five different types of fairness"
  • Ethics in Tech Toolkit for engineering and design practice by Santa Clara Univ. Markkula Center
    "Each tool performs a different ethical function, and can be further customized for specific applications. Team/project leaders should reflect carefully on how each tool can best be used in their team or project settings."
  • Responsible Innovation: A Best Practices Toolkit by Microsoft
    "This toolkit provides developers with a set of practices in development, for anticipating and addressing the potential negative impacts of technology on people."
  • Human-AI eXperience (HAX) Toolkit by Microsoft
    "The Guidelines for Human-AI Interaction provide best practices for how an AI system should interact with people. The HAX Workbook drives team alignment when planning for Guideline implementation. The HAX design patterns save you time by describing how to apply established solutions when implementing the Guidelines. The HAX Playbook helps you identify and plan for common interaction failure scenarios. You can browse Guidelines, design patterns, and many examples in the HAX Design Library."
  • Microsoft Interpretable ML: "InterpretML is an open-source python package for training interpretable machine learning models and explaining blackbox systems."
  • Microsoft Fairlearn: "The fairlearn project seeks to enable anyone involved in the development of artificial intelligence (AI) systems to assess their system's fairness and mitigate the observed unfairness. The fairlearn repository contains a Python package and Jupyter notebooks with the examples of usage."
  • Ethics and Algorithms Toolkit by City and County of San Francisco Data Science Team
    A risk management framework for governments (and other people too!)
  • AI Fairness & Explainability 360 by IBM
    Open source, with case studies, code, anti-bias algorithms, tutorials, demos, and state-of-the-art explainability algorithms (white paper available)
  • Aequitas by University of Chicago Center for Data Science and Public Policy
    "The Bias Report is powered by Aequitas, an open-source bias audit toolkit for machine learning developers, analysts, and policymakers to audit machine learning models for discrimination and bias, and make informed and equitable decisions around developing and deploying predictive risk-assessment tools."
  • Design Ethically Toolkit by Kat Zhou (IBM)
    A library of exercises and resources to integrate ethical design into your practice.
  • Algorithmic Accountability Policy Toolkit - AI Now Institute
    "The following toolkit is intended to provide legal and policy advocates with a basic understanding of government use of algorithms including, a breakdown of key concepts and questions that may come up when engaging with this issue, an overview of existing research, and summaries of algorithmic systems currently used in government. This toolkit also includes resources for advocates interested in or currently engaged in work to uncover where algorithms are being used and to create transparency and accountability mechanisms."
  • Lime by the University of Washington
    Open-source toolkit for "explaining the predictions of any machine learning classifier."
  • PwC Responsible AI Toolkit: "Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner - from strategy through execution. With the Responsible AI toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity."
  • The MSW@USC Diversity Toolkit: A Guide to Discussing Identity, Power and Privilege: "This toolkit is meant for anyone who feels there is a lack of productive discourse around issues of diversity and the role of identity in social relationships, both on a micro (individual) and macro (communal) level."
  • Pymetrics Audit AI: "audit-AI is a tool to measure and mitigate the effects of discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes."
  • World Economic Forum’s AI Board Toolkit: "Empowering AI Leadership: An Oversight Toolkit for Boards of Directors. This resource for boards of directors consists of: an introduction; 12 modules intended to align with traditional board committees, working groups and oversight concerns; and a glossary of artificial intelligence (AI) terms."
  • BLM Privacy Bot by Stanford ML Group
    Anonymizes photos of BLM protesters
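Several of the toolkits above (Fairlearn, Aequitas, audit-AI) center on group fairness metrics computed over a model's predictions. As a plain-Python illustration of the simplest such metric — the demographic parity difference, i.e., the gap in positive-prediction rates across groups — here is a minimal sketch; the data is invented and the libraries' actual APIs differ.

```python
def selection_rate(preds):
    # Fraction of positive (1) predictions in a list of 0/1 labels.
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, groups):
    # Largest gap in selection rate between any two groups present.
    by_group = {}
    for pred, g in zip(y_pred, groups):
        by_group.setdefault(g, []).append(pred)
    rates = [selection_rate(p) for p in by_group.values()]
    return max(rates) - min(rates)

# Toy audit: model predictions alongside a sensitive attribute
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is selected at rate 0.75, group "b" at 0.25.
print(demographic_parity_difference(y_pred, groups))  # → 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; the toolkits above add many more such metrics, statistical tests, and mitigation algorithms on top of this basic idea.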

Checklists

Principles

AI Ethics Courses and Certifications

  • Responsible AI Governance Badge Program by EqualAI
    "The EqualAI Badge© Program, in collaboration with the World Economic Forum, prepares senior executives at companies developing or using AI for critical functions to ensure their brand is known for its responsible and inclusive practices. By the end of the program, senior executives will learn the ‘How Tos’ of developing and maintaining responsible AI governance, will join an emerging community and network of like-minded senior executives, and will earn the EqualAI badge of certification for learning best practices for AI governance."
  • The Ethics of AI by University of Helsinki
    "The Ethics of AI is a free online course created by the University of Helsinki. The course is for anyone who is interested in the ethical aspects of AI – we want to encourage people to learn what AI ethics means, what can and can’t be done to develop AI in an ethically sustainable way, and how to start thinking about AI from an ethical point of view."
  • Ethics of AI: Safeguarding Humanity by MIT
    "Learn to navigate the ethical challenges inherent to AI development and implementation. Led by MIT thought leaders, this course will deepen your understanding of AI as you examine machine bias and other ethical risks, and assess your individual and corporate responsibilities. Over the course of three days, you’ll address the ethical aspects of AI deployment in your workplace—and gain a greater understanding of how to utilize AI in ways that benefit mankind."
  • Certified Ethical Emerging Technologist (also on Coursera)
    "Over five courses, our AI founders, ethicists, and researchers will lead you through foundational ethical principles; industry standard ethical frameworks; ethical risk identification and mitigation; effective communication about ethical challenges; and the organizational governance required to create ethical, trusted, and inclusive data-driven technologies. When students complete all five courses, they will be ready to act as ethical leaders, prepared to bridge the gap between theory and practice."
  • Artificial Intelligence Ethics in Action by LearnQuest (via Coursera)
    "AI Ethics research is an emerging field, and to prove our skills, we need to demonstrate our critical thinking and analytical ability. Since it's not reasonable to jump into a full research paper with our newly founded skills, we will instead work on 3 projects that will demonstrate your ability to analyze ethical AI across a variety of topics and situations. These projects include all the skills you've learned in this AI Ethics Specialization."

Oaths, Manifestoes, and Codes of Conduct

Policy Papers, White Papers, Statements, Reports

Newsletters/Magazines

Other Resources