Ethical frameworks, tool kits, principles, and oaths - Oh my!

By: Kathy Baxter

Last updated: October 23, 2019

Although it may appear that the topic of ethics in AI is brand new, tech ethicists have been around for decades, mostly in academia and non-profits. As a result, dozens of ethical tools have been created. In fact, doteveryone maintains a 37-page (and growing) directory of resources. If you are thinking about incorporating ethics into your company's culture or product development cycle, check these out before you try reinventing the wheel.

Frameworks

  • Ethical OS Framework by IFTF and Omidyar Network
    "The Ethical Operating System can help makers of tech, product managers, engineers, and others get out in front of problems before they happen. It’s been designed to facilitate better product development, faster deployment, and more impactful innovation. All while striving to minimize technical and reputational risks. This toolkit can help inform your design process today and manage risks around existing technologies in the future."
  • Responsible AI in Consumer Enterprise by Integrate.ai
    A framework to help organizations operationalize ethics, privacy, and security as they apply machine learning and artificial intelligence.
  • UK Data Ethics Framework by the UK government
    Includes principles, guidance, and a workbook for recording the decisions made.
  • A Moral Framework for Understanding of Fair ML through Economic Models of Equality of Opportunity - paper by ETH Zurich
  • An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations by AI4People
    "We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations to assess, to develop, to incentivize, and to support good AI, which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society"
  • Ethics Canvas by ADAPT Centre
    "The Ethics Canvas helps you structure ideas about the ethical implications of the projects you are working on, to visualise them and to resolve them. The Ethics Canvas has been developed to encourage educators, entrepreneurs, engineers and designers to engage with ethics in their research and innovation projects. Research and innovation foster great benefits for society, but also raise important ethical concerns."
  • A Proposed Model Artificial Intelligence Governance Framework by Singapore Personal Data Protection Commission
    "The PDPC presents the first edition of A Proposed Model AI Governance Framework (Model Framework) - an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way.

    "The Model Framework translates ethical principles into practical measures that can be implemented by organisations deploying AI solutions at scale. Through the Model Framework, we aim to promote AI adoption while building consumer confidence and trust in providing their personal data for AI."

Toolkits

  • Ethics in Tech Toolkit for engineering and design practice by Santa Clara University's Markkula Center for Applied Ethics
    "Each tool performs a different ethical function, and can be further customized for specific applications. Team/project leaders should reflect carefully on how each tool can best be used in their team or project settings."
  • Ethics and Algorithms Toolkit by City and County of San Francisco Data Science Team
    A risk management framework for governments (and other people too!)
  • AI Fairness & Explainability 360 by IBM
    Open source with case studies, code, and anti-bias algorithms, tutorials, demos & state-of-the-art explainability algorithms (White paper)
  • Playing with AI Fairness: What-if Tool by Google
    "Google's new machine learning diagnostic tool lets users try on five different types of fairness"
  • Aequitas by University of Chicago Center for Data Science and Public Policy
    "The Bias Report is powered by Aequitas, an open-source bias audit toolkit for machine learning developers, analysts, and policymakers to audit machine learning models for discrimination and bias, and make informed and equitable decisions around developing and deploying predictive risk-assessment tools."
  • Design Ethically Toolkit by Kat Zhou (IBM)
    A library of exercises and resources to integrate ethical design into your practice.
  • Algorithmic Accountability Policy Toolkit by the AI Now Institute
    "The following toolkit is intended to provide legal and policy advocates with a basic understanding of government use of algorithms, including a breakdown of key concepts and questions that may come up when engaging with this issue, an overview of existing research, and summaries of algorithmic systems currently used in government. This toolkit also includes resources for advocates interested in or currently engaged in work to uncover where algorithms are being used and to create transparency and accountability mechanisms."
  • LIME by the University of Washington
    Open-source toolkit for "explaining the predictions of any machine learning classifier"; see the sketch after this list.
  • Responsible AI Toolkit by PwC
    "Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner - from strategy through execution. With the Responsible AI toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity."
  • AI Board Toolkit by the World Economic Forum (not yet released)
    "This project aims to create a toolkit for corporate boards to identify specific benefits of AI for their companies and concrete ways to design, develop, and deploy it responsibly."
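The AI Fairness 360 entry above hides most of its machinery behind a small Python API. As a rough, illustrative sketch only (the toy hiring table and the "sex"/"hired" column names are made up for this example, not taken from IBM's documentation), computing two common group-fairness metrics with aif360 looks roughly like this:

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy, made-up data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the binary outcome (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],
    "score": [0.9, 0.7, 0.6, 0.8, 0.4, 0.3, 0.5, 0.6],
    "hired": [1, 1, 0, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Ratio of favorable-outcome rates (unprivileged / privileged); 1.0 means parity.
print("Disparate impact:", metric.disparate_impact())
# Difference in favorable-outcome rates; 0.0 means parity.
print("Statistical parity difference:", metric.statistical_parity_difference())
```

From here, the toolkit's mitigation algorithms (for example, Reweighing in aif360.algorithms.preprocessing) can be applied to the same dataset object before retraining a model.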
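LIME, also listed above, works by fitting a simple, interpretable surrogate model around one prediction at a time, so all it needs from your model is a probability function. A minimal sketch, using scikit-learn and the iris dataset purely for illustration:

```python
# pip install lime scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any classifier -- LIME only needs its predict_proba function.
iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)

# Explain the model's prediction for the first flower as a weighted list of
# feature contributions toward class 0 ("setosa").
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, labels=(0,), num_features=4
)
for feature, weight in explanation.as_list(label=0):
    print(f"{feature}: {weight:+.3f}")
```

The same library also ships text and image explainers that follow the same explain_instance pattern.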

Checklists

Principles

Oaths, Manifestoes, and Codes of Conduct

Policy Papers and Resources