Although it may appear that the topic of ethics in AI is brand new, tech ethicists have been around for decades, mostly in academia and non-profits. As a result, dozens of ethical tools have been created. In fact, doteveryone maintains a 37-page (and growing) directory of resources. If you are thinking about incorporating ethics into your company's culture or product development cycle, check these out before you try reinventing the wheel.
Ethical OS Framework by IFTF and Omidyar Network "The Ethical Operating System can help makers of tech, product managers, engineers, and others get out in front of problems before they happen. It’s been designed to facilitate better product development, faster deployment, and more impactful innovation. All while striving to minimize technical and reputational risks. This toolkit can help inform your design process today and manage risks around existing technologies in the future."
An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations by AI4People "We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations to assess, to develop, to incentivize, and to support good AI, which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society."
Ethics Canvas by ADAPT Centre "The Ethics Canvas helps you structure ideas about the ethical implications of the projects you are working on, to visualise them and to resolve them. The Ethics Canvas has been developed to encourage educators, entrepreneurs, engineers and designers to engage with ethics in their research and innovation projects. Research and innovation foster great benefits for society, but also raise important ethical concerns."
A Proposed Model Artificial Intelligence Governance Framework by Singapore Personal Data Protection Commission "The PDPC presents the first edition of A Proposed Model AI Governance Framework (Model Framework) - an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way.
"The Model Framework translates ethical principles into practical measures that can be implemented by organisations deploying AI solutions at scale. Through the Model Framework, we aim to promote AI adoption while building consumer confidence and trust in providing their personal data for AI."
Aequitas by University of Chicago Center for Data Science and Public Policy "The Bias Report is powered by Aequitas, an open-source bias audit toolkit for machine learning developers, analysts, and policymakers to audit machine learning models for discrimination and bias, and make informed and equitable decisions around developing and deploying predictive risk-assessment tools."
Design Ethically Toolkit by Kat Zhou (IBM): A library of exercises and resources for integrating ethical design into your practice.
Algorithmic Accountability Policy Toolkit by AI Now Institute "The following toolkit is intended to provide legal and policy advocates with a basic understanding of government use of algorithms including, a breakdown of key concepts and questions that may come up when engaging with this issue, an overview of existing research, and summaries of algorithmic systems currently used in government. This toolkit also includes resources for advocates interested in or currently engaged in work to uncover where algorithms are being used and to create transparency and accountability mechanisms."
Lime by the University of Washington: an open-source toolkit for "explaining the predictions of any machine learning classifier."
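To give a feel for the idea behind Lime (local, perturbation-based explanations), here is a rough, stdlib-only sketch, not Lime's actual API: it perturbs one input, queries a stand-in black-box model, and ranks features by their proximity-weighted covariance with the prediction. The model and function names are hypothetical.

```python
import random

# Hypothetical stand-in for any black-box classifier Lime could explain:
# predicts class 1 when the first two features together are large.
def black_box(x):
    return 1.0 if x[0] + x[1] > 1.0 else 0.0

def local_importances(instance, n_samples=500, scale=0.3):
    """Crude sketch of Lime's core idea: sample points near the instance,
    query the model, and score each feature by its proximity-weighted
    covariance with the model's output. Lime itself fits a weighted
    linear surrogate model instead of this simple covariance."""
    rng = random.Random(0)
    rows = []
    for _ in range(n_samples):
        z = [v + rng.gauss(0, scale) for v in instance]
        d2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        w = 2.718281828 ** (-d2)  # nearby perturbations count more
        rows.append((z, black_box(z), w))
    total_w = sum(w for _, _, w in rows)
    mean_y = sum(y * w for _, y, w in rows) / total_w
    importances = []
    for j in range(len(instance)):
        mean_x = sum(z[j] * w for z, _, w in rows) / total_w
        cov = sum(w * (z[j] - mean_x) * (y - mean_y)
                  for z, y, w in rows) / total_w
        importances.append(cov)
    return importances

# Explain a point near the decision boundary: the first two features
# should score high, the irrelevant third feature near zero.
imp = local_importances([0.5, 0.5, 0.0])
```

Lime's real tabular interface lives in `lime.lime_tabular.LimeTabularExplainer`; this toy version only illustrates why perturb-and-reweigh recovers locally important features.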
PwC Responsible AI Toolkit: "Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner - from strategy through execution. With the Responsible AI toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity."
Microsoft InterpretML: "InterpretML is an open-source python package for training interpretable machine learning models and explaining blackbox systems."
Microsoft Fairlearn: "The fairlearn project seeks to enable anyone involved in the development of artificial intelligence (AI) systems to assess their system's fairness and mitigate the observed unfairness. The fairlearn repository contains a Python package and Jupyter notebooks with the examples of usage."
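To make the "assess fairness" part concrete, below is a hand-rolled version of one metric Fairlearn exposes, the demographic parity difference: the largest gap in positive-prediction (selection) rate across groups. Fairlearn's own `fairlearn.metrics.demographic_parity_difference` has a different signature; this sketch only shows the underlying arithmetic.

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate (share of positive predictions)
    between any two groups. 0.0 means every group is selected at the
    same rate; values near 1.0 signal severe disparity."""
    counts = {}  # group -> (total, positives)
    for yhat, g in zip(y_pred, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (1 if yhat == 1 else 0))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

# Toy predictions: the model selects 3/4 of group "a" but only 1/4 of
# group "b", so the demographic parity difference is 0.75 - 0.25 = 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(y_pred, groups)  # 0.5
```

Fairlearn pairs metrics like this with mitigation algorithms that retrain or post-process a model to shrink the gap; this snippet covers only the measurement side.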
Pymetrics Audit AI: "audit-AI is a tool to measure and mitigate the effects of discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes."
World Economic Forum’s AI Board Toolkit (not yet released) "This project aims to create a toolkit for corporate boards to identify specific benefits of AI for their companies and concrete ways to design, develop, and deploy it responsibly."
Deon by DrivenData "deon is a command line tool that allows you to easily add an ethics checklist to your data science projects. We support creating a new, standalone checklist file or appending a checklist to an existing analysis in many common formats."
Visualization of AI and Human Rights by Berkman Klein Center "Our data visualization presents thirty-two sets of principles side by side, enabling comparison between efforts from governments, companies, advocacy groups, and multi-stakeholder initiatives."