August 6, 2019 – Earlier this year, Danielle Cass and I ran a workshop with 23 ethical AI practitioners from 15 organizations and shared insights into what they are doing that has been successful and the open questions and challenges they are working through. A few months later, Matt Marshall, Founder of VentureBeat, invited us to conduct a similar workshop at VentureBeat Transform, with the goal of expanding the representation of practitioners and moving the conversation to the next level.
On July 10th, 36 practitioners from 29 organizations came together for a two-hour, high-speed workshop to solve four challenges we, as a discipline, are working on:
Three primary categories arose in the breakout groups: 1) socialization/education, 2) processes to kick off every project, and 3) tools or processes used throughout product development.
One of the first tasks for people in ethical/responsible AI roles is educating everyone in the organization: socializing ethical considerations, including how to work responsibly. Whether through presentations, formal trainings (e.g., inclusion in machine learning classes), or other channels, the key is making the material relevant and specific to the organization.
Psychological safety was identified as a necessary precursor for organizations to have these difficult conversations in a productive way. Ethical discussions can result in visceral responses if individuals feel their values are being attacked. Demonstrating a constructive debate and providing a framework for prioritizing risks and responsibilities (see Taxonomy discussion further down) can help de-escalate tensions and enable productive dialog.
Publishing insights in easy-to-access locations for coworkers to leverage was also identified as necessary. Sales reps who want to tell customers how their organization creates responsible AI need ready access to relevant slides and talking points; having approved materials on hand mitigates the risk of exposing internally-facing documents that contain sensitive information. Similarly, publishing insights for customers and others working in this field raises everyone's awareness and helps move the conversation forward.
Participants shared a number of ways they explore the problem space at the beginning of a project including Consequence-Scanning Workshops, Scenario Planning, or one of the many toolkits or frameworks available today.
It’s imperative to begin by being clear on the goals to be accomplished, then identify the spectrum of intended and unintended consequences. Use an agreed-upon taxonomy to prioritize all of the risks identified. It’s unlikely that every possible risk can be mitigated, so it’s important to have a mechanism to objectively prioritize those that must be addressed.
One of the companies represented described their process as follows:
Create a Global Data Map
Create a global data map of all the data held by the various stakeholders in the company, organized by sensitivity and data source. Next, identify all the possible activities for which teams may use the data. Finally, identify any legal (e.g., GDPR), ethical, and best-practice considerations for each of those activities. It’s important to keep the matrix dynamic to accommodate new data and new uses.
Create a Risk Profile for Global Legal and Ethical Data Use
The global data map is a table that contains rows and columns for the different categories of data and identified activities for data use. Next, color-code the cells red, yellow or green to indicate the degree of constraints for each scenario. (See Figure 1 below).
Red = Data may not be used for that purpose
Yellow = A risk/benefit analysis must be conducted and documented prior to approval
Green = Data may safely be used for that purpose
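As a rough sketch, the color-coded data map can be modeled as a lookup from (data category, activity) pairs to risk levels. The category and activity names below are hypothetical examples, not from the workshop, and a real map would be kept dynamic as new data and uses appear:

```python
from enum import Enum

class Risk(Enum):
    RED = "data may not be used for this purpose"
    YELLOW = "risk/benefit analysis required before approval"
    GREEN = "data may safely be used for this purpose"

# Hypothetical global data map: (data category, activity) -> risk level.
DATA_MAP = {
    ("email_address", "third_party_sharing"): Risk.RED,
    ("email_address", "product_analytics"): Risk.YELLOW,
    ("aggregate_usage", "product_analytics"): Risk.GREEN,
}

def check_use(category: str, activity: str) -> Risk:
    """Look up a proposed data use. Unknown combinations default to
    YELLOW so they trigger a human review rather than silent approval."""
    return DATA_MAP.get((category, activity), Risk.YELLOW)
```

Defaulting unmapped combinations to yellow keeps the matrix fail-safe: a new data source or activity gets a documented review before anyone relies on it.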
Teams wishing to use data for purposes highlighted in yellow must go through a risk/benefit analysis and review with the data ethics team. Some of the questions asked in the review will include:
If the team is approved to use the data, a follow up review is required. Some of the questions in the follow up include:
Review boards and discussion forums with experts were identified as valuable resources in the previous workshop. Participants in this workshop noted the importance of providing context-specific, tangible recommendations. Checklists, which serve as useful references, question prompts, and reminders, help ensure everyone walks away with clear next steps and greater success.
Documentation throughout the process is crucial, providing transparency, accountability, and consistency. This includes term definitions, governance mechanisms (e.g., model cards, datasheets), feedback from reviews, and decision outcomes. Using approachable, simple language avoids miscommunication and encourages compliance.
Key themes that emerged in this breakout group included:
As described in the shared insights from our February workshop, leveraging existing infrastructure (like product review processes) into which you can embed ethics was a best practice echoed by the VentureBeat workshop participants. Whether you are leveraging an existing process to highlight ethics or creating new processes in the software, the product development lifecycle, or product evaluation, it’s critical to avoid overburdening the business. Processes must be lightweight, adaptable, and easy to use.
Relationships between engineering and product must be navigated delicately as ethics efforts scale across a company. Product managers and engineering managers may not be aligned, and because many things can change between the ideation phase and final production, introducing new ethics requirements late in the process can impede that alignment.
Several companies are starting to explore incentive structures so that ethics is built into employees’ roles. Another approach is building ethics into OKRs, making it one of the pillars of performance reviews.
This breakout discussion also examined how to build penalties around unethical behavior, specifically related to data use. Participants agreed scaling ethics efforts beyond ethics roles will require technical solutions, including open-source tools.
“Solving for this on a more technical level would be so awesome instead of having individual legal review of me with a checklist because that’s not scalable,” said one practitioner from a 3,100-person tech company who has delivered 47 ethics trainings in the past year.
Building a business case for “ethics-by-design” is a compelling path forward for scaling ethics. Similar to the standardization of privacy processes based on privacy law, the key to an “ethics-by-design” effort is to highlight the true risks through review boards and certification programs.
Education and training are pivotal but remain the biggest challenge for all tech companies in scaling “ethics-by-design.”
“The idea of building an entire workforce capable of thinking about ethics by design is the goal but this is not something that most technologists get in the university context right now,” said one practitioner from a giant tech company. “So how do we deliver the sort of basic skill set and competence to employ these kinds of design principles across the organization?”
One tech ethics practitioner shared their program: in addition to a required in-person modular training, they introduced a monthly ethics video series that addresses serious ethics issues, ranging from data access to corruption, in a fun, engaging way. The series has proven successful, with strong participation across the company.
This breakout group brainstormed measures and metrics to evaluate the ethical impact of their work. Drawing on security frameworks like Common Vulnerabilities and Exposures (CVE), which catalog information and cybersecurity vulnerabilities and exposure risks, ethical risks could be scored similarly, with the types of exposure treated as additive or even multiplicative. A system could then have a positive score overall while its sub-components reveal that specific populations are more vulnerable and at greatest risk of harm. Having a taxonomy to rate risks helps prioritize what must be addressed prior to launch.
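One way to make the additive/multiplicative idea concrete is to score each exposure per population and combine the scores both ways. The populations, exposure values, and combination rules below are illustrative assumptions, not a scoring standard from the workshop:

```python
# Hypothetical exposure scores per population (0 = no risk, 1 = certain harm).
exposures = {
    "overall":        [0.1, 0.2],        # a few low individual risks
    "minority_group": [0.4, 0.5, 0.3],   # several compounding risks
}

def additive_score(risks):
    """Simple additive combination: risks accumulate independently."""
    return sum(risks)

def multiplicative_score(risks):
    """Compounding combination: probability at least one harm occurs,
    treating each risk as an independent probability of harm."""
    p = 1.0
    for r in risks:
        p *= (1.0 - r)
    return 1.0 - p

scores = {pop: (additive_score(r), multiplicative_score(r))
          for pop, r in exposures.items()}
```

A system can look acceptable on its overall score while a sub-population's compounded score crosses the threshold that should block a launch, which is exactly the blind spot the breakout group wanted the metrics to surface.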
In addition, teams should consider using Red Teams: neutral groups tasked with ethically hacking the product to identify blind spots and data vulnerabilities. The product team can then recognize harm when it happens and find ways to mitigate the potential risk(s).
Differential Privacy was introduced to minimize exposure of PII (personally identifiable information). A reference from Pier Paolo Ippolito, who did not attend the workshop, defines it this way:
“Differential Privacy enables us to quantify the level of privacy of a database. This can help us to experiment with different approaches in order to identify which is best to preserve the user’s privacy. By knowing our data privacy level we can then quantify the likelihood that someone might be able to leak sensitive information from the dataset and how much information can be leaked at most.”
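A minimal sketch of the idea the quote describes, using the standard Laplace mechanism to release a noisy count. The epsilon value, dataset, and query here are illustrative assumptions:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of five users; query counts those over 40.
ages = [34, 29, 51, 46, 38]
noisy = private_count(ages, lambda a: a > 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy, which is precisely what lets a team quantify the trade-off between how much can leak and how useful the released statistic remains.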
One participant noted that their hiring recruiters were able to identify specific archetypes the company was predisposed to hire over others, allowing them to quantify their actions and make changes.
Three challenges in measurement were identified:
As the field of ethics in tech expands, the community of practitioners recognizes the urgency of supporting each other, especially in the face of recent attacks prompted by their work. A few participants had personally been criticized, maligned, or mischaracterized on social media or during public events. Twice as many participants had witnessed these things happen to their peers. In one particular example, a colleague who was expected to attend the workshop cancelled, having recently been doxxed and had her life threatened by a powerful group critical of her company.
Three primary categories of criticism, and the associated questions we need to address, emerged:
It is important to acknowledge that no one is implying those working in this field are above questioning or disagreement; however, when the criticism crosses a line to involve personal attacks or threats of harm, it is not acceptable. Meaningful discussion and debate can only happen when both sides are willing to engage in constructive dialog rather than ad hominem attacks. This website has several wonderful resources to help those dealing with or at risk of online harassment. A few additional ideas were identified to move discussions forward in productive ways:
The two hours we spent together flew by and the community of practitioners left excited for what is next. We all recognize the importance of sharing our work to learn from each other and to provide accountability among our peers. This workshop is over but our engagement is only beginning.
“If you want to go quickly, go alone. If you want to go far, go together.” – African Proverb