Ethical AI Design Workshops
Multiple studies have documented AI models that perpetuate systemic racial and socioeconomic injustices. Ideating about ethics before a model is even created should be a key part of the design process.
Drawing on workshops from my time at Fjord and with Feminist.AI, and on years of facilitation in social justice non-profits and cooperative spaces, I work with clients to create processes that think through unintended consequences and get to the heart of their intended design and end-user experience.
Resources I draw upon
As an experienced data scientist, ML developer, and UX designer, I recommend a mix of pre-existing tools to clients, from Nadia Piet's excellent AI Meets Design toolkit to IBM's AI Fairness 360 library. I also draw upon work inspired by the workshop I gave with Christine Meinders at IxDA 2020 in Milan.
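To give a flavor of the kind of group-fairness check that a library like AI Fairness 360 automates, here is a minimal hand-rolled sketch of the disparate impact metric. The data and group labels below are hypothetical, invented purely for illustration; they do not come from any client engagement.

```python
# Minimal sketch of a disparate-impact check, one of the group-fairness
# metrics that IBM's AI Fairness 360 library provides. All data here is
# hypothetical and for illustration only.

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    Values well below 1.0 (commonly < 0.8) suggest adverse impact."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Hypothetical model outputs: 1 = favorable outcome (e.g. loan approved)
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(disparate_impact(outcomes, groups, privileged="a", unprivileged="b"))
# → 0.25 (group "b" receives favorable outcomes at a quarter of group "a"'s rate)
```

A metric like this is a conversation starter, not a verdict: in a workshop setting, the interesting questions are which groups and outcomes to measure in the first place, and those choices happen long before any code runs.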
There isn't a one-size-fits-all tool for ethical AI design or for workshop development. It depends on the application, client, and approach people are comfortable with.
I am currently working with Alka Roy of the Responsible Innovation Project to develop tools for data scientists and ML developers. The RI Project has just released its first report; we are in early planning stages and will announce our project within a few weeks.
In 2021, I have given workshops on data consent for NSF fellows and on ethical AI for mobile and web applications.