Mapping the Risk Surface of Text-to-Image AI: A Participatory, Cross-Disciplinary Workshop
Text-to-image (TTI) generative AI (such as Stable Diffusion, DALL-E, and Midjourney) inherits many of the risks that come with large-scale models adaptable to a wide variety of downstream tasks—such as algorithmic monoculture and the difficulty of anticipating use cases. Yet relative to the enormous scrutiny the failures of large language models have received, shared knowledge of the novel harms and risks of TTI models remains limited.
How can we build boundary objects that support meaningful collaboration between researchers, impacted communities, and practitioners in mitigating the harms of models that pose novel risks? Our hands-on workshop tackles this question through a strategy inspired by model documentation and cybersecurity best practices (e.g., MITRE's CVE and ATT&CK frameworks). Our aim is to combine practical experience with the potential and limitations of TTI models, build open resources that minimize harmful misuses of ML, and support cross-disciplinary efforts to make open-source models and datasets less harmful to impacted communities.
This CRAFT session will be held virtually on June 12, 2023, 11:00am - 12:30pm US Central time. The collaborative activities are designed to facilitate a participatory conversation through a mix of large-group discussions and small-group breakout sessions.
| Time | Activity |
| --- | --- |
| 11:00 - 11:05 | Opening & icebreakers |
| 11:05 - 11:10 | (Collaborative writing) Set the stage: What is a TTI failure? |
| 11:10 - 11:25 | (Collaborative writing) Reflection on TTI failures and blind spots in the TTI risk space |
| 11:25 - 11:45 | Presentations by AVID and Hugging Face researchers |
| 11:45 - 12:10 | Breakout activity + report out |
| 12:10 - 12:30 | Group-wide discussion |