Experiments reveal how generative AI facilitates gender-based violence
Generative Artificial Intelligence (AI) — deep-learning models that create voice, text, and images — is revolutionizing how people access information and produce, receive and interact with content. While technological innovations like ChatGPT, DALL-E and Bard offer previously unimaginable gains in productivity, they also raise concerns for the protection and promotion of human rights and for the safety of women and girls.
The arrival of generative AI introduces new, unexplored questions: which company policies and normative cultures perpetuate technology-facilitated gender-based violence and harms? How do AI-based technologies facilitate gender-specific harassment and hate speech? What “prompt hacks” can lead to gendered disinformation, hate speech, harassment, and attacks? What measures can companies, governments, civil society organisations and independent researchers take to anticipate and mitigate these risks?
A combination of measures is proposed for generative AI companies and the technology companies that platform them, for regulators and policy makers, for civil society organisations and independent researchers, and for users.