Women’s visibility and expression on social media are often burdened by the risks of hate and harassment. The potential for backlash and violence online has spurred women in all their diversity to practice self-policing and censorship, and to base their expression on the perceived reactions of their audiences. Several women in KRYSS Network’s earlier research also expressed that the fear of disparagement and vitriol had led them to modify how they express themselves and speak in digital spaces, knowing that they have little to no control over their narratives once they become targets of online gender-based violence.
While online gender-based violence is rooted in gender-based discrimination that takes place in every facet of society, such forms of violence may be facilitated in particular ways by the algorithms and design of social media platforms. The design of social media is not neutral; it is planned, prototyped, and developed to invite and shape participation toward particular ends, including decisions about what is not permitted and how objectionable content and behavior are policed. However, these decisions do not necessarily serve to promote and protect human rights. Algorithmic interference in freedom of expression can drive polarization, reinforce existing disparities and discriminatory practices, and raise broader societal concerns about the manipulation of how information is distributed.
This research intends to better understand the barriers and biases that algorithms create for women’s access to freedom of opinion and expression, and to examine women’s resilience in navigating these inherently limiting algorithms to create the much-needed space for women and gender non-conforming persons to speak out, to be heard, and, in effect, to occupy digital spaces.
Read the full research here:
Researcher: Serene Lim
Editor: Angela M. Kuga Thas
Design and Layout of Full Report: Chan Qian Hui