IAMCR 2025 Presentation: Intersectional Toxicity in Social Media

From July 13 to 17, 2025, I had the opportunity to present my research at the International Association for Media and Communication Research (IAMCR) Conference, hosted at Nanyang Technological University (NTU) in Singapore. This year’s theme, “Communicating Environmental Justice: Many Voices, One Planet,” sparked vital discussions on justice, voice, and media ecosystems worldwide.

My presentation was part of the panel CJD-5: “(Platform) Power and Justice Imaginaries,” held on Tuesday, July 15, at 16:00 in the SHHK – HSS Seminar Room 6. The session was chaired by Prof. Usha Raman (India) and featured contributions from scholars based in China and Nepal, addressing topics such as digital democracy, political communication, and environmental justice. Together, we examined how digital platforms influence imaginings of power, voice, and inequality in diverse geopolitical and sociocultural contexts.

Our Contribution: Toxicity Across Digital Platforms

I presented our study titled:
“Toxicity Across Digital Platforms: Impact on Vulnerable Communities and Strategies for Mitigation,”
developed within the framework of the COIN Project – Countering Online Intolerance against especially vulnerable groups (https://coinusal4excellence.com/), funded by the EU’s MSCA-COFUND initiative. The research was conducted in collaboration with Carlos Arcila Calderón, Patricia Sánchez Holgado, Maximiliano Frías, William González, Marcos Barbosa (University of Salamanca), and Javier Amores (Pontificia Universidad Católica de Chile).

Research Focus

We explored a key question:

To what extent do toxic messages on platforms such as X, TikTok, Facebook, and Instagram target users based on overlapping vulnerable and stigmatized identities (e.g., migrants, Muslims, Jews, Roma, and LGBTIQ+ individuals)?
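Operationally, “overlapping identities” can be approximated by checking whether a single message references more than one vulnerable group. The following is a hypothetical Python sketch of that idea; the keyword lists and helper names (GROUP_KEYWORDS, referenced_groups, is_intersectional) are illustrative assumptions, not the study’s actual detection method.

```python
# Hypothetical keyword sets per vulnerable group; purely illustrative.
GROUP_KEYWORDS = {
    "migrants": {"migrant", "immigrant", "refugee"},
    "muslims": {"muslim", "islam"},
    "jews": {"jew", "jewish", "judaism"},
    "roma": {"roma", "romani"},
    "lgbtiq": {"lgbt", "gay", "lesbian", "trans", "queer"},
}

def referenced_groups(text: str) -> set[str]:
    """Return the set of groups a message refers to (simple keyword match)."""
    lowered = text.lower()
    return {
        group
        for group, keywords in GROUP_KEYWORDS.items()
        if any(kw in lowered for kw in keywords)
    }

def is_intersectional(text: str) -> bool:
    """A message counts as intersectional if it references two or more groups."""
    return len(referenced_groups(text)) >= 2
```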

Our team analyzed a large-scale dataset of nearly 800,000 messages, combining machine learning with Google’s Perspective API to assess language toxicity through metrics such as insults, identity attacks, and generalized hostility.
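
To give a sense of the scoring pipeline, here is a minimal Python sketch of how a message can be scored with the Perspective API. The attribute names (TOXICITY, INSULT, IDENTITY_ATTACK) are real Perspective attributes, but the helper function, the API-key placeholder, and the 0.8 threshold are illustrative assumptions, not our production code.

```python
import requests

# Perspective API endpoint (Google); requires a key with the
# Comment Analyzer API enabled.
PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)
API_KEY = "YOUR_API_KEY"  # placeholder, not a real key

def score_message(text: str, lang: str = "en") -> dict:
    """Return Perspective scores (0-1) for a single message."""
    payload = {
        "comment": {"text": text},
        "languages": [lang],
        # Attributes corresponding to the metrics mentioned above.
        "requestedAttributes": {
            "TOXICITY": {},
            "INSULT": {},
            "IDENTITY_ATTACK": {},
        },
    }
    resp = requests.post(
        PERSPECTIVE_URL, params={"key": API_KEY}, json=payload, timeout=10
    )
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return {attr: s["summaryScore"]["value"] for attr, s in scores.items()}

# Example: score one message and flag it above an assumed 0.8 threshold.
scores = score_message("example message text")
if any(v >= 0.8 for v in scores.values()):
    print("flagged as toxic:", scores)
```

In practice the API enforces per-key rate limits, so a dataset of this size has to be scored with throttled, batched requests.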

Key Insights

- Intersectional hate speech, targeting multiple vulnerable identities at once, proved significantly more toxic than single-group hate across all platforms (see the comparison sketch after this list).

- Pairings such as Islam and Judaism, Islam and LGBTIQ+, and Islam and migration were the most common intersectional references.

- Facebook and X stood out for their higher concentration of toxic intersectional content.

- Intersectional messages on X received more user engagement (e.g., replies), suggesting a potentially troubling amplification effect on some platforms.

- Overall, the most common toxic categories were insults, standard toxicity, and identity attacks.
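
As a rough illustration of the first insight, here is a hypothetical sketch of how per-message toxicity might be compared between intersectional and single-group messages. The file name, column names, and the choice of a Mann-Whitney U test are assumptions for illustration, not a description of the study’s exact analysis.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical layout: one row per message, with a Perspective toxicity
# score and a boolean flag for intersectional targeting.
df = pd.read_csv("messages_scored.csv")  # assumed file and columns

inter = df.loc[df["is_intersectional"], "toxicity"]
single = df.loc[~df["is_intersectional"], "toxicity"]

# Toxicity scores are bounded and skewed, so a rank-based test is a
# safer choice than a t-test for this comparison.
stat, p = mannwhitneyu(inter, single, alternative="greater")
print(f"median toxicity (intersectional): {inter.median():.3f}")
print(f"median toxicity (single-group):   {single.median():.3f}")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.2g}")
```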

Why It Matters

This research confirms that hate targeting multiple marginalized groups is not only present in digital discourse but also more damaging than hate aimed at a single group. Drawing on intersectionality theory (Crenshaw, 1991), our findings highlight the importance of integrating multidimensional frameworks when assessing the impacts of online hatred.

Despite methodological challenges (such as platform moderation bias, the difficulty of detecting irony, and the low frequency of intersectional posts), this study contributes empirical evidence for developing platform-sensitive, intersectionality-aware strategies to mitigate digital hate. Future work should expand into visual and multimodal content, multilingual discourse, and real-world impacts beyond the digital sphere.

Gratitude

I’m deeply thankful to the IAMCR 2025 organizers, my fellow panelists, and especially the colleagues who engaged in a thoughtful Q&A following the presentation. The diverse perspectives shared during Panel CJD-5 enriched the discussion and reaffirmed the urgency of reimagining justice and accountability in platformed communication.