Content moderators are repeatedly exposed to disturbing material, leaving them susceptible to harm, including post-traumatic stress disorder (PTSD) symptoms. Currently, social media platforms such as Facebook allow moderators to modify (i.e., blur or greyscale) the visual features of images to mitigate the potential impact of the material. However, there is little empirical basis for this intervention. Therefore, we examined whether viewing disturbing images in blurred or greyscale form, compared to viewing unmodified images, would influence mock content moderators’ anxiety, affect, and image-related intrusions. In Study 1, we randomly allocated participants (n = 309) to view greyscaled, blurred, or unmodified images within a simulated content moderation task. In Study 2, participants (n = 365) chose the format of the images, mirroring the choice available to real-world content moderators. Participants in both studies recorded and rated problematic characteristics of image-related intrusions they experienced and completed pre- and post-task measures of state anxiety and positive and negative affect. Blurring and greyscaling did not reduce any negative effects of viewing disturbing images, even when participants chose to view blurred or greyscaled images (which occurred only 14% and 16% of the time, respectively). Participants may have imagined the absent visual features of blurred or greyscaled images, such that they still visualized these images as if they were unmodified. Our data indicate that fixed blurring and greyscaling, under the conditions tested, did not mitigate immediate adverse outcomes, highlighting the need for further research to ensure moderator protection strategies are grounded in robust empirical evidence.