Distinguishing automated from inauthentic behaviour on social network platforms presents a complex challenge. While platforms, tools, and research groups work to improve automation detection techniques, strategies to evade verification methods evolve just as persistently. Bots, or automated accounts, can be useful facilitators when their role is transparent, but studies reveal their use in artificially influencing public discourse and orchestrating coordinated attacks by manipulating social media algorithms. Since social media platforms have become crucial spaces for political discussion and public debate, identifying malicious bots has become increasingly relevant, particularly in Global South contexts marked by challenges such as poor information quality and limited internet access. This article introduces Pegabot (Bot Buster), a tool designed to mitigate these negative impacts for average social media users in the Brazilian context. Going beyond simplistic bot labelling, Pegabot offers nuanced results and transparency in bot detection on X/Twitter. We emphasize the need for transparent and interpretable bot detection tools to qualify public opinion and narratives concerning bot usage, and to empower users to address these challenges. Additionally, we present results from a series of analyses employing machine learning techniques to improve the Pegabot tool's efficacy.