The modern digital age has ushered in unprecedented access to health information, yet this democratization of knowledge has simultaneously spawned a dangerous phenomenon: The infodemic. Defined by the World Health Organization (WHO) as an overabundance of information, both accurate and inaccurate, that proliferates during disease outbreaks and makes it challenging for individuals to identify trustworthy sources and reliable guidance, the infodemic represents one of the most pressing public health challenges of our time.[1] This term, coined initially during the 2003 severe acute respiratory syndrome outbreak, gained renewed prominence during the COVID-19 pandemic when misinformation spread as rapidly as the virus itself, undermining public health responses and threatening lives globally.[2] Digital platforms have fundamentally transformed how health information circulates within societies. An estimated 60% of the global population now relies on social media not merely for social connection but as a primary source of news and health information.[3] While these platforms offer remarkable opportunities for rapid health communication and community engagement, they simultaneously serve as vectors for misinformation and disinformation. Research demonstrates that health misinformation has tangible consequences. During the first 3 months of 2020 alone, nearly 6000 people worldwide were hospitalized due to coronavirus misinformation, with at least 800 deaths potentially attributable to false information.[4] The velocity and reach of digital misinformation have created an environment where falsehoods can outpace facts, leading to reduced adherence to public health measures, increased vaccine hesitancy, and erosion of trust in healthcare systems and authorities.[5] Addressing this crisis requires coordinated global policy strategies that balance freedom of expression with public health protection. 
International organizations, including the WHO and United Nations Educational, Scientific, and Cultural Organization (UNESCO), have recognized that managing the infodemic is as critical to pandemic control as traditional public health interventions. World Health Assembly (WHA) Resolution WHA73.1 explicitly calls on member states to provide reliable content, counter misinformation and leverage digital technologies across health responses.[1] This editorial examines the multifaceted challenges posed by health misinformation on digital platforms, analyses existing global policy frameworks, explores successful governance models from various countries and proposes strategies to enhance platform accountability while building public trust in health information. The urgency of this matter cannot be overstated. Without adequate governance mechanisms and health literacy initiatives, the foundations of evidence-based medicine and public health are at risk of erosion in an age where anyone with internet access can broadcast health claims to millions.
CHALLENGES OF DIGITAL INFORMATION
The digital health information ecosystem faces multiple interconnected challenges that collectively undermine public health efforts. Misinformation on digital platforms manifests in various forms, each with distinct characteristics and impacts [Figure 1]. Medical misinformation includes false claims about treatments, cures and mechanisms of disease transmission.
During the COVID-19 pandemic, prominent examples included unfounded assertions about hydroxychloroquine as a cure, claims that the virus could not survive in hot weather, and suggestions that consuming large quantities of certain foods could prevent infection.[6] Vaccine-related misinformation represents another critical category, encompassing conspiracy theories about vaccine ingredients, exaggerated claims about adverse effects and completely fabricated links between vaccines and conditions such as autism.[7,8]
Figure 1: Types of health misinformation. Illustration Credit: Nor Faiza Mohd. Tohit.
The prevalence of health misinformation across social media platforms is alarmingly high [Figure 2]. Systematic reviews have found that misinformation is most prevalent on Twitter and concerns topics related to smoking products, drugs, vaccines and diseases.[9] The rapid spread of this content is facilitated by several platform-specific features, including algorithmic amplification that prioritises engagement over accuracy, echo chambers that expose users primarily to information that confirms their existing beliefs, and the emotional appeal of sensationalized false claims, which often outperform scientifically accurate but less dramatic content.[10,11]
Figure 2: Why misinformation spreads. Illustration Credit: Nor Faiza Mohd. Tohit.
The impact of misinformation on public health is profound and multidimensional [Figure 3].
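The engagement-over-accuracy dynamic described above can be illustrated with a small, entirely hypothetical sketch. The posts, scores and ranking logic here are invented for illustration; real recommendation systems are vastly more complex, but the core incentive is the same: if the ranking signal is engagement alone, accuracy never enters the score.

```python
# Illustrative sketch (not any platform's actual algorithm): a feed ranker
# that scores posts purely by predicted engagement. A sensational false
# claim with a high engagement rate outranks accurate but duller content.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    accurate: bool          # ground truth, invisible to the ranker
    engagement_rate: float  # shares + reactions per impression (invented numbers)

def rank_feed(posts):
    # The ranker optimises engagement only; accuracy plays no role.
    return sorted(posts, key=lambda p: p.engagement_rate, reverse=True)

feed = rank_feed([
    Post("Peer-reviewed vaccine safety data released", True, 0.02),
    Post("Miracle cure doctors don't want you to see!", False, 0.11),
    Post("Updated guidance from the health ministry", True, 0.03),
])
print([p.accurate for p in feed])  # → [False, True, True]: the false claim lands on top
```

The same mechanism explains why "friction" or demotion interventions target the ranking function rather than individual posts.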
Studies demonstrate that exposure to health misinformation on social media is significantly associated with vaccine hesitancy, reduced compliance with public health measures and poor health decision-making.[12,13] The prevalence of COVID-19 vaccine hesitancy in Asian countries, for instance, ranged from 20% to 55%, with reliance on social media platforms for vaccine information associated with a higher likelihood of hesitancy.[14] Research indicates that belief in misinformation is linked to lower rates of preventive behaviours, as individuals who believe false information are less likely to adopt protective measures such as mask-wearing, physical distancing and vaccination.[15]
Figure 3: Impact on public health. Illustration Credit: Nor Faiza Mohd. Tohit.
Beyond individual health behaviours, misinformation erodes institutional trust. The COVID-19 infodemic extended beyond public health impacts, undermining confidence in health institutions and government authorities, contributing to societal polarisation and diminishing the effectiveness of public health communication.[16] This decline in trust creates a vicious cycle where individuals become more susceptible to alternative narratives and conspiracy theories, further compromising their ability to make informed health decisions. Recent history offers sobering examples of how misinformation can catalyse public health crises.
During the Ebola outbreak in the Democratic Republic of Congo in 2019, misinformation fuelled violence, mistrust and targeted attacks on healthcare providers, directly impeding outbreak control efforts.[17] Similarly, the resurgence of vaccine-preventable diseases such as measles in multiple countries has been linked to declining vaccination rates driven by anti-vaccine misinformation proliferating online.[18,19] The COVID-19 pandemic witnessed the emergence of ‘infodemiology’ as a critical field, as false narratives about origins, treatments and vaccines competed with scientific evidence, delaying appropriate public health responses and contributing to preventable deaths.[20] These challenges are compounded by the psychological mechanisms underlying misinformation spread. Cognitive biases, including confirmation bias and motivated reasoning, make individuals more likely to accept information that aligns with their pre-existing beliefs. Emotional content, particularly that which evokes fear, anger, or outrage, spreads more rapidly than neutral information. Furthermore, the illusion of explanatory depth leads people to overestimate their understanding of complex health topics, making them more confident in sharing questionable information.[21,22] The convergence of these psychological factors with the structural features of digital platforms creates a perfect storm for the proliferation of misinformation.
GLOBAL POLICY STRATEGIES
Addressing the infodemic requires comprehensive policy frameworks that balance competing values: Freedom of expression, public safety, platform accountability and user protection. Several international organisations and national governments have developed regulatory approaches that provide valuable insights for global governance strategies.
INTERNATIONAL FRAMEWORKS
UNESCO has emerged as a leading voice in digital platform governance through its Guidelines for the Governance of Digital Platforms, released after extensive multistakeholder consultation involving 134 countries.[23] These guidelines establish five overarching principles: Platforms must respect human rights in content moderation; platforms must be transparent about their policies and algorithms; platforms must be accountable to relevant stakeholders; there must be checks and balances formally institutionalised in governance systems; and governance processes must be open and accessible to all stakeholders, particularly vulnerable and marginalised groups.[24] Importantly, UNESCO’s framework adopts a co-regulatory approach that recognizes the roles of states, platforms, civil society and other stakeholders, avoiding the pitfalls of either pure self-regulation or heavy-handed government control.[25] The WHO has developed complementary frameworks explicitly focused on health misinformation. WHO’s approach emphasises infodemic management as the systematic use of risk and evidence-based analysis to manage the infodemic and reduce its impact on health behaviours during emergencies.[26] This includes establishing early artificial intelligence (AI)-supported response and social listening tools to identify emerging narratives and information gaps that hinder people’s access to reliable health information.[27] WHO has also partnered with over 50 digital companies and social media platforms, including Facebook, Twitter and Google, to ensure science-based health messages appear first when people search for COVID-19 information.[28] The United Nations has promoted global principles for information integrity that emphasize multistakeholder action to prevent and address misinformation.
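The social-listening idea mentioned above can be sketched in miniature. The narrative labels, keywords and sample posts below are invented for illustration; production systems use multilingual NLP models and far richer signals, but the core loop is the same: match incoming posts against known narrative patterns and count mentions so that spikes can alert responders.

```python
# Hypothetical social-listening sketch: scan a stream of posts for keyword
# patterns tied to known misinformation narratives and tally mentions.
# All narrative names and keywords here are invented for illustration.
from collections import Counter

NARRATIVES = {
    "fake-cure": ["miracle cure", "secret remedy"],
    "vaccine-ingredients": ["microchip", "tracking chip"],
}

def social_listen(posts):
    """Count how often each tracked narrative appears in a batch of posts."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for narrative, keywords in NARRATIVES.items():
            if any(kw in text for kw in keywords):
                counts[narrative] += 1
    return counts

stream = [
    "This miracle cure works in days!",
    "Heard the jab has a tracking chip?",
    "Ministry of Health schedules new briefing.",
    "Another secret remedy thread going viral",
]
print(social_listen(stream))  # → Counter({'fake-cure': 2, 'vaccine-ingredients': 1})
```

A rising count for a narrative between batches is the "emerging narrative" signal that infodemic managers would then route to communicators for prebunking or correction.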
The UN system recognises that managing the infodemic requires coordinated efforts among member states, international organizations, media, social media platforms, researchers, technologists, civil society leaders and influencers.[29] This collaborative approach acknowledges that no single entity possesses sufficient authority or capability to address misinformation effectively in isolation [Figure 4].
Figure 4: International policy frameworks. Illustration Credit: Nor Faiza Mohd. Tohit.
NATIONAL REGULATORY MODELS
Different countries have adopted varying approaches to digital platform governance, reflecting distinct legal traditions, political systems and cultural contexts. These case studies illustrate the diversity of regulatory strategies and their respective strengths and limitations. Germany’s Network Enforcement Act (NetzDG), enacted in 2017, represents one of the first comprehensive national efforts to regulate social media content. NetzDG requires large social media platforms with more than 2 million users in Germany to establish complaint procedures and remove manifestly illegal content, including hate speech and certain forms of misinformation, within 24 h of notification.[30,31] Platforms that fail to comply face fines of up to €50 million. While NetzDG has been credited with reducing the prevalence of hate speech online, it has also faced criticism from the UN Human Rights Committee and civil society organisations for potentially enlisting social media companies to carry out government censorship without adequate judicial oversight.[32,33] The rapid timeline for content removal raises concerns about over-removal, as platforms may adopt overly cautious approaches to avoid penalties, potentially suppressing legitimate speech.
Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA), passed in 2019, takes a different approach by requiring platforms to issue warnings on posts deemed false by government authorities or, in some instances, to remove content entirely.[34,35] POFMA grants government ministers broad discretionary power to determine what constitutes false information and to order corrections or removals. While proponents argue that POFMA provides an efficient mechanism for addressing misinformation, critics contend that it has been used to target political opposition and activists, raising serious concerns about its impact on freedom of expression and democratic discourse.[36,37] The law exemplifies how misinformation regulations can be co-opted for political purposes when insufficient safeguards are in place. France has implemented a more nuanced approach through its law against information manipulation (2018), which focuses specifically on election periods and requires platforms to increase transparency around sponsored political content.[38] The law includes provisions for judicial review and attempts to balance rapid response with due process protections. However, implementation challenges and concerns about effectiveness have led to ongoing debates about whether this targeted approach provides adequate protection. Brazil’s electoral misinformation framework demonstrates another model, establishing collaboration between electoral courts, fact-checking organisations and platforms to identify and address false election-related information.[39] This approach emphasises verification by accredited fact-checkers and judicial oversight, rather than relying solely on platform self-regulation. While showing promise in protecting electoral integrity, questions remain about its scalability to broader health misinformation contexts. Comparative analysis reveals several key insights from these national approaches.
First, regulations that provide clear definitions of prohibited content while maintaining a narrow scope tend to better protect freedom of expression than those with vague or overly broad language. Second, independent oversight mechanisms are essential to prevent governmental abuse and ensure platforms are held accountable without becoming tools of censorship. Third, procedural safeguards, including judicial review, appeal processes and transparency requirements, help balance competing interests.[40,41]
PROPOSED ENHANCEMENT STRATEGIES
Building upon existing frameworks, several additional strategies could enhance digital platform accountability and transparency while protecting fundamental rights. First, mandatory human rights impact assessments should be conducted before significant platform design changes or policy modifications and before electoral processes.[42] These assessments would evaluate potential effects on freedom of expression, privacy and other rights, enabling proactive identification and mitigation of risks. Second, enhanced transparency requirements should mandate platforms to disclose detailed information about content moderation decisions, algorithmic recommendations and advertising practices. This includes publishing regular transparency reports with disaggregated data on content removals, user appeals and government requests. Such transparency enables external accountability while respecting individual privacy.[43] Third, co-regulatory frameworks combining government oversight with industry self-regulation and civil society participation offer promise. These frameworks would establish independent regulatory bodies with technical expertise, representative stakeholder involvement, and clear mandates to protect both public health and fundamental rights.[44] Regulatory bodies should possess the authority to impose penalties for non-compliance while maintaining independence from direct government control.
Fourth, platforms should be required to implement design interventions that reduce the spread of misinformation without relying solely on content removal. This includes modifying recommendation algorithms to minimise amplification of false content, implementing friction mechanisms such as warnings before sharing potentially misleading information, and prominently displaying authoritative health information from recognized sources.[45,46] Fifth, international coordination mechanisms should be strengthened to address the global nature of digital platforms and misinformation. This could include mutual recognition of regulatory standards, information sharing on emerging threats and coordinated responses during health emergencies. The fragmentation of regulations across jurisdictions creates compliance challenges for platforms and opportunities for harmful content to migrate to less-regulated spaces.[47] Finally, adequate resources must be allocated to enforcement; many existing regulations suffer from insufficient enforcement capacity, undermining their effectiveness regardless of how well designed the underlying framework may be.
THE ROLE OF HEALTH LITERACY
Infodemic management must extend beyond platform regulation to strategies that strengthen individual and community resilience against misinformation. Health literacy, understood as the capacity to access, understand and apply health information, serves as a critical protective factor against susceptibility to false claims. Health literacy comprises multiple dimensions that are particularly relevant to misinformation. Digital health literacy refers specifically to the ability to seek, find and appraise health information from electronic sources and to apply the knowledge gained to address health problems. Research demonstrates that individuals with higher digital health literacy are better able to identify reliable health information, evaluate the credibility of online content and resist misleading claims. However, significant disparities in health literacy persist across populations [Figure 5]. Factors associated with lower health literacy include lower educational attainment and belonging to underserved or marginalised communities. These groups are often disproportionately targeted by health misinformation and experience health inequities that call for interventions targeted at them.
Figure 5: Health literacy. Illustration Credit: Nor Faiza Mohd. Tohit.
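The friction mechanism proposed among the design interventions above, a warning interposed before a user reshares flagged content, can be prototyped in a few lines. The post identifiers and flag store below are invented stand-ins; a real system would query a fact-checking or moderation service.

```python
# Minimal sketch of share friction, assuming a hypothetical flag database:
# flagged posts trigger a warning and require explicit confirmation before
# the reshare proceeds; unflagged posts share without interruption.

FLAGGED_POST_IDS = {"post-42"}  # invented stand-in for a fact-check flag store

def attempt_share(post_id, user_confirms_anyway=False):
    """Return (shared, warning_shown) for a reshare attempt."""
    if post_id in FLAGGED_POST_IDS:
        if not user_confirms_anyway:
            return (False, True)   # share paused; warning displayed
        return (True, True)        # user saw the warning and shared anyway
    return (True, False)           # unflagged content shares frictionlessly

print(attempt_share("post-42"))        # → (False, True)
print(attempt_share("post-42", True))  # → (True, True)
print(attempt_share("post-7"))         # → (True, False)
```

Note that the design never removes content: the user can still share after confirming, which is what distinguishes friction from censorship-style takedowns.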
Educational interventions aimed at health literacy should be implemented across all levels, from primary schools to community settings. These should go beyond traditional health education to include critical appraisal of sources, understanding of scientific methods and recognition of manipulation techniques used in misinformation. Topics should include understanding how algorithms curate information, recognising emotional manipulation and reasoning critically about health claims [Figure 6]. Evidence indicates that several approaches show promise. Prebunking, which pre-emptively exposes people to weakened forms of misinformation alongside refutations, has demonstrated effectiveness in building resistance to false claims. Lateral reading, which teaches users to verify information by investigating the source and corroborating its claims through external references, has been shown to improve the ability to evaluate online content. These interventions should be scaled and adapted for diverse communities.
Figure 6: Building health literacy. Illustration Credit: Nor Faiza Mohd. Tohit.
BUILDING TRUST IN HEALTH INFORMATION
Trust represents the most critical element in successful public health communication and vaccine acceptance. Research consistently identifies trust in information sources as a key determinant of belief in accurate health information and compliance with guidance. Yet trust in institutions has been declining, accelerated by misinformation campaigns that explicitly target public confidence in healthcare systems, government authorities and scientific institutions [Figure 7]. Building and maintaining trust requires multifaceted approaches. First, health authorities must communicate honestly, acknowledging uncertainty and evolving scientific understanding rather than projecting false certainty. During the COVID-19 pandemic, changing guidance on evolving topics was often perceived as inconsistency rather than as an appropriate response to emerging evidence; clearer communication about the scientific process and the rationale for updated recommendations would better serve public trust. Second, tailoring communication to specific audiences is essential. Different communities have distinct trusted information sources and communication preferences; community messengers, including healthcare workers and community leaders, can significantly improve reach and credibility. These messengers can address concerns in culturally appropriate ways and provide reassurance that distant public health agencies cannot. Third, proactive engagement is more effective than reactive correction. Health authorities should anticipate concerns and questions and provide comprehensive information resources before misinformation fills knowledge gaps, monitoring false narratives and crafting responses that acknowledge legitimate concerns while conveying accurate information, to prevent misinformation from becoming entrenched. Fourth, addressing the emotional dimensions of health communication is vital, because emotions as much as information shape people’s health decisions.
Empathetic health communication that acknowledges concerns and demonstrates understanding is more likely to build trust than dismissive corrections, which may alienate people with legitimate questions and further erode trust in institutions.
Figure 7: Building trust in health information. Illustration Credit: Nor Faiza Mohd. Tohit.
PLATFORM ACCOUNTABILITY FOR HEALTH INFORMATION
Social media platforms bear significant responsibility for the information ecosystem they have created and must safeguard the quality of health information. Several interventions show promise. Labelling systems that append fact-check information or warnings to posts containing misinformation have shown some effectiveness, particularly for users with lower vaccine confidence. However, labels must be designed carefully, as poorly implemented warnings can backfire by drawing additional attention to false content or by hardening resistance among users who perceive them as attempts at censorship. Algorithmic modifications that reduce the amplification of misinformation without requiring content removal represent another critical intervention. Platforms can adjust recommendation algorithms to demote content flagged as potentially false, reduce its prominence in search results and limit its ability to be reshared, while preserving the ability of users who seek such content to access it. Partnerships between platforms and authoritative health sources can enhance the visibility of accurate information. Accounts for WHO, ministries of health and other health authorities should have verified status with enhanced reach, and their content should appear prominently when users search for health topics. Platforms can also surface links to authoritative resources during health emergencies. Investment in content moderation capacity, particularly human moderators with health expertise, is essential. While automated systems can flag potentially harmful content at scale, human judgement remains necessary for nuanced decisions about context, satire and legitimate debate about evolving scientific evidence. Platforms must also provide adequate support and psychological protection for content moderators, who often face distressing material. Finally, platforms should support research on information integrity by providing researchers with access to data while protecting user privacy; understanding how misinformation spreads, which interventions are effective and how platform features shape health information sharing requires a systematic evidence base supported by such data [Figure 8].
Figure 8: Platform accountability. Illustration Credit: Nor Faiza Mohd. Tohit.
The infodemic represents a fundamental threat to public health in the digital age, one that demands a coordinated global response combining regulatory frameworks, platform accountability and community resilience. This analysis has shown that health misinformation on digital platforms creates tangible harms, including vaccine hesitancy, reduced compliance with protective measures, erosion of institutional trust and preventable illness and death. The COVID-19 pandemic exposed these harms acutely, but the challenges extend beyond any single health crisis to routine public health activities, including vaccination, health promotion and disease prevention. Policy strategies must balance competing values. Regulatory frameworks must be robust enough to hold platforms accountable for the harms their systems enable, yet attentive to freedom of expression so that they avoid becoming instruments of censorship. The national approaches reviewed here, from Germany’s NetzDG to Singapore’s POFMA, illustrate both possibilities and pitfalls; no single model provides a complete template, and successful governance requires grounding in international human rights standards. International frameworks developed by UNESCO and WHO offer valuable guidance characterised by multistakeholder participation, independent oversight, transparency and human rights protections. These principles should inform national and regional regulatory efforts while allowing for contextual implementation; coordination among jurisdictions can address the global nature of digital platforms and prevent regulatory fragmentation that enables harmful content to exploit gaps in governance. Building health literacy and trust stands as a critical complement to regulatory approaches. Equipping individuals and communities to evaluate health information, identify manipulation and access reliable sources creates resilience that regulation alone cannot. Health literacy education must be embedded across education systems and community programmes, with particular attention to vulnerable groups disproportionately affected by misinformation. Social media platforms must accept their profound responsibility for the information ecosystem they have created; measures including content labelling, algorithmic adjustments, partnerships with health authorities and investment in content moderation are essential contributions. However, so are regulatory frameworks that mandate accountability and provide clear standards, to ensure balanced protection of public health and fundamental rights.
The path forward requires collaborative efforts among international organisations, governments, digital platforms, civil society, healthcare providers and researchers; no single stakeholder possesses the authority or resources to address the infodemic alone. Priorities include the establishment of coordination mechanisms that enable rapid information sharing during health emergencies, the deployment of evidence-based interventions and the monitoring and evaluation of strategies. As digital technologies continue to evolve, with new tools enabling novel forms of misinformation, including synthetic media and fabricated health claims, the challenges demand anticipatory approaches that address emerging threats rather than merely react to established ones. This includes investment in tools to detect and counter false information and in building institutional capacity for rapid response. Ultimately, addressing the infodemic is not merely a technical matter of content moderation or platform governance, but a societal challenge that goes to the heart of democratic deliberation, scientific literacy and community health. The stakes could not be higher: Our ability to respond effectively to health threats, sustain trust in institutions and make evidence-based decisions about our health and the health of our communities depends on these efforts. The need for coordinated global action is urgent, before the next health crisis arrives amid an infodemic that we remain unprepared for.
Authors’ contributions: All authors contributed significantly to this work, whether in its conception, drafting or critical review; approved the final version to be published; and agree to be held accountable for all aspects of the work, including its accuracy and integrity.
Conflicts of interest: The authors declare that they do not have any conflict of interest with any organisation or entity directly or indirectly related to the subject matter discussed in this review.