Health Misinformation in the UK: How Digital Pathways Become Public Health Risks
Author: Lou Williams
Contributing researchers and editors: Dr Nadia Musa, Dr Mikael Leidenhag
16/11/2025
Background
Health misinformation is emerging as a major challenge for the UK’s public health system, eroding trust, shaping behaviours and undermining outcomes. The rise of social media has created pathways for the spread of health myths, where unverified claims can reach millions in minutes. As Denniss and Lindberg describe it, this “infectious” spread of health misinformation means that “misinformation can ‘go viral’, cause harm, and change beliefs before it can be effectively corrected.”[1] Such narratives range from misleading mental health ideas and weight-loss hype to false cancer cures and vaccine conspiracies. Emerging technologies, including Artificial Intelligence (AI), are compounding the problem, with deepfakes capable of impersonating trusted figures in order to deliberately deceive audiences. Whilst these dynamics are not new, their scale, speed and sophistication now mark a clear and dangerous shift.
The consequences are already visible: measles outbreaks linked to falling vaccination rates, delays in cancer treatment due to the spread of false “cures”, and severe risks to the mental health of young people exposed to harmful online content. In order to safeguard health outcomes, policymakers must recognise misinformation as the clear threat that it is, and take steps to build resilience, restore trust, and promote credible health information in the digital world. Recent work by the Royal Society, the BBC and Wikimedia UK stresses that adult information literacy, not just school-age media education, is central to that resilience, and that “trust is a process, not an end state, and it should extend beyond the information itself to include messengers, governments, services and products.”[2] Building that trust-based resilience is particularly vital in the health sphere, where false claims can shape real public health outcomes with clear consequences. This brief examines how dis- and misinformation spread, the human impact they have, and what policy actions are needed to rebuild trust in an era of digital health misinformation.
Understanding how dis/misinformation travels through digital spaces is the first step toward tackling it. The following section explores the online platforms and pathways that enable false health narratives to spread and take hold.
The Platforms and Pathways of Misinformation
Health misinformation moves through a network of platforms that reinforce and amplify misleading content. Social media remains the central channel, where algorithms reward engagement rather than accuracy. Germano et al argue that “while engagement-based ranking systems can successfully capture attention, they may do so at the cost of truth and social cohesion.”[3] As false claims spread rapidly across platforms like Facebook, X and TikTok, they can quickly form an “infodemic”, a term popularised during the COVID-19 pandemic to describe the overwhelming spread of misinformation via social media.[4]
Online influencers play a crucial role in amplifying misleading claims. Their perceived authenticity and direct connection with audiences make them powerful intermediaries between misinformation and public perception. As Yuksel Ekinci notes, “social media influencers hold immense power over consumer decisions and cultural norms”, and left unchecked, this influence can lead to serious ethical and psychological consequences.[5] Influencers who enable or participate in the spread of health dis/misinformation have been called out repeatedly. For example, the BBC’s Global Disinformation Unit found that guests on The Diary of a CEO podcast, hosted by Steven Bartlett, had used the platform to claim that cancer could be cured by keto diets and that COVID-19 was an engineered weapon.[6] This shows how misinformation is amplified through its association with a trusted influencer.
Alongside the role of online influencers sits a grouping of bad-faith actors who create content with the intention of going viral, spreading disinformation, hate and fake news. These can take the form of accounts run by political organisations, hostile foreign states, or individuals pursuing their own agendas. Such tactics were evident during the COVID-19 pandemic, when “digital tactics to disseminate misinformation” were employed by the far-right political conspiracy group known as QAnon.[7]
The emergence of AI-generated content and deepfake technologies has further blurred the boundaries between these sources. In some cases, hostile actors may use synthetic media to fabricate the appearance of a trusted influencer or a public figure endorsing a false claim, intensifying confusion, amplifying reach, and eroding public trust. Chris Stokel-Walker highlighted how British doctor and broadcaster Dr Hilary Jones had his image used by deepfake technology to promote drugs falsely claimed to cure high blood pressure and diabetes.[8] As the Royal Society has stated, information literacy must now evolve into AI literacy, equipping the public to recognise manipulated media and evaluate credibility in a rapidly changing digital ecosystem.[9]
Together, these pathways reveal three distinct but overlapping forms of health misinformation. Firstly, influencer-led misinformation thrives on perceived credibility and reputation, where trust in individuals overrides evidence. Secondly, bad-faith disinformation is deliberately engineered to divide and deceive, exploiting crises for ideological or political gain. Finally, there is the emerging AI-driven hybrid space, where deepfakes blur the boundaries between truth and fabrication, creating a new layer of risk in which reliability itself becomes uncertain.
Whilst these pathways explain how dis/misinformation spreads, the real measure of its danger lies in its consequences. The next section examines how online falsehoods translate into real-world harm for individuals and communities.
The Human Impact
The effects of health misinformation extend far beyond online spaces, shaping real-world behaviours, treatment decisions, and public health outcomes. Exposure to false or misleading claims can erode confidence in proven interventions, delay help-seeking, and deepen mistrust toward health professionals. Recent studies have linked misinformation to measurable declines in vaccination uptake, fuelling the resurgence of preventable diseases such as measles.
In 2019, the World Health Organization declared that the UK was no longer considered to have eliminated measles, after coverage of the second MMR dose fell to 87%, below the 95% required for herd immunity. As the then Prime Minister Boris Johnson observed, “people have just been listening to that superstitious mumbo-jumbo on the Internet, all that antivax stuff, and thinking that the MMR vaccine is a bad idea.” This illustrates how online misinformation can translate into tangible public health risks, reversing decades of progress in disease prevention.[10]
Fast forward to 2025 and the resurgence continues: since January 2025, there have been 811 laboratory-confirmed measles cases reported in England.[11] This summer, a child who contracted measles died at a Liverpool hospital amid a local surge in cases.[12] This is the human impact of anti-vaccine misinformation. There is no reason why the UK should not have a fully immunised childhood population, but when doubts are seeded online by those with harmful agendas, damaging outcomes inevitably follow.
The circulation of false or misleading mental health content online represents an emerging but under-acknowledged threat to public wellbeing. Platforms such as TikTok, Instagram and YouTube have become saturated with short-form videos claiming to offer quick fixes for anxiety, ADHD or depression, rarely supported by clinical evidence.
Hudon et al found that such oversimplified content is being presented to impressionable audiences, with claims such as “if you forget your keys, you definitely have ADHD”. At the same time, users are subjected to the minimisation of real conditions, with ideas such as “mental illness is just a mindset” being normalised.[13] Most concerningly, some suicide-related posts romanticise risk and downplay clinical care, including content which “romanticised suicidal ideation or presented recovery without professional intervention as universally effective.” A Guardian investigation found that “more than half of all the top trending videos offering mental health advice on TikTok contain misinformation.”[14]
Given the 2023 estimate that one in five children and young people had a probable mental disorder,[15] this demonstrable surge in misleading mental health content presents a clear policy challenge for efforts to improve the mental health and digital resilience of young people. The WHO has found that infodemics and widespread misinformation, particularly during outbreaks and disasters, can significantly harm mental health. In this context, society faces a double-barrelled threat: misinformation that damages mental health through fear, anxiety and distrust, alongside mental-health-specific misinformation that distorts understanding itself, promoting false cures, quick fixes, and even the glamorisation of suicide.
These issues extend beyond mental health: false claims, alternative cures and supplement-based misinformation have also infiltrated the discourse around serious physical illnesses such as cancer. Lazard et al found that “cancer treatment misinformation shared quickly and widely can lead to psychological harm (e.g., distress, abandoning support resources) and physical harm (e.g., deviations from clinical care).”[16]
The human impact of such misinformation has been tragically illustrated by the case of 23-year-old Paloma Shemirani, who died following refusal of chemotherapy for a treatable cancer. Following her death, her brothers publicly condemned not only “their mother’s extreme anti-medicine beliefs but the broader ecosystem that allowed those views to flourish unchecked.”[17] This case underlines how online falsehoods about medical science can and will move beyond the digital world, shaping fatal real-world decisions and eroding public trust in evidence-based care.
Though varied in form, the cases outlined here represent only a glimpse of the many ways that health misinformation manifests and harms. Across countless other contexts, from nutrition and fertility to tobacco and vaping, similar patterns emerge. Together, they reveal a broader reality: misinformation exploits emotional vulnerability and institutional mistrust, and is amplified by algorithms that allow it to spread infectiously.
Trust and Governance Challenges
The governance of health dis/misinformation exposes fundamental weaknesses in the state’s ability to respond to digital threats. Public health communication remains grounded in hierarchical and centralised models, whilst misinformation operates through decentralised networks that evolve faster than official responses can adapt. This temporal gap leaves institutions perpetually reactive.
Despite the introduction of the Online Safety Act, enforcement remains limited, fragmented and heavily reliant on self-regulation by the platforms that profit from user engagement. The ability of anyone to create and monetise content, coupled with algorithms that privilege engagement over accuracy, allows misinformation to outpace credible communication. Government departments, regulators and health bodies operate in silos, with no single structure accountable for cross-platform coordination or rapid response. As a result, the state is often left countering falsehoods long after they have embedded themselves in public discourse.
These structural challenges are compounded by declining trust in central institutions, a clear trend in recent global politics. This decline leaves a void, and misinformation is filling it.
A further governance challenge is that misinformation produces uneven outcomes and deepens existing disparities. The effects of health misinformation are not evenly distributed; communities already marginalised within the health system (ethnic minorities, non-English speakers, and those with low levels of digital literacy) are more likely to be exposed to misinformation and less likely to receive credible alternatives.
During the COVID-19 pandemic, these groups were systematically underserved by national communication campaigns that failed to address linguistic and cultural barriers, as evidenced by UK Government statistics.[18] In such contexts, misinformation can become a substitute for official information, particularly when institutional voices appear distant or unrelatable.
The challenge for policymakers is now clear: misinformation is not merely a communication failure, but a governance one. Its spread exposes weaknesses in institutional coordination, regulatory capacity, and public trust, all of which must be rebuilt if credible health information is to prevail. Addressing these issues requires more than reactive fact-checking or piecemeal regulation. It demands systemic reform that embeds resilience, inclusion and transparency across the health information ecosystem.
These gaps in coordination, trust and accountability highlight why stronger, more joined-up action is needed to protect the public from the evolving threat of health dis/misinformation.
Policy Recommendations
Because the current system cannot keep pace with the speed and sophistication of digital misinformation, the following recommendations aim to strengthen public defences, rebuild trust and improve the integrity of health communication in the UK.
The Government should establish a Health Information Integrity Programme, which could operate as a standalone body or as a key focus of a wider information integrity campaign. The initiative would strengthen public capability in identifying and evaluating online health content, addressing the widespread misinformation identified across vaccines, cancer and mental health, where false claims often spread faster than official communication. The Royal Society has stressed that both adult and school-based education are essential to building long-term resilience. The key challenge will be sustaining public engagement with the programme to maximise its impact. To give the programme its widest reach, the Government should harness the same modern, innovative tools used by adversaries of public health.
Whilst the Online Safety Act marks an important first step toward accountability, its enforcement mechanisms remain weak and overly reliant on self-regulation. The Government should go further by introducing a statutory duty on technology companies to proactively scan for keywords and patterns commonly associated with health dis/misinformation. Content identified through this process should be flagged with clear, proportionate warnings indicating that it may be misleading or inaccurate. In addition, platforms should be required to introduce provenance indicators for health-related content and to cooperate with Ofcom in providing transparency over algorithmic design and moderation processes.
Finally, government strategy must also embed inclusion and partnership at its core. As shown during the COVID-19 pandemic, marginalised groups were disproportionately exposed to misleading information. The Government must not allow misinformation to supersede official guidance because of a failure to adapt its communications to its audiences. Work must be carried out at the heart of local communities to drive the Health Information Integrity Programme forward.
Overall, the UK can begin to rebuild confidence in health information by tackling the root causes of misinformation, and embedding practical, community-based solutions at the heart of its strategy.
References
[1] Denniss, E., and Lindberg, R. (2025). Social media and the spread of misinformation: infectious and a threat to public health. Health Promotion International. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11955583/
[2] The Royal Society, BBC, Wikimedia UK. (2025). Building adult community resilience to disinformation during health emergencies through information literacy. Available at: https://royalsociety.org/-/media/policy/publications/2025/building-adult-community-resilience-to-disinformation.pdf
[3] Germano, F., Gomez, V., and Sobbrio, F. (2025). Ranking for Engagement: How Social Media Algorithms Fuel Misinformation and Polarization, Barcelona School of Economics. Available at: https://www.researchgate.net/publication/392951191_Ranking_for_Engagement_How_Social_Media_Algorithms_Fuel_Misinformation_and_Polarization
[4] Gabarron, E., Oyeyemi, S. O., Wynn, R. (2021). COVID-19-related misinformation on social media: a systematic review, Bull World Health Organ. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC8164188/
[5] University of Portsmouth. (2025). New research unveils the "dark side" of social media influencers and their impact on marketing and consumer behaviour. Available at: https://www.port.ac.uk/news-events-and-blogs/news/new-research-unveils-the-dark-side-of-social-media-influencers-and-their-impact-on-camarketing-and-consumer-behaviour
[6] Global Disinformation Unit. (2024). Steven Bartlett sharing harmful health misinformation in Diary of CEO podcast, BBC News. Available at: https://www.bbc.co.uk/news/articles/c4gpz163vg2o
[7] Mulcahy, R., Barnes, R., de Villiers Scheepers, R., Kay, S., and List, E. (2024). Going Viral: Sharing of Misinformation by Social Media Influencers. Australasian Marketing Journal. Available at: https://journals.sagepub.com/doi/10.1177/14413582241273987
[8] Stokel-Walker, C. (2024). Deepfakes and doctors: How people are being fooled by social media scams. British Medical Journal. Available at: https://www.bmj.com/content/bmj/386/bmj.q1319.full.pdf
[9] The Royal Society, BBC, Wikimedia UK. (2025). Building adult community resilience to disinformation during health emergencies through information literacy. Available at: https://royalsociety.org/-/media/policy/publications/2025/building-adult-community-resilience-to-disinformation.pdf
[10] Burki, T. (2019). Vaccine misinformation and social media. The Lancet Digital Health. Available at: https://www.thelancet.com/journals/landig/article/PIIS2589-7500(19)30136-0/fulltext
[11] UK Government. (2025). Confirmed cases of measles in England by month, age, region and upper-tier local authority: 2025. UK Health Security Agency. Available at: https://www.gov.uk/government/publications/measles-epidemiology-2023/confirmed-cases-of-measles-in-england-by-month-age-region-and-upper-tier-local-authority-2025#:~:text=Since%201%20January%202025%2C%20there,and%20the%20North%20West%20regions
[12] Kelly, A. (2025). Misinformation, access and cuts – the UK’s measles surge explained. The Guardian. Available at: https://www.theguardian.com/world/2025/jul/17/thursday-briefing-misinformation-access-and-cuts-the-uks-measles-surge-explained
[13] Hudon, A., Perry, K., Anne-Sophie, P., Doucet, A., Ducharme, L., Djona, O., Testart Aguirre, C., Evoy, G. (2025). Navigating the Maze of Social Media Disinformation on Psychiatric Illness and Charting Paths to Reliable Information for Mental Health Professionals: Observational Study of TikTok Videos. Journal of Medical Internet Research. Available at: https://www.jmir.org/2025/1/e64225/
[14] Hall, R., and Keenan, R. (2025). More than half of top 100 mental health TikToks contain misinformation, study finds. The Guardian. Available at: https://www.theguardian.com/society/2025/may/31/more-than-half-of-top-100-mental-health-tiktoks-contain-misinformation-study-finds
[15] NHS England (2023). Mental Health of Children and Young People in England, 2023 - wave 4 follow up to the 2017 survey, Mental Health of Children and Young People Surveys. Available at: https://digital.nhs.uk/data-and-information/publications/statistical/mental-health-of-children-and-young-people-in-england/2023-wave-4-follow-up#:~:text=Key%20Facts,20%20to%2025%20year%20olds
[16] Lazard, A., Licciardello Queen, T., Pulido, M., Lake, S., Nicolla, S., Tan, H., Charlot, M., Smitherman, A., and Dasgupta, N. (2025). Social media prompts to encourage intervening with cancer treatment misinformation. Social Science & Medicine. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0277953625002795
[17] Kent, R. (2025). When Health Misinformation Kills: Social Media, Visibility, and the Crisis of Regulation. King’s College London. Available at: https://www.kcl.ac.uk/when-health-misinformation-kills-social-media-visibility-and-the-crisis-of-regulation
[18] UK Government. (2020). Evidence summary of impacts to date of public health communications to minority ethnic groups and related challenges, 23 September 2020. Scientific Advisory Group for Emergencies. Available at: https://www.gov.uk/government/publications/evidence-summary-of-impacts-to-date-of-public-health-communications-to-minority-ethnic-groups-and-related-challenges-23-september-2020
