Tommy is a researcher, technologist and policy advisor specialising in frontier AI safety and security. He is currently a Senior AI Policy Manager at the Centre for Long Term Resilience. Previously, he led the development of a £10m AI platform for the UK government, which deployed novel ML techniques to analyse online disinformation. Prior to this, he was Head of Policy at the non-profit First Draft, where he led and consulted on projects with the WHO, Stanford University, Unicef, the UN, Google and the Partnership on AI. He is also completing a PhD on AI safety incidents and how they have transformed the development of AI systems from 2012 to the present day.

Research

I have conducted a wide range of research on AI incidents and disinformation, including work published in leading peer-reviewed academic journals. This has included threat assessments, threat actor uplift studies, and election research using digital and data science methods.

Diagram from my research into disinformation threat actor uplifts from LLMs, based on a meta-analysis of over 70 research studies and expert interviews (The near-term impact of AI on disinformation, CLTR, 2024).

Policy work

  • Preparing for AI security incidents

    In a major report, I raised the alarm about the increasing likelihood of a major AI security incident and developed 34 concrete policy recommendations for how the UK Government could pioneer a holistic AI security strategy. The report was covered in TIME Magazine, and I engaged Ministers and Special Advisers on its recommendations.

  • Incident reporting for AI security

    I developed policy recommendations for how the UK should launch an AI incident reporting regime. I worked closely with UK government officials on operationalising these recommendations, and the report drove a national conversation about incident reporting, with coverage in the national press.