Shaping the governance of advanced AI

I lead the AI Risk Index at MIT FutureTech and The University of Queensland, building the evidence base that decision-makers need to manage high-priority AI risks.

I also help grow Australia’s AI safety ecosystem through policy collaborations, community building in Melbourne, and national convenings.

Request a briefing · Explore the AI Risk Repository

Interests
  • Advanced AI governance
  • Risk & mitigation evidence synthesis
  • Implementation & scale-up
  • Behaviour & decision science
  • Community building & convening
Education
  • PhD (Social Psychology), 2015

    The University of Queensland

Current focus

Two streams of work:

  1. AI Risk Index (MIT FutureTech × UQ) — research leadership, collaboration, and delivery to assess which AI risks matter most, which mitigations work, and how key actors are responding.

  2. Australia’s AI safety ecosystem — collaborating with Good Ancestors Policy, convening the Melbourne AI Safety community, designing and facilitating the 2024 AI Safety Forum, and maintaining AISafety.org.au.

AI Risk Index

The MIT AI Risk Initiative reviews and communicates evidence on AI risks and mitigations. The AI Risk Index turns this into a continuously updated public resource to help people and institutions understand risks, identify effective mitigations, and track organizational responses over time.

Current activities include a systematic review of AI risk mitigations, expert Delphi studies, and a review of organizational risk responses (for public benchmarking in the Index).

My role: program strategy and execution — research design, multi-institution collaboration, methods & tooling (systematic reviews, Delphi), data infrastructure, and communications.

Research questions guiding the Index:

  1. What are the risks from AI, which are most important, and what are the critical gaps in response?
  2. What are the mitigations for AI risks, and which are the highest priority to implement?
  3. Which AI risks and mitigations are relevant to which actors and sectors?
  4. Which mitigations are being implemented, and which are neglected?
  5. How is the above changing over time?

What we’ve built on

The AI Risk Repository (a living systematic review of AI risks) distils 60+ frameworks. It has reached 135k+ visits, is referenced by 650+ sites (including Amazon, IBM, and Trend Micro), was cited in the International AI Safety Report 2025, and is integrated with the AI Incident Database.

Building Australia’s AI safety ecosystem

  • Good Ancestors Policy — policy analysis, submissions, and workshops supporting Australians for AI Safety (program page, expert open letters).
  • Melbourne AI Safety community — convening meetups, collaboration spaces, and policy-focused events (group page).
  • Australian AI Safety Forum (2024) — designed and facilitated sessions at Australia’s first dedicated forum focused on technical AI safety & governance (event site, overview).
  • AISafety.org.au — national hub for resources, guidance, and opportunities (maintainer) (site).

Other areas

  • Behaviour change & climate adaptation — BehaviourWorks Australia Climate Adaptation Mission (project page).
  • Scale-up Toolkit — with Victoria’s Behavioural Insights Unit; practical tools for scaling effective interventions (BWA toolkit).
  • Public health — SCRUB COVID-19 behavioural survey (21 waves; 40k+ respondents) (project page, OSF data).
  • Ready Research — research, training, and communications aligned with effective altruism (readyresearch.org).

See all other projects

Selected publications & reports

Peer-reviewed

  • Saeri, A. K. et al. (2022). What Works to Increase Charitable Donations? A Meta-Review with Meta-Meta-Analysis. VOLUNTAS. doi: 10.1007/s11266-022-00499-y
  • Grundy, E. A. C., Slattery, P., Saeri, A. K., et al. (2021). Interventions that Influence Animal-Product Consumption: A Meta-Review. Future Foods. doi: 10.1016/j.fufo.2021.100111
  • Slattery, P., Saeri, A. K., & Bragge, P. (2020). Research co-design in health: a rapid overview of reviews. Health Research Policy and Systems. doi: 10.1186/s12961-020-0528-9
  • Saeri, A. K., Cruwys, T., Barlow, F. K., Stronge, S., & Sibley, C. G. (2018). Social connectedness improves public mental health. Australian and New Zealand Journal of Psychiatry, 52(4), 365–374. doi: 10.1177/0004867417723990

Preprints / reports

  • Slattery, P., Saeri, A. K., Grundy, E. A. C., Graham, J., Noetel, M., Uuk, R., Dao, J., Pour, S., Casper, S., & Thompson, N. (2024). The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From AI. doi: 10.48550/arXiv.2408.12622
  • Saeri, A. K., Noetel, M., & Graham, J. (2024). Survey Assessing Risks from Artificial Intelligence: Technical Report. UQ / Ready Research. doi: 10.2139/ssrn.4750953
  • Saeri, A. K., & O’Connor, R. M. A. (2023). Applying AI to sustainability policy challenges: A practical playbook. Monash University. doi: 10.31219/osf.io/y75rq

Full list: Google Scholar — https://scholar.google.com.au/citations?user=B_bJG3kAAAAJ