I lead the AI Risk Index at MIT FutureTech and The University of Queensland, building the evidence base that decision-makers need to manage high-priority AI risks.
I also help grow Australia’s AI safety ecosystem through policy collaborations, community building in Melbourne, and national convenings.
PhD (Social Psychology), 2015
The University of Queensland
Two streams of work:
AI Risk Index (MIT FutureTech × UQ) — research leadership, collaboration, and delivery to assess which AI risks matter most, which mitigations work, and how key actors are responding.
Australia’s AI safety ecosystem — collaborating with Good Ancestors Policy, convening the Melbourne AI Safety community, designing and facilitating the 2024 AI Safety Forum, and maintaining AISafety.org.au.
The MIT AI Risk Initiative reviews and communicates evidence on AI risks and mitigations. The AI Risk Index turns this into a continuously updated public resource to help people and institutions understand risks, identify effective mitigations, and track organizational responses over time.
Current activities include a systematic review of AI risk mitigations, expert Delphi studies, and a review of organizational risk responses (for public benchmarking in the Index).
My role: program strategy and execution — research design, multi-institution collaboration, methods and tooling (systematic reviews, Delphi studies), data infrastructure, and communications.
Research questions guiding the Index:
The AI Risk Repository, a living systematic review of AI risks, distils 60+ frameworks. It has substantial reach (135k+ visits), is referenced by 650+ sites (including Amazon, IBM, and Trend Micro), is cited in the International AI Safety Report 2025, and is integrated with the AI Incident Database.
Peer-reviewed
Preprints / reports
Methods & datasets
→ Full list: Google Scholar — https://scholar.google.com.au/citations?user=B_bJG3kAAAAJ