
Kevin Klyman

Technology Policy Strategist


What I Do

I am a policy researcher who advocates for responsible uses of technology to reduce harm and advance peace. I am currently a researcher at Stanford, where I investigate the risks of large AI models. In 2024, I published the first transparency reports for AI models used by millions of people and helped win new protections for third-party AI researchers in the US. My research has been published at top machine learning conferences like NeurIPS and ICML and featured by the New York Times and Washington Post. My essays on geopolitics and tech have been published by outlets like Foreign Policy, TechCrunch, and Just Security. You can follow my work on LinkedIn.


Essays, Blogs, White Papers

Resume

HELM Safety v1.0 (2024) an evaluation framework for the safety of large language models using standard public benchmarks

Foundation Models Under the EU AI Act (2024) a blog tracing the evolution of the EU AI Act over time

Transparency of AI EO Implementation (2024) a tracker of the implementation of the US AI Executive Order

How to Promote Responsible Open Foundation Models (2023) a summary of a Stanford-Princeton workshop on open models

Do Foundation Model Providers Comply with the Draft EU AI Act? (2023) — an analysis of the European Parliament's position

Biden Takes Measured Approach on China Investment Controls (2023) an essay in Foreign Policy on the costs and benefits of US outbound investment controls

The US Wants to Make Sure China Can't Catch Up on Quantum Computing (2023) — an essay in Foreign Policy

China's tech crackdown could give it an edge (2022) — an op-ed in The Diplomat arguing China's tech crackdown might just work

Who really benefits from digital development? (2022) — an article in TechCrunch finding that digital development projects may have unintended consequences

The Great Tech Rivalry: China vs the US (2021) — a report for the Kennedy School on ML, 5G, quantum information science, semiconductors, biotech, & green tech in China and the US

Collaborating to measure digital transformation: Sharing the Digital Impact Alliance’s draft Digital Transformation Indicator Library (2021) — a dataset of over two thousand indicators of readiness for digital investment that I collected and cleaned 

Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control (2020) — a report on national policies regarding autonomous weapons

UC Berkeley is making the student housing crisis worse (2019) — an investigation published in the San Francisco Chronicle

Cluster Munition Monitor (2018) — a report that exposed war crimes in bombings in Syria and Yemen

Campaign to Stop Killer Robots Report on Activities (2018) — a report documenting the campaign's influence on policymakers



Research

Language model developers should report train-test overlap (2024) a position paper calling for training data transparency

Acceptable Use Policies for Foundation Models (2024) a mapping of 30 AI developers' acceptable use policies

Consent in Crisis: The Rapid Decline of the AI Data Commons (2024) an audit of AI training datasets that showed how websites are working to resist scraping by AI companies

AIR-Bench 2024 (2024) a benchmark based on AUPs that assesses if companies' models adhere to their own policies

The Responsible Foundation Model Development Cheatsheet (2024) a resource for developers of foundation models spanning the AI lifecycle, from data to evaluations

AI Risk Categorization Decoded (2024) an AI risk taxonomy based on the policies of AI developers and governments

Considerations for Governing Open Foundation Models (2024) a paper in Science that assesses how different regulatory proposals for AI might affect open models

The 2024 Foundation Model Transparency Index (2024) A paper with transparency reports for 14 foundation models

Introducing v0.5 of the AI Safety Benchmark from MLCommons (2024) A new safety benchmark with 7 harm areas

A Safe Harbor for AI Evaluation and Red Teaming (2024) A position paper calling for protections for third-party AI research

On the Societal Impact of Open Foundation Models (2024) — A position paper calling for marginal risk assessment of AI models

Foundation Model Transparency Reports (2024) A framework for transparency reporting of foundation models

The 2023 Foundation Model Transparency Index (2023) A first-of-its-kind metric for the transparency of foundation models based on the practices of major developers

Affiliations

Researcher at Stanford's Center for Research on Foundation Models and Stanford's Institute for Human-Centered AI

Co-President of the Board of the John Gardner Fellowship Association

Co-lead of the Technology Working Group at Foreign Policy for America's NextGen Initiative

I'd love to hear from you


© 2024 by Kevin Klyman

My email is kevin [dot] klyman [at] berkeley [dot] edu
