AI Policy Fellowship

Institute for AI Policy and Strategy (IAPS)
Location: Washington, D.C., USA
Application deadline: 18 March 2024
Contract type: Fixed term
Hours: Full-time
Salary: US$5,000 / month
The Institute for AI Policy and Strategy is seeking ten AI Policy Fellows to work on AI policy projects with a focus on the governance of frontier models.

About the organisation

The Institute for AI Policy and Strategy (IAPS) is a remote-first think tank of aspiring wonks, trying to figure out what risks from advanced AI might matter most, and anticipate them with forward-thinking solutions. We aim to be humble yet purposeful: we’re all having to learn about AI very fast, and we’d love it if you could join us in tackling these risks together. Our work covers three areas: AI policy and standards, compute governance, and international governance & China.

Brief description of the role

The IAPS AI Policy Fellowship is a paid three-month remote program, with a two-week in-person component, for 10 individuals to work on AI policy projects focused on the governance of frontier models. The fellowship aims to build the skills and connections needed for a longer-term career in AI governance.

Key roles and responsibilities

Fellows will work on their supervisor's agenda. We expect fellows to be well-positioned to do work such as providing research assistance, distilling policy ideas for key stakeholders (e.g., government agencies), and co-authoring research and policy pieces (e.g., policy briefs, reports, op-eds). Each fellow's work will fall within one of IAPS's focus areas:

  • Compute governance
  • Policy & Standards (mainly US AI standards, regulations, and legislation)
  • International governance & China (including, for example, EU AI standard-setting)

Skills and competencies

We’re seeking candidates with the ability to:

  • Analyze AI policy research and developments, and to a lesser extent AI technical work
  • Write in a professional style with clarity, making points in a clear and structured way
  • Understand the importance of identifying actionable recommendations to reduce risk from AI, and of tying those recommendations to specific stakeholders
  • Prioritize their time, communicate blockers proactively, and keep their supervisor in the loop
  • Communicate clearly why they think what they do, referring to sources as needed and evaluating confidence in them
  • Take into account the impact of their work as a crucial part of their decision-making process
  • Consider a further career in AI governance after their fellowship
  • Work comfortably in a remote, asynchronous setting using tools like G Suite (we’re a remote-first organization with staff in multiple time zones)

Knowledge and experience

Candidates should have some relevant knowledge in their workstream of choice.

Examples include (but are not limited to) having one of the following:

  • Knowledge of relevant US agencies/departments and how they work, e.g. NIST, DoD
  • Knowledge of relevant policy in other countries and institutions working on AI (e.g., UK, France, Germany, EU, UN)
  • Understanding of the Chinese government and/or China-US relations
  • Understanding of semiconductor supply chains and export controls
  • Understanding of standard-setting institutions and procedures
  • Understanding of software engineering, hardware engineering, information security, and/or cybersecurity
  • Prior work in policy (not necessary for your policy work to have been on AI or in tech)

This role is open to candidates with all levels of experience, though we expect early- or mid-career professionals with some work experience to be the best fits for this fellowship.