We're a nonprofit focused on accelerating talent into the fields of AI safety and policy. We believe the development of transformative AI will be one of the most consequential events in history. Getting it right requires building a robust ecosystem of researchers, policymakers, technical professionals, and skilled operators who can navigate the complex challenges ahead.
We operate with urgency. If transformative AI arrives soon, we need to move fast to build this ecosystem. For us, that means staying lean, shipping quickly, and constantly reassessing whether our priorities are the right ones.
Our current strategy focuses on interventions that can scale flexibly in response to public interest, allowing the fields of AI safety and policy to rapidly absorb large amounts of talent. We execute on this strategy by supporting decentralized community building at universities through Pathfinder, and by facilitating scalable research mentorship through SPAR.
Kairos runs a portfolio of AI safety projects including:
SPAR
The largest research fellowship focused on risks from advanced AI.
A part-time, remote research fellowship matching aspiring AI safety researchers with experts in the field to work together on impactful projects.
Research conducted through SPAR has been accepted at ICML and NeurIPS, covered by TIME, and led to full-time job offers for mentees.
Our Fall 2025 round features 80 projects and 309 mentees across 55 countries.
Pathfinder Fellowship
Mentorship and financial support for university clubs focused on AI safety and policy.
A fellowship providing funding, mentorship, community, and other resources to organizers of AI safety student groups at universities around the world.
Pathfinder enables organizers to run ambitious programming, helping their members upskill for careers in the field.
Our Fall 2025 round features 65 Pathfinder Fellows at 51 universities across 11 countries.
Our Priorities include:
Creating Scalable Impact
We design programs that can reach large numbers of people without sacrificing quality. We build infrastructure that lets large numbers of motivated individuals find mentorship and do meaningful work, whether that's publishing research through SPAR or building student communities through Pathfinder.
Being Purpose-Driven
Impact comes first. We aim to find and support talent that actually moves the needle on AI safety and policy. We value what delivers impact, not what looks impressive.
Staying Agile
The field is changing rapidly, and so are we. We maintain the speed and adaptability to pivot our strategy as new evidence comes in or the landscape shifts. Our lean team size is a feature, not a bug; it lets us make decisions quickly and test new approaches without excessive process.
Seeking Truth
We pursue truth rigorously. That means being honest about uncertainty, updating our beliefs when the data demands it, and building accurate models of how our programs create impact. We'd rather have clarity than false confidence, and we actively resist motivated reasoning, even when it might be more comfortable to believe otherwise.