This post was written by Léo Dana, Charbel-Raphaël Ségerie, and Florent Berthet, with the help of Siméon Campos, Quentin Didier, Jérémy Andréoletti, Anouk Hannot, Angélina Gentaz, and Tom David.
In this post, you will learn about EffiSciences’ most successful field-building activities, as well as our advice, reflections, and takeaways for field-builders. We also include our roadmap for the next year. Voilà.
EffiSciences is a non-profit based in France whose mission is to mobilize scientific research to overcome the most pressing issues of the century and ensure a desirable future for generations to come.
EffiSciences was founded in January 2022 and is now a team of ~20 volunteers and 4 employees.
At the moment, we are focusing on 3 topics: AI safety, biorisks, and climate change. In the rest of this post, we will only present our AI safety unit and its results.
TL;DR: In one year, EffiSciences created and ran several AIS bootcamps (ML4Good), taught accredited courses in universities, and organized hackathons and talks at France’s top research universities. We reached 700 students, 30 of whom are already orienting their careers toward AIS research or field-building. Our impact came as much from kickstarting students as from upskilling them. And we are in a good position to become an important stakeholder in French universities on these key topics.
TL;DR of the content
In order to assess the effectiveness of our programs, we have estimated how many people became highly engaged thanks to each program, using a single metric that we call “counterfactual full-time equivalent”. This is our estimate of how many full-time equivalents of work these people will contribute to AI safety in the coming months (thanks to us, counterfactually). Note that some of these programs have instrumental value that is not reflected in the following numbers.
[Table: counterfactual full-time equivalents per program]
How to read this table: we organized 2 French ML4Good events, which together significantly influenced the careers of 7.4 equivalent persons. This means that, on average, 3.7 new full-time persons started working on AI safety after each ML4Good event.
In total, those numbers represent the aggregation of 43 people who are highly engaged, i.e., people who have been convinced of the problem and are working on solving it through upskilling, writing blog posts, facilitating AIS courses, doing AIS internships, attending SERI MATS, doing policy work in various orgs, etc. The time spent working by these 43 people adds up to 23 full-time equivalents.
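The arithmetic behind the metric can be sketched as follows. This is an illustrative toy calculation only, not our actual methodology; the numbers come from this post, and the data structure is hypothetical.

```python
# Toy sketch of the "counterfactual full-time equivalent" (FTE) metric.
# Numbers are taken from the post; the structure is hypothetical.

# program name -> (number of events, total counterfactual FTE attributed)
programs = {
    "ML4Good (France)": (2, 7.4),
}

for name, (events, total_fte) in programs.items():
    avg_fte = total_fte / events  # average counterfactual FTE per event
    print(f"{name}: {avg_fte:.1f} counterfactual FTE per event")

# Aggregate across all programs: 43 highly engaged people whose combined
# time adds up to 23 full-time equivalents of AIS work.
highly_engaged = 43
total_fte = 23
print(f"Average: {total_fte / highly_engaged:.2f} FTE per engaged person")
```

Dividing the total counterfactual FTE by the number of events gives the per-event figure quoted above (7.4 / 2 = 3.7).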
As a field-building group, EffiSciences’ AI Unit has 2 major goals for AIS:
Here is our theory of change as of June 2023:
The ExTRA pipeline: Exposure, Technical upskilling, Research and Alignment. Bolded activities are our core activities.
So far, we have mostly focused on the first two steps of our theory of change: Exposure and Technical Upskilling. We also aim to have a more direct influence on Research by taking interns and opening research positions. In addition, we intend to hand over the final steps, especially promoting concrete measures in AI governance, to other organizations, and to keep our focus on research for the time being.
We now present our thoughts after a year of field-building.
You can find an almost complete chronology of our events here; we will only discuss the ones included in the summary above.
Machine Learning for Good: Our most successful program
Turing Seminar: Academic AI Safety course
AIS training day: the 80/20 of the Turing Seminar
EffiSciences’ educational hackathons:
Hosting Apart’s hackathons:
Lovelace study groups:
Create research and internship opportunities:
Strengthening our bonds with French academia:
Diving into AI governance:
Talking to the media
We are all pretty happy and proud of this year of field-building in France. We made a lot of progress, but there is so much more to be done.
Don’t hesitate to reach out: if you’d like to launch similar work, we would love to support you or share our experience. And if you have feedback or suggestions for us, we’d love to hear them!
We are currently fundraising, so if you like what we do and want to support us, please get in touch using this email.
We mean that creating the AI Unit enabled its founders to engage in AIS; we count the work they would not have done otherwise.
The actual numbers would be higher if we included the fact that we also help other groups launch their own ML4G in their country (e.g. LAIA in Switzerland).