Lauriane recently joined Laterite’s Analytics Team as an AI Developer. She brings a blend of technical depth and real-world perspective shaped by studies at CentraleSupélec Paris-Saclay and Tsinghua University. With experience ranging from large language models to reinforcement learning (and an early start in public service as a city council member in France), Lauriane is fascinated by how AI can meaningfully support decision-making.

In December 2025, she presented her research at NeurIPS in San Diego, a project born during her time in China. We sat down with her to learn more about the journey behind these milestones, and where she hopes to take her work next.

Your academic path spans France and China. What motivated you to study across two very different contexts?

I wanted to experience a different culture and a different way of thinking. Curiosity—both about China and about who I would become in this new environment—was a big driver for me. China felt like a place where ideas move quickly: projects can get started fast, there’s significant investment in research, and the pace is intense and competitive. I was also motivated to learn another language and see how science and AI are approached in a context that’s very different from Europe. Living there for two years really broadened my perspective.

You served as a city council member at just 18. How has that shaped the way you think about AI and its social impact?

It made me very aware that AI can be an incredible tool, but also something that can increase inequalities if it’s not regulated and understood. In local government, decisions are connected to reality: many are logical, but there’s always a political and ideological layer, and that part can become emotion-driven and disconnected from data. That experience made me interested in how we can better predict the impact of decisions, like quantifying policy changes and their social outcomes. It also convinced me that action matters at every scale: even at a city level, things like education and intergenerational support for using technology can have real impact.


What motivated you to join Laterite?

Laterite felt like the perfect blend: real technical challenge, autonomy, and work that makes sense. In a small team, you can touch many types of AI work—LLMs, OCR, translation, classic machine learning—and be involved in client discussions, ideation, development and deployment. I also wanted to work somewhere that uses AI responsibly, where the “why” is taken seriously. For me, it’s important to build tools that help people and support better research and decision-making.

How have you been settling into the team?

I’ve felt really welcomed. Colleagues took time to explain their workflows and the industry context, which made it easy to understand how Laterite works and what our research supports. I also like that my role touches many aspects of the company and involves interacting with people in different positions. There’s a lot of cross-team learning. There’s also a shared curiosity about AI and what we can build with it, which makes collaboration feel natural. And on a personal note, moving to Amsterdam has been quite easy. Having hobbies like training for an Ironman helps a lot with settling in and meeting people.

You presented your paper on reinforcement learning at the recent NeurIPS conference. For a non-expert, what’s the main idea behind your work?

Reinforcement learning (RL) is about training an algorithm to make decisions based on rewards—like learning to play a video game, control a robot, or drive a car. A lot of RL systems are trained for one specific task. The big idea behind my work is pushing toward more general algorithms for decision-making: moving from “an algorithm for one task” to “an algorithm for all tasks.” You can think of it as working toward a foundation-model-like approach for reinforcement learning, where a system can learn about its environment and then use this understanding to accomplish any given task.
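To make the "learning from rewards" idea concrete, here is a minimal, illustrative sketch of classic tabular Q-learning on a toy task: an agent in a short corridor earns a reward only when it reaches the goal, and gradually learns that stepping right is the best decision at every position. This is a textbook example for intuition only; it is not taken from Lauriane's paper, and all names and parameters are illustrative.

```python
import random

GOAL = 4            # corridor positions 0..4; reward is at position 4
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    """Toy environment: move, clamp to the corridor, reward at the goal."""
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=1000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                # explore: try a random action
                action = rng.choice(ACTIONS)
            else:
                # exploit: pick the best-known action, breaking ties randomly
                best = max(q[(state, a)] for a in ACTIONS)
                action = rng.choice([a for a in ACTIONS if q[(state, a)] == best])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward
            # reward + discounted best future value
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# Extract the learned policy: at each state, is "right" now worth more?
policy = {s: (+1 if q[(s, +1)] > q[(s, -1)] else -1) for s in range(GOAL)}
print(policy)
```

A system like this solves exactly one task (this corridor); the "one algorithm for all tasks" direction Lauriane describes asks how an agent can instead build a reusable understanding of its environment and apply it to whatever goal it is given.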


How do you hope to contribute to the AI and data products we’re building for researchers and policymakers?

My goal is to build tools that make researchers’ work easier and more powerful. I want to build systems that free up time so people can focus on better research rather than repetitive tasks. I’m especially excited about tools that help uncover deeper insights and bring additional, useful data into the research process. That could mean accelerating analysis, improving access to knowledge, or supporting new ways of interacting with data. Ultimately, I want to create reliable and practical tools researchers can trust and use day to day.

How do you see AI changing the landscape of social research in the coming years?

I’m still discovering the field, but I think AI will reshape both what we study and how we study it. On the project side, it’s a real opportunity: development work in areas like education can benefit from new tools. But it can also widen gaps if some countries or groups are empowered with AI much more than others, so there will be a growing need for responsible adoption and capacity building. On the research side, AI will change how we interact with populations, how we analyze large datasets to spot patterns, and how we access and synthesize existing knowledge (papers, evidence, and past research) more efficiently. It’s evolving quickly across every step of the process.

What are your future goals, for yourself and for Laterite’s Analytics team?

I think we have an important role to play as a catalyst in this space. Laterite has been helping push forward how AI can support social research responsibly. I’d love to collaborate widely with others in the field to advance the work as a whole, not in isolation. Personally, I want to keep building useful models and tools that help colleagues and researchers. And not only delivering tools, but helping people understand the models they use, what they can and can’t do, and encouraging them to proactively ask for the right tools. Long term, I’m interested in working even more closely with decision-makers—building algorithms that support policy and public decision-making with stronger evidence.