Divya Siddarth is Shaping AI With Global Input

July 29, 2025
Meet the leaders who are putting AI to work for good. Humans of AI for Humanity is a joint content series from the Patrick J. McGovern Foundation and Fast Forward. Each month, we highlight experts, builders, and thought leaders using AI to create a human-centered future — and the stories behind their work.

Divya Siddarth wants to make AI work for everyone, not just the powerful few.

She leads the Collective Intelligence Project (CIP), where public input shapes how AI gets built. CIP runs civic systems like Alignment Assemblies and Global Dialogues that bring in thousands of voices from around the world. Their goal is to turn public perspectives into clear guidance for the labs developing frontier AI.

Divya’s background blends tech, policy, and governance. She studied AI at Stanford University, worked on strategy at Microsoft, and spent time with its research team in India. At every step of her career, she saw the same problem: AI decisions were happening without public input. CIP is her answer.

Today, she’s working with partners like Anthropic to align AI tools with public values and shape AI policy alongside global communities.

In this interview, Divya shares why people are turning to AI for support, how public feedback is shaping real decisions, and what it takes to build AI that serves everyone.

How did your journey inspire you to explore AI for humanity?

Ever since studying AI at Stanford, I’ve had a sense of how transformative the technology would be. That sense only increased as I spent time in India at Microsoft Research and later worked on AI governance in the office of the CTO at Microsoft. I saw a massive gap between the profound impact frontier AI was going to have and how few people had a voice in its development. Public understanding and transparency were so limited that people lacked the ability to shape their own futures. That's why I left to start the Collective Intelligence Project — to directly address how we can build an AI future that serves all of us, involving people at the scale and speed required.

How do you balance expert insight with public input when designing community-aligned AI?

While it's true that experts have a much deeper understanding of AI's technical architecture, we believe people should be able to bring their values to bear on AI and use the tools in a way that works for them. We balance this in a few ways.

First, we identify key points in the AI development process — from data collection to training to deployment — where decisions are about values, not just technical specs. Public insight is crucial at those stages. This includes the model’s “constitution,” a set of values and instructions that guide how it should behave, as well as the policies that govern AI use and the methods used to evaluate models.

Second, we sometimes use proxies for the public, such as civil society organizations (CSOs) and policymakers, to help mediate the relationship between the public and technical experts. We’re currently partnering with global CSOs, such as Factum in Sri Lanka and Karya in India, to run on-the-ground evaluations of frontier models. They’re testing how AI models work in everyday, culturally specific contexts.

For instance, models often struggle to provide nuanced responses to prompts about healthcare in India or elections in Sri Lanka. The experience of people who work on those issues every day should be considered expertise. By doing this, we’re taking the technically complex work of AI evaluations and creating feedback mechanisms that allow a broader slice of humanity to give input. We’re building a “Wikipedia for evaluations,” allowing anybody to build and run their own AI evaluations. We call this Weval.

We believe expert insight without public input leaves critical information and valuable data on the table. Conversely, you can't simply ask anyone on the street to evaluate a complex AI model; it requires careful mechanisms to be effective.

What’s a moment from Global Dialogues that shifted how you see public participation in AI?

Global Dialogues are recurring, digital conversations that ask people around the world how they feel about AI and how their relationship to it is evolving. We hear directly from participants about a wide variety of topics, such as human-AI relationships, trust, the future of agents, and cultural differences.

Right before we run a big collective input process is when I’m at my most doubtful. One thing about working in democracy is that you do not have control over what people say. You can set up a great process, but you just can't make people do anything. Every single time, at least so far, I have been delighted and surprised by how nuanced, engaged, and meaningful the participation is.

A recent moment that stands out came from a process we ran on AI trust and human-AI relationships. We learned that one in three adults uses AI for emotional support on a weekly basis, and 40% of people trust their chatbots more than their elected representatives. These findings highlight the world AI is entering — one that often lacks cohesion and coordination in governance. It reinforces that if we don't involve the public in making society resilient and understanding these transformations, we won't end up with the pluralistic, inclusive world we want.

What’s surprised you most in how people are using AI in their everyday lives?

What's most surprising is the disparity between how people view AI companies versus AI chatbots. There's a lack of trust in the companies, but a growing trust in the chatbot tools themselves. The level of trust in the latter is remarkable, particularly in how people are using AI for emotional support. While not necessarily surprising, it’s also touching to see how deeply and thoughtfully people are thinking about AI’s impact on their lives, and what kinds of futures they are imagining with it. It’s core to my mission to make sure people have the collective agency to both steer AI for better futures and to use it to solve their most pressing challenges. That’s collective intelligence.

What core values drive your unique vision for impact in an AI-driven future?

I believe that positive futures with AI are possible and that millions of people have valuable contributions to make toward them. My work is driven by a commitment to giving people agency and input into what those positive futures look like, and accelerating the path to get there so we can improve lives as quickly as possible.

"It’s core to my mission to make sure people have the collective agency to both steer AI for better futures and to use it to solve their most pressing challenges. That’s collective intelligence."

Divya Siddarth, Co-Founder & Executive Director, The Collective Intelligence Project

Which visionary leaders, philosophies, or movements give you hope for a more human-centered AI future?

The Fast Forward Accelerator cohort I was just a part of gives me hope. Audrey Tang also gives me hope. She's one of my favorite people and is a major inspiration for thinking about how AI can build more democratic futures. Ultimately, it’s the people doing the work on the ground who inspire me most. What’s most encouraging are the people who don't just talk about democratizing AI, but are building useful, scalable tools that democratize AI engagement and take its real-world impacts seriously.

What is your 7-word autobiography?

Sidestepping the singularity to build plurality.

Stay tuned for next month’s Humans of AI for Humanity blog, featuring BuildChange’s CEO Juan Caballero. For more on AI for good, subscribe to Fast Forward’s AI for Humanity newsletter and keep an eye out for updates from the Patrick J. McGovern Foundation.