Inside the Google.org Impact Summit: EMEA

Something big was in London earlier this month, and its name wasn't Ben. It was the Google.org Impact Summit: EMEA, and it set out to address two urgent questions: How can we ensure AI benefits everyone? And how can we leverage our collective strengths to make that happen?
The event was dynamic. Across dozens of sessions, hundreds of leaders across the philanthropy and nonprofit sectors worked together to chart the social sector’s path forward. AI-powered nonprofit leaders (many of whom just kicked off Google.org’s Generative AI Accelerator!) had their say too. From time on the main stage to live demos, leaders from Materiom, EIDU, Bayes Impact, and more shared their visions for a more resilient future.
Google.org’s VP and Global Head, Maggie Johnson, kicked off the day with a keynote that set the tone for action. She laid out a three-part roadmap, urging philanthropy to:
- Fund the tech
- Fund together, not separately
- Fund for the future
The principles echoed throughout the day as attendees explored what responsible, inclusive, and impactful AI adoption looks like on the ground. Here’s what happened.
Fund the Tech
Maggie’s keynote made it clear: AI is no longer optional. It’s an essential layer of infrastructure, and nonprofits need it baked into how they work. But tools alone aren’t enough. Adoption requires capital so organizations can invest in training and capacity. That’s where many get stuck: 51% of nonprofits cite funding constraints as a top barrier to adopting AI, according to Google.org’s Nonprofits & Generative AI report.
On the main stage, David Spriggs, CEO of nonprofit Infoxchange, brought this message to life. He shared how, thanks to Google’s NotebookLM, tedious tasks like grant writing and content creation now take a fraction of the time. And it’s not just a back-office upgrade. Ask Izzy, their flagship service for people at risk of homelessness, is evolving into a conversational AI that helps users navigate support in their own language. This AI integration allows their staff to spend less time on paperwork and more time with people.
In a parallel conversation, attendees discussed the future of learning with Google’s Dr. Shanika Hope. The group focused on how AI can enhance education, such as tools that help teachers support students through “productive struggle,” not bypass it. Participants emphasized the importance of maintaining human connection in the classroom, using AI to personalize learning while reinforcing teacher-student relationships. The group also explored how AI can reduce teacher workload, freeing up time for one-on-one support and deeper instruction. Lenny Learning, an AI-powered nonprofit in Google.org’s latest Generative AI Accelerator cohort, illustrates this concept. Their platform helps counselors and teachers access on-demand, evidence-based behavioral health lesson plans. The tool saves educators up to five hours per week.

Dr. Shanika Hope, Director of Tech Education at Google
During an afternoon panel, Materiom co-founder Alysia Garmulewicz described how her team uses AI to fast-track sustainable materials R&D — from 20 years to two. Materiom builds open-source tools to help innovators design compostable materials from natural ingredients. With AI, they can predict how different ingredients will perform. As Alysia put it, their mission is to “create a kind of live wire between the frontiers of science and the innovators who really have the capacity to radically transform our material world.” When nonprofits treat AI as essential, they can unlock progress that once felt out of reach.
That idea carried into a hands-on roundtable led by Fast Forward’s own Shannon Farley. The session brought funders and nonprofit leaders together to explore how to assess and support responsible AI. Participants discussed how to evaluate data quality, test for embedded bias, and build feedback loops with the communities the AI tool is meant to serve. They also surfaced a key need — shared frameworks and evaluation tools to help funders back AI solutions with confidence. With clearer guidance, funders can make strategic investments to ensure AI serves the public good.
The spirit of innovation was on full display at the demo booths. AI-powered nonprofits gave attendees a hands-on look at how AI is already transforming frontline work. Bayes Impact shared CaseAI, an AI-powered tool that helps social workers generate care plans and free up time for human connection (check out their Demo Day pitch here!). EIDU showcased their AI-powered education platform, which reaches over 600K students across Kenya, Nigeria, and Pakistan. And Jacaranda Health demoed PROMPTS, an AI-enabled help desk that responds to 12K daily questions from mothers in sub-Saharan Africa.

Bayes Impact demo booth
Fund Together, Not Separately
As Maggie Johnson said during her keynote speech, “The problems we’re working on are so big that no one organization can solve them.” She pointed to FireSat as a vivid example. The new initiative, backed by Google.org, the Moore Foundation, Earth Fire Alliance, and Muon Space, deploys satellites to capture imagery and AI to detect wildfires in real time. It’s a moonshot that only works because of pooled capital and deep collaboration. A shared belief in the power of partnership fueled the day’s conversations about cross-sector collaboration.
From the main stage, TechSoup’s Anna Sienicka and Impact Europe’s Roberta Bosurgi challenged funders to break out of their silos. They unpacked the tension between tech’s “move fast” mindset and the nonprofit sector’s cautious “do no harm” approach. The speakers offered a new idea: patient disruption. AI innovation should be bold, but not rushed. Nonprofits must ground their work in community input, ethical guardrails, and a commitment to minimizing harm. But ambition matters too. The challenges we face are vast and require scale. The conversation also revealed a bigger truth: no single player can do this work alone. Solving complex challenges with ethical innovation demands cross-sector collaboration and flexible funding.
During a session hosted by Fast Forward’s Shannon Farley and Google.org’s Amanda Timberg, corporate philanthropy took center stage. Leaders from corporate giving programs grappled with a central question: how can we use non-cash assets like tech talent, tools, and expertise to support AI for humanity? The room agreed that we make a bigger impact when we act together. But collaboration is hard. Goals shift. Timelines don’t always match. Teamwork is rarely incentivized. Still, the potential is too big to ignore. Participants pointed to a few ways to make collaboration easier. Often, strong alliances start with a trusted third-party convener like a nonprofit, a fund, or a government agency. Existing business alignment can also pave the way for partnership. And the most impactful collaborations happen when each partner plays to their unique strengths.

Shannon Farley, co-founder of Fast Forward
That spirit of partnership continued into breakout workshops where attendees got tactical. The group explored how to align goals across sectors, open up tools like cloud credits and APIs, and invest in shared infrastructure. One example is Google.org’s Flood Hub, an AI-powered flood forecasting platform. In Nigeria’s Kogi and Adamawa states, GiveDirectly and the International Rescue Committee used Flood Hub’s API to identify at-risk villages. This data helped trigger cash transfers to 7.5K people five to seven days before peak flooding. Families used the funds to secure essentials, protect livestock, and evacuate in time. It’s a prime example of how cross-sector collaboration and shared infrastructure can turn predictive data into life-saving action.
“The problems we’re working on are so big that no one organization can solve them.”
Fund for the Future
The message was clear: short-term thinking won’t cut it. If we want AI to serve the public good, we have to back early-stage innovation and bold bets. Philanthropy must fund beyond pilots. We need to support the infrastructure, people, and ideas that may not yet have clear outcomes, but do have outsized potential.
In her lightning talk, Rose Nakasi, Head of Makerere’s AI Health Lab, showed what’s possible when research meets real-world needs. Rose spotlighted Ocular, an AI-powered diagnostic tool that transforms any standard microscope with a 3D-printed adapter. The tool detects disease in just five seconds, a task that typically takes a human expert over 30 minutes. In Uganda, where one pathologist may serve over 16K patients, these tools are already helping frontline health workers catch life-threatening conditions like malaria and cervical cancer earlier. But this kind of innovation doesn’t happen overnight. It requires flexible capital that invests early, sticks around through R&D, and helps translate prototypes into practical tools.
That theme carried through a session on youth safety and well-being, hosted by Google.org’s Rowan Barnett. Platform leaders, nonprofit advocates, and policymakers gathered for a candid discussion about how to protect young people online in the age of AI. The group surfaced priorities like “safety by design,” youth digital literacy, and cross-sector collaboration. Attendees agreed that it’s not enough to react to online harms — we need to anticipate them, and invest now in solutions that put young people at the center.

Maggie Johnson, VP and Global Head of Google.org
The Google.org Impact Summit: EMEA is proof of what’s possible when philanthropists, nonprofits, and technologists come together around a shared goal. In this case, ensuring AI works for everyone. The big takeaways rang loud and clear (just like Big Ben). Nonprofits need flexible funding that supports both early AI ideas and lasting AI infrastructure. To turn that potential into reality, the path forward must include partnership across sectors, borders, and areas of expertise.
In truly collaborative form, the Impact Summit ended with an open invitation to the community led by Google.org’s Annie Lewin. To fund smarter. To act in concert. And to push for a future where AI isn’t just powerful, but also ethical. As one attendee put it, “We should use this moment in AI to double down on our humanity.” Challenge accepted.