
Trina Reynolds-Tyler is Holding Power to Account with AI

January 21, 2026

Written by Yolanda Botti-Lodovico

Meet the leaders who are putting AI to work for good. "Humans of AI for Humanity" is a joint content series from the Patrick J. McGovern Foundation and Fast Forward. Each month, we highlight experts, builders, and thought leaders using AI to create a human-centered future — and the stories behind their work.

Investigative journalism has long been an expression of community power. It helps us reveal what remains hidden, hold leaders accountable, and demand the change we want to see. What if artificial intelligence could help accelerate the process while creating space for more people to meaningfully participate?

Since 2021, Trina Reynolds-Tyler has merged the power of AI with community voices to shed light on the issues that Chicagoans care about most. A ten-year veteran of the Pulitzer Prize-winning Invisible Institute, she serves as Data Director and lead investigator for the Beneath the Surface project, which uses machine learning (ML) to help community volunteers identify gender-based violence at the hands of the Chicago Police Department (CPD). Her philosophy around AI centers on awareness and self-determination. By teaching communities how AI is already impacting them, and how to engage with it safely and responsibly, she is restoring agency to people who are often excluded from the trajectory of technological progress.

In this interview, Trina discusses how the Invisible Institute is building pathways to community power with the help of AI and shaping a world where truth and accountability prevail.

How did your journey inspire you to explore AI for humanity?

As a child, I was naturally curious. Because of my mother’s profession in technology services, we were the only house on my block with a computer, which allowed me to exercise that curiosity early on. But the experience also confronted me with important questions about the world: questions of access, exposure, and power. Fast forward to today: technology is evolving rapidly, access is still a major issue, and those early experiences have shaped how I show up as a data-driven investigator and journalist. I understand now more than ever how critical it is for every single one of us to be equipped to navigate these new systems with confidence, so that we can actually shape the kind of future we want to see.

In my research project, “Beneath the Surface,” I worked with hundreds of Chicago community volunteers to review and label evidence from various sources and collaboratively train an ML model that can detect gender-based violence at the hands of the police. Many of these volunteers had no prior experience working with AI or ML, so my goal first and foremost was to create space for learning and connection. I often began my training sessions by showing our volunteers how they have already been interacting with AI without knowing it. I then gave them space to explore the technology on their own. In a way, I’m fulfilling a vision I had as a child. I’m finally taking my computer out onto the block and sharing it with the community, so they can feel empowered to leverage technology when they need it most.

What are some of the major data challenges that you face in your day-to-day work, and how has machine learning helped you navigate those challenges?

I work with two kinds of data — structured and unstructured — both of which have their own unique set of challenges. When it comes to structured data, one of the major challenges for investigators is that we’re often limited by how the data was originally categorized. For example, if an officer responds poorly to an incident of gender-based violence, that complaint may be categorized by the CPD as an Operation and Personnel Violation. But it’s hard to know what that means on the surface. ML has helped us cut through those institutional categories and get to the heart of what Chicagoans are actually saying about police misconduct. It has helped us dive deeper into the truth and identify the real problem at hand, which in this case is a form of police neglect, while exploring better solutions, such as crisis response training.
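To make that idea concrete, here is a minimal sketch of what a classifier trained on complaint narratives, rather than on official categories, might look like. The file name, column names, and model choice are illustrative assumptions for this post, not the Beneath the Surface pipeline itself.

```python
# Hypothetical sketch: a baseline classifier over complaint narratives.
# "complaints.csv", the "narrative" and "label" columns, and the model
# choice are all illustrative placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("complaints.csv")  # volunteer-labeled complaint narratives

# Hold out a test split so accuracy reflects unseen complaints.
X_train, X_test, y_train, y_test = train_test_split(
    df["narrative"], df["label"], test_size=0.2, random_state=42
)

# TF-IDF features plus a linear classifier: a simple baseline that learns
# from what complainants actually wrote, not the official category code.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Even a simple model like this can surface complaints that official labels such as "Operation and Personnel Violation" obscure, which is the point Trina makes above.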

We also have this problem of selection bias in the research, where we perceive police misconduct solely based on the most egregious manifestations of it, such as murder or assault. But in reality, police misconduct can show up in many different ways. ML can help us parse through more diverse datasets and lived experiences, allowing us to challenge some of our pre-existing assumptions and listen to survivors better.

When it comes to unstructured data, like audio files, surveillance footage, and bodycam video, the biggest challenge is processing those streams and aligning them with structured records, such as CSV files of officer information. ML improves our efficiency and speed, so that we can generate useful insights without losing the participatory element of community-led research.

“When used responsibly, these tools create new mechanisms of power that enable researchers and communities to hold police forces accountable.”

Trina Reynolds-Tyler, Data Director, Invisible Institute

You began using AI in 2021 when you developed a machine learning model to uncover police misconduct. How has the continued evolution of AI, including Large Language Models (LLMs), enabled you to adapt your model and build new community-centered tools for different use cases?

In 2021, we built a traditional classifier: a focused ML model that ingested the underlying documents from police misconduct records. We then trained hundreds of volunteers to label the narratives within those records, generating the training data for the model. For many of the volunteers, reviewing what appeared to be the same data over and over again made it harder to pursue new information and draw meaningful connections. Labeling took a long time, and the outputs came back as raw numbers that volunteers struggled to interpret. Moreover, coordinating multiple labelers, prioritizing an abundance of data, and maintaining inter-rater reliability (i.e., the degree to which different labelers agree when assessing the same item) introduced new challenges.
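As a rough illustration of the inter-rater reliability problem Trina describes, the sketch below scores agreement between two hypothetical volunteer labelers with Cohen's kappa. The labels are invented for the example; the project's actual labeling scheme is not shown here.

```python
# Hypothetical sketch: measuring inter-rater reliability between two
# volunteer labelers with Cohen's kappa. Labels are invented:
# 1 = narrative flagged as gender-based violence, 0 = not flagged.
from sklearn.metrics import cohen_kappa_score

labeler_a = [1, 0, 1, 1, 0, 0, 1, 0]
labeler_b = [1, 0, 0, 1, 0, 1, 1, 0]

# Kappa corrects raw agreement for agreement expected by chance:
# 1.0 means perfect agreement, values near 0 mean chance-level.
kappa = cohen_kappa_score(labeler_a, labeler_b)
print(f"Cohen's kappa: {kappa:.2f}")
```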

Today, with the advancement of highly versatile LLMs, we are updating our model to process significantly more unstructured text and produce more comprehensible outputs. That will allow volunteers to pursue their own lines of inquiry instead of just endlessly labeling data. Ideally, by engaging with LLMs, volunteers will be able to understand and take part in more stages of the investigative process with minimal guidance, which will, in turn, allow us as investigative reporters to ask better questions and refine our process based on the feedback we receive.
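As a hedged sketch of what that kind of volunteer-led inquiry might look like, the snippet below puts a free-form question to a single complaint narrative through an LLM API. The choice of API (OpenAI's chat completions), the model name, the prompt, and the narrative are all placeholder assumptions, not the Invisible Institute's actual tooling.

```python
# Hypothetical sketch: a volunteer asks a free-form question about one
# complaint narrative via an LLM. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

narrative = "..."  # one complaint narrative pulled from the records
question = (
    "Does this narrative describe an officer dismissing a report of "
    "domestic violence?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Answer questions about the provided police "
                       "complaint narrative, citing only what the text says.",
        },
        {
            "role": "user",
            "content": f"Narrative:\n{narrative}\n\nQuestion: {question}",
        },
    ],
)
print(response.choices[0].message.content)
```

The design goal in this sketch mirrors the point above: the volunteer drives the question, and the model's job is constrained to what the narrative actually says.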

More broadly, the evolution of AI — and LLMs specifically — invites us to question the very model of investigative research. The tools we have today can make the process of investigation accessible to more people who don’t come from a technical background, allowing us to obtain new perspectives and information that we couldn’t access before. When used responsibly, these tools create new mechanisms of power that enable researchers and communities to hold police forces accountable.

What core values drive your unique vision for impact in an AI-driven future?

When it comes to AI, I believe in objectivity. In other words, AI exists, and it’s evolving whether we pay attention to it or not. So, it’s critical that we don’t allow our fears or apprehensions to stop us from engaging with AI altogether. We need to identify what scares us most and what our own personal red lines are, and use that understanding to shape how we show up in our day-to-day lives, with our elected officials, with our spending, and beyond.

Secondly, I believe in prioritizing people above all else. As we use AI to improve and iterate on our goals, especially during the investigative process, we need to make sure that we don’t lose people along the way. We still have a lot to learn from our teams and communities, and the only way to move forward is by including them in the process from beginning to end.

Which visionary leaders, philosophies, or movements give you hope for a more human-centered AI future?

Hope can be hard because AI comes with a lot of risks and challenges. But I’ve been especially moved by leaders like Dr. Joy Buolamwini, who unapologetically acknowledges hard realities in order to improve conditions, while working to keep communities informed. Through her research, she has shed light on issues like algorithmic discrimination in facial recognition technologies and other applications of AI in our daily lives. Her work reminds us to be critical of both how AI is being trained and who is making the decisions about its use. Ultimately, it can’t just be a few people making decisions for the rest of us. We need everyone to be aware and actively engaged in this AI future.

At the same time, I feel hopeful when I think about what human-centered, responsible AI could achieve. I’ve already heard stories of chatbots helping people get away from abusive partners and navigate difficult, scary conversations. But we also know that chatbots can be incredibly risky in high-stakes situations. My hope is that companies start involving interdisciplinary experts and communities early on in the design of these tools, while creating safe spaces for exploration and experimentation. A lot of my work at the Invisible Institute involves showing people that they still have agency and power despite how things might appear. If we can help people feel the same way about the future of AI, while teaching them how to navigate the tools safely and effectively, we can shape a future where everyone thrives.

What is your 7-word autobiography?

Philomath, curious, resourceful, magnetic, provocative, deliberate, earthborn.

Stay tuned for next month’s Humans of AI for Humanity blog. For more on AI for good, subscribe to Fast Forward’s AI for Humanity newsletter and keep an eye out for updates from the Patrick J. McGovern Foundation.