AI, the Church, and Why Ministry Leaders Should Pay Attention

Written by Christopher Wesley | Jun 16, 2025 1:46:24 PM

Last week, the USCCB released a letter to Congress (read it here) offering ethical principles and policy recommendations concerning artificial intelligence. I appreciated the letter—not just for its substance, but for what it represents.

We’re entering an age where the Church cannot afford to simply react to technology. We’re called to engage it, shape it, and guide it through the lens of our faith.

That’s why I’ve become more curious and intentional in exploring AI. Every week, I set aside time to read, tinker, and reflect on what these tools might mean for ministry. My journey didn’t begin when I went full-time with this work—it started a few years ago when I first heard about ChatGPT.

Since then, I’ve come to believe AI is a tool that can alleviate many of the pressures ministry leaders face and enhance discipleship and evangelization efforts. But—and this is key—only if we are intentional. If we’re passive, we’ll fall behind. If we’re proactive, we can shape how AI serves the Church’s mission.

Start with the Church’s Wisdom

If you haven’t yet read the USCCB’s recent letter, that’s a great place to begin. It lays out thoughtful principles around human dignity, the common good, environmental impact, and more.

While artificial intelligence may be new, the Church’s relationship with emerging technology is not. Across history, Catholic teaching has consistently encouraged us not to fear innovation, but to approach it with discernment and fidelity to the Gospel.

AI isn’t a threat to be feared—it’s a reality to be understood and stewarded.

Learn from the Pioneers

AI has been evolving rapidly. In 2011, Ken Jennings famously lost to IBM’s Watson on Jeopardy!—a moment that hinted at what was coming. Since then, people like Allie K. Miller and companies like IBM have taken the lead in shaping ethical and innovative AI use.

But technical progress alone isn’t enough—we need moral and cultural insight, too.

That’s why we need to be listening to pioneers like:

  • Dr. Joy Buolamwini, founder of the Algorithmic Justice League, who brings vital perspective on AI bias, representation, and justice. Her research has exposed how facial recognition and generative models often fail to recognize or represent women and people of color. She challenges us to ask not just what AI can do, but for whom it works—and who gets left out.

  • Alessandro DiSanto, co-founder of Hallow, whose team is exploring how AI can support spiritual life through Catholic prayer and formation.

  • Matthew Harvey Sanders, CEO of Longbeard, who’s hosting conversations around tech innovation in Catholic spaces and asking what digital discipleship can look like in the future.

They don’t have all the answers. But their courage, clarity, and commitment help us ask better questions—and that’s where real discernment begins.

Create Safe Space to Explore AI

When I first started using AI, I hesitated to share it with others. I didn't feel I was doing anything wrong, but I feared being seen as lazy or detached. Through conversations with others, I discovered many were already using it; it was just happening quietly.

With that community, AI exploration became part of my weekly rhythm: building custom GPTs, testing workflows, and discovering best practices. I wouldn't have grown in any of this without accountability and encouragement from others.

That’s why I believe every parish, diocese, and ministry organization needs a transparent “sandbox culture”—a space where people have permission to try, test, and reflect. Not everything will work. But what we learn along the way will be essential to the Church’s future witness.

Pause and Question: Bias, IP, and Environment

Even as we lean into AI, we must take time to confront the serious concerns surrounding it.

1. Bias and Lack of Cultural Representation

AI tools often default to white faces, Western norms, and male-coded outputs—unless told otherwise. These aren’t just design flaws—they reflect deeper systemic bias in the data and decision-making.

Dr. Joy Buolamwini has shown that facial recognition and generative AI routinely fail to recognize darker-skinned and female faces. As she puts it, bias in AI is not a glitch—it’s a mirror.

2. Respect for Intellectual Property

Many generative AI tools draw on large swaths of copyrighted content—books, images, music—often without attribution. Just because a tool lets us generate something quickly doesn’t mean it’s ours to use freely.

As ministry leaders, we must model ethical content use and teach our teams to value creative labor. Respect for intellectual property is a form of justice.

3. Environmental Impact

AI may seem invisible, but it is resource-hungry. According to MIT News, running large models requires massive amounts of energy and water. A single AI session can consume as much water as a standard dishwasher cycle, and training these systems can generate carbon emissions on par with international flights.

As we lean into digital tools, we also have to ask:

  • Where is the energy coming from?

  • What are the real-world environmental costs?

  • Are we practicing stewardship not just of time, but of creation?

Don’t Wait to Join the Conversation

Artificial intelligence is not a passing trend. It’s already shaping how we work, connect, and communicate. The Church can’t afford to sit this one out—or fall behind like we did with social media and mobile tech.

At Marathon Youth Ministry, we’re actively exploring what it means to adopt an AI-first mindset centered on Christ. We’re talking about it with peers, partners, and even our families—not to “get ahead,” but to guide others with clarity, ethics, and hope.

Let’s Talk About It

I’d love to hear your thoughts:

  • Have you started using AI in your ministry?

  • What excites you—or gives you pause?

  • What kind of guidance would help you move forward?

Drop a comment. Let’s explore this together.