When people say AI “thinks like humans,” it sounds reassuring. If these systems are going to help us in classrooms, clinics, and community organizations, then “thinking like a human” feels like a good start.
But here is the real question: which humans?
A Harvard study (Henrich et al., 2023) revealed cultural bias in AI, showing that large language models (LLMs) mostly mirror the mindset of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. That is the shorthand researchers use for the populations most often studied in psychology and social science. In practice, it means AI often sounds like it grew up in Boston or Berlin, not Bogotá or Bamako.
And sometimes, the models lean even further into this worldview than the people themselves. They can be more WEIRD than WEIRD.
For mission-driven leaders, this blind spot matters. If your work depends on AI for insights, outreach, or strategy, the technology you’re using may be leaving out entire communities.
Cultural bias in AI at home and abroad
Globally, the mismatch is obvious. People across Africa and South Asia, along with Indigenous communities, align very little with how AI “thinks.”
But this is not only a global issue. In the United States, AI’s blind spots show up in familiar ways:
- A curriculum tool built from suburban school data might not resonate in rural Oklahoma or majority-minority districts in Houston.
- A healthcare assistant trained on urban hospital systems may be out of touch with the realities of rural clinics or community health workers.
- A workforce app that assumes everyone has credit cards, stable internet, and four-year degrees will miss low-income families who live in a different reality.
AI reflects the voices that dominate online. That means it tilts toward urban, affluent, English-speaking communities and misses those less represented in digital spaces.
And that’s not just anecdote; multiple studies have documented these gaps.
For example, Stanford researchers document how major LLMs are trained predominantly on English-language data, leaving many languages and cultural contexts underrepresented (Stanford HAI, 2025).
Another analysis found regional disparities in the accuracy of AI image geolocation, with models tending to guess higher-income locations (Salgado Uribe, Bosch, & Chenal, 2024).
With mounting evidence of these biases, it’s important to assess the impact on our own AI-powered initiatives.
Why inclusive AI matters for leaders
Mission-driven work depends on meeting people where they are. And if your audience doesn’t match the populations whose voices dominate LLM training data, you run the risk of undermining your company’s impact, reputation, and funding.
Here’s how that might look:
- Excluding key voices: Campaigns unintentionally overlook rural, multilingual, or underrepresented communities.
- Missing the mark: Messaging comes across as out of touch, weakening trust with your target audience.
- Missed opportunities: Important insights get lost, leading to weaker fundraising, lower adoption, and reduced customer loyalty.
This is not just a technology problem; it’s a leadership challenge, and one that can improve with the right changes.
What more inclusive AI could look like
Right now, AI is a sponge. It soaks up what is most available online, which skews the results. A more inclusive approach would look different:
- Diverse data: Training should include stories, conversations, and materials from underrepresented communities, not just Silicon Valley blogs and English-language media, so the model reflects a wider range of lived experiences.
- Cultural filters: Imagine an “equity mode” setting that lets leaders shift how a model frames ideas depending on the audience. Some tools offer surface-level tone adjustments, but they are not yet sophisticated enough to capture cultural norms, values, and context-specific subtleties. The sketch after this list shows one rough way a tech team could approximate the idea today.
- Values awareness: AI needs to understand not just what people say, but why they say it. That could be loyalty to family, faith traditions, or the need to stretch every dollar. This understanding enables more authentic, relevant, and responsible engagement.
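No true “equity mode” switch exists yet, but a tech team can approximate the idea by framing the same request for different audiences and reviewing the outputs side by side. The sketch below is a minimal illustration only, assuming the OpenAI Python client and an API key; the model name, audience profiles, and wording are hypothetical placeholders, not a vetted framework, and the same pattern works with any LLM API.

```python
# Minimal sketch: approximate an "equity mode" by framing the same request
# for different audiences and comparing the outputs side by side.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; any LLM API works the same way.
from openai import OpenAI

client = OpenAI()

# Hypothetical audience profiles -- replace with the communities you serve.
AUDIENCE_CONTEXTS = {
    "default": "You are a helpful assistant.",
    "rural_clinic": (
        "You are writing for community health workers at rural clinics with "
        "limited broadband, limited staff time, and patients who may lack "
        "insurance or reliable transportation."
    ),
    "multilingual_family": (
        "You are writing for low-income, multilingual families; do not assume "
        "credit cards, stable internet access, or four-year degrees."
    ),
}

REQUEST = "Draft a short outreach message inviting families to a free health screening."

for name, context in AUDIENCE_CONTEXTS.items():
    # Send the same request with a different audience framing each time.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": context},
            {"role": "user", "content": REQUEST},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

The prompt layer only shifts framing; it does not fix the underlying training data. Reviewing drafts like these with people from the communities they describe, rather than trusting the default output, is where the real equity work happens.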
The opportunity for leaders
Cultural bias is a real risk in AI tools, but it is also a chance to lead.
- Spotting biases early helps leaders avoid costly missteps and apply thoughtful scrutiny when using LLMs.
- Audiences notice when companies go beyond AI defaults. Tailored messaging makes communities feel understood and sets your brand apart.
- Treating inclusive AI as part of your equity work raises the standard for trust across industries and communities.
Closing thought
Cultural bias in AI is real, but it is not unavoidable. The leaders who see it and demand more will be the ones shaping technology that bridges communities instead of excluding them.
Spotting these blind spots is the first step. In the next post, we’ll share six practical questions you can ask your tech teams and partners to hold them accountable, ensuring AI truly reflects the communities you serve.