The New Search: How AI Overviews and Chatbots Are Changing How People Discover Nonprofits 

In 2026, digital discovery is shifting from web links and search rankings to AI-generated summaries and conversational assistants. For nonprofit leaders, understanding how donors, beneficiaries, and partners encounter organizations in this new landscape is crucial for mission visibility, funding, and trust.

Today, more searchers receive answers directly on the results page through AI Overviews, rather than clicking into individual websites. Donors and partners rely less on sifting through pages of search results and more on instant, AI-powered responses from assistants built on large language models (LLMs), such as ChatGPT, Google Gemini, and Perplexity.

What Are AI Overviews and LLMs? 

AI Overviews are generative-AI summaries placed above Google Search results. They draw from Google’s indexed web pages and deliver a compact answer with relevant links. (1) 

Chatbots and AI assistants built on LLMs often rely on what’s called Retrieval-Augmented Generation (RAG). This approach retrieves live web data from websites, articles, research reports, and even community platforms like Reddit or Quora, and feeds it to the model to craft natural, human-like responses. (2)
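For readers who want a feel for the mechanics, here is a deliberately small Python sketch of the retrieve-then-generate pattern. It is illustrative only: the toy “retriever” scores pages by simple word overlap instead of a real search index, the nonprofit (“HarborView Shelter”) and its page snippets are invented, and the final prompt is printed rather than sent to an actual LLM.

    # Minimal RAG-style sketch: retrieve relevant snippets, then build an augmented prompt.
    # Illustrative only; real systems use web indexes and semantic search, not word overlap.

    pages = {
        "about": "HarborView Shelter provides emergency housing and job training in Springfield.",
        "impact": "In 2024 the shelter housed 412 families and placed 168 adults in full-time jobs.",
        "blog": "Our annual gala raised funds for the new family wing.",
    }

    question = "What does HarborView Shelter do and what impact has it had?"

    def overlap_score(text: str, query: str) -> int:
        """Count how many query words appear in the text (a stand-in for a real retriever)."""
        text_words = set(text.lower().split())
        return sum(1 for word in query.lower().split() if word in text_words)

    # Retrieve: keep the two most relevant snippets.
    ranked = sorted(pages.values(), key=lambda text: overlap_score(text, question), reverse=True)
    context = "\n".join(ranked[:2])

    # Generate: in a real system, this augmented prompt would be sent to an LLM.
    prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)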

These technologies now surface nonprofits based not just on keywords and backlinks, but on transparency, quality of storytelling, and clarity of mission.

Put simply: 

  • AI Overviews summarize Google-indexed sources.
  • LLMs/chatbots blend live web data and learned context to generate natural language answers. 

Three Reasons the Age of AI Search Matters

  1. Visibility Without Clicks: AI Overviews and LLMs summarize information without requiring a click, so your website may capture fewer direct visits. Visibility now depends on the strength of your story and the clarity of your impact, not just keywords and links. 
  2. Trust Begins Before the Visit: Searchers form judgments based on what AI tools say about you before they ever reach your homepage. Algorithms highlight organizations that show authentic outcomes, clear missions, and active engagement.  
  3. Consistency Builds Credibility: Inconsistent or outdated public messaging can result in missed opportunities for discovery in AI-generated overviews. Clear messaging, transparent impact data, and mission-driven storytelling allow nonprofits to shape how they appear in AI-powered search results.

Where Does the Opportunity Lie?

This shift toward AI-driven search offers nonprofit organizations a chance to shape how their missions are understood and trusted. The opportunity lies in clear, consistent messaging, transparent impact data, and authentic, mission-driven storytelling.

When you back up your claims and describe them in everyday language, AI is more likely to feature your work. Early adopters who refine their content and maintain consistent public profiles will earn trust and visibility. By leading with clarity and transparency, nonprofits can define how their stories show up in this new landscape of AI-powered discovery. 

Practical Steps to Prepare 

Run HarborWay Foundations’ four-step visibility audit to understand your organization’s digital footprint. 

  1. Search five to 10 questions a typical donor, volunteer, or service recipient might ask about your mission. 
  2. Check whether your organization appears in AI Overviews or LLM/chatbot answers, and document what you do or do not find. Pay attention to which competitors or market dominators surface in the answers and record the source content cited. (One way to script part of this check is sketched after this list.)
  3. Review your findings and identify what needs improvement. If you already have relevant pages, assess whether your content clarity or impact framing needs refinement.
  4. Create a 6–12-month plan to improve how your mission appears across these emerging discovery channels.  
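If your team wants to automate part of step two, here is a minimal Python sketch of what that could look like. It is an assumption-laden illustration rather than a turnkey tool: it uses the openai package’s chat API (which requires an API key in your environment), and the model name, questions, and organization name are placeholders to swap for your own. A single API response also only approximates what a donor sees in a consumer chatbot or in Google’s AI Overviews.

    # Rough sketch: ask donor-style questions and note whether your nonprofit is mentioned.
    # Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    ORG_NAME = "Your Nonprofit Name"  # placeholder: the name you want to check for
    QUESTIONS = [                     # placeholders: questions a donor or volunteer might ask
        "Which nonprofits provide after-school tutoring in my city?",
        "Where can I donate to support local food security programs?",
    ]

    for question in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model; use whichever chat model you have access to
            messages=[{"role": "user", "content": question}],
        )
        answer = response.choices[0].message.content
        mentioned = ORG_NAME.lower() in answer.lower()
        print(f"Q: {question}")
        print(f"   {ORG_NAME} mentioned: {mentioned}")

Re-running the same questions every month or quarter and saving the answers gives you a simple log of how your visibility changes over time.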

If you want to better understand how your organization stacks up, HarborWay Foundations can help. Together, we’ll build a strategy to stay visible in the age of AI discovery. 

Leading with Equity: How Organizations Are Making AI Inclusive and What You Can Learn 

In Part 1 of this series, we explored AI’s cultural blind spots and how tools can miss the mark when they’re built without the full spectrum of human experience in mind. In Part 2, we dug into the role of inclusive leadership and why human judgment is essential for steering AI toward equitable outcomes. 


Together, these conversations point to a simple truth: AI reflects the choices, values, and perspectives of the people who build it. And when those perspectives aren’t diverse, the technology isn’t either. Left unchecked, AI can quietly reinforce inequities, undermine trust, and work against the very missions our organizations are trying to advance.

But here’s the good news: many organizations are already showing what’s possible when equity in AI comes first. In this post, we highlight real-world examples of organizations leading with equity and share lessons mission-driven leaders can apply right now. 

Why Equity in AI Matters for Leaders 

AI systems that overlook diversity or amplify bias can damage reputations, erode audience trust, and reinforce systemic inequities. For mission-driven leaders, this is both a technical challenge and a strategic one. 

Equity in AI aligns with organizational purpose, ensuring that the tools we use serve all communities fairly, respect cultural contexts, and advance positive social outcomes. By prioritizing inclusive AI, leaders have an opportunity to shape technology that reflects their values and strengthens trust with the people they serve. 

Real-World Examples of Equity in AI 

Latimer.ai: Inclusive Training Data 

Latimer.ai is a standout example of integrating equity from the ground up. Their large language model (LLM) is trained using input from underrepresented communities, including folk tales and oral histories from around the world, helping the model represent diverse perspectives.

Through partnerships with universities and community organizations, Latimer.ai incorporates cultural nuance and lived experience into its datasets, showing that inclusive training data is foundational for equitable AI outputs. 

Key takeaway: AI that reflects a wide spectrum of experiences produces more balanced results that better represent a diverse audience. 

AI Now Institute: Research and Policy Advocacy 

The AI Now Institute examines the social implications of AI, with a focus on bias, fairness, and equity. Through rigorous research, policy recommendations, and frameworks, they guide organizations in adopting responsible AI practices. 

Their work underscores that responsible AI is a social challenge, and organizations need robust, research-driven insights to make ethical decisions that genuinely advance fairness. 

Key takeaway: Evidence-based research is crucial for understanding where AI falls short and shaping policies that promote equity. 

Inclusive AI Foundation: Governance & Best Practices 

The Inclusive AI Foundation is a nonprofit organization that works to embed ethical, inclusive practices across AI development. Their approach emphasizes structured governance, evaluation frameworks, and community engagement to ensure AI systems serve all populations fairly. 

They offer workshops, consulting, assessments, and road mapping for leaders looking to implement inclusive AI in their own organizations.  

Key takeaway: Governance and stakeholder engagement are essential for embedding equity into AI design and deployment. 

Five Key Lessons for Mission-Driven Leaders 

  1. Audit your AI tools and outputs: Examine datasets and model outputs for underrepresentation and bias. (A minimal example of one such check follows this list.)
  2. Demand cultural filters or adjustable framing: Ensure AI tools allow context-aware outputs tailored to diverse audiences. 
  3. Prioritize values-aware AI: Understand the priorities, values, and constraints of the communities you serve. 
  4. Partner with underrepresented communities: Co-create datasets, evaluation metrics, or prompts to ensure authentic representation. 
  5. Measure, iterate, communicate: Track outputs for bias and inclusivity; make equity part of your organizational standard. 
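As one concrete illustration of the first lesson, the short Python sketch below tallies how a tagged content set is distributed across audience groups and flags any group that falls below a chosen share. The records, group labels, and 10 percent threshold are all invented for the example; a real audit would use your own data and definitions and would examine model outputs as well as training inputs.

    # Tiny representation check: how evenly is a tagged content set spread across groups?
    # Illustrative only; the records, group labels, and 10% threshold are invented.
    from collections import Counter

    records = [
        {"text": "Story from an urban donor", "group": "urban"},
        {"text": "Story from a rural volunteer", "group": "rural"},
        {"text": "Story from an urban partner", "group": "urban"},
        {"text": "Story from an urban beneficiary", "group": "urban"},
    ]

    counts = Counter(record["group"] for record in records)
    total = sum(counts.values())

    for group, count in counts.items():
        share = count / total
        flag = "  <-- underrepresented" if share < 0.10 else ""
        print(f"{group}: {count} records ({share:.0%}){flag}")

    # Groups with zero records never appear in the counts, so check the ones you expected.
    expected_groups = {"urban", "rural", "tribal", "suburban"}
    for group in expected_groups - set(counts):
        print(f"{group}: 0 records (0%)  <-- missing entirely")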

These practices help leaders translate abstract principles into concrete actions that make AI more inclusive. 

Closing thought 

AI’s cultural blind spots are real, but equity is achievable. Organizations like Latimer.ai, AI Now Institute, and Inclusive AI Foundation demonstrate how inclusive AI is possible when intentionality, research, governance, and community engagement come together. 

Mission-driven leaders can learn from these examples, apply these lessons, and take proactive steps to embed equity into AI initiatives. By doing so, you not only enhance your organization’s impact but also build trust, credibility, and lasting relationships with the communities you serve. 

AI has the power to amplify good, but only if we lead with equity. Let’s make inclusive AI the standard, not the exception. 

Inclusive AI Leadership: Key Questions Leaders Should Ask About Equity and Bias 

In our last post, we explored AI’s cultural bias: the way models tend to mirror Western, educated, urban populations while leaving out many others. That raised the next question: how can leaders practice inclusive AI leadership, ensuring the tools their teams use are equitable, culturally aware, and free from bias? 

The answer is not to learn how to code. Leaders do not need to become technologists. But they do need to hold their tech teams accountable. The most powerful way to start is by asking sharper questions. 

Why inclusive AI leadership matters 

AI is already shaping how we teach, treat patients, respond to climate change, and support communities. The tools you choose now will shape the path your work takes for years to come. 

But this goes beyond simply checking a compliance box. For truly inclusive AI, equity should be embedded into strategy, design, and governance. By making equity and inclusion a core consideration, leaders ensure AI tools reflect the diversity of the communities they serve. 

Failing to do so risks defaulting to solutions designed for the most visible populations, leaving gaps in access, fairness, and trust. Inclusive leadership ensures that technology is accountable, representative, and aligned with organizational values. 

Six questions every leader should ask about AI equity and bias 

  1. Whose voices trained this model? Does the data reflect a wide range of people across regions, incomes, industries, and experiences? Or just English-speaking, affluent ones?
  2. How does the model handle cultural variation? Can it shift how it responds depending on context, from rural towns to major metropolitan areas?
  3. Is there a default worldview built in? If so, has it been acknowledged and balanced? A system that assumes everyone has access to the same education, technology, or financial resources is not neutral.
  4. What languages and dialects are supported? Does the tool capture nuance, from regional dialects and multilingual phrasing to hybrid forms like Spanglish or local vernacular?
  5. How is fairness measured? What audits are you running, and do they check for equity across diverse populations, not just the majority? (One simple audit of this kind is sketched after this list.)
  6. Who tested the system before rollout? Did the process include input from a range of users and communities, or mostly technical experts and insiders?
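To make question five concrete, the brief Python sketch below computes one common audit statistic: the gap in favorable-outcome rates between groups, sometimes called a demographic parity gap. The outcome data is invented for the example, and a single metric is a conversation starter with your tech team rather than a verdict on fairness.

    # Minimal fairness check: compare favorable-outcome rates across groups.
    # Illustrative only; the outcome data below is invented for the example.

    outcomes_by_group = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = favorable decision, 0 = unfavorable
        "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
    }

    rates = {group: sum(vals) / len(vals) for group, vals in outcomes_by_group.items()}
    for group, rate in rates.items():
        print(f"{group}: favorable rate {rate:.0%}")

    gap = max(rates.values()) - min(rates.values())
    print(f"Demographic parity gap: {gap:.0%}")
    # A large gap is a signal to investigate, not proof of bias on its own.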

Real-world examples of AI bias across industries 

  • Education: A tool tuned to suburban schools might collapse in a rural district with patchy internet. 
  • Healthcare: A digital assistant may provide advice that fits city hospitals but ignores the long drives and limited options in rural America. 
  • Climate: AI that misses Indigenous ecological knowledge will overlook proven practices, from controlled burns to water management. 

The leadership opportunity: integrating equity into mission 

Across industries, conversations about responsible and equitable AI are gaining traction. The next step is turning those principles into everyday leadership.  

This is about reframing AI equity as part of the mission, not an optional add-on. Asking these questions puts inclusivity on the same list as equity in funding or representation in leadership. 

The upside is trust. People notice when tools reflect their lives. Leaders who demand culturally inclusive AI will stand apart as authentic, responsive, and credible. 

Closing thoughts: building trust through inclusive AI 

AI’s cultural bias is not permanent. Leaders who ask sharper questions now will shape systems that reflect all of humanity, not just the loudest slice. 

In the next part of this series, we will highlight organizations already pushing for equity in AI and what others can learn from their example. 

Cultural Bias in AI: Why Leaders Need to Ask Which Humans It Reflects

When people say AI “thinks like humans,” it sounds reassuring. If these systems are going to help us in classrooms, clinics, and community organizations, then “thinking human” feels like a good start.

But here is the real question: which humans?

A Harvard study (Henrich et al., 2023) revealed cultural bias in AI, showing that large language models (LLMs) mostly mirror the mindset of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. That is the shorthand researchers use for the populations most often studied in psychology and social science. In practice, it means AI often sounds like it grew up in Boston or Berlin, not Bogotá or Bamako.

And sometimes, the models lean even further into this worldview than the people themselves. They can be more WEIRD than WEIRD.

For mission-driven leaders, this blind spot matters. If your work depends on AI for insights, outreach, or strategy, the technology you’re using may be leaving out entire communities.

Cultural bias in AI at home and abroad

Globally, the mismatch is obvious. Populations in Africa and South Asia, along with Indigenous communities, align very little with how AI “thinks.”

But this is not only a global issue. In the United States, AI’s blind spots show up in familiar ways:

  • A curriculum tool built from suburban school data might not resonate in rural Oklahoma or majority-minority districts in Houston.
  • A healthcare assistant trained on urban hospital systems may be out of touch with the realities of rural clinics or community health workers.
  • A workforce app that assumes everyone has credit cards, stable internet, and four-year degrees will miss low-income families who live in a different reality.

AI reflects the voices that dominate online. That means it tilts toward urban, affluent, English-speaking communities and misses those less represented in digital spaces. 

And that’s not just hearsay; multiple studies have documented these gaps.

For example, Stanford researchers document how major LLMs are trained predominantly on English-language data, leaving many languages and cultural contexts underrepresented (Stanford HAI, 2025).

Another analysis found disparities in the accuracy of image geolocation estimation across different regions, with a tendency for AI tools to predict higher-income locations more often (Salgado Uribe, Bosch, & Chenal, 2024).

With mounting evidence of these biases, it’s important to assess the impact on our own AI-powered initiatives.  

Why inclusive AI matters for leaders

Mission-driven work depends on connecting with people where they are. And if your audience doesn’t align with the demographics that LLMs are trained on, you run the risk of undermining your organization’s impact, reputation, and funding.

Here’s how that might look:

  • Excluding key voices: Campaigns unintentionally overlook rural, multilingual, or underrepresented communities.
  • Missing the mark: Messaging comes across as out of touch, weakening trust with your target audience.
  • Missed opportunities: Important insights get lost, leading to lower fundraising, adoption, and customer loyalty.

This is not just a technology problem; it’s a leadership challenge, and one that can improve with the right changes.

What more inclusive AI could look like

Right now, AI is a sponge. It soaks up what is most available online, which skews the results. A more inclusive approach would look different:

  • Diverse data: Training should include stories, conversations, and materials from underrepresented communities, not just Silicon Valley blogs and English-language media, so the model reflects a wider range of lived experiences.
  • Cultural filters: Imagine an “equity mode” setting, where leaders can shift how a model frames ideas depending on the audience. While some tools offer surface-level tone adjustments, they are not yet sophisticated enough to capture cultural norms, values, and context-specific subtleties.
  • Values awareness: AI needs to understand not just what people say, but why they say it. That could be loyalty to family, faith traditions, or the need to stretch every dollar. This understanding enables more authentic, relevant, and responsible engagement.

The opportunity for leaders

Cultural bias is a risk of using AI tools, but it also offers a chance to lead.

  • Spotting biases early helps leaders avoid costly missteps and apply thoughtful scrutiny when using LLMs.
  • Audiences notice when companies go beyond AI defaults. Tailored messaging makes communities feel understood and sets your brand apart.
  • Leading with inclusivity in AI as part of equity work raises the standard for trust across industries and communities.

Closing thought

Cultural bias in AI is real, but it is not unavoidable. The leaders who see it and demand more will be the ones shaping technology that bridges communities instead of excluding them.

Spotting these blind spots is the first step. In the next post, we’ll share six practical questions you can ask your tech teams and partners to hold them accountable, ensuring AI truly reflects the communities you serve.