Part 3: AI’s Impact on the Truth

December 10, 2025 | By: Kristine Metter, CAE

Just as quickly as generative AI has reshaped how content is produced, it’s casting doubt on everything we see, read, and hear. What can your association do to protect its role as a beacon of truth?

This article is the final installment in a three-part series on ASAE ForesightWorks’ new driver of change, Truth Under Pressure. Read part one and part two.

Is that keynote speaker video real or fake? Does that article really uncover new research findings? Is that podcast really sharing a breakthrough solution? It is increasingly difficult to confirm the credibility of information and determine its truthfulness.

In the last three years, we’ve seen an explosion in the amount of low-quality and factually incorrect content produced by generative AI. We can categorize this content into three broad buckets:

  • AI hallucinations: false, incorrect, or nonsensical output presented as fact
  • Intentional misinformation and disinformation campaigns
  • Inaccurate or unoriginal content generated by novice or uninformed users

This is just one aspect of Truth Under Pressure, a new ASAE ForesightWorks driver of change. This three-article series introduces the driver's key factors: Part 1 outlined possible future scenarios, Part 2 explored potential impacts on associations, and Part 3 addresses AI's impact on the truth.

While a patchwork of AI-related regulations and voluntary standards aims to address transparency, data privacy, bias, accountability, and human oversight, we continue to see troubling developments, such as a push toward more extreme, emotionally charged material and unsubstantiated claims presented as fact. This proliferation of substandard content could make it harder for your association's high-quality content to break through the noise.

One step you can take is to become a trusted information validator and educate your constituents and the public to be better information consumers. Build your capacity to advance substantiated information through emerging digital content verification tools. Develop or update your data governance and AI usage policies. Train staff and volunteers on digital literacy and how to spot bias (conscious or unconscious). Teach all your constituents how to verify sources and think critically about the content they encounter.

And finally, consider how to protect your content. Weak practices can let misinformation creep into your publications, events, or online communities. Regularly review your content to make sure it is accurate and current. Revisit your content access strategy, considering possible shifts in what sits behind the member paywall and what is publicly available. You may need to balance offering exclusive member benefits with publicly countering misinformation.

As we navigate the complex and dynamic world of AI-generated content, we need to continue to monitor developments, keep educating ourselves about new opportunities and risks, create safe spaces for civil discourse, and protect our positions as beacons of truth.

For additional insights on this topic, a list of suggested actions you can take today, and a set of questions for reflection, you can purchase the driver of change action brief here.


Kristine Metter, CAE, is president of Crystal Lake Partners.