How Do Associations Actually Teach AI Skills?

January 23, 2026 By: Gleb Tsipursky

Association leaders need practical ways to build AI fluency among wary, time-strapped staff and volunteers. Sprint-style workshops could be the answer.

With a recent Pew Research report showing that 50 percent of Americans are more concerned than excited about AI, and only 10 percent more excited than concerned, association leaders who want employees and volunteers to learn AI skills have a tough slog ahead. Here’s an idea: If you want them to adopt generative AI, put them in a room, give them real data, and require working demos by the end of the sprint, as demonstrated by the American Society for Nondestructive Testing.

The strongest case for this format comes from how adults actually learn and adopt new tools. A widely cited active learning meta-analysis found higher performance and fewer failures when people solve problems directly rather than listen to lectures. A rigorous project-based learning review ties artifact creation to gains in achievement. A 2025 working paper on AI in customer support measured a 14 to 15 percent productivity lift, with larger gains for novices, evidence that guided tools and exemplars accelerate capability on the job. Organizational change also follows visibility and peer proof, which is the core of the diffusion literature synthesized in a classic diffusion analysis. Put people together, give them governed data, and require five-minute demos before executive judges. That is the fastest way to move from interest to working software, and from working software to credible pilots.

Time pressure and coaching focus attention. A recent hackathon systematic review maps how short, intense builds drive teamwork, problem solving, and persistence when organizers set clear goals and provide structure. A complementary educational evaluation reaches similar conclusions and highlights the value of facilitators who unblock teams during the sprint. The result is not only faster skill acquisition but also a clearer path to standardization, governance, and scale.

The American Society for Nondestructive Testing turned the research into a program that showed results in public. Its 2025 conference positioned an AI Agent Battle as a marquee experience on the official agenda, with a dedicated session description spelling out a two-day, build-and-compete format tied to practical NDT workflows. The broader events hub framed the week as hands-on and technology forward. ASNT primed the field before the showdown through a public webinar that introduced agent patterns, build steps, and governance expectations, which lowered activation energy for first-time builders.

The structure mattered. Attendees did not sit for long lectures. They built agents tied to real inspection tasks, iterated in public, and showed results on a deadline. That format aligns with strong evidence that active learning outperforms lecture-first instruction, including a well-cited meta-analysis that found higher performance and lower failure rates when learners engage directly with problems. Reviews of project-based learning show similar gains, as documented in a recent higher-education review and a science-education meta-analysis. Research on hackathon-style builds also points to improved teamwork, problem solving, and persistence when the event is time-bound and well coached, as summarized in a 2024 systematic review and a complementary educational evaluation.

As Barry Schieferstein, the chief operating officer of ASNT, noted after the event:

“I was struck by how the AI Agent Challenge transformed what a conference experience can be. Instead of talking about innovation, our members were building it, creating real AI agents that connect directly to nondestructive testing practice. … We proved that hands-on, coached learning not only transfers skills faster but also creates deeper engagement. … It showed that associations can be at the forefront of applied technology, not just in what we teach but in how we learn together.”

So how should business and government leaders adapt this model? Start by promising what matters to executives: working demos on a clock that address real workflows. Publish an internal schedule that mirrors ASNT’s public agenda, including the sprint start, the demo window, and the judging criteria. Staff expert facilitators to roam as unblockers rather than lecturers. Offer a short pre-brief a week before the build that mirrors ASNT’s preparatory webinar, where you introduce three agent patterns your business needs and review data guardrails. Provide a sandbox that mirrors production constraints and preload governed, redacted, or synthetic datasets so teams can build safely without waiting on approvals.

Treat the workshop like a product launch, not a class. Give it a name, publish rules, and state deliverables up front. Require three artifacts from every team by the final bell: a short problem statement, a must-have capability checklist, and a data access plan that names sources and permissions. Record every demo and publish them on an internal portal. Tag entries by workflow and data domain, and include a lightweight request form for productionization. Commit to a two-week decision window for the strongest prototypes to move into controlled pilots. As teams progress, connect their outcomes to the enterprise business case with the same clarity that the generative AI productivity working paper uses to report throughput gains.

Close the loop before momentum fades. Ask each team to submit a one-page risk register that captures data dependencies, security exposures, and monitoring needs. Stand up a lightweight review that approves top prototypes for pilots. Begin the next quarter’s build with quick updates from prior winners showing movement on cycle time, defect rates, or satisfaction. Over time, you will build a library of approved, reusable agents and a standing competition that sources the next candidates. The effect is cumulative.

A well-run build workshop is not theater. It is an evidence-backed way to translate generative AI from headlines into operating leverage. The ASNT AI Agent Battle shows how to stage the format at scale. Leaders who adopt this model will leave not with slide decks but with demos, data plans, and pilots they can fund immediately. That is how education becomes deployment and deployment becomes sustained competitiveness.

Gleb Tsipursky

Dr. Gleb Tsipursky, based in Columbus, Ohio, is CEO of the AI adoption consultancy Disaster Avoidance Experts and author of The Psychology of AI Adoption at Work.