Five Key Legal Issues to Consider When It Comes to Generative AI

April 27, 2023 By: Jeffrey S. Tenenbaum

As associations consider implementing AI technology, they must keep in mind several legal issues that could affect data privacy, intellectual property, insurance, discrimination, and tort liability when it comes to their members, other volunteers, and staff.

As artificial intelligence tools, such as ChatGPT, continue to evolve and become more commonplace, many trade and professional associations are turning to AI technology to enhance their operations and decision-making processes and benefit their members. However, as with any emerging technology, the use of AI by associations raises several important legal issues that must be carefully considered and worked through. Here are five of them:

Data privacy. One of the primary legal issues associated with the use of AI by associations is data privacy. AI systems rely on vast amounts of data to train and improve their algorithms, and associations must ensure that the data they collect is used in accordance with applicable federal, state, and international privacy laws and regulations. Associations must be transparent with their members about how their data will be collected, used, and protected, and must obtain the necessary member consents to use and share sensitive data. Remember that data, such as confidential membership information, that is input into an AI system will no longer remain confidential and protected and will be subject to the AI system’s most current terms of use/service. As such, associations should not allow their staff, volunteer leaders, or other agents to input into an AI system any personal data, data constituting a trade secret, confidential or privileged data, or data that may not otherwise be disclosed to third parties.

Intellectual property. Intellectual property is a key legal issue that associations must consider when using AI. AI systems can generate new works of authorship, such as software programs, artistic works, and articles and white papers. That means associations must ensure that they have the necessary rights and licenses to use and distribute these works, and must be transparent about who, or what, created such works.
Take steps to ensure that AI-generated content is not, for instance, registered with the Copyright Office as the association’s own unless it has been sufficiently modified to become a product of human creation and an original work of authorship of the association. Associations also must be mindful of any third-party intellectual property rights that may be implicated by their use of AI, such as copyrights or patents owned by AI vendors, developers, or others, and ensure that they do not infringe any third-party copyright, patent, or trademark rights. Finally, as stated previously, do not permit confidential or otherwise-protected content to be input into an AI system.

Discrimination. Another legal issue to consider is discrimination. AI systems can inadvertently perpetuate bias and discrimination, particularly if they are trained on data that reflects historical biases or inequalities. Associations must ensure that their AI systems do not discriminate on the basis of race, ethnicity, national origin, gender, age, disability, or other legally protected characteristics, and must take steps to identify and address any biases that may be present in their algorithms. For instance, the use by large employers of AI systems to help screen applicant resumes and even analyze recorded job interviews is rapidly growing. If AI penalizes candidates because it cannot understand a person’s accent or speech impediment, that could potentially lead to illegal employment discrimination. While this will only become a legal issue in certain contexts (such as the workplace), the use of AI has the potential to create discriminatory effects in other association settings (e.g., membership and volunteer leadership) and needs to be carefully addressed.

Tort liability. Associations must consider the potential tort liability issues that may arise from their use of AI. If an AI system produces inaccurate, negligent, or biased results that harm members or other end users, the association could potentially be held liable for any resulting damages. Associations must therefore ensure that their AI systems are reliable and accurate, and that all resulting work product (such as industry or professional standards set by an association) is carefully vetted for accuracy, veracity, completeness, and efficacy.

Insurance. Associations need to ensure that they have appropriate insurance coverage in place to protect against potential liability claims in all of these areas of legal risk. Keep in mind that traditional nonprofit directors and officers (D&O) liability and commercial general liability insurance policies may be—and likely are—insufficient to fully protect associations in all of these areas. Associations also should explore acquiring an errors and omissions liability/media liability insurance policy to fill those coverage gaps.

In conclusion, while the use of AI by associations presents numerous opportunities and benefits, there are several legal issues that need to be carefully considered before going too far down the AI path. Among other things, associations must ensure that they are transparent with their members about the use of their data, obtain necessary intellectual property rights and licenses and avoid infringing others’ rights, address any potential biases in their algorithms, protect themselves against potential tort liability claims, and secure appropriate insurance coverage to protect against these risks.

As the work of associations involves both staff and member leaders, adopting and distributing appropriate policies governing AI usage by staff, officers, directors, and committee members is critical, as is policing compliance with such policies. Similar clauses should be built into employee handbooks and contracts with staff, contractors, and members (including agreements with volunteer speakers, authors, and board and committee members).

With careful planning and attention to these issues, associations can use ever-developing AI technology to enhance their operations, programs, and activities; better serve their members; and further advance their missions.

Jeffrey S. Tenenbaum, Esq., is managing partner at Tenenbaum Law Group PLLC in Washington, DC.