Are You Integrating Generative AI Into Your Business? Global Insights & What the Regulators Want You to Know

by: Dhara Shah

Generative Artificial Intelligence (AI): we know what it is and what its privacy, copyright, and security considerations are – but what does your company really need to focus on before it integrates generative AI into its processes?

OpenAI reports that 80% of Fortune 500 companies have registered ChatGPT accounts, based on its analysis of accounts associated with corporate email domains. As AI continues to advance, it is no surprise that companies are leveraging generative AI to enhance internal processes and operational efficiency. Notable examples include:

  • Coca-Cola made headlines in February, announcing that it had signed on as an early partner with Bain & Company and OpenAI to utilize ChatGPT, OpenAI’s natural language processing tool, and DALL-E, another OpenAI tool that generates realistic images from text descriptions. Coca-Cola plans to use these AI tools to assist with creating personalized ad copy, images, and messaging (a minimal illustrative sketch of this kind of integration appears after this list).

  • Shopify announced that it has integrated ChatGPT into its services, allowing customers to interact with ChatGPT-powered chatbots to customize their shopping experience.

  • Mastercard shared that it has adopted AI to streamline recruiting, using AI to assist with drafting job descriptions. The company has also implemented an AI-powered game to evaluate candidates’ ability to complete specific tasks during the recruiting process.
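
Integrations like those above typically sit on top of a hosted model API rather than bespoke in-house models. As a point of reference – and not a depiction of any of these companies’ actual systems – below is a minimal sketch of generating personalized ad copy with OpenAI’s chat completions endpoint, assuming the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model choice, prompts, and product details are invented placeholders.

```python
# Illustrative only: a minimal generative-AI integration for ad copy.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable. Model name, prompts, and product data are
# hypothetical placeholders, not any company's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_ad_copy(product: str, audience: str) -> str:
    """Ask the model for a short piece of personalized ad copy."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You write concise, brand-safe marketing copy."},
            {"role": "user",
             "content": f"Write a two-sentence ad for {product}, "
                        f"aimed at {audience}."},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_ad_copy("a sparkling beverage", "college students"))
```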

However a company chooses to integrate generative AI, it is important to ensure compliance with applicable laws, guidance, and best practices.

The FTC’s and Other Agencies’ Guidance on the Use of AI

In the current absence of a federal AI law, the FTC issued a joint statement with the Consumer Financial Protection Bureau, the Department of Justice, and the Equal Employment Opportunity Commission reminding companies that AI is still governed by existing laws. For example, the FTC may bring enforcement actions involving AI under the Equal Credit Opportunity Act, the Children’s Online Privacy Protection Act, the Fair Credit Reporting Act, and Section 5 of the FTC Act, which governs unfair and deceptive trade practices.

Companies can reduce the risk of FTC enforcement by keeping the following in mind when developing, integrating, or utilizing generative AI:

  • Ensure claims accurately reflect the AI tool’s capabilities. Check your marketing claims to confirm they do not overpromise.

  • Take measures to prevent consumer injury before launching or using an AI tool. Avoid deception when using AI tools, just as with anything else in your business.

  • Make sure AI tools do not produce biased or discriminatory outcomes (a simple illustrative screen appears after this list).

  • Properly understand how the algorithm generates outcomes – in enough detail to explain it to your users and to regulators.

  • Be aware of the risks associated with the AI tool. When using a third-party tool, you cannot simply shift this responsibility to the AI’s developer.

  • Make sure the data used to train AI systems is collected and processed in compliance with applicable laws.
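
On the bias point above, one widely cited first-pass screen is the EEOC’s “four-fifths rule” for employment selection procedures: flag potential adverse impact when any group’s selection rate falls below 80% of the highest group’s rate. The sketch below applies that heuristic to invented screening data; it is a starting point for review, not a complete bias audit.

```python
# Illustrative only: a first-pass disparate-impact screen using the
# EEOC "four-fifths rule" heuristic. Real bias testing is far more
# involved; the group labels and counts below are invented.
from typing import Dict, Tuple


def selection_rates(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rates."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}


def four_fifths_check(outcomes: Dict[str, Tuple[int, int]]) -> bool:
    """Return False (potential adverse impact) if any group's selection
    rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())


# Hypothetical screening results from an AI-assisted recruiting tool.
results = {"group_a": (48, 100), "group_b": (30, 100)}
print(selection_rates(results))   # {'group_a': 0.48, 'group_b': 0.3}
print(four_fifths_check(results)) # False -> warrants closer review
```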

Global Progress and Regulatory Frameworks in AI

Keep in mind that U.S. state privacy laws, such as California’s CPRA and Colorado’s CPA, impose requirements on businesses using automated decision-making technology and profiling tools. These statutes require comprehensive disclosures, consumer consents and opt-out rights, and impact assessments (a simple illustrative opt-out gate appears after the list below). State regulators are also clearly focused on governing AI-powered decisions that grant or deny financial services, lending services, insurance, housing, healthcare services, employment opportunities, educational access, and/or basic necessities. See our team’s post on 11 Tasks to Prepare for the Use of AI in Your Business for more information on compliance obligations under U.S. state laws. Here’s an overview of Colorado’s and California’s requirements:

  • Colorado: Colorado requires businesses to make additional disclosures when using AI tools for certain purposes – including (1) the logic and training used to create the AI tool; (2) whether the AI tool has been evaluated for accuracy, fairness, and bias; and (3) why the AI tool must be used.

  • California: On August 28th, the California Privacy Protection Agency (CPPA) released draft regulations on Privacy Risk Assessments and Cybersecurity Audits. Although both sets of regulations are in the early stages of development, the drafts provide enough insight to underscore the CPPA’s ongoing consideration of this technology. The draft Privacy Risk Assessment Regulations set forth proposed definitions for “artificial intelligence” and “automated decisionmaking technology,” in addition to proposed obligations for businesses that process personal information for the purpose of training AI.
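
To make the opt-out obligation concrete, here is a minimal, hypothetical sketch of gating an automated decision behind a consumer’s recorded opt-out preference. The data model, function names, and decision logic are invented for illustration; they are not a statement of what any statute technically requires.

```python
# Illustrative only: route consumers who have opted out of automated
# decision-making to human review. The preference record, names, and
# decision logic below are hypothetical.
from dataclasses import dataclass


@dataclass
class Consumer:
    consumer_id: str
    opted_out_of_adm: bool  # recorded opt-out of automated decision-making


def decide_application(consumer: Consumer, application: dict) -> str:
    """Honor the opt-out before any automated decision is made."""
    if consumer.opted_out_of_adm:
        return queue_for_human_review(consumer, application)
    return run_automated_model(consumer, application)


def queue_for_human_review(consumer: Consumer, application: dict) -> str:
    # In practice: open a case for a human decision-maker and log the
    # routing so the process can be demonstrated in an audit.
    return "pending_human_review"


def run_automated_model(consumer: Consumer, application: dict) -> str:
    # Placeholder for the model call; keep inputs and outputs logged so
    # the logic can be explained in disclosures and impact assessments.
    return "approved" if application.get("score", 0) >= 700 else "denied"


print(decide_application(Consumer("c-123", True), {"score": 720}))
# -> "pending_human_review"
```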

More AI legislation is on the way, most notably in the European Union, where negotiations continue over the EU AI Act, which is poised to become the first piece of comprehensive legislation centered on AI. Once it is finalized, our team will report back with key takeaways. In the U.S., additional regulatory activity is highlighted by the White House Blueprint for an AI Bill of Rights, the SAFE Innovation Framework, and the risk management framework published by NIST. Existing laws and incoming regulations and frameworks will serve as a basis for mitigating risks surrounding fairness, discrimination, and explainability in the use of AI.

What Should My Business Do Now?

With the proliferation of generative AI tools and the myriad benefits they can provide to a business, it is important to address compliance before such tools are integrated into your business practices. Consider what internal policies, trainings, and contractual updates you may need to ensure that your use of generative AI is in line with existing laws.

Originally published by InfoLawGroup LLP.