
Ain’t Nothin’ to See Here?  Quite the Contrary.  Making Proper Disclosures for AI Features

by: Rosanne Yang

As generative artificial intelligence (AI) tools rapidly proliferate, many businesses find themselves incorporating them into existing services or launching entirely new services.  Common services where generative AI is showing up more frequently include general customer service tools (e.g., a more conversational “live chat” or “virtual salesperson”), research tools, virtual assistants, and more.  Existing laws and regulatory guidance make clear that businesses should be making specific disclosures in order to effectively inform consumers about the implications of the AI use.  In addition to determining what needs to be disclosed, businesses must also determine where those disclosures should be made.

What to Disclose

While disclosures will vary depending on the service involved, here are some of the top items particular to a generative AI environment that may need to be disclosed:

  1. Use of Automation.  If you use a bot to respond to customer inputs without human involvement, it’s not just a good idea to make sure users know it is a bot – it may be required in California.  California’s bot disclosure law (Business & Professions Code Sec. 17940 et seq.) requires disclosure that the interaction is with a bot in certain instances, including if the interaction is intended to incentivize a purchase.  The Federal Trade Commission (FTC) also stated recently that it is “obviously not okay” to sell AI-created digital products while trying to fool people into thinking they were created by humans.  To the extent that the automation triggers state privacy laws on automated decision making, disclosure (and choice) may be required.

  2. Recording the Inputs.  Cases alleging that the use of chatbot technologies violates state wiretapping statutes continue to be filed.  Gathering consent to the “recording” of the user’s inputs will help mitigate the risk of being swept into these kinds of cases.

  3. Your Uses of the Inputs.  Consumers may not realize that their interactions with the generative AI tool may be used for purposes beyond the interaction itself, such as to further train the AI (either by the business or the model’s provider) or for marketing purposes.  Providing transparency as to those uses, and indicating whether the data will be anonymized for such uses, will increase the likelihood that a consumer’s consent to those other uses will be deemed valid.  These disclosures may also be required under other laws, such as state privacy laws, regardless of how obvious the use may seem.

  4. Ownership of Rights/Licenses.  The FTC warned recently that failing to be clear about what rights the users have to the outputs could be actionable as deceptive or unfair.  Therefore, be clear with your users about whether they are getting a license to the output or a transfer of ownership rights in the output, and what rights you as the provider retain. 

Conversely, even if desiring to transfer ownership rights in the output to the user, be careful not to overstate what they will own at the end of the process.  Some jurisdictions, like the United States, do not grant copyright recognition to content created solely by automated means. 

  5. Training Data.  A variety of governmental, quasi-governmental, and industry organizations are urging transparency to the users regarding the data used to train the generative AI model.  For instance, the FTC called out a need to inform users of the inclusion of copyrighted or other protected content in the training data – they view that information as potentially material to a consumer’s decision to use the product – and, as discussed above, any use of the consumer’s input or consumer data to train the AI should be disclosed as well.

  6. Choices.  If the user has choices about their use of the AI service (e.g., customization, privacy settings, or even the choice to not use it – such as if there is an option to connect to a human customer support specialist or have the response reviewed by a human), be clear about it.  See also the section below on privacy compliance.

  7. Accuracy, Limitations, and Risk.  Consumers may need to be informed about the lack of accuracy that is – at least at this point in the development of generative AI tools – seemingly inherent in the technology, or about limitations or risk in using the outputs.  This is especially true if the outputs are likely to be relied upon in ways that could hurt or disadvantage the consumer in significant ways.

  8. Restrictions on User Conduct.  It is important to prohibit user conduct that could undermine further use of the inputs or create other risks for the business.  While terms restricting user conduct will not prevent the conduct from occurring, being clear about such restrictions will at least create an ability to take action against users who engage in the unwanted conduct.  Some items to consider adding to standard lists of prohibited conduct include:

Providing False Information.  Especially if the inputs will be used to further train the generative AI model, it is important that users not corrupt the model.

Entering (Sensitive) Personal Data.  Including sensitive personal data in the inputs (whether the user’s own or another person’s) introduces increased security risks from hacking or from disclosure to others in later AI-produced responses, as well as extended compliance obligations in the event of data subject requests for access, deletion, or correction.  Depending on the context, consider whether it makes sense to extend the prohibition to additional types of personal data, or potentially all personal data, but be sure any prohibition is realistic – technical screening, as in the sketch below, can help make it so.
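To make a prohibition on entering sensitive data more realistic in practice, some services pair the contractual restriction with a lightweight technical screen.  The following is a minimal, purely illustrative sketch in TypeScript of a client-side check that warns before input containing apparently sensitive data is submitted; the pattern list and function names are invented for illustration, and a production screen would need to be far more robust.

```typescript
// Illustrative sketch only: a simple client-side screen that flags input
// that appears to contain sensitive personal data before it is submitted.
// Patterns here are simplistic examples, not a complete or reliable list.
const SENSITIVE_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "US Social Security number", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
  { label: "payment card number", pattern: /\b(?:\d[ -]?){13,16}\b/ },
];

// Return the labels of any sensitive-data patterns found in the input.
function screenInput(text: string): string[] {
  return SENSITIVE_PATTERNS
    .filter(({ pattern }) => pattern.test(text))
    .map(({ label }) => label);
}

// Example usage: warn (or block) before the input reaches the model.
const hits = screenInput("My SSN is 123-45-6789");
if (hits.length > 0) {
  console.warn(`Input appears to contain: ${hits.join(", ")}. Please remove it.`);
}
```

A screen like this supplements, rather than replaces, the contractual prohibition – it simply reduces the volume of sensitive data that reaches the model in the first place.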

Where to Disclose

There are some disclosures that may be suitable to appear only in the terms of use.  Other disclosures may need to appear more overtly in order to withstand scrutiny, such as in a one-time pop-up or persistently in the interface.  Time will tell what regulators and courts deem sufficient in these contexts, but it has always been true that the more material an issue is to the consumer’s decision whether to use the service, the less likely it is that burying that item in a terms document will suffice.  The trendline shows an increasing, rather than decreasing, need for transparency.

Therefore, consider also disclosing certain topics (a) at the point the customer signs up for the AI-based service, or (b) if there is no separate “sign up,” at the point the customer first begins to interact with the AI-based service – this latter disclosure could appear on the landing page for the service or, if the interaction occurs in a pop-up box, as the first message in the interaction.
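To illustrate the “first message” approach, here is a minimal, purely hypothetical sketch of a chat widget that surfaces the disclosure before collecting any user input.  The interface, function names, and disclosure wording are all invented for illustration – actual disclosure language should be tailored to the service and reviewed by counsel.

```typescript
// Illustrative sketch only: a hypothetical chat widget that presents the
// bot disclosure as the very first message, before any user input.
interface ChatMessage {
  sender: "bot" | "user";
  text: string;
}

// Hypothetical disclosure copy covering automation, recording, and choice.
const DISCLOSURE_TEXT =
  "You are chatting with an automated virtual assistant, not a human. " +
  "Your messages are recorded and may be used to improve our services. " +
  "Type 'agent' at any time to reach a human representative.";

function startConversation(render: (msg: ChatMessage) => void): ChatMessage[] {
  const transcript: ChatMessage[] = [];
  // The disclosure is the first thing the user sees and persists in the
  // transcript, rather than being buried in a linked terms document.
  const disclosure: ChatMessage = { sender: "bot", text: DISCLOSURE_TEXT };
  transcript.push(disclosure);
  render(disclosure);
  return transcript;
}

// Example usage: log messages to the console in place of a real UI.
startConversation((msg) => console.log(`[${msg.sender}] ${msg.text}`));
```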

And when updating the terms of use and privacy policy to accommodate new AI provisions, make sure to provide proper notice of those updates to your existing customer base.

Addressing Privacy Compliance

In addition to ensuring good disclosures up front, be sure that consumer interactions with the AI service are accounted for in any privacy request fulfillment that may be required under applicable laws, such as the right to access, delete, or correct personal data, or to opt out of targeted advertising or the sale of personal data relating to the interactions.  This will require an in-depth understanding of where and how the data is stored, accessed, and used.  A determination must be made as to whether the data is “personal data” subject to privacy rights requests.  Consider what it means to “delete” the data at the user’s request, particularly if it has been used to train the AI, and be sure that such a deletion request can and will be fulfilled.
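As a purely illustrative sketch of that engineering exercise, the hypothetical TypeScript below shows a deletion flow that accounts for both stored transcripts and the possibility that the inputs were used in training.  Every name here is invented; real systems will involve many more data stores, and whether excluding a user from future training satisfies a deletion right is a legal question for counsel, not a technical one.

```typescript
// Illustrative sketch only: a deletion flow that covers AI chat transcripts
// and flags data that was folded into model training. All names are invented.
interface TranscriptStore {
  transcripts: Map<string, string[]>;  // userId -> stored chat transcripts
  trainingSetUserIds: Set<string>;     // users whose inputs trained the model
}

function fulfillDeletionRequest(store: TranscriptStore, userId: string): string {
  // Step 1: delete the stored transcripts themselves.
  const count = store.transcripts.get(userId)?.length ?? 0;
  store.transcripts.delete(userId);

  // Step 2: if the inputs were used for training, deleting the raw transcript
  // alone may not fully honor the request; flag for follow-up (e.g., exclusion
  // from future training runs) consistent with counsel's guidance.
  if (store.trainingSetUserIds.has(userId)) {
    store.trainingSetUserIds.delete(userId);
    return `Deleted ${count} transcript(s); queued exclusion from future training data.`;
  }
  return `Deleted ${count} transcript(s).`;
}

// Example usage with an in-memory store.
const store: TranscriptStore = {
  transcripts: new Map([["user-123", ["Hi, I need help with my order."]]]),
  trainingSetUserIds: new Set(["user-123"]),
};
console.log(fulfillDeletionRequest(store, "user-123"));
```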

Remember that even if some or all of the data is outside the scope of a privacy rights request, it still presents a potential security risk.  In any case, depending on your answers to these questions, communications and disclosures regarding privacy rights will need to be carefully crafted to avoid misrepresenting the scope of request fulfillment – it may be as important to determine what not to say as it is to affirmatively disclose.

Conclusion

One thing that government and industry seem to agree on is that transparency is a key factor in the use of AI, and in particular generative AI, in consumer-facing services.  Keep in mind, too, that this is not the Wild West: a myriad of existing laws and regulations already apply to AI operations.  As you incorporate AI into your services and features, take these transparency considerations and the current legal landscape into account when designing those features, when planning your communications, and in ongoing operations.

Originally published by InfoLawGroup LLP.  If you would like to receive regular emails from us, in which we share updates and our take on current legal news, please subscribe to InfoLawGroup’s Insights.