InfoLawGroup LLP


11 Tasks to Do Today to Prepare for the Use of AI in Your Business


by Rosanne Yang & Dhara Shah

With artificial intelligence (AI) driving business decisions, writing code, producing artwork, and even drafting entire social media posts, blog articles, and other customer-facing content, the brave new world of AI is not a hypothetical future state. It is here. It is now. Businesses must take action not only to leverage these new tools if desired, but also to protect themselves from known – and unknown – uses of AI by employees, contractors, and vendors. Who owns the work product? Is it reliable? What kinds of legal compliance needs will the use of AI trigger?

“But I Paid for It! What Do You Mean I Don’t Own It?”

Copyright attorneys often find themselves answering this question from incredulous clients. For years, the answer has focused on a failure to include proper assignment language in the contract. But when it comes to the use of AI, even with this magical assignment language, a business can find itself still not owning the work it paid for. What's more, it's no longer just an issue involving freelancers, contractors, and vendors – it's also about what the business's own employees are creating.

Without substantial human involvement in the creation of a work, there is no copyright to own. Anyone can use the work – without the permission of, or payment to, the business or person that thinks they are the owner. Just ask the photographer who tried to claim that he owned a picture taken by a monkey with the photographer's camera. He found out the hard way that the photo is free for the taking.

When it comes to AI-generated content, there is a significant question about whether, or at least to what extent, its output can be owned from a copyright standpoint. The U.S. Copyright Office is actively investigating where it has reason to believe that applied-for works – and even registered works – have been created with AI. The Copyright Office has denied protection to an AI-generated picture and is in the process of investigating the registration of a graphic novel created, in part, using an AI text-to-image tool. But what does "substantial human involvement" mean? The area is murky, and we will have to wait and see where the Copyright Office and the courts come down on the question.

In the meantime, and as a practical matter, this means that businesses must (a) understand how deliverables are being created and (b) take steps to shape how deliverables are created… unless they don't care that their competitors can use the exact same content and that the "asset" has no value when it comes to sale or licensing.

What Businesses Should Do Today:

1. Determine if you care about owning the deliverable. If not, stop here and rest easy. If yes, then…

2. Investigate how the deliverable is being generated. Ask a lot of questions, look at the website of the vendor, and look at what tools employees are using.

3. Update your contracts, as well as your employee handbooks, training materials, and policies to include particular provisions around the use of AI. Outline procedures to follow if you have determined that the use of AI to produce deliverables may be acceptable under certain conditions.

4. If you decide to prohibit the use of AI, be clear about it and, if the prohibition applies to employees, make sure the IT team is monitoring. Include in contracts representations and warranties that AI has not been used.

The Accuracy & Ethics of Your AI Tools

The value of AI tools is rooted in the data originally put into the system. When third-party AI tools are used, such as when a blog writer uses GPT-3 or a programmer utilizes GitHub Copilot, the exact original data set may be unknown. And when we don't know where the data came from, we don't know the true value or accuracy of the output.

Very recently, Google's AI chatbot, Bard, falsely stated that the James Webb Space Telescope took the first photos of exoplanets (when, in reality, the European Southern Observatory's telescope took these photos). While this claim was easily fact-checked, the mistake raises concerns about the accuracy of AI-based tools as their uses expand. How do you know an AI tool is reliable when your researcher uses it to draft a paper, or your contractor uses an AI tool to code a product?

This issue of not knowing what data is used to develop an AI tool goes further when we look at long-term implications. Currently, AI tools have an initial data input that is then "recycled" and, at least in theory, refined over time. Not only does the data in these tools grow increasingly outdated over time, but it also risks creating biases that may lead to inadvertent discrimination. For example, an AI tool that builds consumer profiles from static, outdated data sets could develop biases against certain groups.

What Businesses Should Do Today: Before creating or using a third-party AI tool, a business needs to understand what data and logic was involved in its creation. To do so, businesses should keep the following in mind.

5. Understand where the original data fed into the AI tool was collected from, such that you can reasonably rely on its outputs to be valid.

6. If using a third-party AI tool, review any external evaluations that demonstrate the reliability and accuracy of both the third party and the tool itself.

7. Ask whether the AI tool has been tested for bias – and how.

8. Understand whether there is any human review and verification of the AI tool’s outputs.
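One common way auditors quantify the bias testing mentioned in item 7 is an impact-ratio comparison: each group's selection rate is divided by the highest group's rate, and ratios below 0.8 are a frequent red flag (the "four-fifths rule" from U.S. employment-selection guidelines). The sketch below uses hypothetical numbers and is illustrative only, not a substitute for a formal bias audit.

```python
def impact_ratio(selection_rates: dict[str, float]) -> dict[str, float]:
    """Divide each group's selection rate by the highest group's rate.

    Ratios below 0.8 are commonly treated as a red flag under the
    'four-fifths rule' used in employment-selection guidelines.
    """
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical selection rates for two demographic groups.
rates = {"group_a": 0.50, "group_b": 0.35}

ratios = impact_ratio(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]  # groups below the 0.8 line
```

Here `group_b`'s ratio is 0.35 / 0.50 = 0.7, so it would be flagged for closer review. A real audit would involve far more than this single ratio, but the calculation shows the kind of quantitative question ("how was it tested?") a business should be able to ask its vendor.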

Increased Compliance Obligations under Privacy Laws 

The use of AI tools may increase compliance obligations and costs for businesses. For example, if your company is using AI tools to create profiles about your users, state privacy laws in Colorado, Virginia, California, and Connecticut will require you to provide opt-out rights, conduct risk assessments, and provide certain disclosures. Specifically, Colorado will require companies to disclose the data and logic used to create profiling systems, explain how outputs will be used, and show how the tool is evaluated for fairness (among other disclosures) – or face potentially heavy fines for deceptive trade practices. Similarly, California is currently discussing regulations to govern the use of automated decision-making tools that will impose additional requirements on businesses. Automated decision-making tools may be used for a variety of purposes, including intuitive chatbots, recommendation engines, and employment decisions. Certain laws, such as the New York City Automated Decision Tools Law, also place obligations on the use of automated employment decision tools by employers and employment agencies. While these compliance requirements should help mitigate the ethical concerns discussed above, they will also increase the internal effort and cost your company will face in implementing AI in its business. Thus, before adopting AI tools, companies should work with legal counsel to proactively embed privacy requirements into their processes.

What Businesses Should Do Today:

9. First, weigh the benefits of using AI against the risks and costs of compliance when engaging with AI tools that use personal information of individuals. As discussed above, privacy laws will mandate such analysis. Common benefits may include increasing the efficiency and accuracy of internal processes, providing useful analytics and insight into customer activity, and increasing the scope of services offered. Potential risks may include increased scrutiny and compliance costs under applicable laws, a greater chance of inadvertent bias, and open questions of legal liability (for example, who is liable when your AI vendor provides inaccurate information?). Document this analysis. If the benefits reasonably outweigh the potential risks, then proceed.

10. Next, update your data inventory to track and understand what data is being input into the AI tool, how it is collected, why it is processed, and how its outputs are stored and used.

11. Work with legal to use this data inventory to understand what laws are triggered through your use of the AI tool – to ensure your company is providing required disclosures and obtaining necessary consents.

Be sure to restart and repeat this process any time you change your practices with these AI tools, or integrate a new AI tool into your business practices. 
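The data-inventory update described in step 10 can be sketched as a simple record structure: one entry per AI tool, capturing what goes in, how it was collected, why it is processed, and where the outputs end up. The field names and example values below are hypothetical illustrations, not a prescribed schema; the `laws_triggered` field would be filled in with counsel per step 11.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One data-inventory entry for an AI tool (hypothetical field names)."""
    tool_name: str
    data_inputs: list[str]       # what personal or business data goes in
    collection_source: str       # how that data was collected
    processing_purpose: str      # why it is processed
    output_storage: str          # where outputs are stored
    output_uses: list[str]       # how outputs are used downstream
    laws_triggered: list[str] = field(default_factory=list)  # map with counsel

inventory = [
    AIToolRecord(
        tool_name="marketing-copy-generator",
        data_inputs=["customer names", "purchase history"],
        collection_source="e-commerce checkout form",
        processing_purpose="personalized product recommendations",
        output_storage="internal CRM",
        output_uses=["email campaigns"],
    ),
]

# Flag entries that have not yet been mapped to applicable laws (step 11).
needs_legal_review = [r.tool_name for r in inventory if not r.laws_triggered]
```

Keeping the inventory in a structured form like this makes the "restart and repeat" step above concrete: adding or changing a tool means adding or updating a record, and any record with an empty `laws_triggered` list is an open compliance question.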

Originally published by InfoLawGroup LLP.