How Brands Can Protect Themselves From Legal Ramifications Over AI Privacy
Artificial intelligence is not a futuristic dream; it's here now. Marketers are tapping the power of AI for a variety of purposes, and each use of AI carries its own privacy and other legal challenges.
The Wall Street Journal has said AI systems have a "thirst for data." As AI stretches its tentacles, it interacts with other machines and engages with more and more third parties. Data sharing with third parties poses a variety of privacy challenges and risks, such as data leakage.
The complexity of AI systems makes maintaining and enforcing accurate and transparent data policies even more challenging. Increasingly, legislators are passing laws that require companies to be transparent about the chain of custody of information. GDPR and California's Consumer Privacy Act of 2018 (CaCPA) are just two recent examples of major legislative overhauls of how data is regulated.
To help manage privacy compliance, companies should (to the extent possible) keep a clear record of how their AI will collect, store, use and share data. Companies should also work closely with legal counsel to determine how this information should be disclosed and what options should be presented to users. Of course, keeping such records current is challenging given the transformative nature of AI systems.
Studies show that customers are warming up to chatbots, especially when the interactions feel more lifelike. Emotional chatbots, in particular, are designed to make interactions with humans more seamless and interactive.
Part of how chatbots become more human is by extracting information from interactions with customers and using those details to convey real emotions. Chatbots aren't just engaging in witty banter. Emotional conversations surface sensitive data about health insurance, financial problems and relationship drama. As chatbots become more lifelike, marketers must take measures to monitor and protect the collection, storage and disclosure of personal information, and consider what disclosures users need about the nature of what they are interacting with.
A friend was recently gathering research for an article she was working on about PTSD. After an hour or so of clicking on articles, she returned to her inbox to find an eerie email: a medical provider from the other side of the country had emailed her an advertisement about help for PTSD.
Companies that sell healthcare-related data must be especially careful to secure the permissions they need to solicit customers and to anonymize data so it can't be used to invade the privacy of healthcare consumers; failing to do so can violate the Health Insurance Portability and Accountability Act (HIPAA). As with anyone who collects and uses personal information for marketing or other purposes, consider carefully whether your privacy notice is up to date, clearly disclosed and understandable.
Facial and Speech Recognition
Facial and speech recognition technology is more sophisticated and prevalent than ever. With its ability to detect subtle changes in a person's face and its incorporation into the latest iPhone model, facial recognition in particular is on the brink of becoming ubiquitous. Having the means to identify someone by their face and to upload that marker into the digital data universe will likely change our culture and shouldn't be taken lightly.
The same principles discussed above need to be considered with recognition technology. The Federal Trade Commission recommends that “companies take steps to make sure consumers are aware of facial recognition technologies when they come in contact with them and that they have a choice as to whether data about them is collected. So, for example, if a company is using digital signs to determine the demographic features of passersby, such as age or gender, they should provide clear notice to consumers that the technology is in use before consumers come into contact with the signs.”
Originally Published by Adweek