First Circuit Ruling May Extend Reach of VPPA

On April 29, 2016, the First Circuit Court of Appeals addressed the questions of what data constitute “personally identifiable information” and who is a “subscriber” under the Video Privacy Protection Act (VPPA) in Yershov v. Gannett Satellite Information Network, Inc. The plaintiff claimed that Gannett shared information identifying him and the video clips that he watched through its USA Today mobile app with Adobe, which provided analytics services for the app. As described in greater detail below, the court decided that

  • downloading and using the USA Today mobile app (without monetary payment) could make a person a subscriber and
  • sharing a device identifier and precise geolocation data (along with a description of the video clips viewed) may be a disclosure of personally identifiable information.

Businesses that publish mobile apps (or websites) featuring video materials, and the third party service providers that may receive information about that content and its viewers (particularly precise geolocation data), should follow the case closely when it returns to the trial court.

Personally Identifiable Information

The court concluded that the combination of a unique device identifier and precise geolocation is personally identifiable information that may identify a “person as having requested or obtained specific video materials.” Because the complaint alleged that Adobe had the ability to use these data elements to identify a specific person, the court determined that it stated a reasonable claim. The plaintiff, however, will still have to prove at trial that Adobe could actually do so.

If the plaintiff is successful at trial, the resulting case could establish a precedent that applies the VPPA to a wide variety of third party service providers. Notably, the court did not opine on whether either of those data types alone would be personally identifiable information. However, the court’s analysis focuses primarily on the identifiability of precise geolocation, relying on the device identifier simply to show that a group of videos was viewed on the same device. Subsequent courts could extrapolate from this decision to conclude that sharing precise geolocation data from a single device (without sharing the actual device identifier) may be sufficient to identify a person.
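To see why precise geolocation alone carries so much identifying power, consider the mechanics: repeated location fixes from a single device tend to cluster at a home or work address. The Kotlin sketch below is purely illustrative (the coordinates and rounding threshold are invented, and nothing here comes from the court record); it shows how trivially a handful of fixes reveals a likely “home” cell.

```kotlin
// Illustrative only: how repeated precise fixes from one device expose a "home"
// location. Data and threshold are invented for this sketch.
data class Fix(val lat: Double, val lon: Double)

// Round each fix to ~3 decimal places (roughly a city block) and take the most
// frequent cell; with enough fixes, the winner is usually where the device sleeps.
fun likelyHome(fixes: List<Fix>): Pair<Double, Double>? =
    fixes.groupingBy {
        Math.round(it.lat * 1000) / 1000.0 to Math.round(it.lon * 1000) / 1000.0
    }.eachCount().maxByOrNull { it.value }?.key

fun main() {
    val fixes = listOf(
        Fix(42.3601, -71.0589), Fix(42.3602, -71.0588), // overnight fixes
        Fix(42.3601, -71.0590),                         // another overnight fix
        Fix(42.3736, -71.1097)                          // one daytime fix elsewhere
    )
    println(likelyHome(fixes)) // -> (42.36, -71.059): a street-level "home" estimate
}
```

Pair the winning cell with an ordinary reverse-address lookup and the “anonymous” device acquires a name on a mailbox; that is the identifiability concern underlying the court’s analysis.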

Subscriber

The court determined that monetary payment is not necessary in order to be a subscriber. The VPPA applies to information collected from “any renter, purchaser or subscriber of goods or services from a video tape service provider.” The First Circuit reasoned that requiring monetary payment to be a subscriber would be redundant, because any paid subscriber would also be a renter or purchaser.

The court also distinguished the Eleventh Circuit ruling in Ellis v. Cartoon Network, Inc., 803 F.3d 1251 (11th Cir. 2015). In that matter, the Eleventh Circuit also concluded that monetary payment is not required in order to be a subscriber. The Eleventh Circuit reasoned that there must be “some type of commitment, relationship, or association (financial or otherwise),” but found that plaintiff Ellis could not demonstrate any such relationship. By contrast, the First Circuit found that Yershov’s provision of his unique device identifier and precise geolocation was sufficient consideration to establish a relationship and thus make him a subscriber. The court stated that sharing this information created a relationship with Gannett substantially different from that of those who merely visit the USA Today website.

Moreover, the First Circuit rejected the district court’s analysis that downloading the USA Today mobile app was no different from adding the USA Today website to the favorites in a web browser. The First Circuit found that the district court’s reasoning assumed there is no difference between the USA Today website and the mobile app. However, the court acknowledged that its ruling is limited to the conclusion that the plaintiff has stated a reasonable claim; it remains the plaintiff’s burden to demonstrate at trial that downloading and using the mobile app is substantially different from simply visiting the USA Today website.

Math Question As Age-Gate and Invite-A-Friend Under Fire

The Children’s Advertising Review Unit of the Council of Better Business Bureaus (“CARU”) routinely monitors websites and mobile apps for compliance with its Guidelines and the Children’s Online Privacy Protection Act (“COPPA”).  Through that routine monitoring, CARU recently discovered the information practices of the 1st through 7th grade mobile applications called Friendzy (e.g., 1st Grade Friendzy). Kids can play the games available in the apps without registering, but the apps offer a registration feature to track time spent in the app and points earned.  In-app purchases are also available.  Registration required the student’s full name, username, password, email address, country, city, zip code and grade.  Here is what happened during registration:

  • If you clicked to register, a pop-up box presented the following statements: “Ask your parents.  Parental permission is required to continue.”
  • The registration page included tabs at the top labeled STUDENT, PARENT, and TEACHER.  PARENT was set as the default tab.
  • Then, a pretty basic math question with six possible answers was presented.
  • An incorrect answer simply generated a new question, so you could keep cycling through questions until you answered one correctly.
  • After registration, you could invite friends via email (the native email app on the device) or via text (the native text app on the device).

CARU pursued the following three issues:

  1. Collection of personally identifiable information from children younger than 13 without prior parental consent during registration.
  2. Allowing children younger than 13 to disclose personally identifiable information through the friends invite feature without prior parental consent.
  3. Inconsistency between the apps’ privacy policy and the actual privacy practices.

WS Publishing, the operator of the Friendzy apps, agreed to remedy the noted issues by setting the default tab to STUDENT, removing the fields that collect last name and email address, removing the invite-a-friend feature and updating its privacy policy.

Given the operator’s quick agreement to remedy the noted issues, the case report does not describe any operator defenses.  Instead, CARU includes several pages describing the guidelines and legal requirements applicable to the issues in the case.  Much of the discussion details general COPPA compliance.  However, we found this case notable for two of the apps’ features: (1) the novel attempt at using a math question to ascertain age for COPPA purposes; and (2) the apps’ use of the native email and text applications on the device (i.e., outside of the app) for an invite-a-friend feature.

Not surprisingly, CARU found the math question deficient as a neutral age-gating mechanism.  CARU commented that many kids younger than 13 can answer basic math questions and, as mentioned above, you could simply keep getting new questions and answer choices until you answered correctly.  Setting those deficiencies aside, CARU commented that even if the math questions were made more difficult, a correct answer would not identify age.  In addition, while the FTC has not addressed this issue, I do not recommend using a math question as a COPPA-compliant age-gating mechanism.
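By way of contrast, here is a minimal Kotlin sketch of what a more conventional neutral age screen looks like. All names are hypothetical and this is not an FTC-endorsed design: the point is simply that the gate asks for a complete birth date without signaling the qualifying answer, and it does not invite retries after a failed attempt the way the Friendzy math quiz did.

```kotlin
import java.time.LocalDate
import java.time.Period

// Hypothetical neutral age screen: ask for a full birth date up front (nothing
// in the prompt hints at the qualifying answer) and do not allow an immediate
// retry after a failure.
object AgeGate {
    private var failedThisSession = false // remembers a failed attempt

    private fun isAtLeast13(birthDate: LocalDate, today: LocalDate): Boolean =
        Period.between(birthDate, today).years >= 13

    fun screen(birthDate: LocalDate, today: LocalDate = LocalDate.now()): Boolean {
        if (failedThisSession) return false   // no cycling until you "pass"
        val passes = isAtLeast13(birthDate, today)
        if (!passes) failedThisSession = true // lock the gate for this session
        return passes
    }
}

fun main() {
    println(AgeGate.screen(LocalDate.of(2010, 5, 1))) // under 13 -> false
    println(AgeGate.screen(LocalDate.of(1990, 5, 1))) // retry blocked this session -> false
}
```

A production implementation would also need to persist the failure flag (so backing out of the screen does not reset the gate) and route under-13 users into a parental-consent flow rather than simply blocking them.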

As discussed, for the Friendzy apps, the invite-a-friend feature launched the device’s native email or text applications, both of which are outside of the app itself.  If a user decided to use those features, no information was collected by the app.  Rather, the concern with that feature is that it allows the user (who could be a child) to disclose personal information publicly, which requires prior parental consent in most instances.  Section I(3) of the FTC’s COPPA FAQs addresses this issue.  The FAQ in question asks whether forward-to-a-friend systems can take advantage of COPPA’s exceptions to parental consent.  The answer is that it depends, but allowing the potential child to reveal anything more than the recipient’s email address (and possibly the sender or recipient’s first name) requires verifiable consent from the sender’s parent.  The FTC also wrote that the forward-to-a-friend system “must not allow the sender to enter her full name, her email address, or the recipient’s full name.  Nor may you allow the sender to freely type messages either in the subject line or in any text fields [of the communication to the friend].”  Because the native email and text applications allow for disclosure of all sorts of personal information, CARU found the feature to be out of compliance with CARU’s Guidelines and COPPA.  In my experience, many marketers and developers believe that having the app launch a native email or text application on the device is a panacea for compliance issues.  This case is one more signal that that belief is unfounded.
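The Android handoff at issue is essentially a one-liner, which is exactly the problem. The sketch below is illustrative only (the pre-filled strings are hypothetical): once startActivity fires, the app has ceded all control, and the sender, who may be a child, can freely edit the recipient, subject, and body in the native client.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri

// Illustrative invite-a-friend handoff to the device's native email app.
fun launchInviteEmail(activity: Activity) {
    val intent = Intent(Intent.ACTION_SENDTO).apply {
        data = Uri.parse("mailto:")                          // only email apps respond
        putExtra(Intent.EXTRA_SUBJECT, "Try this app!")      // pre-filled but user-editable
        putExtra(Intent.EXTRA_TEXT, "Check out this game.")  // free text once in the mail app
    }
    activity.startActivity(intent)
}

// Same handoff for the native SMS app; the message body is equally uncontrolled.
fun launchInviteText(activity: Activity) {
    val intent = Intent(Intent.ACTION_SENDTO).apply {
        data = Uri.parse("smsto:")                   // only SMS apps respond
        putExtra("sms_body", "Check out this game.") // conventional extra for pre-filling SMS
    }
    activity.startActivity(intent)
}
```

Nothing in either function can stop the sender from typing a full name, address, or phone number into the message, which is why the FTC’s free-text limitation cannot be satisfied by this architecture alone.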

Check out our other posts for more COPPA guidance here, here, here and here.

FTC Enters Proposed Consent Order Against Lord & Taylor for Native Advertising Campaign

The Federal Trade Commission (“FTC”) has wasted no time in bringing an action against an advertiser for allegedly deceptive native advertising. The FTC released its Enforcement Policy on Deceptively Formatted Advertisements (“Native Advertising Guidelines”) in late December 2015 (which we blogged about here) and last week the FTC concluded an enforcement action against Lord & Taylor, LLC (“Lord & Taylor”) for its native advertising campaign.

Generally, the complaint against Lord & Taylor alleged that the retailer violated Section 5 of the FTC Act by misrepresenting the nature of certain advertisements and failing to disclose material connections between Lord & Taylor and paid endorsers in a March 2015 marketing campaign for its Design Lab clothing line. The extensive campaign included a weekend blitz of branded blog posts, advertorials in online fashion magazines, and influencer endorsements on social media. The posts and articles appeared to be organic, but were paid for by Lord & Taylor and failed to disclose the material connection between Lord & Taylor and the posters.  Specifically, the complaint alleged that Lord & Taylor:

  • Paid approximately 50 influencers (i.e., individuals with strong social media presence and followings) between $1000 and $4000 each and provided them with a free dress from the line. Those influencers were required to post a photo of themselves wearing the dress on the Instagram platform, tag “@lordandtaylor”, and include the hashtag “#DesignLab” in the caption. However, the influencers were not contractually obligated or otherwise required to disclose that they received a free dress or had been compensated by Lord & Taylor. Moreover, Lord & Taylor reviewed and edited each Instagram post prior to posting, but did not add appropriate disclosures.
  • Paid for, reviewed, and edited an Instagram post by Nylon Media, LLC (“Nylon”) about the same dress. Like the influencer posts, Nylon’s Instagram post did not contain any disclosure that Lord & Taylor had paid for the posting.
  • Paid Nylon to publish an article in its online magazine about the dress. Lord & Taylor reviewed, edited, and approved the article, but the article failed to disclose the relationship between the parties.

Incidentally, the campaign was highly successful and the dress quickly sold out.

The proposed Consent Order, which will be in place for at least 20 years, prohibits Lord & Taylor from misrepresenting that: (1) paid commercial advertising is from an independent or objective source, and (2) any endorser is an independent or ordinary consumer. Further, it requires Lord & Taylor to: (1) disclose any material connection between itself and any influencer or endorser (consistent with FTC principles, including those set forth in the Guides Concerning the Use of Endorsements and Testimonials in Advertising (“Endorsement & Testimonial Guides”)); and (2) obtain a signed and dated statement from each future endorser obligating the endorser to clearly and conspicuously disclose any material connection to Lord & Taylor. Finally, the Consent Order establishes a monitoring and review program for Lord & Taylor’s future endorsement campaigns.

KEY TAKEAWAYS

Even though the Native Advertising Guidelines were not yet in place when Lord & Taylor conducted the Design Lab campaign, the result of the action is not surprising given that the campaign contained clear violations of established FTC guidelines and regulations, including the Endorsement & Testimonial Guides (as alleged in the complaint). Nonetheless, the action serves as an important reminder that the FTC is serious about native advertising. This, too, should come as no surprise given the popularity of native ads and how effective they can be as a marketing tool. As such, consistent not only with the Native Advertising Guidelines but also with longstanding FTC principles, including those in the Endorsement & Testimonial Guides, advertisers across all industries must:

  • Take care to include proper disclosures in a clear and conspicuous manner in advertising (native or otherwise) across all media whenever it is necessary to avoid deception.
  • Consider implementing written policies that obligate endorsers (like influencers) to comply with applicable law, including disclosure of any material connection between the advertiser and endorser.
  • Consider implementing a monitoring program, especially where numerous paid endorsers are engaged across various media.

We have thoroughly discussed compliance with the Native Advertising Guidelines in our blog post on the guidelines and have discussed compliance with other related FTC policies here and here, for example.  Complying with these FTC policies is of paramount importance to avoid an FTC action like the one Lord & Taylor faced.

The Issue of Harm in Lawsuits on Retail Price-Comparison Advertising: Massachusetts Cases to Note

While the swell of class-action lawsuits based on retail price-advertising practices continues to build daily (particularly in California), retailers should note an interesting development from last week in a case out of Massachusetts. In that case, Mulder v. Kohl’s Department Stores, Inc.,[FN1] the plaintiff alleged a number of claims centered on Kohl’s practice of advertising comparison prices on its merchandise, including allegations that Kohl’s failure to abide by Massachusetts’ regulations on price comparisons[FN2] constituted an unfair or deceptive trade practice under Mass. Gen. Law Ch. 93A.

Last week, the court granted Kohl’s motion to dismiss all claims. The court dismissed the deceptive-practices claim after concluding that, while the plaintiff had sufficiently pleaded the “misrepresentation” and “causation” elements of her claim, she had failed to allege sufficient injury.


The New “EU-US Privacy Shield”

Since the European Court of Justice invalidated the fifteen-year-old EU-US “Safe Harbor Privacy Framework” last October, thousands of US companies have been awaiting the results of negotiations between the US government and the European Commission to produce “Safe Harbor 2.0,” a set of protocols to permit the continued flow of personal data between Europe and the US in contexts as varied as ecommerce, social media posts, and the internal management of global corporate groups.  Today, the sleep-deprived negotiators announced a framework agreement for an “EU-US Privacy Shield,” two days after an informal deadline for reaching agreement.

The framework agreement is not the end of the story, and it will likely be weeks, at least, before American companies can actually rely on the new program. The Commission must first draft a more detailed “adequacy decision,” which will be approved only after consulting the EU member state governments and representatives of the national data protection authorities (“DPAs”).  Moreover, the October court decision makes it clear that the DPAs may also make their own determinations on the adequacy of privacy protection when data are transmitted outside the EU.  Their representative “Article 29 Working Group” meets tomorrow and will subsequently issue an opinion on the framework agreement, and it is entirely possible that some national DPAs (or state-level DPAs in Germany) will reject the deal or demand additional conditions.  Beyond that, privacy advocacy groups in Europe are already forecasting new court challenges, arguing that the arrangement does not sufficiently rein in electronic snooping by the US government.

According to today’s press release from the European Commission, the new focus is on transparency and recourse for government surveillance, but there is also a commitment to more rigorous enforcement.  It does not appear that the substantive elements of the Safe Harbor Privacy Principles will be changed materially, but the US government has reassured the European Commission that the US Department of Commerce will monitor compliance by participating companies and that the Federal Trade Commission (FTC) will enforce their commitments.

The new element in the agreement is an undertaking to control US governmental surveillance of trans-Atlantic communications, which was the aspect of privacy protection that the European Court of Justice found lacking in the old Safe Harbor program. Here is how this undertaking is described in the Commission’s press release:

“For the first time, the US has given the EU written assurances that the access of public authorities for law enforcement and national security will be subject to clear limitations, safeguards and oversight mechanisms. These exceptions must be used only to the extent necessary and proportionate. The U.S. has ruled out indiscriminate mass surveillance on the personal data transferred to the US under the new arrangement. To regularly monitor the functioning of the arrangement there will be an annual joint review, which will also include the issue of national security access. The European Commission and the U.S. Department of Commerce will conduct the review and invite national intelligence experts from the U.S. and European Data Protection Authorities to it.”

Moreover, European citizens will have more options for transparency and redress. Companies will be required to respond to questions and complaints within a fixed time period.  European DPAs will be able to refer complaints to the Department of Commerce and the FTC.  Alternative dispute resolution mechanisms also must be made available to individuals free of charge.  Finally, an Ombudsman will be appointed to investigate claims of inappropriate monitoring by US national security agencies.

It should become clearer over the next few days and weeks whether the new deal will be broadly accepted in Europe and provide a satisfactory road forward for the many companies, large and small, that must share data across geographical borders. Meanwhile, companies with international business must continue to rely on other legal bases for transborder data flows, prominently informed consent, transfers necessary for the performance of a contract, data transfer agreements using EU-approved model contract clauses, and, for a few relatively large corporate groups, approved “binding corporate rules.”

All of these mechanisms, however, may conceivably fall prey to concerns over US governmental surveillance, no matter how carefully the companies themselves handle personal information in the course of their business. Thus, the new undertakings by the US government may represent an important step forward in balancing public and private interests in personal information and communications, one that promises to benefit global trade.  Indeed, more US companies may be drawn to the new EU-US Privacy Shield if, precisely because it includes governmental as well as corporate commitments, it becomes more widely accepted than the other legal approaches.

This raises some interesting questions:

  • Will the arrangement spread beyond the 28 EU countries? The three non-EU members of the European Economic Area – Norway, Iceland, and Liechtenstein – will presumably be bound by the new version of Safe Harbor, as they were by the old one. Israel also allowed data transfers to US Safe Harbor companies and will probably allow transfers to companies participating in the new EU-US Privacy Shield program. Switzerland adopted its own Safe Harbor program modeled on the EU-US Safe Harbor Framework and is likely to do the same with the new program. It is even possible that other countries with comprehensive data protection laws ultimately could take the approach of approving data flows to the US subject to corporate and governmental Privacy Shield commitments.
  • Will the EU demand similar undertakings from other business partners? Surely, the privacy risks are no less significant when personal data flows from the EU to Russia, China, Iran, or a host of other countries with sophisticated surveillance capabilities and, in many cases, much less transparency and legal recourse. If the governments of those countries are not willing to make similar commitments, will the EU try to block data flows? Will it even ask? Or is the EU approach simply to seek improved privacy protections among the more like-minded liberal democracies?
  • Will the EU apply similar standards for government surveillance in its own territory? In the UK, Parliament is currently revisiting its oversight structure for the GCHQ, a counterpart of the American NSA. Will the resulting scheme parallel in some respects the “EU-US Privacy Shield?” Will similar forms of transparency and redress move forward in Germany, the Netherlands, Denmark, and other countries that reportedly share communications intelligence with the NSA? The recent terrorist attacks in Paris have actually produced broader authorizations for surveillance by French and Belgian police and intelligence agencies, and that trend is reflected in other European countries worried about Islamic State sympathizers and a massive wave of refugees from Syria, Libya, Afghanistan, and other hotbeds of “radical Islam.” The governing parties in several EU member states, such as Poland and Hungary, seem to be riding a wave of nativist reaction to perceived external threats. Will they agree to conform to an EU consensus on the proper limits of electronic surveillance? Is there such a consensus?

The oversight of national intelligence agencies in a democracy is a complex issue of great public importance, one that must necessarily evolve with changing technologies. In this case, that evolution has suddenly accelerated, not through comprehensive public or legislative debate but as the result of a single court decision about storing Europeans’ social media posts on American servers.  Even in a digital world, sometimes the tail wags the dog.

Businesses Take Heed: FTC’s Recent Report, Conference Signal Big Data’s the Big Deal in 2016

FTC Kicks Off New Year with New Report on Growing Use of Big Data Analytics Across All Industries

Without so much as a week of 2016 having elapsed, the Federal Trade Commission (“FTC” or “Commission”) released a new report with recommendations to businesses on the growing use of big data. The report, “Big Data: A Tool for Inclusion or Exclusion?  Understanding the Issues” (“Report”), is based primarily on the FTC’s synthesis of the numerous discussions and written public comments submitted in connection with the FTC’s September 2014 public workshop exploring the use of big data and its impact on American consumers, as well as a prior FTC seminar on alternative scoring products.  The primary purpose of the Report is to ensure that businesses’ use of big data analytics, while producing many benefits for consumers, avoids outcomes that may be exclusionary or discriminatory, in particular with respect to low-income and underserved populations.

Big Data Defined

“Big data,” as the Report explains, refers to a confluence of factors, including: (i) the nearly ubiquitous collection of consumer data from a variety of (primarily, but not exclusively) online sources, whether by shopping, visiting websites, paying bills, connecting with family and friends through social media, using mobile applications, or using connected devices (e.g., fitness trackers, smart televisions, etc.); (ii) dramatic reductions in the cost of data storage; and (iii) the growing availability of powerful new data processing capabilities, which can analyze data, draw connections, and make inferences and predictions with ever-increasing speed and accuracy.

Big Data: Benefits and Risks

In the Report, the Commission readily acknowledges that the era of big data has arrived and that the role of big data is growing rapidly across virtually all industries, which, in turn, is producing tremendous benefits for both businesses and consumers, as well as for society as a whole. At the same time, advocates, academics, and others have raised concerns about whether certain uses of big data analytics may harm consumers, violate consumer protection or equal opportunity laws, or more generally detract from the core values of inclusion and fairness.  Thus, a central focus of the Report is to examine the intersection of big data’s benefits and risks for businesses and consumers alike, in an effort to avoid discriminatory data use (which was a key topic at the recent FTC conference, PrivacyCon, discussed below).

The Report sets forth a number of improvements to society that are made possible through big data analytics. In addition to more effectively matching products and services to consumers, big data can create opportunities for low-income and underserved communities, for example, by:

  • increasing educational attainment for individual students;
  • providing access to credit using non-traditional methods;
  • providing healthcare tailored to individual patients’ characteristics;
  • providing specialized healthcare to underserved communities; and
  • increasing equal access to employment.

At the same time, the Report notes that some researchers and others have expressed concern that the use of big data analytics may result in the exclusion of certain populations from the benefits that society and businesses have to offer, based on issues related to the quality of data, including its accuracy, completeness, and representativeness, as well as on whether there are uncorrected biases in the underlying consumer data. For example, businesses could use big data to exclude low-income and underserved communities from credit and employment opportunities, or to create and reinforce disparities in the pricing and availability of certain products and services.

Highlighting Numerous Laws Potentially Applicable to Big Data Practices

In addition to the potential for diminishing inclusion and fairness, the Report also makes clear that certain uses of big data could violate various existing laws governing big data practices, such as the Fair Credit Reporting Act (“FCRA”), equal opportunity laws, and the Federal Trade Commission Act (“FTC Act”). The Report attempts to guide businesses in this regard by providing an overview of the existing legal framework established by these laws.  However, the Report also emphasizes that it is not intended to identify or fill any legal or policy gaps, so businesses need to ensure they have a complete understanding of these and any other laws that may be implicated at any stage in the life cycle of big data.  To this end, the advice of counsel is strongly recommended, particularly given that the FTC reiterates throughout the Report that it will continue to monitor areas where big data practices could violate existing laws and that it will bring enforcement actions where appropriate.

Key Considerations for Assessing Compliance with Laws Identified in the Report

For businesses already using or considering engaging in big data analytics, the Report offers a number of key questions (set forth below) to consider as a starting point for assessing an entity’s compliance, or lack thereof, with the existing laws addressed in the Report. Predictably, these questions focus, to a great extent, on certain disclosures that may need to be made to consumers in connection with the use and sharing/transfer of big data (or “little” data that may become “big” data), as well as on the security of such information. Among other things, businesses should consider the following:

  • If you compile big data for others who will use it for eligibility decisions (such as credit, employment, insurance, housing, government benefits, and the like), are you complying with the accuracy and privacy provisions of the FCRA? Among other things, the FCRA requires you to (1) have reasonable procedures in place to ensure the maximum possible accuracy of the information you provide, (2) provide notices to users of your reports, (3) allow consumers to access information you have about them, and (4) allow consumers to correct inaccuracies.
  • If you receive big data products from another entity that you will use for eligibility decisions, are you complying with the provisions applicable to users of consumer reports? For example, the FCRA requires that entities that use this information for employment purposes certify that they have a “permissible purpose” to obtain it, certify that they will not use it in a way that violates equal opportunity laws, provide pre-adverse action notice to consumers, and thereafter provide adverse action notices to those same consumers.
  • If you are a creditor using big data analytics in a credit transaction, are you complying with the requirement to provide statements of specific reasons for adverse action under the Equal Credit Opportunity Act (“ECOA”)? Are you complying with ECOA requirements related to requests for information and record retention?
  • If you use big data analytics in a way that might adversely affect people in their ability to obtain credit, housing, or employment:
    • Are you treating people differently based on a prohibited basis, such as race or national origin?
    • Do your policies, practices, or decisions have an adverse effect or impact on a member of a protected class, and if they do, are they justified by a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact?
  • Are you honoring promises you make to consumers and providing consumers material information about your data practices?
  • Are you maintaining reasonable security over consumer data?
  • Are you undertaking reasonable measures to know the purposes for which your customers are using your data?
    • If you know that your customer will use your big data products to commit fraud, do not sell your products to that customer. If you have reason to believe that your data will be used to commit fraud, ask more specific questions about how your data will be used.
    • If you know that your customer will use your big data products for discriminatory purposes, do not sell your products to that customer. If you have reason to believe that your data will be used for discriminatory purposes, ask more specific questions about how your data will be used.

FTC Enforcement Actions Expected

With the FTC’s release of the big data Report on January 6, and with the Commission having hosted, just over a week later, a major conference (discussed below) focused in part on the latest research and commercial trends in big data, we expect big data to be a very big deal in 2016. Further, now that the Commission has provided businesses additional substantive and actionable guidance in the realm of big data, enforcement actions from the FTC are likely forthcoming.  This is something we will continue to watch.

PrivacyCon 2016: FTC Conference Intensifies Spotlight on Big Data

On January 14, just eight days after the FTC released its latest Report on big data (discussed above), the Commission hosted PrivacyCon, a first-of-its-kind multi-stakeholder conference focused on the latest research and commercial trends in big data, consumer privacy, and related areas. PrivacyCon featured presentations and discussions by a diverse group of stakeholders, including white-hat researchers, academics, industry representatives, consumer advocates, and government regulators.

The original research presented at the day-long event centered on such issues as how consumers’ understanding of privacy online squares with the options about their privacy that they are provided, tools to analyze the way consumers’ information is shared and used online, and the effectiveness of programs to track security vulnerabilities. Within this broader context, the FTC put big data on center stage for the second time in just over a week when Commissioner Brill kicked off an afternoon session on big data and algorithms, during which several academics presented research and findings on the importance of transparency tools in revealing and avoiding data discrimination. The slide deck used during this session can be found here.

FTC Releases Policy Statement and Business Guides on Native Advertising

Just before 2015 came to an end, the Federal Trade Commission (“FTC”) released its much anticipated Enforcement Policy Statement on Deceptively Formatted Advertisements (“Policy Statement”) along with informal, practical guidance for businesses titled “Native Advertising: A Guide for Businesses” (“Business Guides”). The FTC first began considering native advertising in a December 2013 workshop. Native advertising is digital media content that blurs the line between advertising and editorial by inserting paid content into the regular stream of media.[i] Unsurprisingly, the FTC concluded at the workshop that misrepresenting the source of content or failing to disclose that it is commercial in nature likely amounts to a violation of Section 5 of the FTC Act, but the FTC did not further elaborate. The Policy Statement reiterates this general concept and goes on to clarify the FTC’s position on native advertising by tying it to longstanding FTC principles across varying spheres (from door-to-door sales to telemarketing to CAN-SPAM and from infomercials to advertorials in traditional print media). The Business Guides expand on the Policy Statement and provide detailed guidance (including 17 examples) on when and how to prevent consumer deception in the digital advertising space. Generally, the position taken by the FTC is not particularly shocking, as it is consistent with the established truth-in-advertising standard that the commercial nature of content must be readily apparent or accompanied by a clear and conspicuous disclosure.[ii] The specificity of the requirements set forth in the Business Guides, however, may pose significant creative and technical hurdles for advertisers. Industry groups, like the IAB, have already expressed concern. Thorough discussions of the Policy Statement and the Business Guides, as well as key takeaways, follow.

***

In the Policy Statement and Business Guides, the FTC explains that it will likely find a violation of Section 5 of the FTC Act if commercial content misleads consumers into believing that it is impartial, independent, or from a source other than the advertiser. Such a misrepresentation, the FTC explains, is material as it is likely to affect the credibility the consumer gives to the content and the consumer’s decision to interact with the content. To avoid misleading consumers in this way, advertisers must consider the following two questions:

  1. Is the content clearly an advertisement?
  2. If not, have sufficient disclosures been included?

Is the content clearly an advertisement?

The FTC explains that certain materials “may be so clearly commercial in nature that they are unlikely to mislead consumers even without a specific disclosure.” However, if the content promotes the advertiser’s product/service and implies that it is anything but an ad (i.e., if it implies that it is news, informational, or educational), the FTC is likely to find a violation of Section 5 of the FTC Act unless appropriate disclosures are included. As always, the FTC will look at the “net impression” of the ad to determine whether the commercial nature of the content is clear to the “reasonable” consumer.

In making this evaluation, the FTC will consider the overall appearance of the ad itself (and verbal/audio content in the case of multimedia), as well as the similarity of the ad’s style/formatting to the surrounding non-advertising content and whether the substantive content is distinguishable from the surrounding non-advertising content. Simply put, the more similar the ad is to the publisher site’s content, the more likely it is that a disclosure will be required.

The Policy Statement sets forth a handful of factors the FTC will consider in determining whether the consumer will recognize the content as an advertisement:

  • On what media is the content featured? The FTC acknowledges that consumers have different expectations depending on the media being consumed (e.g., social media versus a news website).
  • Who is the target audience? For example, if children are the target audience, they are unlikely to recognize something as an ad.
  • Does the substantive content of the ad differ from the surrounding content? For example, an ad inviting a consumer to take a car test drive in a news stream is likely recognizable as an advertisement.
  • Is the format of the ad similar in written, spoken or visual style (including text, images, audio, and video) to the non-advertising content? For example, does the content look like an editorial article on a news site? Conversely, is the content set apart using background shading, borders or other visual cues to indicate that it is an ad?

The Business Guides provide additional color to the FTC’s position via the examples. For instance:

  • If the context makes it clear that the content is advertising, prior disclosure may not be necessary (but since the content is shareable, a disclosure within the content itself will likely be required). For example, on social media, users most often know whom they follow, so the commercial nature of an ad appearing in a user’s own feed will likely be clear to that user and prior disclosure may not be necessary; that same content shared by the user to another feed or platform, however, would likely require a disclosure within the content.
  • Linked content must contain appropriate disclosures. For example, a non-paid search engine result that is an advertisement must clearly identify itself as an ad. The Business Guides use an example of a non-paid search engine result displaying a link to a video that describes how to build a deck in 5 minutes. The video is sponsored by an advertiser that sells deck stain and the advertiser’s deck stain is used in the video. The FTC recommends that the search engine result itself (not just the page a user lands on after clicking the link) make clear that the video is commercial in nature.
  • If editorial content is paid for by an advertiser and such content does not tout the benefits of the advertiser’s products/services, it need not be labelled as advertising. On the other hand, paid content that appears editorial/informational but touts the benefits of or otherwise promotes the advertiser’s products/services in any way must be labelled as an ad.
  • Content that is clearly advertising (e.g., a billboard on a street in a video game, product placement) need not be labelled as advertising. However, if that same content is clickable and the consumer is taken out of the publisher site (i.e., the video game) to an advertiser’s property (e.g., web site or app) or an advertisement, a disclosure would be necessary before the consumer clicks through to the content.

If it is not clearly an advertisement, have sufficient disclosures been included?

If something is not clearly an advertisement and disclosures are required to avoid deception, the disclosures must follow the FTC’s established principles with respect to disclosures. In other words, the disclosures must be clear, unambiguous, prominent, and in close proximity to the ad. Further, audio disclosures must be read slowly and use words that are easy to understand and video disclosures must be on screen long enough to be noticed, read, and understood.

In addition to those general principles, the Business Guides contain specific guidance on how to make sufficient disclosures. For instance:

  • The disclosure must be near the ad’s focal point so that a consumer will see it and easily identify the content to which the disclosure applies, e.g., the disclosure must be on the video or thumbnail itself rather than in the text description, the disclosure must be to the immediate left of a headline, the disclosure should not come after the user views the commercial content, the disclosure should not be made too early, etc.
  • The disclosures must travel with the ad content if it is shared/linked, e.g., in the title tag for a non-paid search result, in the beginning of the URL for a shared URL, and in the content itself rather than in text that accompanied the content on the publisher site (see the sketch after this list).
  • The disclosure must be readable and understandable on any device or media through which the content is accessible, e.g., multimedia may require audio disclosure for a consumer to be able to read/comprehend the disclosure.
  • The disclosure must use terminology that is consistent with the language on the particular platform/media.
  • The disclosure must use plain language, rather than legalese or marketing “jargon”.

KEY TAKEAWAYS

  • If the source of the advertisement is unclear (i.e., if the content expressly or impliedly misleads the consumer into believing that it is not an advertisement), the FTC will consider an ad deceptive even if the underlying claims are truthful and non-misleading.
  • If something is so clearly an advertisement that no reasonable consumer would think otherwise, then a disclosure may not be necessary. If the content in question does not fit precisely within one of the FTC’s enumerated examples, however, it is likely best to err on the side of disclosure.
  • Acceptable disclosures include things like “Ad”, “Advertisement”, “Paid Advertisement” and “Sponsored Advertising Content”. Anything slightly more vague, such as “Promoted”, “You might like”, or mere inclusion of the advertiser’s name/logo, is likely insufficient, per the FTC. Further, “sponsored by [x]” or “promoted by [x]” is also unacceptable as a disclosure for a native ad because consumers may not understand that the advertiser influenced/created the content, according to the FTC.[iii]
  • Since ads may be accessed in various ways (e.g. via web search, linking, or sharing in addition to direct access from the publisher site), it will often be insufficient to include disclosures only on the publisher site. In other words, content that can be republished must contain clear and conspicuous disclosures that travel with the content.
  • Similarly, the ad content that a consumer accesses after clicking through may often require a disclosure in addition to the disclosure on the publisher site.
  • The target audience must be considered. For example, is the content targeted to children or to adult social media users that know whom they follow and how ads appear in their feed? The required disclosures may vary depending on the target audience, but the FTC did not elaborate on the different disclosures that would be required for different audiences. Specifically, the FTC wrote: “To the extent that an advertisement is targeted to a specific audience, the Commission will consider the effect of the ad’s format on reasonable or ordinary members of that targeted group.”
  • The Policy Statement and Business Guides apply across all media, including, without limitation, news/content aggregator sites, social media platforms, messaging apps, product review sites, email, user generated and professionally produced videos, video games, and animations.
  • All parties who help to make or publish the content (e.g., ad agencies, affiliate advertising networks) are on the hook, not just the advertiser itself.

Enforcement actions from the FTC are likely forthcoming, and this is something we will continue to watch.

[i] For additional background on native advertising see here and here on our blog.

[ii] For additional discussion of the FTC’s disclosure standards see here and here on our blog.

[iii] If an advertiser sponsors purely editorial content that does not promote the advertiser’s products/services in any way and has not been created or influenced by the advertiser, then it is likely acceptable (but not required) to include “sponsored by”, “promoted by” or similar.

Privacy and Ed-tech in 2016

There was a lot of legislative movement for the educational technology (ed-tech) industry in 2015, with states placing additional privacy regulations on the industry, and the effects of those new acts should be felt this year. The states that passed this type of legislation in 2015 were following California’s lead: California’s governor signed the Student Online Personal Information Protection Act (SOPIPA) (2014 Cal SB 1177) back in 2014. Even though these states enacted their legislation after SOPIPA, at least one of those acts came into effect before SOPIPA became operative (which was January 1, 2016).

FTC Settles Advertising-Related COPPA Charges Against Two App Developers

This week, the FTC announced settlements with two developers of children’s apps that it claimed had failed to comply with the Children’s Online Privacy Protection Act (“COPPA”). While any COPPA enforcement by the FTC is noteworthy, these cases are particularly interesting in that they are the FTC’s first COPPA enforcement actions based on allegations that a child-directed online service allowed advertisers to collect and use persistent identifiers for the purpose of targeting advertisements.

