The New “EU-US Privacy Shield”

Since the European Court of Justice invalidated the fifteen-year-old EU-US “Safe Harbor Privacy Framework” last October, thousands of US companies have been awaiting the results of negotiations between the US government and the European Commission to produce “Safe Harbor 2.0,” a set of protocols to permit the continued flow of personal data between Europe and the US in contexts as varied as ecommerce, social media posts, and the internal management of global corporate groups.  Today, the sleep-deprived negotiators announced a framework agreement for an “EU-US Privacy Shield,” two days after an informal deadline for reaching agreement.

The framework agreement is not the end of the story, and it will likely be weeks, at least, before American companies can actually rely on the new program. The Commission must first draft a more detailed “adequacy decision,” which will be approved only after consulting the EU member state governments and representatives of the national data protection authorities (“DPAs”).  Moreover, the October court decision makes it clear that the DPAs may also make their own determinations on the adequacy of privacy protection when data are transmitted outside the EU.  Their representative body, the “Article 29 Working Party,” meets tomorrow and will subsequently issue an opinion on the framework agreement, and it is entirely possible that some national DPAs (or state-level DPAs in Germany) will reject the deal or demand additional conditions.  Beyond that, privacy advocacy groups in Europe are already forecasting new court challenges, arguing that the arrangement does not sufficiently rein in electronic snooping by the US government.

According to today’s press release from the European Commission, the new focus is on transparency and recourse for government surveillance, but there is also a commitment to more rigorous enforcement.  It does not appear that the substantive elements of the Safe Harbor Privacy Principles will be changed materially, but the US government has reassured the European Commission that the US Department of Commerce will monitor compliance by participating companies and that the Federal Trade Commission (FTC) will enforce their commitments.

The new element in the agreement is an undertaking to control US governmental surveillance of trans-Atlantic communications, which was the aspect of privacy protection that the European Court of Justice found lacking in the old Safe Harbor program. Here is how this undertaking is described in the Commission’s press release:

“For the first time, the US has given the EU written assurances that the access of public authorities for law enforcement and national security will be subject to clear limitations, safeguards and oversight mechanisms. These exceptions must be used only to the extent necessary and proportionate. The U.S. has ruled out indiscriminate mass surveillance on the personal data transferred to the US under the new arrangement. To regularly monitor the functioning of the arrangement there will be an annual joint review, which will also include the issue of national security access. The European Commission and the U.S. Department of Commerce will conduct the review and invite national intelligence experts from the U.S. and European Data Protection Authorities to it.”

Moreover, European citizens will have more options for transparency and redress. Companies will be required to respond to questions and complaints within a fixed time period.  European DPAs will be able to refer complaints to the Department of Commerce and the FTC.  Alternative dispute resolution mechanisms also must be made available to individuals free of charge.  Finally, an Ombudsman will be appointed to investigate claims of inappropriate monitoring by US national security agencies.

It should become clearer over the next few days and weeks whether the new deal will be broadly accepted in Europe and provide a satisfactory road forward for the many companies, large and small, that must share data across geographical borders. Meanwhile, companies with international business must continue to rely on other legal bases for transborder data flows, most prominently informed consent, transfers necessary for the performance of a contract, data transfer agreements using EU-approved model contract clauses, and, for a few relatively large corporate groups, approved “binding corporate rules.”

All of these may conceivably fall prey to concerns over US governmental surveillance, however, no matter how carefully the companies themselves handle personal information in the course of their business. Thus, the new undertakings by the US government may represent an important step forward in balancing public and private interests in personal information and communications, one that promises to benefit global trade.  It is conceivable that more US companies will be attracted to the new EU-US Privacy Shield, if it becomes more widely accepted than other legal approaches precisely because it includes governmental as well as corporate commitments.

This raises some interesting questions:

  • Will the arrangement spread beyond the 28 EU countries? The three non-EU members of the European Economic Area – Norway, Iceland, and Liechtenstein – will presumably be bound by the new version of Safe Harbor, as they were by the old one. Israel also allowed data transfers to US Safe Harbor companies and will probably allow transfers to companies participating in the new EU-US Privacy Shield program. Switzerland adopted its own Safe Harbor program modeled on the EU-US Safe Harbor Framework and is likely to do the same with the new program. It is even possible that other countries with comprehensive data protection laws ultimately could take the approach of approving data flows to the US subject to corporate and governmental Privacy Shield commitments.
  • Will the EU demand similar undertakings from other business partners? Surely, the privacy risks are no less significant when personal data flows from the EU to Russia, China, Iran, or a host of other countries with sophisticated surveillance capabilities and, in many cases, much less transparency and legal recourse. If the governments of those countries are not willing to make similar commitments, will the EU try to block data flows? Will it even ask? Or is the EU approach simply to seek improved privacy protections among the more like-minded liberal democracies?
  • Will the EU apply similar standards for government surveillance in its own territory? In the UK, Parliament is currently revisiting its oversight structure for the GCHQ, a counterpart of the American NSA. Will the resulting scheme parallel in some respects the “EU-US Privacy Shield?” Will similar forms of transparency and redress move forward in Germany, the Netherlands, Denmark, and other countries that reportedly share communications intelligence with the NSA? The recent terrorist attacks in Paris have actually produced broader authorizations for surveillance by French and Belgian police and intelligence agencies, and that trend is reflected in other European countries worried about Islamic State sympathizers and a massive wave of refugees from Syria, Libya, Afghanistan, and other hotbeds of “radical Islam.” The governing parties in several EU member states, such as Poland and Hungary, seem to be riding a wave of nativist reaction to perceived external threats. Will they agree to conform to an EU consensus on the proper limits of electronic surveillance? Is there such a consensus?

The oversight of national intelligence agencies in a democracy is a complex issue of great public importance, one that must necessarily evolve with changing technologies. In this case, that evolution has suddenly accelerated, not through comprehensive public or legislative debate but as the result of a single court decision about storing Europeans’ social media posts on American servers.  Even in a digital world, sometimes the tail wags the dog.

Businesses Take Heed: FTC’s Recent Report, Conference Signal Big Data’s the Big Deal in 2016

FTC Kicks Off New Year with New Report on Growing Use of Big Data Analytics Across All Industries

Less than a week into 2016, the Federal Trade Commission (“FTC” or “Commission”) released a new report with recommendations to businesses on the growing use of big data. The report, “Big Data: A Tool for Inclusion or Exclusion?  Understanding the Issues” (“Report”), is based primarily on the FTC’s synthesis of the numerous discussions and written public comments submitted in connection with FTC’s September 2014 public workshop exploring the use of big data and its impact on American consumers, as well as a prior FTC seminar on alternative scoring products.  The primary purpose of the Report is to ensure that businesses’ use of big data analytics, while producing many benefits for consumers, avoids outcomes that may be exclusionary or discriminatory, in particular with respect to low-income and underserved populations.

Big Data Defined

“Big data,” as the Report explains, refers to a confluence of factors, including: (i) the nearly ubiquitous collection of consumer data from a variety of (primarily, but not exclusively) online sources, whether by shopping, visiting websites, paying bills, connecting with family and friends through social media, using mobile applications, or using connected devices (e.g., fitness trackers, smart televisions, etc.); (ii) dramatic reductions in the cost of data storage; and (iii) the growing availability of powerful new data processing capabilities, which can analyze data, draw connections, and make inferences and predictions with ever-increasing speed and accuracy.

Big Data: Benefits and Risks

In the Report, the Commission acknowledges full well that the era of big data has arrived, and that the role of big data is growing rapidly across virtually all industries, which, in turn, is producing tremendous benefits for both businesses and consumers, as well as for society as a whole. At the same time, advocates, academics, and others have raised concerns about whether certain uses of big data analytics may harm consumers, violate consumer protection or equal opportunity laws, or more generally detract from the core values of inclusion and fairness.  Thus, a central focus of the Report is to examine the intersection of big data’s benefits and risks for businesses and consumers alike, in an effort to avoid discriminatory data use (which was a key topic at the recent FTC conference, PrivacyCon, discussed below).

The Report sets forth a number of improvements to society that are made possible through big data analytics. In addition to more effectively matching products and services to consumers, big data can create opportunities for low-income and underserved communities, for example, by:

  • increasing educational attainment for individual students;
  • providing access to credit using non-traditional methods;
  • providing healthcare tailored to individual patients’ characteristics;
  • providing specialized healthcare to underserved communities; and
  • increasing equal access to employment.

At the same time, the Report notes that some researchers and others have expressed concern that the use of big data analytics may result in the exclusion of certain populations from the benefits that society and businesses have to offer, based on issues related to the quality of data, including its accuracy, completeness, and representativeness, as well as on whether there are uncorrected biases in the underlying consumer data. For example, businesses could use big data to exclude low-income and underserved communities from credit and employment opportunities, or to create and reinforce disparities in the pricing and availability of certain products and services.

Highlighting Numerous Laws Potentially Applicable to Big Data Practices

In addition to the potential for diminishing inclusion and fairness, the Report also makes clear that certain uses of big data could violate various existing laws governing big data practices, such as the Fair Credit Reporting Act (“FCRA”), equal opportunity laws, and the Federal Trade Commission Act (“FTC Act”). The Report attempts to guide businesses in this regard by providing an overview of the existing legal framework established by these laws.  However, the Report also emphasizes that it is not intended to identify or fill any legal or policy gaps, so businesses need to ensure they have a complete understanding of these and any other laws that may be implicated at any stage in the life cycle of big data.  To this end, the advice of counsel is strongly recommended, particularly given that the FTC reiterates throughout the Report that it will continue to monitor areas where big data practices could violate existing laws and that it will bring enforcement actions where appropriate.

Key Considerations for Assessing Compliance with Laws Identified in the Report

For businesses already using or considering engaging in big data analytics, the Report offers a number of key questions (set forth below) to consider as a starting point for assessing an entity’s compliance, or lack thereof, with the existing laws addressed in the Report. Predictably, these questions focus, to a great extent, on certain disclosures that may need to be made to consumers in connection with the use and sharing/transfer of big data (or “little” data that may become “big” data), as well as on the security of such information. Among other things, businesses should consider the following:

  • If you compile big data for others who will use it for eligibility decisions (such as credit, employment, insurance, housing, government benefits, and the like), are you complying with the accuracy and privacy provisions of the FCRA? These include obligations to (1) have reasonable procedures in place to ensure the maximum possible accuracy of the information you provide, (2) provide notices to users of your reports, (3) allow consumers to access information you have about them, and (4) allow consumers to correct inaccuracies.
  • If you receive big data products from another entity that you will use for eligibility decisions, are you complying with the provisions applicable to users of consumer reports? For example, the FCRA requires that entities that use this information for employment purposes certify that they have a “permissible purpose” to obtain it, certify that they will not use it in a way that violates equal opportunity laws, provide pre-adverse action notice to consumers, and thereafter provide adverse action notices to those same consumers.
  • If you are a creditor using big data analytics in a credit transaction, are you complying with the requirement to provide statements of specific reasons for adverse action under the Equal Credit Opportunity Act (“ECOA”)? Are you complying with ECOA requirements related to requests for information and record retention?
  • If you use big data analytics in a way that might adversely affect people in their ability to obtain credit, housing, or employment:
    • Are you treating people differently based on a prohibited basis, such as race or national origin?
    • Do your policies, practices, or decisions have an adverse effect or impact on a member of a protected class, and if they do, are they justified by a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact?
  • Are you honoring promises you make to consumers and providing consumers material information about your data practices?
  • Are you maintaining reasonable security over consumer data?
  • Are you undertaking reasonable measures to know the purposes for which your customers are using your data?
    • If you know that your customer will use your big data products to commit fraud, do not sell your products to that customer. If you have reason to believe that your data will be used to commit fraud, ask more specific questions about how your data will be used.
    • If you know that your customer will use your big data products for discriminatory purposes, do not sell your products to that customer. If you have reason to believe that your data will be used for discriminatory purposes, ask more specific questions about how your data will be used.

FTC Enforcement Actions Expected

With the FTC’s release of the big data Report on January 6, and considering that the Commission held a major conference (discussed below) just over a week later, which focused in part on the latest research and commercial trends in big data, we expect big data to be a very big deal in 2016. Further, now that the Commission has provided businesses additional substantive and actionable guidance in the realm of big data, enforcement actions from the FTC are likely forthcoming.  This is something we will continue to watch.

PrivacyCon 2016: FTC Conference Intensifies Spotlight on Big Data

On January 14, just eight days after the FTC released its latest Report on big data (discussed above), the Commission hosted PrivacyCon, a first-of-its-kind multi-stakeholder conference focused on the latest research and commercial trends in big data, consumer privacy, and related areas. PrivacyCon featured presentations and discussions by a diverse group of stakeholders, including white-hat researchers, academics, industry representatives, consumer advocates, and government regulators.

The original research presented at the day-long event centered on issues such as how consumers’ understanding of online privacy squares with the privacy options they are actually given, tools for analyzing how consumers’ information is shared and used online, and the effectiveness of programs to track security vulnerabilities. Within this broader context, the FTC put big data on center stage for the second time in just over a week when Commissioner Brill kicked off an afternoon session on big data and algorithms, during which several academics presented research and findings on the importance of transparency tools in revealing and avoiding data discrimination. The slide deck used during this session can be found here.

FTC Releases Policy Statement and Business Guides on Native Advertising

Just before 2015 came to an end, the Federal Trade Commission (“FTC”) released its much anticipated Enforcement Policy Statement on Deceptively Formatted Advertisements (“Policy Statement”) along with informal, practical guidance for businesses titled “Native Advertising: A Guide for Businesses” (“Business Guides”). The FTC first began considering native advertising in a December 2013 workshop. Native advertising is digital media content that blurs the line between advertising and editorial by inserting paid content into the regular stream of media.[i] Unsurprisingly, the FTC concluded at the workshop that misrepresenting the source of content or failing to disclose that it is commercial in nature likely amounts to a violation of Section 5 of the FTC Act, but the FTC did not further elaborate. The Policy Statement reiterates this general concept and goes on to clarify the FTC’s position on native advertising by tying it to longstanding FTC principles across varying spheres (from door-to-door sales to telemarketing to CAN-SPAM and from infomercials to advertorials in traditional print media). The Business Guides expand on the Policy Statement and provide detailed guidance (including 17 examples) on when and how to prevent consumer deception in the digital advertising space. Generally, the position taken by the FTC is not particularly shocking as it is consistent with the established truth-in-advertising standard that the commercial nature of content must be readily apparent or accompanied by a clear and conspicuous disclosure.[ii] The specificity of the requirements set forth in the Business Guides, however, may pose significant creative and technical hurdles for advertisers. Industry groups, like IAB, have already expressed concern. Thorough discussions of the Policy Statement and the Business Guides, as well as key takeaways, follow.


In the Policy Statement and Business Guides, the FTC explains that it will likely find a violation of Section 5 of the FTC Act if commercial content misleads consumers into believing that it is impartial, independent, or from a source other than the advertiser. Such a misrepresentation, the FTC explains, is material as it is likely to affect the credibility the consumer gives to the content and the consumer’s decision to interact with the content. To avoid misleading consumers in this way, advertisers must consider the following two questions:

  1. Is the content clearly an advertisement?
  2. If not, have sufficient disclosures been included?

Is the content clearly an advertisement?

The FTC explains that certain materials “may be so clearly commercial in nature that they are unlikely to mislead consumers even without a specific disclosure.” However, if the content promotes the advertiser’s product/service and implies that it is anything but an ad (i.e., if it implies that it is news, informational, or educational), the FTC is likely to find a violation of Section 5 of the FTC Act unless appropriate disclosures are included. As always, the FTC will look at the “net impression” of the ad to determine whether the commercial nature of the content is clear to the “reasonable” consumer.

In making this evaluation, the FTC will consider the overall appearance of the ad itself (and verbal/audio content in the case of multimedia), as well as the similarity of the ad’s style/formatting to the surrounding non-advertising content and whether the substantive content is distinguishable from the surrounding non-advertising content. Simply put, the more similar the ad is to the publisher site’s content, the more likely it is that a disclosure will be required.

The Policy Statement sets forth a handful of factors the FTC will consider in determining whether the consumer will recognize the content as an advertisement:

  • On what media is the content featured? The FTC acknowledges that consumers have different expectations depending on the media being consumed (e.g., social media versus a news website).
  • Who is the target audience? For example, if children are the target audience, they are unlikely to recognize something as an ad.
  • Does the substantive content of the ad differ from the surrounding content? For example, an ad inviting a consumer to take a car test drive in a news stream is likely recognizable as an advertisement.
  • Is the format of the ad similar in written, spoken or visual style (including text, images, audio, and video) to the non-advertising content? For example, does the content look like an editorial article on a news site? Conversely, is the content set apart using background shading, borders or other visual cues to indicate that it is an ad?

The Business Guides provide additional color to the FTC’s position via the examples. For instance:

  • If the context makes it clear that the content is advertising, prior disclosure may not be necessary (but since the content is shareable, a disclosure within the content will likely be required). For example, social media users most often know whom they follow, so the commercial nature of an ad appearing in a user’s own feed will likely be clear to that user and prior disclosure may not be necessary; that same content shared by the user to another feed or platform, however, would likely require disclosure within the content.
  • Linked content must contain appropriate disclosures. For example, even a non-paid search engine result that links to advertising content must make the commercial nature of that content clear. The Business Guides use an example of a non-paid search engine result displaying a link to a video that describes how to build a deck in five minutes. The video is sponsored by an advertiser that sells deck stain, and the advertiser’s deck stain is used in the video. The FTC recommends that the search engine result itself (not just the page a user lands on after clicking the link) make clear that the video is commercial in nature.
  • If editorial content is paid for by an advertiser and such content does not tout the benefit of the advertiser’s products/services it need not be labelled as advertising. On the other hand, paid content that appears editorial/informational, but touts the benefits of or otherwise promotes the advertiser’s products/services in any way must be labelled as an ad.
  • Content that is clearly advertising (e.g., a billboard on a street in a video game, product placement) need not be labelled as advertising. However, if that same content is clickable and the consumer is taken out of the publisher site (i.e., the video game) to an advertiser’s property (e.g., web site or app) or an advertisement, a disclosure would be necessary before the consumer clicks through to the content.

If it is not clearly an advertisement, have sufficient disclosures been included?

If something is not clearly an advertisement and disclosures are required to avoid deception, the disclosures must follow the FTC’s established disclosure principles. In other words, they must be clear, unambiguous, prominent, and in close proximity to the ad. Further, audio disclosures must be read slowly and use words that are easy to understand, and video disclosures must be on screen long enough to be noticed, read, and understood.

In addition to those general principles, the Business Guides contain specific guidance on how to make sufficient disclosures. For instance:

  • The disclosure must be near the ad’s focal point so that a consumer will see it and easily identify the content to which the disclosure applies, e.g., the disclosure must be on the video or thumbnail itself rather than in the text description, the disclosure must be to the immediate left of a headline, the disclosure should not come after the user views the commercial content, the disclosure should not be made too early, etc.
  • The disclosures must travel with the ad content if it is shared/linked, e.g., in the title tag for a non-paid search result, in the beginning of the URL for a shared URL, and in the content itself rather than text that accompanied the content on the publisher site.
  • The disclosure must be readable and understandable on any device or media through which the content is accessible, e.g., multimedia may require audio disclosure for a consumer to be able to read/comprehend the disclosure.
  • The disclosure must use terminology that is consistent with the language on the particular platform/media.
  • The disclosure must use plain language, rather than legalese or marketing “jargon”.

Key Takeaways

  • If the source of the advertisement is unclear (i.e., if the content expressly or impliedly misleads the consumer into believing that it is not an advertisement), the FTC will consider an ad deceptive even if the underlying claims are truthful and non-misleading.
  • If something is so clearly an advertisement that no reasonable consumer would think otherwise, then a disclosure may not be necessary. If the content in question does not fit precisely within one of the FTC’s enumerated examples, however, it is likely best to err on the side of disclosure.
  • Acceptable disclosures include labels like “Ad,” “Advertisement,” “Paid Advertisement,” and “Sponsored Advertising Content.” Anything slightly more vague, such as “Promoted,” “You might like,” or mere inclusion of the advertiser’s name/logo, is likely insufficient, per the FTC. Further, “sponsored by [x]” or “promoted by [x]” is also unacceptable as a disclosure for a native ad because consumers may not understand that the advertiser influenced/created the content, according to the FTC.[iii]
  • Since ads may be accessed in various ways (e.g. via web search, linking, or sharing in addition to direct access from the publisher site), it will often be insufficient to include disclosures only on the publisher site. In other words, content that can be republished must contain clear and conspicuous disclosures that travel with the content.
  • Similarly, the ad content that a consumer accesses after clicking through may often require a disclosure in addition to the disclosure on the publisher site.
  • The target audience must be considered. For example, is the content targeted to children or to adult social media users that know whom they follow and how ads appear in their feed? The required disclosures may vary depending on the target audience, but the FTC did not elaborate on the different disclosures that would be required for different audiences. Specifically, the FTC wrote: “To the extent that an advertisement is targeted to a specific audience, the Commission will consider the effect of the ad’s format on reasonable or ordinary members of that targeted group.”
  • The Policy Statement and Business Guides apply across all media, including, without limitation, news/content aggregator sites, social media platforms, messaging apps, product review sites, email, user generated and professionally produced videos, video games, and animations.
  • All parties who help to make or publish content (e.g., ad agencies, affiliate advertising networks) are on the hook—not just the advertiser itself.

Enforcement actions from the FTC are likely forthcoming and this is something we will continue to watch.

[i] For additional background on native advertising see here and here on our blog.

[ii] For additional discussion of the FTC’s disclosure standards see here and here on our blog.

[iii] If an advertiser sponsors purely editorial content that does not promote the advertiser’s products/services in any way and has not been created or influenced by the advertiser, then it is likely acceptable (but not required) to include “sponsored by”, “promoted by” or similar.

Privacy and Ed-tech in 2016

There was a lot of legislative movement for the educational technology (ed-tech) industry in 2015, with states placing additional privacy regulations on the industry, and the effects of those new acts should be felt this year. The states that passed this type of legislation in 2015 were following California’s lead: California’s governor signed the Student Online Personal Information Protection Act (SOPIPA) (2014 Cal SB 1177) back in 2014. Even though these states enacted legislation after SOPIPA, at least one of their acts came into effect before SOPIPA became operative (on January 1, 2016).

FTC Settles Advertising-Related COPPA Charges Against Two App Developers

This week, the FTC announced settlements with two developers of children’s apps that it claimed had failed to comply with the Children’s Online Privacy Protection Act (“COPPA”). While any COPPA enforcement by the FTC is noteworthy, these cases are particularly interesting in that they are the FTC’s first COPPA enforcement actions based on allegations that a child-directed online service allowed advertisers to collect and use persistent identifiers for the purpose of targeting advertisements.


Caveat Venditor: FTC Amends Telemarketing Sales Rule to Enhance Anti-Fraud Protections and to Update and Clarify Several Key Provisions Relating to the National Do Not Call Registry

On November 18, 2015, the Federal Trade Commission (FTC) released a final rule setting forth a number of key amendments to its Telemarketing Sales Rule (TSR).  [FN1]  Specifically, in response to changes in the financial marketplace, the final rule prohibits the use of certain payment methods in telemarketing.  In addition, and of likely much greater significance to the vast majority of legitimate, compliance-focused telemarketers operating today, the TSR amendments update and clarify several provisions related to the National Do Not Call (DNC) Registry, including those concerning:  (i) the telemarketing exemption for calls to businesses; (ii) demonstrating the existence of an “established business relationship” with the customer; and (iii) sellers/telemarketers sharing the cost of DNC Registry fees.  The FTC also made several important clarifications regarding a consumer’s right to be placed on entity-specific do-not-call lists, as well as with respect to call recordings made to memorialize a customer’s “express verifiable authorization.”  In light of these recent changes to the TSR, even the most reputable and well-intentioned sellers and telemarketers will need to reexamine their current practices to ensure compliance with the final rule.

Certain Telemarketing Payment Methods Prohibited

The weighty, 136-page final rule focuses, to a great extent, on further protecting consumers from fraud by banning the use of four types of “novel payment methods” in telemarketing, namely, remotely created checks, remotely created payment orders, cash-to-cash money transfers, and cash reload mechanisms.  [FN2]  According to Jessica Rich, Director of the FTC’s Bureau of Consumer Protection, these are the “payment methods that scammers like, but honest telemarketers don’t use.”  For this reason, the FTC’s amendments in this regard were purposely and narrowly tailored to prohibit just these specific payment methods, thereby allowing for innovations with respect to other payment methods that are used by legitimate companies.

Do-Not-Call Amendments and Clarifications

Of greater consequence to most legitimate businesses engaged in telemarketing activities (i.e., sellers and/or the telemarketers that may be acting on their behalf) are the FTC’s amendments and clarifications with respect to the DNC Registry and other do-not-call requests made by consumers, as follows:

  • Established Business Relationship. If a consumer’s number is on the DNC Registry, the revised TSR expressly states that sellers or telemarketers bear the burden of demonstrating that they have (i) an established business relationship with the person or (ii) the person’s express written agreement to receive calls. [FN3]
  • Business-to-Business Exemption. The business-to-business exemption extends only to calls to induce a sale to, or contribution from, a business entity; it does not cover soliciting employees at their places of business to make personal charitable contributions or to purchase goods or services for their individual use.
  • Entity-Specific Do-Not-Call Lists.
    • The amended TSR illustrates the kind of burdens that would illegally interfere with a consumer’s right to be placed on a seller’s/telemarketer’s entity-specific do-not-call list. For example, impermissible burdens include, among others, harassing consumers who make such a request, hanging up on them, requiring the consumer to listen to a sales pitch before accepting the request, and assessing a charge or fee for honoring the request.
    • Further, the TSR now specifies that if a seller or telemarketer does not get the information needed to place a consumer’s number on their entity-specific do-not-call list, the business is disqualified from the safe harbor for isolated or accidental violations.
  • DNC Registry Cost Sharing. The revised TSR emphasizes that no person may participate in any arrangement to share the cost of accessing the DNC Registry, including any arrangement with any telemarketer or service provider to divide the costs to access the registry among various clients of that telemarketer or service provider.

Finally, in addition to the do-not-call modifications highlighted above, the FTC also sought to make its enforcement policy more transparent by clarifying that any call recording made to memorialize a customer’s or donor’s “express verifiable authorization” must include an accurate description, clearly and conspicuously stated, of the goods or services or charitable contribution for which payment authorization is sought.  Such “express verifiable authorization” must be obtained by sellers and telemarketers prior to billing for telemarketing purchases or donations if payment is not made by credit or debit card.  [FN4]

*The DNC Registry and other do-not-call provisions will take effect 60 days after the final rule is published in the Federal Register.  The “novel payment method” prohibitions become effective 180 days after Federal Register publication.


[FN1]     The FTC promulgated the original TSR in 1995 and subsequently amended it in 2003 and again in 2008 and 2010 to add, among other things, provisions establishing the National Do Not Call Registry and addressing the use of pre-recorded messages and debt relief offers.  In general, the TSR requires telemarketers to make specific disclosures of material information; prohibits misrepresentations; sets limits on the times telemarketers may call consumers; prohibits calls to a consumer who has asked not to be called again; and sets payment restrictions for the sale of certain goods and services.

[FN2]     The FTC distinguishes these four prohibited payment methods from “conventional payment methods” (e.g., credit cards and electronic fund transfers, such as debit cards), which are processed or cleared electronically through networks that can be monitored systematically for fraud.  Further enhancing the security of conventional payment methods is the fact that they are subject to federal laws that provide statutory limitations on a consumer’s liability for unauthorized transactions and standard procedures for resolving errors.  Detailed definitions of these four banned payment methods are set forth in the Federal Register notice announcing the final rule, at Section I.B.1.

[FN3]     For purposes of this section of the TSR, the term “signature” includes an electronic or digital form of signature, to the extent that such form of signature is recognized as a valid signature under applicable federal law or state contract law.

[FN4]     See 16 CFR 310.3(a)(3).

An Overview of the NY AG’s Demands On FanDuel and DraftKings

Anyone who has turned on their television in the past year is likely familiar with the business model of DraftKings, Inc. (“DraftKings”) and FanDuel Inc. (“FanDuel”). In case you have missed the numerous advertisements, both FanDuel and DraftKings run daily fantasy sports (“DFS”) promotions where contestants pay varying entry fees for the opportunity to receive cash prizes based upon the performance of their fantasy team in a given day. DFS promotions have exploded in popularity in the blink of an eye. FanDuel was founded in 2009 and, according to the FanDuel web site, it now has over one million registered users, has paid out $560 million in 2014 alone, and has received over $350 million in venture funding. Its junior, DraftKings, founded in 2012, promises over $1 billion in payouts this year according to its web site. Both companies have brokered partnerships with professional sports leagues and teams as well as other industry leaders.

Recently, however, DFS companies, including, most prominently, FanDuel and DraftKings, have come under fire due to allegations (among other reasons not discussed in this post) that DFS constitute illegal gambling under certain state lottery and gambling laws and, also, the federal Unlawful Internet Gambling Enforcement Act of 2006 (“UIGEA”).[i] DraftKings asserts on its website that it is “100% legal” because 45 of 50 states[ii] and the federal government consider one-day daily fantasy sports a legal game of skill, but both federal and state regulators and enforcement agencies are beginning to question this assertion. Most recently, on November 10, 2015, the New York State Attorney General (“NY AG”) issued cease and desist letters to DraftKings and FanDuel demanding that both parties stop accepting entry fees (or “wagers”, according to the NY AG) from New York residents. The NY AG’s action follows closely behind an October 15, 2015 Nevada Gaming Control Board (“NGCB”) notice which asserted that DFS constitute gambling under Nevada law and, therefore, must be licensed by the NGCB. Further, according to a recent Wall Street Journal report, both companies are under investigation by the FBI and Department of Justice to determine whether DFS violate the UIGEA.


As background, a promotion will be found to be an illegal lottery, i.e., illegal gambling, if all of the following three elements are present:

  1. Prize;
  2. Mandatory consideration (e.g., payment of money, purchase of a product, etc.); and
  3. Chance.

A private sector company that wants to offer a prize promotion must remove one of the three lottery elements to avoid violating state lottery and gambling statutes. Determining the presence of prize and consideration is usually relatively straightforward, whereas it may be more difficult to ascertain whether chance is present. In part, this is because states take a wide range of approaches to determine what constitutes a skill contest versus a game of chance. The tests used by states generally fall into one of the following four categories: “pure chance”, “predominant element”, “material element”, and “any chance”.  Each test can be summarized as follows:

  • Pure Chance Test. A minority approach is the “pure chance” doctrine under which a promotion must be solely based on chance to be an illegal lottery.  The exercise of any skill by a participant removes the promotion from within the definition of a lottery and it will instead be viewed as a game of skill.
  • Predominant Element Test.  The majority of jurisdictions apply the “predominant element” test. This test evaluates the relative degree of skill and chance present in the game; if the element of skill predominates over chance, then the game is likely permitted.
  • Material Element Test.  In several jurisdictions, a promotion will be prohibited if chance plays a “material” role in the outcome. A jurisdiction using this test would prohibit wagering on the game if chance has more than a mere incidental effect on the game, even if skill primarily influences the outcome of the game. This is a stricter standard than the “predominant element” test and makes it more difficult to offer skill-based gaming if the games in question include a chance component in determining the outcome. New York adheres to the material element test.
  • Any Chance Test. A small number of jurisdictions apply the “any chance” test. In other words, if any chance is present in the promotion and there is a charge to play, then the promotion becomes an illegal lottery.  Certain (but not all) of the Excluded States adhere to the “any chance” test.

Further, certain states (e.g., Montana) explicitly prohibit fantasy sports or, more generally, pay to play contests by statute.

With respect to federal law, the UIGEA generally prohibits online gambling but makes an exception for online fantasy sports where (among other requirements) the “winning outcomes reflect the relative knowledge and skill of the participants and are determined predominantly by accumulated statistical results of the performance of individuals (athletes in the case of sports events) in multiple real-world sporting or other events.”[iii]


In the cease and desist letters, the NY AG concluded that both FanDuel and DraftKings run gambling operations that are illegal under NY law (pursuant to N.Y. Penal Law § 225.00 et seq. and N.Y. Const. Art. I, § 9) and demanded that both cease and desist from accepting “wagers” from NY residents. Under NY law, “a person engages in gambling when he stakes or risks something of value upon the outcome of a contest of chance or a future contingent event not under his control or influence” and a “contest of chance” exists where winning or losing depends on elements of chance to a “material degree” (i.e., NY adheres to the “material element” test discussed above) (N.Y. Penal Law § 225.00(2), see also N.Y. Gen. Bus. Law § 369-ee). Under NY law, those who operate illegal gambling enterprises are subject to criminal prosecution, but the individual gamblers are not.

The NY AG distinguished DFS from traditional fantasy sports stating: “participants in traditional fantasy sports conduct a competitive draft, compete over the course of a long season, and repeatedly adjust their teams. They play for bragging rights or side wagers, and the Internet sites that host traditional fantasy sports receive most of their revenue from administrative fees and advertising, rather than profiting principally from gambling.” The letters allege that, unlike traditional fantasy sports, FanDuel and DraftKings are engaged in illegal gambling because the fees paid by participants are actually bets that “depend on the real-world performance of athletes and numerous elements of chance” and because each party in its respective business controls the prize amounts, sets the variables (e.g., player “salaries”), advertises easy game play with big payoffs, stresses the lack of long term strategy needed, and profits directly from the wagers (i.e., by taking a cut of the entry fees paid by participants). Also for these reasons and because “a small number of professional gamblers profit at the expense of casual players”, the letters assert that DFS are more akin to (and, therefore, pose the same public health and economic risks as) poker and traditional gambling.

Notably, the letters point out that Washington state’s definition of gambling is substantially the same. Like NY, Washington adheres to the “material element” test. As noted herein, Washington is (and always has been) one of the Excluded States for both companies and Washington state regulators and its Attorney General’s office have consistently challenged the legality of all fantasy sports (traditional and DFS).

Finally, the NY AG letters lay the groundwork for bringing a lawsuit against each company for their advertising campaigns, which the letters allege deceive and mislead consumers as to the prospects of winning large sums of money.

Both parties vehemently deny that their activities constitute illegal gambling and each quickly stated their opposition to the NY AG’s position. On Friday morning, both FanDuel and DraftKings filed actions seeking declaratory and injunctive relief in the New York County Supreme Court.[iv] At least for the time being, however, FanDuel has stopped accepting deposits from NY residents.


Given the high level of market exposure that FanDuel and DraftKings have experienced recently, it is not surprising that they are the subject of multiple enforcement investigations. It remains to be seen whether DFS companies will be able to persuade regulators and enforcement agencies that DFS are in fact games of skill that do not constitute illegal lotteries. This is something that we will continue to watch.

[i] Generally speaking, the provisions of the UIGEA are applicable only to payment processors.

[ii] To date, both FanDuel’s Terms of Use and DraftKing’s Terms of Use prohibit participation in each of their respective promotions by residents of Arizona, Iowa, Louisiana, Montana, Nevada and Washington (“Excluded States”). Nevada was added to the list of Excluded States on both parties’ web sites shortly after the October 15, 2015 issuance of the notice by the NGCB.

[iii] A bill recently introduced in Illinois (HB 4323 and SB 2193) that aims to exempt fantasy contests from the state’s criminalization of gambling contains nearly identical language.

[iv] DraftKings’ filing also includes numerous other causes of action based on, among other things, violations of the U.S. and NY Constitutions.

Come hear Jamie Rubin and Heather Nolan speak on hot topics in marketing law at the BAA Conference in Chicago on November 10th & 11th

On November 10, 2015, Partner Jamie Rubin will moderate a panel on The Tricky Terrain Of Retail Marketing at the Brand Activation Association / Association of National Advertisers Legal Conference in Chicago.

Also, on November 11, 2015, Partner Heather Nolan will present a roundtable discussion on legal compliance steps for charitable promotions at the same conference.

For those in Chicago on Tuesday, November 10th, even if you are not attending the conference, please feel free to come by our Rooftop After Hours Event between 8:00 and 10:30 p.m.  It will be at The Rooftop Lounge at the Wit Hotel (201 North State Street).


Heather Nolan to speak at Women Leaders in Advertising Law conference this week

Join Heather Nolan at ACI’s Women Leaders in Advertising Law conference this Thursday, October 22nd, at the Carlton Hotel in New York City.  She will host a panel of leading self-regulatory and in-house attorneys discussing how to make the most out of work and non-work life during this time of 24/7 availability through technology.

The panel is titled “Updating Your Status 24/7: How Women Leaders in the Increasingly Demanding Advertising and Marketing Space Can Make the Most Out of Both Life and Career.” To register, click on this link.