What's in Google's SaaS Contract with the City of Los Angeles? Part Two.

This blog post is the second in our series analyzing the terms of Google and Computer Sciences Corporation’s (“CSC”) Cloud contract with the City of Los Angeles. In Part One, we looked at the information security, privacy and confidentiality obligations Google and CSC agreed to. In this installment, we will focus on terms related to compliance with privacy and security laws, audit and enforcement of security obligations, and incident response.

As a reminder, the City of LA will ultimately be entering into two contracts as part of this SaaS deal: one with CSC (which is acting as a reseller/implementer of Google’s SaaS solution), the “CSC Contract,” and one with Google itself for the SaaS services, the “Google Contract.” We look at the provisions of both contracts.

Compliance with Privacy and Security Laws

The CSC Contract

In sub-section 5.1.5 of the CSC Contract, each party represents and warrants that it will comply with certain laws as follows:

[each party represents and warrants that] it shall comply with all applicable federal, state, local, international or other laws and regulations applicable to the performance by it of its obligations under this Contract . . .

Appendix A contains a similar provision. While this warranty does not explicitly reference privacy or data security laws, it is broad enough to encompass them. As discussed further below, because of some quirks in data security and privacy laws related to distinctions between “service providers/processors” and data owners/licensees/controllers, some might argue that the scope of this obligation is limited for service providers.

The Google Contract

In section 7.5, Google agrees to comply with applicable state or federal security breach notice laws in the event of a Security Breach (a defined term we will be coming back to in the final installment of this series). Note that for most breach notice laws, Google would argue that it is a service provider (rather than a licensee or owner of personal information), and its only obligation is to provide notice to the City of LA of a security breach (and LA would have to provide notice to impacted individuals).

However, in the Google Contract, it does not appear that Google made any specific promise to comply with privacy or data security laws that require specific controls or security measures. In fact, section 2.4 of the Google Contract requires the City of Los Angeles to comply with applicable privacy laws and regulations. In essence, if LA violates a privacy law it might be in breach of the Google Contract, but the same would not be true if Google violated a privacy law (despite the fact that Google is the party actually storing and processing personal information).

These regulatory compliance terms are always tricky in the data security and privacy contracting context. The problem arises from the relationship of the parties. For a given data security law, the data owner or controller may have the primary obligation to comply with that law. This is the case under the EU Data Protection Directive: as explained here, data controllers have the primary obligation to comply with the Directive, while processors may take on certain compliance obligations through their contracts with data controllers. Some service providers may take the position that any agreement to comply with “applicable laws” means only those laws that are directly applicable to the service provider (and in the case of the EU Directive, their position would be that the Directive is directly applicable not to them, but rather to the data controller).

The problem for data owners arises when they cede the storage, processing and transmission of personal information to service providers: if the service provider’s practices and security controls violate a data security law applicable to the data owner, the data owner could be held liable. Thus, data owners entering into a contract like the Google Contract would prefer to have the service provider agree to comply with all laws applicable to the data owner and the personal information the data owner is working with. In contrast, service providers may take the position that it is the data owner’s responsibility to vet the service provider’s practices and controls to make sure they comply with laws applicable to the data owner.

Of course, if a service provider wants to serve a data owner that it knows is subject to certain laws, it does not seem unreasonable to expect the provider to have addressed those laws. Unfortunately, this may not always be the case in the Cloud, where offerings are standardized and built to scale to many different customers. At some point, to counter this, we may see competition between service providers offering CaaS (“Compliance as a Service”). So a Cloud provider serving the financial industry might design its Cloud offerings to be compliant with GLB security requirements, and one serving retailers might promise to comply with the PCI Standard (in fact, this is already happening with the PCI Standard).

In the case of the Google and CSC contracts, the privacy and data security legal compliance provisions are thin. The Google Contract does not appear to contain such an obligation despite the fact that Google will be storing, processing and transmitting LA’s information. While the CSC Contract does have a provision that appears fairly broad, CSC, as the implementer of the SaaS offering, likely has more limited data handling activities than Google. Moreover, the provision cited above references laws applicable to CSC’s performance under the contract. Both sides would have arguments as to whether this language requires CSC to comply with privacy and security laws applicable to the City of LA.

Audit and Enforcement Terms

Getting a promise in writing from a service provider to implement certain data security controls is one thing. Being able to confirm that the service provider is actually implementing those controls, and if not, being able to make it comply, is another. The difference highlights philosophical differences as to the purpose of a contract in this context. Is the purpose of the contract to establish security duties and provide remedies if those duties are breached, allowing for potential recovery after the fact, or is it to proactively prevent a security breach or legal non-compliance in the first place? When it comes to data security and privacy, and the legal compliance obligations related thereto, I contend that the latter applies. A customer is in a much better position if it can assess its service provider’s security and address weaknesses in order to (hopefully) prevent a security breach or legal non-compliance, rather than waiting for a breach to happen and suing for damages. This is especially true for long-term contracts, and for contracts where the service provider has significantly limited its liability (the topic of the next blog post on the Google/LA contracts).

The CSC Contract

Section 11.2 of the CSC Contract provides the City with certain rights to review and audit CSC’s information security program. The City may review CSC’s security program before commencement of the services and from “time to time” during the term of the contract. The audit may be an on-site audit, or at CSC’s option, CSC may instead complete a security audit questionnaire. In addition, CSC is required to perform an annual “SAS 70 or equivalent” audit of Google’s information security program and provide it to the City upon written request. Significantly, CSC must implement any “required safeguards” identified by the City or by information security program audits.

On its face, the contract provides strong audit and enforcement rights. The City has multiple audit options and no limits in terms of timing or frequency. On the enforcement front, CSC has an obligation to implement required safeguards identified by the City (while this may be the intent, I would have clarified that CSC could not charge LA for implementing those safeguards). However, it is not clear to what extent CSC’s security program is relevant. Again, it appears that it will mainly be Google storing, processing and transmitting LA’s data (although admittedly CSC could have a larger role with respect to data processing, that is just not clear from the contract or my limited reading of it; as far as I can tell, CSC may hold some LA data while testing the SaaS set-up, and perhaps some back-up data).

CSC’s obligation to perform a SAS 70 or equivalent audit of Google is interesting and probably more relevant to the risk faced by LA. However, the usefulness of a SAS 70 audit, especially a SAS 70 Type I, is debatable. It may be more useful to peg compliance to an established security standard like ISO 27001/2, which has an international scope and was developed by an outside standards body (rather than resting on the subjective judgment of the service provider). That said, if the City can have some input into the design of the SAS 70 audit, the audit may be more useful (although that right does not appear in the contract). Even more powerful, however, would be the ability of the City to directly audit Google. If that right existed, it would have to be in the Google Contract.

The Google Contract

Under the Google Contract, the City does not appear to have any right to conduct an audit or security assessment of Google’s security program. While it is possible that CSC’s SAS 70 audit will discover security program problems, the Google Contract does not have any provision requiring Google to implement controls to address those problems. The City of LA’s only remedy if deficiencies are found would be to threaten an action for breach of contract or termination of the Google Contract (notably, however, the City of LA does not have an explicit right to terminate the Google Contract for convenience). The bottom line is that with respect to Google, the City has limited audit and enforcement rights. This is the case even though Google will be the primary party handling LA’s data.

Incident Response Contract Terms

When it comes to data security and privacy and the use of service providers, I often advise my clients to think of their service providers’ security as an extension of their own. The same concept holds true for incident response. The key issue here is understanding how the service provider and customer will communicate, coordinate and mitigate when the service provider suffers a security breach exposing the customer’s data.

As a first step, as part of its due diligence, the customer should carefully review the service provider’s incident response plan and policies. Key issues to consider include whether the provider has monitoring and detection capabilities (IDS, log reviews, etc.), the information the provider captures that may relate to a breach, how breaches are categorized, the escalation rules that result in key stakeholders learning of the breach, and so on. In many cases, if the service provider’s procedures are adequate, a contractual requirement to follow those procedures would be appropriate. If there are weaknesses and gaps, specific obligations and controls can be imposed in the contract.
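
To make the “escalation rules” point concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of severity-to-escalation mapping a customer might look for when reviewing a provider’s incident response plan. Every severity level, stakeholder and deadline below is a hypothetical assumption, not a term drawn from the Google or CSC contracts.

```python
# Illustrative only: hypothetical severity levels, stakeholders and deadlines,
# not terms from the Google or CSC contracts.
from dataclasses import dataclass

@dataclass
class EscalationRule:
    severity: str        # how the provider categorizes the breach
    notify: list         # stakeholders who must learn of the breach
    deadline_hours: int  # how quickly they must be notified after detection

ESCALATION_RULES = [
    EscalationRule("critical", ["provider CISO", "customer security contact"], 2),
    EscalationRule("high", ["provider security team", "customer security contact"], 12),
    EscalationRule("low", ["provider security team"], 72),
]

def rule_for(severity: str) -> EscalationRule:
    """Return the escalation rule that applies to a given breach category."""
    for rule in ESCALATION_RULES:
        if rule.severity == severity:
            return rule
    raise ValueError(f"unknown severity: {severity}")

# Example: a "critical" breach must reach the customer's security contact
# within 2 hours under these (hypothetical) rules.
print(rule_for("critical").notify)
```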

However, even if the service provider’s internal incident response policy is solid, there is still a need to consider how the service provider and customer will coordinate if the provider suffers a breach. The Cloud contract should address this issue.

The CSC Contract

The CSC Contract does not appear to explicitly contain any security incident response obligations owed by CSC. Such obligations may be generally covered under the contract’s SLA, which references various Severity Levels, but those Severity Levels relate to events discovered by the City and reported to CSC. There does not appear to be any duty for CSC to report a security breach to LA. Again, since CSC’s data processing role seems limited in this arrangement, the absence of these obligations may be appropriate (i.e., CSC may not even be in a position to discover breaches that occur on Google’s systems).

The Google Contract

Section 7.5 of the Google Contract contains some sparse incident response language. First, it confirms that Google will comply with breach notice laws (most of which would require Google to provide LA with notice of a breach exposing personal information). For security breaches that do not trigger breach notice laws, the Google Contract provides as follows:

Google will notify Customer of a Security Breach, following the discovery or notification of such Security Breach, in the most expedient time possible under the circumstances, without unreasonable delay, consistent with the legitimate needs of applicable law enforcement, and after taking measures necessary to determine the scope of the breach and restore the reasonable integrity of the system.

As you might imagine, that laundry list of caveats gives Google plenty of flexibility to delay notifying LA of a breach (e.g., taking measures necessary to ascertain scope can go on for some time). A better clause for a customer would impose a specific timeframe by which notice of a breach, or of a reasonably suspected breach, must be provided (e.g., 12 hours, 24 hours, etc.).
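
The practical difference is that a fixed deadline makes timeliness an objective, testable question rather than a negotiation over what was expedient under the circumstances. A minimal sketch, assuming a hypothetical 24-hour notice window (not a term of the Google Contract):

```python
# Hypothetical fixed-deadline notice clause; the 24-hour window is an
# illustrative assumption, not a term of the Google Contract.
from datetime import datetime, timedelta

NOTICE_WINDOW = timedelta(hours=24)  # e.g., "within 24 hours of discovery"

def notice_was_timely(discovered_at: datetime, notified_at: datetime) -> bool:
    """Under a fixed-deadline clause, timeliness is a simple, objective test."""
    return notified_at - discovered_at <= NOTICE_WINDOW

# A breach discovered at 9:00 a.m. and reported two days later clearly misses
# the window -- no debate over the "most expedient time possible."
print(notice_was_timely(datetime(2010, 1, 4, 9, 0), datetime(2010, 1, 6, 9, 0)))  # False
```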

Since incident response obligations may be crucial for mitigating harm caused by a security breach impacting customer data, additional contract terms should be considered (a rough sketch of how these duties might be tracked follows the list), including a duty to:

  • collect and retain certain information that might be relevant to security breaches
  • conduct a reasonable investigation of the security breach
  • contain, prevent and mitigate a security breach
  • provide notice to the customer within XX hours of discovering the breach
  • provide a written report concerning the security breach within XX days of the breach
  • collect and preserve all data and evidence concerning the security breach
  • document and detail the remedial action taken, and planned to be taken, to remediate the breach
  • allow a post-breach security assessment or audit
  • allow the customer to perform its own on-site forensic examination of the security breach
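
To illustrate how a customer might operationalize the list above when reviewing or negotiating a Cloud contract, here is a minimal sketch of a review checklist. The field names and the example deadline are hypothetical assumptions for illustration, not terms drawn from either contract.

```python
# Hypothetical contract-review checklist; field names and the example
# deadline are illustrative assumptions, not terms from either contract.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentResponseTerms:
    collects_breach_relevant_info: bool = False
    investigates_breaches: bool = False
    contains_and_mitigates: bool = False
    notice_deadline_hours: Optional[int] = None         # the "XX hours" above
    written_report_deadline_days: Optional[int] = None  # the "XX days" above
    preserves_evidence: bool = False
    documents_remediation: bool = False
    allows_post_breach_audit: bool = False
    allows_customer_forensics: bool = False

    def gaps(self) -> list:
        """Return the duties this contract fails to impose on the provider."""
        return [name for name, value in vars(self).items()
                if value in (False, None)]

# Example: a contract imposing only a 24-hour notice duty leaves many gaps.
terms = IncidentResponseTerms(notice_deadline_hours=24)
print(terms.gaps())
```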

One point worth highlighting with respect to the post-incident forensic assessment: being able to conduct an independent forensic assessment may be crucial for a customer whose data was exposed. That assessment allows for the gathering and preservation of potential evidence to support the customer’s defense in the event of a lawsuit or regulatory action. That information may also reveal whether, and to what extent, the service provider itself may have breached its contract or violated the law.

In the Cloud, however, where multiple clients may have data on a single server (e.g., multi-tenancy), conducting a forensic investigation may be more challenging. Such an investigation could reveal data belonging to the service provider’s other customers (and could even result in a breach of NDAs by the service provider), or potentially disrupt the services being provided to those customers. Yet without such rights, the customer may be completely reliant on the service provider’s forensic investigation, which arguably creates an inherent conflict of interest. Moreover, if the service provider makes a mistake and, say, fails to retain crucial relevant data, the customer could face spoliation issues in court. Unfortunately, this is another tricky issue for Cloud providers and customers to work out.

Conclusion

As you can gather from the above, the terms and conditions of Cloud contracts can become very complex, and it takes a great deal of foresight and knowledge to understand and put them together. In our next installment, we wade into the thicket of “risk of loss” contract terms, arguably the most important terms in these contracts.