There is never a dull day in privacy, cyber and data protection, and 2024 has certainly lived up to that reputation. Selecting the developments that we wanted to include in our ‘Year In Review’ has been challenging – it has been a busy year, and there is so much to choose from!
The privacy, cyber and data protection landscape continues to be complex, dynamic and rapidly evolving. 2024 saw a slew of new laws in this space come into effect around the world, multiplying the compliance burden on businesses. Of course, the rapid advancements in data-driven technologies, products and services and the development and acceleration of AI laws globally further complicate the legislative environment (and keep us on our toes!).
Alongside the proliferation of legislation, the enforcement landscape continues to develop. The speed, aggression and sophistication of cybercrime also continue to climb, with AI amplifying both the effectiveness of cybersecurity efforts and bad actors’ attempts to thwart them. Throw into the mix heightened public awareness of cyberattacks and data breaches, regulatory action and AI (all of which consistently make front-page news), and the fluctuations in government and political direction in the UK and the U.S.: it’s clear that the rate of evolution is unlikely to let up anytime soon.
We are delighted to bring you this publication, our ‘Year In Review’, ahead of ‘Data Privacy and Protection Day’ (January 28, 2025), which marks the anniversary of the signing of Convention 108, the first international legally binding treaty for privacy and data protection. We hope that you will find our ‘Year In Review’ interesting, thought-provoking and entertaining and we look forward to working with you in 2025.
Daniel Alvarez, Laura Jehl, Briony Pollard and Susan Rohol
The Privacy, Cybersecurity and Data Strategy Team
All Changes: Political Shifts and the Impact on Data Protection, Cybersecurity and AI
Briony Pollard, Kari Prochaska, and Elodie Currier
2024 was a year of great political change: in the UK, the Conservatives were defeated in the July general election by Keir Starmer’s Labour Party, and in the U.S., a second Trump administration will take office and control of the executive branch in 2025, while Republicans won narrow control of the House and Senate, paving the way for the party to implement policy initiatives across a wide range of topics. Here, we discuss the potential changes we can expect for privacy, cybersecurity and AI as a result.
UK
Data Protection. In October, the latest iteration of the UK’s proposed data protection legislation, the Data (Use and Access) Bill, was submitted to the House of Lords. It is early days for the draft law, which recycles certain concepts from its predecessor, the Conservatives’ Data Protection and Digital Information Bill. Along with GDPR-specific changes, the DUA introduces a new “smart data” scheme that allows for the sharing of, and access to, customer and business data, new digital verification services and changes to the ICO structure. The DUA is narrower in scope than the DPDI, so does not lessen the burden on businesses to the extent anticipated; however, this should not be surprising given the questions that have been raised as to the UK’s ability to maintain its adequacy decision in the eyes of the EU Commission if any legislative changes resulted in the UK offering a materially different level of data protection. The EU Commission has yet to weigh in with concerns regarding the DUA, but is due to review and determine whether the UK should maintain its “adequacy” status by June 2025.
AI. Labour was explicit in its manifesto that to “ensure the safe development and use of AI models” it would introduce “binding regulation on the handful of companies that are developing the most powerful AI models.” Though it was expected, an AI bill was not included in the King’s Speech. Labour has maintained that it intends to establish appropriate legislation, but for now has postponed the introduction of a specific bill. Labour has a challenge ahead in maintaining an environment in which the UK can thrive in the dynamic AI sector, while balancing the increasing need for specific legislation to address the considerable risks and novel harms that come with the rapid advancement of complex AI technology.
Labour has proposed a Regulatory Innovation Office to consolidate strategic regulatory planning in relation to AI and tech into one central body to promote transparency, accountability and consistency, and improve efficiency in decision-making and approval processes for innovative products and services. Labour has also indicated that it intends to make the currently voluntary testing of AI model capabilities by the AI Safety Institute mandatory for certain types of AI models, and to oblige larger organizations to share required data with the AI Safety Institute, by placing these assessments on a statutory footing.
Cybersecurity. The cybersecurity threat landscape is constantly evolving; rapid developments in AI contribute to the volume, and heighten the impact, of cyberattacks. Unsurprisingly, then, Labour introduced the Cybersecurity and Resilience Bill in the King’s Speech. The CRB will introduce requirements and powers similar to those in the EU’s proposed Cyber Resilience Act to report incidents such as ransomware attacks, so that more intelligence on such attacks impacting British businesses can be gathered and utilized. The CRB will also give regulators greater powers to require organizations to implement cybersecurity defenses and will include rules designed to protect critical national infrastructure from attackers. Nevertheless, Labour will again need to strike a balance between building the UK’s cyber resilience and burdening organizations with overly prescriptive and costly requirements.
U.S.
Privacy. Privacy legislation has seen broad support from both parties in the past, but enactment often gets bogged down in the details of key aspects like preemption and enforcement. With a Republican trifecta, however, the chances of enacting privacy legislation seem higher than before, albeit with a more business-friendly focus, and anti-censorship and parental rights provisions.
Data Transfer. The Trump administration’s approach to state surveillance and law enforcement powers will be closely monitored by EU data regulatory bodies and privacy watchdogs concerned about EU-U.S. data flows. Any changes, particularly to those aspects of the federal government involved in the execution and implementation of the EU-U.S. DPF, could increase the likelihood of success for parties challenging the legality of the DPF, and would likely complicate any subsequent negotiations to develop a successor regime. We may also see a greater focus on data localization given the incoming administration’s anticipated trade policies.
AI Regulation. Early announcements of agency head appointments and AI-related policy statements offer further indicia of the Trump administration’s deregulatory focus. In one of his first Executive Orders, President Trump repealed the Biden administration’s AI Executive Order—though it’s unclear how much of the work that has already been completed the new administration will actively try to “undo.”
Cybersecurity. Given reports of China-linked Salt Typhoon’s breach of U.S. telecommunications networks, including interception of communications by high-ranking U.S. government officials such as President-elect Trump, cybersecurity policy seems likely to be heavily influenced by and intertwined with China policy. This might create some tension for regulators that are otherwise inclined to deregulatory, business-friendly solutions to cybersecurity issues, as Congress and other parts of the administration focus on toughening critical infrastructure in response to geopolitical threats and other considerations.
Picking Up The Pace: U.S. State Privacy Legislation
Daniel Alvarez, Jahi Beal, and Alexandra Barczak
The push for U.S. state-level comprehensive privacy laws strengthened in 2024. Of the 19 comprehensive state privacy laws that have been enacted to date, seven were enacted in 2024: Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Rhode Island. These laws contain a number of provisions that are similar to each other and to previously enacted privacy laws. For example, each of the seven new laws provides consumers with similar data privacy rights (e.g., the right to access, delete, and opt out) and imposes similar obligations on businesses, such as transparency requirements and discrimination prohibitions. However, there remain some key differences. For example, the Minnesota Consumer Data Privacy Act includes a unique right to question the results of a controller’s profiling. Likewise, New Jersey adopted a broader definition of sensitive data compared to many other U.S. states that has implications for how and when companies may process and disclose such information; and the Maryland Online Data Privacy Act completely prohibits the selling of sensitive data without exception. All of these newly enacted laws will take effect in 2025, except for those in Kentucky and Rhode Island, which take effect in 2026.
Another variation in the trend has been privacy laws that are more limited in terms of the scope of data or parties they cover. For example, Florida’s Digital Bill of Rights includes broad privacy protections, but the law’s narrow definition of controller and annual revenue threshold of $1 billion means few businesses are subject to the legislation. In Washington State, the My Health My Data Act applies broadly across industries, but only regulates digital health information.
Finally, several existing laws were updated last year, with amendments that focused on protecting minors and expanding definitions of sensitive data. For example, the Colorado Privacy Act was amended to include biological and neural data as sensitive, to establish rules on the processing of biometric data, and to expand privacy protection for minors under the age of 18. Similarly, the California legislature amended the CCPA to expand children’s privacy protection and the definition of “sensitive personal information,” and to clarify that AI systems are capable of outputting personal information.
The pace of change to the privacy law landscape seems unlikely to let up any time soon. Unless and until the U.S. federal government finally enacts comprehensive privacy legislation, expect U.S. states to continue to enact new laws and tinker with existing laws as they try to ensure that privacy protections keep pace with technological and marketplace developments.
Many Hands or Too Many Cooks? Varied Approaches to Legislating AI
Susan Rohol, Briony Pollard, Stefan Ducich, Kari Prochaska and Alexandra Barczak
2024 was an important year for AI: in Europe, the EU AI Act entered into force; in the U.S., Colorado became the first state to enact a wide-ranging AI law and the California Legislature passed, and Governor Newsom signed into law, 17 sector and use-case specific AI bills. California may win in terms of the abundance of proposed AI legislation, but its patchwork approach differs considerably from that of the more comprehensive, risk-based laws that we have seen coming out of the EU and Colorado. As more jurisdictions establish their position on AI regulation, navigating the AI compliance landscape in 2025 and beyond is set to become increasingly complex.
THE COMPREHENSIVE APPROACH
Opposite California and its 17 AI-related bills sit the EU and Colorado, which have enacted comprehensive AI legislation of general applicability that classifies AI systems based on the potential danger they pose to consumers and imposes obligations on the system developers or deployers based on that categorization.
The EU AI Act takes a risk-based approach to regulating the entire lifecycle, from development to deployment, of AI systems that operate or provide services to users in the EU and applies irrespective of the industry in which the AI system operates. AI systems are tiered based on the risk they generate, with corresponding obligations on the developer/deployer. Certain “prohibited AI systems” are subject to an outright ban for posing a clear threat to individuals’ safety, livelihoods, and/or rights. The EU AI Act also identifies eight categories of “high-risk” AI systems, as well as those used as a safety component of products subject to EU harmonization legislation, which are generally permitted so long as certain conditions are met.
Colorado followed in the EU’s footsteps with the Colorado AI Act, which predominantly regulates high-risk AI systems, i.e., those which make, or substantially contribute to, a consequential decision. Similarly, while high-risk systems are generally permitted, developers/deployers of such systems are subject to a range of obligations, including risk management and transparency requirements.
THE PATCHWORK APPROACH
California, on the other hand, has opted to legislate in a piecemeal manner. Rather than adopting one overarching law, it has created a patchwork of legislation. Each law (or bill) seeks to address an identified issue, such as the spread of misinformation, election integrity, or deepfakes, and targets some of California’s key industries, such as entertainment and tech, though the implications are broader and the effects will not be limited to those industries alone.
One law (AB 1008) amends the CCPA to clarify that personal information “can exist in various formats, including . . . AI systems that are capable of outputting personal information.” Consequently, organizations must now be aware that their AI systems could generate information that the California legislature would consider subject to the protection of the CCPA—an expansion that, arguably, broadens the definition of personal information beyond that of the GDPR, as interpreted by certain EU Member State DSAs. For example, in July 2024, the Hamburg data protection commissioner opined that large language models that output personal information may “lack . . . the necessary direct, targeted association to individuals” for the purposes of the GDPR. Also new on the California legislative map is a law (AB 2013) imposing pre-deployment transparency requirements on developers of generative AI systems.
Notably, a heavily lobbied bill, The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have imposed safety measures on large AI models to mitigate potential critical harms, did not make it past Governor Newsom. Unlike the rest of California’s AI-related legislation, this bill took a broad, sweeping approach like that taken in the EU. While this bill did not go the distance, Newsom did make clear his intention to work with the legislature, technology experts, ethicists and other stakeholders to draft and pass similar legislation to “protect . . . against actual threats without unnecessarily thwarting the promise of [AI] to advance the public good.”
REGULATOR ACTION
As part of its ongoing regulatory work, the CPPA—responsible for implementing and enforcing California privacy laws—is in the process of developing regulations that would impose requirements on businesses to provide consumers with notice, a right to opt out and a right to access information with respect to automated decision-making technology. With the continual patchwork of legislation appearing in California, however, these efforts by the CPPA are likely to be affected by whatever comes next.
Federal regulators are taking notice. Notwithstanding the absence of comprehensive federal AI legislation, AI risks are of interest to myriad enforcement agencies, which have made clear their intention to leverage existing laws to address such risks to consumers. For example, the FTC has authority over false and deceptive practices, such as AI-generated deepfakes, while the SEC can enforce against fraud, such as misrepresenting AI use to attract investors. We have already seen enforcement actions brought by both agencies related to such AI practices.
In the EU, enforcement of the EU AI Act will rest with authorities designated by the EU Member States, which will lay down rules on penalties and carry out enforcement in coordination with the EU Commission’s AI Office.
LOOKING AHEAD
With such differing approaches taken by different stakeholders (including federal regulators keen to make their mark on AI), organizations seeking to advance the development/deployment of AI technologies in their businesses have the challenging task of accounting for a variety of potentially applicable obligations, while calibrating their approaches to the regulator most likely to come knocking.
All About the Kids: U.S. States Build on Global Frameworks to Protect Children
Susan Rohol and Jahi Beal
As Congress continues to dither on how and whether to protect children online, U.S. states passed an overwhelming number of new laws in 2024. These included age-appropriate design codes, comprehensive consumer privacy laws, and laws that set age limits or otherwise restrict how and when children can access content on social media platforms.
STATE AGE-APPROPRIATE DESIGN CODES
In September 2022, California enacted the first AADC in the U.S., requiring businesses that “provide an online service, product, or feature likely to be accessed by children” to design their online services with the best interests of children in mind. Key aspects of the California AADC include requirements that businesses: (i) conduct a DPIA; (ii) estimate the age of child users with a “reasonabl[e] level of certainty appropriate to the risks that arise from the data management practices of the business”; (iii) practice data minimization; (iv) configure default privacy settings for children to those that offer a high level of privacy; (v) provide privacy policies in “clear language”; and (vi) refrain from using dark patterns to lead or encourage children to provide personal information that is materially detrimental to their physical or mental health, or well-being. However, following a challenge (in NetChoice v. Bonta), the Ninth Circuit in August 2024 upheld a preliminary injunction against the DPIA report requirement of the California AADC, because it “compel[led] speech by requiring covered businesses to opine on potential harm to children” and “[d]eputize[d] covered businesses into serving as censors for the State,” thereby violating the First Amendment.
Maryland became the second state to pass an AADC in May 2024. While similar to California’s AADC, the Maryland legislation incorporated a handful of other changes, such as: (i) removing the age-estimation requirement; (ii) limiting the DPIA requirement related to exposing children to harmful content; (iii) limiting the scope of the law with a reasonableness standard; and (iv) providing a definition for the “best interest of children” duty of care. The Vermont General Assembly also passed an AADC in 2024; however, the bill was vetoed by Governor Phil Scott, who cited California’s ongoing AADC litigation as the reason for his decision. With the Ninth Circuit’s decision in 2024, we expect additional states to consider age-appropriate design codes in 2025.
NEW TARGETED ADVERTISING BANS FOR CHILDREN UNDER 18
When California enacted the CCPA, it became the first state to prohibit the sale of data of children under the age of 16 without opt-in consent. This year, Colorado and Maryland raised the bar when they banned targeted advertising for children under the age of 18. Colorado passed an amendment to its privacy laws that imposes a duty of care on covered entities and requires consent before processing minors’ personal data for a wide array of uses, including targeted ads, sale of data, and geolocation. This law goes into effect in October 2025.
Maryland followed Colorado’s lead and passed a comprehensive privacy law that also prohibits targeted advertising to children under the age of 18. This law will not go into effect until April 2026.
SOCIAL MEDIA
Many states have also drafted and enacted laws that target social media platforms. In particular, Arkansas, Colorado, Florida, Georgia, Louisiana, Mississippi, New York, Ohio, Tennessee, and Utah passed laws banning social media use by children without parental consent. The term “children” is defined as individuals under 18 years old for Arkansas, Colorado, Mississippi, New York, and Utah; “known or reasonably believe[d]” to be under 18 for Tennessee; under 16 for Georgia, Louisiana, and Ohio; and under 14 for Florida.
In addition to parental consent requirements, certain U.S. states also included unique features in their social media laws:
- Colorado. The Colorado law requires that social media platforms either: (i) display a pop-up or full-screen notification to users under 18 when they have spent one cumulative hour on the platform within a 24-hour period or are using the platform between the hours of 10:00 p.m. and 6:00 a.m.; or (ii) provide minors with information about the effects of social media on the development of the brain. After the initial pop-up, the notification must reappear every 30 minutes. The law goes into effect on January 1, 2026, and is the first of its kind; a simplified sketch of the notification triggers appears after this list.
- Florida. The Florida law requires social media platforms to terminate accounts for any individual under 14 years of age. The law went into effect on January 1, 2025.
- New York. The NY Stop Addictive Feeds Exploitation (SAFE) for Kids Act requires social media platforms to obtain parental consent before providing an “addictive feed” and prohibits platforms from sending certain notifications to children between 12:00 a.m. and 6:00 a.m. The law goes into effect 180 days after the office of the AG has promulgated rules and regulations.
- Utah. The Utah law requires social media companies to “reasonably calculate” the age of account holders with 95 percent accuracy. The law went into effect on October 1, 2024.
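To make the Colorado timing requirement concrete, below is a minimal, purely illustrative sketch of the notification triggers as summarized above. The inputs (a flag for whether the user is a known minor, cumulative usage over the trailing 24 hours, and the time of the last notification) and all names are hypothetical assumptions, not a compliance implementation; the statute’s precise definitions would govern in practice.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative thresholds drawn from the summary above; names are hypothetical.
CUMULATIVE_LIMIT = timedelta(hours=1)    # one cumulative hour within a 24-hour period
NIGHT_START, NIGHT_END = 22, 6           # 10:00 p.m. to 6:00 a.m.
REPEAT_INTERVAL = timedelta(minutes=30)  # after the first pop-up, repeat every 30 minutes


def notification_due(is_known_minor: bool,
                     usage_last_24h: timedelta,
                     now: datetime,
                     last_notified: Optional[datetime]) -> bool:
    """Return True if the pop-up/full-screen notification should be shown now."""
    if not is_known_minor:
        return False
    in_night_window = now.hour >= NIGHT_START or now.hour < NIGHT_END
    if usage_last_24h < CUMULATIVE_LIMIT and not in_night_window:
        return False  # neither trigger condition is met
    # Show the initial notification, then repeat every 30 minutes of continued use.
    return last_notified is None or (now - last_notified) >= REPEAT_INTERVAL
```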
TAKEAWAY
States have implemented various approaches in 2024, but there is one thing most legislators seem to agree on: we should be doing more to protect children online. 2025 is likely to bring more laws targeted at protecting children. Without federal preemption, which seems unlikely with the new Congress, we will continue to see a patchwork of U.S. state legislation and more companies customizing their products for children based on location.
Glad You Asked, It’s A Sensitive Subject
Susan Rohol, Stefan Ducich, and Michelle Bae
The last few years have seen a proliferation of U.S. state comprehensive privacy laws, which require heightened treatment for Sensitive Personal Data. Add to this mix a growing number of states writing “consumer health data” privacy laws that further expand what is considered sensitive, and the result is a complicated patchwork of new state obligations.
States are generally consistent in their definition of what constitutes Sensitive Personal Data—i.e., data revealing racial or ethnic origin, religious beliefs, mental or physical health conditions or diagnoses, sex life and sexual orientation (which may include gender expression), citizenship or immigration status, biometric data, personal data of a known child, and precise geolocation data. Most of these laws impose strict purpose specifications and/or use limitations on sensitive personal data, and the majority also require opt-in consent for collection and use of Sensitive Personal Data.
OPT-IN V. OPT-OUT CONSENT
By January 1, 2026, 16 of 19 currently enacted state comprehensive privacy laws will require freely given, specific, informed, and unambiguous opt-in consent for the collection and processing of sensitive personal data.
SENSITIVE DATA INFERENCES
The Colorado and California comprehensive privacy laws also cover derived or inferred personal information. To date, Colorado is the only state to explicitly set out rules governing “Sensitive Data Inferences,” including an obligation for businesses to obtain opt-in consent to process such data (except where the consumer would reasonably expect such inferences to be made given the purposes and context of the personal data collection). California includes inferences in its definition of personal information and grants consumers a right to limit processing of Sensitive Personal Data. It would not be surprising if California further clarified its regulations to include sensitive data inferences.
CONSUMER HEALTH DATA PRIVACY
In the wake of the Supreme Court’s Dobbs decision overturning Roe v. Wade, several states enacted laws with a goal of protecting a broader array of health information than that which is covered by HIPAA. Washington State led the way with its My Health My Data Act (effective as of March 31, 2024); Nevada and Connecticut quickly followed with laws that also came into effect in 2024. These laws protect a broad swath of information that “identifies the consumer’s past, present, or future physical or mental health status.” As the Washington AG’s office has made clear, regulated “Consumer Health Data” is broadly construed and includes inferences related to a consumer’s health drawn from otherwise mundane sources (e.g., where the purchase of non-health products like toiletries may be extrapolated to determine whether a person is pregnant).
These health data privacy laws require consumers’ prior consent to collect, share, and sell their Consumer Health Data. Further, Washington and Nevada prohibit the sale of consumer health data without specific, detailed, and time-limited prior authorization (e.g., a consumer’s authorization generally expires after one year), which the business must keep up to date.
Other states are taking note, and we can expect more legislation around health data in 2025, including in New York, where a similar bill came close to passage this year.
WHAT DOES THIS MEAN?
Sensitive Personal Data collection and use will become increasingly difficult in the U.S. if more states adopt an opt-in approach. This is particularly true for advertisers, especially those advertising pharmaceutical, fitness, or other health products. Many of these companies have been forced to alter their advertising strategies by state, to minimize risk. We expect this trend to continue, as more state legislators look to protect their consumers’ Sensitive Personal Data.
Privacy Enforcement Landscape: Regulators’ Varied Approaches
Laura Jehl, Briony Pollard, Amelia Putnam, and Alexandra Barczak
In 2024, U.S. federal and state regulators, and DSAs in the EU and the UK, focused their enforcement efforts on privacy issues. These actions build upon already strong enforcement in recent years, upping the stakes by increasing monetary penalties and requiring implementation of specific information security measures.
U.S. FEDERAL GOVERNMENT
The FCC entered into a Consent Decree with T-Mobile related to investigations into four security incidents that compromised customer data. T-Mobile agreed to pay a penalty of over $15 million and commit to spending an additional $15 million over the next two years to implement information security measures including phishing-resistant multifactor authentication.
The FTC focused on privacy and information security through its enforcement actions:
- Data Breaches. The FTC entered into a Consent Decree with Marriott International and its subsidiary Starwood Hotels that required both to implement a robust information security program. The FTC’s enforcement action came in response to several large data breaches involving the personal information of approximately 344 million customers. In a separate action related to the incidents, the companies agreed to pay a $52 million penalty to 49 states.
- Location Data. The FTC brought an enforcement action against data broker X-Mode Social for failing to honor consumers’ choices to opt out of marketing and failing to notify consumers about the purposes for which their location data would be used. X-Mode was required to implement a sensitive-location data program to ensure that it does not sell or share information about certain sensitive locations, and delete all location data it had collected unless it obtained consumer consent.
- COPPA. The FTC and the Los Angeles District Attorney’s Office brought a lawsuit against NGL Labs and its co-founders related to the anonymous messaging app, NGL. The FTC alleged that NGL Labs unfairly marketed to children, falsely claimed that its AI content moderation system prevented cyberbullying, and failed to obtain parental consent to collect personal information from children in violation of COPPA. NGL Labs and the co-founders agreed to pay a $5 million penalty and to prevent users under 18 from using the NGL app.
U.S. STATES
U.S. state AGs have demonstrated an eagerness to use not only comprehensive U.S. state privacy laws, but also general consumer protection laws, to bring a wide range of privacy-related enforcement actions in 2024, among them:
- California. California AG Rob Bonta announced a settlement with DoorDash to resolve allegations that it violated the CCPA by selling personal information without providing notice to consumers or permitting consumers to opt out. DoorDash agreed to pay a $375,000 civil penalty and take other actions to comply with the CCPA.
- Texas. The Texas AG’s Office staked its claim as a key privacy regulator by bringing several privacy-related lawsuits this year. For example, AG Ken Paxton sued General Motors over the collection and sale of personal information gathered from vehicles. The Texas AG’s Office also brought an enforcement action under the Securing Children Online through Parental Empowerment Act, which requires social media companies to permit parents to supervise and control account settings for minors. This year, the Texas AG’s office also secured its first settlement under Texas’s Capture or Use of Biometric Identifier Act, which requires the informed consent of Texans to capture and use their biometric identifiers, with Meta agreeing to pay $1.4 billion to the state of Texas over five years.
EU AND UK DSAS
In the EU, the Dutch DSA issued two significant GDPR fines: (1) Uber: €290 million for failing to implement appropriate safeguards required for transferring drivers’ personal data to the U.S. for a 27-month period; and (2) Netflix: €4.75 million for failing to provide sufficient and clear information about customers’ personal data processing between 2018 and 2020.
Following a temporary ban on ChatGPT in April 2023, the Italian DSA fined OpenAI €15 million for GDPR failings relating to transparency, legal basis identification, data breach notification and age verification in relation to the collection of personal data for generative AI training. OpenAI was also ordered to carry out a six-month information campaign.
The UK ICO continued to focus enforcement action and fines on PECR violations, namely non-compliant marketing practices, but did not issue any big-ticket GDPR fines in 2024.
TAKEAWAY
While these enforcement actions vary widely, taken together, they demonstrate regulators’ continued focus on privacy, especially with respect to:
- Children’s Privacy. Given U.S. states’ and Congress’s recent legislative efforts related to children’s privacy, we can expect that both state regulators and the FTC will continue to focus on companies’ practices for obtaining parental consent and their use of personal information of children, as well as their public disclosures about how such information is used.
- AI and Privacy. The EU’s recent enforcement actions may hamper the development and deployment of AI in the EU because there is a conflict between AI development, which requires huge volumes of data, on one hand, and the GDPR’s requirements for data minimization and obtaining consent to use personal data on the other.
- Information Security Incidents. While regulators have long brought enforcement actions in response to information security incidents, they have only recently begun imposing prescriptive, specific information security requirements as part of consent decrees. We expect regulators to continue to impose not only massive monetary penalties as part of settlements, but also to require the implementation of significant and detailed information security measures.
“Pay Or Okay”—What’s Next? Shifting Sands of the Advertising Landscape
Briony Pollard, Kari Prochaska and Alexandra Barczak
Over the past year, Meta’s ongoing legal challenges with EU data protection authorities, the CJEU and the EU Commission regarding its behavioral and targeted advertising practices have created uncertainty as to the legality of the so-called “pay or OK” model under both data protection and competition legislation, particularly with respect to large online platforms that occupy a dominant position in the market. While we await further guidance from the EDPB (following its opinion in April 2024), the outcome of Meta’s challenge to the opinion, and the outcome of the ICO’s call for views on the model, organizations that utilize the pay or OK model will need to consider their consent-management practices in order to avoid regulatory scrutiny and maintain a presence in the lucrative EU online advertising space.
BACKGROUND
The GDPR requires that processing of personal data must have a legal basis (e.g., contractual necessity, legitimate interest, or data subject consent). Until recently, organizations that process personal data for behavioral or targeted advertising purposes typically relied on the first two legal bases to do so. In a series of rulings from the Irish Data Protection Commission in 2023, Meta’s use of such legal bases was determined to be unlawful, which forced Meta to rely on data subject consent. Following a 2023 CJEU decision (Bundeskartellamt) which stated that users that do not consent to behavioral advertising must be offered an alternative service (that does not involve the processing of personal data) “for an appropriate fee,” Meta launched a subscription model for certain services in the EEA and Switzerland in late 2023, which gave users the option to consent to, or “OK,” the use of their personal data for behavioral advertising, or “pay” the subscription fee to avoid it.
EDPB, ICO AND EU COMMISSION WEIGH IN
Following requests from the Dutch, Norwegian and Hamburg data protection authorities, the EDPB issued a non-binding opinion in April that stopped short of stating that the model was unlawful; rather, it indicated that such models are permitted, provided that the GDPR consent requirements are met. However, the EDPB stated that, in most cases, it will not be possible for large online platforms to comply with the requirements for valid consent under the GDPR if they provide users with a binary choice between consenting to processing of their personal data for behavioral advertising or paying a fee, and that there should be an “equivalent alternative” for users that does not entail paying a fee or agreeing to behavioral advertising.
The bar for valid consent under the GDPR is high; consent must be freely given, and the user must not suffer detriment for refusing to provide consent or for withdrawing consent previously given. The EDPB considered that consent would not be valid if the choice created detriment to the user, which could be financial in nature or social (i.e., where the service serves a prominent role in daily life and it would be hard for a user to move to, or use, an alternative service). The EDPB also challenged whether consent could really be said to be “freely given” in the face of an imbalance of power between an individual and a service provider, such as Meta.
In the UK, the ICO issued a call for views regarding the model in March 2024 (which has now closed) and broadly aligned with the EDPB opinion, noting that, in principle, UK data protection laws do not prohibit the model, provided that consent to personal data processing is valid (i.e., freely given, informed and capable of refusal/withdrawal without detriment). It is anticipated that the ICO will provide further details on the pay or OK model when it publishes updated guidance regarding cookies and data collection technologies, which was expected in late 2024 but has yet to be published.
Of course, the implications of the pay or OK model are not limited to data protection law or data protection regulators. In its preliminary findings, following its investigation into Meta under the Digital Markets Act (EU legislation that aims to ensure a high degree of competition in EU digital markets by preventing large companies, so-called “gatekeepers,” from abusing their market dominance), the EU Commission stated that Meta’s pay or OK advertising model failed to comply with Article 5(2) of the DMA. Article 5(2) requires gatekeepers to obtain user consent to combine personal data between the gatekeeper’s core platform services and its other services, and, if consent is refused, to give users access to a less personalized, but equivalent, alternative.
WHAT’S NEXT?
In November 2024, Meta announced the implementation of a free, less-personalized ad option that utilizes less personal data and focuses on contextualized advertising generated by content viewed during a single browsing session (rather than broader profiling to serve ads), along with a reduction in the price of its ad-free subscription. Whether Meta’s latest iteration will be acceptable to data protection and competition regulators is an open question. Unsurprisingly, privacy advocacy group NOYB and Max Schrems have already stated that they do not consider it to be, because the model still uses some personal data without consent and the less personalized ads come as “full-screen” ads that cannot be skipped, which, they argue, effectively annoys users into consenting. As a consequence, we doubt the long-running pay or OK saga is anywhere near its conclusion.
U.S. Privacy Class Actions Show No Signs of Stopping
Susan Rohol, Debra Bogo-Ernst, Nicholas Chanin, and Elodie Currier
Companies across the internet have been hit in recent years by lawsuits brought under older (some might say obsolete) privacy laws. Plaintiffs allege digital tracking technologies violate federal and state-level anti-wiretap laws including ECPA, CIPA, and VPPA. Others have been sued for collection, use, and/or disclosure of biometric or genetic information under BIPA and GIPA. In 2024, more than 250 cases were brought under CIPA and nearly 100 under VPPA. Moreover, changes to EU consumer law may herald an era of privacy collective actions in the EU. Relatedly, plaintiffs’ lawyers also pursue these theories in the mass arbitration context. We discuss some important trends of 2024.
U.S.STATE WIRETAP LAWS
Perhaps the most active area in privacy litigation has been the weaponization of 1960s-era anti-wiretapping laws. These cases typically allege that session replay software, pixels, or other tracking technologies employed on websites constitute an illicit interception of communications. These suits have seen mixed results in court, and 2024 continued that trend.
- Trap and Trace: Pen Register Theory. There is increased activity under the CIPA provision that prohibits installing a device that records “dialing, routing, addressing, or signaling information,” which appears to be more in line with what trackers collect than the “communications” covered by CIPA’s wiretap provisions. While hundreds of CIPA cases are filed annually, a smaller number get litigated, owing to numerous dismissals and quick, nominal individual settlements that happen routinely without publicity. For the remaining cases that proceed to pleading challenges, success on demurrers in California state courts is mixed, while there are at least two California federal court decisions, often touted by plaintiffs’ counsel, allowing pen register cases to proceed past the pleading stage: Greenley v. Kochava, Inc. and Moody v. C2 Educational Systems.
- East Coast vs. West Coast. California may be fertile ground for wiretap suits, but the Massachusetts Supreme Judicial Court ruled in October that Massachusetts’s prohibition on wiretaps does not extend to website tracking technologies. It is instead limited to the illicit interception of communications between two people (as opposed to between a person and a website).
VIDEO PRIVACY PROTECTION ACT
Congress originally passed the VPPA to protect Americans’ video rental history by prohibiting videotape service providers from disclosing, without consent, what movies a person has rented; the VPPA was amended in 2013 to incorporate streaming services. Plaintiffs now use the VPPA to allege that third-party trackers amount to an unlawful disclosure of video rental history. In October, the Second Circuit, in reversing a previous dismissal, took the broad view that a “renter, purchaser, or subscriber” protected by the VPPA includes website newsletter subscribers, regardless of whether that subscription relates to any video material a website provides.
HEALTH CARE LITIGATION TRENDS
In June, HHS issued guidance notifying businesses subject to HIPAA that the use of tracking technologies on their websites, especially patient portals, could constitute an unlawful disclosure of PHI. The FTC issued a Health Breach Notification rule in April, stating that the use of tracking technologies by non-HIPAA-covered entities could constitute a health data breach. Since then, suits targeting HIPAA-covered entities (such as hospitals and health plans) over their use of tracking technologies have increased considerably. There have been a handful of reported settlements in this space, and some courts have found these violations are compensable injuries under U.S. state-level privacy laws or common law privacy rights. However, many cases are still advancing and some courts remain sympathetic to defendants’ arguments.
ILLINOIS’ BIOMETRIC INFORMATION PRIVACY ACT AND GENETIC INFORMATION PRIVACY ACT
BIPA prohibits collecting biometric information without the written consent of the individual from whom the biometrics were collected. With its private right of action, BIPA has generated some of the largest awards of any privacy law (e.g., a private freight rail operator settled a BIPA case for $75 million). In August, the Illinois legislature amended the law’s statutory damages from a per-violation basis to a per-person basis, so it is now unlikely that these cases will result in the same astronomical awards. However, we have seen an increase in actions by the Texas AG under Texas’s biometric privacy law, with some notable cases resolving for more than a billion dollars.
Similar to BIPA, GIPA governs how genetic information and testing can be used or solicited in certain employment and insurance contexts, and prohibits disclosing the identity of a genetic testing subject or genetic testing results to third parties without authorization. GIPA cases have skyrocketed this year. While life insurers have fared well in getting these cases dismissed, employers have not been as lucky, as many cases have proceeded past the motion-to-dismiss phase.
GDPR COLLECTIVE REDRESS
The Representative Actions Directive was passed in the EU in 2020—and is steadily being implemented by EU member states—to allow representative entities to bring legal actions on behalf of consumers. Over the past year, a number of these actions have been brought to assert consumer rights provided by the GDPR. Companies may need to defend consumer privacy actions on an international scale.
WHAT’S NEXT?
We expect the plaintiffs’ bar to advance more creative theories in 2025, especially as they target industries that collect more sensitive information related to health or finances. The cost to defend class actions and mass arbitrations is substantial, so it is critical that website and app operators review the myriad ways they may be collecting and sharing data. In some cases, it may be time to offload obsolete data streams and/or collect consumer consent.
Time to Rethink Data Transfer: U.S. Regulators Target the Free Flow of Sensitive Data to Countries of Concern
Daniel Alvarez, Amelia Putnam, and Elodie Currier
While many organizations have been focused on the cross-border data transfer issues raised by EU, UK, and Chinese data protection laws, the trend of placing restrictions on the international transfer of personal information finally reached the U.S. in 2024, and seems likely to continue into 2025. Between proposed regulations and enacted federal laws, the legal landscape for certain data transfers out of the U.S.— which until recently has been largely unregulated—is set to change drastically.
BACKGROUND
In February 2024, President Biden signed an EO directing the AG and federal agencies to take various steps to protect Americans’ sensitive personal data from exploitation by countries of concern. In response to the EO, the Department of Justice published an Advance Notice of Proposed Rulemaking proposing regulations to implement the EO that, if adopted, would restrict transactions involving a bulk volume of personal data or specific government data to countries of concern, such as China, Cuba, and Russia.
On October 29, 2024, the DOJ published a Notice of Proposed Rulemaking (“NPRM”), which largely adopted the requirements proposed in the Advance Notice. On January 8, 2025, the final rule was published in the United States Federal Register, and the effective date of the rules is April 8, 2025 (assuming no changes by Congress pursuant to the Congressional Review Act).
On a separate but parallel track, President Biden signed into law PADFAA as part of an omnibus appropriations bill focused primarily on emergency aid to Ukraine and Israel. PADFAA prohibits “data brokers” from transferring personally identifiable sensitive data to certain foreign adversary countries (e.g., China), as well as to entities “controlled by a foreign adversary.” PADFAA went into effect in June 2024.
COMPLIANCE CHALLENGES
Both the DOJ’s NPRM and PADFAA share the goal of limiting transfers of certain data outside of the U.S., but each takes a different track. The result is that different requirements may apply, and a broad range of data transfers may be implicated.
Companies, especially data brokers, may face significant challenges in transferring personal data outside of the U.S., and in some cases may not be able to conduct certain data transactions at all. For example, data brokerage transactions involving a bulk amount of certain data, such as personal health data, personal financial data, or precise geolocation data, are prohibited if the recipient is an entity in China or other country of concern.
Companies will also need to conduct diligence on the third parties to which they transfer personal information. Namely, companies will need to assess where third parties are domiciled and organized, as well as whether the third party is controlled by a foreign adversary, and take steps to ensure that covered data is not transferred to any such entity.
NEXT STEPS
Companies should conduct a more detailed review and analysis of their existing data transfer practices against the DOJ’s final rules. With respect to PADFAA, the FTC is charged with enforcement, but it remains to be seen whether the FTC will provide any guidance on how it intends to approach such a task or if Congress will enact clarifying legislation. In the meantime, companies should assess whether any of their data transfers would be implicated by the law. Either way, companies that once could assume that data transfers from the U.S. were lawful and straightforward can no longer do so.
Not My Breach, Not My Problem? Major IT Outages in 2024 Highlight the Risks of Overreliance on Single Service Providers
Laura Jehl, Nicholas Chanin, and Elodie Currier
As cyberattacks and IT outages have become an unfortunate fact of life, both private- and public-sector organizations have grown more adept at planning for, responding to, and recovering from these incidents. Events in 2024, however, illustrated a significant and frequently overlooked vulnerability plaguing organizations across diverse industries: overreliance on a single service provider. In February, a ransomware attack at Change Healthcare (“Change”), a subsidiary of UnitedHealth Group, paralyzed the U.S. healthcare sector’s ability to process insurance claims and payments. Then, in July, cybersecurity company CrowdStrike released an update to its software that was incompatible with other software used by its customers, leading to widespread outages. Finally, in late November, Blue Yonder, a supply chain management software company, fell victim to a ransomware attack and itself became the source of a supply chain attack. As a result of incidents at just these three service providers, thousands of companies around the world suffered major operational impacts, and many were required to notify regulators and consumers that their security practices had proven inadequate.
CHANGE HEALTHCARE RANSOMWARE
While Change may not be a household name, it occupies a major position in the healthcare industry because of its role in processing transactions between healthcare providers and healthcare insurers. All told, Change processes billions of healthcare-related records associated with approximately one-third of all Americans. When ransomware threat actor BlackCat/AlphV launched an attack against the company in February 2024 that forced Change to take its systems offline, the resulting disruption prevented hospital systems, doctors’ offices, pharmacies and other healthcare providers from submitting claims and collecting payments for their services. This in turn led to a massive drop in cash flow for many Change customers, as payments were delayed indefinitely. Even though Change’s corporate parent, UnitedHealth Group, offered a workaround (accelerated or advance payments) to allow customers to continue operations, Change customers that are publicly traded were required to evaluate whether this incident—with its considerable short-term effect on cash flow and uncertain longer-term effect on revenue—was sufficiently material to trigger SEC 8-K reporting. Further, as the attack compromised personal data—much of which was protected health information—of at least 100 million Americans, Change’s customers had to devote significant time and resources to evaluating their HIPAA reporting obligations. To date, this incident has resulted in approximately $2.5 billion in losses.
CROWDSTRIKE OUTAGE
CrowdStrike’s Falcon software is used by thousands of companies to monitor hundreds of thousands of endpoints and the software running on those endpoints, so any changes to the Falcon software are pushed to all of CrowdStrike’s customers’ networks. When CrowdStrike pushed an update that contained an undiscovered bug, it caused significant disruption to thousands of entities worldwide, including companies in the transportation, finance, media, and other sectors. As a result, flights were cancelled, and critical national security systems were knocked offline and required extensive review and repair. CISA stepped in to coordinate with CrowdStrike to fix the bug and help affected organizations bring their systems back online. As with the Change incident, in addition to scrambling to address operational outages, public companies had to quickly assess whether the effects on their operations from the outage were sufficiently material to require reporting to the SEC, or whether the incident sufficiently rendered European personal data “unavailable” such that it required reporting to EU authorities. In total, more than 8.5 million systems were knocked offline and affected entities may have suffered more than $5 billion in losses.
BLUE YONDER RANSOMWARE ATTACK
Blue Yonder provides a platform that allows its customers to manage complex supply chains, including solutions for warehouse management, supply and demand forecasting, and labor management. When Blue Yonder was hit by a ransomware attack in November, numerous customers lost the ability to access systems critical to their daily operations. The attack affected two of the UK’s largest supermarket chains, Morrisons and Sainsbury’s, with Morrisons reportedly losing access to its warehouse management system, possibly delaying the supply of products to its stores. In the U.S., Starbucks reportedly lost the ability to pay its baristas.
CONSEQUENCES
These three incidents highlight not only the financial and operational risks to businesses that rely heavily on single-service providers for critical operations, but also the regulatory risks such dependence presents. Even when a breach or IT incident is on a vendor’s system, companies affected by third-party incidents have to devote time and resources to determining whether a mistake by a service provider necessitates a report to a regulator. In the EU and UK, companies have a limited window in which they need to determine whether a third party’s failure has sufficiently compromised personal data to be reportable to DSAs under the GDPR and UK GDPR. Deciding to report an incident may put an entity on its regulators’ radar, while a decision not to report could be second-guessed in the future.
It is critical that companies thoroughly, consistently, and regularly vet their vendors’ and service providers’ cybersecurity posture, and that incident response plans contemplate third-party incidents. It is also important for companies to assess their reliance on a single vendor for key functions, and to consider whether they should diversify or create backup options. The significant third-party incidents of 2024 have highlighted that just because your company’s systems were not breached does not mean the company will not suffer significant consequences or be held responsible.
Parenting Pains: Parental Liability for Subsidiary GDPR Breaches
Briony Pollard
In September 2024, an AG of the CJEU issued an opinion on the scope of parental liability in calculating a fine where a subsidiary is found to be in breach of the GDPR (“Opinion”). The Opinion is not binding on the CJEU (although AG opinions are generally followed), nor does it move the needle from the previous understanding. It does, however, act as a reminder of the importance of conducting adequate GDPR due diligence pre-acquisition, and of remediating compliance gaps post-completion, because in the event of a subsidiary’s breach, the consequences as to liability and fine calculation by EU and UK data supervisory authorities will not necessarily be limited to the entity impacted by the breach and may reach the parent itself.
WHAT CONSTITUTES AN “UNDERTAKING”?
Article 83 GDPR permits DSAs to issue fines of up to a maximum of €20m, or 4% of the worldwide annual turnover of an “undertaking” (whichever is higher), for breaches. “Undertaking” is not defined in the GDPR; however, Recital 150 provides that the term should be understood in accordance with EU competition law as referring to multiple legal entities that form a single economic entity. A parent company, such as an investment fund, will form a single economic entity with the relevant subsidiary if the parent has “decisive influence” over the subsidiary, which in practice means the ability to control the subsidiary’s strategic direction.
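To illustrate why the “undertaking” concept matters for the fine ceiling, here is a minimal worked example using purely hypothetical turnover figures. It shows only the statutory maximum under Article 83; as discussed below, a DSA should not set the actual fine by reference to parent or group turnover alone.

```python
# Illustrative arithmetic only; the turnover figures are hypothetical.
# Article 83 GDPR caps the most serious fines at the higher of EUR 20m
# or 4% of the undertaking's total worldwide annual turnover.
def gdpr_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)

subsidiary_turnover = 200_000_000   # hypothetical subsidiary actually in breach
group_turnover = 80_000_000_000     # hypothetical parent group as the "undertaking"

print(gdpr_fine_ceiling(subsidiary_turnover))  # 20,000,000 -- the EUR 20m floor applies
print(gdpr_fine_ceiling(group_turnover))       # 3,200,000,000 -- 4% of group turnover
```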
THE “SHAREHOLDER” PRESUMPTION
Where the parent has a 100% shareholding (or near 100%) in the subsidiary, or has a substantial majority shareholding in addition to 100% (or near 100%) of the voting rights, there is a presumption that the parent exercises decisive influence over the subsidiary. The presumption is rarely successfully rebutted in practice. Crucially, it is not a requirement that any such influence was actually exercised in relation to the subsidiary’s GDPR breach; it is the general relationship between the parent and the subsidiary that matters. This means that a parent’s liability can be triggered even if it was not involved in, or aware of, the breach. Where a parent has a majority shareholding (but not “near 100%”), there is no such presumption and the onus falls on the DSA to show actual decisive influence; however, it will be relatively straightforward for a DSA to demonstrate that the parent has decisive influence over the subsidiary by reason of the majority shareholding.
MINORITY SHAREHOLDING
Where a parent’s stake is a minority interest, various factors will be examined to determine whether the parent has actual decisive influence (e.g., composition of the subsidiary’s board, the parent’s rights to influence commercial decisions of the subsidiary, such as veto and sign-off rights, and evidence of the parent’s efforts to influence the subsidiary’s commercial policy). Where the presumption does not apply, competition authorities have relied on a range of links between a parent and subsidiary to find evidence of actual decisive influence, including personal links and the presence on the board of the subsidiary of senior-level parent representatives. Decisive influence has been found by competition authorities on the basis of minority shareholdings of as little as 33%.
RISK THE DOCTRINE CREATES FOR PARENTS
The Opinion confirms that in practice (depending on the influence the parent has over its subsidiaries) the GDPR parental liability doctrine may lead to the turnover of the entire parent being taken into account in the calculation of the fine under the GDPR, rather than just the turnover of the specific subsidiary in breach. The AG further clarified that while the concept of “undertaking” is relevant for setting the maximum fine, it is not a specific factor for determining the actual fine; accordingly, DSAs should not use parent or group turnover as the main or only reference for setting the fine. DSAs have so far made limited use of the parental liability doctrine, the primary example being the Irish DSA’s decision in September 2021 (following a binding decision by the EDPB) to calculate WhatsApp Ireland’s fine by reference to the global revenues of its parent, Facebook, Inc. The prevailing view is therefore that DSAs would need a reason to look beyond the turnover of the entity actually in breach. In WhatsApp’s case, the Irish DSA increased the fine to €225m to reflect Facebook, Inc.’s larger revenues, rather than WhatsApp Ireland’s lower revenues, to ensure the fine was “effective, proportionate and dissuasive” in accordance with Article 83(1). Essentially, the Irish DSA invoked the parental liability doctrine to ensure the fine was punitive.
ADDRESSING THE RISK
DSAs expect parent organizations to have oversight of, and take active responsibility for, ensuring that subsidiaries are GDPR compliant; the parental liability doctrine will not allow parents to escape liability for subsidiary breaches simply because the parent was not aware of, or involved in, the breach. This is one of the many reasons it is vital that, pre-acquisition, thorough privacy due diligence is conducted on a target to understand its compliance posture and identify any compliance gaps, and that, post-acquisition, steps are taken to address material noncompliance. Irrespective of the target’s compliance posture, it is prudent to conduct regular GDPR (and indeed broader privacy compliance) gap assessments and audits in order to remediate any problem areas in short order, so as to mitigate the risk such issues present to the target and to the parent.
Glossary
- AADC: Age-Appropriate Design Codes
- AG: Attorney General
- AI: Artificial intelligence
- BIPA: Illinois Biometric Information Privacy Act
- CCPA: California Consumer Privacy Act
- CIPA: California Invasion of Privacy Act
- CISA: Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency
- CJEU: Court of Justice of the European Union
- Consumer Health Data: personal information that is linked or reasonably linkable to a consumer and that identifies the consumer’s past, present, or future physical or mental health status (e.g., individual health conditions, treatment, diseases, or diagnosis; social, psychological, behavioral, and medical interventions; health-related surgeries or procedures; use or purchase of prescribed medication; bodily functions, vital signs, symptoms, or measurements of the same; diagnoses or diagnostic testing, treatment, or medication; gender-affirming care information; reproductive or sexual health information; biometric data; genetic data; precise location information that could reasonably indicate a consumer’s attempt to acquire or receive health services or supplies; data that identifies a consumer seeking health care services; or any information processed to associate or identify a consumer with such data that is derived or extrapolated from nonhealth information).
- COPPA: Children’s Online Privacy Protection Act
- CPPA: California Privacy Protection Agency
- CRB: Cybersecurity and Resilience Bill
- DMA: Digital Markets Act
- DOJ: U.S. Department of Justice
- DPDI: Data Protection and Digital Information Bill
- DPF: Data Privacy Framework
- DPIA: Data Protection Impact Assessment
- DSA: data supervisory authority
- DUA: Data (Use and Access) Bill
- ECPA: Electronic Communications Privacy Act
- EDPB: European Data Protection Board
- EO: Executive Order
- EU: European Union
- FCC: U.S. Federal Communications Commission
- FTC: U.S. Federal Trade Commission
- GDPR: General Data Protection Regulation
- GIPA: Illinois Genetic Information Privacy Act
- HHS: U.S. Department of Health and Human Services
- HIPAA: Health Insurance Portability and Accountability Act of 1996
- ICO: UK Information Commissioner’s Office
- NPRM: Notice of Proposed Rulemaking
- PADFAA: Protecting Americans’ Data from Foreign Adversaries Act of 2024
- PECR: Privacy and Electronic Communications (EC Directive) Regulations 2003
- PHI: Protected Health Information
- SEC: U.S. Securities and Exchange Commission
- Sensitive Personal Data: data revealing racial or ethnic origin, religious beliefs, mental or physical health conditions or diagnoses, sex life and sexual orientation (which may include gender expression), citizenship or immigration status, biometric data, personal data of a known child, and precise geolocation data.
- U.S.: United States
- UK: United Kingdom
- VPPA: Video Privacy Protection Act