An Exhaustive Analysis of Biometric Privacy Litigation: The Cases of Meta Platforms and TikTok
Informational Resource
This document analyzes public legal proceedings and settlements for research and educational purposes only; it is not legal advice.
The Genesis of Litigation: The Foundational Role of Illinois' BIPA
The modern landscape of biometric privacy litigation in the United States was fundamentally shaped by the enactment of the Illinois Biometric Information Privacy Act (BIPA) in 2008. This legislation established one of the most stringent regulatory frameworks for biometric data in the nation, creating a private right of action that empowered individuals to sue companies for statutory violations without needing to prove actual harm or financial loss. The genesis of this powerful statute lies in legislative concern over the commercialization of uniquely sensitive personal identifiers, exemplified by the bankruptcy of Pay By Touch, a fingerprint-scanning payment company. The law explicitly defines "biometric identifier" to include retina or iris scans, fingerprints, voiceprints, and scans of hand or face geometry, emphasizing their "biologically unique" and irreplaceable nature. The foundational rationale embedded within the statute itself highlights the profound risk posed by compromised biometric data, which cannot be changed like a password or credit card number, making it a severe and irreversible privacy threat. BIPA mandates that private entities obtain an informed written release prior to collecting any biometric identifier, provide written notice of the specific purpose and length of time the data will be collected, and publish a publicly available retention schedule and destruction guidelines.
The true catalyst for widespread litigation, however, was not just the law's provisions but a series of landmark judicial interpretations that transformed it from a niche consumer-protection act into a formidable legal weapon. The pivotal moment arrived in January 2019 with the Illinois Supreme Court's unanimous ruling in Rosenbach v. Six Flags Entertainment Corporation. The court held that any violation of BIPA's statutory requirements, such as failing to obtain written consent or to provide the required notice, is itself an actionable injury. The ruling rejected the appellate court's holding that a plaintiff needed to allege additional harm, such as identity theft or pecuniary loss, before suing. By establishing that a statutory violation alone renders a person "aggrieved" and entitled to sue, Rosenbach opened the floodgates for class-action lawsuits, as plaintiffs could now bring claims based solely on non-compliance with the law's technical requirements. The precedent was reinforced in the Facebook litigation itself, where the U.S. District Court for the Northern District of California declined to dismiss the claims on extraterritoriality grounds, reasoning that BIPA could reach the scanning of Illinois users' photos even though Facebook's servers were located elsewhere. This combination of a robust private right of action and a low standing barrier created a perfect storm for litigation, leading to major settlements against tech giants such as Google and Snapchat and, ultimately, Facebook and TikTok.
The technological mechanism at the heart of these disputes is the extraction of "face geometry," a process that converts human faces into numerical representations for identification purposes. Both Facebook and TikTok were accused of engaging in this practice without adhering to BIPA's strict notice-and-consent requirements. For Facebook, the focus was on its "Tag Suggestions" feature, launched around 2010-2011, which used facial recognition software to scan uploaded photos and videos to create graphical representations of users' facial features. This system operated automatically, scanning virtually every face in every uploaded photo without notifying users or obtaining their explicit written consent. The process involved detecting fiducial points on the face (eyes, nose, mouth) and calculating the complex spatial relationships between them to generate a unique "face signature" or "face template." These templates were then stored in a massive database, described as "so large that it dwarfs the FBI's," to enable future automatic tag suggestions. Similarly, TikTok's alleged violations centered on its use of proprietary facial recognition technology to scan every video uploaded to the app, extracting geometric data to create facial templates without user notice or consent. This data was allegedly used for various purposes, including verifying a user's age before applying filters, powering augmented reality features, and profiling users for targeted advertising. The core allegation in both cases was that the companies were harvesting the most intimate data of their users—their unique biological identifiers—to build vast surveillance systems while simultaneously violating the clear, written mandates of Illinois law. The legal battles that ensued would test the limits of BIPA and establish new precedents for corporate accountability in the digital age.
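To make the notion of a "face template" concrete, the sketch below shows the general idea in miniature: detect a handful of landmark points, then encode the spatial relationships between them as a normalized numeric vector. This is a generic illustration only; it is not Facebook's DeepFace pipeline or TikTok's proprietary system, and the landmark names, coordinates, and normalization choice are illustrative assumptions.

```python
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical landmark coordinates (in pixels) that a face detector might
# return for one photo: eye centers, nose tip, mouth corners, chin.
landmarks = {
    "left_eye":    (112.0, 140.0),
    "right_eye":   (188.0, 141.0),
    "nose_tip":    (150.0, 185.0),
    "mouth_left":  (122.0, 224.0),
    "mouth_right": (178.0, 225.0),
    "chin":        (151.0, 280.0),
}

def face_geometry_template(points):
    """Convert landmark coordinates into a 'face geometry' feature vector.

    Each feature is the distance between a pair of landmarks divided by the
    inter-eye distance, so the vector does not depend on image resolution.
    Production systems use many more points and learned embeddings; this
    only shows why such a vector can serve as a persistent identifier.
    """
    names = sorted(points)
    eye_span = dist(points["left_eye"], points["right_eye"])
    return [dist(points[a], points[b]) / eye_span
            for a, b in combinations(names, 2)]

template = face_geometry_template(landmarks)
print(f"{len(template)} normalized pairwise distances, e.g. {template[:3]}")
```

Because such a vector is derived from stable facial structure rather than from any one photo, two images of the same person yield nearly identical templates, which is precisely the "biologically unique" and unchangeable property that leads BIPA to treat face geometry as a protected identifier.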
The Facebook Case Study: A $650 Million Settlement and Its Aftermath
The legal challenge against Meta Platforms, Inc. (formerly Facebook, Inc.) stands as a landmark case that tested the full force of Illinois' Biometric Information Privacy Act (BIPA) and resulted in one of the largest privacy-related settlements in U.S. history. The lawsuit, initially filed in 2015 by attorney Jay Edelson, centered on Facebook's "Tag Suggestions" feature, which had been enabled by default for all users since its launch in 2011. The complaint alleged that this feature used facial recognition software to scan millions of uploaded photos and videos, extract the unique geometric data of faces, and store these "face templates" in a massive database without providing the written notice or obtaining the informed written consent mandated by BIPA §§ 15(a) and 15(b). The sheer scale of the alleged violation was staggering; the system scanned billions of faces over more than a decade, capturing a protected biometric identifier for countless users. The legal foundation for the case was built upon the Illinois Supreme Court's Rosenbach precedent, which confirmed that a statutory violation of BIPA constituted a concrete injury, thereby granting plaintiffs standing to sue without proving actual harm. This interpretation allowed the case to proceed as a class action, eventually certified to include Illinois residents who had a face template created and stored by Facebook after June 7, 2011.
The settlement negotiations culminated in a significant financial resolution. Initially proposed at $550 million, the settlement fund was increased to $650 million after Judge James Donato of the Northern District of California questioned the adequacy of the original figure; he granted final approval of the $650 million settlement on February 26, 2021. The settlement guaranteed that approximately 1.6 million eligible Illinois claimants would receive at least $345 per person; earlier estimates had placed individual payments between $200 and $400, and supplemental payments were issued after the initial distribution. Settlement administration was handled by Gilardi & Co. LLC, and claim forms had to be submitted by November 23, 2020. Eligibility was strictly defined: a user had to have been located in Illinois for at least six months (183 days) and have had a face template created and stored by Facebook after June 7, 2011. The case was fiercely litigated; defense experts contended that Facebook's technology analyzed pixel values rather than human-recognizable facial features, creating a genuine factual dispute that could have gone to trial.
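The relationship between the headline fund and the per-claimant floor is straightforward pro rata arithmetic once fees and costs come off the top. A minimal sketch follows; the deduction rate is an assumption for illustration only, not the court-approved fee award.

```python
fund = 650_000_000          # gross settlement fund
approved_claims = 1_600_000  # approximate number of approved Illinois claims

# Assumed combined deduction for attorneys' fees, costs, and administration.
# Illustrative only; the actual amounts were fixed in the final approval order.
assumed_deduction_rate = 0.15

net_fund = fund * (1 - assumed_deduction_rate)
per_claimant = net_fund / approved_claims
print(f"~${per_claimant:,.2f} per approved claim")  # lands near the $345 floor
```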
Beyond the monetary compensation, the Facebook settlement imposed sweeping injunctive relief designed to fundamentally alter the company's biometric data practices. As a core component of the agreement, Meta was required to implement binding technical and policy changes to ensure future compliance with BIPA. First, the company agreed to set the global default for its "Face Recognition" feature to 'off' for all users who had not previously affirmatively opted in or provided express consent. Second, Facebook committed to deleting all existing and stored face templates for class members unless the users subsequently provided express consent after receiving a separate, specific disclosure detailing the intended use of their biometric data. Third, the settlement mandated the deletion of face templates for any users who had been inactive on Facebook for three years. Furthermore, in the wake of the settlement, Meta announced in late 2021 that it was shutting down its facial recognition system entirely, a move that included the deletion of faceprint data for over one billion people. This comprehensive package of injunctive relief demonstrates a clear strategic shift away from the mass, default-enabled collection of biometric data toward a model predicated on explicit, affirmative user consent. The case was handled by a team of prominent firms, with plaintiffs represented by Edelson PC, Labaton Sucharow LLP, and Robbins Geller Rudman & Dowd LLP, while defendants were represented by Cooley LLP. The aggressive litigation strategy of plaintiffs' counsel was credited by the Office of the Attorney General in a separate, related case as foundational to achieving a historic outcome, underscoring the critical role of expert legal representation in enforcing biometric privacy rights.
Facebook BIPA Settlement Key Facts
| Item | Detail |
|---|---|
| Case Name | In re Facebook Biometric Info. Privacy Litig. |
| Settlement Amount | $650 Million |
| Jurisdiction | Primarily Illinois, with extraterritorial application |
| Legal Basis | Illinois Biometric Information Privacy Act (BIPA) |
| Core Violation | Unauthorized collection and storage of facial templates via 'Tag Suggestions' feature without written consent |
| Eligible Class | Illinois residents who used Facebook, lived in Illinois for ≥6 months, and had a face template created/stored after June 7, 2011 |
| Estimated Payout | At least $345 per eligible claimant |
| Key Injunctive Relief | Set Face Recognition default to 'off'; delete existing templates for class members unless re-consented; delete templates of inactive users after 3 years |
| Admission of Wrongdoing | Maintained no admission of wrongdoing |
| Lead Counsel (Plaintiffs) | Edelson PC, Labaton Sucharow LLP, Robbins Geller Rudman & Dowd LLP |
| Final Approval Date | February 26, 2021 |
The TikTok Case Study: A $92 Million Settlement Amidst National Security Scrutiny
The legal challenges against TikTok, owned by the Chinese company ByteDance, unfolded concurrently with those against Facebook but presented a distinct set of circumstances, particularly concerning its younger user base and geopolitical context. The first BIPA class-action lawsuit against TikTok was initiated in April 2020 by Baer Law LLC, representing Illinois children who used the app. The complaints alleged that TikTok's proprietary facial recognition technology scanned every face in uploaded videos to collect and store facial landmarks and other biometric information without providing the required notice or obtaining informed written consent as mandated by BIPA. These allegations were particularly potent given that a significant portion of TikTok's user base consists of minors, raising heightened concerns under laws protecting children's data. The lawsuits claimed TikTok's technology served multiple purposes, including powering popular AR filters, stickers, and face-tracker lenses, as well as estimating a user's age by running an algorithm over the scanned face before applying age-restricted features. The company disputed these allegations, arguing that its demographic classification algorithms did not create facial templates and were therefore not subject to BIPA, but chose to settle to avoid prolonged litigation. The settlement negotiations were also influenced by political pressure, including then-President Trump's August 2020 executive order threatening to ban TikTok's U.S. operations, which gave ByteDance an incentive to resolve its liabilities ahead of a potential sale.
The resulting settlement, approved by U.S. District Judge John Z. Lee of the Northern District of Illinois on August 23, 2022, amounted to $92 million, resolving multidistrict litigation involving 21 federal lawsuits consolidated against TikTok and its parent company, ByteDance. The settlement fund was allocated to two classes: a Nationwide Class comprising all U.S. TikTok users who used the app prior to preliminary approval (estimated at 89 million), and an Illinois Subclass for Illinois residents who created one or more videos on the app during the same period (estimated at 1.4 million). To reflect BIPA's enhanced statutory protections, Illinois residents received six pro rata shares of the settlement fund, while nationwide class members received one share each. Consequently, payouts varied significantly, with valid Illinois claims receiving approximately $163.13 and valid nationwide claims receiving around $27.19. The settlement required TikTok to cease collecting or storing users' biometric information, geolocation data, or clipboard data without explicit disclosure in its privacy policy. It also prohibited the pre-uploading of user-generated content and the transmission of U.S. user data outside the country. Additionally, TikTok was required to implement a new privacy compliance training program for its employees and contractors and hire a third-party firm to review this training for three years. Like its counterpart, TikTok denied any wrongdoing but stated it sought to avoid lengthy litigation and focus on building a "safe and joyful experience" for its community.
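The six-to-one share structure fully determines the two payout levels once the net fund and the validated claim counts are fixed. The sketch below shows the allocation formula; the net-fund and claim-count inputs are hypothetical placeholders chosen only to land near the reported ~$27 and ~$163 figures.

```python
def allocate_tiktok_fund(net_fund, nationwide_claims, illinois_claims,
                         illinois_weight=6):
    """Pro rata split: every valid claim receives one share, and valid
    Illinois claims receive `illinois_weight` shares to reflect BIPA's
    stronger statutory protections."""
    total_shares = nationwide_claims + illinois_weight * illinois_claims
    share_value = net_fund / total_shares
    return share_value, illinois_weight * share_value

# Hypothetical inputs for illustration; the real net fund (after fees and
# costs) and claim counts were determined during settlement administration.
nationwide_payout, illinois_payout = allocate_tiktok_fund(
    net_fund=60_000_000, nationwide_claims=1_000_000, illinois_claims=200_000)
print(f"nationwide ~${nationwide_payout:.2f}, Illinois ~${illinois_payout:.2f}")
```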
The TikTok case was further complicated by significant national security concerns stemming from ByteDance's Beijing-based headquarters and its perceived ties to the Chinese Communist Party. Multiple lawsuits alleged that U.S. user data, including biometric information, was sent to servers in China accessible by Chinese employees. Leaked internal recordings, reported in 2022, indicated that China-based ByteDance employees had repeated access to nonpublic data about U.S. TikTok users, contradicting public claims of data separation. While TikTok maintained that its U.S. recommendation engine is stored and operated in the Oracle Cloud on servers located in the United States under full U.S. control, these allegations fueled intense scrutiny from the Committee on Foreign Investment in the United States (CFIUS). This geopolitical dimension added another layer of complexity and urgency to the settlement discussions, influencing ByteDance's strategic decision to resolve the civil litigation. The settlement also followed earlier penalties against TikTok for violating children's privacy, including a $5.7 million fine imposed by the Federal Trade Commission in 2019 for collecting data from users under 13 without parental consent, highlighting a pattern of regulatory scrutiny over its data practices involving minors. The case serves as a critical example of how data privacy litigation can intersect with international relations and national security imperatives, creating unique pressures that shape corporate legal strategies.
TikTok BIPA Settlement Key Facts
| Item | Detail |
|---|---|
| Case Name | In re TikTok Inc. Consumer Privacy Litig. |
| Settlement Amount | $92 Million |
| Jurisdiction | Nationwide Class plus an Illinois Subclass |
| Legal Basis | Illinois Biometric Information Privacy Act (BIPA) |
| Core Violation | Unauthorized collection of facial landmarks for filters and age verification without consent |
| Eligible Classes | Nationwide Class (89 million U.S. users); Illinois Subclass (1.4 million Illinois users) |
| Estimated Payouts | ~$163.13 per valid Illinois claim; ~$27.19 per valid nationwide claim |
| Key Injunctive Relief | Stop collecting/storing biometric/geolocation data; stop pre-uploading content; implement employee privacy training |
| Admission of Wrongdoing | Disputed allegations but settled to avoid prolonged litigation |
| Procedural History | Preliminary approval granted Oct 7, 2021; Final approval granted Aug 23, 2022 |
| Lead Counsel (Plaintiffs) | Plaintiffs' Leadership Group appointed by the court |
Comparative Analysis: Jurisdictional Power and Financial Disparities in Biometric Privacy Enforcement
A direct comparison of the biometric privacy settlements against Meta Platforms and TikTok reveals a stark contrast in financial scale, which is primarily attributable to the differing legal strategies and jurisdictions employed. The combined value of the Facebook ($650 million) and TikTok ($92 million) settlements under Illinois' Biometric Information Privacy Act (BIPA) totals $742 million. In sharp contrast, the State of Texas secured a $1.4 billion settlement with Meta, nearly double the combined value of the two BIPA actions. This dramatic disparity underscores the immense leverage a single-state attorney general wields compared to a consumer-led class action. The Texas lawsuit, filed by Attorney General Ken Paxton in 2022, was not a class action but a direct enforcement action brought by the state government against Meta. It alleged that Meta violated the Texas Capture or Use of Biometric Identifier Act (CUBI) by automatically enabling its facial recognition feature since 2011 for millions of Texans, using the technology to capture and retain their facial geometry for over a decade without informed consent. The sheer size and scope of this single-state enforcement action allowed for a much larger recovery, demonstrating that a well-resourced state agency can achieve results far exceeding those of a typical consumer class action, where the total payout is divided among tens of millions of eligible individuals, drastically reducing the per-capita value.
While the financial outcomes diverge significantly, the underlying technological practices and the resulting injunctive relief show notable convergence. Both the Facebook and TikTok settlements stemmed from the unauthorized collection of "face geometry" through their respective platforms' core features: Facebook's "Tag Suggestions" and TikTok's video scanning for filters and age verification. In both instances, the fundamental legal violation was the failure to obtain informed written consent and provide required notices, a cornerstone of BIPA. The post-settlement remedies reflect a shared industry trend toward greater user control over biometric data. Following the Facebook settlement, Meta was forced to globally disable facial recognition by default and require users to opt in, a change that led to the eventual shutdown of the entire system and deletion of over a billion face templates. Similarly, the TikTok settlement imposed broad injunctive relief prohibiting the collection of biometric information without explicit disclosure in the privacy policy, effectively forcing a similar shift to an opt-in model. These changes signal a clear retreat from the era of default-enabled, mass-scale biometric scanning and represent a tangible victory for privacy advocates, driven by the legal risks posed by states like Illinois and Texas. However, the Texas settlement went a step further by requiring Meta to notify the state's Attorney General of any future activities falling under biometric data laws, establishing a novel monitoring mechanism designed to prevent future violations.
Settlement Comparison Analysis
| Feature | Facebook (BIPA - $650M) | TikTok (BIPA - $92M) | Texas (CUBI - $1.4B) |
|---|---|---|---|
| Legal Basis | Illinois Biometric Information Privacy Act (BIPA) | Illinois Biometric Information Privacy Act (BIPA) | Texas Capture or Use of Biometric Identifier Act (CUBI) |
| Plaintiff | Class of Illinois Residents | Class of U.S. Residents (Nationwide + Illinois Subclass) | State of Texas |
| Core Violation | Unauthorized collection/storing of facial templates via 'Tag Suggestions' | Unauthorized collection of facial landmarks for filters and age verification | Unauthorized capture/use of facial geometry through facial recognition software |
| Settlement Amount | $650 Million | $92 Million | $1.4 Billion |
| Distribution Model | Divided among ~1.6M Illinois claimants ($345+ each) | Divided among ~1.2M valid claims (~$163/IL, ~$27/NW) | All funds go to the State of Texas |
| Key Injunctive Relief | Global opt-in default; deletion of existing templates; deletion of inactive users' data | Prohibition on collecting biometric/geolocation data without disclosure; employee training | Notification to Texas AG of future activities; deletion of >1B people's data; injunction against future CUBI/DTPA violations |
| Admission of Wrongdoing | Maintained no admission of wrongdoing | Disputed allegations but settled to avoid prolonged litigation | Denied liability as part of the settlement compromise |
| Procedural Significance | Landmark BIPA settlement; affirmed extraterritorial application of BIPA | Demonstrated vulnerability of apps with large youth user bases to BIPA litigation | First major CUBI enforcement action; largest single-state privacy settlement ever |
This comparative analysis reveals that while the core technology—unauthorized facial scanning—is consistent across cases, the ultimate outcome is heavily influenced by jurisdiction. The BIPA framework, with its private right of action, proved highly effective for consumer-focused litigation, leading to significant financial settlements and substantive changes in corporate policy. However, the Texas CUBI settlement demonstrates that a direct, government-led enforcement action can achieve a dramatically higher financial penalty, likely due to the inability to divide the recovery among a massive class of individual claimants. This distinction is crucial for understanding the evolving ecosystem of biometric privacy regulation in the United States, where state-level attorneys general are increasingly becoming powerful agents of enforcement, complementing and sometimes surpassing the reach of class-action lawyers.
Beyond Facial Templates: The Broader Landscape of Algorithmic Profiling and Emerging Technologies
While the high-profile settlements against Facebook and TikTok focused on the unauthorized collection of "face geometry" for identity matching, the underlying technologies and broader data practices of these companies extend far beyond simple facial recognition. The legal and ethical debates surrounding biometric privacy are rapidly expanding to encompass more sophisticated forms of data analysis, including emotion AI and remote photoplethysmography (rPPG). Emotion AI refers to the use of algorithms to analyze facial expressions, vocal tones, or other behavioral cues to infer a person's emotional state, such as happiness, anger, or stress. rPPG is a non-contact optical technique that uses a camera to detect subtle changes in skin color caused by blood flow, allowing for the estimation of physiological signals like heart rate and respiration rate. Although neither Facebook's DeepFace system nor TikTok's filter technology was alleged to perform these functions in the initial BIPA lawsuits, the infrastructure for such capabilities exists within their platforms. Companies like Facebook (via acquisition of FacioMetrics) and Apple (via acquisition of Emotient) have explicitly entered the emotion recognition space, signaling the industry's trajectory towards deeper affective computing. The potential for misuse is significant, as these technologies could be used for manipulative advertising, psychological profiling, or even discrimination based on inferred personality traits or health conditions.
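As a concrete illustration of how little is technically required for rPPG, the sketch below estimates a pulse rate from nothing more than the per-frame average green-channel brightness of a face region, using synthetic data. It assumes the face region has already been located and makes no claim about how any particular platform processes video; it simply shows why ordinary camera footage can yield physiological signals.

```python
import numpy as np

def estimate_heart_rate_bpm(green_means, fps):
    """Estimate pulse from per-frame mean green-channel intensity of a face
    region (remote photoplethysmography). Blood flow modulates skin color
    slightly, so the pulse appears as the dominant frequency between
    0.7 and 3.0 Hz (42-180 beats per minute)."""
    signal = green_means - green_means.mean()        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)           # physiological range
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic 10-second clip at 30 fps with a 72-bpm pulse buried in noise.
fps, seconds, pulse_hz = 30.0, 10, 72 / 60
t = np.arange(int(fps * seconds)) / fps
green = 0.5 * np.sin(2 * np.pi * pulse_hz * t) + np.random.normal(0.0, 0.2, t.size)
print(f"estimated heart rate ~{estimate_heart_rate_bpm(green, fps):.0f} bpm")
```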
The Federal Trade Commission (FTC) has begun to address these emerging threats directly. In a May 2023 Policy Statement on Biometric Information, the FTC warned that unsubstantiated marketing claims about the accuracy, reliability, or fairness of biometric AI, including technologies that claim to classify emotions or other personality traits, can constitute deceptive practices under Section 5 of the FTC Act. This framework establishes a new frontier of legal risk for companies deploying advanced biometric analytics. The FTC's concerns are rooted in the "vast surveillance" conducted by social media and streaming services, as detailed in its September 2024 staff report, 'A Look Behind the Screens'. The report found that companies like Meta and TikTok engage in extensive data collection from both users and non-users, using algorithms, data analytics, and AI to infer highly sensitive demographic and personal attributes, such as familial status, household income percentile, and locations visited. This practice of algorithmic profiling creates a "surreptitious and unexpected collection or use of biometric information," which the FTC identifies as a factor in determining unfairness under its authority. The risks are amplified when this data is used to make automated decisions that can lead to negative consequences, such as denial of employment or housing, or to amplify privacy harms by revealing sensitive inferences about an individual's life, such as healthcare visits or political affiliations.
The legal implications of these advanced technologies are still developing. The initial BIPA settlements were grounded in the clear definition of "face geometry" as a biometric identifier. However, the line between identifying a person and inferring their internal state is becoming increasingly blurred. Future litigation may target the collection of data streams that can be processed to yield emotional or physiological insights, even if the raw data itself is not explicitly classified as a biometric identifier under current statutes. The FTC's stance suggests that the substance of the data collection—its surreptitious nature and potential for harm—is becoming as important as its formal legal classification. This evolution marks a critical shift from protecting a user's identity to protecting their autonomy and psychological well-being from algorithmic manipulation. The FTC's call for comprehensive federal privacy legislation reflects the growing consensus that self-regulation has failed and that stronger protections are needed to govern the use of AI in ways that safeguard consumers from these emerging forms of commercial surveillance. The path forward requires a regulatory approach that can keep pace with the rapid advancement of biometric technologies, moving beyond the limitations of static definitions to address the dynamic and often hidden ways in which personal data is harvested and exploited.
The Evolving Legal Frontier: Post-Cothron Uncertainty and the Future of Biometric Regulation
The legal landscape governing biometric privacy, particularly under Illinois' BIPA, is currently in a state of flux, defined by a critical uncertainty surrounding the retroactive application of recent legislative amendments. The foundation of massive damage awards in many BIPA cases was laid by the Illinois Supreme Court's 2023 ruling in Cothron v. White Castle System. In this decision, the court interpreted BIPA's language to mean that each instance of collecting or disclosing a person's biometric data constitutes a separate violation, a doctrine known as "per-scan liability." This interpretation was a departure from the traditional view that a single violation should be counted per aggrieved person. The Cothron ruling effectively allowed for cumulative damages that could escalate rapidly—for instance, a single employee could incur up to $37 million in potential damages over five years under the statute's maximum penalty of $5,000 per violation.
Recognizing the potential for this to create "annihilative liability" that could cripple businesses, the Illinois legislature acted swiftly. On August 2, 2024, Governor J.B. Pritzker signed Public Act 103-0769, amending BIPA to clarify that repeated collection or disclosure of the same biometric identifier from the same person using the same method constitutes a single violation. This legislative fix was designed to overturn the Cothron court's interpretation and limit liability to one statutory award per aggrieved person.
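The practical difference between the two accrual rules is easiest to see as arithmetic. The sketch below contrasts per-scan exposure under Cothron with per-person exposure under the 2024 amendment for a single hypothetical employee; the scan frequency is an assumption chosen to roughly reproduce the $37 million figure cited above, and $5,000 is BIPA's maximum statutory award for an intentional or reckless violation.

```python
def bipa_exposure(scans_per_day, days, per_violation=5_000, per_scan=True):
    """Statutory damages for one person under the two accrual theories:
    per_scan=True counts every collection as a violation (Cothron),
    per_scan=False counts one violation per person per collection method
    (Public Act 103-0769)."""
    violations = scans_per_day * days if per_scan else 1
    return violations * per_violation

days = 5 * 365  # five years of daily scans in a hypothetical timeclock scenario
print(f"per-scan (Cothron):          ${bipa_exposure(4, days, per_scan=True):,}")
print(f"per-person (2024 amendment): ${bipa_exposure(4, days, per_scan=False):,}")
# per-scan: $36,500,000 for a single employee; per-person: $5,000
```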
However, the central and unresolved question is whether this amendment applies retroactively to conduct that occurred before its effective date. As of late 2025, courts remain deeply divided on this issue, creating significant legal uncertainty for thousands of pending BIPA cases. Several federal courts have ruled that the amendment applies prospectively only, holding that it represents a substantive change in the law rather than a clarification of legislative intent, and thus cannot apply to past conduct without explicit language stating so. For example, Judge Alexakis in Schwartz v. Supply Network, Inc. adopted this prospective-only view. Conversely, some state courts have ruled that the amendment applies retroactively, reasoning that the legislature's action was merely clarifying the statute's original intent, as suggested by the Illinois Supreme Court in Cothron. This split in judicial interpretation means that plaintiffs and defendants in older cases face a high degree of unpredictability. If the amendment is ultimately found to be retroactive, it could dramatically reduce the potential recovery in many long-standing lawsuits. If it is found to be prospective, the full force of the Cothron interpretation remains intact for past conduct, preserving the possibility of multi-million dollar verdicts. This ongoing legal battle, which may eventually require resolution by the Illinois Supreme Court, has profound implications for the future of BIPA litigation and corporate exposure.
Looking ahead, the success of state-level laws like BIPA and Texas's CUBI highlights the inadequacy of relying on industry self-regulation to protect consumer privacy. The FTC has repeatedly called for comprehensive federal privacy legislation to establish a baseline standard for data collection, sharing, and retention practices, and to prohibit the use of sensitive data for discriminatory ad targeting. Any future federal bill will need to grapple with the lessons learned from the Facebook and TikTok cases: the importance of strong consent mechanisms, the dangers of indefinite data retention, and the need for transparency in how personal data is used. The emergence of advanced technologies like emotion AI further complicates this landscape, demanding regulations that not only protect identity but also safeguard against algorithmic manipulation and psychological profiling. Finally, the phenomenon of the "privacy paradox"—where users express high levels of concern about online privacy yet willingly accept invasive data collection practices to access popular services like TikTok—remains a persistent challenge. This behavioral reality helps explain how companies could engage in questionable data practices for years before facing significant legal consequences. Ultimately, the path forward will require a multi-pronged approach combining robust state-level enforcement, proactive federal legislation, and a continued effort to educate consumers about the true costs of free digital services in an era of pervasive surveillance.
References
1. Attorney General Ken Paxton Secures $1.4 Billion Settlement from Meta Over Unauthorized Capture of Biometric Data. Texas Attorney General Office.
2. Meta to pay Texas $1.4 billion for using facial recognition data without consent. Texas Tribune.
3. McKool Smith Secures Record-Breaking $1.4 Billion Settlement for State of Texas. McKool Smith.
4. Meta Platforms to Pay $1.4 Billion to Settle Texas Lawsuit Over Facial Recognition. Reuters.
5. In re Facebook Biometric Info. Privacy Litig. Law firm case documentation.
6. Historic Biometric Privacy Suit Settles for $650 Million. American Bar Association.
7. Texas Biometrics Case Highlights Need for Consent: Meta Settles for $1.4 Billion. Vedder Price.
8. Facebook's $650M BIPA settlement 'a make-or-break moment for privacy litigation'. IAPP.
9. Meta's Bid to Dismiss Biometric Privacy Class Action Rejected. Nelson Mullins.
10. Meta reaches $1.4bn settlement with Texas over privacy lawsuit. The Guardian.
11. FTC Staff Report Finds Large Social Media and Video Streaming Companies Have Engaged in Vast Surveillance. FTC.
12. FTC v. Meta Platforms: Plaintiff's Opening Statement. FTC.
13. Examining the Data Practices of Social Media and Video Streaming Companies. FTC.
14. F.T.C. Study Finds 'Vast Surveillance' of Social Media Users. New York Times.
15. FTC Report on Streaming and Social Media Companies Emphasizes Privacy, Security, and AI-Related Risks. EPIC.
16. Meta To Pay $1.4B For Unauthorized Use Of Biometric Data. Hall Booth Smith.
17. Baer Law LLC Initiates First BIPA Class Action Lawsuit in Illinois Against TikTok. Baer Law LLC.
18. Tik Tok's $92 million settlement, limiting what companies can do with user data. University of Maryland Law.
19. TikTok settlement highlights power of privacy class actions to shape U.S. protections. IAPP.
20. Biometric Information Privacy Act (BIPA) Litigation Tracker. Stop Spying.