Research Team (Tsaaro)
California Ramps Up Privacy Enforcement with Data Broker Crackdown and Major Opt-Out Fine

The state of California is actively enforcing privacy regulations under the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA). Recent actions by the California Privacy Protection Agency (CPPA) and the Attorney General’s Office show a heightened focus on data broker accountability and upholding consumer rights.
Data Broker Enforcement Strike Force
The CPPA has introduced a new “Data Broker Enforcement Strike Force” to ensure compliance within the data broker sector. The team’s core goal is to enforce current CCPA rules and prepare the industry for the new Delete Act, effective from 1 January 2026. The Act requires registered brokers to participate in the consumer-friendly Delete Request and Opt-out Platform (DROP), giving consumers a single method to request data deletion. This proactive measure aims to address public concern over the potential misuse of Californian residents’ personal information.
Major Settlement for Opt-Out Failures
Concurrently, the California Attorney General’s Office secured a $1.4 million settlement in civil penalties from a mobile app gaming company. The fine was imposed for the company’s failure to implement proper, CCPA-compliant opt-out mechanisms across its platforms. This was deemed a serious breach, particularly because the company sold the data of minors without obtaining prior opt-in consent, violating the CCPA’s enhanced protections for children.
The settlement requires the company to implement compliant opt-out mechanisms for the general public and mandatory opt-in consent for minors, alongside a three-year compliance monitoring period. The key takeaway for all businesses is the necessity of ensuring opt-out processes are straightforward, effective, and easily accessible to consumers.
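To make that takeaway concrete, here is a minimal TypeScript sketch, assuming a simple Express server, of what honouring opt-outs server-side can look like. The route name, the x-user-id header, the in-memory user store, and the isMinor/minorOptIn fields are illustrative assumptions, not details from the settlement; the Global Privacy Control (Sec-GPC) header is one opt-out signal that California’s regulations require businesses to honour.

```typescript
// Hypothetical sketch of CCPA-style opt-out handling; names and
// storage are illustrative assumptions, not the settling company's code.
import express, { Request, Response, NextFunction } from "express";

interface UserRecord {
  id: string;
  isMinor: boolean;       // assumed flag from account data
  saleOptedOut: boolean;  // "Do Not Sell/Share" preference
  minorOptIn: boolean;    // affirmative opt-in required for minors
}

const users = new Map<string, UserRecord>(); // stand-in for a real datastore
const app = express();

// A sale/share is permissible only if an adult has not opted out,
// or a minor (or their parent) has affirmatively opted in.
function maySellData(user: UserRecord): boolean {
  return user.isMinor ? user.minorOptIn : !user.saleOptedOut;
}

app.use((req: Request, res: Response, next: NextFunction) => {
  // Treat the Global Privacy Control signal (Sec-GPC: 1) as an
  // opt-out of sale/sharing for the requesting user, if identified.
  const userId = req.header("x-user-id"); // illustrative auth shortcut
  if (req.header("sec-gpc") === "1" && userId && users.has(userId)) {
    users.get(userId)!.saleOptedOut = true;
  }
  next();
});

// An explicit, easy-to-reach opt-out endpoint.
app.post("/privacy/opt-out", (req: Request, res: Response) => {
  const user = users.get(req.header("x-user-id") ?? "");
  if (!user) return res.status(401).send("unknown user");
  user.saleOptedOut = true;
  res.send("Opted out of sale/sharing of personal information.");
});

app.listen(3000);
```

The design point mirrors the settlement’s terms: adults can sell-share by default until they opt out, while minors require an affirmative opt-in before any sale or sharing occurs.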
Defense of State Regulatory Authority
Beyond direct enforcement, the CPPA is actively defending California’s regulatory powers. The agency is opposing federal attempts to preempt or limit state-level rules on the use of Automated Decision-Making Technology (ADMT). California’s ADMT regulations, designed to further protect privacy rights, could affect how employers operate, and the CPPA remains committed to defending these state-level measures against federal challenge.
News of the week
1. Global: Amnesty International Releases Toolkit to Combat AI-Driven Human Rights Abuses
Amnesty International has launched its Algorithmic Accountability Toolkit, a specialised resource designed to empower civil society in tackling the adverse consequences of automated systems. Launched recently, the toolkit provides essential guidance for activists, journalists, and rights defenders globally to investigate, expose, and demand accountability for human rights abuses caused by the rapid deployment of Artificial Intelligence (AI) and Automated Decision-Making (ADM) systems.
The new guidance critically examines algorithmic technologies used across four key public-sector domains: welfare, policing, healthcare, and education.
It offers a sharp assessment of the technology’s capacity to perpetuate exclusion and systemic bias, contrasting this reality with the frequent, unsubstantiated claims of “efficiency” and “societal improvement” made by state actors and corporations. The resource advises civil society organisations (CSOs), community groups, and investigators to establish an organised framework emphasising robust investigation and a multi-pronged strategy. Those challenging these systems must prioritise building collaborative strength to counter abusive practices and conduct meticulous evaluations of AI applications across global jurisdictions, integrating lessons from extensive investigations in nations including Denmark, the Netherlands, India, and the UK.
This proactive stance means confronting essential ethical challenges, particularly how AI enables mass surveillance, undermines the fundamental right to social protection, and restricts freedom of peaceful assembly. It demands active management of the risks that arise where reliance on technology entrenches prejudice and discrimination across societal structures. The toolkit also underscores the necessity of anchoring all accountability efforts in human rights law, identifying this legal foundation as a critical missing piece in standard AI ethics discussions and auditing methodologies. Such protection is not merely technical; it requires sustained collaboration with local communities and organisations committed to the process. The guidance stresses that a multi-method approach remains pivotal: continuous monitoring and investigation of these opaque systems, combined with legal and communication tactics such as strategic litigation and advocacy, are essential prerequisites for securing genuine accountability. Finally, those using the toolkit are urged to align their actions with established human rights norms and to challenge the ongoing, unchecked experimentation and massive state investments in AI development.
2. US Executive Order Targets State AI Laws to Push ‘Minimally Burdensome’ Federal Standard
On 11 December 2025, President Trump issued an Executive Order (EO), “Ensuring a National Policy Framework for Artificial Intelligence,” marking an aggressive federal push to curb and preempt state-level AI regulations, such as those enacted in California and Colorado, that the administration deems overly restrictive. The EO is the formal policy manifestation of the President’s goal to replace the complex, fragmented patchwork of state laws with a single, “minimally burdensome national standard” that prioritises AI innovation.
The EO employs a multi-faceted approach to challenge state authority. It directs the Attorney General to establish a Department of Justice (DOJ) AI Litigation Task Force charged with mounting legal challenges to state AI laws on the grounds that they unconstitutionally regulate interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful. Complementing this, the Department of Commerce is directed to publish a list of onerous state AI laws, which will be referred to the Task Force for potential legal action. Furthermore, the EO leverages federal funding, directing the Department of Commerce to withhold grants under the Broadband Equity, Access, and Deployment (BEAD) Program from states identified as having restrictive AI laws, and requiring all other agencies to assess similar funding restrictions.
Beyond enforcement, the EO seeks to establish a federal regulatory framework through agency action and legislation. The Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) are tasked with developing new federal standards for AI disclosure and consumer protection, with the intent of preempting conflicting state laws. While the EO calls on White House advisors to engage Congress on a federal preemption law, it specifically excludes certain areas from preemption, including child safety, AI compute infrastructure, and state government use of AI. Despite the political push, legal experts note that the constitutional and statutory authority for withholding federal funds and using existing law to strike down state AI measures remains uncertain and will likely face significant challenges in court. Consequently, companies must maintain compliance with existing state laws until those laws are definitively invalidated.
Source: https://www.mofo.com/resources/insights/251213-executive-order-state-ai-laws
3. Norwegian Court Upholds €6.5M Fine Against Grindr for GDPR Data Sharing Breach
Norway’s Borgarting Court of Appeal has dismissed Grindr’s final appeal, upholding a substantial €6.5 million fine imposed by the Norwegian Data Protection Authority for General Data Protection Regulation (GDPR) violations. The court definitively ruled that between July 2018 and April 2020, the dating app unlawfully shared sensitive user data, including app identifiers from which sexual orientation and sexual relations could be inferred, with multiple advertising partners without obtaining valid user consent. The judgement stressed that because Grindr markets itself to the LGBTQ+ community, merely identifying a user as being on the platform constitutes special-category data under Article 9(1) of the GDPR, triggering the highest protection requirements.
The court extensively examined Grindr’s consent mechanism, finding it failed legal requirements because users had no genuine freedom of choice; they were forced to accept blanket data sharing by accepting the privacy policy just to access the service. The judges explicitly rejected the argument that payment options or device settings constituted the voluntary, explicit, and informed consent mandated by the GDPR, particularly given the policy was found to be “unclear, incomplete, and partly misleading.” The final fine, representing approximately 30 percent of the GDPR’s maximum penalty, was deemed appropriate because Grindr acted intentionally, and the court dismissed claims of alignment with industry standards, arguing that widespread non-compliance only reinforces the need for a deterrent effect. The ruling significantly clarifies that conditioning service access on data sharing is prohibited and confirms that non-EEA companies may face separate fines in each EEA country for jurisdiction-specific breaches.
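The consent-bundling point lends itself to a short illustration. Below is a minimal TypeScript sketch, with invented field names rather than anything from Grindr’s actual systems, showing the structure the ruling effectively demands: acceptance of the service terms and consent to ad-partner sharing are separate, independent choices, and refusing the latter never blocks access to the service.

```typescript
// Minimal sketch of granular, freely given consent; field names are
// illustrative assumptions, not taken from the Grindr case itself.
interface ConsentState {
  serviceTerms: boolean;      // needed to use the app at all
  adPartnerSharing: boolean;  // separate, freely refusable choice
}

// Valid consent must be specific and freely given: refusing ad
// sharing must NOT condition access to the service itself.
function canUseService(c: ConsentState): boolean {
  return c.serviceTerms; // independent of adPartnerSharing
}

function mayShareWithAdPartners(c: ConsentState): boolean {
  // Special-category data under Article 9 needs explicit opt-in consent.
  return c.adPartnerSharing === true;
}

// A user who accepts the terms but declines ad sharing keeps full access.
const user: ConsentState = { serviceTerms: true, adPartnerSharing: false };
console.log(canUseService(user));          // true
console.log(mayShareWithAdPartners(user)); // false
```

The contrast with a single bundled “accept the privacy policy” flag is the whole point: one boolean gating both service access and ad sharing is exactly the pattern the court found invalid.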
Source and Image: https://ppc.land/norwegian-court-upholds-eu6-5m-grindr-fine-for-data-sharing-violations/
