• Tsaaro got CERT-In Empanelled | MeitY has published the DPDP Rules, 2025.

    Official PDF

    Get a DPDPA Compliance Plan

Research Team (Tsaaro)

Compliance Blueprint for AI Startups and SMEs: DPDPA 2023 and DPDP Rules 2025

Introduction 

The Digital Personal Data Protection Act (DPDPA), enacted on August 11, 2023, permanently reshaped India’s data-driven innovation economy. The DPDP Rules, 2025 translate the Act’s purpose into concrete obligations, with deadlines and safeguards, that all data fiduciaries must meet, including AI-first companies and small and medium-sized enterprises (SMEs). Together, the Act and Rules govern how organisations handle data, provide notice and obtain consent, secure data, report breaches, and protect individuals’ rights. To operate legally and sustainably in India, AI companies and SMEs that use generative AI tools, analytics platforms, recommendation engines, or other machine learning systems must apply this framework across data collection, model training, deployment, and monitoring. The regulation also aligns with India’s AI Governance Guidelines 2025, which highlight Trust as Foundation, People First, Fairness and Equity, Accountability, Understandable by Design, and Safety, Resilience & Sustainability. This blueprint translates the legal language into concrete actions AI startups and SMEs can take to achieve full compliance by May 14, 2027. 

Why This Is Important for Small and Medium-Sized Businesses and AI Startups 

AI startups typically depend on large, fast-changing datasets that may contain personal information about users, staff, and customers. The more complex a model, the more data it tends to ingest and infer from, which raises the risk of breaching the DPDPA. These organisations face several challenges at once: preserving privacy while staying agile, obtaining valid consent at scale, managing cross-border data flows, and ensuring their algorithms do not amplify harm or reinforce bias. Unlike large internet companies, most startups and SMEs have no in-house privacy or legal teams, yet the law applies to all data fiduciaries regardless of size. The policy response recognises this reality: India’s AI governance and data protection framework offers tiered timelines, risk-based duties, and optional reliance on consent managers or third-party infrastructure, giving companies a path to comply without halting innovation. 

Getting to Know the DPDPA 2023 and DPDP Rules 2025

Core Architecture: Processing Based on Consent
Personal data can only be processed with consent or for legitimate purposes under the DPDPA. Any entity that determines the purpose and manner of processing is a Data Fiduciary under Section 4, making it completely accountable for compliance even if it uses external providers. 

Key concepts for AI startups and SMEs: 

  • Data Fiduciary: Your startup, if it decides what data is collected and why (e.g., for training models, ranking content, or personalising experiences). 

  • Data Principal: The individual whose data is processed: users, customers, employees. 

  • Data Processor: Any third‑party processing data for you (cloud provider, annotation vendor, analytics tool). You remain accountable for their actions under the DPDPA. 

  • Algorithmic Processing: Where AI systems influence significant decisions (creditworthiness, hiring, admissions, eligibility, or ranking), transparency and fairness expectations substantially increase. 

The Act allows AI training on personal data only under clear legal bases and with safeguards such as purpose limitation, consent, and transparency regarding downstream uses. 

Three-Phase Implementation: A Realistic Timeline
The DPDP Rules 2025 deliberately avoid demanding instant full compliance. Instead, they provide a sequenced roadmap:
Phase 1: Immediate (November 14, 2025 – Present)
The Data Protection Board of India is operational and accepting complaints. Startups should: 

  • Appoint a privacy lead or focal point. 

  • Map current data flows and model pipelines. 

  • Identify high‑risk processing (e.g., profiling, children’s data, cross‑border transfers). 
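The data-mapping and high-risk screening steps above can be sketched as a lightweight inventory. This is a minimal illustration in Python; the field names and risk criteria are assumptions drawn from the checklist, not a format prescribed by the Rules:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One row in a personal-data inventory (field names are illustrative)."""
    dataset: str
    purpose: str
    categories: list
    cross_border: bool = False
    children_data: bool = False
    profiling: bool = False

    def is_high_risk(self) -> bool:
        # Flag the high-risk markers listed above: profiling,
        # children's data, and cross-border transfers.
        return self.profiling or self.children_data or self.cross_border

inventory = [
    DataFlow("signup_db", "account creation", ["name", "email"]),
    DataFlow("clickstream", "recommendation model training",
             ["interaction logs"], profiling=True),
]
high_risk = [f.dataset for f in inventory if f.is_high_risk()]
print(high_risk)  # ['clickstream']
```

Even a spreadsheet works for Phase 1; the point is that every pipeline feeding a model appears in the inventory with a stated purpose.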

Phase 2: 12-Month Window (by November 14, 2026)
Rule 4 operationalises Consent Manager registration. If your startup plans to rely on third‑party consent platforms, due‑diligence and contractual alignment must begin well before this date. 

Phase 3: 18-Month Full Compliance (by May 14, 2027)
All core obligations become enforceable: compliant notices and consent, robust security safeguards, breach reporting, rights handling, retention and deletion policies, and, where applicable, Significant Data Fiduciary (SDF) duties such as DPIAs and audits.  

Informed Consent: The Basis for Training AI Models
Section 6 requires consent to be free, specific, informed, unconditional, and unambiguous. For AI startups, this means users must understand not just how data is collected but also how it will feed models, shape outputs, or personalise experiences. 

Rule 3: Requirements for Notice and Consent
Rule 3 of the DPDP Rules 2025 sets detailed standards for privacy notices: 

  • Standalone: Notices must be understandable without reading long terms of service. 

  • Plain Language: Avoid jargon; explain AI and data uses in simple terms. 

  • Itemised: Clearly list categories of data (e.g., “location via GPS,” “in‑app interaction logs”). 

  • Purpose-Specific: Differentiate between service delivery, analytics, personalisation, and marketing. 

  • Rights-Centred: Explain rights to withdraw consent, access data, seek correction/erasure, and raise grievances. 
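One way to keep a Rule 3 notice itemised and purpose-specific is to treat it as structured data and lint it before publishing. The schema below is a hypothetical illustration of that idea, not a prescribed format:

```python
# Hypothetical notice payload mirroring the Rule 3 checklist above.
notice = {
    "data_items": ["location via GPS", "in-app interaction logs"],
    "purposes": {
        "service delivery": ["location via GPS"],
        "personalisation": ["in-app interaction logs"],
    },
    "rights": ["withdraw consent", "access data",
               "correction/erasure", "grievance redressal"],
}

def validate_notice(n: dict) -> list:
    """Return a list of problems; an empty list means the checklist passes."""
    problems = []
    if not n.get("data_items"):
        problems.append("notice must itemise data categories")
    # Purpose-specific: every purpose may only reference itemised data.
    for purpose, items in n.get("purposes", {}).items():
        for item in items:
            if item not in n["data_items"]:
                problems.append(f"{purpose!r} references unlisted item {item!r}")
    # Rights-centred: withdrawal must be explained.
    if "withdraw consent" not in n.get("rights", []):
        problems.append("notice must explain the right to withdraw consent")
    return problems

print(validate_notice(notice))  # []
```

A check like this can run in CI whenever the notice copy changes, catching purpose/data-item drift before users see it.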

Practical Implementation for AI Startups
Operationalising these rules in AI-driven environments involves: 

  • Granular Consent: Separate toggles for service provision, analytics, personalisation, and marketing. 

  • AI Training Transparency: Clearly inform users if their interaction data trains models, what decisions it influences, and whether they can opt‑out or seek human review. 

  • Consent Managers: From November 2026, engage only registered Consent Managers that meet net‑worth, technical, and independence criteria under Rule 4. 

  • Audit Trails: Log when notices were served, consents obtained or withdrawn, and which model pipelines used which datasets. This record is critical in enforcement scenarios.  
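An append-only consent audit trail of the kind described above might look like this minimal sketch; the event vocabulary is an assumption, not a statutory format:

```python
from datetime import datetime, timezone

class ConsentLog:
    """Append-only audit trail linking notices, consents, and dataset use."""

    def __init__(self):
        self.events = []

    def record(self, user_id, purpose, action, dataset=None):
        # action is one of: "notice_served", "consent_given",
        # "consent_withdrawn", "dataset_used" (illustrative names).
        self.events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "purpose": purpose,
            "action": action,
            "dataset": dataset,
        })

    def has_valid_consent(self, user_id, purpose):
        # The most recent give/withdraw event for this user+purpose wins.
        state = False
        for e in self.events:
            if e["user_id"] == user_id and e["purpose"] == purpose:
                if e["action"] == "consent_given":
                    state = True
                elif e["action"] == "consent_withdrawn":
                    state = False
        return state

log = ConsentLog()
log.record("u1", "personalisation", "notice_served")
log.record("u1", "personalisation", "consent_given")
log.record("u1", "personalisation", "consent_withdrawn")
print(log.has_valid_consent("u1", "personalisation"))  # False
```

In production this would write to durable, tamper-evident storage; the key property is that withdrawal immediately flips the answer training pipelines query before using a record.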

Data Security for AI Systems: Beyond Encryption 

Basic Security Steps
Rule 6 requires data fiduciaries to implement “reasonable security safeguards” proportionate to the level of risk. At a minimum, AI infrastructures need: 

  • Encryption: AES‑256 at rest for training datasets and user profiles; TLS 1.2+ in transit. 

  • Access Controls: RBAC and MFA for engineers accessing training corpora, model weights, or sensitive logs. 

  • Monitoring and Logging: Systematic logging of access, downloads, and changes; anomaly detection for unusual queries. 

  • Resilience: Regular backups and tested disaster‑recovery plans; clear backup retention limits aligned with legal and business needs. 

Steps Taken by the Organisation 

  • Formal data security and incident response policies. 

  • Mandatory security and privacy training for developers, data scientists, and operations staff. 

  • Rigorous vendor management, ensuring cloud and tooling providers commit contractually to equivalent safeguards and cooperation in incident handling. 

The 72-Hour Breach Notification Mandate: Putting Rule 7 into Action
Rule 7 requires notifying the Data Protection Board within 72 hours of becoming aware of a personal data breach, supported by forensic detail, and informing affected individuals “without undue delay.” 

Two-Stage Notification Process 

Stage 1 – Immediate Individual Notification
Communicate via email, SMS, or in‑app messages: what happened, data affected, likely impacts, remedial steps, and a contact point. 

Stage 2 – 72-Hour Board Notification
Submit a formal report detailing causes, affected datasets, mitigation measures, user notification evidence, and steps to prevent recurrence. 

Practical Breach Response Framework
Startups must avoid improvisation by: 

  • Defining clear incident detection and escalation roles. 

  • Pre‑identifying internal or external forensics teams. 

  • Preparing plain‑language notification templates. 

Pre-Breach Preparation
Maintain an incident response playbook, retain specialist firms, and map the most sensitive systems for enhanced monitoring and faster triage. 
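The 72-hour window is easy to track programmatically once detection timestamps are recorded. A small sketch, assuming breach events are logged in UTC:

```python
from datetime import datetime, timedelta, timezone

BOARD_DEADLINE = timedelta(hours=72)  # Rule 7 reporting window

def board_report_due(detected_at: datetime) -> datetime:
    """When the formal report to the Data Protection Board falls due."""
    return detected_at + BOARD_DEADLINE

def hours_remaining(detected_at: datetime, now: datetime) -> float:
    """Hours left in the Rule 7 window (negative means overdue)."""
    return (board_report_due(detected_at) - now).total_seconds() / 3600

detected = datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc)
now = datetime(2026, 1, 6, 9, 0, tzinfo=timezone.utc)
print(hours_remaining(detected, now))  # 48.0
```

Wiring a countdown like this into the incident tracker keeps the deadline visible to everyone handling the response, rather than buried in the playbook.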

AI Training Data and Algorithmic Fairness 

The DPDPA’s principles of data minimisation, purpose limitation, and accuracy, combined with AI Governance Guidelines on fairness and non‑discrimination, directly shape AI development. 

Data Minimization for AI Models 

  • Collect only what is necessary for defined features. 

  • Prefer interaction signals (clicks, views, purchases) over invasive raw content where feasible. 

  • Use aggregated or pseudonymised datasets when individual‑level linkage is not essential.
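Pseudonymisation can be as simple as keyed hashing, so training rows carry a stable token instead of a raw identifier. A sketch using HMAC-SHA256; key management is deliberately simplified here, and in practice the key must live outside the training pipeline:

```python
import hashlib
import hmac
import secrets

# Keyed hashing gives each user a stable pseudonym that cannot be reversed
# without the secret key; store the key separately from the dataset.
PSEUDO_KEY = secrets.token_bytes(32)

def pseudonymise(user_id: str, key: bytes = PSEUDO_KEY) -> str:
    return hmac.new(key, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

row = {"user_id": "alice@example.com", "clicks": 17}
safe_row = {"user_token": pseudonymise(row["user_id"]), "clicks": row["clicks"]}
# The raw identifier never enters the training corpus, yet the same user
# still maps to the same token across batches.
```

Note that pseudonymised data can still be personal data if re-identification is feasible, so this complements, rather than replaces, the other safeguards.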

Algorithmic Transparency and Bias Mitigation
Where AI affects people’s opportunities or access to services, startups must: 

  • Provide meaningful explanations of high‑impact automated decisions. 

  • Conduct periodic bias and fairness audits, especially for protected or vulnerable groups. 

  • Ensure human oversight for high‑risk use cases, with documented review procedures. 

Children’s Data Protection: Age Verification and Behavioral Guardrails
Section 9 and Rule 10 create enhanced duties for processing children’s data (under 18). 

  1. Verifiable Parental Consent
    Startups must obtain verifiable parental consent before processing children’s data, using methods such as validated KYC information or government‑backed digital identity tokens (e.g., DigiLocker).

  2. Behavioral Monitoring Restrictions
    The law prohibits tracking, behavioural monitoring, and targeted advertising directed at children. AI systems must disable engagement‑driven optimisation and profiling for child users. 

  3. Data Minimization for Children
    Education or learning platforms should only gather metrics that are relevant to learning, like scores, progress, and module completion. They should not collect sensitive behavioural profiles, financial data, or detailed location information unless necessary. 
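The three duties above can be collapsed into a single feature gate for child users. A minimal sketch; the flag names are illustrative, not drawn from the Rules:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    age: int
    parental_consent_verified: bool = False  # e.g. via a DigiLocker token (Rule 10)

def allowed_processing(user: UserContext) -> dict:
    """Feature gate for child users (under 18) per Section 9 / Rule 10."""
    is_child = user.age < 18
    return {
        # Processing a child's data requires verified parental consent.
        "process_data": (not is_child) or user.parental_consent_verified,
        # Tracking and targeted advertising are prohibited for children
        # outright, regardless of any consent obtained.
        "behavioural_tracking": not is_child,
        "targeted_advertising": not is_child,
    }

flags = allowed_processing(UserContext(age=14, parental_consent_verified=True))
print(flags)  # processing allowed, but tracking and targeted ads stay off
```

Centralising the gate in one function makes the prohibition hard to bypass: recommendation and ads code asks this function, rather than re-deriving the rules locally.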

Cross-Border Data Transfer: Navigating Global Operations 

Section 16 and Rule 14 regulate transfers of personal data outside India. 

  1. Government Notification Monitoring
    The Central Government may notify restricted jurisdictions. Startups must continually monitor MeitY communications to ensure data is not transferred to blacklisted destinations and should record risk assessments for significant transfers. 

  2. Transfer Mechanisms
    Use contractual safeguards to impose DPDPA‑equivalent duties on foreign cloud or analytics providers, including encryption, access limitation, breach reporting, and deletion cooperation. Conduct privacy impact assessments of foreign recipients’ legal, security, and operational environments. 

Data Retention and Lifecycle Management 

The DPDPA requires deletion once the purpose is fulfilled, and Rule 8 prescribes specific timelines for certain platforms (for example, deleting inactive user data after three years in defined categories). 

Establish Retention Schedules
Startups should: 

  • Define retention periods for training datasets, interaction logs, and model artefacts. 

  • Prefer retaining models rather than raw identifiable training data, where feasible. 

48-Hour Deletion Notice
Rule 8(2) requires a 48‑hour notice before deleting inactive users’ data, giving them an opportunity to re‑engage. This demands automated triggers, user‑facing options to retain accounts, and auditable deletion logs. 
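The notice-then-delete flow can be driven by a small state function. A sketch assuming a three-year inactivity threshold; the actual threshold depends on the category notified under Rule 8:

```python
from datetime import datetime, timedelta, timezone

INACTIVITY_LIMIT = timedelta(days=3 * 365)  # illustrative 3-year threshold
NOTICE_WINDOW = timedelta(hours=48)         # Rule 8(2) pre-deletion notice

def deletion_action(last_active, notice_sent, now):
    """Return the next lifecycle step for a user account.

    notice_sent is None until the 48-hour notice has been served.
    """
    if now - last_active < INACTIVITY_LIMIT:
        return "retain"
    if notice_sent is None:
        return "send_48h_notice"      # trigger the Rule 8(2) notice
    if now - notice_sent >= NOTICE_WINDOW:
        return "delete_and_log"       # user did not re-engage in time
    return "await_re_engagement"

now = datetime(2026, 6, 1, tzinfo=timezone.utc)
stale = datetime(2022, 1, 1, tzinfo=timezone.utc)  # inactive for over 3 years
print(deletion_action(stale, None, now))  # send_48h_notice
```

Any re-engagement during the window simply resets `last_active`, which moves the account back to "retain" on the next scheduled run.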

Technical Implementation
Define step‑by‑step deletion procedures for: 

  • Operational databases. 

  • Backups and archives (with finite retention). 

  • Third‑party processors, enforced through contractual obligations and attestations. 

Data Principal Rights: Operationalizing Individual Control
The DPDPA grants rights to access, correction, erasure, grievance redressal, and nomination. 

  1. AI Users’ Data Access Reports
    When users exercise access rights, startups should provide: 

  • Categorised data exports (profile, interaction, preference data). 

  • Plain‑language explanations of purposes and model usage. 

  • Machine‑readable formats (such as CSV or PDF) for ease of review.  
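A machine-readable export of the kind listed above can be generated with the standard library alone. A sketch using CSV; the categories and fields are illustrative:

```python
import csv
import io

def export_access_report(records):
    """Render a categorised data export as machine-readable CSV."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["category", "field", "value", "purpose"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

report = export_access_report([
    {"category": "profile", "field": "email",
     "value": "user@example.com", "purpose": "account management"},
    {"category": "interaction", "field": "clicks_7d",
     "value": "42", "purpose": "recommendation model training"},
])
print(report.splitlines()[0])  # category,field,value,purpose
```

Including the `purpose` column per field doubles as the plain-language explanation of model usage the access report should carry.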

Privacy by Design: Making AI Development Include Privacy
Privacy by Design requires that privacy and data protection are embedded at every stage of design, development, testing, and deployment. 

For AI startups, this means: 

  • Designing with minimal data collection, encryption, and user controls from day one. 

  • Developing with privacy‑preserving defaults and visible user settings. 

  • Testing for bias, robustness, and unintended harms before release. 

  • Deploying with clear documentation, continuous monitoring, and accessible grievance channels. 

Alignment with India AI Governance Guidelines
The India AI Governance Guidelines 2025 reinforce and complement the DPDPA by articulating seven principles that AI startups should operationalise: People First, Fairness and Equity, Accountability, Understandable by Design, Safety, Resilience & Sustainability, and Trust as Foundation. Together, the DPDPA 2023, DPDP Rules 2025, and AI Governance Guidelines provide not a barrier to innovation, but a compliance blueprint for responsible data innovation. For AI startups and SMEs, embracing this framework early is not just about avoiding penalties, it is about building products that users, regulators, and investors can trust. 

Conclusion


The DPDPA 2023 and DPDP Rules 2025, aligned with India’s AI Governance Guidelines, mark a new beginning for responsible data-driven innovation. For AI startups and SMEs, these rules are not obstacles; they are guardrails that build user trust, lower regulatory risk, and set the stage for long-term growth. The 18-month compliance window leaves enough time to act, but only for those who start now. Companies that build privacy infrastructure, adopt clear data policies, and ensure their algorithms are fair will become leaders in their fields and earn the trust of users, regulators, and investors. Those that innovate responsibly will shape the future of AI in India. 

Start your compliance journey today. Find us on our website to build privacy and fairness into your AI from day one. 

We Help You to Grow Your Business Faster & Easier

Our Mission is to assist businesses in achieving compliance with data privacy, cybersecurity regulations & Responsible AI. We have worked with over 150+ Clients. Some of our key clients are Adani, Booking.com, NPCI, Godrej, DS Group, CRED, BharatPe, Aster DM, Vistara Airlines, Kotak Mahindra, Vodafone, Flipkart & more.


  • Comprehensive Compliance Support – From data privacy to Responsible AI, we cover it all.

  • Cybersecurity Expertise – Protect your business from evolving digital threats.

  • Proven Results – Trusted by top brands including Adani, CRED, and Flipkart.

  • Customized Solutions – Compliance strategies tailored to your business needs.

  • Global Standards – Align with GDPR, DPDP, and ISO frameworks seamlessly.

  • Efficient Implementation – Achieve compliance faster with expert guidance.

  • Trusted Advisory – Led by certified privacy and security professionals.
