# Platform Safety Index

> Independent evaluation of how digital platforms protect children and vulnerable users.

The Platform Safety Index grades digital platforms on child safety across five categories: Behavioral Detection, Content Moderation, Age Verification, Transparency, and Data Privacy. The methodology is aligned with the EU Digital Services Act (DSA), the US Kids Online Safety Act (KOSA), the UK Online Safety Act, and Australia's Online Safety Act 2021.

## Evaluation Criteria

- **Behavioral Detection (30%)**: Can the platform detect grooming, radicalization, coercive control, or escalation patterns?
- **Content Moderation (25%)**: Does it effectively filter CSAM, violence, hate speech, and self-harm content?
- **Age Verification (20%)**: Are there meaningful age gates beyond self-declaration?
- **Transparency (15%)**: Does the platform publish safety reports, audits, or incident-response data?
- **Data Privacy (10%)**: Is user data GDPR-compliant, minimized, and protected?

## Grading Scale

- A+ (92-100): Excellent safety measures
- A (85-91): Very good
- B+ (78-84): Good
- B (70-77): Above average
- C+ (62-69): Average
- C (55-61): Below average
- D+ (45-54): Poor
- D (35-44): Very poor
- F (0-34): Failing — dangerous for children

## Parental Advisory Levels

- **CRITICAL** (0-15): Do not allow children to use this platform
- **SEVERE RISK** (16-30): Unsafe for children and teens
- **HIGH RISK** (31-50): Not recommended for unsupervised minors
- **MODERATE RISK** (51-70): Parental supervision recommended for users under 16
- **LOWER RISK** (71-100): Better than most, but no platform is fully safe

## API

Evaluate any platform by visiting /platform/{domain} (e.g., /platform/tiktok.com).

## Links

- Website: https://platformsafetyindex.org
- Methodology: https://platformsafetyindex.org/methodology
- Regulatory alignment: EU DSA, US KOSA, UK Online Safety Act, Australia's Online Safety Act 2021
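The category weights, grading scale, and advisory levels above can be sketched as a small scoring helper. This is an illustrative assumption, not the Index's published implementation: it assumes each category is scored 0-100 and that the overall score is the weighted average of the five categories, then mapped onto the Grading Scale and Parental Advisory Levels. All function and variable names here are hypothetical.

```python
# Sketch of the weighted scoring described above. Hypothetical helper names;
# assumes each category is scored 0-100 (the document does not specify this).

WEIGHTS = {
    "behavioral_detection": 0.30,
    "content_moderation": 0.25,
    "age_verification": 0.20,
    "transparency": 0.15,
    "data_privacy": 0.10,
}

# (minimum score, grade) pairs from the Grading Scale, highest band first.
GRADE_BANDS = [
    (92, "A+"), (85, "A"), (78, "B+"), (70, "B"),
    (62, "C+"), (55, "C"), (45, "D+"), (35, "D"), (0, "F"),
]

# (minimum score, level) pairs from the Parental Advisory Levels.
ADVISORY_LEVELS = [
    (71, "LOWER RISK"), (51, "MODERATE RISK"), (31, "HIGH RISK"),
    (16, "SEVERE RISK"), (0, "CRITICAL"),
]

def overall_score(category_scores: dict) -> float:
    """Weighted average of the five category scores (each 0-100)."""
    return sum(WEIGHTS[cat] * category_scores[cat] for cat in WEIGHTS)

def letter_grade(score: float) -> str:
    """Map a 0-100 score onto the Grading Scale."""
    for floor, grade in GRADE_BANDS:
        if score >= floor:
            return grade
    return "F"

def advisory_level(score: float) -> str:
    """Map a 0-100 score onto the Parental Advisory Levels."""
    for floor, level in ADVISORY_LEVELS:
        if score >= floor:
            return level
    return "CRITICAL"
```

For example, a platform scoring 80/70/60/50/90 on the five categories in order would get a weighted score of 70.0, a grade of B, and a MODERATE RISK advisory.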
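The /platform/{domain} endpoint above can be called with any HTTP client. A minimal sketch follows; since the response format is not documented here, it only builds the lookup URL (`platform_url` is a hypothetical helper name, not part of any published client):

```python
# Build the evaluation URL for a platform, following the /platform/{domain}
# pattern from the API section. "platform_url" is a hypothetical helper; the
# response format is not specified in this document, so fetching and parsing
# are left out.

BASE_URL = "https://platformsafetyindex.org"

def platform_url(domain: str) -> str:
    """Return the Platform Safety Index lookup URL for a domain."""
    return f"{BASE_URL}/platform/{domain}"
```

Following the document's own example, `platform_url("tiktok.com")` yields `https://platformsafetyindex.org/platform/tiktok.com`.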