The internet is getting carded. After a decade of hand-wringing over kids’ screen time and algorithmic rabbit holes, lawmakers are forcing platforms to verify age and rethink how minors see, and are seen by, advertising. That mandate falls squarely into the lap of marketers, who must now reconcile AI-driven targeting and creative with a patchwork of obligations that vary not only by country, but also by platform and even app store. The center of gravity is clear: if your activation could plausibly touch under-18s, regulators expect you to prove it won’t, or to redesign it so it’s safe if it does.
The Legal Baseline Is Rising Fast
In the U.K., the Online Safety Act moves beyond polite nudges and into requirements for “highly effective” age checks around pornographic and other harmful content, with enforcement guidance and deadlines that give platforms little room to improvise. In the U.S., the landscape is fragmented. Some states shift responsibility to social networks, while others place it squarely on Apple and Google by requiring parental permission before app downloads or in-app purchases. At the federal level, the FTC’s refreshed children’s privacy rule (the updated COPPA Rule) tightens the limits on third-party adtech and data retention on child-directed services. Across the EU, the Digital Services Act bans behavioral advertising to minors, and the AI Act adds a second layer of protection by prohibiting the deployment of systems that exploit children’s vulnerabilities or manipulate their behavior. Different statutes, same message: treat teens differently, and document how.
Platforms Are Becoming the First Line of Age Assurance
The biggest shift for advertisers is product plumbing. Platforms are moving age checks upstream, so advertisers automatically inherit these constraints. YouTube, for example, is rolling out AI-based age estimation that infers whether a user is likely under 18 from usage signals and applies teen protections accordingly. Instagram’s Teen Accounts feature has stricter defaults and requires parental approval before loosening them; attempts to change a birthday to 18+ can trigger a Yoti video selfie or ID check. Reddit is gating mature content in the U.K. behind Persona’s verification flow, retaining only the user’s birth date and verification status. TikTok employs multiple appeal paths, including a refundable card authorization or an ID-plus-selfie flow, and collaborates with vendors that focus on minimizing the data shared back to the platform. Even the app stores are in the loop: Apple has introduced a developer pathway to receive a child’s age range from a parent-approved account without exposing the exact birth date.
For marketers, the practical effect is twofold. First, more requests for “age range” rather than date of birth inside SDKs and campaign objects. Second, the spread of “age-unknown equals minor” defaults that turn off profiling and personalization for a broader slice of inventory. That reduces reach in some lookalike-heavy plans, but raises confidence that your campaign won’t accidentally target teens.
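What that default can look like in code is simple, and worth internalizing. The sketch below is illustrative only, assuming a hypothetical SDK signal and field names rather than any platform’s actual API:

```typescript
// Hypothetical SDK-side signal: a coarse age range, never a date of birth.
type AgeSignal =
  | { kind: "range"; min: number; max?: number }
  | { kind: "unknown" };

interface ImpressionContext {
  age: AgeSignal;
  consentToProfiling: boolean;
}

// "Age-unknown equals minor": personalization runs only when the signal
// affirmatively places the user at 18+ and profiling consent is present.
function allowPersonalization(ctx: ImpressionContext): boolean {
  if (ctx.age.kind === "unknown") return false; // default to the protective path
  return ctx.age.min >= 18 && ctx.consentToProfiling;
}
```

The design choice to notice: the safe path is the default, and personalization is the exception that must be earned by an affirmative signal.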
The Tools Are Imperfect and Your CX Needs to Be Ready
Every method has failure modes. Photo IDs can be faked or simply unavailable. Video selfies break down under poor lighting, and the same generative AI that powers brand creative also powers sophisticated deepfake attempts that liveness checks must catch. AI age estimation is elegant but probabilistic; misclassifications will happen. Families themselves sometimes open the gate—sharing devices, recycling emails, or green-lighting secondary accounts.
Marketers don’t control these systems, but they do own the customer experience when things go wrong. That means designing appeal paths into brand activations: offer alternate proofs (card micro-charge, document check, or live agent review), communicate clearly when access is denied, and—critically—avoid punitive tones that treat legitimate adults like fraudsters. In sensitive verticals, a make-good (such as free shipping or priority access) can turn a near-miss into loyalty rather than backlash.
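One way to think about those appeal paths is as an ordered ladder of alternate proofs. A sketch, with the vendor integration stubbed out and all names hypothetical:

```typescript
// Alternate proofs, ordered from least to most friction for the user.
type ProofMethod = "card_microcharge" | "document_check" | "live_agent_review";

interface AppealOutcome {
  verified: boolean;
  methodsTried: ProofMethod[];
}

// `attempt` wraps whatever verification vendor you use; each rung of the
// ladder is tried in turn until one succeeds or all are exhausted.
async function runAppealPath(
  attempt: (method: ProofMethod) => Promise<boolean>
): Promise<AppealOutcome> {
  const ladder: ProofMethod[] = ["card_microcharge", "document_check", "live_agent_review"];
  const methodsTried: ProofMethod[] = [];
  for (const method of ladder) {
    methodsTried.push(method);
    if (await attempt(method)) return { verified: true, methodsTried };
  }
  // A failed check is not an accusation: denial copy should stay neutral
  // and point the user to the next step rather than a dead end.
  return { verified: false, methodsTried };
}
```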
What Changes for Targeting, Creative, and Measurement
The biggest immediate adjustment is targeting discipline. In the EU, the working assumption should be that no behavioral ads reach minors; if age is unknown, treat the impression as a minor’s and pivot to contextual targeting. In the U.S., child-directed zones now require parental opt-in before any third-party adtech is activated, which effectively pushes marketers toward on-site contextual placements, publisher first-party segments, and clean-room measurement. Sensitive categories—alcohol, weight loss, gambling—need a lower tolerance for teen spillover everywhere, not just where the law enumerates it.
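Condensed into decision logic, that discipline looks roughly like the sketch below. The field names are illustrative, and this is a planning aid under stated assumptions, not legal advice:

```typescript
type Market = "EU" | "US" | "other";
type Tactic = "behavioral" | "contextual" | "suppress";

interface Placement {
  market: Market;
  knownAdult: boolean;        // affirmative 18+ signal inherited from the platform
  childDirected: boolean;     // a child-directed property under the FTC rule
  parentalOptIn: boolean;     // opt-in on file before third-party adtech activates
  sensitiveCategory: boolean; // alcohol, weight loss, gambling, and similar
}

function chooseTactic(p: Placement): Tactic {
  // Sensitive verticals: near-zero tolerance for teen spillover, everywhere.
  if (p.sensitiveCategory && !p.knownAdult) return "suppress";
  // Child-directed zones stay contextual until parental opt-in exists.
  if (p.childDirected && !p.parentalOptIn) return "contextual";
  // EU: behavioral ads to minors are banned; unknown age is treated as a minor.
  if (p.market === "EU" && !p.knownAdult) return "contextual";
  return "behavioral";
}
```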
Creative is next. AI-generated or manipulated media triggers disclosure expectations in Europe, with a glide path toward more explicit labels and even watermarking. Smart brands are already building “synthetic asset registers” to track which files require labels by market. And for AI brand agents and recommendation engines, the AI Act’s prohibition on exploiting minors’ vulnerabilities is a design spec as much as a legal clause: avoid streak mechanics, pressure-driven copy, and “sticky” loops when a teen could be on the other end of the screen.
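A synthetic asset register doesn’t need to be elaborate. A minimal, hypothetical shape might look like this:

```typescript
// One row per creative file: what is synthetic about it, whether the master
// carries a provenance watermark, and where a disclosure label must ship.
interface SyntheticAsset {
  assetId: string;
  file: string;
  aiGenerated: boolean;       // created wholly by a generative model
  aiManipulated: boolean;     // real footage with synthetic edits
  watermarked: boolean;
  labelRequiredIn: string[];  // markets that require an explicit label
}

function needsLabel(asset: SyntheticAsset, market: string): boolean {
  return (asset.aiGenerated || asset.aiManipulated)
    && asset.labelRequiredIn.includes(market);
}
```

Even a spreadsheet with these columns beats rediscovering, mid-flight, which deliverables need a label in which market.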
Privacy-Preserving Age Checks Are Coming Into View
There is a credible path out of the data-collection paradox. Zero-knowledge proofs can attest that a user is “over-18” without revealing who they are; several vendors are piloting tokenized attestations with major platforms. Digital ID wallets—already advancing in the EU—could enable a parent to approve a limited-scope proof that persists on the device and expires automatically. For brands, the value isn’t just legal compliance; it’s UX. A one-tap “over-18 token” that works across properties reduces drop-off and moves the friction away from your owned touchpoints, where abandonment hurts most.
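To make the idea concrete, here is a hypothetical shape for such a token and the check a relying brand would run. The claim format, field names, and signature handling are all assumptions; a real deployment would follow the issuer’s spec and lean on a vetted cryptographic library:

```typescript
// The attestation carries an age claim, an issuer, and an expiry: no name,
// no birth date, no document data.
interface AgeAttestation {
  claim: "over18";
  issuer: string;      // the wallet or verification vendor that attested
  expiresAt: number;   // epoch milliseconds; proofs expire automatically
  signature: string;
}

function acceptAttestation(
  token: AgeAttestation,
  trustedIssuers: Set<string>,
  verifySignature: (t: AgeAttestation) => boolean // real crypto goes here
): boolean {
  return (
    token.claim === "over18" &&
    trustedIssuers.has(token.issuer) &&
    token.expiresAt > Date.now() &&
    verifySignature(token)
  );
}
```

Note what the brand never sees: identity. It checks a claim, an issuer it trusts, an expiry, and a signature, and nothing else.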
The Marketer’s Playbook for the Next Quarter
Treat safety as a feature, not a gate. Where age is uncertain, shift mixed-audience reach from profiling to contextual. Align buys with platform teen protections; don’t toggle them off via custom SDK logic because it “improves performance.” Label synthetic creative in EU deliveries and keep watermarked masters. Exclude known-minor records from training and personalization datasets (or wall them off with tighter retention and erasure). When you do need first-party verification, such as for a gated AR experience, select vendors with liveness detection, strong deletion SLAs, and, ideally, privacy-preserving attestations on their roadmap. Finally, write an explainability pack: which signals your AI uses, how you avoid targeting minors, and how appeals work. It will save you time with platforms, partners, and, if necessary, regulators.
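One item on that list translates directly into code. Below is a sketch of walling off known-minor records before they reach training or personalization pipelines, with a hypothetical record shape:

```typescript
interface AudienceRecord {
  id: string;
  ageVerified: boolean;  // an affirmative 18+ signal exists for this record
  knownMinor: boolean;   // flagged as under 18 by any trusted source
}

// Known minors never enter the training set; unverified records are routed
// to a walled-off tier with tighter retention and erasure instead.
function partitionForTraining(records: AudienceRecord[]): {
  trainable: AudienceRecord[];
  walledOff: AudienceRecord[];
} {
  const trainable = records.filter(r => r.ageVerified && !r.knownMinor);
  const walledOff = records.filter(r => !r.ageVerified && !r.knownMinor);
  return { trainable, walledOff };
}
```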
The Strategic Upside
Complying with the new rules can look like a tax on ambition. In practice, it serves as a catalyst for cleaner planning, enhanced creativity, and more resilient performance. Contextual and publisher signals get sharper when you stop trying to treat every impression like a profile. Creative earns its keep without overfitting to a lookalike seed. And the brands that build transparent, low-friction age assurance into their experiences will pick up something algorithms can’t: trust. In an era when the internet is finally checking IDs, that may be the most valuable audience signal of all.