In the early 1970s, Julie Inman Grant’s mother, Brenda, worked at the Seattle Crime Prevention Advisory Commission. One of her colleagues there was Ted Bundy, who went on to become one of America’s most notorious serial killers. Inman Grant mentions this story as a way of explaining her interest in aberrant behavior, but the parallel goes further: Bundy evaded capture for so long because he knew the techniques law enforcement used, and Inman Grant is using what she learned as a tech-sector executive to keep tech companies from commandeering too much attention and data from young people.
The 57-year-old Seattle-born, naturalized Australian citizen is an unlikely pioneer in the quest to reckon with the increasing influence technology companies have over adolescents’ lives and attention. As Australia’s eSafety Commissioner, she has been given the task of creating guidelines for what social media companies can and cannot do in that small but very digitally connected society, where more than 97% of the population uses the internet. At the behest of Prime Minister Anthony Albanese, she has steered world-first legislation that specifically limits tech platforms’ access to teens. “We are creating friction in a system where friction hasn’t previously existed,” says Inman Grant of the protocols. “And we’re also going to be creating a normative change that is really important for parents.”
The new regulations, which come into effect on Dec. 10, force tech companies to deactivate the social media accounts of any Australian under the age of 16 and to make every feasible effort, using several means of age verification, to prevent users 15 and under from signing up for new accounts, without forcing teens to give away much biographical information. They also need to try to prevent subversion of the ban via VPNs and to make regular reports on the effectiveness of their methods. Failure to comply can result in fines of up to almost 50 million Australian dollars (about $33 million). “If a Tesla is imported into Australia, we expect that it will be built to Australian safety standards,” says Inman Grant. “Why shouldn’t we, as a sovereign nation, expect the technology companies to build to our safety standards?”
Read More: Australia’s Leader Takes On Social Media. Can He Win?
When the legislation first passed in November 2024, it affected obvious social media channels such as Facebook, Instagram, Snap, X, and TikTok, but there was a carve-out for YouTube because of its potential educational value. In June, however, Inman Grant switched course and officially recommended adding YouTube, which is owned by Google, to the list of platforms covered by the ban. Users under 16 could still watch whatever videos they wanted on the site with no account—or be shown educational ones—but there would be limited interactivity.
Then, in late September, Inman Grant asked 16 more companies, ranging from gaming platforms such as Roblox to messaging apps such as WhatsApp, to self-assess whether the ban should also apply to them. Not all of them will necessarily fall under the legislation, and any can challenge their inclusion, but the breadth of platforms Inman Grant was considering took some Australians aback and confused others. “Our members are currently engaging with the eSafety Commissioner to understand and apply this guidance,” a spokesman for the Digital Industry Group (DIGI) told TIME in a statement. Many platforms had already sensed a change in the wind; Roblox stepped up its safety controls in July.
Inman Grant is battling several hydra-headed giants here. No country has yet figured out how to weigh the complex competing interests of free speech and protection for minors that social media has raised. No country has found a way to call the mammoth digital platforms to account for the content they carry and the damage that content can do and has done. No government has been able to stave off, slow, or even manage the infiltration of giant, immensely lucrative tech companies into every corner of young humans’ lives and interactions. Adding to the complexity, the technological landscape is shifting rapidly; companies add new features frequently, and regulations may be outdated before they make their way through the process of becoming law.
And yet the problems seem urgent; some studies have plausibly linked the increasing rates of teen depression and anxiety to the spread of social media. (Others say more research is needed.) Parents and schools are at their wits’ end trying to deal with the fallout of a constant online existence, while also using digital technologies to keep their children safe and help them learn. Reports of cyberbullying, misinformation, and grooming on the platforms are rife; more than 1,800 plaintiffs are suing leading social media platforms, alleging that they “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health.” Some experts point to evidence that Charlie Kirk’s accused killer was heavily influenced by gaming subcultures.
If Australia’s new provisions don’t succeed, theirs will be the biggest and most public failure yet to engineer a solution to these problems. Other countries are paying attention. “We in the E.U. will be watching and learning from you as you implement your world-first and world-leading social media ban,” European Commission President Ursula von der Leyen told Australian officials at the U.N. in September. New Zealand’s Prime Minister has endorsed a bill that would impose similar restrictions, while countries as diverse as the U.K., Fiji, and Malaysia have cited Australia’s laws as they develop their own.
Read More: YouTube to Start Using AI to Estimate Users’ Ages. Here’s What to Know
Several attempts to provide guardrails around technology have already capsized. In the U.S., both the Kids Off Social Media Act and the Kids Online Safety Act have stalled in Congress. Some European countries have required tech platforms to get onetime parental consent for users under 15, but those requirements are laughably easy to work around. The U.K. passed an Online Safety Act that calls on platforms to enforce their de facto age limits, which are set at 13. Inman Grant believes that’s too young. “We’re not calling it a social media ban,” she says of the Australian legislation, “but rather a social media delay, because we’re basically restricting access from young people holding an account until the age of 16, to give us precious time to build their digital literacy, critical reasoning skills, resilience, and the like.”
When the social media law was first proposed, Inman Grant noticed an interesting phenomenon: Digital companies began to change the way they described themselves. She says that back in 2021, when asked to categorize their businesses for Australia’s Online Safety Act, which enacted non-teen-specific safety protocols, “YouTube identified themselves as a social media site, Pinterest identified themselves as a social media site, Snap identified themselves as a social media site.” But when asked to identify their purpose for the current legislation, she observed a minimizing of the social aspect of their business. According to Inman Grant, Pinterest described itself as a visual search engine. “Snap says they’re a camera app,” she says, “and YouTube said, ‘We’re a video-sharing platform.’”
Reaction from platforms to the ban has varied. Most have focused on revving up their PR. Snap touted its safety-education programs and partnership with the Australian police. TikTok has run a campaign emphasizing the large amount of educational content made by and for teens. Meta had already introduced teen accounts with more safety measures in some countries (although a recent whistleblower report found them wanting) and in September announced it would roll the accounts out globally and partner with schools to expedite reports of bullying.
And then there was X CEO Elon Musk, who had already engaged in several public battles with Inman Grant over a prior safety-vs.-censorship dispute, during which free-market crusaders such as the Institute of Public Affairs gave her the title of E-Karen. Musk described the new regulations as “a backdoor way to control access to the Internet.” Inman Grant, who worked at X for two years when it was Twitter, deactivated both her own and eSafety’s X accounts in early August. “It’s a different place,” she says of the platform. “I get credible death threats. My kids have been doxxed.” On top of that, she adds, the platform offered very little engagement.
Read More: Social Media Algorithms Led to Her Eating Disorder. Now She’s Suing TikTok and Instagram
To bolster her case for including YouTube in the ban, Inman Grant pointed to a 2025 survey of 2,600 Australian 10- to 15-year-olds that her office commissioned, which found that 96% of them had used a social media account, that three-quarters had most recently encountered harmful content on social media, and that more than a third of those said the place they experienced it most was YouTube, especially among 10- to 12-year-olds. “This is things like misogynistic content, hateful content, grooming,” says Inman Grant. “We’re also seeing a trend of youth crime online, where there’s posting and boasting.”
But it’s not just the social aspect of the platforms that Inman Grant finds problematic. In her report to the Minister for Communications recommending the inclusion of YouTube in the ban, she referred to “persuasive design features”—infinite scroll, auto-play, “tailored and algorithmically recommended content feeds”—that YouTube and other platforms employ that may be contributing to an unhealthy amount of time young people spend staring at phones. In person, she’s less measured. “I often refer to it as outragement,” she says. “Outrage that stimulates engagement and stickiness.” She notes that recent reports suggest the search giant is lowering the bar for what content remains on the site, despite its potential harmfulness.
Google’s reaction to its inclusion was swift. “eSafety’s advice goes against the Government’s own commitment, its own research on community sentiment, independent research, and the view of key stakeholders in this debate,” a YouTube PR manager said in a blog post the day of the announcement. “Today’s position from the eSafety Commissioner represents inconsistent and contradictory advice, having previously flagged concerns the ban ‘may limit young people’s access to critical support.’” The following month the company publicized survey findings that showed that 74% of Australian parents “who use YouTube feel confident in their ability to guide their child on how to use YouTube (or YouTube Kids) responsibly.” (YouTube Kids is exempt from the regulations.)
Still, on July 30 Albanese announced that the video platform would also have to abide by the ban on signing up under-16-year-olds. “This rushed legislation overlooks the fundamental differences between services, and the unique benefits these can deliver to Australian youth,” said a Google spokesperson, adding that it would “consider next steps.” Company representatives, as of press time, declined to comment further.
Read More: ‘Everything I Learned About Suicide, I Learned on Instagram’
Not all Australians welcome the new laws, even though they passed with support from all sides of the political fence. When the ban was first announced, a coalition of mental health groups released a statement warning of the risk of “cutting young people across Australia off from mental health support, exposing them to new harms, and leaving many without any support.” UNICEF Australia also came out against the ban, saying that “the proposed changes won’t fix the problems young people face online.” Some in the tech industry privately opine that the clampdown is a response to Meta’s decision to exit the agreement in which it paid Australian news media sites a fee for using their content.
Many creators were baffled at the inclusion of YouTube in the regulations, especially since Inman Grant acknowledges teens can still access the content without having an account, and there’s plenty to see without logging in. “YouTube’s built the supervised experiences which have all these protections in place” for logged-in users, Shannon Jones, the creator and executive producer of Australia’s most popular YouTube channel, Bounce Patrol, told Sky News in July. “There’s no personalised ads, there’s take-a-break reminders, there’s limitations on certain harmful content.”
Inman Grant, who tends to speak in long, pause-free sentences that go several directions at once, has taken a similar leave-no-avenue-for-argument approach in her defense of the changes she’s championing. Her office commissioned a 10-volume investigation, assessing more than 60 technologies from 48 age-assurance providers, into whether age-verification technology, including AI, would be sophisticated enough for online platforms to detect a person’s age without harvesting too much of their personal data.
Read More: One Community’s Experiment With a Phone-Free Childhood
In August, the report concluded that reliable age verification was technically feasible, as long as more than one method was used. Others are not so sure. “The trial did not test the technology in a live environment, leaving key questions about user experience, usability, and adoption unanswered,” said Jennifer Duxbury, director of Regulatory Affairs, Policy, and Research at DIGI. “The findings also note a margin of error of around two to three years across the tested solutions.”
For someone who grew up in the Northern Hemisphere, Inman Grant is deft at deploying Australia-friendly similes to describe these restrictions: they’re a safety measure, just like swimming-pool fences, or the flags at the Australian beach, or seat belts (the wearing of which Australia was the first to mandate). She knows how to rile up the locals too, often describing tech as an “extractive industry.” Australians do not love that four of the country’s five top exports have to be dug out of the ground and are, literally and environmentally, unsustainable, so “extractive” is a dirty word.
The raw material being extracted in this case, Inman Grant argues, is personal data that companies can then sell to advertisers to help them target customers with alarming accuracy. “You are the commodity, particularly for social media,” she says. She first announced her recommendation that the “delay” should include YouTube at an Australian Press Club lunch, where the audience may be assumed to already have a dim view of an industry that has hollowed out its profitability.
One of the reasons Inman Grant is so intent on lessening the might of the tech giants may be penance for the fact that she helped empower them. She has worked in or alongside the industry in some form since the ‘90s when, right out of Boston University, she worked for a local Washington State congressman. (Her other job offer, she says, was with the CIA.) “We had this small little company in our district called Microsoft, so we worked on technology and telecom issues,” she says.
After the congressman she worked for retired and she completed a master’s in international communication, she crossed over into business as one of Microsoft’s first government-relations employees. She was a member of the consortium that helped shape Section 230, the now-famous part of the 1996 U.S. Communications Decency Act that protects online platforms from civil liability for harm caused by content posted by their users.
“Never in our wildest dreams would we have thought that this provision that was developed to help remove intermediary liability from the platforms would persist for 30 years and be one of the main reasons that the companies aren’t responsible and aren’t accountable,” says Inman Grant. In her view the tech companies have used the notion of technological exceptionalism to behave irresponsibly. “You know, ‘we’re creating jobs. We’re creating economic growth. If you put any kind of constraints on us, it will undermine the economy,’” she says. “I used to write those talking points.”
Read More: Inside the Parent-Led Movement for Phone-Free Schools
She worked at Microsoft for 17 years, helping it deal with the 1998 antitrust suit and build its political action committee, activities she now claims “made me feel like I needed to take a shower.” At 32, single and overworked in D.C., she asked to be transferred. She was sent to Australia in 2000, where she met her husband and worked on Microsoft’s philanthropic initiatives, before moving back to Redmond in 2009 to oversee the company’s global privacy and safety work.
It wasn’t an easy job: “I felt like I was a huge antagonist for safety at a time when the company was becoming much more enterprise-focused,” she says. Microsoft had recently acquired Skype and already owned Xbox, and she was dismayed by how toxic the messaging systems were. (“Microsoft has a long-standing commitment to tackling child sexual exploitation and abuse risks and we have made significant improvements over time,” says a Microsoft spokesperson. “We take proactive steps to deter, detect, and disrupt child exploitation in all forms across our services, including Xbox, and will continue to evolve our approach. As of May 5, 2025, Skype has been retired.”)
She returned to Australia in 2012 to work on government relations for Adobe and then Twitter, before being asked in 2017 by then Prime Minister Malcolm Turnbull, a conservative, to take over the commission, which at the time mostly dealt with child sexual abuse material (CSAM) and cyberbullying. “He wanted someone as the eSafety Commissioner who knew what the talking points were going to be before the companies walked in,” she says. “I understood what they were capable of, but also what their limitations were.” She has remained in the job through several administrations, and her staff has been beefed up to about 250 people.
“That office has had an important role, and it’s been consistently expanded by both sides of Parliament,” says former Communications Minister Paul Fletcher, who worked closely with Inman Grant for three years. “I think Julie’s done really important work. It’s not tenable to have these giant digital platforms refusing to comply with the law. It takes somebody pretty tough like her to get them to do so, particularly in relatively small countries that are not the U.S.”
Read More: No More Than Two Hours of Screen Time a Day, a Japanese City Tells Its Residents
As the mother of three teenagers, Inman Grant knows there will be workarounds to the government-imposed delay in social media activity, but she hopes it helps establish a new baseline for parents, so that their offspring don’t feel they are missing out. She is also aware that social media sites provide a lot of support to more vulnerable communities. “Kids who are disabled, First Nations kids, and kids who identify as LGBTQI+, all of them say we feel more ourselves online than we do in the real world,” she says, adding that some exemptions have been built into the legislation. And she must know that there are dangers inherent in essentially pushing teenagers to watch YouTube without an account, where the parental controls and watch-time limits tied to supervised accounts don’t apply.
Inman Grant’s job is not nearly done. Her next task is to observe the implementation and see how, or whether, it works. She’s lined up an academic group (headed, she mentions twice, by Stanford University) to study whether it makes life easier for parents, whether teens do interact more in real life, whether there are unintended consequences, and whether 16, an age she acknowledges was plucked out of the air, is the right age to encounter social media. “Everybody’s focused on this,” she says. “I do think it will make an impact.”