Hot on the heels of the youth social media ban, the Online Safety Code comes into effect this week, compelling search engines to also age-verify users. Daniel Angus reports on the likely effects.
How we search for and discover information has changed significantly in the past year with the introduction of various AI tools. But this is just the beginning: the new Online Safety Code signals a much larger shift ahead.
From this week, if you’re not logged in, you will be assumed to be a minor.
The Online Safety Code in Australia requires platforms, including search engines, to verify the age of users from December 27, 2025. This creates new hurdles for Australians seeking information on sensitive topics such as mental health, sexuality, reproductive health, and domestic violence, and it raises privacy concerns about handing personal information to these platforms.
These recent changes are creating new gatekeepers of information and changing how existing players, such as Google, operate. What are these changes, and who are these new gatekeepers? How trustworthy can they be, and how can Australians find reliable information?
AI’s impact on search
AI in search relies on a technique called retrieval-augmented generation (RAG). Instead of a list of webpages or links, users now receive summarised results presented in a clearer, more structured format: typically an introductory paragraph, a main section, a conclusion, and highlights of the key points. In this way, search engines and AI platforms are transforming themselves into “answer engines,” in the words of Microsoft CEO Satya Nadella.
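For readers curious about the mechanics, here is a minimal sketch of the RAG pattern in Python. Everything in it (the toy document store, the word-overlap retriever and the stubbed generation step) is a hypothetical stand-in, not any search engine’s actual pipeline:

```python
# Minimal RAG sketch: retrieve relevant passages, then hand them to a
# generation step. All data and the "generate" stub are illustrative.

DOCUMENTS = [
    "The Online Safety Code requires age verification for search users.",
    "Retrieval-augmented generation combines search results with a language model.",
    "Web traffic to news publishers has declined as AI summaries grow.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in DOCUMENTS]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def generate(query: str, passages: list[str]) -> str:
    """Stand-in for the language-model step: a real system would prompt
    a large model with the query plus the retrieved passages."""
    return f"Summary for '{query}': " + " ".join(passages)

print(generate("How does AI change search?", retrieve("How does AI change search?")))
```

A production system would swap the word-overlap scorer for embedding search and the stub for a call to a large language model, but the shape (retrieve, then generate) is the same.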
A recent survey across six countries (Argentina, Denmark, France, Japan, the UK, and the US) examined how people use generative AI. The findings show that information-seeking is becoming the most common use case, with usage more than doubling from 11% in 2024 to 24% in 2025.
AI in search has had a marked effect on website traffic, driving a decline in visits that various web traffic analyses have documented. This decline poses new challenges for publishers. ABC’s James Purtill observed that global news website traffic has decreased significantly, particularly affecting smaller publishers. Furthermore, a study by the Pew Research Center found that users who encounter AI-generated summaries are less likely to click through to the underlying sources, and that those summaries often draw on a limited pool of sites such as YouTube, Reddit, and Wikipedia.
The recent drop in web traffic is prompting some interesting new ways of gathering information. A growing number of companies, such as Apify, Bright Data, and SerpApi, use web scraping to collect search engine result pages (SERPs) in real time and then supply that data to AI platforms to create summaries.
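As a rough illustration of the pattern (rather than any vendor’s actual tooling), a minimal SERP-scraping sketch in Python might look like the following. The endpoint is a placeholder, and a real scraper would also have to handle consent, rate limits and anti-bot defences:

```python
# Rough SERP-scraping sketch: fetch a results page and pull out the
# links. The URL is a hypothetical placeholder; real scrapers must
# respect robots.txt, rate limits and terms of service.

from html.parser import HTMLParser
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    """Collect the href of every anchor tag on the page."""

    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def scrape_serp(query: str) -> list[str]:
    # Hypothetical endpoint; substitute a real, permitted source.
    url = f"https://example.com/search?q={query}"
    request = Request(url, headers={"User-Agent": "research-bot/0.1"})
    with urlopen(request, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

print(scrape_serp("online+safety+code"))
```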
While traffic from human users declines, the use of web scraping increases. Scraping (essentially copy and paste at warp speed) allows companies to pull information once and reuse it without returning to the original sites. Google is also rolling out new tools for website owners to help them analyse and group search terms to boost their traffic.
However, as Google points out, these tools are only really helpful for sites that already get a lot of searches. Adobe’s recent acquisition of Semrush, a search engine marketing company, signals a sharp shift in focus among search engines and platform companies toward faster analysis of search results and integration with AI tools.
We’re seeing a shift in which established players in the industry are being reconfigured by new tech companies that want to capture information and repackage it for easy access. For these new gatekeepers, information is a valuable asset rather than a public responsibility. It remains unclear how they establish guardrails or checks to address known issues of misinformation, disinformation, and polarisation.
With these changes, traditional publishers, such as those in news, social services, or health, are often left behind. They now need to learn new skills to produce content that can be scraped and used by AI tools. It also means users have to be more careful and learn how far to trust these new ways of finding information.
Online Safety Code and search
Amid a technology-driven shift in the information ecosystem, the Online Safety Code introduces a new layer of complexity. Once implemented, search platforms will be required to verify the age of any user who attempts to log in.
For those who are not logged in, the default assumption will be that they are minors, triggering the strictest protections and access limits. This represents a significant structural change for services that have historically operated without identity checks for basic search functions.
Users must provide some form of ID or otherwise consent to having their age checked through age-assurance technologies. The potential for linking identity data with search logs poses heightened risks, especially for marginalised communities. For example, searches for sensitive topics like abortion services, gender-affirming healthcare, domestic violence support, addiction treatment, or STI testing could tie highly sensitive information to user identities.
Even where platforms and vendors promise strong protections, the combination of identity verification and detailed search histories creates a powerful data asset that may be misused or accessed beyond its original purpose.
This also fails to deal with the primary issue: the problem with harmful content lies not with those who seek it but with those who create and distribute it via search engines. Verifying users’ identities does not eradicate harmful content; it may hide that content from some, but it can still reach others despite age checks. In this sense, age assurance shifts visibility without addressing the production and circulation of harmful material.
A further concern is the way the policy approach tends to conflate exposure with harm. This assumes that encountering problematic or age-restricted content is inherently damaging.
In practice, harm is shaped by the wider environment in which a user is situated. A child who inadvertently encounters difficult material online may be supported if they have strong guidance, digital literacy and trusted adults who can contextualise what they see. Another young person without these supports may still encounter troubling content even with strict guardrails in place, but will do so without scaffolding.
The outcome is not determined solely by the content but by the capacity to interpret and cope with it.
Safe search features should play a strong role in reducing access to truly harmful and illegal material, and these protections should apply to all users rather than being framed only as a child-specific safeguard. Opening this debate further will help shift policy from a binary focus on restricting exposure to building systems of support, literacy and care that reduce harm in a durable and equitable way.
How to cope with the change
With more AI-generated search summaries, fewer options to opt out, and policy changes affecting how search results are displayed, users must learn new ways to verify information sources or risk being misled. What can users do in response to such drastic changes?
- Rely less on search engines and go directly to websites or apps from reputable news publishers or information providers.
- Keep a list of reliable sources, such as government agencies, credible not-for-profit organisations, universities, and reputable news outlets.
- Verify information across multiple websites. If details remain ambiguous, contact the organisation directly by phone or email.
- Use fact-checking tools when relevant.
Like the social media age ban, how the new Online Safety Code will affect us all remains to be seen.
Prof Daniel Angus is a Chief Investigator at the Queensland University of Technology node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S), and a Professor of Digital Communication in the School of Communication.

