Artificial intelligence is rapidly eroding online anonymity, according to new research from ETH Zurich. Scientists have demonstrated that AI tools can reliably identify individuals behind pseudonymous social media accounts by cross-referencing seemingly innocuous details shared over time. This development has significant implications for privacy, surveillance, and online safety.
The Rise of AI-Powered Deanonymization
For years, many internet users have relied on anonymity to express opinions, share sensitive information, or engage in niche communities without fear of real-world consequences. That assumption no longer holds. The Swiss study shows that large language models (LLMs) can match anonymous accounts to their real-world counterparts with up to 68% accuracy and 90% precision, significantly outperforming manual investigation methods.
The process is simple: LLMs scan the web for fragmented personal details (employment history, location, hobbies) that individuals unknowingly leak across platforms. Even seemingly harmless mentions of past workplaces or hometowns, scattered over years, can be enough to break anonymity. This isn’t about superhuman AI; it’s about automation. LLMs can sift through data far faster and cheaper than any human investigator.
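The intersection effect described above can be illustrated with a toy sketch (all names and attributes below are invented for illustration): each leaked detail only narrows the candidate pool a little, but intersecting a handful of weak signals can shrink it to a single person.

```python
# Toy illustration (invented data): how a few weak, public attributes
# intersect to narrow an anonymous account down to one candidate.

# Hypothetical population: which people match each leaked detail.
attribute_candidates = {
    "worked at a Zurich bank in 2015": {"alice", "bob", "carol", "dave"},
    "grew up in a small town in Ohio": {"bob", "carol", "erin"},
    "competes in amateur triathlons":  {"carol", "frank"},
}

def shrink_pool(attributes: dict[str, set[str]]) -> set[str]:
    """Intersect the candidate sets for every leaked attribute."""
    pools = iter(attributes.values())
    pool = set(next(pools))
    for candidates in pools:
        pool &= candidates
    return pool

print(shrink_pool(attribute_candidates))  # → {'carol'}
```

No single detail above identifies anyone; the combination does. What the research shows is that LLMs can automate both steps: extracting these attributes from years of posts and performing the cross-referencing at scale.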
Who Is Most at Risk?
The most vulnerable users are those who consistently share personal details under pseudonyms – especially older individuals or those unfamiliar with advanced privacy practices. Researchers found that the more information someone reveals over time, the easier they are to unmask.
This poses a direct threat to whistleblowers, activists, journalists, and anyone else relying on anonymity to protect themselves from surveillance, harassment, or censorship. Governments could exploit this technology to identify dissidents; corporations could use it for hyper-targeted advertising or customer profiling; and malicious actors could launch highly personalized social engineering attacks.
The Technology Behind the Breakthrough
The researchers built their system using publicly available datasets from platforms like Hacker News, LinkedIn, and Reddit. They tested LLMs by deliberately splitting anonymized Reddit accounts and challenging the AI to match them to their original identities. The results were clear: AI-powered deanonymization is not just possible; it’s becoming increasingly efficient.
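The two metrics quoted in the article, accuracy and precision, measure different things: accuracy counts correct matches over all accounts, while precision counts correct matches only among the matches the system actually asserts. A hedged sketch (invented data, not the study's own evaluation code):

```python
# Sketch of the two reported metrics for an account-matching system.
# The system may abstain (predict None) when unsure; precision is
# computed only over the matches it actually asserts.

def accuracy(predictions: list, truths: list) -> float:
    """Fraction of all cases where the predicted identity is correct."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

def precision(predictions: list, truths: list) -> float:
    """Among asserted (non-None) matches, the fraction that are correct."""
    asserted = [(p, t) for p, t in zip(predictions, truths) if p is not None]
    if not asserted:
        return 0.0
    return sum(p == t for p, t in asserted) / len(asserted)

# Invented example: 5 anonymous accounts, system abstains on two.
truths      = ["u1", "u2", "u3", "u4", "u5"]
predictions = ["u1", None, "u3", None, "u9"]

print(accuracy(predictions, truths))   # → 0.4 (2 of 5 correct overall)
print(precision(predictions, truths))  # → 0.666... (2 of 3 asserted correct)
```

This distinction explains how the reported figures can differ: a system can be highly precise (90% of the identities it asserts are right) while abstaining or failing often enough that overall accuracy sits lower, at 68%.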
The study authors emphasize that the technology doesn’t require extraordinary computational power or specialized knowledge. The underlying mechanics are already in place, and they anticipate that within a few years, everyday users will have access to tools capable of unmasking anonymous accounts.
The Solution: Disposable Accounts
The most effective way to protect online anonymity is surprisingly simple: use disposable accounts for sensitive posts. Creating a one-time-use profile eliminates the trail of personal data that LLMs exploit. If you need to share something truly confidential, don’t use the same account you’ve used for years; create a new one specifically for that purpose.
“If you care about something being anonymous, if you have something to protect, be mindful of this,” says Daniel Paleka, lead author of the study. “The fundamentals of the technology are there. If there are no guards, I fully expect someone to misuse it.”
The window to address this issue is closing. As AI tools become more accessible, the erosion of online anonymity will accelerate. The study serves as a wake-up call: users, platforms, and policymakers must act now to safeguard privacy before it disappears entirely.