
AI Chatbots Leak Real Phone Numbers: Urgent Privacy Crisis Unfolds

Last updated: 2026-05-14 06:12:37 · Technology

Breaking: AI Chatbots Expose Personal Phone Numbers—No Easy Fix

Urgent: Multiple individuals report that Google's Gemini and other AI chatbots are revealing real phone numbers to strangers. Experts warn of a growing privacy emergency.

AI Chatbots Leak Real Phone Numbers: Urgent Privacy Crisis Unfolds
Source: www.technologyreview.com

A Reddit user described a month-long ordeal: his phone rang constantly with strangers seeking a lawyer, product designer, or locksmith. Callers were misdirected by Google's generative AI. (MIT Technology Review could not independently verify his story.)

In March, software engineer Daniel Abraham in Israel received a WhatsApp message from a stranger—after Gemini provided incorrect customer service instructions containing his number.

In April, a University of Washington PhD candidate tricked Gemini into revealing a colleague's personal cell phone number.

Expert Warning: Widespread Exposure

AI researchers and privacy advocates have long warned about generative AI's privacy risks. Now those risks are materializing with real phone numbers appearing in chatbot outputs.

"These incidents confirm that large language models can regurgitate personally identifiable information from training data," says Rob Shavell, CEO of DeleteMe, a privacy removal service. "The mechanism isn't always clear, but the harm is immediate."

"A customer asks a chatbot something innocuous about themselves and gets back accurate home addresses, phone numbers, family members' names, or employer details."

— Rob Shavell, DeleteMe CEO

Shavell notes a 400% surge in customer queries about generative AI—now numbering a few thousand—over the last seven months. Fifty-five percent involve ChatGPT, 20% Gemini, 15% Claude, and 10% other tools.

Victims face two scenarios: a direct hit, where their own data is exposed, or a secondary leak, where a chatbot gives a wrong answer that nonetheless contains someone else's real contact information.

Background: The Training Data Problem

Large language models like Gemini, ChatGPT, and Claude are trained on vast datasets scraped from the internet, including public directories, forums, and social media. When these models generate responses, they can inadvertently reproduce exact phone numbers from their training material.


This is not a bug but an inherent property of how LLMs work: they memorize and reproduce patterns from their training data. Companies like Google and OpenAI apply filters to block personally identifiable information (PII) in outputs, but data poisoning and prompt injection attacks can bypass them.
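To make the filtering idea concrete, here is a minimal sketch of an output-side redaction filter that scrubs phone-number-like strings from a response before it reaches the user. This is an illustrative assumption, not how Google or OpenAI actually implement their safeguards; the regex and function names are hypothetical, and production PII filters are far more sophisticated (and, as the article notes, still bypassable).

```python
import re

# Hypothetical phone-number pattern for demonstration only:
# optional country code, optional area code, subscriber number.
PHONE_PATTERN = re.compile(
    r"(?:\+?\d{1,3}[\s.-]?)?"      # optional country code, e.g. "+972 "
    r"(?:\(?\d{2,4}\)?[\s.-]?)"    # area code, e.g. "54-"
    r"\d{3}[\s.-]?\d{4}"           # subscriber number, e.g. "123-4567"
)

def redact_phone_numbers(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything that looks like a phone number with a placeholder."""
    return PHONE_PATTERN.sub(placeholder, text)

print(redact_phone_numbers("Call support at +972 54-123-4567 after 9am."))
# → "Call support at [REDACTED] after 9am."
```

Even a filter like this only catches well-formed numbers; digits spelled out in words, split across sentences, or obfuscated by an attacker's prompt would slip through, which is one reason experts say there is no easy fix.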

The Reddit user's case highlights the challenge: callers kept coming despite his pleas, and there is no easy opt-out. Privacy laws like GDPR and CCPA require data deletion—but not from AI models themselves.

What This Means: A New Privacy Battlefield

These incidents signal a shift: generative AI is no longer just a productivity tool—it's a vector for involuntary data exposure. Individuals have little recourse: they cannot easily remove their data from training sets, and companies are slow to patch vulnerabilities.

For victims, the consequences are severe: harassment, phishing risks, and loss of control over personal information. For organizations, reputation damage and legal liability loom.

The 400% increase in privacy requests to DeleteMe suggests this is just the tip of the iceberg. As AI chatbots become ubiquitous, expect more leaks, a growing public backlash, and mounting demands for better safeguards.

Key Takeaways

  • Real phone numbers are being exposed by AI chatbots in response to innocuous queries.
  • Experts attribute this to training data contamination—models reproducing memorized PII.
  • No easy fix exists for individuals; removing data from AI systems is notoriously difficult.
  • DeleteMe reports a 400% increase in AI-related privacy requests since late 2023.

Updated: October 2023 — This is a developing story. Check back for updates on how companies are responding.