I didn’t expect to find myself Googling my own name at 2:00 a.m., cross-checking DOJ press releases and wondering if some faceless algorithm had quietly tagged me as “high risk.”
My stomach dropped.
Let me be clear right out of the gate:
There is a Bradley Beatty who was recently indicted in a DOJ case involving cryptocurrency fraud.
That is not me!
Different person.
Different life.
Different everything.
I’m Brad W. Beatty—a cybersecurity executive, UNLV graduate, author, and trusted federal leader who has spent over two decades building application security programs, designing secure architectures for government agencies, and leading human-centered security awareness initiatives. My work revolves around protecting people, not exploiting them. And yet somehow, because of an automated system’s inability to separate truth from similarity, I found myself caught in a digital shadow that wasn’t mine.
I’ve seen the rise of AI from both a builder’s and a breaker’s perspective. I understand how these algorithms work. I also know how easily they can fail, especially when they mistake surface-level data for meaningful context.
And this time, the mistake had my name on it.
Welcome to the Age of AI Hallucinations
In an era where AI is rapidly integrating into every facet of our lives, from personalized recommendations to critical hiring decisions, the stakes for accuracy have never been higher.
In the AI world, when a model invents something that isn’t true but sounds plausible, we call it a hallucination. When that hallucination gets tied to a real person’s identity, especially in hiring or background checks, it becomes more than a tech issue.
It becomes a threat to real lives, real careers, and real reputations.
Imagine being automatically moved to the rejection pile for a dream job because an algorithm mistakenly flagged you based on a name match, without a human ever seeing your resume.
You applied. You were qualified. But before a recruiter ever glanced at your certifications or experience, a system quietly scanned your name against scraped public data, matched it to someone else’s criminal record, and flagged your profile as high risk.
From there, you’re filtered out...
no explanation,
no appeal,
no interview.
Just silence.
That’s what makes these hallucinations so dangerous. They don’t arrive with fanfare. They slip in through the back door of automation and quietly rewrite your story.
This isn’t just a glitch.
It’s more like a virtual deepfake of your reputation.
Instead of faking your face or voice, it’s faking your background—a hallucinated narrative stitched together from fragments. One part a press release. One part a public record. One part an AI system making leaps with no human oversight.
This is what I call AI poisoning by association.
Not because your actions or data were poisoned, but because the system linking them together was fed too little context and too much confidence.
Why AI Can Get This So Wrong
AI-driven hiring platforms and background check systems aren’t evil. They’re built to reduce friction, accelerate vetting, and spot patterns. But the reality is, they often operate with limited inputs and no human guardrails.
Let’s walk through what can go wrong:
1. Name Matching Takes Priority Over Identity
Many AI systems focus too heavily on name matching. These platforms may flag or cross-reference individuals based on similar names without verifying other key identity markers like birthdates, middle initials, or geographic history.
If your name is common, or even close to someone’s in a legal database, the system might grab that info, adding it to your “risk profile” without ever pausing to ask, “Is this the same person?”
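To make this concrete, here’s a minimal sketch, in Python, of what name-only matching looks like. To be clear, this is not any vendor’s actual code; the records, fields, and threshold are invented for illustration.

```python
# A hypothetical, simplified name-only matcher. The records, fields, and
# threshold are invented for illustration; no real screening vendor's
# logic is shown here.
from difflib import SequenceMatcher

SCRAPED_RECORDS = [
    # A public record that belongs to a completely different person.
    {"name": "Bradley Beatty", "dob": "1985-03-12", "source": "DOJ press release"},
]

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two lowercased names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def naive_flag(candidate_name: str, threshold: float = 0.75) -> list[dict]:
    """Flag any scraped record whose name merely resembles the candidate's.

    Note what is missing: no date of birth, no location, no middle
    initial, no employment history. Resemblance is treated as identity.
    """
    return [r for r in SCRAPED_RECORDS
            if name_similarity(candidate_name, r["name"]) >= threshold]

print(naive_flag("Brad W. Beatty"))  # the unrelated record gets attached
```

Nothing in that function ever asks who the record actually belongs to. Resemblance is enough.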
2. Public Record Scraping Lacks Context
Court records, DOJ announcements, and news articles can be indexed by AI tools with little regard for nuance.
The problem is that many public records surface little more than a name and a charge. And even when they do include other identifiers, like dates of birth, case numbers, or partial addresses, AI systems often weight the name match above those crucial corroborating details.
It’s not that the information is unavailable.
It’s that the algorithm wasn’t built to weigh it properly.
So the AI pulls it, tags it, and someone with a similar name could end up falsely flagged during a job application, even though the available data should have ruled them out.
Accuracy isn’t just about access.
It’s about intent, validation, and context.
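If you want to see why “wasn’t built to weigh it properly” matters, consider a toy scoring rule. The weights below are my own assumption, not anyone’s real formula, but they capture the imbalance: when the name signal dominates, a mismatched date of birth barely moves the result.

```python
# A toy match score with invented weights. The imbalance is the bug:
# the name signal dominates, so a mismatched date of birth barely
# moves the overall result.
WEIGHTS = {"name": 0.9, "dob": 0.1}

def match_score(name_sim: float, dob_matches: bool) -> float:
    """Blend a name-similarity score with a date-of-birth check."""
    return WEIGHTS["name"] * name_sim + WEIGHTS["dob"] * (1.0 if dob_matches else 0.0)

# Similar name, wrong person: the score still clears a 0.7 "match" bar.
print(round(match_score(name_sim=0.85, dob_matches=False), 3))  # 0.765
```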
3. Reputation Scores Mix Real Data with Wrong Assumptions
Some tools go a step further and assign you a trust or risk score based on what's found about you online.
This can include blog posts, articles, court cases, social mentions, or public directory listings. If an unrelated person with your name ends up in a negative database, it could drop your score without any actual misconduct on your part.
And here’s the worst part: you might never even know your score changed, or why.
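For illustration only, here’s what that kind of scoring can look like under the hood. The signal names and penalty values are my own invention; the point is that a record attached purely by name similarity drags the score down with no verification and no notice.

```python
# Hypothetical reputation scoring. Signal names and penalties are invented;
# the point is that a name-matched record lowers the score with no identity
# verification and no notification to the person being scored.
PENALTIES = {
    "court_record_name_match": 40,   # attached purely by name similarity
    "negative_news_mention": 25,
    "unverified_address_history": 10,
}

def reputation_score(signals: dict[str, int]) -> int:
    """Start at 100 and subtract a fixed penalty for each negative signal."""
    score = 100
    for signal, count in signals.items():
        score -= PENALTIES.get(signal, 0) * count
    return max(score, 0)

# The candidate did nothing wrong; one scraped record with a similar name
# is enough to push them below a hypothetical auto-reject cutoff of 50.
print(reputation_score({"court_record_name_match": 1, "negative_news_mention": 1}))  # 35
```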
4. There’s Rarely a Human in the Loop
The problem isn’t just the AI.
It’s that too many of these systems are automated from end to end.
No recruiter ever sees the false connection.
No manager ever reads the flag.
The most insidious part?
This often happens silently. You’re simply not called back.
No explanation. No feedback. Just the quiet hum of a system making a decision about your future without ever really seeing you.
It’s a silent denial: an opportunity lost, and a growing confusion as you wonder what went wrong.
What We Can Do About It
We don’t need to shut down automation.
But we do need to make it smarter, more transparent, and more accountable.
Here’s how we begin to fix this:
Real Identity Verification Before Flagging
Relying on name-only matching is lazy and dangerous. Systems should use multiple factors, such as employer history, region, certifications, or education, to validate a person’s identity before assigning any reputation tag.
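Here’s a minimal sketch of what I mean, assuming a handful of identity fields are available. The field names, sample values, and the two-signal requirement are illustrative assumptions, not a description of any real product.

```python
# A minimal sketch of multi-signal identity verification before flagging.
# Field names, sample values, and the two-signal requirement are assumptions
# for illustration, not a description of any real screening product.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    dob: str | None = None
    state: str | None = None
    employers: tuple[str, ...] = ()

def corroborating_signals(candidate: Person, record: Person) -> int:
    """Count identity signals beyond the name that actually agree."""
    signals = 0
    if candidate.dob and record.dob and candidate.dob == record.dob:
        signals += 1
    if candidate.state and record.state and candidate.state == record.state:
        signals += 1
    if set(candidate.employers) & set(record.employers):
        signals += 1
    return signals

def should_attach(candidate: Person, record: Person, required: int = 2) -> bool:
    """Attach a public record only when enough non-name signals corroborate it."""
    return corroborating_signals(candidate, record) >= required

# Invented sample data: a similar name alone never attaches the record.
applicant = Person("Brad W. Beatty", dob="1970-01-01", state="NV")
record = Person("Bradley Beatty", dob="1985-03-12", state="FL")
print(should_attach(applicant, record))  # False
```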
Bring Humans Back into the Loop
AI should assist the decision-making process, not replace it.
If a candidate is flagged for any reason, there must be a human review step before any hiring decision is made. A name alone is not evidence. It’s a clue, and it deserves context.
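As a sketch, the routing rule can be as simple as this: the only fully automatic outcome is advancement, and anything flagged goes to a person. The decision values below are assumptions for illustration.

```python
# Sketch of a human-in-the-loop gate. The decision values and routing rule
# are illustrative assumptions: the only automatic outcome is "advance",
# and every flag goes to a reviewer instead of to the rejection pile.
from enum import Enum, auto

class Decision(Enum):
    ADVANCE = auto()
    HUMAN_REVIEW = auto()

def route_candidate(flags: list[str]) -> Decision:
    """Never auto-reject: anything the system flags goes to a human."""
    return Decision.HUMAN_REVIEW if flags else Decision.ADVANCE

print(route_candidate([]))                            # Decision.ADVANCE
print(route_candidate(["court_record_name_match"]))   # Decision.HUMAN_REVIEW
```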
Clear Appeal Processes for Background Flags
If a candidate is flagged by an automated system, they should be notified and given a way to dispute or clarify the issue.
We’ve built appeal processes for credit scores and insurance rates; why not for employment risk assessments?
Professionals Must Own Their Digital Narrative
If your name is similar to that of someone in the public eye, or worse, someone in legal trouble, make it clear who you are.
Maintain an active, accurate LinkedIn.
Post an “About Me” blurb on your blog or portfolio site.
Include your certifications, history, and unique identifiers that AI and recruiters alike can anchor to.
The clearer your real story is, the harder it is for a hallucinated one to take hold.
Why This Matters to All of Us
I’m speaking out because I’ve seen how systems drift.
I’ve seen what happens when security controls assume too much and validate too little.
And I’ve seen how quiet errors, like this one, can derail someone’s entire trajectory without anyone realizing it happened.
This isn’t just a glitch in a job search.
It’s a signal that our reliance on AI without transparency is putting reputations at risk.
If it happened to me, a published, certified, security-cleared leader, it can happen to anyone.
So let’s design systems that don’t hallucinate trust.
Let’s build processes that double-check the truth.
And let’s not forget that behind every name in a database is a real person trying to move forward.
"I’m Brad W. Beatty."
"Not the one in the DOJ release."
"No, I'm the one building secure systems, helping teams get better, and pushing for a world where technology doesn’t confuse shadows for people."
If you’ve experienced something similar, or if you’re building AI tools and want to do better, I’d love to hear from you.
Perhaps even take a moment to Google yourself and see what story the internet tells.
- Brad W. Beatty