The “Vampire Question” and the Myth of Private AI: A Deeper Look
Once your data enters a public model, it becomes immortal. Here is how to drive the stake through the heart of a bad contract.
We need to have an uncomfortable conversation about “Free” AI.
If you are an HR leader right now, I know the pressure you are under. You are hearing it from your Board, your CHRO, and probably your kids. The message is loud and clear: “Innovate or die.”
So, a vendor pitches you a shiny new “AI Copilot.” It promises to write job descriptions, summarize performance reviews, and predict attrition instantly. It looks like magic. It feels like the solution to your burnout.
You ask the standard due diligence question: “Is our data safe?”
They smile, look you in the eye, and say: “We take privacy very seriously.”
Stop.
I need you to pause right there. “We take privacy seriously” is not a legal term. It is a vibes-based deflection. It is exactly what I tell my dentist when he asks if I’ve been flossing every day. I want it to be true, but that doesn’t mean it is.
In the Department of First Things First, we don’t accept vibes. We ask the Vampire Question.
Why “Vampire”?
I use this analogy because it perfectly describes the legal architecture of Large Language Models (LLMs).
Vampire rules are specific: They cannot enter your house unless invited. But once you invite them in, they drain you dry, and they live forever.
Public LLMs work the same way.
The Invitation: You paste your sensitive salary bands, your strategy documents, or a candidate’s resume into the prompt window.
The Drain: The model ingests that data.
Immortality: That data becomes part of the model’s training set. It learns from you. It becomes part of its neural pathways. You can never get it back.
Six months from now, a competitor could ask that same model for “compensation strategies in the pharmaceutical industry,” and the model could surface an answer drawn directly from the spreadsheet you uploaded today.
The Coach’s Corner: “Training” vs. “Inference”
To be the architect of the future, you have to understand the difference between these two words. Vendors love to blur the lines, but the difference is life or death for your data privacy.
1. Inference (The Good Guy)
This is when the AI processes your data to give you an answer, and then immediately forgets it.
The Analogy: Imagine showing a master chef a picture of a burger. He looks at it, cooks the burger for you, and then throws the picture in the shredder. He didn’t learn anything new; he just executed a task.
Status: Safe. (Assuming zero-retention policies are in place.)
2. Training (The Vampire)
This is when the AI keeps your data to make itself smarter for other customers.
The Analogy: You show that same chef your grandmother’s secret sauce recipe. He memorizes it. Next week, he puts “Grandma’s Sauce” on the menu for every other patron in the restaurant. You just gave away your IP for the price of a burger.
Status: Catastrophic.
The Trap: Many contracts hide this in a clause that says: “We may use customer data to improve our services or user experience.”
Translation: “We are training on your data.”
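If it helps to make the distinction concrete, here is a minimal sketch of what the two behaviors look like on the vendor’s side. Everything in it is hypothetical (the function names, the TRAINING_QUEUE, the whole “vendor”); the point is that the answer you get back looks identical either way, and the difference lives entirely in what happens to your prompt afterward.

```python
# Hypothetical sketch: "inference only" vs. "we may use your data to
# improve our services." No real vendor or API is represented here.

TRAINING_QUEUE: list[str] = []  # the vendor's future fine-tuning dataset


def generate_answer(prompt: str) -> str:
    """Stand-in for the model call itself."""
    return f"Summary of {len(prompt)} characters of input."


def handle_inference_only(prompt: str) -> str:
    # The Good Guy: process the prompt, return the answer, keep nothing.
    answer = generate_answer(prompt)
    return answer  # the prompt goes out of scope; nothing is persisted


def handle_with_training(prompt: str) -> str:
    # The Vampire: same answer to you, but your prompt is quietly kept
    # to "improve the service," i.e., to train the next model version.
    answer = generate_answer(prompt)
    TRAINING_QUEUE.append(prompt)  # your salary bands, now immortal
    return answer


if __name__ == "__main__":
    handle_inference_only("Confidential: salary bands ...")
    handle_with_training("Confidential: salary bands ...")
    print(f"Prompts retained for training: {len(TRAINING_QUEUE)}")  # 1
```

Notice that from your side of the screen, both calls behave exactly the same. That is why the contract language matters more than the demo.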
The “Spicy” Truth about Free Tools
If you are using a free version of ChatGPT, Gemini, or any “Free AI Resume Scanner,” you are the product.
These companies are burning billions of dollars on compute power. They aren’t giving you the tool for free out of the kindness of their hearts. You are paying them with your IP. Every time you paste a candidate’s resume into a free tool to “summarize it,” you are feeding that candidate’s PII (Personally Identifiable Information) into a public dataset.
As HR leaders, we are the guardians of the most sensitive data in the company. Salaries. Social Security Numbers. Performance issues. If Engineering leaks code, they lose a feature. If HR leaks data, people get hurt, and we get sued.
The Interrogation Checklist (The Tear Sheet)
I am a recovering “Yes Man,” but on this issue, I am the Department of “Absolutely Not.”
However, I don’t want you to just be the “Department of No.” I want you to be the “Department of Safe Yes.” You can use these tools, but you have to vet the vendor first.
Here is the script I use. Feel free to copy and paste it directly into your next RFI. This is a slight rehash of a previous post, but I think it’s important enough to repeat.
The First Things First AI Governance Questions:
1. The Zero Retention Check
“Does your model use our inputs (prompts) or outputs for Training or Fine-Tuning your foundational models?”
Acceptable Answer: “No. We have a zero-retention policy for inference. Data stays in your tenant and is flushed immediately after the session.”
Unacceptable Answer: “We de-identify data to improve user experience.” (Run away. “De-identified” data is notoriously easy to re-identify, especially with AI in the loop.)
2. The “Blast Radius” Check (The RAG Problem)
“Does the AI respect existing Workday/HRIS security groups at the data retrieval layer?”
The Context: Most corporate AI uses “RAG” (Retrieval-Augmented Generation). It searches your documents to find answers.
The Danger: If I ask the AI “What is the CEO’s bonus?”, the AI might be running as a “System User” that has access to everything. It finds the document and shows it to me, bypassing the security role that says I shouldn’t see it.
The Requirement: The AI must inherit the security context of the logged-in user. If I can’t see it in Workday, the AI shouldn’t be able to read it to me.
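To make that requirement concrete, here is a rough sketch of security-trimmed retrieval. The document store, group names, and helper functions are all invented for illustration; the pattern that matters is that the permission check runs with the logged-in user’s security groups, before anything is handed to the model.

```python
# Hypothetical sketch of security-trimmed RAG retrieval. Documents,
# security groups, and helpers are invented; the pattern is what matters:
# filter at retrieval time using the requesting user's permissions.

from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str
    allowed_groups: set[str]  # security groups mirrored from the HRIS


DOCUMENT_STORE = [
    Document("CEO bonus letter", "CEO bonus: ...", {"exec_comp_admins"}),
    Document("PTO policy", "Employees accrue ...", {"all_employees"}),
]


def retrieve(query: str, user_groups: set[str]) -> list[Document]:
    # Step 1: permission filter, using the *requesting user's* groups.
    # This runs before anything is handed to the model.
    visible = [d for d in DOCUMENT_STORE if d.allowed_groups & user_groups]
    # Step 2: a trivial keyword match stands in for real semantic search.
    terms = query.lower().split()
    return [d for d in visible if any(t in d.title.lower() for t in terms)]


def answer_with_rag(query: str, user_groups: set[str]) -> str:
    context = retrieve(query, user_groups)
    if not context:
        return "No documents you are authorized to see match that question."
    # Only authorized text is ever placed into the model's prompt.
    return f"Answering from {len(context)} document(s) you can access."


if __name__ == "__main__":
    question = "CEO bonus"
    print(answer_with_rag(question, {"all_employees"}))     # blocked
    print(answer_with_rag(question, {"exec_comp_admins"}))  # allowed
```

The anti-pattern is the opposite: a single “System User” account that retrieves everything and trusts the model to decide what to reveal. The filter has to live at the retrieval layer, not in the prompt.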
3. The Audit Receipt (The Nuance)
“Show me the third-party audit for bias testing.”
The Trap: “Our internal data science team tests for fairness.” (That’s like a student grading their own homework).
The Pro Answer: “We do not commission the audit ourselves because paying for it creates a conflict of interest. However, we open our ‘black box’ to [Reputable Firm] or allow clients to run their own adverse impact testing.”
Why I like this: I actually respect vendors who say, “We don’t buy the audit because we don’t want to buy the result.” That shows integrity. But they must provide the transparency for an independent check to happen.
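And if a vendor does let you run your own adverse impact testing, the classic first pass is the four-fifths (80%) rule: compare each group’s selection rate to the highest group’s rate and flag anything below 0.8. Here is a minimal sketch; the counts are invented, so swap in your own applicant and selection data.

```python
# Minimal adverse impact check using the four-fifths (80%) rule.
# The counts below are made up; substitute your own selection data.

selections = {
    # group: (number selected, number of applicants)
    "group_a": (48, 120),   # selection rate 0.40
    "group_b": (30, 100),   # selection rate 0.30
}

rates = {g: sel / apps for g, (sel, apps) in selections.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

In this made-up example, group_b’s impact ratio is 0.75, which falls under the 0.8 threshold and would warrant a closer look. The rule is a screening heuristic, not a verdict; it just tells you where to dig.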
Final Thought
Innovation without governance isn’t strategy; it’s liability.
Your job isn’t to stop AI. I love AI. I use it every day to organize my thoughts and debug my code (safely, in an enterprise instance). Your job is to make sure that when you invite the vampire in, you’ve removed its fangs first.
Keep your garlic handy.
— Mike
P.S. Speaking of safety rules...
My 11-year-old, Justin, is currently in that phase where he thinks he’s a lawyer. I told him about the “Vampire Rule” for data, and he immediately tried to apply it to our house rules.
Justin: “So, if I don’t invite you to check my screen time, looking at it is a violation of my privacy policy?”
Me: “Justin, I am the Administrator. I have root access. I am the System.”
Justin: “That sounds like a monopoly. I’m going to report you to the FTC.”
Me: “Give me the tablet.”
(He did not give me the tablet. We are currently in arbitration over the definition of “educational YouTube” vs. “setting Ferraris on fire in a cornfield.” Yeah, thanks for that, WhistlinDiesel. I’ll keep you posted on the settlement.)
P.P.S. For those lucky parents who don’t know what I’m talking about, this is the Ferrari video that is currently warping the minds of 6th graders everywhere. Proceed with caution.