AI Usage: Lying and Shame
Blog #6 of The AI Disruptor Playbook Series: On subtle enterprise culture, the quiet dishonesty at the center of modern knowledge work — and why transparency is the next status signal
We are likely facing a new identity crisis inside the workplace, one introduced by AI usage.
Here is what’s going on:
Widespread Concealment: According to a January 2026 report from Yahoo Finance, nearly half (45%) of employees do not disclose that they use AI. While some treat it as simply part of a normal workflow, Gen Z employees specifically cited fear of being judged as a primary reason for nondisclosure.
The C-Suite Paradox: A March 2026 LinkedIn report revealed that 53.4% of C-suite leaders admit to concealing their AI habits despite being frequent users. This suggests that even those in power feel uneasy about the optics of relying on automation.
Competency Fears: A February 2026 Harvard Business Review analysis found that 60% of workers worry that using AI will lead colleagues to question their personal competency. Furthermore, 61% worry that AI might make others think they don’t bring unique value.
That should stop us in our tracks, because it suggests the biggest barrier to AI adoption may not be capability. It may be shame.
And because it is shame, what follows is lying.
Not dramatic lying. Quiet lying. The kind that lives in the spaces between what people do and what they are willing to say they did. A deleted line in an email. A sanitized version of “how I worked through this.” A meeting where every person in the room used AI to prepare, and no person in the room mentions it.
That is a far more interesting problem than “will AI replace jobs?”
The real story is that AI already has a seat at the table. The discomfort comes from how quickly it moved from novelty to necessity, and how threatening that feels to people who built careers on proving they did the thinking themselves. Academic research in PNAS confirms this is a measurable social dynamic, not a vibe — disclosure carries a documented evaluative cost.
In other words: we are not just adopting a new tool. We are negotiating a new identity crisis inside the workplace.
And like most identity crises, it hides the real question behind a fake one.
The Deterministic Trap, in Professional Clothes
Here is the pattern I keep seeing across every AI transformation conversation: we draw a hard binary line — AI-assisted or not AI-assisted, authentic or synthetic, original thought or derivative output — and then we sort every piece of work onto one side of it.
The line feels clean. It is also, increasingly, fiction.
This is the deterministic trap I have written about before, dressed up in professional clothes. It shows up in boardrooms, in performance reviews, in the quiet internal monologue of a senior analyst deciding whether to mention that an AI helped her pressure-test an argument. The binary is comforting because it preserves a familiar hierarchy — thinker above tool, human above machine, original above derivative. The binary is also wrong, because that is not how modern knowledge work actually happens anymore.
The most interesting work in any organization right now is happening in the space the binary refuses to acknowledge. Executives want AI-driven productivity but still reward the performance of human-only work. A manager praises a polished strategy memo and quietly wonders if a junior employee used AI to produce it. Leaders push innovation publicly while reacting to AI-assisted work as somehow less authentic than work that took longer and looked harder.
Is this just another round of AI theater? The lying follows the theater. Because when the official story does not match the lived reality, people do not change the lived reality. They change what they say about it.
If a machine helps a worker think faster, write better, and avoid dead ends, the result is not diminished. It is improved. Pretending otherwise is a status game disguised as a moral argument.
“Assistant” Is the Safe Word
Notice how comfortable people have become saying AI was an “assistant.”
That word preserves the human ego. It implies the person remained fully in control, with AI playing the role of a polite intern who fetched things from the archive. The framing is safe because it keeps the old hierarchy intact — you did the thinking, the tool did the typing.
But that is often not what is happening.
In real work, AI is increasingly a thinking partner, a sparring partner, a first-draft generator, a research accelerator, and sometimes a reasoning engine that helps people pressure-test assumptions before they act. That is not assistance. That is cognitive partnership. And the deeper the AI’s role becomes, the more threatening it feels to the mythology of individual brilliance — so the admission gets downgraded.
“AI helped me format the deck” is safe.
“AI helped me find the flaw in my own strategy” is not.
The distance between those two sentences is where the lying lives. One is technically true and socially comfortable. The other is fully true and socially expensive. Most professionals, most of the time, choose the comfortable one. And each small choice adds up to an entire workplace culture operating on a story that nobody in the room actually believes.
Mediocrizes the Passive, Sharpens the Expert
Here is the pattern I keep returning to in everything I write about this transition: the same tools mediocrize the passive user and sharpen the expert.
Hand a generative model to someone without taste, judgment, or point of view, and you get forgettable output. Hand the same model to someone operating in their zone — where their expertise, context, and conviction compound — and you get work that could not have existed without them or the tool.
Usage shame punishes exactly the wrong variable.
It stigmatizes the tool when what actually differentiates good work from bad is the human judgment applied through the tool. The person who refuses to admit they use AI is not protecting their authenticity. They are hiding the very evidence of their craft — the questions they asked, the answers they rejected, the framings they pushed the model to sharpen, the moments they overruled the output because something was off.
Taste, judgment, and cultural fluency don’t disappear when you bring an AI into the process. They become more visible, not less — if you are willing to show your work.
The uncomfortable truth is that admitting thoughtful AI usage is a stronger credential than hiding it. It signals that you know how to deploy the most powerful general-purpose technology of our time in service of your actual expertise — rather than pretending the technology doesn’t exist, or using it clumsily behind closed doors.
Every time someone chooses to hide, they do not just misrepresent their process. They erase the evidence of their own judgment.
The Shame Is the Signal
The fact that people hide AI use tells us something important: the shame is real enough to shape behavior. The PNAS study found that people expect a social evaluation penalty when AI is involved, which helps explain why disclosure feels risky even when the work is strong.
People are not just worried about policy violations. They are worried about being seen as lazy, less talented, or replaceable. They are worried that saying “AI helped me think” sounds like admitting weakness, when in many cases it is simply admitting to using the best available leverage.
That is absurd, but it is also predictable.
Every major productivity shift goes through this phase. The printing press was dismissed, then adopted privately, then normalized. The calculator was banned from exams, then required. The internet was a “toy” in serious offices for years before it became the office. Each general-purpose technology provokes the same pattern of panic, hiding, and eventual normalization.
What makes AI different is the emotional intensity of the backlash, because it touches the one thing professionals cling to most: the idea that their value is tied to the exclusivity of their own thoughts.
AI challenges that fantasy directly. And the fantasy fights back by making people lie.
Enterprise Culture Is Behind the Technology
This is why the enterprise conversation is so broken.
Companies are moving faster on deployment than on norms. Workers are already adopting AI while many organizations still lack clear expectations about disclosure, acceptable use, and accountability. So employees do what employees always do under ambiguity: they optimize in private and communicate in public.
That is a recipe for shallow adoption and hidden dependency. It also happens to be the exact opposite of what the technology needs to actually transform work.
If organizations want AI to be used responsibly, they have to make it safe to say the quiet part out loud: yes, AI helped me think; yes, AI shaped this draft; yes, AI improved this analysis, and here is how. If that sentence feels threatening inside your organization, the organization should ask why. The answer is almost never about risk. It is almost always about status.
The uncomfortable version of the same observation: as long as it is dangerous to tell the truth about how work gets done, people will keep lying about it. Not maliciously. Not dramatically. Just quietly, daily, at scale — and the organization will keep making decisions based on a picture of reality that has been sanitized by everyone contributing to it.
This is where the democratization story, the one I keep returning to, gets sharpest. The real unlock of AI is not that anyone can now produce output. It is that anyone with judgment, a point of view, and the willingness to show their work now has a collaborator that lets them operate at the level of their best thinking more consistently.
The bottleneck was never generation. The bottleneck was always judgment, reach, and the stamina to keep thinking clearly under pressure — and the culture that lets people admit how they actually do it.
Three Questions I’m Still Sitting With
I don’t have this fully resolved, and I’d rather leave the questions open than wrap them up neatly.
First: when a leader finally says out loud “this strategic insight came from a conversation with an AI partner, and here is how I pressure-tested it” — will the team respect that more, or less? I suspect more. And I suspect the leaders who go first will be disproportionately rewarded for modeling a behavior everyone else is waiting for permission to adopt.
Second: what happens when the tools are used without judgment, without taste, without point of view? The flood of AI-generated memos and strategies that all sound the same. The reports that are technically correct and totally forgettable. Usage shame does nothing to prevent this failure mode — it only drives it underground. The remedy is not more hiding. It is more visibility, better norms, and more honest conversations about what good AI-assisted work actually looks like.
Third, and the one I can’t stop turning over: what does a culture look like when it finally stops lying about this? Not just in output. In leadership. In the quiet craft of thinking hard about hard problems. I suspect it looks more honest, more collaborative, and more humble than the culture we have now. And I suspect the first organizations to get there will have a significant advantage over the ones still performing a story nobody in the room actually believes.
The Reframe: Transparency Is the New Status Signal
The conventional wisdom says AI-assisted work is a dilution of human contribution. A flood of the synthetic at the expense of the real.
I think we are watching something stranger. We are watching the workplace decide whether honesty or vanity will define professional credibility in the AI era.
My bet is on transparency with judgment.
This is what every general-purpose technology eventually does. It does not replace the human. It relocates what the human contributes. The press relocated authorship from the scribe to the writer. The calculator relocated mathematics from the computation to the problem. AI is relocating expertise from the hand to the point of view — and from individual performance to the honest choreography of human and machine.
The work is still human. It is just happening in a different place than we are used to looking.
Usage shame is what the old location looks like as it defends itself.
Transparency is what the new location looks like as it steps forward.
