How Good Research Gets Weaponized: A McKinsey Case Study
Over the years, I've been watching the slow-motion distortion of genuinely useful research for personal gain. The latest example in this trend is new research from McKinsey on AI and the future of work.
On November 25, McKinsey Global Institute released a 60-page report on AI and the future of work. Within days, my LinkedIn feed filled with posts citing it. They all followed a similar pattern:
"McKinsey just dropped a bomb: 57% of work will be automated."
"The $2.9 trillion opportunity nobody's talking about."
"AI fluency grew 7X—faster than any skill in history."
The reality is that all of those numbers are taken directly from the McKinsey report. But the conclusions being drawn from them range from "not quite what McKinsey said" to "literally the opposite of what McKinsey said."
I'm writing this to share how quality research gets turned into data theater, and why we, as analysts, need to resist the temptation to do the same.
The Pattern We Should All Recognize
The lifecycle of research misuse looks like this:
Credible institution publishes detailed research
Someone reads the summary (or, more likely, a news article or social media post about it)
They pull impressive-sounding statistics
They add their own interpretation
They present it as if the research supports their interpretation
They never link to the source
The result is real data, wrong conclusions, borrowed credibility.
And honestly, we've all felt the pull to do this, personally or professionally.
You're making a point. You remember seeing a statistic that supports it. You find it in a report summary. You cite it. You move on.
The discipline to actually read the full report, understand the methodology, and accurately represent the findings? That takes time we often don't feel we have.
Let's Look at What Happened with the McKinsey Report
The McKinsey report "Agents, robots, and us: Skill partnerships in the age of AI" is serious research. It analyzes ~800 occupations, ~2,000 work activities, and ~7,000 skills from 11 million job postings.
Here's what it actually says, and how it's being misrepresented on social media:
Claim 1: "57% of work will be automated"
What the report actually says:
"Currently demonstrated technologies could, in theory, automate activities accounting for about 57 percent of US work hours."
What's missing:
"In theory" = technical potential, not prediction
First line of the report: "This is not a forecast of job losses"
Their actual adoption forecast: 27% by 2030
Even that 27% requires complete workflow redesign
The difference: "Could theoretically be automated" does not equal "will be automated"
It's like saying "Humans can theoretically run 28 mph" (Usain Bolt proved it's possible) and implying your uncle Dave will hit that speed at the company 5K.
Claim 2: "AI fluency demand grew faster than any skill in history. Including the internet boom."
What the report actually says:
"Demand for AI fluency has grown sevenfold in two years, faster than for any other skill in US job postings."
What's missing:
"In US job postings" (current data, not historical comparison)
The internet boom comparison doesn't appear anywhere in the report
It's growing from a tiny base: only 8M workers (~5% of workforce)
The difference: A superlative that sounds researched but isn't sourced. This is what data theater looks like: the art of sounding precise while being vague.
Claim 3: Case study success stories
What the social media posts say:
"A tech firm: 50% time savings in sales"
"A pharma giant: 60% faster clinical reports"
"A regional bank: 50% reduction in IT modernization time"
What the report actually says:
Tech sales: "Time saved ranged from 30 to 50 percent" for specific roles that "redirected time to strategic work." Requires complete workflow redesign.
Pharma: "Touch time for first human-reviewed drafts dropped by nearly 60 percent" which is just ONE STEP in a longer process. McKinsey notes "scaling these efforts can be challenging."
Bank: "Could reduce required human hours by up to 50 percent" in a pilot program only. Note the word "could."
The difference: Pilot results with caveats are not proven ROI you can count on.
The Most Important Finding (That Gets Buried)
Here's the statistic that should matter most to anyone making decisions about AI:
"Nearly 90 percent of companies say they have invested in AI, but fewer than 40 percent report measurable gains."
90% investment. Under a 40% success rate.
McKinsey's diagnosis: Companies are "applying AI to discrete tasks within old processes" rather than "redesigning entire workflows."
This gap between investment and results is the actual story. Not the automation potential. The execution gap.
How Does This Happen?
I don't think most people doing this are being deliberately deceptive. Here's what I think happens:
The pressure to have a take. LinkedIn rewards timely commentary. A major report drops, and there's something like a 48-hour window where hot takes get traction. Read the full 60-page report or be first? The incentives are clear.
The summary trap. Most reports, including this one from McKinsey, come with executive summaries. They're designed to highlight key findings. But they strip out methodology, caveats, and context. Reading the summary feels like reading the report. It's not.
Confirmation bias. We notice data that supports what we already believe. When we find a statistic that fits our narrative, we grab it and move on. We don't keep reading to see if there's contradicting information.
The borrowed authority play. "McKinsey says" carries weight. It makes your point more credible. But only if McKinsey actually says what you claim they said.
The link omission. This is the tell. If you're genuinely trying to inform people, you link to the source so they can verify and learn more. If you're using the research as social proof for your own argument, you...don't.
Why This Matters for Analysts
As someone who's spent 25 years in analytics, I've seen this pattern destroy credibility, both individual and institutional.
A client asks a question. You remember seeing data that answers it. You pull the number from memory (or a quick search). You present it. Later, someone digs into the source and finds the context you missed. Now everything you've said is suspect.
The discipline required:
Actually read the source. Not the summary. Not the news coverage. Not Facebook posts. The actual report.
Understand the methodology. How did they get these numbers? What assumptions did they make? What did they measure vs. what are you claiming they measured?
Represent it accurately. Include the caveats. Distinguish between "could" and "will." Note when something is a pilot vs. proven at scale.
Link to the source. Always. If you're confident in your interpretation, you should have no problem letting people verify.
Say "I don't know" when you don't. It's better than confidently citing research you haven't fully read.
This takes more time. It's less dramatic. It doesn't generate the same engagement.
But it's the difference between analysis and performance.
A Framework for Better Data Hygiene
When you're tempted to cite research (and we all are), run through this checklist:
Before you cite:
Have I read the full source, not just the summary?
Do I understand how they got these numbers?
Am I distinguishing between potential and prediction?
Am I including important caveats?
Am I linking to the source so others can verify?
Red flags that you're doing data theater:
Adding superlatives that aren't in the source
Omitting context that would complicate the narrative
Using "research shows" without showing the research
Conflating correlation with causation
Cherry-picking case studies without mentioning failures
The gut check: If the original researchers read your interpretation, would they recognize their work?
Why This Matters Beyond Social Media
When research gets misrepresented at scale, it creates real problems:
Decision-makers get bad information. CEOs making workforce decisions based on "57% of work will be automated" will make different choices than if they understood it's "27% adoption by 2030, requiring complete workflow redesign."
The research institution loses credibility. When people eventually realize the viral interpretation was wrong, they blame the source, not the person who misrepresented it.
Trust in data erodes. When people see "research says" followed by contradicting claims, they stop trusting research entirely. We're seeing this play out in real time across multiple domains.
We reinforce bad habits. When speed-reading and cherry-picking get rewarded with engagement, we train a generation of analysts that this is how it's done.
The Takeaway
The McKinsey report is good research. It offers useful frameworks for thinking about how work will evolve.
But it's also a perfect case study in how good research gets weaponized:
Real numbers, selectively presented
Technical caveats, systematically removed
Pilot results, presented as proven ROI
Nuanced findings, collapsed into binary choices
Source material, never linked
This happens because browsing a summary feels like reading a report. Finding a statistic feels like understanding the research. And citing McKinsey feels like doing your homework.
None of those things are true.
The discipline to actually read, think about, and accurately represent research is what separates analysis from performance.
And right now, we have way too much performance and not enough analysis.
Read the actual McKinsey report: "Agents, robots, and us: Skill partnerships in the age of AI"
It's 60 pages. It's worth your time. And it says something quite different than the viral posts suggest.