Do As I Say, Not As I Do
I still remember the day I collected my first car - a Hyundai Verna. The salesperson was genuinely enthusiastic, the kind of energy that’s hard not to catch. First-time buyer, freshly minted adult, susceptible to every word. He walked me through everything: the specs, the chassis, how different it was from everything else on the market. Music to my ears. By the end of it I was sold, even though it wasn’t my first choice walking in.
On collection day, he was still going. Still singing the car’s praises as he drove me over to pick it up. At some point, I turned and asked, almost as an afterthought: so what do you drive?
A Subaru.
The world didn’t end, exactly. But something shifted. Not because he’d lied; he hadn’t. He was probably a perfectly good salesperson doing his job with genuine enthusiasm. But there’s something about that moment that lodges in you. If the car is this good, why isn’t it good enough for you?
The Verna, as it turned out, was not great. The fuel efficiency was poor — I was at the petrol station embarrassingly often. The pickup was weak, especially going uphill. It spent more time at the workshop than I’d like to admit. Eventually, the car and I parted ways. I don’t miss it.
But I’ve thought about that moment many times since. Because I keep seeing the same thing play out, just at a much larger scale.
The pattern has a name.
Researchers call it the “say-do gap”: the well-documented disconnect between what people and organisations claim they believe and what they actually do. Studies suggest that stated preferences predict real behaviour with only about 34% accuracy. In other words, what someone tells you they’ll do is wrong two-thirds of the time.
Companies are not immune to this. In fact, they might be the worst offenders, because the stakes of the gap are higher and the audience is larger.
Take the AI industry right now.
You’re scrolling LinkedIn. The usual parade - someone just automated their entire sales pipeline with Claude, someone else dissolved their BDR team and replaced it with an AI agent, someone’s closing deals while they sleep. You’ve seen this post seventeen times this week, different avatar, same energy.
Then you check Anthropic’s careers page.
At the time of writing, Anthropic - the company whose product everyone on LinkedIn is using to replace their sales teams - has 133 open roles in their sales department. One hundred and thirty-three. Salaries ranging from $116K to $290K. Business Development Representatives, Account Executives, Growth roles. A full-scale human sales operation, being built in parallel with a product marketed as making human sales teams obsolete.
And Anthropic isn’t alone. A few months ago, Salesforce made headlines for aggressively hiring salespeople - this while actively marketing agentic AI to customers as a tool for reducing headcount in, you guessed it, sales.
We’ve seen this movie before.
There’s a concept in the tech industry called “dogfooding” — the idea that a company should use its own product internally before asking customers to rely on it. Microsoft popularised the term back in 1988. The logic is simple: if it’s good enough to sell, it should be good enough to use yourself. When companies do this, it builds credibility. When they don’t, it should raise questions.
A high school friend once tried to convince me to quit smoking. He made a pretty compelling case. He was smoking while he said it.
The crypto era is instructive here too. I know this one personally.
A friend got me into crypto around 2021. He was genuinely pumped — the kind of conviction that feels like insider knowledge when you’re on the outside of it. I trusted him. Put in $30,000. By the time the music stopped, I had $200 left. I later connected the dots: he had every incentive to bring people in. His reputation took a significant hit afterwards, and I wasn’t the only one who got burned.
I don’t think he set out to con anyone. But he was selling a story he personally stood to gain from, and I took the excitement at face value instead of asking harder questions. The loudest voices in the crypto conversation were often the ones most financially exposed to your belief in it.
Then there’s the online course industrial complex. Someone builds a modest income teaching people how to build a modest income. If the method worked as advertised, why would they bother teaching it to you? The question answers itself.
In a noisy, TLDR world, the loudest claim wins by default. Most people don’t have the time, or the incentive, to check whether the person selling the vision is actually living it. That ease is exactly what gets exploited.
So how do you tell the bluff from the real thing?
Get a little more cynical. Not paranoid, but cynical. Paranoid is assuming everything is a lie. Cynical is asking, quietly, who benefits from you believing this.
A few things worth keeping in mind:
Watch what they hire, not what they announce. Press releases are marketing. Job postings are operational reality. When a company posts aggressively for roles they claim their product eliminates, that tells you where the technology actually stands. The hiring budget doesn’t lie the way the keynote does.
Are they eating their own dog food? If the AI company isn’t using AI to run its own sales pipeline, that’s worth noting. If the salesperson drives a Subaru, that’s worth noting too. Behaviour leaks out if you look closely enough. Even if there are legitimate reasons for the gap, the optics take a hit — and optics are often the point.
Who’s getting rich from the idea - and how? The crypto evangelist made money from your belief in crypto, not necessarily from crypto itself. The course seller made money from selling courses about making money. When the messenger profits more from the message than the method, slow down.
Give it time. NFTs had a very loud two years. Then the music stopped. Most hype cycles have a shelf life. The question isn’t whether something is exciting now — it’s whether the people selling it are still around in five years, building on it quietly rather than loudly.
None of this means AI isn’t genuinely transformative. It probably is, in ways we’re still figuring out. The honest version of the story - AI handles the volume, humans handle the stakes - might turn out to be exactly right.
But “might turn out to be right” is very different from the confident, frictionless certainty that fills your feed every morning. The LinkedIn version has already closed the deal, collected the commission, and is posting about it.
My Verna salesperson wasn’t dishonest. He was enthusiastic about something he had good reasons to be enthusiastic about. But he drove a Subaru home every night.
Ask people what they drive. It tells you more than the brochure ever will.