I’ve spent 20+ years in technology—Fortune 500s, startups, financial services, healthcare. I’ve seen every hype cycle. I’ve watched technologies get declared “the future” only to become footnotes.
This time is different.
And not in the way most people think.
## The Numbers That Keep Me Up at Night
Let me share some statistics that fundamentally changed how I see the next decade:
AI is writing our software:
- GitHub Copilot now writes 46% of all code produced by its 20 million users—up from 27% at launch in 2022
- For Java developers, that number hits 61%
- 90% of Fortune 100 companies have adopted AI coding assistants
- Developers complete tasks 55% faster with AI assistance
- 256 billion lines of AI-generated code already exist as of 2024
AI is writing our content:
- Over 50% of new web articles are now primarily AI-generated
- Ahrefs found 74.2% of new web pages contain detectable AI content
- Europol predicts 90% of online content will be synthetically generated by 2026
- 10+ billion AI-generated pages have been published since 2023
AI is becoming autonomous:
- Gartner predicts 40% of enterprise applications will embed AI agents by end of 2026
- 75% of companies plan to invest in autonomous AI agents this year
- By 2028, 15% of day-to-day work decisions will be made autonomously by agentic AI—up from zero in 2024
Read those numbers again. We’re not preparing for an AI-dominated future. We’re already in one.
## The Pattern Everyone’s Missing
Here’s what most analyses get wrong: they frame this as “AI vs. humans” or “automation vs. employment.”
That’s not what’s happening.
What’s happening is far more interesting—and far more consequential.
We’re witnessing the early stages of what I call The Great Inversion: the economic flip where abundant things become worthless and scarce things become priceless.
Think about it. For most of human history:
- Content was scarce. Creating a book, article, or piece of music required significant time, skill, and resources.
- Distribution was the bottleneck. Getting your work in front of people required publishers, record labels, broadcasters—gatekeepers with limited slots.
The internet inverted this. Suddenly:
- Distribution became free and infinite
- But content creation still required human effort
Now AI is completing the inversion:
- Distribution is free
- Content creation is free
- What’s left?
Authenticity.
The one thing AI cannot manufacture is the genuine article. The human experience behind the work. The soul.
## Why I Really Built HumanMark
I didn’t build HumanMark because I’m afraid of AI. I use AI tools every day. I’m writing this with AI assistance. (And I’m comfortable telling you that—which is itself part of the point I’m making.)
I built HumanMark because I saw a gap between two futures:
Future A: A world where we can no longer tell what’s real, trust erodes completely, and synthetic content creates a kind of informational anarchy.
Future B: A world where AI abundance actually increases the premium on human authenticity—where we develop the tools and norms to verify and celebrate genuine human creativity.
Future A is the default path. Future B requires intentional infrastructure.
HumanMark is my contribution to Future B.
## The Trust Collapse Is Already Here
This isn’t theoretical. We’re watching it happen in real-time:
The trust crisis in numbers:
- 95% of Americans report encountering suspicious or likely AI-generated content online
- 70% of respondents in the 2025 Edelman Trust Barometer worry that journalists purposely mislead people
- Only 7% of U.S. adults have a “great deal” of trust in mass media
- 71% of consumers worry about being able to trust what they see or hear because of AI
The head of Instagram, Adam Mosseri, put it bluntly: for most of his life he could safely assume photographs or videos were largely accurate captures of real moments. That assumption no longer holds.
We’re genetically predisposed to believe our eyes. That wiring is now a vulnerability.
Deepfakes aren’t just a fraud risk—they’re an epistemological crisis. When any video can be fabricated, two things happen:
- Fake things become believable
- Real things become deniable
This “liar’s dividend”—the ability to dismiss authentic recordings as probable fakes—may be more corrosive than the fakes themselves.
## The Authenticity Premium Is Real
Here’s where it gets interesting.
Just as AI content floods the internet, something counterintuitive is emerging: a measurable economic premium for verified human work.
- 45% of Americans would rather hire a human designer than use AI, even when AI is the cheaper option
- 90% of executives are more open to outreach from brands with high-quality human thought leadership
- 102% conversion lift from user-generated content vs. traditional marketing
- Only 29% trust AI-labeled content as much as human-created content
YouTube announced new monetization rules in 2025 that demonetize mass-produced AI content while rewarding original, human-made work. The platform explicitly moved to penalize “AI slop.”
We’re not heading toward a world where human content is obsolete. We’re heading toward a world where human content is a luxury good.
## The Three Waves of AI Transformation
To understand where we’re going, look at where we’ve been:
**Wave 1: Automation (2010s)**

Structured, repetitive tasks. AI handled data entry, basic analysis, and rule-based processes. Humans did everything else. The impact was real but contained to specific workflows.

**Wave 2: Generation (2020–2025)**

Content creation at scale. AI began creating: writing articles, generating images, composing music, writing code. Humans shifted to editing, curating, and prompting. The volume of synthetic content exploded.

**Wave 3: Agency (2026+)**

Autonomous decision-making. AI makes decisions, executes multi-step workflows, and operates as "digital workers" with their own KPIs. Humans become managers and auditors of AI workforces.
Each wave increases AI’s capability. But here’s what’s often missed: each wave also increases the premium on verified human judgment.
- In Wave 1, humans were doing the work.
- In Wave 2, humans are directing the work.
- In Wave 3, humans are accountable for the work.
The question isn’t “will AI replace humans?” The question is: “How do we verify when humans were involved, and what that involvement means?”
That’s the question HumanMark answers.
## The Healthcare Problem (And Why It’s Everyone’s Problem)
I’ve spent years working in healthcare technology. In healthcare, the stakes of content authenticity are life and death.
Consider the content healthcare organizations handle every day: clinical notes, patient communications, research data. All of it is sensitive, regulated, and impossible to ship to a third-party cloud for analysis.
These aren’t edge cases. These are the majority of enterprise use cases.
Yet every commercial AI detection tool operates the same way: upload your content to our cloud, and we’ll tell you if it’s AI-generated.
That model is fundamentally broken for anyone who actually handles sensitive information—which is most organizations that matter.
HumanMark is self-hosted. Your data never leaves your infrastructure. You can run it air-gapped, offline, on-premise. No external API calls. No data leakage. No compliance nightmares.
This isn’t a feature. It’s the architecture.
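The offline claim is easy to demonstrate in miniature. Below is a hypothetical sketch, not HumanMark’s actual code or schema (the manifest format and function names are invented for illustration): a content check that runs entirely against local data with the standard library, so it works the same way air-gapped as it does online.

```python
# Sketch of fully offline verification: content plus a locally stored
# manifest of known hashes. No network calls, no external API.
import hashlib
import json

def fingerprint(content: bytes) -> str:
    """SHA-256 digest of the content, hex-encoded."""
    return hashlib.sha256(content).hexdigest()

def verify_offline(content: bytes, manifest_json: str) -> bool:
    """Check content against a local manifest; nothing leaves the box."""
    manifest = json.loads(manifest_json)
    return fingerprint(content) == manifest.get("sha256")

doc = b"Draft written by a human author."
manifest = json.dumps({"sha256": fingerprint(doc)})

print(verify_offline(doc, manifest))               # True: untouched
print(verify_offline(doc + b" edited", manifest))  # False: altered
```

The design point is the imports: hashing and comparison need nothing beyond the local machine, which is why a verification tool can run on-premise without any data leakage by construction.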
## Why Open Source Matters Here
I made HumanMark open source (MIT license) for a specific reason.
Content authenticity verification is going to become critical infrastructure. It’s going to be embedded in publishing workflows, HR processes, legal discovery, academic submissions, journalism, government communications.
Infrastructure this important shouldn’t be controlled by a single company. It shouldn’t be a black box. It shouldn’t be something you rent.
The four pillars of trust infrastructure:
- Transparent: You can see how it works
- Auditable: You can verify it’s working correctly
- Controllable: You own the deployment
- Improvable: The community can make it better
Open source is the only model that delivers all four.
## What I Actually Think Happens Next
Here’s my bet on the next decade:
| Timeframe | Phase | What Happens |
|---|---|---|
| 2025-2026 | Detection Arms Race | AI detection tools proliferate. AI generation tools evolve to evade detection. This is a losing game if you’re trying to “catch” AI. The better strategy is building provenance and verification into content creation from the start. |
| 2027-2028 | Authenticity Standards Era | Major platforms, publishers, and institutions adopt content authenticity standards. “Human-verified” becomes a label like “organic” or “fair trade”—a signal that commands premium pricing and higher trust. |
| 2029-2030 | Human Premium Economy | A mature ecosystem emerges where human creativity is explicitly valued and compensated at premium rates. AI handles scale; humans handle soul. The best work combines both, with clear attribution. |
| 2030+ | The New Normal | Future generations grow up with content authenticity verification as default. “Can you prove a human made this?” becomes as natural a question as “is this website secure?” |
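The “provenance from the start” strategy in the first row can be sketched concretely: attach a signed manifest at the moment of creation, then verify it later without ever re-analyzing the content. In this sketch the HMAC with a local secret is a stand-in for the public-key signatures that real provenance standards such as C2PA use, and every name and field here is invented for illustration.

```python
# Sketch of provenance attached at creation time and checked later.
import hashlib
import hmac
import json
import time

CREATOR_KEY = b"local-secret-held-by-the-author"  # hypothetical key

def attach_provenance(content: bytes) -> str:
    """Build a signed manifest when the content is created."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "created": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return json.dumps(record)

def check_provenance(content: bytes, manifest_json: str) -> bool:
    """Re-derive the signature and confirm the content hash matches."""
    record = json.loads(manifest_json)
    sig = record.pop("sig", "")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and record["sha256"] == hashlib.sha256(content).hexdigest())

article = b"An essay a human actually wrote."
manifest = attach_provenance(article)

print(check_provenance(article, manifest))          # True
print(check_provenance(b"swapped text", manifest))  # False
```

This is why provenance beats detection in an arms race: the verifier never guesses whether content “looks” AI-generated, it only checks a signature that either validates or does not.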
## The Uncomfortable Question
Here’s what I wrestle with, and I think you should too:
If AI can write code that works, generate content that converts, produce art that moves people—what’s left that’s uniquely human?
I don’t think the answer is “nothing.” I think the answer is “everything that matters.”
What remains uniquely human:
- AI can generate a technically proficient painting. It cannot have the experience of loss that informed it.
- AI can write a perfectly structured argument. It cannot have the conviction that makes it worth making.
- AI can produce a statistically optimal marketing campaign. It cannot have the taste to know when statistics miss the point.
The things that remain uniquely human aren’t technical capabilities. They’re the reasons for creating in the first place.
Purpose. Experience. Conviction. Taste. Soul.
These can’t be verified by algorithms. But they can be preserved by creating the infrastructure to distinguish authentic human expression from synthetic output.
That’s what HumanMark is for.
## A Personal Note
Twenty years ago, I wrote a book on Drupal development. It took me months of focused work—late nights, dead ends, moments of clarity.
Today, AI could generate something similar in hours.
But here’s what AI couldn’t have done: sit in that specific chair, at that specific time in my life, wrestling with those specific problems, making those specific decisions about what to include and what to leave out.
The book wasn’t just information. It was a record of a human mind engaging with a problem. That’s what made it valuable to readers who were looking for guidance from someone who had been where they were.
AI will make information abundant. But information was never really what we were seeking.
We were seeking understanding from someone who understands.
That remains scarce. That will become more scarce. And the tools to verify and protect that scarcity are among the most important things we can build right now.
## The Invitation
HumanMark is live, open source, and free forever.
If you believe—like I do—that the ability to verify human authenticity is going to be critical infrastructure for the next era, I’d love your help building it.
Star it. Fork it. Break it. Improve it. Tell me where I’m wrong.
The future isn’t AI versus humans. The future is figuring out how they work together—with clear roles, clear verification, and clear value for what each contributes.
Let’s build that future.
⭐ Star HumanMark on GitHub 💬 Join the Discussion
If this resonated, share it. Not because I need engagement, but because I think this conversation matters. We have a narrow window to build the right infrastructure for an AI-abundant world. Let’s not waste it.
