This Australia Day weekend my family camped on the South Coast of New South Wales with close friends. The bush was indecently alive: a lace monitor patrolled the campsite for barbecued offcuts like a small dinosaur; a mother emu and her chick wandered the beach, gulping down saltbush.

In the cool glade of the eucalypt forest we spied an echidna, shuffling between the cycads and the spotted gums in search of bull ants. Eastern greys and swamp wallabies luxuriated on the camp grass, manicured by their chewing to the length of a freshly snipped fingernail.

One morning we found a seal on the sand, its bleach-white ribs arched like the Sydney Opera House. It had been chunked apart hours earlier by a shark large enough to treat it as an hors d’oeuvre. Nature was everywhere, in thrilling abundance, and still running on the old rules. Something was always eating something else. The seal didn’t see the shark coming. Most incumbents don’t either.

Around the fire that night, we talked about who might be next. Over a second helping of Arabic chicken, the conversation turned to AI, and what it meant for government, policy, and work.

We have machines that can sift the written record of human knowledge. Too often, we use them to write politer emails. The gap between what most people think AI can do and its actual power is vast. That gap is an arbitrage window that won’t stay open long.

Only a month ago, AI was a brilliant colleague tied to a chair. It could read and write. But it could not act.

That changed when two capabilities arrived together with Claude Cowork. The first is computer use: the AI can see your screen, move a cursor, and type. The second is MCP, the Model Context Protocol, a standard that connects AI directly to files, calendars, and databases. Think of it as USB-C for intelligence.

Consider the old way. You ask a chatbot for packaging suppliers in your city. It returns a list. Then you do all the work: clicking links, hunting contact pages, copying emails, writing messages, sending them. Thirty minutes. The AI contributed a sliver.

Now consider the new. You tell an agent to find five companies, get their emails, send your specifications, and ask for a quote. It searches. It hits dead ends and navigates around them. It drafts, sends, and logs. Thirty minutes becomes two. You do something else while it works on your behalf.

That is the shift in microcosm: output decoupled from time.

The common fear is that AI replaces workers whole. But full replacement keeps stalling because humans carry context machines cannot see. There is another path, less dramatic and far more likely. Instead of automating 100 percent of one job, you automate the drudgery in a hundred jobs at once. The data enters itself. The form fills itself. The follow-up sends itself. You stop being the person who types and become the person who directs: an orchestrator of a tireless workforce all your own.

This is not science fiction. TELUS has run the experiment: 70,000 employees, half a million hours of tedium vaporised. No mass layoffs followed. Tens of thousands of workers discovered how much of their lives had been spent moving information from one box to another.

But the strangest feature of these systems is not the time they save. It is that they improve when they break. The AI community has borrowed a word from metallurgy to describe this phenomenon: self-annealing.

In metalwork, annealing means heating metal to relax its internal structure, then cooling it slowly. The stresses are relieved. The material emerges tougher and more workable than before. The swordsmiths of medieval Damascus understood this, even if they could not explain it. Crusaders returned to Europe with tales of swords that held an edge through battle after battle. Their blades could slice a falling silk scarf.

For two hundred years, metallurgists tried to reverse-engineer the secret and failed. Only in 1998 did researchers crack it: trace vanadium in the Indian ore, present at three-thousandths of one percent, had made the steel possible. When the mines changed, the magic stopped. The smiths never knew why it worked, and so had no way of bringing it back.

Agentic systems solve the problem the swordsmiths never could. They document their own learning.

Think of two employees. Employee A completes a task and moves on. Employee B completes the same task, notes what went wrong, and leaves instructions for next time. After a year, Employee A is still Employee A. Employee B has become a department. These agents are Employee B. Each failure leaves instructions for the next attempt.

The learning compounds on both sides. The agent learns your preferences, your edge cases, your patterns. You learn what to delegate, how to phrase instructions, when to intervene. After six months, your workflow is unrecognisable to someone starting fresh.

The temptation is despair. If machines do more, what is left for us?

The answer has always been the same. Machines cannot care. They cannot take responsibility. They cannot persuade a room full of sceptics or rebuild trust after failure. They produce outputs. Only humans can be accountable for them.

The skills that matter now are the ones we have always undervalued: asking the right questions, making decisions under uncertainty, explaining why something is true in a way that changes minds. These were buried under busywork. AI strips it away and leaves only what matters.

As the fire burned low, we kept circling the same thought. The window was open. Soon the tools would be easy. The advantage would belong to those who learned while they were still hard.

We’d love to hear your thoughts – email luke@bwdstrategic.com or message him on LinkedIn if you’d like to continue the conversation.

About the Author

Luke Heilbuth is CEO of sustainability strategy consultancy BWD Strategic, and a former Australian diplomat.