
AI scales data; humans scale trust

Mon, 22nd Sep 2025

There's a growing tension in professional services. On the one hand: acceleration, automation and augmentation. On the other: a creeping question - if machines can do more and more, do humans matter less and less?

It's easy to feel the heat of this question in law, consulting and finance (and beyond), especially when clients (and their CFOs) are asking: "If AI is doing all the legwork, why am I paying the same fees?"

But here's the truth of the matter - the arrival of general-purpose AI doesn't necessarily flatten the value of people. In fact, the very opposite may be true: we could be entering an era where human input becomes even more - not less - valuable.

An insightful post on Substack by Peter Evans-Greenwood - an incredible mind and valued collaborator - noted that today's AI systems (particularly large language models) are not so much 'artificial intelligence' as sophisticated tools for modelling and manipulating the existing structure of human language. As they scale, their abilities appear to 'emerge' not because they become smarter but because they gain access to progressively higher-order linguistic structures, moving from words to arguments and narratives. Ultimately, LLMs are responsive indexes of human language, expanding how we manipulate meaning and raising urgent questions about governance, trust and the future of knowledge.

This distinction is crucial for understanding both the opportunities and limitations of AI in professional services.

Let's explore a few uncommon truths not many companies are talking about:

Truth 1: AI sets the baseline; humans set the standard   

If the last 18 months have shown us anything, it's this: AI is great at producing a plausible something by reconstructing patterns from vast linguistic data - a tidy draft, a summary of thousands of papers, even a logical list. But the closer the work gets to uncertainty, nuance and risk, the clearer it becomes where human insight shines. And as more services become AI-assisted by default, a new pricing model could emerge. AI's ability to generate seemingly novel content is bounded by the patterns it has learned. As Peter observes, it is adept at psychological creativity - offering fresh combinations within existing frameworks - but genuine innovation and assurance remain the domain of human professionals.

Think of it like this - you order food from a restaurant. You could get the automated meal-prep version: quick and cheap. You could get the same meal cooked by a chef and plated with attention. Or you could have the chef customise it to your dietary needs, table-side, based on an ongoing conversation.

The legal equivalent: AI-only. AI + human review. Fully bespoke.

Human expertise, oversight and judgement add assurance and nuance. But importantly, they add choice - and when you offer choice, you create a new value ladder. Clients can decide what's worth paying for, and providers can price accordingly.

For legal and risk advisory, this opens brand new doors. AI may well lower the baseline cost of service delivery - but it also makes it easier to prove the delta between machine output and human expertise. Not all legal and risk advice needs to be wrapped in formal opinions, but some of it really does - which means firms that can communicate where the value lies, rather than trying to compete on cost, will thrive in the new AI-normal.

Truth 2: Platforms scale the commodity; people scale the trust

There's been a quiet but definite shift in how global clients source advice. Take banking compliance across multiple jurisdictions. In the past, these projects were survey-based: time-consuming, complex and costly. They have since evolved into centralised platforms - some offered by law firms, some by Big Four-style players. Today, AI is starting to automate parts of these entirely, and it's not uncommon for clients to want to pay 'peanuts' for different 'modules' of this information.

And yet: when a client needs to sign off on their exposure, they still want someone knowledgeable at the end of the phone - someone who has lived the local regulator's interpretation; someone who knows whether a red flag is real or just red tape. Whilst AI can manipulate language with remarkable fluency, it cannot replicate the embedded trust and contextual understanding that clients seek when the stakes are high. Human expertise, therefore, becomes not just a differentiator but a premium offering.

But those human assurances don't scale like software. They're harder to standardise and harder to mass-produce. This makes them, paradoxically, more valuable. Just like financial instruments with embedded risk, services with embedded human insight may trade at a premium because of their assurance, not in spite of it.

AI can write; it can model; it can analyse - but it cannot underwrite trust, and that's where people come in.

Truth 3: The future model might be AI-first, but human-premium

Here's a thought experiment: what happens when AI gets so good at what it does that it replaces 80% of a firm's volume work? The often-cited answer is that the firm doesn't need as many people - revenue dips, margins rise, clients pay less per task. Firms lean on subscriptions and software-like productisation.

But what if the true opportunity isn't really about leaning into AI, but instead leaning around it? What if the winning model isn't AI-only, but rather one that helps clients understand where and why humans still matter? Rather than viewing AI as a replacement, we should see it as what Peter colourfully describes as a 'cognitive prosthetic', amplifying our ability to process and communicate complex information. The real opportunity lies in combining AI's scale with human discernment, creating advisory tiers that transparently show where machine output ends and human assurance begins.

That might look like advisory tiers that offer both efficiency and assurance, or client dashboards showing which parts of a service were AI-generated and which were human-verified. Either way, it means trust layers built in, not tacked on - because regulatory and reputational risk demands it.

In short: there's a future in which we stop asking "How much can AI do?" and start asking "How much is human confidence worth?" 

For legal, risk and other professional services - this might just be the renaissance, not the reckoning…

Every leap forward in tech has created new roles, not just replaced old ones, and the rise of AI will be no different. It might shrink the volume of work humans need to do, but it could also raise the stakes (and the value) of what's left.

In that light, AI isn't erasing the human layer - it's exposing it. For the firms that understand this, there's never been a better time to charge a premium for what only people can provide.
