In the words of tech entrepreneur Matt Mullenweg: “Technology is best when it brings people together.” As AI reshapes the business landscape, the challenge is not just building smarter systems, but using them to serve people better.
Despite all the buzz, most organizations are still struggling to extract real, sustainable value from artificial intelligence. Every boardroom is asking the same question: Will AI revolutionize our business, or simply distract it? As the noise grows louder, clarity has never been more important.
According to McKinsey and Deloitte, many companies are hitting roadblocks when trying to scale and realize the benefits of AI. Deloitte’s study describes a complex landscape where organizations are beginning to move beyond early-stage experimentation toward more strategic, enterprise-level adoption of generative AI. However, regulatory compliance has emerged as a major barrier to progress, with the share of organizations citing it rising from 28% to 38% in just a few months. Furthermore, 69% of organizations expect it will take more than a year to implement a comprehensive AI governance framework.
McKinsey’s research, published in January 2025 and based on interviews with global leaders and employees, indicates that almost all companies invest in AI, but just 1% believe these investments have reached maturity. The report shows that the biggest barrier to scaling is not employees, who are ready, but leaders, who aren’t steering progress fast enough.
The study also found that, while promising, measurable profit-and-loss value from AI initiatives continues to lag.
So, is AI just the latest pipe dream sold by tech evangelists, or is there genuine value waiting to be unlocked? If so, how can companies get there? In short, how can organizations achieve a faster and better RoAI – Return on AI Investment?
Jim Rowan, Applied AI Leader at Deloitte, notes: “Amid the promise of AI agents and the evolution of foundational models, future-thinking organizations are as bullish as ever in building bridges to ROI, all while understanding the need for nuance – and patience – as we embrace this next wave of GenAI. Anticipation is high, and now is the time for leaders to take the long view of their GenAI investments, with a focus on governance, collaboration, and continued iteration as key accelerators in the race for sustainable value.”
For all the excitement around AI, few organizations are seeing consistent success. Michael de Kare-Silver, Managing Partner at Signium UK, comments, “There doesn’t seem to be a single company that has discovered the silver bullet for achieving significant ROI out of AI. There’s a lot of investment, a flurry of activity and learning, and plenty of use cases being explored. Yet still, very few efforts are genuinely resulting in measurable returns.”
The collective research points to a handful of recurring obstacles that continue to get in the way.
1. Multiple random tests and pilots
Most companies have been allowing their staff to experiment with AI. In many cases, this means using tools like Microsoft Copilot to summarize a Teams or Zoom meeting and send out notes, or to generate a first draft of a document. However, these efforts are scattered – each team, department, or region is trialling different tools and use cases. It’s random, opportunistic, and individual, with vastly different levels of adoption and effectiveness.
2. No unified strategy or approach
In this “random” environment, there’s a clear lack of unified thinking. There’s no overarching plan, no guiding view on who should use AI and in what context, and no rules about when it’s appropriate or when it’s not. As a result, the opportunity to incorporate tried-and-tested AI tools into routine processes across an organization is lost.
3. Fear of the unknown
AI is still new. It’s uncertain. It’s not yet proven that it can drive sustainable revenue or profit, so the prevailing attitude in many companies is: let others go first.
“No leader wants to risk throwing money in the water,” says de Kare-Silver. “They prefer calculated risks, but in the world of AI, where so much is unknown, calculations are estimates, at best. Leaders want to see proven use cases, and only then can they be persuaded to try it themselves.”
4. Poor data quality
To really take advantage of an AI application, it needs to be able to leverage the company’s data. That, after all, is the bedrock of AI effectiveness. However, in many companies, data quality is poor. There’s often no uniform, up-to-date, well-maintained database, and without clean data as a foundation, it becomes impossible to draw sensible insights from AI analysis. Dirty data can lead to incorrect outputs, misinformation, and AI hallucinations.
5. Absence of experienced AI leadership
In most companies, responsibility for AI has simply been added to the job portfolio of an already-busy CTO or Head of Data. De Kare-Silver advises against this, saying, “A company’s CTO or Head of Data is likely already at their limit. They’re not going to be able to find the time to explore what the true advantages of AI could be. Testing AI and aligning its application with an organization’s goals, as well as industry regulations, is meticulous and time-consuming work. For a busy CTO, that’s simply not a priority.”
De Kare-Silver applauds organizations like BT, Schneider Electric, and ING, that have appointed senior, dedicated AI champions: “Considering the potential impact of AI – both good and bad – having an executive role dedicated to driving it is surely what’s needed to have any real shot at delivering RoAI.”
At the heart of the AI conversation lies a critical issue: how to move down this path without fundamentally undermining our people. If AI is to deliver the much-vaunted value it promises, then a significant part of that will presumably come through automation, process change, and the replacement of people with bots.
As we begin to explore that path, what will the impact be on our people? What are the consequences of fuelling further fear in the workforce, as the spectre of job cuts and reduced staffing looms larger? Do we even want to replace people with technology? After all, aren’t our employees the very core of what gives a business meaning?
Richard Branson’s well-known mantra comes to mind:
Happy employees = happy customers = market success
De Kare-Silver urges businesses to approach the human cost thoughtfully: “Branson’s business equation has proven itself, time and again. Do we really want to replace ‘happy employees’ with ‘efficient bots’? When AI and bots become the face of a business, how do we preserve the human connection consumers are looking for?”
Many leaders are becoming acutely aware of this dilemma, and it’s fast becoming one of the defining challenges beneath the AI hype. More organizations are looking for real evidence that replacing people with AI tools will truly deliver tangible, measurable business benefits.
In Deloitte’s study, only 16% of executives said they had been able to present a clear report to their CFO demonstrating value creation. Most admitted they struggled to define and measure the real impact of their AI efforts.
For many executives, the question now is how to turn early exploration into real impact. While the path forward isn’t without obstacles, emerging research and early success stories suggest there are six critical steps that can help organizations shift from scattered pilots to scalable, strategic outcomes:
1. Appoint a dedicated AI Champion
Many of the organizations claiming AI success have made a key appointment: a Chief AI Officer (CAIO), or a similarly senior figure tasked with leading the AI agenda. This doesn’t have to be a C-suite role, but it does need to be someone with enough experience, influence, and bandwidth to drive progress.
Depending on where the company is on its AI journey, this could be a strategy lead focused on shaping the agenda, defining milestones, and identifying the resources required, or a data and analytics expert capable of guiding a team of data scientists toward a desired outcome.
2. Manage key stakeholders to secure alignment
Whoever takes the lead must have strong stakeholder management skills. AI investment is often perceived as unfamiliar or secondary to other strategic priorities, which makes it difficult to shift from scattered pilot projects and random testing to a coordinated, company-wide approach. That shift requires buy-in from across the organization, especially from key stakeholders such as department heads, functional leads, regional teams, and C-suite decision-makers.
3. Set a clear vision and business case
One of the recurring challenges in AI adoption is demonstrating return on investment. A dedicated AI leader must be able to frame a compelling business case: what’s the potential upside? What’s it worth? Which priorities come first, and what kind of investment in people, software, and data will be needed to capture a return on AI?
4. Show a path to manage the people, workforce, and cultural impact
AI transformation is more than just a technical matter – it’s cultural. To preserve trust and momentum through what may feel like yet another major digital shift, companies must actively manage the impact on people. First, leaders must determine if the business is ready for the disruption, and then plan the steps needed to support teams through the change and maintain motivation along the way.
5. Encourage a culture of innovation
To tap into AI’s full potential, companies must enable people to experiment – safely, of course. This means allowing people to fail without repercussions – and hopefully succeed too! – within the guardrails of a clear framework. Innovation can’t thrive under fear of failure or excessive constraint.
6. Get the data
AI is only as good as the data it draws on. That’s why organizations must invest in improving data quality, making it as complete, clean, and consistent as possible. The data may never be perfect (which is why guardrails matter), but the best possible data foundation needs to be part of a company’s AI adoption strategy and RoAI plan.
To date, many of the most widely cited AI success stories have not been rooted in true generative AI, but rather in process automation – smarter, more sophisticated software tools that help organizations work more efficiently. One of the most frequently mentioned examples involves the use of chatbots in call centres, aimed at improving customer engagement. But is this really something new? The first AI chatbot, ELIZA, was created back in 1966 by MIT professor Joseph Weizenbaum. Today’s versions may be faster and more refined, but can we truly call this a groundbreaking innovation of the 2020s?
Another often-quoted use case centres around document management and automation. This ranges from summarizing meetings to generating job specs, performance appraisals, CVs, job ads, advertising copy, legal precedents, and draft contracts. These tools are praised for boosting productivity, cutting turnaround times, and lightening admin loads. However, even with these benefits, they don’t always translate into significant cost savings or measurable revenue gains.
The legal profession offers a particularly telling example. The AI tool Harvey is now being used by more than 15,000 law firms worldwide to generate contracts, legal documents, research, and even opinions, essentially replicating much of what a junior lawyer might do. In theory, this should reduce the need for entry-level legal staff and bring down costs. Yet in practice, demand for junior lawyers has continued to rise, alongside a sharp increase in their salaries and compensation. Although the AI tool adds value, the anticipated cost savings simply haven’t materialized.
“What’s becoming clear is that while the hype around AI shows enormous potential, the benefits remain difficult to realize,” says de Kare-Silver. “Most AI applications we’re seeing aren’t reinventing anything – they’re streamlining what already exists. Can we really call this innovation? Although a path to value is beginning to emerge, few companies have followed through on the critical steps needed for meaningful adoption. In the near term, it’s likely only a small group will be able to demonstrate a real return on their AI investment.”
For companies serious about scaling AI, ethical governance is essential.
One company that has taken this responsibility seriously is Salesforce. Rather than treating ethics as a compliance box-tick, Salesforce has embedded it into the core of its AI development and deployment strategy.
Through its Office of Ethical and Humane Use of Technology, established in 2018, the company works across product, legal, and policy teams to ensure that technology – particularly AI – is developed in ways that uphold human rights, public trust, and long-term accountability. This office helps guide product decisions and usage policies, and ensures that the broader social impact of technology is considered from the outset.
Salesforce’s efforts are anchored in a set of Trusted AI principles, which provide internal guidance and external transparency. These include:
AI must safeguard human rights and protect entrusted data. Salesforce collaborates with human rights experts to educate, empower, and share research with customers and partners.
Customers should have control over their own data and models. Salesforce prioritizes model clarity and clear usage terms to make AI systems understandable and accessible.
Ethical oversight is built into the process. Salesforce gathers stakeholder feedback, follows guidance from its Ethical Use Advisory Council, aligns with external policy frameworks, and maintains its own internal review structures.
AI should reflect the values of all those it affects, not just its creators. This means testing models with diverse data sets, understanding potential impacts, and fostering inclusive teams behind the technology.
AI should benefit society at large by supporting growth, accessibility, and increased employment – not replacing human potential, but enhancing it.
To further support its mission, Salesforce developed an AI Ethics Maturity Model – a framework that helps organizations assess their current practices and take progressive steps toward more responsible AI usage. The model is designed to be practical, scalable, and adaptable to different stages of AI maturity.
“What makes Salesforce’s approach notable is its proactivity,” says de Kare-Silver. “They’re not waiting for external regulation to catch up. They choose to lead, putting governance structures in place before issues arise. They show us that innovation and accountability go hand in hand.”
The promise of AI will be realized through neither hype nor hesitation – it will be earned through intent, integrity, and clear-eyed leadership. Organizations that succeed won’t be the fastest adopters, but the most deliberate: those willing to ask hard questions and address uncomfortable trade-offs. They commit to building the right foundations from the outset.
“Smart leaders don’t just deploy AI for the sake of hype. They shape it with intent. They’re guided by the company’s purpose and values, ensuring that changes are pursued meaningfully and with great care. In the end, it’s not just about what AI can do. It’s about what we choose to do with it.”
Michael de Kare-Silver, Signium UK