Jon Ashcroft is a Managing Partner at Signium in the GCC, bringing over 25 years of recruitment industry experience to the role. Jon began his career in the UK before relocating to the Middle East in 2008. Prior to joining Signium, he led the Executi...
After years of experimenting with generative AI, organizations now face something far more transformative. What happens when AI is given the capability and permission to make decisions and take action?
In the past decade, organizations have embraced generative AI tools such as chatbots, summarization engines, and code assistants, and have started to see productivity gains from them. Now, a new class of systems is emerging that demands fresh thinking from leaders: autonomous AI agents. These are systems that do more than simply respond to a prompt. They plan, act, check, and iterate, managing workflows and even making decisions as they are entrusted with goal-oriented work.
For executives, the implication is profound: this isn’t just about adopting a new tool but redesigning how work is organized, how roles are defined, and how governance is structured. This is especially relevant in regions such as the Middle East, where regulation and data-residency concerns add further complexity.
Jon Ashcroft, Managing Partner at Signium in Dubai, comments:
“The question shifted from ‘if’ to ‘how’ long ago. With autonomous AI agents now being entrusted with goal-oriented work and real decisions, we’re looking at an always-on future, where certain processes simply never sleep. The opportunity for growth is staggering, and organizations must ask what strategies are needed to enable their people to make the best of this change, rather than be disrupted by it.”
Generative AI refers to tools that produce output in response to a prompt. They are reactive: you ask, they deliver. AI agents differ in that they are tailored for purpose-driven workflows. Once given a goal, they set tasks, invoke tools or systems, monitor outcomes, adjust actions, and steer toward completion.
Why the distinction matters
While a generative tool supports a human, an AI agent may act on behalf of a human or system. That difference means agentic AI changes the dynamic of oversight, accountability, and role design.
When using autonomous agents, organizations need to plan where humans check in, what triggers an escalation, and how actions are logged. These controls must be part of the workflow strategy from the start, not added later.
A human is still responsible for workflow results, even if an agent completes the task. Leaders need to be clear about who reviews the output, who approves decisions, and who steps in when something goes wrong.
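One way to make these controls concrete is a thin wrapper around every agent action that logs the decision and hands low-confidence cases back to a named human reviewer. The following Python sketch is purely illustrative; the function names, confidence threshold, and log format are assumptions, not an established standard.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_audit")

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off for autonomous action

def run_agent_action(task, agent_decision, reviewer):
    """Log every agent decision and escalate low-confidence ones to a human."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task": task,
        "decision": agent_decision["decision"],
        "confidence": agent_decision["confidence"],
        "responsible_human": reviewer,  # a named person stays accountable
    }
    if agent_decision["confidence"] < CONFIDENCE_THRESHOLD:
        record["status"] = "escalated"      # hand the task back to the reviewer
    else:
        record["status"] = "auto_approved"  # agent may proceed on its own
    log.info(json.dumps(record))            # every action leaves an audit trail
    return record

result = run_agent_action(
    task="invoice approval",
    agent_decision={"decision": "approve", "confidence": 0.72},
    reviewer="ops.lead@example.com",
)
```

The point of the sketch is that escalation rules, logging, and a named accountable human are wired into the workflow itself, not bolted on afterwards.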
As agents take on routine tasks, many human roles shift toward monitoring, guiding, and improving their work. Most existing job descriptions don’t include these responsibilities, so they will need updates or entirely new versions.
Deloitte predicts that 1 in 4 companies currently using generative AI will have launched agentic AI pilots in 2025, with adoption expected to reach 50% by 2027. The study goes on to emphasize that these technologies are not only reengineering jobs and business processes but also reshaping workforce planning itself, from static annual headcounts to real-time orchestration of human and machine talent.
“Leaders must recognize that agentic systems blur boundaries,” says Ashcroft. “Boundaries between human and machine, between decision-making and execution, and between roles and responsibilities. Making this work means setting clear expectations about how people and AI share tasks and decisions.”
AI expert Thomas H. Davenport writes, “Most AI success stories come from redesigning workflows, not replacing workers.” As agentic AI becomes more capable, the real task is integrating it in ways that support human workers rather than compete with them. Looking ahead, several shifts will reshape how companies organize talent.
Autonomous agents introduce work that simply did not exist before. This means organizations will need brand-new roles to manage how humans and AI collaborate. These roles focus on training, supervising, and guiding agent behavior.
Agent trainers will refine an agent’s instructions, correct its mistakes, and help it learn how to perform tasks more accurately.
Validation analysts are specialists who review agent output, spot patterns of failure, and ensure quality and accuracy.
These strategists map how humans and multiple agents work together, including handoff points, escalation rules, and task sequences.
Organizations will also need to appoint people who decide when an agent must escalate a task to a human, ensuring that safety, compliance, and sound judgment remain in place.
Alongside the creation of new jobs, many longstanding roles are shifting as agents take on routine tasks.
Operations teams can expect less manual processing, more oversight of automated work, and more time handling exceptions that agents can’t resolve.
Instead of building reports, analytics teams will spend more time auditing agent output, checking data quality, and investigating inconsistencies.
Compliance teams will need to decide when and how agents escalate decisions to humans, and ensure that automated processes meet regulatory requirements.
IT teams will shift from purely technical setup to monitoring agent behavior, managing integrations, and ensuring secure access to systems and APIs.
“When agentic AI is used in the right way, everything in the organization should start to feel easier,” says Ashcroft. “Routine work is lighter, the information people rely on becomes clearer, and teams can focus on the things that actually need their judgment. Instead of getting stuck in small problems, they have more space to move the business forward.”
Even when roles remain the same, agentic AI may shift the expectations of those roles. Most legacy job descriptions no longer reflect the realities of working with autonomous agents.
Future job descriptions will likely need to spell out responsibilities such as monitoring agent output, handling escalations, and improving agent performance.
Beyond individual roles, autonomous agents also reshape how teams are organized. Companies will begin forming cross-functional pods where operations, technology, compliance, and business owners work together with shared responsibility.
Hybrid human-agent teams, for example, will become increasingly common: the agent handles routine tasks while humans handle judgment-based work and creative innovation.
New accountability systems will be important. These will map out agent tasks, human responsibilities, and how every action is logged and reviewed.
Since agents can work across many steps in a process, siloed task management no longer works. People will need to share ownership to keep the whole workflow running smoothly.
Ashcroft comments: “For all the fear-mongering we hear around AI in the workplace, there’s a lot of potential that organizations can’t ignore. Yes, it streamlines work and increases data accuracy. But even beyond this, it can be used to champion accountability and remove barriers between departments. AI could pull people closer together, enabling them to work toward shared goals much, much faster.”
As autonomous agents are given more responsibility, governance becomes the backbone of reliable adoption. Leaders must think beyond productivity gains and put proper safeguards in place to help human teams use agentic AI safely.
Autonomous agents should never run without human oversight. Organizations need clear rules about when an agent can act on its own, when it must hand a task back to a human, and who is responsible for the final decision.
Every action an agent takes should be documented. Clear records of tasks, decisions, and data help teams understand what happened, fix issues quickly, and meet compliance needs.
Agents can develop blind spots or repeat patterns that aren’t fair. Regular checks, testing, and reviews are needed to ensure the system remains accurate, balanced, and aligned with company values.
Most agents depend on outside tools or models, so organizations need to know how those partners handle data, manage security, and maintain their systems. Strong vendor practices help protect sensitive information and reduce risk.
Different regions have different rules governing the use of AI and data. Companies must ensure their agent workflows comply with local requirements, especially in regions with strict data-residency or cloud rules, such as the Middle East.
“It all comes down to the human touch,” says Ashcroft. “No matter how advanced the agent, success depends on the people who guide it. Having the right talent in place is what turns these systems into real value.”
Rolling out agentic AI isn’t something that should happen all at once. It works best when approached carefully, in stages, and with intentional objectives and milestones in mind.
Start small
When decision-makers introduce agentic workflows, it’s best to start in low-risk back-office areas like contract triage, invoice processing, knowledge retrieval, or routine support. These early trials create space to learn, refine, and fix issues before using agents in higher-risk settings, such as customer interactions or regulatory decisions.
Establish key performance indicators (KPIs)
Monitoring how agents perform is essential to making sure the investment is worthwhile and that the system delivers what the organization expected. Setting KPIs upfront helps leaders spot early signs of success, identify problems quickly, and take corrective action when needed.
Define maturity gates before scaling
To avoid scaling AI systems too quickly, leaders need clear criteria that show when an agent is ready for broader use. Each system should meet a minimum level of maturity before it expands. That includes consistent accuracy, reliable escalation patterns, clear records of what happens when it fails, and strong data and vendor safeguards. When these basics are in place, scaling can begin.
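These readiness criteria can be expressed as an explicit checklist that a pilot must pass before expansion. The gate names and thresholds in this Python sketch are assumptions chosen for illustration, not an industry benchmark.

```python
# Illustrative maturity gates for scaling an agent pilot; the metric
# names and minimum thresholds below are assumed, not a standard.
GATES = {
    "accuracy": 0.95,             # consistent output accuracy
    "escalation_success": 0.99,   # escalations reliably reach a human
    "incidents_documented": 1.0,  # every failure has a written record
    "vendor_review_passed": 1.0,  # data and vendor safeguards in place
}

def ready_to_scale(metrics):
    """Return the gates a pilot still fails; an empty list means go."""
    return [gate for gate, floor in GATES.items()
            if metrics.get(gate, 0.0) < floor]

pilot_metrics = {
    "accuracy": 0.97,
    "escalation_success": 0.99,
    "incidents_documented": 1.0,
    "vendor_review_passed": 0.0,  # vendor review still outstanding
}
blockers = ready_to_scale(pilot_metrics)  # ["vendor_review_passed"]
```

Making the gates explicit in this way gives leaders an objective answer to "is this agent ready for broader use?" rather than scaling on momentum alone.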
Scale responsibly
As maturity increases, organizations can move to multi-agent orchestration. This is when agents work with other agents or alongside human teams, supporting more complex, cross-department workflows. “Every step up in complexity needs its own safeguards,” Ashcroft reminds us. “Each stage of scaling should bring new risk checks, stronger oversight, and clear governance to keep human and agent roles aligned.”
A powerful example of what is possible in agentic automation is JPMorgan Chase’s internal platform known as COIN (Contract Intelligence). The financial institution reported that the system saved an estimated 360,000 lawyer-hours each year by automating the review of approximately 12,000 commercial loan agreements.
COIN uses machine learning and natural language processing to pull key details from contracts, convert unstructured documents into clean, structured data, and significantly reduce the risk of manual error.
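COIN itself is proprietary, but the underlying pattern, unstructured contract text in, structured fields out, can be illustrated with a toy sketch. The example below uses simple pattern matching as a stand-in for the machine-learning and NLP models JPMorgan describes; the field names and clause formats are invented for illustration.

```python
import re

def extract_loan_terms(contract_text):
    """Pull a few key fields from free-text loan language into structured data.
    A pattern-matching stand-in for the NLP models a system like COIN uses."""
    amount = re.search(r"principal amount of \$([\d,]+)", contract_text)
    rate = re.search(r"interest rate of ([\d.]+)%", contract_text)
    borrower = re.search(r'between (.+?) \(the "Borrower"\)', contract_text)
    return {
        "borrower": borrower.group(1) if borrower else None,
        "principal_usd": int(amount.group(1).replace(",", "")) if amount else None,
        "interest_rate_pct": float(rate.group(1)) if rate else None,
    }

sample = (
    'This agreement is made between Acme Industries (the "Borrower") and the '
    "Lender for a principal amount of $2,500,000 at an interest rate of "
    "4.25% per annum."
)
terms = extract_loan_terms(sample)
# terms: {"borrower": "Acme Industries", "principal_usd": 2500000,
#         "interest_rate_pct": 4.25}
```

Even this toy version shows why the approach cuts manual error: once a clause type is handled, every matching contract is processed identically, with no fatigue or oversight lapses.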
From a talent perspective, the COIN story emphasizes several lessons about where human time and expertise are best deployed.
“JPMorgan Chase put AI agency to use in some of the best ways possible,” notes Ashcroft. “It removed a huge administrative burden and freed up human hours and skills for more advanced work. Systems like this aren’t built over the weekend. They take training, iteration, and ongoing testing, but the result is unquestionably worth it.”
Today’s leaders are guiding their organizations through one of the most exciting (and demanding) periods of change in decades. Agentic AI offers extraordinary potential, but it also asks leaders to rethink roles, workflows, and accountability in ways that few eras have required before.
“Adopting new tools is only the first step,” says Ashcroft. “The real work is in reshaping how people and intelligent systems work together. In a very short amount of time, agentic AI changes the way we’ve been doing things for decades. It’s disruptive technology – a massive turning point – but it’s also undeniably humanity’s next leap forward.”
A thoughtful approach means having a clear human strategy, where the right people are involved with the skills and systems to support them. With these foundations in place, organizations can guide agentic AI in ways that strengthen confidence, performance, and trust.