The result: a function increasingly caught in reactive firefighting mode, moving from crisis to crisis rather than delivering the proactive assurance that keeps organisations safe, effective, and efficient.
Unfortunately, the cracks are starting to show as IA leaders look for alternative ways to deliver more with less. Enter AI. But how does AI fit into your 2026 IA toolkit? What are some of the AI adoption trends in New Zealand? And where is the technology heading next?
The burning platform: Doing more with less
Our public sector survey late last year offered a window into pressures that will feel familiar to IA functions across all industries. In the past 12 months, 88% of respondents experienced budget cuts across their IA function. In response, 62% needed to downsize their in-house team, reducing assurance capacity, and 46% reduced their IA plan coverage, spreading it over a longer period to fit within constrained budgets and team sizes. All while 65% reported an increase in operational risks. The numbers are public-sector specific, but the story they tell is not: shrinking capacity and growing risk exposure is a challenge facing IA functions right across New Zealand.
IA functions are stuck between a rock and a hard place: delivering the assurance required takes resources and capacity that aren't available, yet deferring assurance may save funds in the short term but can have a devastating impact when control frameworks begin to break down.
More than half of the IA leaders surveyed cited competing priorities as the top challenge they faced. Limited internal resources and difficulty keeping pace with emerging risks were close behind. This means IA teams are thinning out, and the pressure to do more with less is mounting.
It is perhaps unsurprising IA leaders are looking to technology, in particular AI, to help alleviate these pressures.
AI in action: What our research tells us, and what it looks like in practice
The barrier to entry for AI tools is lower than most assume, and the impact can be significant: every hour freed from administrative work is an hour that can be redirected toward judgement-driven assurance. Our research found the proportion of teams using AI already exceeded what had been planned the year prior, an indicator that adoption is being driven by genuine need rather than hype. Looking ahead, nearly 60% of IA leaders expect to increase their use of AI in the coming year, making it the single largest area of planned innovation. The direction of travel is clear, but what does AI actually look like in an IA context? Here are four use cases where we are currently seeing clients make meaningful efficiency gains.
- Smarter audit planning. AI tools are being widely used to help refine audit plans. These tools are exceptionally good at identifying potential risks, drawing on internal data such as prior audit findings, financial results, and operational metrics to highlight where attention is needed. These tools can also look to external sources of information to provide insight about emerging risks and sector-specific issues, ensuring audit plans are responsive rather than simply repeating prior-year coverage.
- Data analytics at scale. Traditional data analytics analyse quantitative datasets to identify trends, patterns, and anomalies. AI tools have added a further dimension, enabling organisations to quickly gain insights from qualitative data: think survey responses, emails, and articles. AI models are also being used to summarise complex information at a scale and speed no manual review could match.
- Automating the low-value work. Some of the most immediate gains we’re seeing from AI come from automating time-consuming tasks that add little value: drafting workpapers and audit findings, formatting reports, summarising meeting notes, the list goes on.
- Continuous monitoring. Although we're not seeing widespread adoption yet, there is growing potential for organisations to deploy AI models to continuously monitor controls and detect threats such as fraud, cyber anomalies, and unusual transactions. Historically, these issues would only have been identified through retrospective audits, potentially long after the event occurred, and well after the damage was done. The shift from periodic to continuous assurance is one of the more transformative possibilities AI presents for the profession.
These examples demonstrate that practical applications of AI are no longer theoretical. They are available today, and many can be accessed through tools that require little or no specialised technical skill to use.
The next frontier: Agentic AI and what it means for IA
Most of the AI tools internal auditors are using today (ChatGPT, Microsoft Copilot, Claude) are reactive. You ask a question, you receive an answer. They respond to prompts rather than taking initiative. Enter agentic AI.
Agentic AI is the next frontier for artificial intelligence. Rather than waiting to be asked, agentic AI systems can take sequences of actions, engage with external data sources, and adjust their behaviour based on what they find, largely independently of their operator. The Institute of Internal Auditors (IIA) describes agentic AI as moving from a 'library' model, where you ask a question and receive information, to an 'investigative journalist' model, where the AI goes out, gathers leads, makes decisions, and returns with a synthesised picture.
The potential benefits from agentic AI could be substantial for internal audit teams going forward, for example:
- Continuous, real-time risk assessment: Agentic AI could monitor transaction flows, communications, and data in real time, removing the need for periodic risk assessments. It could autonomously adjust risk scores as conditions change, flagging emerging issues before they escalate. This is a monumental shift from point-in-time assurance to ongoing oversight.
- Automated controls testing: These tools can increasingly test 100% of transactions against control requirements, rather than testing a sample, identifying exceptions, flagging anomalies, and escalating issues for review. What was previously a major undertaking transforms into a continuous background process.
- Intelligent anomaly detection: Agentic AI can identify subtle patterns that human review would miss. Think low value transactions clustered around unusual thresholds, third-party behaviour that deviates from historical norms, or correlations across datasets that no individual auditor would have time to analyse. The fraud detection implications alone are substantial.
- AI-driven reporting and follow-up: Agentic systems can autonomously draft findings, follow up on outstanding management actions, and generate structured updates, reducing the administrative burden on IA teams and accelerating the audit cycle.
The IIA's Vision 2035 research shows internal auditors see a future where the balance of their work continues to shift toward advisory and away from traditional assurance. Agentic AI can make that shift achievable by automating the compliance work that currently dominates IA plans, freeing auditors to focus on the judgement-intensive, advisory work that creates the most value.
Looking ahead
Internal audit functions in New Zealand are at a crossroads. The pressures of the past two years (budget cuts, headcount reductions, and rising operational risk) are not going away, and neither is the expectation that IA functions will continue to deliver meaningful assurance and strategic value. AI, and the agentic capabilities on the horizon, represent a genuine opportunity to break out of the reactive cycle and fundamentally change what internal audit can deliver. The teams that engage with this technology now will be better placed to lead that shift.
But enthusiasm for AI's potential needs to be matched with an understanding of its limitations, and a framework for using it responsibly.