ERP testing is broken. Here’s how to fix it.

2i · 2nd July 2025

ERP programmes often fall short, not because the technology is flawed, but because testing is treated as a late-stage task. It's approached too narrowly, often focused on ticking boxes rather than verifying whether the system supports how people work.


In 2025, many organisations are still repeating the same mistakes. Public sector bodies and financial institutions continue to face delays, budget overruns and failed go-lives. A major contributor to this is testing that’s misaligned with real-world use. 


What’s going wrong?

A typical ERP delivery follows a familiar pattern. The system is procured, configured and then internal teams are brought in to test it. These teams are usually responsible for running daily operations and aren’t equipped to design test cases or investigate system issues under pressure.

When this happens, testing becomes reactive. It’s often rushed, incomplete and disconnected from the actual processes the system is meant to support. Problems go unnoticed until late in the project. In many cases, they aren’t found until after the system goes live.

This has led to severe disruptions. In Birmingham, an under-tested Oracle Cloud ERP rollout left the council unable to reconcile accounts or manage supplier payments. In Gloucestershire, testing delays forced the local authority to postpone its SAP migration by a full year. In financial services, incomplete testing has allowed poor data quality and broken workflows to pass into production environments, damaging customer trust and creating regulatory headaches. 

These are not rare events. They’re a sign that something fundamental in the testing approach isn’t working. 


What testing should be doing instead 

The purpose of User Acceptance Testing (UAT) is not to prove that the system works in isolation. It’s to confirm that the system supports the work your organisation needs to do. That includes financial controls, payroll, procurement, reporting and case management—each with its own rules, data flows and exceptions. 

Testing needs to follow actual processes. It should involve users who understand the work, use the data they rely on and validate whether the system supports key decisions. A script that merely confirms a screen loads isn't helpful if the output of the process isn't usable.

To do this well, testing must begin earlier. It should influence how the system is configured, not just verify its behaviour. It must also focus on outcomes, not interface elements. 


What needs to change? 

Organisations can reduce ERP delivery risk by making a few practical adjustments: 

  1. Begin testing during design and build 
    This allows issues to surface before decisions are locked in. Early input from business users helps prevent configuration errors and gaps in functionality. 

  2. Use operational data 
    Testing with realistic operational data, or production-like synthetic data where privacy rules prevent using live records, exposes integration issues, calculation errors and reporting mismatches that idealised test scripts rarely catch. 

  3. Ensure testing aligns with core tasks 
    Test scenarios should reflect the actions users take to get their work done. That means validating the full path through a process, not just whether a button works. 

  4. Protect internal teams from overload 
    Operational staff can’t carry the burden of system testing without support. Testing needs to be structured to limit disruption while still capturing the right feedback. 

  5. Make user acceptance testing meaningful 
    UAT should confirm that the system enables the business to function, not just pass a checklist. This requires careful preparation, clear objectives and realistic timelines. 
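To make the contrast concrete, here is a minimal sketch (all names and the toy procurement model are hypothetical, not any real ERP's API) of what a process-level test scenario looks like: rather than asserting that one screen or button works, it walks a purchase-to-pay flow end to end and checks the business outcome.

```python
from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    supplier: str
    amount: float
    received: bool = False
    invoiced: bool = False
    paid: bool = False

class ProcurementSystem:
    """Toy stand-in for the ERP under test (illustrative only)."""
    def __init__(self):
        self.orders = []

    def raise_po(self, supplier, amount):
        po = PurchaseOrder(supplier, amount)
        self.orders.append(po)
        return po

    def receive_goods(self, po):
        po.received = True

    def match_invoice(self, po, invoice_amount):
        # Three-way match: an invoice is accepted only when goods
        # have been received and the amount agrees with the order.
        po.invoiced = po.received and invoice_amount == po.amount

    def pay(self, po):
        po.paid = po.invoiced
        return po.paid

def test_purchase_to_pay_end_to_end():
    # Validates the full path through the process, not a single screen.
    erp = ProcurementSystem()
    po = erp.raise_po("Acme Ltd", 1200.00)
    erp.receive_goods(po)
    erp.match_invoice(po, 1200.00)
    assert erp.pay(po), "supplier should be payable after a full three-way match"

def test_unmatched_invoice_is_blocked():
    # The control matters as much as the happy path: no goods receipt
    # and a wrong amount must block payment.
    erp = ProcurementSystem()
    po = erp.raise_po("Acme Ltd", 1200.00)
    erp.match_invoice(po, 1500.00)
    assert not erp.pay(po), "payment must be blocked without a valid match"
```

A button-level script would stop at "the PO screen saved successfully"; the scenario above fails if the three-way-match control is misconfigured, which is exactly the class of defect that surfaces after go-live when testing stays at the interface level.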


Why it matters now 

ERP systems are becoming more critical. In government, they are central to efforts to standardise processes across departments. In financial services, they underpin compliance, reporting and customer operations. 

At the same time, the technology is changing. Platforms like SAP release updates semi-annually whilst Oracle Cloud releases updates quarterly. Data is shared across systems and managed across multiple locations. Minor issues scale quickly if not caught early. 

Organisations cannot afford testing to operate separately from delivery. Testing has to move with the programme, not behind it. Poor testing multiplies risk, creates a false sense of readiness and shifts the cost of fixing problems into post-launch support. 


Fixing the process 

The traditional approach of waiting until late in the programme to run a few scripted test cycles no longer works. It delays problem discovery, relies too heavily on already stretched internal teams and fails to catch the issues that matter most. 

A better approach builds testing into delivery from the start. It uses operational knowledge to shape what gets tested and when. It focuses on what people need to do, not just what the system can display. 

This shift isn’t about tools or methodologies. It’s about designing the testing process to reflect how the organisation works so that when the system goes live, it reliably supports that work from day one. 

If you're planning an ERP programme or struggling to implement the right testing approach, speak to 2i. We work with public and financial sector teams to reduce risk, improve readiness and ensure ERP systems deliver from day one with AssureERP. 


Start the conversation

Contact 2i