
Pilot Testing a New Business System

This article describes the concept for pilot testing a new business system. The approach works best in situations where the business has a hub and spoke type organization – a central “headquarters” (HQ) and several to many “field offices”, as shown in Figure 1 below. Usually, in an arrangement like this, headquarters is responsible for system acquisition and operations and maintenance. Most end users of the business system reside in the field offices where the “real work” happens. This type of organization is also characterized by periodic data exchanges from the field offices to HQ for reporting purposes. The data exchange can also be two-way between HQ and the field offices.


Figure 1 – Hub and Spoke Organization

Some examples of a hub and spoke type organization include the:

  • Federal judiciary consisting of the Administrative Office of the U.S. Courts as the “HQ” (for administrative purposes) and the 94 district courts and 13 appellate courts (I’ll use the federal judiciary as an example later in this article)
  • U.S. Federal Reserve System with the Board of Governors of the Federal Reserve System as “HQ” and the twelve regional federal reserve banks
  • Public Water System Supervision (PWSS) program managed by the U.S. Environmental Protection Agency as the “HQ” with 67 state, territorial, tribal, and EPA regional offices serving as reporting agencies

From the list above, you can see that the field offices may be part of the same organization represented by the HQ entity or separate organizations working cooperatively with HQ.

For example, in the U.S. Courts, all of the court districts are part of the same umbrella organization – the judicial branch of the U.S. Government. In the case of the Federal Reserve System, the Board of Governors is an independent federal agency in the executive branch while the twelve Reserve Banks are separate organizations within the Federal Reserve System, behaving much like a banking cartel. Some of the functions of the Federal Reserve, like banking supervision, are centrally managed by the Board with the individual Reserve Banks supervising the banks located within their regions. In this case, the Reserve Banks are the field offices and the Board of Governors is HQ.

At the somewhat extreme end of the spectrum is the EPA’s PWSS program, where the field offices, or “reporting agencies”, consist of a mix of EPA regional offices and state, territorial, and tribal government agencies. (Readers involved in the PWSS program may take exception to the term “reporting agency” – I’ll refer them to the How Jargon can Affect System Design article.) In the case of the PWSS program, EPA delegates primary enforcement responsibility to the non-EPA governmental agencies for carrying out the PWSS program at the local level. EPA HQ in Washington, DC oversees the program and develops and provides software to the field offices (reporting agencies) to assist them with managing their local PWSS programs.

While the examples given above involved the Federal government (U.S. Courts and the Federal Reserve System) and Federal-State partnership (EPA’s PWSS program), many commercial enterprises are organized as a hub and spoke. Large retailers, commercial banks and insurance companies are just a few examples of organizations following this pattern.

Business Systems Compared to Productivity Software

For the purposes of this article, let’s divide software products into two categories: general productivity and business automation. General productivity software includes email, word processing, spreadsheets, collaboration suites, and such. Most organizations manage general productivity software as infrastructure. For example, a new employee arrives at the office and, as part of the on-boarding process, receives a company-supplied laptop computer loaded with the general productivity software tools. As these tools are off the shelf products, they don’t usually involve a lot of testing before deployment to the end users. In some situations, an organization may have an early adopter program where a select set of users is exposed to new versions of the software before it’s deployed throughout the organization.

For a virtual hybrid organization, like the PWSS program where most of the field offices are distinct and separate organizations, each field office may be responsible for maintaining its own general productivity software. For example, the state of California’s IT function maintains the general productivity software suite for the California PWSS reporting agency, not EPA. However, EPA may provide access to a business automation system to the California PWSS reporting agency.

The kind of pilot testing described in this article involves business automation systems, such as accounting/financial, enterprise resource planning, and warehousing systems. This category also includes specialized business systems designed to automate business processes, improving efficiency and data quality and, for commercial concerns, providing a competitive advantage. Unlike the general productivity software described above, business automation systems can significantly change the way users do their work. Improved business processes “baked into” these kinds of systems often involve new procedures and different approaches to recording and retrieving information. In other words, these kinds of systems change the workplace – and human beings are not that good at handling change.

When it comes to deploying business automation software to an organization (such as HQ deploying a new business automation system to the field offices), it’s important not to overwhelm the users by “unleashing” the business system on them as if it were general productivity software. Doing so would likely cause a backlash as users struggle to shift from legacy processes and systems to the new system, especially if the business system is complex (it involves many steps, has lots of features, or supports processes that are highly technical and difficult to understand, for example). That’s why it’s important to have a software acquisition project plan that keeps stakeholders and end users informed and engaged, but uses a deployment approach that eases their transition from the old business practices or legacy system to the new business system.

Pilot testing is a process that your project team can use to test the new business system in its production environment before deploying it to the general user population. The big difference between pilot testing and the testing done during system development (or, if dealing with an off the shelf product, system configuration) is that development testing is a priori. Most Agile development teams use test-driven development, which involves translating requirements (user stories) into test cases, then writing code to pass those test cases. The goal is to continue refining the software (through coding and short iterations like the “sprints” used in SCRUM) by developing only to the new test cases. This prevents the developers from adding code to the system that hasn’t been proven by a corresponding test case. When creating the test cases, developers manufacture test data (they can also repurpose existing data for testing) while knowing (from the user stories) what the correct outcome of the test will be (hence a priori).
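
To make the a priori idea concrete, here’s a minimal test-driven development sketch in Python. The user story, business rule, and function are hypothetical (they’re not from any system mentioned in this article); the point is that the test cases and their expected outcomes exist before the code does:

```python
# Test-driven development in miniature: the test cases come first,
# derived from a user story, with the expected outcomes known a priori.
# The business rule below is hypothetical and for illustration only.
import unittest


def apply_late_fee(balance: float, days_overdue: int) -> float:
    """Add a 5% late fee once an invoice is more than 30 days overdue."""
    if days_overdue > 30:
        return round(balance * 1.05, 2)
    return balance


class LateFeeTest(unittest.TestCase):
    # User story: "As a billing clerk, I want overdue invoices to
    # accrue a 5% late fee so that customers pay on time."

    def test_fee_applied_after_30_days(self):
        self.assertEqual(apply_late_fee(100.00, 31), 105.00)

    def test_no_fee_within_30_days(self):
        self.assertEqual(apply_late_fee(100.00, 30), 100.00)


if __name__ == "__main__":
    unittest.main()
```

Only after the tests are written does the developer write (or finish) apply_late_fee, and no code is added that isn’t demanded by a test.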

Pilot testing is different because the test data is live data and the outcome of the test is unknown until the system processes the data. Theoretically, the system should work correctly and generate the expected results, as it will have passed development testing. But it doesn’t hurt to verify that everything is working correctly at a few test organizations before transitioning the remaining users to the new system. As Benjamin Franklin said, “an ounce of prevention is worth a pound of cure.”

Example Pilot Test

Back in the late 1990s, the U.S. federal judiciary was using a custom-developed financial management application in half of the court units (94 district and 13 appellate court units in total). An earlier project to replace an obsolescent system with a modern (at that time) local area network (LAN)-based system had just failed. New leadership came into the courts and started a new project to acquire an off the shelf financial management system – the Financial Accounting System for Tomorrow, or FAS4T.

It took the FAS4T project team a little over two years to define requirements, refine and revise the existing governance structure (to encourage stakeholder involvement), identify off the shelf solutions, and evaluate and select a financial management system for deployment to the users. Before going into full-blown deployment, the project team conducted a “pilot test” on the new FAS4T system. This pilot test was the shakedown cruise for the new financial management system.

The pilot testing phase lasted about a year and involved four U.S. district courts as pilot test sites. During the testing period, the four courts operated their legacy financial systems in parallel with FAS4T. The idea was to gauge the “seaworthiness” of FAS4T in supporting the real work of the courts for an extended period of time.
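
To give a feel for what a parallel run can catch, here’s a minimal Python sketch of a period-end reconciliation between a legacy system and its replacement. The account names, balances, and tolerance are hypothetical; a real parallel run would compare much more than summary balances:

```python
# Parallel-run check: the same transactions are entered into both the
# legacy and the new system, and period-end balances are compared to
# flag any account where the two systems disagree. All data here is
# made up for illustration.
legacy_balances = {"operating": 125_400.00, "payroll": 88_210.50, "deposits": 9_975.25}
new_balances = {"operating": 125_400.00, "payroll": 88_210.50, "deposits": 9_975.20}


def reconcile(legacy: dict, new: dict, tolerance: float = 0.01) -> list:
    """Return (account, difference) pairs exceeding the tolerance."""
    discrepancies = []
    for account in sorted(set(legacy) | set(new)):
        diff = abs(legacy.get(account, 0.0) - new.get(account, 0.0))
        if diff > tolerance:
            discrepancies.append((account, diff))
    return discrepancies


for account, diff in reconcile(legacy_balances, new_balances):
    print(f"Investigate {account}: balances differ by {diff:.2f}")
```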

Pilot testing took a lot of effort – court users had to deal with entering data into two systems. It also took project team effort – providing extra on-site and remote user support to take some of the load off each of the courts. The project team also made sure to share information between the pilot courts (things to look out for, known issues, etc.) and with the other court units. This wasn’t too difficult as the FAS4T project (actually, the financial systems program, because there were two legacy systems in production) had a well-organized governance structure and a committed and involved project sponsor.

The project team developed a process for recording, prioritizing, and fixing errors found during pilot testing. Since it was difficult to anticipate all of the permutations in transactions the system would process, new issues would “pop up” during the pilot testing phase. The goal was to correct as many high priority errors as possible before the end of pilot testing.
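
As a sketch of the recording-and-prioritizing idea, here’s what a bare-bones issue log might look like in Python. The fields and priority scheme are assumptions for illustration, not the FAS4T project’s actual tracking process:

```python
# A minimal pilot-test issue log: each finding is recorded with a
# priority, and open items are worked highest priority first.
from dataclasses import dataclass
from enum import IntEnum


class Priority(IntEnum):
    HIGH = 1
    MEDIUM = 2
    LOW = 3


@dataclass
class Issue:
    site: str
    description: str
    priority: Priority
    resolved: bool = False


issues = [
    Issue("Pilot site A", "Quarter-end report total off by one cent", Priority.HIGH),
    Issue("Pilot site B", "Slow screen refresh on receipts entry", Priority.LOW),
]

# Work the open issues in priority order before pilot testing ends.
for issue in sorted((i for i in issues if not i.resolved), key=lambda i: i.priority):
    print(f"[{issue.priority.name}] {issue.site}: {issue.description}")
```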

At the end of pilot testing, the project sponsor declared the test a success and OK’ed FAS4T for deployment to the remaining 103 court units. Deployment took several years, but eventually all court units had received FAS4T by 2004, and the system (in its then-current version) remained in use until replaced by a fully centralized version in 2017. The pilot phase took longer than expected because the Judiciary selected a database standard not supported by the financial system vendor. This required extra work to switch database back-ends during the pilot, which proved far less disruptive than changing databases would have been once the system was in production.

Planning for Pilot Testing

Drawing on the experience of the FAS4T project (and others not included here for brevity), here’s some advice for planning and executing a pilot testing phase as part of an effort to acquire and deploy a new business system for a geographically dispersed organization arranged in the hub and spoke pattern described above.

Favorable Project Environment

This is the most important success factor for any project and for any pilot testing effort. Like the FAS4T project, the project sponsor must be committed and involved. Creating and maintaining user governance groups (involving stakeholder executives, managers, and workers) is a great and proven way to enhance communications, manage user expectations, and give stakeholders the opportunity to make the system “their own”. Without these in place, chances for success are considerably diminished.

The Pilot Testing Plan

Pilot test planning should happen during development as a parallel activity. The pilot test plan should be approved by the project sponsor well before the end of the development phase. The plan, in itself, is a great tool for communicating the pilot testing phase to the project stakeholders, and the sooner the project team completes the plan and the sponsor approves it, the better.

A complete pilot test not only involves “real world testing” by real end users, but also includes testing:

  • Training and training materials used to educate users on the system
  • User support, problem reporting, and help desk processes, especially if the intention is to use a web-based, “self service” help desk and ticketing system
  • Data conversion/translation processes to migrate data in the legacy system to the new system
  • End-to-end data exchanges between the new system and any interfacing systems
  • Business processes that are automatically triggered at predetermined times (for example, setting an alarm and then launching a process when the alarm goes off – see the sketch after this list)
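
For the last item in the list above, here’s a bare-bones Python sketch of a time-triggered process: an “alarm” time is set, and the job launches when that time arrives. It’s illustrative only – a production system would more likely use cron, a job scheduler, or the platform’s batch facility, which is exactly why these triggers need to be exercised during the pilot:

```python
# A time-triggered business process in miniature: set an alarm, wait,
# then launch the process when the alarm goes off. The job itself is
# a hypothetical stand-in for a real scheduled business process.
import time
from datetime import datetime, timedelta


def quarterly_close():
    print(f"{datetime.now():%Y-%m-%d %H:%M:%S} - running quarterly close...")


# Set the alarm (a few seconds out so this sketch is runnable as-is).
alarm_time = datetime.now() + timedelta(seconds=3)

while datetime.now() < alarm_time:
    time.sleep(0.5)  # wait for the alarm to go off

quarterly_close()  # launch the triggered process
```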

The pilot test plan should explain and describe the testing aspects listed above. The following subsections describe other aspects of pilot testing and how to include them in the pilot test plan.

Pilot Test Team Organization

The pilot test plan should also describe the pilot test team organization at each pilot testing site. The diagram in Figure 2 shows a pilot testing organization.


Figure 2 – Pilot Test Team Organization

The left side of Figure 2 shows the project team organization, led by the project sponsor and the project manager. On the right is the pilot site test team organization, which is replicated at every pilot test site – if there are four pilot test sites, there will be four pilot site test organizations.

The project team has two roles for pilot testing:

  • Pilot test manager – has overall responsibility for planning and executing the pilot test.
  • Pilot test coordinator – coordinates testing activities between the project team and the pilot site test team. There may be one pilot test coordinator for all pilot test sites or, if the workload is too great, one pilot test coordinator for each pilot test site. As pilot test sites may vary in complexity and size, it might be possible to divide up the workload so that one coordinator handles two smaller pilot test sites while another handles a single, larger and more complex pilot test site.

At a minimum, each pilot site testing team should consist of:

  • Pilot site test sponsor – an executive or senior manager in the pilot test site organization who functions much like a project sponsor.
  • Pilot site test manager – a representative of the pilot organization who liaises with the project team to coordinate test activities between the pilot site and the project team. The pilot site test manager is also responsible for planning and coordinating testing with the pilot site testers (described below) and reporting test results and observations to the pilot test manager.
  • Pilot site testers – one or more end users who will test the system. Pilot site testers should be willing to put in the extra work, especially if testing involves using two systems in parallel. The pilot site test sponsor should consider incentives for encouraging high performing staff members to participate in pilot testing.

Selecting Pilot Test Sites

Ideally, the project sponsor should select the pilot testing sites while the pilot testing plan is under development – if not sooner. It’s very important to select pilot sites that are committed to pilot testing and have a high probability of testing success.

As pilot testing involves actually deploying the system to the pilot test site (deployment is, as listed above, an item in the pilot test plan), avoid problematic field offices such as those with known personnel issues, low performance, or other signs of organizational dysfunction that could come back and torpedo pilot testing. Candidate pilot test sites should demonstrate to the project sponsor and governance bodies that they have executive-level commitment to participating in pilot testing.

Considering that pilot test sites are undertaking a project to test the new system, like any other project, it’s extremely important that the field office executive management commit to the pilot testing process. Do not take the word of a mid-level manager that their organization is committed to pilot testing – it’s possible their executive management may be unaware of the commitment.

As with most major decisions affecting project scope, the project sponsor should approve all pilot test sites. The project manager and stakeholder governance groups (such as a stakeholder steering committee) are there to give recommendations and advise the project sponsor on pilot site selection.

Regarding the number of pilot test sites, more than four is usually more than most project teams can handle, especially considering the enhanced level of support required during pilot testing. Support effort per site should level off and decrease after pilot testing ends and the field offices have completed their transitions to the new system. If the support commitment doesn’t lessen over time, or increases, then it’s likely there are serious issues with system quality/completeness that weren’t dealt with during development.

List the pilot test sites in the pilot test plan, naming each pilot site test sponsor and test manager.

Pilot Testing Duration

The duration of pilot testing depends upon the system being tested and the business it supports. Let’s say that the new system is supporting a business requirement where the field offices finalize and report data every quarter with a special year-end closing process. At a minimum, the pilot test should run for two quarters with the option of ending testing earlier if the second quarter test isn’t required.

You could make a case for running the pilot test for a full year, but this would delay system deployment to the remaining sites. Depending upon when pilot testing starts, testing could run for a partial year – say, one quarter plus the year-end. Another option is to simulate the year-end process by adjusting the system clock. This could cause unintended consequences if done in a production environment, especially if the clock is reset to the current time, so it’s better to do it in a separate test environment. While it won’t be a true production pilot test, this kind of testing will help identify problems with year-end processing in advance.
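
One low-risk way to get the effect of adjusting the clock in a test environment is to have the application read the time through an injectable clock rather than directly from the operating system. The following Python sketch shows the idea; the class and function names are hypothetical:

```python
# Simulating year-end without touching the real system clock: the
# application asks a clock object for "now", and the test environment
# supplies a frozen clock set to year-end.
from datetime import datetime


class Clock:
    """Production clock: returns the real current time."""

    def now(self) -> datetime:
        return datetime.now()


class FrozenClock(Clock):
    """Test clock: returns a fixed, simulated time."""

    def __init__(self, fixed: datetime):
        self.fixed = fixed

    def now(self) -> datetime:
        return self.fixed


def is_year_end(clock: Clock) -> bool:
    """Hypothetical check that gates the year-end closing process."""
    today = clock.now()
    return today.month == 12 and today.day == 31


# In the test environment, year-end processing can run on demand:
test_clock = FrozenClock(datetime(2023, 12, 31, 23, 0))
assert is_year_end(test_clock)
print("Year-end processing would run now.")
```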

Pilot Testing Communications

During the pilot phase, the project team will be spending a lot of time with the pilot test sites, and there will be a natural tendency to neglect communicating status to the stakeholder community. Project teams should avoid this tendency because stakeholders can misinterpret silence to mean that something is wrong with the project.

It’s just as, if not more, important to maintain effective communications between the pilot test sites. As testing progresses, pilot sites can share incident reports and observations with each other. The project team can use this information to improve training and user support materials. Consider the following strategy for establishing and maintaining effective communications during pilot testing:

  • Schedule periodic standup meetings (say, twice per week) between the project team and each individual pilot test site. The pilot site test manager should be a required meeting attendee, with the pilot site test sponsor as an optional attendee. The pilot site test manager should bring pilot site testers to the meeting if they are needed to explain or elaborate on testing activities in which they were involved. On the project team side, the pilot test manager and the coordinator for the pilot site should attend. Other project team members (such as the project manager, SCRUM master, product owner, and developers) should attend as needed.
  • Schedule a weekly standup meeting with all of the attendees noted in the previous bullet for all the pilot test sites. This meeting offers the opportunity for the different pilot test sites to exchange experiences amongst themselves and with the project team.
  • Add pilot testing status to any standing status meetings while pilot testing is underway. This includes meetings with the project sponsor, stakeholder governance groups, and general update meetings held with the stakeholder community. The pilot test manager should own this agenda item and should consider inviting the pilot site test managers to share their experiences with pilot testing.
  • Consider using collaboration software (like Microsoft SharePoint) to exchange information between the project team and the pilot site testing teams. Important matters shouldn’t wait for a meeting, but should be shared as soon as possible.

Include descriptions of these meetings, their recurrence schedules, and agenda templates in the pilot test plan.

Completing Pilot Testing

Pilot testing is complete when the pilot test manager is satisfied that the system behaves predictably in its operational environment. Of course, the test manager does not reach this conclusion on their own, but with input from the pilot site test teams. It also means that the system doesn’t have to be perfect to pass testing. To paraphrase Voltaire, perfection is the enemy of good enough. Most system acquisition projects involve a series of trade-offs – there simply isn’t enough time and funding to build or buy the perfect system. So it’s OK if the system has a few warts – just as long as the warts don’t prevent the user from accomplishing an important task.

At some point during the pilot test, the pilot test manager, after reviewing testing progress, may want to modify the test schedule. For example, pilot testing may be going exceptionally well, meaning that it could end early. Conversely, the pilot test team may encounter several difficult issues and need more time for testing. Both situations are major changes to the project schedule and should be approved by the project sponsor, especially if extending the pilot testing schedule, as this could have budgeting ramifications.

Pilot Test Report

The pilot test manager should be responsible for writing a pilot test report describing:

  • Who was involved in testing at each pilot test site
  • Observations about the system gathered during pilot testing
  • Any major changes made during testing, and any that remain to be made after pilot testing is completed
  • Lessons learned from pilot testing
  • Any other pertinent information the pilot test manager deems appropriate for the report

Consider the pilot test report a formal deliverable to be maintained in the project files. Future project teams may find the test report useful for planning their projects. Also share the pilot testing report with the user community.

What’s After Pilot Testing?

After a successful pilot test, the project team is ready to roll out the system to the remaining field offices according to the deployment plan (which may have been updated based on information gleaned from pilot testing). The pilot test sites become early adopters of the new system.
