Types of Testing

One of the critical tasks faced by the practicing IT project manager is creating and executing test plans.  This is required whether you are managing a product development project, implementing technology developed elsewhere, or managing a pure engineering project such as the build-out of a network or data center.  The types of testing your project requires will vary, as will the technical approaches and tools you'll use, but testing is a part of nearly every IT project.  Over the next few weeks, I'll be exploring testing and test plans.  This first installment is about the various types of testing.

Testing IT solutions has evolved significantly over the last thirty years.  Where once the focus was on finding and removing functional defects, testing now has a more diverse set of goals.  In addition, the testing roles have shifted somewhat.  Still, we tend to plan our testing in specific stages, including:

  • Unit testing – Historically, these tests were performed by the person responsible for creating the unit under test, e.g., the programmer or module configuration expert.  The goal here is to remove functional defects at the lowest level, such as a module or page.  The widespread availability of automated testing software in the 1990s gave rise to innovations such as test-driven development (TDD) and automated regression testing subsequent to frequent builds.  In TDD, the test scripts are created before the coding or configuration begins, as part of the design process.  All scripts are designed to fail before development begins, and development is only considered complete when all tests pass; there's a minimal sketch of this cycle after this list.
  • Smoke testing – The name comes from the hardware prototyping practice of “Let’s plug it in and see if any smoke comes pouring out.”  The goal here is to verify that a new build of a system was successful, by exercising key behaviors.  Modern smoke testing is almost always a part of the build process, using the same automated testing packages and a relatively small number of scripts.
  • Regression testing – Every time a change is made to a system, it can break something else.  The goal here is to exercise a comprehensive set of tests that will identify any newly created functional defects.  These days, regression testing is usually automated, and usually consists of the scripts used in unit testing, plus whatever other scripts are needed to exercise the overall build; the second sketch after this list shows how one suite can serve both smoke and regression runs.
  • Conversion validation – When a "heritage" system is replaced with a newer system, it is common for some historical records or current work in progress to be loaded into the new system at cut-over to production.  The goal here is to validate that the records are properly and completely represented in the new system.  This requires inspection of both the mapping of the data elements and the translations or manipulations performed prior to the load, as well as the related records created by the system during the load process; a small validation sketch follows the list.
  • Integration testing – Few useful systems are completely stand-alone.  Interfaces to and from other systems, whether operating as web services (pull), transaction passers (push), or batch file transfers (scheduled or ad hoc), move data from systems of record to other subscribers.  The goal here is to ensure that the correct data is extracted, transferred, received, and used, all while observing the appropriate controls and security protocols; see the batch-file sketch after this list.
  • UX / usability testing – Increasingly, we’re as concerned about the ability of the user to interact with the system as we are about the ability of the system to correctly operate on the user’s instructions.  The goal in usability testing is to fine-tune the design of the user experience, in order to maximize delivered value.
  • System testing – The notion of system testing has evolved from the early days of simply exercising all of the modules to a more comprehensive end-to-end test of the integrations with other systems, the correct evolution of a growing set of transaction data records, and validation of access and data security measures.  The goal is to ensure that the system, including all software, hardware, network, and user actors, works reliably.
  • Load testing – As more and more actors make demands on a system, whether human users or other systems, performance may suffer.  The goal here is to provide a basis for "tuning" and optimizing delivery of services, whether UX or integration, under a variety of expected demands; a minimal load-probe sketch appears after this list.
  • Survivability testing – Different systems have different availability requirements.  Those with high availability or continuity of business operations requirements generally have commensurately more complex designs.  The goal here is to exercise key failure modes to ensure that recovery, whether automatic or manual, works as expected.
  • User acceptance testing – Decision makers need a basis for accepting delivery of a completed system.  The goal here is to exercise not just the system under test but all supporting capabilities, from user training to problem reporting and resolution to managing updates and planned outages.  UAT is the final test before the go / no-go decision that determines whether the system can be put into production.
  • Parallel testing – This is usually reserved for applications like payroll, where the results of the system under test are compared to the results from the system in production.  As with UAT, for applications where it applies, some number of successful parallel cycles is required in order to approve the system for a move to production; the last sketch after this list shows a simple results comparison.
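
To make the TDD cycle described under unit testing concrete, here is a minimal sketch using Python's pytest conventions.  The payroll module, the gross_pay function, and the pay rules are hypothetical, invented for illustration; the point is that the tests are written first, fail by design, and then drive the implementation.

```python
# test_payroll.py -- written BEFORE the code under test exists (TDD).
# These tests fail until gross_pay() is implemented.

from payroll import gross_pay  # hypothetical module under test

def test_regular_hours():
    # 40 hours at $10/hour, no overtime
    assert gross_pay(hours=40, rate=10.0) == 400.0

def test_overtime_hours():
    # Hours beyond 40 are paid at 1.5x the base rate
    assert gross_pay(hours=45, rate=10.0) == 400.0 + 5 * 15.0
```

The implementation then follows, just enough to make both tests pass:

```python
# payroll.py -- the minimal implementation that satisfies the tests;
# development is "done" when pytest reports all green.

def gross_pay(hours: float, rate: float) -> float:
    """Compute gross pay with time-and-a-half overtime after 40 hours."""
    regular = min(hours, 40.0) * rate
    overtime = max(hours - 40.0, 0.0) * rate * 1.5
    return regular + overtime
```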
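Smoke and regression testing often share one automated suite.  Another hedged sketch: the module and tests are hypothetical, but marking a small smoke subset inside the full regression suite is a common pytest idiom.

```python
# test_suite.py -- one suite, two uses: the marked "smoke" subset runs after
# every build; the full suite serves as the regression pass.
# (Register the custom "smoke" mark in pytest.ini to avoid warnings.)

import pytest
from payroll import gross_pay  # hypothetical module under test

@pytest.mark.smoke
def test_basic_calculation():
    # Key behavior exercised right after a build
    assert gross_pay(hours=40, rate=10.0) == 400.0

def test_zero_hours():
    # Not in the smoke subset; exercised only in full regression runs
    assert gross_pay(hours=0, rate=10.0) == 0.0
```

After each build, `pytest -m smoke` runs only the marked tests; a plain `pytest` run executes everything as the regression pass.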
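For conversion validation, the core activity is verifying that mapped and translated data survived the load.  Below is a toy sketch; the legacy layout, status translation table, and salary-in-cents convention are all assumptions made up for the example.

```python
# convert_check.py -- confirm every legacy record appears in the new system
# and that mapped/translated fields carry the expected values.

# Legacy extract: (employee id, status code, annual salary in dollars)
legacy = [
    ("E001", "A", 52000),
    ("E002", "T", 48000),
]

# New system after the load: statuses translated, salary stored in cents
new_system = {
    "E001": {"status": "ACTIVE", "salary_cents": 5200000},
    "E002": {"status": "TERMINATED", "salary_cents": 4800000},
}

STATUS_MAP = {"A": "ACTIVE", "T": "TERMINATED"}  # the agreed translation

def validate(legacy_rows, new_rows):
    errors = []
    for emp_id, status, salary in legacy_rows:
        target = new_rows.get(emp_id)
        if target is None:
            errors.append(f"{emp_id}: missing from new system")
            continue
        if target["status"] != STATUS_MAP[status]:
            errors.append(f"{emp_id}: status translated incorrectly")
        if target["salary_cents"] != salary * 100:
            errors.append(f"{emp_id}: salary conversion wrong")
    return errors

print(validate(legacy, new_system) or "Conversion OK")
```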
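Integration tests for batch interfaces typically verify file integrity before anything else: the record count and control total in the trailer must match the detail records.  The pipe-delimited layout below is hypothetical, but the count-and-control-total pattern is common.

```python
# batch_check.py -- verify a batch interface file's trailer against its
# detail records before the receiving system consumes it.

def validate_batch(lines):
    """Check the trailer's record count and control total."""
    details = [line for line in lines if line.startswith("D|")]
    trailer = next(line for line in lines if line.startswith("T|"))
    _, declared_count, declared_total = trailer.split("|")
    actual_total = sum(int(line.split("|")[2]) for line in details)
    assert len(details) == int(declared_count), "record count mismatch"
    assert actual_total == int(declared_total), "control total mismatch"

sample = [
    "H|PAYFILE|2024-01-15",   # header: file id, date
    "D|E001|52000",           # detail: employee, amount
    "D|E002|48000",
    "T|2|100000",             # trailer: record count, control total
]
validate_batch(sample)
print("Batch file passes integrity checks")
```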
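A load test, stripped to its essentials, fires concurrent requests and reports latency statistics as a tuning baseline.  This sketch uses only the Python standard library; the simulated call is a stand-in for whatever UX or integration entry point the real system exposes.

```python
# load_probe.py -- minimal concurrent load generation with latency stats.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_system(_):
    """Stand-in for one request against the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service time; replace with a real call
    return time.perf_counter() - start

def run_load(total_requests=200, concurrency=20):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(call_system, range(total_requests)))
    return {
        "median_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * len(latencies))] * 1000,
        "max_ms": latencies[-1] * 1000,
    }

print(run_load())
```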
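Finally, the heart of a parallel test is a disciplined comparison of the two systems' outputs for the same input period.  The result sets and rounding tolerance below are invented for illustration.

```python
# parallel_compare.py -- diff one pay period's results from the production
# system against the system under test, employee by employee.

production = {"E001": 2000.00, "E002": 1846.15, "E003": 2500.00}
candidate = {"E001": 2000.00, "E002": 1846.15, "E003": 2500.01}

TOLERANCE = 0.005  # treat sub-cent rounding differences as matches

def compare(prod, cand):
    discrepancies = []
    for emp_id in sorted(set(prod) | set(cand)):
        p, c = prod.get(emp_id), cand.get(emp_id)
        if p is None or c is None or abs(p - c) > TOLERANCE:
            discrepancies.append((emp_id, p, c))
    return discrepancies

diffs = compare(production, candidate)
for emp_id, p, c in diffs:
    print(f"{emp_id}: production={p} candidate={c}")
print(f"{len(diffs)} discrepancies; a clean cycle requires zero")
```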

Not every project will require all types of testing, but every test plan needs to account for all types of testing required.  In the next installment, I’ll address roles and responsibilities, and defining the scope of testing.

