Managing the configuration of an IT solution is critical both during project implementation and during subsequent maintenance. Key aspects of configuration management span planning, identification, control, status accounting, verification and installation qualification across multiple system environments.


Anyone who has participated in validating a computer system in their organisation knows that the first step after specifying and building the system is to qualify the installation; this ensures that the system installation and configuration baseline is under control. Maintaining the system in a qualified state requires at least annual configuration baseline reviews and configuration status accounting, as well as installation qualification of any changes applied during maintenance.

The challenge of managing the configuration of a GxP-critical corporate system is no longer staying in control, but reducing the significant costs associated with the level of control required to stay in compliance.

Configuration management (CM) is at the heart of installation qualification and is central to ensuring IT infrastructure compliance throughout any system's life cycle, from inception to retirement. The life sciences industry recognises this, and the ISPE has, with one of its latest good practice guide publications, IT Infrastructure Control and Compliance1, standardised the concepts behind CM and change control. This article revisits the key concepts and value drivers and elaborates on some good practices based on a real case study.

Setting the scene

At the base of CM is the configuration item (CI): a unit of configuration that can be individually managed and versioned, carrying a unique identifier, an owner and recorded relations to other CIs. As with any management discipline, CM consists of a number of processes, including:

  • Planning
  • Identification
  • Control
  • Verification
  • Status accounting

Planning defines CM scope and objectives, roles and responsibilities, procedures and relationships with other management processes (for example, change management). Planning also defines the use of configuration management software tools.

Identification of CIs strives to maximise control with minimal records; that is, the level of detail must be justified by the business need, typically at the level of change. For example, a CI could identify an entire operating system, a service pack or a patch applied to that operating system. At the application level, customised components such as Java class files change over time and therefore need control, as do users and their privileges, which should likewise be controlled as CIs. CIs should be recorded in a central configuration management database (CMDB).
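
As an illustration, a minimal CMDB record might carry exactly those attributes: identifier, version, owner and relations. The sketch below uses hypothetical field names and Python, neither of which is prescribed by the article or any standard:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """A minimal CMDB record for one CI (illustrative field names only)."""
    ci_id: str        # unique identifier, e.g. "OS-WIN2003-SP1"
    version: str      # current version or patch level
    owner: str        # responsible person or role
    ci_type: str      # e.g. "operating-system", "patch", "java-class"
    related_to: list[str] = field(default_factory=list)  # identifiers of related CIs

# A service pack recorded as its own CI, related to the operating system it patches
sp1 = ConfigurationItem(
    ci_id="OS-WIN2003-SP1",
    version="1.0",
    owner="IT Operations",
    ci_type="patch",
    related_to=["OS-WIN2003"],
)
```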

Control ensures that only authorised and identifiable CIs are accepted and recorded in the CMDB, and that changes have associated control documentation – for example, a request for change (RfC). Verification reviews the CIs actually present in the system against their records in the CMDB; its output is a new configuration baseline, which is why the exercise is also referred to as a configuration baseline review. Status accounting ensures reporting on current and historical CIs throughout their life cycles, and accounts for any deviations found during verification.
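
A sketch of the control step described above, assuming a simple in-memory CMDB and hypothetical names: a CI is only accepted for recording when its change is tied to an approved RfC.

```python
class UnauthorisedChangeError(Exception):
    """Raised when a CI arrives without an approved control document."""

def record_ci(cmdb: dict, ci_id: str, attributes: dict,
              rfc_id: str, approved_rfcs: set) -> None:
    """Accept a CI into the CMDB only if it carries an authorised RfC reference."""
    if rfc_id not in approved_rfcs:
        raise UnauthorisedChangeError(f"{ci_id}: RfC {rfc_id} is not approved")
    cmdb[ci_id] = {**attributes, "rfc": rfc_id}  # record the CI with its control document

# Usage: only authorised changes reach the CMDB
cmdb: dict = {}
record_ci(cmdb, "WEB-PAGE-LOGIN", {"version": "2.1"},
          rfc_id="RFC-0042", approved_rfcs={"RFC-0042"})
```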

In validation, the installation qualification (IQ) is simply change verification and accounting of the actually installed CIs against the expected result, or configuration baseline.

Value drivers

Although merely staying in compliance can be perceived as an adequate value driver in the life sciences industry, CM has further merits, including the provision of accurate information about CIs required to support other management processes, traceability and security. CM also facilitates impact and trend analysis across changes to the baseline.

Perhaps the best example of a service that is not adequately configuration managed is one whose provider cannot explain what has happened to the service and why it is failing to deliver, or why previously resolved problems have suddenly reappeared.

Any GxP-critical application must be verified and accounted for at least annually to ensure compliance. Such an exercise on a full-scale corporate application is tedious and can easily cost hundreds of man-hours and take weeks to complete. It is possible, however, to reduce the incurred costs by investing in good configuration management practices. Any request for proposal (RfP) should therefore require efficient verification and accounting, including during maintenance.

Case study: environmental monitoring

Senior project manager Robert Lauritzen was put in charge of implementing an environmental monitoring solution throughout the production facilities of a global pharmaceutical corporation. The project was responding both to increased regulatory requirements and to the need to expand production capacity, which was not possible with the current set-up. The solution was chosen for its usability and user acceptance, enterprise readiness and configurability.

Just under 600 users at 12 locations across three continents would be using this GxP-critical solution 24/7, all year round. The users expected their system to be highly flexible and reliable in handling changes. CM planning was essential to satisfying those expectations in a cost-effective way, reducing verification to a matter of hours instead of weeks.

Managing the configuration

The CM plan described a hierarchical perspective of the system including users and their access privileges, SOPs, documentation, system environments, application-specific items, and the relations to other CIs, such as infrastructure platforms and networks. Application-specific items included elements such as web pages, data elements and their configuration, Java class files and JavaScript files.

With a plan for what to control in place, the next step was determining the most efficient way of managing the configuration. The technical challenges, approached with a cost reduction focus, included:

  • Uniquely identifying any kind of CI in the CMDB
  • Producing the configuration baseline
  • Migrating changes from the source (typically development) to target environments
  • Verifying that the source and target are correct
  • Accounting for the status of the changed items and any deviations found

The project developed a migration tool, a verification report tool and a mechanism for uniquely identifying CIs. Both tools rely on the ability to extract any kind of CI (see figure). The migration tool extracts CIs into installable change packages: CIs are tagged with an RfC identifier, and the tool builds an installable package by extracting only the CIs tagged with that RfC. If a release consists of several changes, this allows each change to be tested independently on separate environments.
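
The article does not detail the tool's internals; as a minimal sketch under that assumption, building a change package amounts to selecting only the CIs tagged with the requested RfC identifier:

```python
def build_change_package(all_cis: list[dict], rfc_id: str) -> list[dict]:
    """Extract the CIs tagged with the given RfC into an installable change package."""
    return [ci for ci in all_cis if ci.get("rfc") == rfc_id]

# Two RfCs in one release can be packaged and tested independently
cis = [
    {"id": "WEB-PAGE-LOGIN", "rfc": "RFC-0042"},
    {"id": "JAVA-CLASS-AUTH", "rfc": "RFC-0042"},
    {"id": "DATA-ELEMENT-ROOM", "rfc": "RFC-0043"},
]
package_42 = build_change_package(cis, "RFC-0042")  # first two CIs only
package_43 = build_change_package(cis, "RFC-0043")  # third CI only
```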

The verification report compares expected CI identifiers from a source, either the CMDB or a system, against the actual CI identifiers in the specific target system. Deviations are flagged for accounting.
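
A sketch of such a report, assuming both sides are represented as mappings from CI identifier to checksum (the representation is an assumption; the comparison logic follows the article):

```python
def verification_report(expected: dict[str, str],
                        actual: dict[str, str]) -> list[str]:
    """Compare expected CI checksums (from the CMDB or a source system)
    against the actual checksums found in the target system."""
    deviations = []
    for ci_id, checksum in expected.items():
        found = actual.get(ci_id)
        if found is None:
            deviations.append(f"{ci_id}: missing from target")
        elif found != checksum:
            deviations.append(f"{ci_id}: mismatch (expected {checksum}, found {found})")
    for ci_id in actual.keys() - expected.keys():
        deviations.append(f"{ci_id}: unexpected item in target")
    return deviations  # every deviation is flagged for status accounting
```

This leads to the question of how CIs are uniquely identified.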

Contemporary technologies rely heavily on relational database management systems to contain significant portions of program logic and configuration. Uniquely identifying a database-contained CI becomes a question of querying across multiple joined normalised entities, ensuring that the query yields the exact same result set every time, as long as the CI has not changed.
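
A minimal sketch of such an extraction, using SQLite and hypothetical tables for users and privileges; the explicit ORDER BY is what makes the joined query deterministic while the CI is unchanged:

```python
import sqlite3

def extract_user_privileges(conn: sqlite3.Connection, user_group: str) -> list[tuple]:
    """Extract one database-contained CI (a user group and its privileges)
    as a result set that is identical on every run until the CI changes."""
    return conn.execute(
        """
        SELECT u.username, p.privilege
        FROM users u
        JOIN user_privileges up ON up.user_id = u.id
        JOIN privileges p ON p.id = up.privilege_id
        WHERE u.user_group = ?
        ORDER BY u.username, p.privilege
        """,
        (user_group,),
    ).fetchall()
```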

Calculating a checksum of the extracted CI with a standard mathematical algorithm provides a simple and efficient way to uniquely identify and record any kind of CI. The CI, be it a database query result set or a file, is treated as a stream of data from which the checksum is calculated. Any change to the underlying CI changes the checksum. Migration preserves the checksum, and verification becomes a simple comparison of checksum values.
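
A minimal sketch of the idea; the article does not name the algorithm used, so SHA-256 is assumed here:

```python
import hashlib
from typing import Iterable

def ci_checksum(stream: Iterable[bytes]) -> str:
    """Treat an extracted CI as a stream of data and return its checksum."""
    digest = hashlib.sha256()
    for chunk in stream:
        digest.update(chunk)
    return digest.hexdigest()

def file_checksum(path: str) -> str:
    """A file-based CI: hash the file contents chunk by chunk."""
    with open(path, "rb") as f:
        return ci_checksum(iter(lambda: f.read(8192), b""))

def result_set_checksum(rows: list[tuple]) -> str:
    """A database-contained CI: hash the deterministic result set row by row."""
    return ci_checksum(repr(row).encode("utf-8") for row in rows)
```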

Low-cost quality

Many corporations suffer under the costly burden of inefficient CM, either because they implemented systems that do not easily support automated CM, or because the opportunity was never identified. As demonstrated, CM can be partially automated, and doing so significantly lowers the cost of both implementation and maintenance while increasing the quality of work by eliminating tedious and error-prone manual tasks.

References
1. IT Infrastructure Control and Compliance, ISPE Good Practice Guide, September 2005.