
Friday, February 19, 2010

Scope Verification of Software Projects

A reader over at PM Hut asks, "How [can I] calculate delivery compliance in an engineering/software (IT) project?"

From a Project Management viewpoint, I had some difficulty in understanding the term "delivery compliance", so I've taken the liberty of thinking of this as scope verification -- how we determine that what was delivered meets the project requirements. There is then a second part to this inquiry -- how is it measured?

This can be a complex question. We have known since the 1970s, and the work of Gerald Weinberg (The Psychology of Computer Programming), that testing software does not prove the absence of bugs. Where scope statements require high reliability, then, measurement becomes uncertain. Large, complex projects create further challenges for verification and measurement.

I believe I can put some simple structure around this complex question. During the early years of my software and software project management career, I worked with engineering/software development teams at GE Information Services that were challenged to:

- develop systems that worked with high reliability -- 99.999% (see the quick calculation after this list)
- meet the demand that memory and CPU utilization vary by no more than 1% from release to release (since clients were billed on this basis)
- provide high client satisfaction -- an "A" average was required on client report cards
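To put the first target in perspective, five nines of availability leaves only a handful of minutes of unscheduled downtime per year. Here is a minimal back-of-the-envelope sketch in Python; the figures are simple arithmetic, not output from the original systems:

```python
# Translate an availability target into an allowed annual downtime budget.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_budget_minutes(availability: float) -> float:
    """Maximum downtime, in minutes per year, permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability)

print(f"99.999% -> {downtime_budget_minutes(0.99999):.1f} minutes/year")  # about 5.3
print(f"99.9%   -> {downtime_budget_minutes(0.999):.1f} minutes/year")    # about 526
```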

This was "RASM" (Reliability, Availability, Scalability, Maintainability) before the acronym was coined! Here's how this was achieved:

Verification
Verification required a robust process to maintain the chain of accountability for the requirements. This process included the items below (a minimal traceability sketch follows the list):

- written SMART requirements (specific, measurable, achievable, realistic, time-bound)
- diligent requirements reviews (indirectly more of Weinberg's work)
- diligent design reviews
- coding standards
- peer reviews of all code
- management sign-off on all changes
- developer-created unit test cases, which contributed to future regression tests
- test cases that went through the same kind of review and change-control process as the software itself.
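To illustrate the chain of accountability, here is a minimal traceability check in Python. The requirement IDs, descriptions, and data structures are hypothetical; the point is only that every requirement should map to at least one reviewed test case, and any orphaned requirement gets flagged before release.

```python
# Hypothetical requirement and test-case records; a real system would pull
# these from the requirements-management and test-management tools.
requirements = {
    "REQ-001": "Login completes in under 2 seconds",
    "REQ-002": "Memory use varies by less than 1% release to release",
    "REQ-003": "System availability of 99.999%",
}

test_cases = {
    "TC-101": ["REQ-001"],
    "TC-102": ["REQ-002"],
    "TC-103": ["REQ-001", "REQ-003"],
}

# Which requirements are covered by at least one test case?
covered = {req for reqs in test_cases.values() for req in reqs}
orphans = sorted(set(requirements) - covered)

if orphans:
    print("Requirements with no verifying test case:", orphans)
else:
    print("Every requirement traces to at least one test case.")
```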

Testing was also a critical part of the process. In my opinion, testing these days has become too reliant on use cases. Use cases make fine exemplars or clarifications of requirements, but they are poor test cases -- users rarely do what's expected of them 100% of the time. Testing needs to be a good mix of black box and white box (glass box) testing to ensure you are capturing as many issues as possible. The requirements need to become your checklist. Further, in addition to working through the checklist an item at a time, you need to step back and make sure the pieces meet the big picture (here's a nice role for use cases, as well as additional testing to verify the interaction of components).
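As an illustration of treating the requirements as your checklist, a single requirement can drive both a black-box check (asserting only the documented behavior) and a white-box check (inspecting internal state). The session manager, its limit, and the test names below are all hypothetical:

```python
import unittest

# Hypothetical system under test, with a documented requirement of
# "no more than 500 concurrent sessions per node; excess requests are rejected".
class SessionManager:
    MAX_SESSIONS = 500

    def __init__(self):
        self._sessions = set()

    def open_session(self, user_id: str) -> bool:
        if len(self._sessions) >= self.MAX_SESSIONS:
            return False  # requirement: reject cleanly, do not crash
        self._sessions.add(user_id)
        return True

class RequirementChecklistTests(unittest.TestCase):
    def test_black_box_rejects_sessions_over_limit(self):
        """Black box: assert only the externally visible behavior."""
        mgr = SessionManager()
        for i in range(SessionManager.MAX_SESSIONS):
            self.assertTrue(mgr.open_session(f"user-{i}"))
        self.assertFalse(mgr.open_session("one-too-many"))

    def test_white_box_internal_count_never_exceeds_limit(self):
        """White box (glass box): inspect internal state for leaks past the cap."""
        mgr = SessionManager()
        for i in range(SessionManager.MAX_SESSIONS + 10):
            mgr.open_session(f"user-{i}")
        self.assertEqual(len(mgr._sessions), SessionManager.MAX_SESSIONS)

if __name__ == "__main__":
    unittest.main()
```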

Critical components of testing in the environment included:

- a strong, multi- and cross-platform configuration management system to manage builds and their versions
- ability to easily integrate developer unit tests into other test beds through defined test frameworks
- automated testing to drive high test coverage in shorter periods of time
- a measurable regression test bed (to ensure no unexpected changes in CPU/memory use or functionality; see the sketch after this list)
- load testing (how many simultaneous users could use the system with no database or other system deadlocks)
- stress testing (what happened when the expected limits of any system aspect were exceeded).
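The "no more than 1% variation in CPU and memory from release to release" goal is straightforward to automate once the regression bed records baseline numbers for each release. Here is a minimal sketch; the metric names, sample values, and failure behavior are assumptions, not the original tooling:

```python
# Compare per-release resource measurements against the previous release's
# baseline and fail the build if CPU or memory drifts by more than 1%.
TOLERANCE = 0.01  # 1% allowed variation, per the billing requirement

baseline = {"cpu_ms_per_txn": 42.0, "memory_kb": 1280.0}   # previous release
current  = {"cpu_ms_per_txn": 42.3, "memory_kb": 1312.0}   # candidate release

def within_tolerance(metric: str) -> bool:
    drift = abs(current[metric] - baseline[metric]) / baseline[metric]
    status = "OK" if drift <= TOLERANCE else "FAIL"
    print(f"{metric}: {drift:.2%} drift ({status})")
    return drift <= TOLERANCE

results = [within_tolerance(metric) for metric in baseline]
if not all(results):
    raise SystemExit("Resource usage regression exceeds 1% -- release blocked")
```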

Metrics

Now that we've looked at the intended delivery outcomes and the process for achieving them, let's turn to the key metrics that were in place and how they were used to assure delivery:

1) Regular reporting of the number of open tickets and issues by priority, software system, client, and age
2) Regular reporting on system and application availability. The goal was 99.999% (scheduled downtime counted as meeting the goal; unscheduled downtime did not). A sample calculation follows the list.
3) Regular client survey results, requiring a specific average (usually something like A- or better). Clients were proactively called on a regular basis by client services to gather the information.
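As an illustration of the second metric, availability can be reported from an outage log in which scheduled maintenance does not count against the goal. The log format and figures below are hypothetical:

```python
# Hypothetical monthly outage log: (minutes of downtime, scheduled?)
outages = [
    (12.0, True),    # planned maintenance window -- counts as meeting goal
    (1.5, False),    # unscheduled database failover
    (0.8, False),    # unscheduled network blip
]

MINUTES_PER_MONTH = 30 * 24 * 60
unscheduled = sum(minutes for minutes, scheduled in outages if not scheduled)

availability = 1 - unscheduled / MINUTES_PER_MONTH
print(f"Unscheduled downtime: {unscheduled:.1f} minutes")
print(f"Availability: {availability:.5%} vs. goal of 99.99900%")
```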

To further reinforce delivery compliance, these three metrics were generally tied to financial compensation and overall employee performance reviews. Everyone in the organization carried a responsibility to help meet the metrics. (Note: See my earlier article Time, Deliverables, or Outcomes? for more on the triangular relationship of quality, availability, and high client satisfaction.)

It is also interesting to note that we found client satisfaction directly linked to the number of open tickets. It wasn't enough to just close the high-priority issues. Initially there was an attempt to ignore lower-level issues; as the number of these issues grew, client satisfaction dropped. Once out of compliance, we initiated a period in which only low-level tickets were addressed. Client satisfaction increased within two quarters, and we then adopted a more balanced approach than just closing high-priority, high-impact tickets.

Beyond these key metrics, other regular reports were available, including:

- measurement of time on projects
- measurement of time on issues and other non-project work
- measurements of system and application usage on display monitors throughout all facilities, including metrics such as uptime, availability, number of simultaneous users, transaction rates, etc.

Conclusion

Delivery compliance is a complex topic. It's not about a single, simple metric or measurement. I've attempted to put together a picture of a framework for a development process which drives an organization toward effective, high-quality delivery. The process was supported by three key, simple metrics which kept everyone aligned to the highest priorities and values of the company. Additional metrics helped build teamwork and rally employees around milestones (e.g. the day we reached 10,000 simultaneous users on our private telecommunications network and mainframe processors).
