Two Perspectives on Standish Group's "More Projects Failing"


Since 1994, the Standish Group "Chaos" reports have been cited regularly in software and IT blogs. The following figure from the 2002 study is quite representative of the data provided in the Standish annual surveys of the state of software projects:

[Figure: Standish Group 2002 Chaos report — project resolution data (the-standish-group-2002-report.png)]

The January/February 2010 issue of IEEE Software features an article entitled The Rise and Fall of the Chaos Report Figures. The authors - J. Laurenz Eveleens and Chris Verhoef of the VU University, Amsterdam - give the following summary of their findings:

In 1994, Standish published the Chaos report that showed a shocking 16 percent project success. This and renewed figures by Standish are often used to indicate that project management of application software development is in trouble. However, Standish’s definitions have four major problems. First, they’re misleading because they’re based solely on estimation accuracy of cost, time, and functionality. Second, their estimation accuracy measure is one-sided, leading to unrealistic success rates. Third, steering on their definitions perverts good estimation practice. Fourth, the resulting figures are meaningless because they average numbers with an unknown bias, numbers that are introduced by different underlying estimation processes. The authors of this article applied Standish’s definitions to their own extensive data consisting of 5,457 forecasts of 1,211 real-world projects, totaling hundreds of millions of Euros. The Standish figures didn’t reflect the reality of the case studies at all.

I will leave it to the reader to draw his/her own conclusions with respect to the differences between the Standish Group and the authors. I would, however, quote Jim Highsmith's deep insight on the value system within whose context we measure performance. The following excerpt is from Agile Project Management: Creating Innovative Products:

If we are ultimately to gain the full range of benefits of agile methods, if we are ultimately to grow truly agile, innovative organizations, then, as these stories show, we will have to alter our performance management systems…. We have to be as innovative with our measurement systems as we are with our development methodology.

See pp. 335-358 of Jim's book for details on transforming performance management systems. His bottom line is deceptively simple:

The Standish data are NOT a good indicator of poor software development performance. However, they ARE an indicator of systemic failure of our planning and measurement processes.

Jim refers to the standard definition of project "success" - on time, on budget, all specified features. His focus is usually on software development. I would contend, however, that his good counsel is much broader. IMHO it applies to any IT project.


About this Entry

This page contains a single entry by Israel Gat, published on January 11, 2010, 2:49 PM.
