Welcome Everyone Who:

Wants:

· To do the right things right the first time.

Does not want:

· To do things without value.

Knows that:

· Software Testing never ends; it just stops.

· Software Quality does not happen by accident; it has to be planned.

Believes that:

· There's always one more bug during Software Testing.

· Software Testing is the art of thinking.


Tuesday, March 30, 2010

Reasons for Software Development Failures

Software is an important but troubling technology. Software applications are the driving force of modern business operations, but software is also viewed by many chief executives as one of the major problem areas faced by large corporations [1, 2, 3, 4].

The litany of senior executive complaints against software organizations is lengthy, but can be condensed down to a set of three very critical issues that occur over and over in hundreds of corporations:

  1. Software projects are not estimated or planned with acceptable accuracy.
  2. Software project status reporting is often wrong and misleading.
  3. Software quality and reliability are often unacceptably poor.

When software project managers (PMs) themselves are interviewed, they concur that the three major complaints levied against software projects are real and serious. However, from the point of view of software managers, corporate executives also contribute to software problems [5, 6]. The following are three complaints against top executives:

  1. Executives often reject accurate and conservative estimates.
  2. Executives apply harmful schedule pressure that damages quality.
  3. Executives add major new requirements in mid-development.

Corporate executives and software managers have somewhat divergent views as to why software problems are so prevalent. Both corporate executives and software managers see the same issues, but these issues look quite different to each group. Let us examine the root causes of the five software risk factors:

  1. Root causes of inaccurate estimating and schedule planning.
  2. Root causes of incorrect and optimistic status reporting.
  3. Root causes of unrealistic schedule pressures.
  4. Root causes of new and changing requirements during development.
  5. Root causes of inadequate quality control.

These five risk areas are so critical that they must be controlled if large projects are to have a good chance of a successful outcome.

Root Causes of Inaccurate Estimating and Schedule Planning

Since both corporate executives and software managers find estimating to be an area of high risk, what are the factors triggering software cost estimating problems? From analysis and discussions of estimating issues with several hundred managers and executives in more than 75 companies between 1995 and 2006, the following were found to be the major root causes of cost estimating problems:

  1. Formal estimates are demanded before requirements are fully defined.
  2. Historical data is seldom available for calibration of estimates.
  3. New requirements are added, but the original estimate cannot be changed.
  4. Modern estimating tools are not always utilized on major software projects.
  5. Conservative estimates may be overruled and replaced by aggressive estimates.

The first of these estimating issues – formal estimates are demanded before requirements are fully defined – is an endemic problem which has troubled the software community for more than 50 years [7, 8]. The problem of early estimation does not have a perfect solution as of 2006, but there are some approaches that can reduce the risks to acceptable levels.

Several commercial software cost estimation tools have early estimation modes which can assist managers in sizing a project prior to full requirements, and then in estimating development staffing needs, resources, schedules, costs, risk factors, and quality [9]. For very early estimates, risk analysis is a key task.

These early estimates have confidence levels that initially will not be very high. As information becomes available and requirements are defined, the estimates will improve in accuracy, and the confidence levels will also improve. But make no mistake, software cost estimates performed prior to the full understanding of requirements are intrinsically difficult. This is why early estimates should include contingencies for requirements changes and other downstream cost items.
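One way to make these growing confidence levels concrete is to report every early estimate as a range rather than a point value. The sketch below uses stage multipliers in the spirit of Boehm's well-known "cone of uncertainty" (roughly 4x at initial concept, narrowing as requirements firm up); the 1,000 person-hour point estimate is a hypothetical input, not a figure from the text.

```python
# Sketch: express an early estimate as a range that narrows as the
# project moves through its life cycle. Multipliers approximate
# Boehm's cone of uncertainty; the stage names are illustrative.

UNCERTAINTY = {                     # (low multiplier, high multiplier)
    "initial concept":      (0.25, 4.0),
    "approved definition":  (0.50, 2.0),
    "requirements done":    (0.67, 1.5),
    "detailed design done": (0.80, 1.25),
}

def estimate_range(point_estimate, stage):
    """Return (low, high) bounds for a point estimate at a given stage."""
    low, high = UNCERTAINTY[stage]
    return point_estimate * low, point_estimate * high

lo, hi = estimate_range(1000, "initial concept")
print(f"Early estimate: {lo:.0f} to {hi:.0f} person-hours")
```

Presenting the range, rather than a single number, is one practical way to build the contingency the text recommends directly into the estimate.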

The second estimating issue – historical data is seldom available for calibration of estimates – is strongly related to the first issue. Companies that lack historical information on staffs, schedules, resources, costs, and quality levels from similar projects are always at risk when it comes to software cost estimation. A good software measurement program pays handsome dividends over time [10].

For those organizations that lack internal historical data, it is possible to acquire external benchmark information from a number of consulting organizations. However, the volume of external benchmark data varies among industries, as do the supply sources.

One advantage that function points bring to early estimation is that they are derived directly from the requirements and show the current status of requirement completeness [11]. As new features are added, the function point total will go up accordingly. Indeed, even if features are removed or shifted to a subsequent release, the function point metric can handle this situation well [12, 13].

The third estimating issue – new requirements are added but the original estimate cannot be changed – concerns new and changing requirements arriving without the option to revise the original estimate. It is now known that the rate at which software requirements change runs between 1 percent and 3 percent per calendar month during the design and coding stages. Thus, for a project of 1,000 function points with an average 2 percent per month creep during design and coding, new features surfacing during those stages will add about 12 percent to the final size of the application. This kind of information can and should be used to refine software cost estimates by including contingency costs for anticipated requirements creep [14].
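The creep arithmetic above can be verified in a few lines. The 2 percent per month rate comes from the text; the six-month design-and-coding window is an assumed figure chosen so that the simple (non-compounded) growth matches the "about 12 percent" quoted:

```python
# Worked example of the requirements-creep arithmetic described above.
# A 6-month design-and-coding window is assumed for illustration.

def size_with_creep(base_fp, creep_rate_per_month, months):
    """Final size under simple (non-compounded) monthly requirements growth."""
    return base_fp * (1 + creep_rate_per_month * months)

final = size_with_creep(1000, 0.02, 6)
print(f"{final:.0f} function points")  # 1,120 FP, i.e. ~12% growth
```

A contingency line in the cost estimate sized to that extra 120 function points is exactly the kind of refinement the text recommends.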

When requirements change, it is possible for some projects in some companies to revise the estimate to match the new set of requirements. This is as it should be. However, many projects are forced to attempt to accommodate new requirements without any added time or additional funds. I have been an expert witness in several lawsuits where software vendors were directed by the clients to keep to contractual schedules and costs even though the clients added many new requirements in mid-development.

The rate of requirements creep will be reduced if technologies such as joint application design (JAD), prototyping, and requirements inspections are utilized. Here too, commercial estimating tools can adjust their estimates in response to the technologies that are planned for the project.

The fourth estimating problem – modern estimating tools are not always utilized on major software projects – is the failure to use state-of-the-art software cost estimating methods. It is inappropriate to use rough manual rules of thumb for important projects. If the costs are likely to top $500,000 and the schedules take more than 12 calendar months, then formal estimates are much safer.

Some of the commercial software cost estimating tools used in 2006 include: COCOMO II, Construx Estimate, COSTAR, CostXpert, KNOWLEDGEPLAN, PRICE-S, SEER, SLIM, and SOFTCOST.

For large software projects in excess of 1,000 function points, any of these commercial software cost estimating tools can usually outperform manual estimates in terms of accuracy, completeness, and the ability to deal with tricky situations such as staffing buildups and growth rates in requirements.

Estimating tools have one other major advantage: when new features are added or requirements change, redoing an estimate to accommodate the new data usually only takes a few minutes. In addition, these tools will track the history of changes made during development and, hence, provide a useful audit trail.
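The reason re-estimation takes only minutes is that parametric tools treat effort as a function of size: change the size input and the estimate follows. The commercial models are proprietary, so the sketch below uses Basic COCOMO's public organic-mode equation as a stand-in, with a hypothetical 50 KSLOC project growing by the ~12 percent creep discussed earlier:

```python
# Minimal sketch of parametric re-estimation. Basic COCOMO, organic
# mode (effort = 2.4 * KSLOC^1.05 person-months) stands in for the
# proprietary models inside commercial estimating tools; the 50 KSLOC
# project size is a hypothetical example.

def cocomo_effort_pm(ksloc, a=2.4, b=1.05):
    """Basic COCOMO organic-mode effort, in person-months."""
    return a * ksloc ** b

before = cocomo_effort_pm(50)         # original scope
after = cocomo_effort_pm(50 * 1.12)   # scope after ~12% requirements creep
print(f"{before:.0f} -> {after:.0f} person-months")
```

Because the exponent is greater than 1, effort grows slightly faster than size, which is one reason unpriced requirements creep is so damaging.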

The fifth and last of the major estimating issues – conservative estimates may be overruled and replaced by aggressive estimates – is the rejection of conservative or accurate cost estimates and development schedules by clients or top executives. The conservative estimates are replaced by more aggressive estimates that are based on business needs rather than on the capabilities of the team to deliver. For some government projects, schedules may be mandated by Congress or by some outside authority. There is no easy solution for such cases.

The best solution for preventing the arbitrary replacement of accurate estimates is evaluating historical data from similar projects. While estimates themselves might be challenged, it is much less likely that historical data will be overruled.

It is interesting that high-tech industries are usually somewhat more sophisticated in the use of estimating and planning tools than financial services organizations, insurance companies, and general manufacturing and service groups. The high-tech industries such as defense contractors, computer manufacturers, and telecommunication manufacturers need accurate cost estimates for their hardware products, so they usually have estimating departments that are fully equipped with estimating tools that also use formal estimating methods [15].

Banks, insurance companies, and low-technology service companies do not have a long history of needing accurate cost estimates for hardware products, so they tend to estimate using informal methods and often lack estimating tools for their software PMs.

Root Causes of Incorrect and Optimistic Status Reporting

One of the most common sources of friction between corporate executives and software managers is the social issue that software project status reports are not accurate or believable. In case after case, monthly status reports optimistically claim that all is on schedule and under control until shortly before the planned delivery, when it is suddenly revealed that everything was not under control and another six months may be needed.

What has long been troubling about software project status reporting is the fact that this key activity is severely underreported in software management literature. It is also undersupported in terms of available tools and methods.

The situation of ambiguous and inadequate status reporting was common even in the days of the waterfall model of software development. Inaccurate reporting is even more common in the modern era where the spiral model and other alternatives such as agile methods and the object-oriented paradigm are supplanting traditional methods. The reason is that these non-linear software development methods do not have the same precision in completing milestones as did the older linear software methodologies.

The root cause of inaccurate status reporting is that PMs are simply not trained to carry out this important activity. Surprisingly, neither universities nor many in-house management training programs deal with status reporting.

If a project is truly under control and on schedule, then the status reporting exercise will not be particularly time consuming. Perhaps it will take five to 20 minutes of work on the part of each component or department manager, and perhaps an hour to consolidate all the reports.

But if a project is drifting out of control, then the status reports will feature red flag or warning sections that include the nature of the problem and the plan to bring the project back under control. Here, more time will be needed, but this is time very well spent. The basic rule of software status reporting can be summarized in one phrase: No surprises!

The monthly status reports should consist of both quantitative data on topics such as current size and numbers of defects and also qualitative data on topics such as problems encountered. Seven general kinds of information are reported in monthly status reports:

  1. Cost variances (quantitative).
  2. Schedule variances (quantitative).
  3. Size variances (quantitative).
  4. Defect removal variances (quantitative).
  5. Defect variances (quantitative).
  6. Milestone completions (quantitative and qualitative).
  7. Problems encountered (quantitative and qualitative).

Six of these seven reporting elements are largely quantitative, although there may also be explanations for why the variances occur and their significance.
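A hypothetical shape for the quantitative portion of such a report: each metric carries a planned and an actual value, and the variance is reported as a signed percentage of plan. The metric names follow the list above; all figures are invented for illustration.

```python
# Sketch of the quantitative variances in a monthly status report.
# Each entry is (planned, actual); variance is a signed % of plan.
# All numbers are invented for illustration.

def variance_pct(planned, actual):
    """Signed variance as a percentage of the planned value."""
    return (actual - planned) / planned * 100

report = {
    "cost ($)":          (500_000, 540_000),
    "schedule (months)": (12, 13),
    "size (FP)":         (1_000, 1_080),
    "defects removed":   (900, 780),
}

for metric, (plan, act) in report.items():
    print(f"{metric}: {variance_pct(plan, act):+.1f}% vs. plan")
```

The qualitative elements (milestone completions, problems encountered) would accompany these numbers as narrative sections.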

The most common reason for schedule slippage, cost overruns, and outright cancellation of major systems is that they contain too many bugs or defects to operate successfully. Therefore, a vital element of monthly status reporting is recording data on the actual number of bugs found compared to the anticipated number of bugs. Needless to say, this implies the existence of formal defect and quality estimation tools and methods.

Not every software project needs the rigor of formal monthly status reporting. The following kinds of software need monthly status reports:

  • Projects whose total development costs are significant (>$1,000,000).
  • Projects whose total development schedule will exceed 12 calendar months.
  • Projects with significant strategic value to the enterprise.
  • Projects where the risk of slippage may be hazardous (such as defense projects).
  • Projects with significant interest for top corporate management.
  • Projects created under contract with penalties for non-performance.
  • Projects whose delivery date has been published or is important to the enterprise.
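One way to encode the checklist above is as a simple predicate: a project needs formal monthly status reporting if any criterion applies. The cost and schedule thresholds come straight from the list; the parameter names are hypothetical.

```python
# Sketch: does a project need formal monthly status reports?
# Thresholds come from the checklist above; parameter names are
# hypothetical. Any single criterion is sufficient.

def needs_monthly_reports(cost, months, strategic=False, hazardous=False,
                          executive_interest=False, contract_penalties=False,
                          published_date=False):
    return (cost > 1_000_000 or months > 12 or strategic or hazardous
            or executive_interest or contract_penalties or published_date)

print(needs_monthly_reports(cost=750_000, months=14))  # True: schedule > 12 months
```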

The time and effort devoted to careful status reporting is one of the best software investments a company can make. This should not be a surprise: status reports have long been used for monitoring and controlling the construction of other kinds of complex engineering projects.

During the past 20 years, a number of organizations and development approaches have included improved status reporting as a basic skill for PMs. Some of these include the Project Management Institute, the Software Engineering Institute’s (SEI) Capability Maturity Model® (CMM®), the reports associated with the Six Sigma quality methodology, and the kinds of data reported when utilizing International Organization for Standardization (ISO) Standards.

Unfortunately, from examining the status reports of a number of projects that ended up in court for breach of contract, inaccurate status reporting still remains a major contributing factor to cost overruns, schedule overruns, and also to litigation if the project is being developed under contract.
