Welcome Everyone Who:

Wants:

· To do the right things right, the first time.

Does not want:

· To do things without value.

Knows that:

· Software Testing never ends; it just stops.

· Software Quality does not happen by accident; it has to be planned.

Believes that:

· There's always one more bug during Software Testing.

· Software Testing is the art of thinking.


Wednesday, March 31, 2010

How to select QA Tools

I recently read an article by Margaret Fross on selecting QA tools and am sharing it with you here.

This paper examines methods for identifying and choosing a QA collaboration tool, which can serve to support and reinforce an organization's processes.

The following points will be addressed:
  • Introduction: the increasing importance of quality and how tools can facilitate quality
  • Definition of the term QA collaboration tool: what it is and what it can do for you
  • How to select a QA collaboration tool: key components of a good QA collaboration tool

Introduction

Because of customer expectations and some recent high-profile debacles, the software industry has begun to shift its focus toward improving software quality. For example, companies such as Bank of America, Microsoft, and Cisco were recently featured in an InformationWeek.com article touting "Quality First."

Of course time to market is still critical, but there is somewhat less willingness to sacrifice quality just to push software out the door.

When organizations turn their focus to quality, they can begin by asking themselves a few questions:
  • What parts of the organization have a stake in quality?
  • What key quality assurance components are missing from our current development methods?
  • What tools can be adopted and used by all members within the organization to improve quality?

The answers are fairly simple. All members of an organization have a stake in quality. This includes people selling, answering phones, developing applications, testing applications, marketing, and managing at an upper executive level. Anyone who may have contact at any time with the product or the customer has a stake in quality.

Next, organizations need to evaluate what goes on, from soup to nuts, whenever a new product concept or release is planned. This is the organization's process for getting from the drawing board to the marketplace.

Ultimately, when talking about tools to unite all components of an organization to support its quality objectives, a good place to start is collaboration tools.

What is a QA Collaboration Tool?

A QA collaboration tool encompasses key aspects of the software quality process, providing many functions such as requirements, defect, and test case management in one easy-to-use tool. The purpose of a QA collaboration tool is to bring team members together, providing a central location for the functions above as well as a communication forum, document storage, and shared appointment creation. It can serve to alleviate many challenges faced by organizations, including:

Process: Lack of a defined process leads to chaotic, unpredictable development. A defined process, understood and used by all team members, leads to repeatability and predictability. Project estimators can look at past projects to provide time estimates for future projects, predict defect rates, and make better decisions about resource needs.

Communication: Lack of communication leads to misunderstood requirements, missed deadlines, false status updates and general ignorance in the organization. This results in extra work for all team members. Cross-functional collaboration provides a forum for all team members, regardless of location or department, to communicate effectively, keeping everyone on the same page. This shared knowledge leads to a more coordinated effort resulting in on-time and on-budget delivery of high-quality software.

Traceability: Lack of a robust report tool leads to confusion among team members and upper management. Status reports are a critical component of any project. They serve to alert managers to staffing needs, development issues, and whether the project is on track or not. The ability to pull up accurate, real-time status reports at any given time allows decision makers to take action on items. Managers are better able to make informed decisions and mitigate risks. The team members in charge of delivering work items are empowered to take more ownership of their specific piece of the pie, giving them defined goals to shoot for and providing a better sense of accomplishment.

The benefits described above are just a few of the many that organizations will experience should they choose to adopt a QA tool that promotes collaboration.

What Constitutes a Good QA Collaboration Tool?

When selecting a QA collaboration tool, as with any tool, you must first weigh the needs of the organization against the budget and time allotted to bring in a new tool. Factors to consider are:

Cost: Examine the cost of each tool by looking at various licensing schemes (e.g., named users or concurrent users). Take into consideration any additional fees (e.g., yearly or other support fees). Identify any costs associated with implementation, conversion of existing data, staff training, and hardware.

Ease of use: Too many companies have "shelf-ware" (software collecting dust on shelves) because the tools purchased proved too difficult to implement or too time-consuming to maintain. This is a tremendous waste of money and resources, especially considering the time that originally went into selecting and attempting to implement the tool. Look for a tool that will fit well with what you may already be using. Pay particular attention to any import/export features that may prove useful when moving data from old systems to new systems. Good tools are ones that let an administrator make changes on the fly, that don't require specialized training or development skills, and that are intuitive to your target users. It cannot be stressed enough: the easier a tool is to implement, the more likely you are to use it effectively.

Reliability: Will the tool support the user activity you predict? Can all potential users adequately perform their job functions free from frustration? Is there an uptime or availability guarantee with the tool? Does the vendor disclose how frequently updates are released, and will those updates necessitate downtime for installation? These are important points to consider when selecting a tool.

Support: Verify that the vendor provides support. If you are paying support fees or maintenance fees, what are you getting for those fees? Look at what type of support the vendor provides. Questions to ask are: what is the average call back time for phone support and how fast are e-mail inquiries resolved? What is the charge or availability for on-site support and training?

Now let's look at the components that make up a QA collaboration tool. Remember that the more functions you can get from one tool, the easier the tool will be for your users, because they will not have to go into multiple applications to perform different functions and get information. Desired components are:

Scheduling: A place where meetings and reminders can be created and added to team members' calendars; project deadlines can also be displayed here.

To Do List: All projects have items that do not fit into obvious categories like Requirements and Test Case management and do not require other team members. A To Do List is a place where individuals can add personal items (such as reminders to provide a status update).

Project Tasks: Every project has deliverables and milestones. Deliverables include both what the project produces and what is needed to complete the project. Milestones are important because they provide goals for the group as well as a sense of accomplishment when those goals are reached. Milestones provide a simple way to tell at a glance whether a project is on schedule. Having a place where tasks like deliverables and milestones can be stored, edited, and viewed by all team members promotes the accountability and traceability aspects of a solid quality assurance process.

Requirements Management: Successful projects start with clearly defined and agreed-upon requirements. One way to ensure you get good requirements is to make sure everyone is providing the same information for every requirement. That’s where a function for requirements management comes in. Users have a format with specific fields in which to enter requirement data. Requirements are then assigned directly to team members. A central repository where users can add and edit requirements for specific projects provides a way to streamline and promote the requirements definition process. The history of that particular requirement is also tracked, which can help when inspecting requirements and comparing how a requirement evolved to its current state.


Test Case Management: Test cases should tie back to specific requirements. Using the same tool for both requirements and test case management instantly provides that traceability without impacting the user. Test cases should also be as detailed as possible. Use of a tool for test case development helps ensure the users are all adding critical details when writing their test case. Test cases can then be assigned to specific team members for maintenance and execution. With some tools, changing the status of a test case to "Failed" will automatically generate a Defect record. As with requirements, test cases will need frequent editing and the change history must be tracked.

Defect Management: Defects are typically filed when a test case fails. By selecting a tool with integrated components, users will be able to trace their defects back to specific test cases that trace back to specific requirements. Managing defects in this way assists team members with risk mitigation and analysis of likely failure points in the application. Defect management tools also provide a place to track support issues and enhancement requests. Use of a standard form for reporting issues guarantees several things: easier decision making when setting work priorities, faster resolution of items, and a faster turnaround time once the item reaches the test team.
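As an illustration of the traceability chain described above (requirement to test case to defect), here is a minimal sketch in Python; the class names, fields, and the auto-generated defect behavior are hypothetical and do not reflect any specific commercial tool.

from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str

@dataclass
class TestCase:
    case_id: str
    requirement: Requirement      # each test case ties back to one requirement
    status: str = "Not Run"

@dataclass
class Defect:
    defect_id: str
    test_case: TestCase           # filed against the test case that failed

def fail_test_case(case: TestCase, defect_id: str) -> Defect:
    """Mark a test case as failed and auto-generate the linked defect record."""
    case.status = "Failed"
    return Defect(defect_id=defect_id, test_case=case)

# Trace a defect all the way back to its requirement.
req = Requirement("REQ-1", "User can reset a forgotten password")
tc = TestCase("TC-7", requirement=req)
bug = fail_test_case(tc, "DEF-42")
print(bug.defect_id, "->", bug.test_case.case_id, "->", bug.test_case.requirement.req_id)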

Reporting: Decision makers within any organization rely on accurate, real-time information upon which to base their decisions. This is where reporting is crucial. Look for a tool that allows users to create custom reports that they can then generate at will. A good reporting function facilitates easy distribution, resulting in information sharing. With robust reporting tools, organization members, particularly upper management, will be able to obtain accurate status updates when they most need them. This reduces the amount of time a project or team lead spends providing status to different audiences. The reporting function also comes in handy when estimating future projects. Timelines and staffing needs are predicted more accurately when looking at specific data from past projects.

Document Storage: Over the course of a project many documents are created (design specifications, requirements matrices, test plans, test reports). Using a central repository for all documents created during the life of a project ensures that all team members can access the information they need to complete their tasks. Documents stored in a central location are easier to update and maintain than those stored on local hard drives or scattered around in different network locations.

Open Communication Forum: With the rise of telecommuting, companies have a difficult time getting team members together. Team members not working out of a central office often feel left out of the loop. Creating an open communication forum for both local and remote team members will open the lines of communication. Remote team members will feel more a part of the team and less isolated. Team members will be able to easily communicate when customer needs change, when technical questions come up, and when project timelines shift. This promotes better decision-making and also provides an archive of what information was available when decisions were made. Remember that shared knowledge leads to more coordinated and successful efforts.

If you examine the functions detailed above and compare them to the process components of the Software Development Life Cycle, you will see that these functions provide a vehicle for promoting a stable, predictable quality process.

Additional Components and Considerations

Other than the core features described above, what else can you look for in a collaboration tool?

Look for a tool that supports some form of customization. You will want to be able to add your own fields as well as delete existing fields from the forms used to enter requirements, test cases, and defects. This helps you adapt the tool to your organizational needs.

Don’t forget security. Most companies try to avoid airing their dirty laundry to clients (i.e. defects). If you find yourself needing to allow clients to view, modify, or add requirements, you probably do not want them to view other things—especially the defect list. Being able to add, delete, and modify users (and their access permissions) and change security configurations is a basic function that should be present in any tool you consider.

Any tool you look at seriously should be robust, expandable, and have no limits regarding the number of projects you can create. Look for a tool that will allow you to easily create and manage multiple projects simultaneously.

A good Help system can make a big difference in adoption and usage of any QA collaboration tool. This includes online help, documentation (including training and installation documents), and vendor support. All team members should be thoroughly trained on the product prior to usage and understand the chain of events for resolution of a question or support item. This can save your team members a lot of stress and aggravation and they will be more willing to accept the changes the tool promotes.

What about related products? Does the vendor have any other products in its suite that work with the tool to further enhance the collaborative nature of the tool? For instance, what about a trouble-ticket function? Something that your clients can access to log defects directly without allowing them to see existing defects filed internally or by other clients? Defect management can end up being time-consuming and frustrating if you have to monitor two different systems depending on who filed the defect. Other peripheral components to consider are e-mail and database functions.

Select a tool that will work well with your e-mail system (since typically users are notified when items are assigned to them via e-mail). Verify whether or not existing data can be imported into the tool you are evaluating. Examine the relative ease of getting data into the system, and of unloading the data if necessary. Finally, do not purchase any tool without first receiving a product demonstration. Participate in a multi-week trial and try to start implementing the tool with your data. You would never purchase a car before driving it; purchasing a tool is no different.

Tuesday, March 30, 2010

Reasons for Software Development Failures

Software is an important but troubling technology. Software applications are the driving force of modern business operations, but software is also viewed by many chief executives as one of the major problem areas faced by large corporations [1, 2, 3, 4].

The litany of senior executive complaints against software organizations is lengthy, but can be condensed down to a set of three very critical issues that occur over and over in hundreds of corporations:

  1. Software projects are not estimated or planned with acceptable accuracy.
  2. Software project status reporting is often wrong and misleading.
  3. Software quality and reliability are often unacceptably poor.

When software project managers (PMs) themselves are interviewed, they concur that the three major complaints levied against software projects are real and serious. However, from the point of view of software managers, corporate executives also contribute to software problems [5, 6]. The following are three complaints against top executives:

  1. Executives often reject accurate and conservative estimates.
  2. Executives apply harmful schedule pressure that damages quality.
  3. Executives add major new requirements in mid-development.

Corporate executives and software managers have somewhat divergent views as to why software problems are so prevalent. Both corporate executives and software managers see the same issues, but these issues look quite different to each group. Let us examine the root causes of the five software risk factors:

  1. Root causes of inaccurate estimating and schedule planning.
  2. Root causes of incorrect and optimistic status reporting.
  3. Root causes of unrealistic schedule pressures.
  4. Root causes of new and changing requirements during development.
  5. Root causes of inadequate quality control.

These five risk areas are all so critical that they must be controlled if large projects are to have a good chance of a successful outcome.

Root Causes of Inaccurate Estimating and Schedule Planning

Since both corporate executives and software managers find estimating to be an area of high risk, what are the factors triggering software cost estimating problems? From analysis and discussions of estimating issues with several hundred managers and executives in more than 75 companies between 1995 and 2006, the following were found to be the major root causes of cost estimating problems:

  1. Formal estimates are demanded before requirements are fully defined.
  2. Historical data is seldom available for calibration of estimates.
  3. New requirements are added, but the original estimate cannot be changed.
  4. Modern estimating tools are not always utilized on major software projects.
  5. Conservative estimates may be overruled and replaced by aggressive estimates.

The first of these estimating issues – formal estimates are demanded before requirements are fully defined – is an endemic problem which has troubled the software community for more than 50 years [7, 8]. The problem of early estimation does not have a perfect solution as of 2006, but there are some approaches that can reduce the risks to acceptable levels.

Several commercial software cost estimation tools have early estimation modes which can assist managers in sizing a project prior to full requirements, and then in estimating development staffing needs, resources, schedules, costs, risk factors, and quality [9]. For very early estimates, risk analysis is a key task.

These early estimates have confidence levels that initially will not be very high. As information becomes available and requirements are defined, the estimates will improve in accuracy, and the confidence levels will also improve. But make no mistake, software cost estimates performed prior to the full understanding of requirements are intrinsically difficult. This is why early estimates should include contingencies for requirements changes and other downstream cost items.

The second estimating issue – historical data is seldom available for calibration of estimates – is strongly related to the first issue. Companies that lack historical information on staffs, schedules, resources, costs, and quality levels from similar projects are always at risk when it comes to software cost estimation. A good software measurement program pays handsome dividends over time [10].

For those organizations that lack internal historical data, it is possible to acquire external benchmark information from a number of consulting organizations. However, the volume of external benchmark data varies among industries, as do the supply sources.

One advantage that function points bring to early estimation is that they are derived directly from the requirements and show the current status of requirement completeness [11]. As new features are added, the function point total will go up accordingly. Indeed, even if features are removed or shifted to a subsequent release, the function point metric can handle this situation well [12, 13].

The third estimating issue – new requirements are added but the original estimate cannot be changed – is that of new and changing requirements without the option to change the original estimate. It is now known that the rate at which software requirements change runs between 1 percent and 3 percent per calendar month during the design and coding stages. Thus, for a project of 1,000 function points and an average 2 percent per month creep during design and coding, new features surfacing during design and coding will add about 12 percent to the final size of the application. This kind of information can and should be used to refine software cost estimates by including contingency costs for anticipated requirements creep [14].
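As a rough worked example of the creep arithmetic above (the cited 12 percent implies roughly six months of design and coding at 2 percent per month), a contingency can be sketched as follows; the figures are illustrative only.

# Illustrative only: sizing a contingency for requirements creep.
base_size_fp = 1000      # initial size in function points
monthly_creep = 0.02     # 2 percent of new requirements per calendar month
months = 6               # assumed length of the design and coding stages

added_fp = base_size_fp * monthly_creep * months
print(f"Anticipated creep: {added_fp:.0f} function points "
      f"({added_fp / base_size_fp:.0%} of the original size); "
      f"plan around roughly {base_size_fp + added_fp:.0f} function points.")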

When requirements change, it is possible for some projects in some companies to revise the estimate to match the new set of requirements. This is as it should be. However, many projects are forced to attempt to accommodate new requirements without any added time or additional funds. I have been an expert witness in several lawsuits where software vendors were directed by the clients to keep to contractual schedules and costs even though the clients added many new requirements in mid-development.

The rate of requirements creep will be reduced if technologies such as joint application design (JAD), prototyping, and requirements inspections are utilized. Here too, commercial estimating tools can adjust their estimates in response to the technologies that are planned for the project.

The fourth estimating problem – modern estimating tools are not always utilized on major software projects – is the failure to use state-of-the-art software cost estimating methods. It is inappropriate to use rough manual rules of thumb for important projects. If the costs are likely to top $500,000 and the schedules take more than 12 calendar months, then formal estimates are much safer.

Some of the commercial software cost estimating tools used in 2006 include: COCOMO II, Construx Estimate, COSTAR, CostXpert, KNOWLEDGEPLAN, PRICE-S, SEER, SLIM, and SOFTCOST.

For large software projects in excess of 1,000 function points, any of these commercial software cost estimating tools can usually outperform manual estimates in terms of accuracy, completeness, and the ability to deal with tricky situations such as staffing buildups and requirements growth.

Estimating tools have one other major advantage: when new features are added or requirements change, redoing an estimate to accommodate the new data usually only takes a few minutes. In addition, these tools will track the history of changes made during development and, hence, provide a useful audit trail.

The fifth and last of the major estimating issues – conservative estimates may be overruled and replaced by aggressive estimates – is the rejection of conservative or accurate cost estimates and development schedules by clients or top executives. The conservative estimates are replaced by more aggressive estimates that are based on business needs rather than on the capabilities of the team to deliver. For some government projects, schedules may be mandated by Congress or by some outside authority. There is no easy solution for such cases.

The best defense against the arbitrary replacement of accurate estimates is historical data from similar projects. While estimates themselves might be challenged, it is much less likely that historical data will be overruled.

It is interesting that high-tech industries are usually somewhat more sophisticated in the use of estimating and planning tools than financial services organizations, insurance companies, and general manufacturing and service groups. The high-tech industries such as defense contractors, computer manufacturers, and telecommunication manufacturers need accurate cost estimates for their hardware products, so they usually have estimating departments that are fully equipped with estimating tools that also use formal estimating methods [15].

Banks, insurance companies, and low-technology service companies do not have a long history of needing accurate cost estimates for hardware products so they have a tendency to estimate using informal methods and also have a shortage of estimating tools available for software PMs.

Root Causes of Incorrect and Optimistic Status Reporting

One of the most common sources of friction between corporate executives and software managers is the social issue that software project status reports are not accurate or believable. In case after case, monthly status reports optimistically claim that all is on schedule and under control until shortly before the planned delivery, when it is suddenly revealed that everything was not under control and another six months may be needed.

What has long been troubling about software project status reporting is the fact that this key activity is severely underreported in software management literature. It is also undersupported in terms of available tools and methods.

The situation of ambiguous and inadequate status reporting was common even in the days of the waterfall model of software development. Inaccurate reporting is even more common in the modern era where the spiral model and other alternatives such as agile methods and the object-oriented paradigm are supplanting traditional methods. The reason is that these non-linear software development methods do not have the same precision in completing milestones as did the older linear software methodologies.

The root cause of inaccurate status reporting is that PMs are simply not trained to carry out this important activity. Surprisingly, neither universities nor many in-house management training programs deal with status reporting.

If a project is truly under control and on schedule, then the status reporting exercise will not be particularly time consuming. Perhaps it will take five to 20 minutes of work on the part of each component or department manager, and perhaps an hour to consolidate all the reports.

But if a project is drifting out of control, then the status reports will feature red flag or warning sections that include the nature of the problem and the plan to bring the project back under control. Here, more time will be needed, but this is time very well spent. The basic rule of software status reporting can be summarized in one phrase: No surprises!

The monthly status reports should consist of both quantitative data on topics such as current size and numbers of defects and also qualitative data on topics such as problems encountered. Seven general kinds of information are reported in monthly status reports:

  1. Cost variances (quantitative).
  2. Schedule variances (quantitative).
  3. Size variances (quantitative).
  4. Defect removal variances (quantitative).
  5. Defect variances (quantitative).
  6. Milestone completions (quantitative and qualitative).
  7. Problems encountered (quantitative and qualitative).

Six of these seven reporting elements are largely quantitative, although there may also be explanations for why the variances occur and their significance.

The most common reason for schedule slippage, cost overrun, and outright cancellation of a major system is that they contain too many bugs or defects to operate successfully. Therefore, a vital element of monthly status reporting is recording data on the actual number of bugs found compared to the anticipated number of bugs. Needless to say, this implies the existence of formal defect and quality estimation tools and methods.
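For example, the defect variance mentioned above can be reported as the gap between anticipated and actual defects found to date; a tiny sketch is below, with made-up numbers for illustration.

def defect_variance(anticipated: int, found: int) -> float:
    """Percentage by which defects found so far deviate from the anticipated number."""
    return (found - anticipated) / anticipated * 100

anticipated_to_date = 80   # from the project's defect estimate
found_to_date = 104        # from the defect tracking tool
print(f"Defect variance: {defect_variance(anticipated_to_date, found_to_date):+.1f}%")
# A large positive variance is a red flag worth explaining in the report.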

Not every software project needs the rigor of formal monthly status reporting. The following kinds of software need monthly status reports:

  • Projects whose total development costs are significant (>$1,000,000).
  • Projects whose total development schedule will exceed 12 calendar months.
  • Projects with significant strategic value to the enterprise.
  • Projects where the risk of slippage may be hazardous (such as defense projects).
  • Projects with significant interest for top corporate management.
  • Projects created under contract with penalties for non-performance.
  • Projects whose delivery date has been published or is important to the enterprise.

The time and effort devoted to careful status reporting is one of the best software investments a company can make. This should not be a surprise: status reports have long been used for monitoring and controlling the construction of other kinds of complex engineering projects.

During the past 20 years, a number of organizations and development approaches have included improved status reporting as a basic skill for PMs. Some of these include the Project Management Institute, the Software Engineering Institute’s (SEI) Capability Maturity Model® (CMM®), the reports associated with the Six Sigma quality methodology, and the kinds of data reported when utilizing International Organization for Standardization (ISO) Standards.

Unfortunately, from examining the status reports of a number of projects that ended up in court for breach of contract, inaccurate status reporting still remains a major contributing factor to cost overruns, schedule overruns, and also to litigation if the project is being developed under contract.

Sunday, March 21, 2010

7 principles of Software Development

The First Principle: The Reason It All Exists
A software system exists for one reason: to provide value to its users. All decisions should be made with this in mind. Before specifying a system requirement, before noting a piece of system functionality, before determining the hardware platforms or development processes, ask yourself questions such as: "Does this add real VALUE to the system?" If the answer is "no", don't do it. All other principles support this one. Value is relative, not absolute.

The Second Principle: Keep It Simple!

Software design is not a haphazard process. There are many factors to consider in any design effort. All design should be as simple as possible, but no simpler. This facilitates having a more easily understood, and easily maintained system. This is not to say that features, even internal features, should be discarded in the name of simplicity. Indeed, the more elegant designs are usually the more simple ones. Simple also does not mean "quick and dirty." In fact, it often takes a lot of thought and work over multiple iterations to simplify. The payoff is software that is more maintainable and less error-prone.

The Third Principle: Maintain the Vision

A clear vision is essential to the success of a software project. Without one, a project almost unfailingly ends up being "of two [or more] minds" about itself. Without conceptual integrity, a system threatens to become a patchwork of incompatible designs, held together by the wrong kind of screws.

The Fourth Principle: What You Produce, Others Will Consume
Seldom is an industrial-strength software system constructed and used in a vacuum. In some way or other, someone else will use, maintain, document, or otherwise depend on being able to understand your system. So, always specify, design, and implement knowing someone else will have to understand what you are doing. The audience for any product of software development is potentially large. Specify with an eye to the users. Design, keeping the implementers in mind. Code with concern for those that must maintain and extend the system. Someone may have to debug the code you write, and that makes them a user of your code. Making their job easier adds value to the system.

The Fifth Principle: Be Open to the Future
A system with a long lifetime has more value. In today's computing environments, where specifications change on a moment's notice and hardware platforms are obsolete when just a few months old, software lifetimes are typically measured in months instead of years. However, true "industrial-strength" software systems must endure far longer. To do this successfully, these systems must be ready to adapt to these and other changes. Systems that do this successfully are those that have been designed this way from the start. Never design yourself into a corner. Always ask "what if," and prepare for all possible answers by creating systems that solve the general problem, not just the specific one.

The Sixth Principle: Plan Ahead for Reuse
Reuse saves time and effort. Achieving a high level of reuse is arguably the hardest goal to accomplish in developing a software system. The reuse of code and designs has been proclaimed as a major benefit of using object-oriented technologies. However, the return on this investment is not automatic. To leverage the reuse possibilities that OO programming provides requires forethought and planning. There are many techniques to realize reuse at every level of the system development process. Those at the detailed design and code level are well known and documented. New literature is addressing the reuse of design in the form of software patterns. However, this is just part of the battle. Communicating opportunities for reuse to others in the organization is paramount. How can you reuse something that you don't know exists?

The Seventh Principle: Think!
This last principle is probably the most overlooked. Placing clear, complete thought before action almost always produces better results. When you think about something, you are more likely to do it right. You also gain knowledge about how to do it right again. If you do think about something and still do it wrong, it becomes valuable experience. A side effect of thinking is learning to recognize when you don't know something, at which point you can research the answer. When clear thought has gone into a system, value comes out. Applying the first six principles requires intense thought, for which the potential rewards are enormous.

Saturday, March 20, 2010

QA in Startups

Much has been written about the risks of e-Business applications. "Web-time" is a widely acknowledged phenomenon. We all agree that quality is imperative for an e-Business, as all the competition is just a click away. Unfortunately, most of us can also agree that a Web startup is not an environment in which quality testing is typically found.

Development is fast and loose. Marketing is pushing to beat the competition to market. The rules change every day. An e-Business needs to respond immediately to market pressures. And an e-Business cannot afford poor performance or that big revenue drain, downtime.

Any startup or DotCom is a work in progress. Even when the whole company understands and is committed to the importance of quality assurance testing, unexpected events lead to surprises. The key is to keep plugging away at the following tasks:
  • Work Smart
  • Define Processes

I. Work Smart

Here's my advice for making the testing organization lean and mean. This is especially critical in an Extreme Programming environment or anywhere the ratio of developers to testers is high.

Evaluate tools. Put as much time as you can into evaluating tools such as those for automated testing, defect tracking, and configuration management. Identify the vendors who can help you the most, and get as much information from them as you can. Ask fellow testers for their recommendations and experiences. Install new tools and try them out. Select tools that are appropriate for you and your company. It doesn't do any good to buy a tool you don't have time to learn how to use, especially if your testing team is small. You might end up choosing tools that are lesser known but still meet your needs. For example:
  • Depending on your budget, you can use a product as expensive as QTP or go with the open source Selenium.
  • For defect tracking you can use TestTrack. It is far less expensive than its competitors, yet it is easy to implement and customize. If you are really tight on budget, you can use the free Bugzilla.
  • For configuration management, consider Perforce, an inexpensive yet robust tool that is easy to implement and learn. If you are looking for free tools, use the freeware CVS; it lacks some features, but if the development team is small and can work around its drawbacks, it will work.
  • These tools won't necessarily meet your needs - just be open and creative when evaluating tools. Investigate alternatives!

Design automated tests that work for you. Good automated tests (see the sketch after this list) should:

  • Be modular and self-verifying, to keep up with the pace of development.
  • Verify the minimum criteria for success. Make sure the developers write comprehensive unit tests; acceptance tests can't cover every path through the code.
  • Perform each function in one and only one place, to minimize maintenance time.
  • Contain modules that can be reused, even for unrelated projects.
  • Do the simplest thing that works. This XP value applies as much to testing as to coding.
  • Report results in an easy-to-read format. Create easy-to-read reports from your test results and post them so that everyone can monitor status and keep the project on track. The project team will gain confidence as they see the percentage of successful tests increase.
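Below is a minimal sketch of what a modular, self-verifying test can look like, using Python's built-in unittest module. The login function and the helper class are hypothetical stand-ins, not code from any particular project or tool.

import unittest

def login(username: str, password: str) -> bool:
    """Hypothetical stand-in for the application code under test."""
    return username == "demo" and password == "secret"

class LoginSteps:
    """Reusable module: other suites can import and reuse these steps."""
    @staticmethod
    def attempt_login(username: str, password: str) -> bool:
        return login(username, password)

class TestLogin(unittest.TestCase):
    def test_valid_credentials_accepted(self):
        # Self-verifying: the assertion decides pass/fail, no manual checking.
        self.assertTrue(LoginSteps.attempt_login("demo", "secret"))

    def test_invalid_credentials_rejected(self):
        self.assertFalse(LoginSteps.attempt_login("demo", "wrong"))

if __name__ == "__main__":
    # The runner's summary doubles as an easy-to-read results report.
    unittest.main(verbosity=2)

Each step is performed in one place (the reusable helper), the assertions make the tests self-verifying, and the runner output serves as the status report.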

In addition, developers should try to design the software with testability in mind. This might mean building hooks into the application to help automate acceptance tests. Push as much functionality as possible to the backend, because it is much easier to automate tests against a backend than through a user interface.
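To illustrate the point about testing against the backend, the sketch below calls a hypothetical HTTP endpoint directly instead of driving the user interface; it assumes the third-party requests package is installed, and the URL and payload are made up for illustration.

import requests

def test_create_order_via_backend():
    # Hypothetical API contract; adjust to whatever your backend actually exposes.
    payload = {"sku": "ABC-123", "quantity": 2}
    response = requests.post(
        "http://test.example.com/api/orders",  # hypothetical test-environment URL
        json=payload,
        timeout=5,
    )
    assert response.status_code == 201
    assert response.json().get("status") == "accepted"

A test like this runs faster and breaks less often than the equivalent click-through in a browser, which is why pushing logic to the backend pays off for automation.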

Search the Web for resources. Here are some examples:

www.softwareqatest.com/index.html
Everything from basic definition and articles on how to test Web applications to comprehensive lists of Web tools to links to other informative sites.

www.kaner.com/writing.htm
Articles by Cem Kaner

www.qaforums.com

• Get input about quality from all departments in the company.

• Insist on a test environment that is an exact replica of, but entirely independent from, production. You can't emulate a production load without the equivalent of production hardware and software. Since the production architecture is likely to change in response to increased traffic and other considerations, this is a moving target. The test environment will need to be updated in sync with the production environment. The architecture is key, too. If the production servers are clustered, your testing had better be done in a clustered environment. If part of an application runs on a stand-alone machine, it must do so in your test environment. Establishing and maintaining development, test, and production environments can be a huge challenge, but it is very important. Even when the entire company is sold on the idea of a proper test environment, there are business and technical reasons (read: excuses) that get in the way of reproducing the production environment for testing. Don't be complacent, and never give up. Make sure you have the best test environment you can get for each application going into production, and work actively with your information systems team to get the environment you really need. Even small applications can deceive you.

In short: Dig your heels in and refuse to launch until some semblance of a test environment is established. Remember, it is harder to get the test environment once the new application is in production. Make it a requirement of release.


II. Define Processes


• Define quality. Work with marketing and product development to define quality for each product: should the priority be good, fast, or cheap?


• Enforce the process.

• Innovate! Look for new ways. You have creative people at your company who can help! Get input from as many different groups as you can.

Summary - As You Grow

All companies change as they grow beyond the 'startup' size and environment. As your organization grows, educate new employees about project processes and quality practices. Listen to them; take advantage of their fresh outlook and new ideas. Take the initiative. If a gap results from a re-organization, fill it yourself. Quality assurance can be a frustrating job, especially in a Web startup. Pick your battles. Keep striving for better quality. Above all, enjoy the experience!