Thursday, March 19, 2009

Certified Software Tester (CSTE) certification, Certificate in Software Testing

Certified Software Tester (CSTE) certification is a formal recognition of a level of proficiency in the software testing industry. The recipient is acknowledged as having an overall comprehension of the Common Body of Knowledge (CBOK) for the Software Testing Profession.

Inherent Benefits

For the Individual
· CSTE certification is proof that you've mastered a basic skill set recognized worldwide in the Testing arena
· CSTE certification can result in more rapid career advancement
· Results in greater acceptance in the role of an adviser to upper management
· Assists individuals in improving and enhancing their organization's software testing programs
· Motivates personnel having software-testing responsibilities to maintain their professional competency

For the Organization
· A CSTE is expected to be a 'change agent': someone who can change the culture and work habits of individuals to make quality in software testing happen
· Aids organizations in selecting and promoting qualified individuals
· Demonstrates an individual's willingness to improve professionally
· Defines the tasks (skill domains) associated with software testing duties in order to evaluate skill mastery
· Acknowledges attainment of an acceptable standard of professional competency

For further details on the Software Certification Program, visit www.softwarecertifications.org

Usability testing

Usability testing is also known as user-friendliness testing. It is performed when the user interface of the application is an important consideration and needs to be tailored to the specific type of users.

Usability testing is the process of working with end-users, directly or indirectly, to assess how users perceive a software package and how they interact with it. This process will uncover areas of difficulty for users as well as areas of strength. The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability.

Usability testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to use the dialog, and counters to determine how often certain conditions occur (e.g., error messages, help messages, etc.).
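
As a rough illustration of such instrumentation, the Python sketch below wraps a dialog with a timer and event counters. The class and event names are made-up examples, not part of any real UI toolkit; a real application would hook these calls into its own dialog lifecycle:

    import time
    from collections import Counter

    class InstrumentedDialog:
        """Collects simple computer-supported feedback for one dialog."""

        def __init__(self, name):
            self.name = name
            self.events = Counter()  # counts of error messages, help requests, etc.
            self._opened_at = None

        def open(self):
            self._opened_at = time.monotonic()

        def record(self, event):
            self.events[event] += 1  # e.g. dlg.record("error_message")

        def close(self):
            elapsed = time.monotonic() - self._opened_at
            print(f"{self.name}: {elapsed:.1f}s open, events={dict(self.events)}")

    dlg = InstrumentedDialog("export_dialog")  # hypothetical dialog name
    dlg.open()
    dlg.record("help_message")
    dlg.close()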

Often this involves trivial modifications to existing software, but it can result in a tremendous return on investment. Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting the changes so that, in the future, similar situations can be handled with ease.

Wednesday, March 18, 2009

How can World Wide Web sites be tested?

Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, web services, encrypted communications, Internet connections, firewalls, applications that run in web pages (such as JavaScript, Flash, and other plug-in applications), the wide variety of applications that could run on the server side, etc. Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
  • What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, load generation appliances, etc.)?
  • Who is the target audience? What kind and version of browsers will they be using, and how extensive should testing be for these variations? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
  • What kind of performance is expected on the client side (e.g., how fast should pages appear, and how fast should Flash, applets, etc. load and run)?
  • Will downtime for server and content maintenance/upgrades be allowed? How much?
  • What kinds of security (firewalls, encryption, passwords, functionality, etc.) will be required and what is it expected to do? How can it be tested?
  • What internationalization/localization/language requirements are there, and how are they to be verified?
  • How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
  • What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
  • Which HTML and related specifications will be adhered to? How strictly? What variations will be allowed for targeted browsers?
  • Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
  • Will there be any development practices/standards utilized for web page components and identifiers? These can significantly impact test automation.
  • How will internal and external links be validated and updated? How often? (A minimal link-checking sketch follows this list.)
  • Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, connection variability, and real-world internet 'traffic congestion' problems to be accounted for in testing?
  • How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
  • How are Flash, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
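
As a hedged sketch of the link-validation item above, the following Python script fetches a page, extracts anchor targets, and reports each link's HTTP status. It uses only the standard library; the URL is a placeholder, and a production checker would also need to handle redirects, mailto links, retries, and scheduling:

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects href targets from anchor tags."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        page = urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
        parser = LinkExtractor()
        parser.feed(page)
        for link in parser.links:
            target = urljoin(page_url, link)  # resolve relative links
            try:
                status = urlopen(target, timeout=10).status
            except Exception as exc:
                status = exc
            print(target, status)

    check_links("https://example.com/")  # placeholder URL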

What if there isn't enough time for thorough testing?

Use risk analysis, along with discussion with project stakeholders, to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
  • Which functionality is most important to the project's intended purpose?
  • Which functionality is most visible to the user?
  • Which functionality has the largest safety impact?
  • Which functionality has the largest financial impact on users?
  • Which aspects of the application are most important to the customer?
  • Which aspects of the application can be tested early in the development cycle?
  • Which parts of the code are most complex, and thus most subject to errors?
  • Which parts of the application were developed in rush or panic mode?
  • Which aspects of similar/related previous projects caused problems?
  • Which aspects of similar/related previous projects had large maintenance expenses?
  • Which parts of the requirements and design are unclear or poorly thought out?
  • What do the developers think are the highest-risk aspects of the application?
  • What kinds of problems would cause the worst publicity?
  • What kinds of problems would cause the most customer service complaints?
  • What kinds of tests could easily cover multiple functionalities?
  • Which tests will have the best high-risk-coverage to time-required ratio?
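
To make the last two questions concrete, here is a minimal sketch of risk-based prioritization in Python. The features, the 1-5 likelihood/impact scales, and the hour estimates are invented for illustration; the idea is simply to rank tests by risk exposure per unit of testing time:

    # Hypothetical features: (name, likelihood 1-5, impact 1-5, test hours)
    features = [
        ("checkout", 4, 5, 8),
        ("search", 3, 3, 4),
        ("profile_page", 2, 2, 2),
        ("report_export", 3, 4, 6),
    ]

    def priority(likelihood, impact, hours):
        """Risk per hour of testing: higher means test sooner."""
        return (likelihood * impact) / hours

    for name, lik, imp, hrs in sorted(features, key=lambda f: priority(*f[1:]), reverse=True):
        print(f"{name}: risk={lik * imp}, hours={hrs}, priority={priority(lik, imp, hrs):.2f}")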

How can it be known when to stop testing?

This can be difficult to determine. Most modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
  • Deadlines (release deadlines, testing deadlines, etc.)
  • Test cases completed with certain percentage passed
  • Test budget depleted
  • Coverage of code/functionality/requirements reaches a specified point
  • Bug rate falls below a certain level
  • Beta or alpha testing period ends
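
For the bug-rate criterion, a team might track newly reported bugs per week and stop when the rate stays below an agreed threshold. A minimal sketch, with invented counts and an assumed threshold of 10 bugs/week:

    # Hypothetical counts of newly reported bugs in recent weeks
    weekly_new_bugs = [42, 35, 21, 12, 7, 5]

    def rate_is_acceptable(counts, threshold=10, weeks=2):
        """True if the bug rate stayed under `threshold` for the last `weeks` weeks."""
        return all(c < threshold for c in counts[-weeks:])

    print(rate_is_acceptable(weekly_new_bugs))  # True: 7 and 5 are both under 10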

What is 'configuration management'?

Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process (a minimal bug-record sketch follows the list):
  • Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
  • Bug identifier (number, ID, etc.)
  • Current bug status (e.g., 'Released for Retest', 'New', etc.)
  • The application name or identifier and version
  • The function, module, feature, object, screen, etc. where the bug occurred
  • Environment specifics, system, platform, relevant hardware specifics
  • Test case name/number/identifier
  • One-line bug description
  • Full bug description
  • Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
  • Names and/or descriptions of file/data/messages/etc. used in test
  • File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
  • Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
  • Was the bug reproducible?
  • Tester name
  • Test date
  • Bug reporting date
  • Name of developer/group/organization the problem is assigned to
  • Description of problem cause
  • Description of fix
  • Code section/file/module/class/method that was fixed
  • Date of fix
  • Application version that contains the fix
  • Tester responsible for retest
  • Retest date
  • Retest results
  • Regression testing requirements
  • Tester responsible for regression tests
  • Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
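
As one hedged illustration, a subset of these fields could be captured in a simple record like the Python sketch below. The field names and status values are assumptions for illustration, not the schema of any particular tracking tool:

    from dataclasses import dataclass

    @dataclass
    class BugReport:
        """A minimal bug record covering a subset of the fields above."""
        bug_id: str
        status: str        # e.g. "New", "Assigned", "Released for Retest"
        application: str
        version: str
        summary: str       # one-line description
        description: str   # full description and steps to reproduce
        severity: int      # 1 (critical) to 5 (low)
        reproducible: bool
        tester: str
        environment: str = ""
        assigned_to: str = ""
        fix_version: str = ""
        retest_result: str = ""

    bug = BugReport(
        bug_id="BUG-1042", status="New", application="OrderEntry",
        version="2.3.1", summary="Crash on checkout with empty cart",
        description="1. Empty the cart. 2. Click Checkout. 3. App crashes.",
        severity=1, reproducible=True, tester="jdoe",
    )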

What's a 'test case'?

A test case describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly. A test case may contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. The level of detail may vary significantly depending on the organization and project context.
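
As a hedged example, a small automated test case for a hypothetical login feature might look like the Python sketch below; the login function here is a stand-in stub so the example runs on its own, not a real API:

    from dataclasses import dataclass

    @dataclass
    class Session:
        is_authenticated: bool
        landing_page: str

    def login(username, password):
        """Stand-in for the application under test (illustrative stub)."""
        ok = username == "qa_user" and password == "correct-password"
        return Session(is_authenticated=ok, landing_page="dashboard" if ok else "login")

    def test_login_valid_credentials():
        """TC-LOGIN-001: valid credentials reach the dashboard."""
        session = login("qa_user", "correct-password")  # input data
        assert session.is_authenticated                 # expected result
        assert session.landing_page == "dashboard"      # expected result

    test_login_valid_credentials()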

Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
  • Title
  • Identification of software including version/release numbers
  • Revision history of document including authors, dates, approvals
  • Table of Contents
  • Purpose of document, intended audience
  • Objective of testing effort
  • Software product overview
  • Relevant related document list, such as requirements, design documents, other test plans, etc.
  • Relevant standards or legal requirements
  • Traceability requirements
  • Relevant naming conventions and identifier conventions
  • Overall software project organization and personnel/contact-info/responsibilities
  • Test organization and personnel/contact-info/responsibilities
  • Assumptions and dependencies
  • Project risk analysis
  • Testing priorities and focus
  • Scope and limitations of testing
  • Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
  • Outline of data input equivalence classes, boundary value analysis, and error classes (a short worked example follows this list)
  • Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
  • Test environment validity analysis - differences between the test and production systems and their impact on test validity.
  • Test environment setup and configuration issues
  • Software migration processes
  • Software CM processes
  • Test data setup requirements
  • Database setup requirements
  • Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
  • Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
  • Test automation - justification and overview
  • Test tools to be used, including versions, patches, etc.
  • Test script/test code maintenance processes and version control
  • Problem tracking and resolution - tools and processes
  • Project test metrics to be used
  • Reporting requirements and testing deliverables
  • Software entrance and exit criteria
  • Initial sanity testing period and criteria
  • Test suspension and restart criteria
  • Personnel allocation
  • Personnel pre-training needs
  • Test site/location
  • Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
  • Relevant proprietary, classified, security, and licensing issues.
  • Open issues
  • Appendix - glossary, acronyms, etc.
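
To illustrate the equivalence class and boundary value item above: for a field that accepts ages 18 to 65 inclusive (a made-up rule), the equivalence classes are below-range, in-range, and above-range, and the boundary values sit just inside and just outside each edge. A minimal sketch:

    # Hypothetical input rule: age must be between 18 and 65 inclusive.
    LOW, HIGH = 18, 65

    def age_is_valid(age):
        return LOW <= age <= HIGH

    # Equivalence classes: one representative value per class is usually enough.
    representatives = {"below range": 10, "in range": 40, "above range": 70}

    # Boundary values: the edges plus their immediate neighbors.
    boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

    for label, value in representatives.items():
        print(f"{label}: {value} -> {age_is_valid(value)}")
    for value in boundaries:
        print(f"boundary {value} -> {age_is_valid(value)}")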

What steps are needed to develop and run software tests?

The following are some of the steps to consider:
  • Obtain requirements, functional design and internal design specifications, user stories, and other available/necessary information
  • Obtain budget and schedule requirements
  • Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
  • Determine project context relative to the existing quality culture of the product/organization/business, and how it might impact testing scope, approaches, and methods.
  • Identify the application's higher-risk and more important aspects, set priorities, and determine the scope and limitations of tests.
  • Determine test approaches and methods - unit, integration, functional, system, security, load, usability tests, etc.
  • Determine test environment requirements (hardware, software, configuration, versions, communications, etc.)
  • Determine testware requirements (automation tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
  • Determine test input data requirements
  • Identify tasks, those responsible for tasks, and labor requirements
  • Set schedule estimates, timelines, milestones
  • Determine, where appropriate, input equivalence classes, boundary value analyses, and error classes
  • Prepare test plan document(s) and have needed reviews/approvals
  • Write test cases
  • Have needed reviews/inspections/approvals of test cases
  • Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
  • Obtain and install software releases
  • Perform tests
  • Evaluate and report results
  • Track problems/bugs and fixes
  • Retest as needed
  • Maintain and update test plans, test cases, test environment, and testware through life cycle

What's the role of documentation in QA?

Generally, the larger the team/organization, the more useful it will be to stress documentation, in order to manage and communicate more efficiently. (Note that documentation may be electronic, not necessarily in printable form, and may be embedded in code comments, may be embodied in well-written test cases, user stories, etc.) QA practices may be documented to enhance their repeatability. Specifications, designs, business rules, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. may be documented in some form. There would ideally be a system for easily finding and obtaining information and determining what documentation will have a particular piece of information. Change management for documentation can be used where appropriate. For agile software projects, it should be kept in mind that one of the agile values is "Working software over comprehensive documentation", which does not mean 'no' documentation. Agile projects tend to stress the short term view of project needs; documentation often becomes more important in a project's long-term context.

What makes a good QA or Test manager?

A good QA, test, or combined QA/Test manager should:
  • be familiar with the software development process
  • be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
  • be able to promote teamwork to increase productivity
  • be able to promote cooperation between software, test, and QA engineers
  • have the diplomatic skills needed to promote improvements in QA processes
  • have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
  • have people judgment skills for hiring and keeping skilled personnel
  • be able to communicate with technical and non-technical people, engineers, managers, and customers.
  • be able to run meetings and keep them focused

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems, as well as to see 'what's missing', is important for inspections and reviews.

What makes a good Software Test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk or critical areas of an application on which to focus testing efforts when time is limited.

Top 5 common problems in the software development process

  • Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
  • Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
  • Inadequate testing - no one will know whether or not the software is any good until customers complain or systems crash.
  • Changes in requirements - requests to add new features after development goals are agreed on.
  • Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems can be expected.
Let me know if you have any comments on this; email me at jayesh.katariya@gmail.com

Top 5 FAQ for those who are new in Software Testing

If you are new to software testing, learn the following five things in depth. Let me know if you have any queries; you can learn these with the help of different sites.

What is 'Software Quality Assurance'?

What is 'Software Testing'?

Why does software have bugs?

What is verification? Validation? Walkthrough? Inspection?

What is software 'quality'?

Top 5 Software Testing Books

Lessons Learned in Software Testing, by C. Kaner, J. Bach, and B. Pettichord (2001)
An excellent compilation of ideas from three well-respected people in software testing, Cem Kaner, James Bach, and Bret Pettichord. The book contains more than 300 statements/questions/ideas, in the form of a sentence or two, and each is followed by several paragraphs of explanatory information, all in a highly readable format. Includes a great deal of practical advice along with testing philosophies.

Testing Computer Software, by C. Kaner, J. Falk, and H. Nguyen (1999)
This book has been a standard reference for software testers since its first edition was published in 1988, followed by a second edition in 1993. Chapters include "The Objectives and Limits of Testing", "Test Case Design", "Localization Testing", "Testing User Manuals", "Managing a Testing Group", and more. The authors are all experienced in software testing and project management, and the book discusses many of the practical and 'human' aspects of software testing. (Note: the 1999 edition is the same as the 1993 edition.)

Perfect Software and Other Illusions About Testing, by G. Weinberg (2008)
Weinberg is a prolific author of software engineering books, including 'The Psychology of Computer Programming' and the 'Quality Software Management' series. 'Perfect Software' is an accessible and readable discussion of many of the non-technical yet highly challenging aspects of software testing. Topics include 'What Testing Cannot Do', 'Why Not Just Test Everything', 'How to Deal with Defensive Reactions', 'What Makes a Good Test', 'Major Fallacies About Testing', 'Testing Scams', and more.

How to Break Web Software, by M. Andrews and J. Whittaker (2006)
The full title is 'How to Break Web Software: Functional and Security Testing of Web Applications and Web Services'. This is a practical and readable book focusing on web security testing, with chapters on how web security testing issues are different, testing attack strategies, authentication, privacy, web services, and more.

Testing Applications on the Web, by H. Nguyen, R. Johnson, and M. Hackett (2003)
The lead author of this book is also a co-author of another top software testing book, 'Testing Computer Software' (see above). The book covers topics such as a comparison of web testing to traditional testing, test planning, document templates, load and stress testing, functional web testing, database testing, security testing, and mobile web app testing, and it includes real examples of web tests and bugs as well as web test tool information.

Wednesday, February 18, 2009

Top 5 Software Testing Web Sites

  1. http://www.onestoptesting.com/
  2. http://www.testinggeek.com/
  3. http://www.softwaretestinghelp.com/
  4. http://cwe.mitre.org/

more sites will be listed asap...

What is Software Testing? or What is Testing?

Software testing is the process of checking the software under test to verify that it behaves as per the requirement(s).
This results in detecting errors. (Defect and bug are other terms used instead of error; I prefer 'defect'.)

Tuesday, January 13, 2009

Introduction

hello friends... welcome