Today's employers want test professionals, not test dummies. Certain tester skills are highly sought after because they make the testing process more efficient and effective. The checklist below gives a cross-section of the skills a professional tester should have:

✚ Know the best places to find bugs and defects, and proven methods to discover them
✚ Know how to prioritize testing given the risks
✚ Develop test strategies that balance testing needs against budget and schedule commitments
✚ Apply testing methodologies, processes and analytical techniques
✚ Have basic technical skills in test support tooling (e.g. VB/Perl/batch-file scripting), the fundamentals of SQL, and MS Office tools for data collection, generation and reporting (see the sketch after this list)
✚ Understand test automation tools and frameworks: their advantages, disadvantages and the adoption process
✚ Know review and audit techniques
✚ Understand non-functional testing, such as load and performance, availability, failover/recovery, compatibility, usability and security
✚ Manage projects and coordinate testing
✚ Build a test team and promote the culture of an effective test team
✚ Gather metrics and measurements to evaluate test effectiveness and product quality, and use communication mechanisms to report results to management
✚ Know approaches to improve the testing process.
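As a small illustration of the scripting and SQL fundamentals mentioned above, here is a minimal sketch (in Python rather than VB/Perl) that pulls pass/fail counts from a results database; the database file, the `results` table and its columns are assumptions for illustration only.

```python
import sqlite3

# Hypothetical schema: a "results" table with columns
# (test_id, module, status), where status is 'pass' or 'fail'.
conn = sqlite3.connect("test_results.db")

query = """
SELECT module,
       SUM(CASE WHEN status = 'fail' THEN 1 ELSE 0 END) AS failures,
       COUNT(*) AS total
FROM results
GROUP BY module
ORDER BY failures DESC
"""

# Print a simple per-module failure report
for module, failures, total in conn.execute(query):
    print(f"{module}: {failures}/{total} failed ({100.0 * failures / total:.1f}%)")

conn.close()
```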

Not everyone on the team needs to have all these skills; indeed, it may be quite rare to find one person who is strong in every area. However, there should be a strategy to ensure that the assembled team covers a diverse profile of these skills. If you are an employer, take these skills into account when writing a tester's job description. If your staff are deficient in some of these areas, ask yourself what that is costing your projects, and plan a strategy to grow the capability of the existing team and/or recruit additional staff. If you are a tester, do a quick check to see whether you have skills and experience in these areas. If something is missing, develop a strategy to build up those skills; you will reap the rewards.

Other comments: Certified software testing professionals with specialist expertise in test management, test automation or load/performance testing can increase their employment prospects and income. Certification is valued for two reasons: companies want to reassure their customers ("we have certified software testing professionals, so we deliver a high-quality product"), and some customers require certified individuals on their projects.

For more information on CSTP certification, contact Dr Kelvin Ross, the coordinator of the CSTP program. E-mail: kelvinr@kjross.com.au

There are many good reasons to start automation. But sometimes enterprise test automation begins with enthusiasm and ends up back at manual testing only. Here are 5 mistakes that management may make during the implementation of automation:

1. "Test automation is the answer to our testing problems." Management says: "We have so many tests and so little time; we will use test automation. It will solve our problems."

However, this is not true. Implementing and designing scripts with an automation tool can take 3 to 100 times longer than writing manual test scripts. So if your project is time-critical and in its final stage, introducing test automation might not be a good idea.

2. "Automation script development (or automating the test process) cannot be started because the application is not yet stable; the developers are still building it."

This is not true. The phases of test automation and of the software development process are almost identical; automation is itself a development process. Planning for test automation should start alongside the initial development activities, and script development can start early. You can get access to the software (while it is still under construction) from the development team and begin preparing the automation scripts; the low-level design documents can help you here.

3. "Now our team has a test automation tool. They will develop tests more efficiently."

Automated test development is code development. Are you sure your test team is trained in this area? There should be at least one programmer on the team, and the automation team must be given training so that they design better tests.

4. Management (to the test manager): "A sales manager demonstrated an automation tool to us, and it seems the tool will add value to our testing. We think we should buy it."

On which application did the vendor give the demonstration? Was it yours, or the vendor's own demo application? Does the tool support the technology on which your application is developed? How much programming is needed to develop the tests? (Note: this is directly linked to the skills of your people and to the return on investment.) Consider the full cost of automating your testing.

Involve automation experts / test architects when buying any automation tool.

5. "It has been three months and we have not seen any progress so far."

How many testers are involved in automation? Are they 100% allocated to automation?

Sometimes a single automation expert is allocated to the project, because management assumes automation needs fewer resources. That is not always true. In addition, the automation team must be 100% dedicated to the automation project; they should not be shared with other projects.

Finally, remember: a fool with a tool is still a fool. ~ Anne Mette Jonassen Hass (AST Guide)

Neatly document the test plan and test strategy for the application under test. The test plan serves as the basis for all testing activities throughout the testing life cycle. Being an umbrella activity, it should reflect the customer's needs in terms of milestones to be met, the test approach (test strategy), resources required, etc. The plan and strategy should give the customer clear visibility into the testing process at any point in time.

Developing separate functional and performance test plans gives much more clarity to both kinds of testing. A performance test plan is optional if the application has no performance requirements.

Below are some useful Do’s and Don'ts:

The Do’s:

* Develop the test plan based on an approved project plan
* Document the test plan with the major testing milestones
* Identify and document all deliverables due at the end of these milestones
* Identify the resources (both hardware/software and human) required
* Identify all external systems that will interact with the application. For example, the application may get its data from a mainframe server; identifying such systems also helps you plan for integration testing
* If performance testing is in scope, clearly identify the application's performance requirements, such as hits per second, response time and number of concurrent users (see the sketch after this list). Details of the different methodologies used during the performance testing phase (spike testing, endurance testing, stress testing, capacity testing) can also be documented
* Get the test plan approved
* Include "Features to be tested" to communicate to the customer what will be tested during the testing life cycle
* Include "Features not to be tested" to communicate to the customer what will not be tested during the testing life cycle (as part of risk management)
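To make the performance requirements above concrete, here is a minimal sketch of a response-time check, assuming Python with the `requests` library and a hypothetical target URL. A real load test would use a dedicated tool; this only illustrates measuring response time under a handful of concurrent users.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://example.com/login"   # hypothetical endpoint under test
CONCURRENT_USERS = 10              # the "number of concurrent users" requirement
MAX_RESPONSE_SECONDS = 2.0         # example response-time requirement

def timed_request(_):
    # Issue one GET and measure the wall-clock response time
    start = time.perf_counter()
    response = requests.get(URL, timeout=30)
    return response.status_code, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_request, range(CONCURRENT_USERS)))

slowest = max(elapsed for _, elapsed in results)
print(f"Slowest of {CONCURRENT_USERS} concurrent requests: {slowest:.2f}s")
print("PASS" if slowest <= MAX_RESPONSE_SECONDS else "FAIL")
```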

The Don'ts:

* Do not use draft (unapproved) test plans for reference
* Do not ignore the test strategies identified in the test plan during testing.
* Do not make changes to an approved test plan without an official change request
* Do not mix the stages of testing (unit testing, integration testing, system testing, functional testing, etc.) with the types of testing (regression testing, sanity testing, user interface testing, smoke testing, etc.) in the test plan. Identify each uniquely, with its respective entry and exit criteria.

Building trust may seem mysterious, something that just happens or develops through an unknowable process. The good news is that there are concrete actions that tend to build trust (and actions that are almost guaranteed to break it).

First, let's agree on a definition of trust in the workplace. We all know that trust is the foundation of teamwork. But to hear some people talk, you'd think the team members were getting married, not creating software together. What we need in the workplace is professional trust. Professional trust says: "I am confident that you are competent to do the job, that you share relevant information, and that you have good intentions towards the team." Taken together, this is trust in communication, commitment and competence.
0. Trust other people

The zeroth step in building a climate of trust is to display trust. One way is to make a generous interpretation when someone makes a mistake or disappoints you in some way. People who always jump to the worst conclusion about others' competence and motivation inspire distrust, not trust.

Most people are not trying to be evil or stupid; give them the benefit of the doubt until you have data that proves otherwise.
1. Directly address issues

Ruffled feathers come with close collaboration; it is bound to happen that one person rubs another the wrong way. Maybe it's how your cube mate chews his gum or listens to voicemail on the speakerphone. Maybe someone used your laptop and changed the preferences, or broke the build and then left for lunch.

When a teammate's behavior bothers you, speaking directly to that person builds trust. It says: "I value our working relationship, and I'm ready to have an uncomfortable conversation to make it better." It says: "You know where you stand with me; I will not go behind your back."

These conversations are not always easy, but the alternatives are worse.

Some people avoid the uncomfortable discussion and let their anger and resentment build until it explodes. That almost always causes damage that is harder to repair than the original irritation.

Another way people avoid the conversation is to go to their manager about the problem. If you really want to undermine trust with colleagues, play tattletale and complain to the boss. (As with everything, there are exceptions: if the situation involves sexual harassment, an impropriety or physical safety, talk to your boss.)

When people do not know how to have difficult conversations, or rely on someone else to navigate a working relationship for them, trust erodes. That is why people need a framework for interpersonal feedback.
2. Share relevant information

Knowledge is power, but it is more powerful when it is shared. When someone on the team holds back an opinion or concern and comes back later to say, "I thought it was a bad idea from the outset," the other team members feel blindsided. This breaks trust. If you do not support an idea or an approach, say so. (Of course, there are more effective and less effective ways to do it.)

Relevant information is about the task, but it is also about you. People tend to trust people they know as individuals and can identify with. Shared experiences and common interests form the solid ground people can stand on when there is friction or conflict. You do not have to share your deepest secrets, but letting other people on the team know something about your life outside work makes you "real." It is difficult to trust a cipher, but much easier to trust and be generous with someone who shares some of the same challenges and interests you do.
3. Follow through on commitments, or give prompt notice when you cannot

For teams to function, team members need to believe that their colleagues are reliable. Without confidence that others are dependable and will carry their share of the load, few will commit to a common goal.

No reasonable person expects everyone to meet every commitment every time. Sometimes a piece of code turns out to be more complex than expected, or we discover that we did not understand the task when we made our estimate. But when you wait until the task is due to let people know it is going to be late, you seem unreliable. So let people know as soon as you know, and renegotiate.
4. Say no when you mean No.

Sometimes you cannot take on another job or do the favor someone asks. Most of us are programmed from an early age to please other people, and we are afraid of being labeled selfish or "not a team player" if we say no. But if you really cannot do what is asked, it is more respectful to say no and let the other person get their needs met elsewhere.

Saying yes without following through leads others to doubt your word. If you cannot say no, your yes means nothing.
5. Share what you know and what you do not know

Feel free to share your knowledge (without inflicting help). But also be prepared to hear the ideas of others, build on them, and help others shine. Admit when you do not know the answers. There is nothing worse than a know-it-all who is wrong.

It may seem paradoxical, but competence trust, your colleagues' confidence in your abilities, sometimes comes from admitting that you do not have all the answers. Asking for help lets others see you as a real person, and people generally like to be helpful.

Most people enter a new situation with a baseline level of trust. That level can be high or low, depending on their outlook and life experiences. From there, every interaction is an opportunity to increase or decrease trust. With the techniques listed above, you are now armed with several ways to build a solid foundation of trust for your team.

After finalizing the scope of testing for the current project, the test lead concentrates on preparing the test plan document, which defines work allocation in terms of what, who, when and how to test. Preparation of the test plan document follows the approach below.

Test Planning - Step by Step

1] Team Formation:

In general, the test planning process starts with forming the testing team. To define a testing team, the test plan author considers the factors below:

1. Availability of testers
2. Test duration
3. Availability of test environment resources

2] Identify Tactical Risks:

After the testing team is formed, the plan author analyzes the possible risks and their mitigations, for example:

# Risk 1: Test Engineers' lack of domain knowledge

# Soln 1: Extra training for the Test Engineers

# Risk 2: Lack of resources

# Risk 3: Lack of budget (i.e., too little time)

# Soln 3: Increase the team size

# Risk 4: Lack of Test data

# Soln 4: Conduct tests on the basis of past experience (i.e., ad hoc testing), or contact the client for data

# Risk 5: Lack of developer process rigor

# Soln 5: Report to the Test Lead for further communication between the test and development project managers

# Risk 6: Delay of modified build delivery

# Soln 6: Extra hours of work are needed

# Risk 7: Lack of communication between Test Engineers and the test team, and between the test team and the development team

3] PREPARE TEST PLAN:

After completing testing team formation and risk analysis, the test plan author concentrates on the test plan document, in the IEEE 829 format:

01) Test Plan ID: Unique No or Name e.g. STP-ATM

02) Introduction: A description of the project

03) Test Items: Modules / Functions / Services / Features / etc.

04) Features to be tested: Modules the team is responsible for in test design (test cases are prepared for new or modified modules)

05) Features not to be tested: Which features are not to be tested, and why (e.g., test cases already exist for the old modules, so those modules need no new test design)

The above (3), (4) & (5) decide which modules are to be tested -> What to test?

06) Approach: List of the selected testing techniques to be applied to the above modules, with reference to the TRM (Test Responsibility Matrix).

07) Feature pass or fail criteria: Description of when a feature is considered passed or failed (applies while the environment is good; used when drawing conclusions after testing)

08) Suspension criteria: Possible abnormal situations that arise while testing the above features (the environment is not good; used to draw conclusions during testing)

09) Test Environment: The hardware & software required to test the above features

10) Test Deliverables: The testing documents to be prepared (the types of documents the testers produce during testing)

11) Testing Tasks: Necessary tasks to complete before starting each feature's testing

The above (6) to (11) specify -> How to test?

12) Staff & Training: Names of the selected Test Engineers and their training requirements

13) Responsibilities: Work allocation to every member of the team (interdependent modules are given to a single Test Engineer)

14) Schedule: Dates & times for testing the modules

The above (12) to (14) specify -> Who tests, and when?

15) Risks & Mitigations: Possible testing-level risks and the solutions to overcome them

16) Approvals: Signatures of the test plan author and the Project Manager / Quality Analyst

4] Review Test Plan:

After completing the plan document, the test plan author conducts a review of it for completeness & correctness. In this review, the plan author performs the following coverage analysis:

* BRS-based coverage (reviews "What to test?")
* Risk-based coverage (reviews "When & who to test?")
* TRM-based coverage (reviews "How to test?")

5] TEST DESIGNING:

After completion of test planning and the required training for the testing team, the team members start preparing the list of test cases for their responsible modules. There are three test case design methods that cover core-level testing (usability & functionality testing), as listed below; a sketch of the input-domain-based method follows the list.

a) Business Logic based test case design (S/w RS)

b) Input Domain based test case design (E-R diagrams / Data Models)

c) User Interface based test case design (MS-Windows rules)
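To illustrate input-domain-based test case design, here is a minimal sketch of boundary value analysis for a single numeric field; the field name and its valid range are assumptions for illustration.

```python
def boundary_values(minimum, maximum):
    """Classic boundary value analysis: values at, just inside,
    and just outside each boundary of a numeric input domain."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# Hypothetical field: "age" accepts values 18..60
for value in boundary_values(18, 60):
    expected = "accept" if 18 <= value <= 60 else "reject"
    print(f"age = {value:3d} -> expected: {expected}")
```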

I) Introduction

When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science. (Lord Kelvin)

Why do we need metrics?

“We cannot improve what we cannot measure.”

“We cannot control what we cannot measure”

Test metrics help us to:

* Take decisions about the next phase of activities
* Provide evidence for a claim or prediction
* Understand the type of improvement required
* Take decisions on process or technology changes

II) Types of metrics

Base Metrics (Direct Measure)

Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.

Ex: # of Test Cases, # of Test Cases Executed

Calculated Metrics (Indirect Measure)

Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).

Ex: % Complete, % Test Coverage

Base Metrics & Test Phases

* # of Test Cases (Test Development Phase)
* # of Test Cases Executed (Test Execution Phase)
* # of Test Cases Passed (Test Execution Phase)
* # of Test Cases Failed (Test Execution Phase)
* # of Test Cases Under Investigation (Test Development Phase)
* # of Test Cases Blocked (Test Development/Execution Phase)
* # of Test Cases Re-executed (Regression Phase)
* # of First Run Failures (Test Execution Phase)
* Total Executions (Test Reporting Phase)
* Total Passes (Test Reporting Phase)
* Total Failures (Test Reporting Phase)
* Test Case Execution Time (Test Reporting Phase)
* Test Execution Time (Test Reporting Phase)

Calculated Metrics & Phases

The metrics below are created in the Test Reporting phase or the post-test analysis phase (a small computational sketch follows the list):

* % Complete
* % Defects Corrected
* % Test Coverage
* % Rework
* % Test Cases Passed
* % Test Effectiveness
* % Test Cases Blocked
* % Test Efficiency
* 1st Run Fail Rate
* Defect Discovery Rate
* Overall Fail Rate
III) Crucial Web Based Testing Metrics

Test Plan coverage on Functionality

Total number of requirements v/s the number of requirements covered by test scripts.

* (Number of requirements covered / total number of requirements) * 100

Define the requirements at the time of effort estimation.

Example: the total number of requirements estimated is 46; 39 were tested and 7 were blocked. The coverage is therefore (39 / 46) * 100 ≈ 84.8%.

Note: define requirements clearly at the project level.

Test Case defect density

The number of test scripts that found errors v/s the number of test scripts developed and executed.

* (Defective Test Scripts / Total Test Scripts) * 100

Example: total test scripts developed 1360, total test scripts executed 1280, total test scripts passed 1065, total test scripts failed 215

So, the test case defect density is (215 / 1280) * 100 = 16.8%.

This 16.8% can also be called the test case efficiency %, since it depends on the number of test cases that uncovered defects.

Defect Slippage Ratio

Number of defects slipped (reported from production) v/s number of defects reported during execution.

* Number of Defects Slipped / (Number of Defects Raised - Number of Defects Withdrawn)

Example: customer-filed defects are 21, total defects found during testing are 267, and total invalid defects are 17.

So, the slippage ratio is

[21 / (267 - 17)] X 100 = 8.4%

Requirement Volatility

Number of requirements agreed v/s number of requirements changed.

* (Number of Requirements Added + Deleted + Modified) * 100 / Number of Original Requirements
* Ensure that the requirements are normalized or defined properly while estimating

Example: the VSS 1.3 release initially had 67 requirements in total; later, 7 new requirements were added, 3 were removed from the initial set, and 11 were modified.

So, the requirement volatility is

(7 + 3 + 11) * 100 / 67 = 31.34%

This means almost one third of the requirements changed after the initial identification.

Review Efficiency

Review efficiency is a metric that offers insight into the quality of reviews and testing.

Some organizations also call this "static testing" efficiency, and aim to find a minimum of 30% of defects in static testing.

Review efficiency = 100 * (total number of defects found by reviews / total number of project defects)

Example: a project found a total of 269 defects in its various reviews, which were fixed, and the test team then reported 476 valid defects.

So, the review efficiency is [269 / (269 + 476)] X 100 = 36.1%
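Here is a minimal sketch pulling the three formulas above together, using the example numbers from this section; the helper names are my own, not a standard API.

```python
def defect_slippage_ratio(slipped, raised, withdrawn):
    # Defects reported from production vs. valid defects found in testing
    return 100.0 * slipped / (raised - withdrawn)

def requirement_volatility(added, deleted, modified, original):
    # Churn in the requirement set relative to the original baseline
    return 100.0 * (added + deleted + modified) / original

def review_efficiency(review_defects, test_defects):
    # Share of all project defects caught by (static) reviews
    return 100.0 * review_defects / (review_defects + test_defects)

print(f"Slippage ratio:         {defect_slippage_ratio(21, 267, 17):.1f}%")    # 8.4%
print(f"Requirement volatility: {requirement_volatility(7, 3, 11, 67):.2f}%")  # 31.34%
print(f"Review efficiency:      {review_efficiency(269, 476):.1f}%")           # 36.1%
```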

Efficiency and Effectiveness of Processes

* Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.
* Efficiency: Doing the thing right. It concerns the resources used for the service to be rendered.

Metrics for Software Testing

Defect Removal Effectiveness

DRE = (defects removed during the development phase / defects latent in the product) x 100%

where: defects latent in the product = defects removed during the development phase + defects found later by the user

Efficiency of the Testing Process (define size in KLOC, function points, or requirements)

Testing efficiency = size of software tested / resources used
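A quick sketch of the DRE formula defined above, with hypothetical defect counts for illustration:

```python
def defect_removal_effectiveness(removed_in_development, found_by_users):
    # Latent defects = defects removed in development + defects that escaped to users
    latent = removed_in_development + found_by_users
    return 100.0 * removed_in_development / latent

# Hypothetical example: 450 defects removed before release, 50 found later by users
print(f"DRE: {defect_removal_effectiveness(450, 50):.1f}%")  # 90.0%
```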

"The best tester isn't the one who finds the most bugs or embarrasses the most programmers," says Dr. Cem Kaner, Professor of Software Engineering at the Florida Institute of Technology. "The best tester is the one who gets the right bugs fixed."

Finally, Bug Advocacy is on the way. The first class of 2011 starts on Feb 13, 2011. I would suggest that every tester go through this course.

About Bug Advocacy

Bug reports are not just neutral technical reports. They are persuasive documents. The key goal of the bug report author is to provide high-quality information, well written, to help stakeholders make wise decisions about which bugs to fix. Key aspects of the content of this course include:

* Defining key concepts (such as software error, quality, and the bug processing work-flow)
* The scope of bug reporting (what to report as bugs, and what information to include)
* Bug reporting as persuasive writing
* Bug investigation to discover harsher failures and simpler replication conditions
* Excuses and reasons for not fixing bugs
* Making bugs reproducible
* Lessons from the psychology of decision-making: bug-handling as a multiple-decision process dominated by heuristics and biases.
* Style and structure of well-written reports
* Gaining real world experience writing bug reports in a public forum, suitable for presenting at interviews or to an employer.

The video lectures can be found at: http://www.viddler.com/explore/testingtruck/videos/2/

More Details on Bug Advocacy can be found at: http://www.associationforsoftwaretesting.org/training/courses/bug-advocacy/

A software metric is a measure of some property of a piece of software or its specifications.

A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.

A quality metric is a quantitative measurement of the degree to which an item possesses a given quality attribute.

Metrics are among the most important responsibilities of the test team. Metrics allow a deeper understanding of the application's performance and behaviour, and fine-tuning of the application can be guided only by metrics. In a typical QA process, there are many metrics which provide information.

The following can be regarded as the fundamental metrics:

• Functional or Test Coverage Metrics.

• Software Release Metrics.

• Software Maturity Metrics.

• Reliability Metrics.

– Mean Time To First Failure (MTTFF).

– Mean Time Between Failures (MTBF).

– Mean Time To Repair (MTTR).
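As a small illustration of the reliability metrics above, here is a sketch that derives them from a hypothetical log of uptimes and repair times (the hour values are invented):

```python
# Hypothetical operational data, in hours
uptimes_between_failures = [120.0, 95.5, 210.0, 150.5]  # running time before each failure
repair_durations = [2.0, 4.5, 1.5, 3.0]                 # time taken to repair each failure

mttff = uptimes_between_failures[0]  # time to the first failure from initial start
mtbf = sum(uptimes_between_failures) / len(uptimes_between_failures)
mttr = sum(repair_durations) / len(repair_durations)

print(f"MTTFF: {mttff:.1f} h")
print(f"MTBF:  {mtbf:.1f} h")
print(f"MTTR:  {mttr:.2f} h")
```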

Usability requirements are not always testable and cannot always be measured accurately. The classic non-testable requirement is: "The system must be user-friendly." But think about this: user-friendly to whom? Who are the users?

Suggested Approaches for Usability Testing:

1. Qualitative
2. Quantitative

Qualitative Approach:

* Each and every function should be available from all pages of the site.
* The user should be able to submit each and every request within 4-5 actions.
* A confirmation message should be displayed for each and every submit.

Quantitative Approach:

* A heuristic checklist should be prepared with all the general test cases that fall under the classification of checking.
* These generic test cases should be given to 10 different people, who are asked to exercise the system and mark each case's pass/fail status.
* The average of the 10 people's results should be considered the final result (see the sketch after the example below).

Example: some people may feel the system is more user-friendly if the submit button is on the left side of the screen, while others may feel it is better placed on the right side.
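Here is a minimal sketch of the quantitative approach, assuming each of the 10 evaluators records a pass/fail verdict per checklist item; the checklist items, verdicts, and the majority-rule threshold are invented for illustration.

```python
# Each evaluator marks every checklist item as True (pass) or False (fail).
# Hypothetical verdicts from 10 evaluators:
verdicts = {
    "submit reachable in <= 5 actions": [True] * 8 + [False] * 2,
    "confirmation shown on submit":     [True] * 6 + [False] * 4,
}

for item, marks in verdicts.items():
    pass_rate = 100.0 * sum(marks) / len(marks)      # average across the 10 evaluators
    final = "PASS" if pass_rate >= 50.0 else "FAIL"  # majority rule (an assumption)
    print(f"{item}: {pass_rate:.0f}% pass -> {final}")
```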

Free & Open Source security testing tools:

1. SkipFish – A fully automated, active web application security reconnaissance tool by Google. Get Details here.

2. Nikto – An Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1000 servers, and version-specific problems on over 270 servers. Get Details here.

3. BFBTester is great for doing quick, proactive security checks of binary programs. BFBTester will perform checks for single- and multiple-argument command-line overflows and environment variable overflows. Get Details Here.

4. Netsparker – Detects SQL injection and cross-site scripting issues. Get Details Here.

5. Babel Enterprise – Babel evaluates the compliance level of any security policy in a company, to help it achieve its goals; for instance, whether LOPD or ISO/IEC 27001:2005 policies are being followed. Get Details Here.

6. Paros – for people who need to evaluate the security of their web applications. It is free of charge and completely written in Java. Through Paros’s proxy nature, all HTTP and HTTPS data between server and client, including cookies and form fields, can be intercepted and modified. Get Details here.

7. Wapiti allows you to audit the security of your web applications.

It performs "black-box" scans, i.e. it does not study the source code of the application but it will scan the webpages of the deployed webapp, looking for scripts and forms where it can inject data. Download it here.

8. Burp Suite – An integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application’s attack surface through to finding and exploiting security vulnerabilities. Get Details Here.

9. Achilles – The first publicly released general-purpose web application security assessment tool. Achilles acts as an HTTP/HTTPS proxy that allows a user to intercept, log, and modify web traffic on the fly. Download it here.

10. Webstretch – Primarily used for security-based penetration testing of web sites, it can also be used for debugging during development. Often seen as part of a hacker toolkit. Download it here.

11. Spike – When you need to analyze a new network protocol for buffer overflows or similar weaknesses, SPIKE is the tool of choice for professionals. While it requires a strong knowledge of C to use, it produces results second to none in the field. SPIKE is available for the Linux platform only. Download it Here.

12. SQLInjector – SQLInjector uses inference techniques to extract data and determine the backend database server. Download it here.

13. Sqlninja – Fingerprints the remote SQL Server (version, user performing the queries, user privileges, xp_cmdshell availability, DB authentication mode) and much more. Download it here.

14. x5s is a Fiddler addon which aims to assist penetration testers in finding cross-site scripting vulnerabilities. Download and get more details here.

15. sqlmap is an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over back-end database servers. It comes with a broad range of features, from database fingerprinting and fetching data from the database to accessing the underlying file system and executing commands on the operating system via out-of-band connections. Download and get more details here.

16. Absinthe is a GUI-based tool that automates the process of downloading the schema & contents of a database that is vulnerable to blind SQL injection. Download and get more details here.

17. Exploit-Me is a suite of Firefox web application security testing tools designed to be lightweight and easy to use. Download and get more details here. It has three addons:

* XSS-Me: for testing reflected XSS vulnerabilities
* SQL Inject Me: for testing SQL injection vulnerabilities
* Access-Me: for testing access vulnerabilities.

18. Watcher is an open-source web security testing tool and PCI compliance auditing utility; it is a runtime passive-analysis tool for HTTP-based web applications. Download and get more details here.

19. SWF Intruder - SWFIntruder allows testers to easily analyze Flash applications by using the methodology researched by Stefano Di Paola, CTO and Director of Minded Security Research Labs, and presented in Testing Flash Applications and in Finding Vulnerabilities in Flash Applications. Download and get more details here.

20. WebGoat is a deliberately insecure J2EE web application maintained by OWASP designed to teach web application security lessons. Download and get more details here.