Thursday, August 15, 2013

Test Scenario, Test Case and Test Data

Test Scenario:

  • Test scenarios are the generalized statements describing all the major functions in a module.
  • A scenario does not go deeply into a function; it just mentions the function name, its purpose and any major field restrictions in simple English statements.
  • Only one major function is covered in a scenario.
  • Creating scenarios is like splitting the module into several sub-modules.
Test Case:
  • A test case is a simple English statement that checks a small feature of a scenario or a module.
  • It would contain steps explaining how to test the feature.
  • Test cases are useful because they are repeatable, reproducible under the same environment and easy to improve upon feedback.
                                           [Image: Typical test case template]
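To make the template concrete, here is a minimal sketch of the fields a typical test case template captures, written as a Python record. The field names (ID, scenario, steps, expected result, status) are illustrative, not a mandated standard.

```python
# A sketch of a manual test case record; field names are illustrative.
test_case = {
    "id": "TC_LOGIN_001",                  # unique test case ID
    "scenario": "Verify user login",       # parent test scenario
    "description": "Valid credentials log the user in",
    "preconditions": ["User account exists"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "expected_result": "Home page is displayed",
    "status": None,                        # filled in as Pass/Fail at execution time
}

def mark_result(case, passed):
    """Record the execution outcome on the test case record."""
    case["status"] = "Pass" if passed else "Fail"
    return case

mark_result(test_case, True)
print(test_case["status"])  # Pass
```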

Test Data:
  • When a test case needs to be checked with various combinations of inputs, instead of randomly giving input while executing the test case, test engineers prepare test data.
  • It contains all the possible combinations of inputs.
  • Test engineers prepare the test data while writing the test cases themselves.
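The bullets above can be sketched in code: instead of improvising inputs during execution, all combinations of interesting values are prepared up front. The login fields and values below are a hypothetical example.

```python
from itertools import product

# Hypothetical login form: test data prepared in advance as every
# combination of interesting username/password values.
usernames = ["valid_user", "", "unknown_user"]
passwords = ["valid_pass", "", "wrong_pass"]

test_data = list(product(usernames, passwords))
print(len(test_data))  # 3 x 3 = 9 input combinations
```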

Test Case Design

User Interface based test case design:
 In general, test engineers select test cases based on global user interface rules and the prototype to conduct user interface testing.

  1. Verify screen element contrast (i.e., distinguishable or not).
  2. Verify functional grouping of user interface elements (Ex: Date, Month, Year).
  3. Verify borders of functionally grouped elements.
  4. Verify alignment of the objects on the screen.
  5. Verify font uniformity throughout the screen.
  6. Verify size uniformity throughout the screen.
  7. Verify spacing uniformity throughout the screen.
  8. Verify that the first letter of each label is capitalized.
  9. Verify that instructions appear before the elements to which they apply.
  10. Verify that redundancy is avoided.
  11. Verify the full forms of abbreviations (Ex: DOB (DD/MM/YYYY)).
  12. Verify grouping of buttons that provide alternative functionality (OK, Cancel).
  13. Verify keyboard access for every control on the screen.
  14. Verify the default object on every screen.
  15. Verify that shortcut menus have no duplicates.
  16. Verify meaningful HELP messages.
  17. Verify uniform background color.
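Some of these checks can even be mechanized. As a small sketch, rule 8 (label capitalization) can be applied to a list of labels pulled from a screen; the labels below are made up for illustration.

```python
def labels_capitalized(labels):
    """Return the labels that violate rule 8 (first letter capitalized)."""
    return [lbl for lbl in labels if lbl and not lbl[0].isupper()]

# Hypothetical labels collected from a screen under test
labels = ["Date", "month", "Year", "DOB (DD/MM/YYYY)"]
violations = labels_capitalized(labels)
print(violations)  # ['month']
```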

Business logic based test case design:
After the user interface based test cases are prepared, test engineers concentrate on business logic based test case writing.
  1. Collect all required functional specifications.
  2. Select one functional specification from the list.
  3. Identify 
    • Entry point
    • Input required
    • Flow
    • Output or Outcome
    • Exit point
    • Alternative flows and exceptions.
  4. Prepare test scenarios
  5. Review those scenarios for completeness and correctness
  6. Prepare test cases for the reviewed scenarios.
  7. The above steps should be repeated until completion of all functional specifications.
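The identification step above can be sketched for one hypothetical functional specification, "transfer an amount between accounts": the entry point is the transfer call, the inputs are the balances and the amount, the main flow moves the funds, the output is the new balances, and insufficient balance is an exception (alternative) flow.

```python
# Hypothetical spec: transfer an amount between two account balances.
def transfer(src_balance, dst_balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")   # exception flow
    if amount > src_balance:
        raise ValueError("insufficient balance")      # alternative flow
    # Main flow: output is the pair of updated balances
    return src_balance - amount, dst_balance + amount

# Test case for the main flow
assert transfer(100, 50, 30) == (70, 80)

# Test case for the exception flow
try:
    transfer(10, 0, 30)
except ValueError as e:
    print(e)  # insufficient balance
```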

Input domain based test case design:
After completion of user interface based and business logic based test case design, test engineers prepare input domain based test cases.
  1. Collect data models in design documents.
  2. Read the data models to understand the size, type and constraints of every input.
  3. Prepare Data matrix with above collected data. 
                                                  [Image: Format for data matrix]
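A data matrix records one row per input field with its type, size and constraints taken from the data model. The fields and column names below are an illustrative sketch, not a fixed format.

```python
# Sketch of a data matrix built from a data model (illustrative fields).
data_matrix = [
    {"field": "username", "type": "string", "min_len": 4, "max_len": 16, "mandatory": True},
    {"field": "age",      "type": "int",    "min": 18,    "max": 60,     "mandatory": True},
    {"field": "nickname", "type": "string", "min_len": 0, "max_len": 16, "mandatory": False},
]

# Such a matrix can then drive test data generation, e.g. picking out
# the mandatory fields:
mandatory_fields = [row["field"] for row in data_matrix if row["mandatory"]]
print(mandatory_fields)  # ['username', 'age']
```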




Tuesday, August 6, 2013

Colors of Testing

Brief information about different colors of testing :-)

White-box testing : This is a method of testing the application at the level of the source code. This is also known as clear box testing, glass box testing, transparent box testing, and structural testing.

White-box test design techniques include: 
  • Control flow testing
  • Data flow testing
  • Branch testing
  • Path testing
  • Statement coverage
  • Decision coverage
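Statement and decision coverage can be illustrated with a tiny example. In the made-up function below, a single test that hits only the True branch leaves the False branch unexecuted; a second test is needed for full statement and decision coverage.

```python
# A tiny function with one decision, to illustrate coverage criteria.
def grade(score):
    if score >= 50:
        return "pass"   # True branch
    return "fail"       # False branch

# This test alone exercises only the True branch.
assert grade(75) == "pass"

# Adding a test for the False branch completes statement and
# decision (branch) coverage of grade().
assert grade(20) == "fail"
```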

 Black box testing: This is a method of testing the application by executing it, without access to view the source code. The user concentrates on the UI and the functionality of the application as specified in the documents.


 Gray box testing: This is a combination of white box and Black box testing.


 Red Box Testing: "Acceptance testing", "error message testing", or "networking, peripherals and protocol testing".


Green Box Testing: "Co-existence testing" or "success message testing".


Yellow Box Testing: "Testing warning messages" or "integration testing"


Saturday, August 3, 2013

Types of Testing

Different types of testing in brief.

Smoke Testing:
  •   It is to find out if the build or the unit is worth testing or not
  •   This is done by checking the primary purpose of the build
  •   If the primary purpose is okay then we would proceed with further testing
  •   A smoke test is scripted – either using a written set of tests or an automated test
  •   A smoke test is designed to touch every part of the application
  •   It is shallow and wide
  •   Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details.
  •   Smoke testing is a normal health checkup for a build of an application before taking it into in-depth testing.
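"Shallow and wide" can be sketched in code: one quick check per major area of the application, just to decide whether the build is worth deeper testing. The `DemoApp` class and its methods below are a made-up stand-in for a real build.

```python
# Minimal stand-in for a build under test (all operations succeed here).
class DemoApp:
    def login(self, user, password):
        return True
    def search(self, query):
        return []           # an empty result list still means "working"
    def logout(self):
        return True

def smoke_test(app):
    """Shallow-and-wide check: touch every major area once."""
    checks = {
        "login":  app.login("user", "pass"),
        "search": app.search("anything") is not None,
        "logout": app.logout(),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

ok, failed = smoke_test(DemoApp())
print(ok)  # True -> the build is worth testing in depth
```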

Sanity Testing:
  •  A Sanity test is a narrow regression test that focuses on one or a few areas of functionality
  •  Sanity testing is usually narrow and deep
  •  Sanity testing is usually unscripted
  •  A sanity test is used to determine whether a small section of the application is still working after a minor change
  •  Sanity testing is to verify whether requirements are met or not, checking all features breadth-first.

Functional Testing: Functional testing is a combination of the below-mentioned coverages.

  • Behavioral coverage
  • Error handling coverage
  • Input domain testing
  • Calculations coverage
  • Back end coverage
  • Service levels coverage

Usability Testing:
  1. User Interface Testing (UI Testing) - This is a combination of the below tests.
    • Look and feel
    • Ease of use
    • Speed of the interface
  2. Manual Support Testing
    • Checking the help documents

Security Testing:
  • This testing checks the security setup present in the application.
  • Authentication - The ability of the application to allow valid users to get into it and reject invalid users with appropriate error messages.
  • Authorization - The ability of the application to restrict the users from accessing those areas of the application to which they are not granted access and allow them to access only the areas to which they are granted access rights.
  • Encryption and Decryption are part of security testing.
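The authorization bullet can be sketched as a check that users reach only the areas they were granted. The users, areas and grant table below are purely illustrative.

```python
# Hypothetical grant table: which areas each user may access.
GRANTS = {
    "alice": {"reports", "admin"},
    "bob":   {"reports"},
}

def authorize(user, area):
    """Allow access only to areas granted to this user."""
    return area in GRANTS.get(user, set())

assert authorize("alice", "admin") is True        # granted area
assert authorize("bob", "admin") is False         # restricted area
assert authorize("mallory", "reports") is False   # unknown user rejected
```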

Performance Testing:
  • This is to find out the optimized working conditions or environment for an application.
  • Optimized working environment - The optimized number of users that can hit the server at the same time with a good response time from the server of the application.
  • Ex:  
    • For 1000 users the response time is 1.8 seconds
    • For 1100 users the response time is 1.9 seconds
    • For 1200 users the response time is 2.9 seconds
  • In the above example the optimized working condition is 1100 users with response time of 1.9 seconds
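The selection in the example above can be computed: from the measured (users, response time) pairs, take the highest user count whose response time is still within an acceptable limit. The 2.0-second threshold below is an assumed acceptance criterion.

```python
# Measured (concurrent users, response time in seconds) pairs
# from the example above.
measurements = [(1000, 1.8), (1100, 1.9), (1200, 2.9)]

def optimized_load(samples, max_response=2.0):
    """Highest user count whose response time stays within the limit."""
    within = [(users, rt) for users, rt in samples if rt <= max_response]
    return max(within) if within else None

print(optimized_load(measurements))  # (1100, 1.9)
```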

Load Testing:
  • This is to find out the optimized response time for the application at a promised load.

Stress Testing:
  • This is to find out the maximum number of users that the application can handle.

Scalability Testing:
  • A test designed to prove that both the functionality and performance of a system will scale up to meet specified requirements.
  • Generally, the load and stress tests should continue to the point where the system fails, so that its scalability limit is proven.

Re-Testing:
  • Testing the functionality with failed test cases only.

Regression Testing:
  • This is done to ensure that a bug-fix or new functionality has not introduced new errors elsewhere in the application.
  • Here we will execute the test cases of the functional areas which are identified in impact analysis.
  • It is preferable when major bugs are fixed and any new requirements are added.
  • It is must in
    • Re-engineering or migration project
    • Enhancement project
    • Product support

Progression Testing:
  • Testing the newly added functionality alone.

Inter-systems or End-to-End or Interoperability Testing:
  • This is to validate whether our application build can co-exist with our other software applications and share common resources or not.

Concurrency Testing:

  • This is to determine the effects of multiple users simultaneously accessing the same application code, module or database records.
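A minimal sketch of the concern: several threads updating the same shared record. With a lock serializing access, the final value is consistent; concurrency testing looks for exactly the lost updates that would appear without such protection.

```python
import threading

# Shared "record" updated by several concurrent workers.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:          # serialize access to the shared record
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 5000 -- no lost updates
```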

Internationalization (I18N) Testing:
  • This testing relates to handling foreign text and data within the application.
  • This would include sorting, importing and exporting text and data, correct handling of foreign currency and date formats, and so forth.
  • It can also be described as a combination of localization (L10N) and globalization (G11N) testing.
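As a small sketch of the date-format concern, the same date rendered in two regional formats is exactly the kind of difference I18N testing must verify:

```python
from datetime import date

# One date, two regional renderings.
d = date(2013, 8, 15)
uk_format = d.strftime("%d/%m/%Y")  # DD/MM/YYYY
us_format = d.strftime("%m/%d/%Y")  # MM/DD/YYYY
print(uk_format, us_format)  # 15/08/2013 08/15/2013
```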

Compatibility or Configuration Testing:
  • Testing done to check whether the application is compatible with multiple hardware configurations and multiple operating systems.
  • It is also the process of determining whether two systems with different hardware configuration are able to communicate properly or not.

Disaster and Recovery Testing:
  • This is to check whether the application recovers from unexpected events like system failure, improper exit etc. without loss of saved data.
  • This is verified by making several improper exits from the application.

Installation and Un-installation Testing:
  • This testing ensures that the product is getting installed and un-installed properly, getting properly configured for a network and communicating properly with the server.

Cluster or High availability Testing:
  • This is the process of maintaining a backup connection link between the server in the offshore development center (OSDC) and the server(s) or ordinary computers present at the client's place.

Volume Testing:
  • This testing checks whether the application, which may have to handle large floating-point numbers at times, is capable of handling them without failure.
  • Methods like truncating or rounding off extra decimal digits should be incorporated within the application.

User acceptance Testing:
  • Alpha testing:
    • The activity of testing done in front of the client at 90% completion of the project is called alpha testing.
    • It is done by the client, but at the company's site.
    • Since the functionality of the application would only be quite clear when it is almost complete, alpha testing is done at the 90% stage.
  • Beta testing:
    • Testing done in front of the client after 100% completion of the project.
    • This is done at the client place.
    • This is done by the real end users for the products. Generally companies release beta versions of their products for the end users.

GUI Testing:
  • This is testing the application to ensure that it meets Microsoft GUI standards.

Input domain Testing:
  • This is done in terms of BVA and ECP.
  • BVA (Boundary Value Analysis) concentrates on the "Range/Size" of the object.
  • ECP (Equivalence Class Partitioning) concentrates on the "Type" of the object.
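BVA can be sketched as generating the classic six values at and around the boundaries of an inclusive range; the "age" field and its 18..60 range below are an illustrative assumption.

```python
def bva_values(lo, hi):
    """Classic boundary values for an inclusive integer range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Hypothetical "age" field accepting 18..60
print(bva_values(18, 60))  # [17, 18, 19, 59, 60, 61]
```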

Documentation Testing:
  • Verifying whether all the documents are written according to their corresponding standard company templates or not.

Mutation Testing:
  • This refers to testing the effectiveness of all test cases that are written.
  • Generally test lead would do this.

Ad-hoc Testing:
  • Testing without following formal methods is called ad-hoc testing.
  • Due to the risks in testing, testers conduct ad-hoc testing instead of planned testing.
  • The risks are
    • Lack of time
    • Lack of resources to conduct good testing
    • Lack of knowledge on that project domain
    • Improper documentation about the project
    • Lack of communication.


Exploratory Testing:
  • Conducting a test depending on available documents, past experience, discussions with others and internet research.


Buddy Testing:
  • This is an informal testing done by your buddy.
  • When you have written some test cases and ask your buddy to review them, this type of testing happens. This is also called a peer review.

Pair Testing:
  • Due to lack of knowledge or inexperience in testing, junior test engineers are paired with senior test engineers to share testing knowledge.