Saturday, March 8, 2014

Defect Life Cycle

Before jumping into the defect life cycle, I want to share an incident with you.

A few days back, when I tried to make a call from my Samsung slide phone, the display suddenly turned multicolored and I could not see anything clearly on the screen. I took it to the service center and reported the issue. The technician examined the phone and told me the cause was a broken strip inside it. He informed me that the strip would be replaced and the phone delivered by the next morning. The next day I went to the store, paid the bill, and collected my phone. I verified that I could see the screen properly and made a call to a friend to confirm it was working. This is a common situation for all of us with mobiles and other gadgets. Right?

The same life cycle applies to software defects as well.

The client emails or logs the problem they are facing. The SQA team verifies whether it is the expected behavior of the system or really a problem. If the team identifies it as a problem, they try to reproduce it. Once the defect is reproduced, they inform the client of the root cause of the problem and the estimated time to deliver the fix. After taking the client's approval, the fix is prepared, tested internally, and delivered to the client within the scheduled time. The client tests the fix, and if they are satisfied with the solution provided, they close the defect.

The above scenario is a simple happy path.

Let us discuss the defect life cycle in more detail.

Let us start with a defect identified by the tester during the testing phase.

The tester identifies a mismatch between expected and actual results and logs a defect, providing the required information in the bug tracking tool or in a spreadsheet. The defect status is now “New”. All new defects are discussed in the triage meeting, where the managers and leads verify whether each defect is valid, invalid, or requires more information.



If, in triage, they think more information is required, they assign the defect back to the tester who raised it, with the status “More Information Required” or “Insufficient Information”. The tester provides the requested information and assigns it back with the status “New”. The defect comes up for discussion again in the next triage meeting, and if it is considered invalid, it is assigned to the tester with the status “Not a Defect” or “User Error”. If the tester wants to discuss the defect further, he participates in the next triage; otherwise, he closes the defect.

If the defect is considered valid, the next question is: is it reproducible? If it is inconsistent, it is assigned for thorough analysis. If the bug is reproducible, the next question is: is this defect worth a fix? That is, how much does it impact the client's business? If the impact is low, the defect is deferred to a future release or closed. If the impact is high, it is planned for an immediate release and assigned to a developer to provide a fix, with the status “Assigned”.

The developer fixes the defect, unit tests the fix, and assigns it to the lead/manager/tester with the status “Fixed”. Once the build is released to SQA and the defect is assigned to the concerned tester, the tester retests it. If the problem is properly addressed, he closes the defect; at the same time, he tests the impacted areas so that no regression defects are introduced. If the tester finds that the problem still exists, he assigns the defect back to the developer with the status “Not Fixed”.
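The statuses and transitions described above can be sketched as a small state machine. This is only an illustrative sketch: the status names come from this article, but the exact transition table is an assumption, since real bug tracking tools let teams customize their workflows.

```python
# Sketch of the defect life cycle described above.
# The transition table is an illustrative assumption, not a standard.
ALLOWED_TRANSITIONS = {
    "New": {"More Information Required", "Not a Defect", "Deferred", "Assigned", "Closed"},
    "More Information Required": {"New"},          # tester adds info, back to triage
    "Not a Defect": {"New", "Closed"},             # tester contests or closes
    "Deferred": {"Assigned", "Closed"},            # low impact: fix later or drop
    "Assigned": {"Fixed"},                         # developer fixes and unit tests
    "Fixed": {"Closed", "Not Fixed"},              # retest passes or fails
    "Not Fixed": {"Assigned"},                     # back to the developer
    "Closed": set(),                               # terminal state
}

def move(current: str, new: str) -> str:
    """Return the new status if the transition is allowed, else raise."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

# Happy path from the article: logged, triaged as valid, fixed, retested, closed.
status = "New"
for nxt in ("Assigned", "Fixed", "Closed"):
    status = move(status, nxt)
print(status)  # Closed
```

Encoding the workflow as a table like this also makes it easy to reject accidental status changes, such as reopening a defect that was already closed.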

Sunday, March 2, 2014

Metrics

Metrics mainly help to track



  • Schedule
  • Size/Complexity
  • Cost
  • Quality
Metrics give a better picture of where we are and what steps need to be taken to improve the process.

Some of the important Testing Metrics



  • Test Case Writing Efficiency = Number of test cases written / Time
  • Test Case Execution Efficiency = Number of test cases executed / Time
  • Test Case Effectiveness = Number of test cases that resulted in logged defects / Total number of test cases
  • Defect Removal Efficiency (DRE) = (Number of defects removed / Number of defects at start of process) * 100
  • Defect Rejection Ratio = (Number of rejected defects / Total number of defects logged) * 100
  • Test Efficiency Score = (Number of defects leaked to the users / Number of defects reported by the testing team) * 100
  • Review Efficiency = Number of defects detected / Pages reviewed per day
  • Scope Changes = (Number of changed items in the test scope / Total number of items) * 100
  • CFDR (Cost of Finding a Defect in Review) = Cost spent in review / Number of review defects
  • CFDT (Cost of Finding a Defect in Testing) = Cost spent in testing / Number of testing defects
  • E.V. (Effort Variance) = (Actual effort / Planned effort) * 100
  • S.V. (Schedule Variance) = (Actual number of days / Planned number of days) * 100
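To make the formulas concrete, here is a small sketch that computes a few of the metrics above. The input numbers are made up for a hypothetical test cycle, purely for illustration.

```python
# Illustrative computation of a few testing metrics from the list above,
# using made-up numbers for a hypothetical test cycle.

def pct(numerator: float, denominator: float) -> float:
    """Express a ratio as a percentage, rounded to two decimals."""
    return round(numerator / denominator * 100, 2)

# Hypothetical inputs (assumptions for this example only)
test_cases_written = 120
hours_spent_writing = 40
defects_at_start = 30      # total defects present at start of the process
defects_removed = 27
defects_rejected = 3
defects_logged = 30        # defects reported by the testing team
defects_leaked = 2         # defects that escaped to the users

writing_efficiency = test_cases_written / hours_spent_writing  # test cases per hour
dre = pct(defects_removed, defects_at_start)                   # Defect Removal Efficiency
rejection_ratio = pct(defects_rejected, defects_logged)        # Defect Rejection Ratio
leakage = pct(defects_leaked, defects_logged)                  # Test Efficiency Score

print(writing_efficiency, dre, rejection_ratio, leakage)
# 3.0 90.0 10.0 6.67
```

With these numbers, a DRE of 90% means the process caught 27 of the 30 defects before release; the remaining leakage of about 6.67% is what reached the users.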