Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

SPaMCAST 448 features our essay on uncertainty. Al Pittampalli said, “uncertainty and complexity produce anxiety we wish to escape.” Dealing with uncertainty is part of nearly everything we do; our goal should be to address uncertainty head-on.

The second column features Steve Tendon talking about Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross (buy a copy here). We tackle Chapter 18.

Our third column is the return of Jeremy Berriault and his QA Corner. Jeremy discusses leading in QA. Jeremy blogs at https://jberria.wordpress.com/

Re-Read Saturday News

Chapter 10 concludes our re-read of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson, published by Henry Holt and Company in 2015. This week’s chapter is titled, The Experience of Holacracy. In this chapter, Robertson wraps up most of the loose ends. Next week we will conclude this re-read with some final comments and thoughts.

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 438 features our essay on leveraging sizing in testing. Size can be a useful tool for budgeting and planning both at the portfolio level and the team level.

Gene Hughson brings his Form Follows Function Blog to the cast this week to discuss his recent blog entry titled, Organizations as Systems and Innovation. One of the highlights of the conversation is whether emergence is a primary factor driving change in a complex system.

Our third column is from the Software Sensei, Kim Pries. Kim discusses why blindly accepting canned solutions does not negate the need to actively troubleshoot problems in software development.

Re-Read Saturday News

This week, we tackle chapter 1 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson published by Henry Holt and Company in 2015. Chapter 1 is titled, Evolving Organization.  Holacracy is an approach to address shortcomings that have appeared as organizations evolve. Holacracy is not a silver bullet, but rather provides a stable platform for identifying and addressing problems efficiently.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Alex Yakyma.  Our discussion focused on the industry’s broken mindset that prevents it from being Lean and Agile.  A powerful and possibly controversial interview.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Which size metric makes sense for a testing organization is influenced by how testing is organized, where testing is incorporated into the value delivery chain, and whether the work is being done for a fee.


Test case points are only one approach to determining the size of work that needs to be tested. The other measures fall into three broad categories.

Strengths and Weaknesses are up in the air!

Jeremy Berriault provided an example from his presentation at QAI Quest 2017 for us to count test case points. Jeremy, of the QA Corner, indicated that baseline data was required to effectively run the three test cases in his example.

The logon, transaction, reports, and expected output blocks represent verification points. The arrows from one test case to another represent interfaces, and steps are . . . steps. The results of the count are as follows:

Test Case      Number of Steps   Interfaces   Verification Points   Baseline Test Data   Complexity
test case 1          4               0                 4                 required           medium
test case 2          4               0                 4                 required           medium
test case 3          3               2                 3                 required           medium
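
For readers who want to experiment with the counts, the sketch below simply captures the table above in code. The record type and field names are illustrative assumptions, not part of Jeremy’s example.

```python
from dataclasses import dataclass

# Hypothetical record type for illustration only.
@dataclass
class TestCaseCount:
    name: str
    steps: int                # number of execution steps
    interfaces: int           # interfaces to other test cases / requirements
    verification_points: int  # points where results are checked for correctness
    baseline_data: bool       # baseline test data required?
    complexity: str           # derived from the complexity chart

# The counts from the table above (Jeremy's three test cases).
counts = [
    TestCaseCount("test case 1", 4, 0, 4, True, "medium"),
    TestCaseCount("test case 2", 4, 0, 4, True, "medium"),
    TestCaseCount("test case 3", 3, 2, 3, True, "medium"),
]
```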

Deriving the complexity leverages a complexity chart.

 

How do you measure out-of-the-ordinary packages?

Independent testing groups are often asked how long and how much effort is required to test a piece of work. Several size estimation techniques are actively in use in many organizations. Each of these techniques begins by deriving size, either based on a set of rules or through relative sizing. Size, once derived, is used to estimate effort. Effort is then used to generate cost, staffing, and duration estimates. The first sizing technique is “Test Case Points.”

Test Case Points are a unit of measurement generated from the testable requirements based on a set of rules. The process is straightforward:

  1. Identify the testable requirements in a piece of work. Use cases or technical requirements documents are used to identify testable requirements.
  2. Identify the complexity of each testable requirement. Test case points evaluate four factors to determine complexity (a minimal classification sketch follows this list):
    1. The number of test steps. The number of execution steps needed to arrive at an expected (or unexpected) outcome after all preconditions have been satisfied.
    2. The number of interfaces to other requirements. A simple count of the number of interfaces in the test case.
    3. The number of verification points. A simple count of the points in the test case where the results are evaluated for correctness.
    4. The need for baseline test data. An evaluation of whether data needs to be created to execute the test case.
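
The article does not prescribe a specific complexity chart, so the sketch below uses placeholder thresholds purely to illustrate how the four factors might be combined into a simple/medium/complex rating. Every number in it is an assumption, not the method’s actual chart.

```python
# A minimal sketch of step 2 under assumed, illustrative thresholds.
def classify_complexity(steps: int, interfaces: int,
                        verification_points: int,
                        baseline_data_required: bool) -> str:
    score = 0
    score += 2 if steps > 10 else (1 if steps > 5 else 0)                # test steps
    score += 2 if interfaces > 3 else (1 if interfaces > 0 else 0)       # interfaces
    score += 2 if verification_points > 5 else (1 if verification_points > 2 else 0)  # verification points
    score += 1 if baseline_data_required else 0                          # baseline test data
    if score >= 5:
        return "complex"
    if score >= 2:
        return "medium"
    return "simple"

# With these placeholder thresholds, test case 1 from the example rates "medium".
print(classify_complexity(steps=4, interfaces=0, verification_points=4,
                          baseline_data_required=True))
```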


Once all of the simple, medium and complex test cases are identified, they are summed by category.

  3. Weight each category.

  4. Sum the weighted categories together to yield the total test case points (a worked sketch of steps 3 and 4 follows).
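
Putting steps 3 and 4 together, here is a minimal sketch. The weights of 1, 2, and 3 for simple, medium, and complex test cases are illustrative assumptions; real weights come from your organization’s calibration, not from the article.

```python
from collections import Counter

# Illustrative weights only; calibrate against your own historical data.
WEIGHTS = {"simple": 1, "medium": 2, "complex": 3}

def total_test_case_points(complexities: list[str]) -> int:
    by_category = Counter(complexities)            # sum test cases by category
    return sum(WEIGHTS[cat] * n for cat, n in by_category.items())

# Example: the three medium test cases from the table above.
print(total_test_case_points(["medium", "medium", "medium"]))  # 6 with these weights
```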

The goal of test case points is to use size to generate an estimate.  Every version of test case points I have worked with uses a set of factors to adjust the size as part of the sizing process.

  5. Develop an estimation adjustment weighting based on a set of factors (for those familiar with IFPUG Function Points, this adjustment is similar to the process for determining the value adjustment factor). The factors are:
    1. Count or single-factor adjustment factors:
      Factor 14 – Operating System Combinations (simple count)
      Factor 15 – Browser Combinations (simple count)
      Factor 16 – Productivity Improvement from Second Iteration Onwards (percentage)
    2. Factors that leverage a combination of a fixed factor and a complexity weighting:
      Factor 1 – Domain Knowledge & Complexity
      Factor 2 – Technical Know-How
      Factor 3 – Integration with Other Hardware Devices such as Handheld Devices, Scanners, Printers
      Factor 4 – Multi-lingual Support
      Factor 5 – Software/Hardware Setup
      Factor 6 – Environment Setup
      Factor 7 – Build Management
      Factor 8 – Configuration Management
      Factor 9 – Preparation of Test Bed
      Factor 10 – Stable Requirements
      Factor 11 – Offshore/Onsite Coordination
      Factor 12 – Test Data Preparation
      Factor 13 – Network Latency

 

  6. Generate an estimate using the following formula:

Weighted Test Case Points × Adjustment Factor × Historical Productivity Rate
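
As a rough sketch of step 6, assume the adjustment factor has already been derived from Factors 1–16 in step 5 and that the historical productivity rate is expressed in hours per test case point. All of the numbers below are illustrative, not from the article.

```python
# A minimal sketch of the estimation formula under the assumptions stated above.
def estimate_effort_hours(weighted_test_case_points: float,
                          adjustment_factor: float,
                          hours_per_point: float) -> float:
    return weighted_test_case_points * adjustment_factor * hours_per_point

# Example: 6 weighted test case points, a 1.1 adjustment, 4 hours per point.
print(estimate_effort_hours(6, 1.1, 4.0))  # 26.4 hours
```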

In many cases, organizations generate estimates for types of work separately, using only the adjustment factors that would affect that type of work. An example of a type of work is test case generation: Factor 5, software/hardware setup, would not be predictive of the effort for setting up test cases.

The process for deriving test case points is fairly straightforward (steps 1 – 4). The process of turning the test case points into an estimate is more complicated. Next, we will develop a short example and examine the strengths and weaknesses of the process, some of which are very apparent and others are not.

 

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 436 features our essay titled, Change Fatigue, Tunnel Vision, and Watts Humphrey, in which we explore how the state and culture of an organization or team influence whether a Big Bang or an incremental approach to change makes sense.

Our second column is from Jeremy Berriault. Jeremy discusses user acceptance testing and Agile. There are lots of different ways to accomplish user acceptance testing in an Agile environment. The only wrong way is not to do UAT in Agile. Jeremy blogs at https://jberria.wordpress.com/

Jon M Quigley brings his column, The Alpha and Omega of Product Development, to the Cast. This week Jon puts all the pieces together and discusses systems thinking.  One of the places you can find Jon is at Value Transformation LLC.

Re-Read Saturday News

This week we wrap-up our re-read of Carol Dweck’s Mindset: The New Psychology of Success (buy your copy and read along).  In the wrap-up, we discuss overall impressions of the book and suggest a set of exercises to reinforce your growth mindset.

The next book in the series will be Holacracy (Buy a copy today) by Brian J. Robertson. After my recent interview with Jeff Dalton on Software Process and Measurement Cast 433, I realized that I had only read extracts from Holacracy, so we will read the whole book together.