Software Size


Listen to the Software Process and Measurement Cast (Here)

The Software Process and Measurement Cast features our interview with Charley Tichenor and Talmon Ben-Cnaan on the Software Non-Functional Assessment Process (SNAP).  SNAP is a standard process for measuring non-functional size.  Both Talmon and Charley are playing an instrumental role in developing and evolving the SNAP process and metric.  SNAP helps developers and leaders to shine a light on non-functional work required for software development and is useful for analyzing, planning and estimating work.

Talmon’s Bio:

Talmon Ben-Cnaan is the chairperson of the International Function Point User Group (IFPUG) committee for Non-Functional Software Sizing (NFSSC) and a Quality Manager at Amdocs. He led the Quality Measurements in his company, was responsible for collecting and analyzing measurements of software development projects and provided reports to senior management, based on those measurements. Talmon was also responsible for implementing Function Points in his organization.

Currently he manages quality operations and test methodology in Amdocs Testing division. The Amdocs Testing division includes more than 2,200 experts, located at more than 30 sites worldwide, and specializing in testing for the Telecommunication Service Providers.

Amdocs is the market leader in telecommunications, with over 22,000 employees, delivering the most advanced business support systems (BSS), operational support systems (OSS), and service delivery to communications service providers in more than 50 countries around the world.

Charley’s Bio:

Charley Tichenor has been a member of the International Function Point Users Group since 1991, and twice certified as a Certified Function Point Specialist.  He is currently a member of the IFPUG Non-functional Sizing Standards Committee, providing data collection and analysis support.  He recently retired from the US government with 32 years’ experience as an Operations Research Analyst, and is currently an Adjunct Professor with Marymount University in Washington, DC, teaching business analytics courses.  He has a BSBA degree from The Ohio State University, an MBA from Virginia Tech, and a Ph.D. in Business from Berne University.

 

Note:  Charley begins the interview with a disclaimer required by his work, but then we SNAP to it … so to speak.

Next

In the next Software Process and Measurement Cast we will feature our essay on product owners.  The role of the product owner is one of the hardest to implement when embracing Agile. However, how the role of the product owner is implemented is often a clear determinant of success with Agile.  The ideas in our essay can help you get it right.

We will also have new columns from the Software Sensei, Kim Pries and Jo Ann Sweeney with her Explaining Communication series.

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next.  We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast.  Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog.  Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th.  Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Trail length is an estimate of size, while the time needed to hike it is another story!

More than occasionally I am asked, “Why should we size as part of estimation?”  In many cases the actual question is, “Why can’t we just estimate hours?”  It is a good idea to size for many reasons, such as generating an estimate in a quantitative, repeatable process, but in the long run, sizing is all about the conversation it generates.

It is well established that size provides a major contribution to the cost of an engineering project.  In houses, bridges, planes, trains and automobiles the use of size as part of estimating cost and effort is a mature behavior. The common belief is that size can and does play a similar role in software. Estimation based on size (also known as parametric estimation) can be expressed as a function of size, complexity and capabilities.

E = f(size, complexity, capabilities)

In a parametric estimate these three factors are used to develop a set of equations that include a productivity rate, which is used to translate size into effort.
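A minimal sketch of that translation, with all factor values invented for illustration (no published parametric model is implied), might look like:

```python
# Illustrative parametric estimate: effort as a function of size,
# complexity and capabilities. The base productivity and factor
# values are invented for the example, not from any published model.

def parametric_estimate(size, complexity_factor, capability_factor,
                        base_productivity=10.0):
    """Return estimated effort in person-hours.

    size              -- e.g. function points
    complexity_factor -- > 1.0 inflates effort for harder problems
    capability_factor -- > 1.0 inflates effort for weaker capabilities
    base_productivity -- nominal hours per unit of size
    """
    return size * base_productivity * complexity_factor * capability_factor

# A 100 function point project, moderately complex, average team:
print(parametric_estimate(100, complexity_factor=1.2, capability_factor=1.0))
# 1200.0
```

The point of the sketch is only the shape of the equation: size scales the estimate, while complexity and capabilities adjust the productivity rate up or down.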

Size is a measure of the functionality that will be delivered by the project.  The bar for any project-level size measure is whether it can be known early in the project, whether it is predictive and whether the team can apply the metric consistently.  Lines of code is a popular physical measure, function points are the most popular functional measure and story points are the most common relative measure of size.

Complexity refers to the technical complexity of the work being done and includes numerous properties of a project (examples of complexity could include code structure, math and logic structure).  Business problems with increased complexity generally require increased levels of effort to satisfy them.

Capabilities include the dimensions of skills, experience, processes, team structure and tools (estimation tools include a much broader list).  Variation in each capability influences the level of effort the project will require.

Parametric estimation is a top-down approach to generating a project estimate.  Planning exercises are then used to convert the effort estimate into a schedule and duration.  Planning is generally a bottom-up process driven by the identification of tasks, order of execution and specific staffing assignments.  Bottom-up planning can be fairly accurate and precise over short time horizons. Top-down estimation is generally easier than bottom-up estimation early in a project, while task-based planning makes sense in tactical, short-term scenarios. Examples of estimation and planning in an Agile project include iteration/sprint planning, which includes planning poker (sizing) and task planning (a bottom-up plan).  A detailed schedule built from tasks in a waterfall project would be an example of a bottom-up plan.  As most of us know, plans become less accurate as we push them further into the future, even if they are done to the same level of precision. Size-based estimation provides a mechanism to predict the rough course of the project before release planning can be performed, and then again as a tool to support and triangulate release planning.
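To make the contrast concrete, here is a toy comparison of the two approaches; the size, rate and task hours are invented for illustration:

```python
# Top-down: one multiplication from a size measure.
# Bottom-up: a sum over identified tasks.
# All numbers here are invented for illustration.

def top_down(size_points, hours_per_point):
    """Early, coarse estimate from size and a productivity rate."""
    return size_points * hours_per_point

def bottom_up(task_hours):
    """Tactical plan: sum of individually estimated tasks."""
    return sum(task_hours)

print(top_down(50, 10))                   # 500
print(bottom_up([60, 120, 80, 200, 40]))  # 500
```

The two can (and should) be reconciled: when the bottom-up sum drifts far from the top-down figure, that gap itself is a conversation worth having.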

The act of building a logical case for a function point count or participating in a planning poker session helps those that are doing an estimate to collect, organize and investigate the information that is known about a need or requirement.  As the data is collected, questions can be asked and conversations had which enrich understanding and knowledge.  The process of developing the understanding needed to estimate size provides a wide range of benefits ranging from simply a better understanding of requirements to a crisper understanding of risks.

A second reason for estimating size as a separate step in the process is that separating it out allows a discussion of velocity or productivity as a separate entity.  By fixing one part of the size, complexity and capability equation, we gain greater focus on the other parts, like team capabilities, processes, risks or changes that will affect velocity.  Greater focus leads to greater understanding, which leads to a better estimate.

A third reason for estimating the size of the software project as part of the overall estimation process is that by isolating the size of the work, the estimate can more easily be re-scaled when capabilities change or knowledge about the project increases. In most projects that exist for more than a few months, understanding of the business problem, how to solve that problem and the capabilities of the team increase, while at the same time the perceived complexity[1] of the solution decreases. If a team has jumped from requirements or stories directly to an effort estimate, it will require more effort to re-estimate the remaining work, because they will not be able to reuse the previous estimate: the original rationale will have changed. When you have captured size, re-estimation becomes a re-scaling exercise. Re-scaling is much closer to a math exercise (productivity x size), which saves time and energy.  At best, re-estimation is more time consuming and yields the same value.  The ability to re-scale will aid in sprint planning and in release planning. Why waste time when we should be focusing on delivering value?
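A toy sketch of the re-scaling arithmetic (the productivity figures are invented for the example):

```python
# Once size is captured, re-estimation is just re-scaling:
# new effort = remaining size x current productivity rate.
# Figures below are invented for illustration.

def rescale(remaining_size, hours_per_unit):
    """Re-scale remaining effort from captured size and a rate."""
    return remaining_size * hours_per_unit

# Original estimate: 200 points remaining at 8 hours per point.
print(rescale(200, 8))  # 1600
# Mid-project the team proves faster (6 hours per point); no need
# to revisit every story -- just re-scale the remaining size.
print(rescale(200, 6))  # 1200
```

Nothing about the stories had to be re-examined; only the productivity rate changed.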

Finally, why size?  In the words of David Herron, author and Vice President of Solution Services at the David Consulting Group, “Sizing is all about the conversation that it generates.”  Conversations create a crisper, deeper understanding of the requirements and the steps needed to satisfy the business need.  Determining the size of the project is a tool with which to focus a discussion on whether the requirements are understood.  If a requirement can’t be sized, you don’t yet know enough to actually fulfill it.  Planning poker is an example of a sizing conversation. I am always amazed at the richness of the information that is exposed during a group planning poker session (please remember to take notes).  The conversation provides many of the nuances a story or requirement just can’t provide.

Estimates, by definition, are wrong.  The question is just how wrong.   The search for knowledge generated by the conversations needed to size a project provides the best platform for starting a project well.  That same knowledge provides the additional inputs needed to complete the size, complexity, capability equation in order to yield a project estimate.  If you are asked, “Why size?” it might be tempting to fire off the answer “Why not?” but in the end, I think you will change more minds by suggesting that it is all about the conversation after you have made the more quantitative arguments.

Check out an audio version of this essay as part of  SPaMCAST 201


[1] Perceived complexity is more important than actual complexity, because what is perceived drives behavior more directly than what is actual.

Function points and story points are similar, but different. Like the different kinds of eggs.

What is the difference between story points and function points? Both function points and story points are a measure of size, but they are derived by different means. The big difference is that story points are a measure of size determined by the team, while function points are a measure of size based on a standard set of rules[1].  The analogy is not perfect, but one that I use is that the difference is like measuring the distance between New York and Chicago in miles versus the number of rest stops I’d need if I were driving. Both are predictable; however, only one is understandable outside of my team.

Function points (see What is a function point?) are a measure of the functionality delivered by the project or application.  All of the different types of function points are based on a set of rules that can be applied consistently by any trained practitioner.  Size is generally reflective of a count of a set of components (External Inputs, External Outputs, External Inquiries, Internal Logical Files and External Interface Files).  The size of each component is judged based on attributes like fields, files and groups of data.
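As an illustration, a simplified unadjusted count using IFPUG average-complexity weights might look like the following; real counts rate each component as low, average or high based on its attributes, so treat this as a sketch rather than the counting rules themselves:

```python
# Simplified unadjusted function point count.
# Real IFPUG counts rate each component low/average/high; this
# sketch assumes average complexity for every component.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_fp(counts):
    """Sum component counts weighted by average-complexity weights."""
    return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

print(unadjusted_fp({
    "external_inputs": 5,
    "external_outputs": 3,
    "external_inquiries": 2,
    "internal_logical_files": 1,
    "external_interface_files": 1,
}))  # 5*4 + 3*5 + 2*4 + 1*10 + 1*7 = 60
```

Because the weights and rules are standardized, two trained counters working from the same requirements should arrive at very similar totals.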

Story points are based on the team’s perception of the size of the work.  The determination of size is based on the level of understanding, and on how complex and how much work an item is perceived to require compared to other units of work.  For example, one team might feel that developing a new service to insert customer records in a database is complex and large, while another team might perceive the same piece of work as less difficult. Scales for story points vary; however, the two most common are the Fibonacci sequence and Cohn’s numbers.
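For illustration, the two common scales and the "snap to the nearest allowed value" behavior of a sizing session can be sketched as follows (the helper function is mine, not part of any estimation standard):

```python
# The two most common story point scales. Teams snap an estimate to
# the nearest value on the chosen scale rather than using any integer.
FIBONACCI = [1, 2, 3, 5, 8, 13, 21]
COHN = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_scale(raw_estimate, scale):
    """Round a raw size guess to the nearest value on the scale."""
    return min(scale, key=lambda v: abs(v - raw_estimate))

print(snap_to_scale(6, FIBONACCI))  # 5
print(snap_to_scale(17, COHN))      # 20
```

The widening gaps at the top of both scales reflect the same idea as the essay: the bigger the item, the less precision the team can honestly claim.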

Both function points and story points can be used to gauge how much work can be accomplished in an iteration or release.  The difference between the two sizes is generally who is involved in making that determination: the project team or an estimator. Another difference between the two measures of size can be seen in how story points and function points can be used in organizational measurement programs.  Simply put, since story points reflect the perception of a specific team, they can’t be used to compare performance across teams, nor can they be summed up to generate organizational metrics.

Story points and function points both represent size.  Story points are created by a specific team based on the team’s cumulative knowledge and biases.  The result of developing story points is useful to help the team plan, but not useful outside of the team.  Function points reflect size based on a standard set of rules; because those rules were not developed by the team, function points are less intuitively understandable, but more useful at an organizational level.  The question then becomes, “Which measure of size should I use?” The answer, as always, is that it depends on what you need the information for.  If you are collecting organizational metrics, using parametric estimation or have dynamic team structures (common in matrix organizations), then function points make sense.  If size is used by a team for its own purposes and the team structure is fixed, then story points can be very useful.

 


[1] There are several international function point standards, including IFPUG, COSMIC and NESMA.

Size Matters. . .But Which Size
Thomas M. Cagley Jr.

If size matters then you must measure.

Size matters.  I will provide a moment of silence to let the jokes and requisite tittering die down.  In software development the size of the software being built or changed matters because size affects productivity, size affects risk and size affects how projects are managed.  I suggest that these reasons are just the tip of the iceberg of why size matters.  Because size is such a major determinant in how a project is run, I would suggest we can’t leave the knowledge of how big the software component of a project is to sheer guesswork.  We must measure software size.  Unfortunately, there are numerous measures of size, and each has its own strengths and weaknesses, making selection a chore.  Finding the right measure, or at least the right category, is more than an academic discussion, unless you want to adopt a measure that does not meet your needs.

Fortunately, the landscape of software size measures can be simplified by consolidating all of the available size measures into three categories.  The first (and oldest) category is physical measures, the second is functional measures and the third is relative measures. Each of these categories is valid in specific circumstances.  Each contains measures that are valuable as tools to answer specific questions.  Each waxes and wanes in its explanatory power across portions of the development life cycle.  Let’s explore each in a bit more detail in anticipation of creating a simple selection algorithm.

Relative size measures use the measurer’s perspective as a framework to assess size.  These measures are much akin to stepping off a distance and declaring it to be so many yards or feet.  The measure is relative to the size of the measurer’s stride (your stride and mine are probably different).  Relative measures are fine for an approximation but fail to deliver precision or comparability.  Relative measures provide their greatest explanatory power when the least amount of information is available or when there is no standard measure.  One of the most common uses of relative measures has historically been in budgeting activities, when only a small amount of information is known; analogies are the most common type of relative measure used in the budgeting process.  In recent years story points (a relative measure) have been used by some organizations as a means to develop an approximate size during the development of requirements.

Functional size measures evaluate software size based on a set of rules focusing on “user” recognizable functionality.  Most standard functional measures focus on sizing only what was requested (and later delivered).  The focus on sizing business functionality means that functional measures can be used as soon as requirements or stories are identified.  The ability to size requirements based on a common set of rules allows the transition from relative to functional measures; from perceived size to rules-based size. The transition from relative measures to functional measures is also marked by an increase in the number of rules required to determine size (and therefore, to some extent, the amount of effort required to determine size), while at the same time the level of abstraction typically decreases (what is being measured is closer to what is being delivered).  An example of the decrease in the level of abstraction is the level of detail of IFPUG Function Points, which are determined by measuring and counting five types of components (external inputs, external outputs, external inquiries, internal logical files and external interface files).  Examples of functional measures include IFPUG Function Points, COSMIC Function Points and Use Case Points, to name a few.

Physical measures of size count tangible “things” like lines of code, modules, objects or typical software components.  The rules for counting physical size measures tend to vary by language or technology, or are tied directly to specific technologies.  Because of the variance in the rules, aggregating data in non-homogeneous organizations tends to be problematic.  For example, what does comparing 10 lines of Java to 10 lines of ASP.NET mean, or worse yet, one object, three pages of documentation and 1,000 lines of generated Java?  Physical measures reach their zenith in explanatory and predictive power during the specific activities that create the physical item being counted: code during coding, test cases during testing, or pages of documentation during the creative writing phase called user documentation.
Size matters for estimation and planning regardless of methodology or technique.  Whether a project uses a relative, functional or physical measure matters less than measuring work and using that measure to create information.  Which method you use depends on organizational and project culture.

Which category fulfills which measurement need?  I suggest the following simple set of rules:
1.  If you have a very stable team with a need to measure only at the team level, relative measures are fine.
2.  If you need the data for an organizational view, forget relative measures and focus on physical or functional.
3.  If you have a technically homogeneous environment and need to estimate specific parts of the project life cycle, physical measures are the nail for your hammer.
4.  If you need the data to estimate projects and to measure your overall organization’s performance in a non-homogeneous environment, your best choice is functional measures.
Which measures in each category fit which situation?
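The rules above can be sketched as a small decision function (the parameter names are mine, for illustration only):

```python
# A sketch of the selection rules as a decision function.
# Parameter names are illustrative, not part of any standard.

def choose_size_measure(stable_single_team, homogeneous_tech,
                        organizational_view):
    """Pick a size-measure category from the three project traits."""
    if stable_single_team and not organizational_view:
        # Rule 1: stable team, team-level measurement only.
        return "relative (e.g. story points)"
    if homogeneous_tech and not organizational_view:
        # Rule 3: homogeneous technology, life cycle specific estimates.
        return "physical (e.g. lines of code)"
    # Rules 2 and 4: organizational view or non-homogeneous environment.
    return "functional (e.g. IFPUG function points)"

print(choose_size_measure(True, False, False))   # relative (e.g. story points)
print(choose_size_measure(False, True, False))   # physical (e.g. lines of code)
print(choose_size_measure(False, False, True))   # functional (e.g. IFPUG function points)
```

The function makes the precedence explicit: the moment an organizational view is needed, relative measures drop out, and in a mixed-technology shop functional measures are the default.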