We are enjoying a bit of a holiday.  Yesterday I toured La Sagrada Familia in Barcelona. The basilica was started well over 100 years ago and is now planned to be completed in 2026.  I am struck by how persistent and motivating an idea can be.  While function points are not as old, they are equally persistent and useful.  Please enjoy this throwback essay on function points:

Listen to the Software Process and Measurement Podcast

SPaMCAST 317 tackles a wide range of frequently asked questions, ranging from the possibility of an acceleration trap and the relevance of function points to whether teams have peak loads and how to run safe-to-fail experiments. Questions, answers and controversy!

We will also have the next installment of Kim Pries’s column, The Software Sensei! This week Kim discusses robust software.

The essay starts with “Agile Can Contribute to an Acceleration Trap”

I am often asked whether Agile techniques contribute to an acceleration trap in IT.  In an article in The Harvard Business Review, Bruch and Menges (April 2010) define an acceleration trap as the malaise that sets in as an organization falls prey to chronic overloading. It can be interpreted as laziness or recalcitrance, which then elicits even more pressure to perform, generating an even deeper malaise. The results of the pressure/malaise cycle are generally a poor working atmosphere and employee loss. Agile can contribute to an acceleration trap, but only as a reflection of poor practices. Agile is often perceived to induce an acceleration trap in two ways: through organizational change and through delivery cadence.

Listen to the rest now

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next.  We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast.  Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog.  Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th.  Feel free to choose your platform: send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!


SPaMCAST 318 features our interview with Rob Cross.  Rob and I discussed his INFOQ article “How to Incorporate Data Analytics into Your Software Process.”  Rob provides ideas on how the theory of big data can be incorporated into big action.


Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST

Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Story points make a poor organizational measure of software size.


Recently I did a webinar on User Stories for my day job as Vice President of Consulting at the David Consulting Group. During my preparation for the webinar I asked everyone who registered to provide the questions they wanted addressed.  I received quite a few responses.  I did my best to answer the questions; however, I thought it would be a good idea to circle back and address a number of them more formally. A number of the questions concerned using story points.

The first set of questions focused on using story points to compare teams to each other and to other organizations.

Questions Set 1: Story Points as an Organizational Measure of Software Size

Story points make a poor organizational measure of software size because they represent an individual team’s perspective and can’t be used to benchmark performance between teams or organizations.

Story points (unlike function points) are a relative measure based on the team’s perception of the size of the work.  The determination of size reflects the team’s level of understanding and how complex and how much work is required compared to other units of work. Every team will have a different perception of the size of work. For example, one team thinks that adding a backup to their order entry system is fairly easy and calls the work five story points, while a second team might size the same work as eight story points.  Does the difference mean that the second team thinks the work is nearly twice as difficult, or does it represent a different frame of reference?  Story points do not provide that level of explanatory power and should not be used in this fashion. Inferring the degree of real difficulty or the length of time required to deliver the function based on an outsider’s perception of the reported story point size will lead to wrong answers.

There are many published and commercially available benchmarks for function points, including the IFPUG, COSMIC, NESMA and MarkII varieties (all of which are ISO standards).  These benchmarks represent data collected and reported using a set of internationally published standards for sizing software. Given that story points are by definition a measure based on a specific team’s perception and not on a set of published rules, there are no industry standards for story point performance.

In order to benchmark and compare performance between groups, an organization needs to adopt a measure or metric based on a set of published and industry-accepted rules. Story points, while valuable at a team level, by definition fail on this point. Story points, as they are currently defined, can’t be used to compare between teams or organizations. Any organization that publishes industry performance standards based on story points has either redefined story points or does not understand what story points represent.


Are function points relevant in 2014? In this case, the question is whether function points are relevant for sizing an application, a development project or an enhancement project. IFPUG Function Points were proposed in 1979 by Allan J. Albrecht, published in 1983 by Albrecht and Gaffney while at IBM, and then updated and extended over the years. Just like using a tape measure to determine the size of a room, function points are a tool to determine the size of an application or project. In order to determine relevance we need to answer two questions:

  1. Do we still need to know “size”?
  2. Is knowing size sufficient to tell us what we need to know?

Size as a measure has many uses, but the two most often cited are as a component in parametric estimation and as a denominator in metrics such as time-to-market and productivity. While there still might be an intellectual debate on the effectiveness of estimation, there has been no reduction in the sponsors, executives, purchasing agents and the like requesting a price or an end date that you will be held accountable to meet.  Until those questions cease, estimation will be required. Parametric estimation processes (the second most popular form of estimation after making up a number) require an estimate of size as one of the inputs.  Parametric estimation helps to avoid a number of the most common cognitive biases exhibited by IT estimators: optimism and assumption of knowledge.

Size is also used as a normalizing factor (a denominator) to compare effort (productivity), duration (time-to-market) and defects (quality). This type of quantitative analysis is used to answer questions like:

  • Is our performance improving?
  • Are the techniques being used delivering value faster?
  • Are we staffed appropriately?

Function points deliver a consistent measure of functional size based on a consistent set of rules.

The second and perhaps more critical question is whether the balance between functional requirements (things users do) and non-functional requirements (things like usability and maintainability) has changed in the current environment. If the balance has changed, then perhaps measuring functional size is not relevant or not sufficient for estimation or productivity analysis.  A literature search provides no quantitative studies on whether the relationship between functional and non-functional requirements (NFRs) has changed.  Anecdotally, the new architectures, such as heavily distributed systems and software as a service, have caused an increase in the number and complexity of NFRs. However, there is no credible academic evidence that a change has occurred.

It should be noted that some measurement organizations, like IFPUG, have developed and begun evolving measures of non-functional size.  IFPUG has released the SNAP version 2.1, which measures the size of NFRs. These measures are still in the process of being incorporated into software estimation tools and are considered an augmentation to functional size measures like IFPUG Function Points or COSMIC (another form of function points).

Function points are still relevant because organizations, sponsors and purchasing agents still want to know how much a project will cost and what they will get for their money.  Organizations still want to benchmark their performance internally and externally.  Answering these kinds of questions requires a standard measure of size. Until those questions stop being important, function points will be relevant.

FYI: Many times the question of relevance is really code for: “Do I have to spend my time counting function points?”  We will tackle that issue at a later date; however, until then, if effort is the real issue, call me and let’s discuss Quick and Early Function Points.



Why should anyone spend the time and effort needed to count function points?  While some value can be gained from the process of counting function points (it can be leveraged as a formal analysis technique), the value from IFPUG function points comes primarily from how they are used once counted.  Function points have four primary uses.

Estimation: Size is a partial predictor of effort or duration, which makes estimating projects an important use of software size. Effort can be thought of as a function of size, behavior and technical complexity; the Jones equation for estimation expresses exactly this relationship.  All parametric estimation tools, homegrown or commercial, require project size as one of their primary inputs.
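As an illustration of the parametric idea only: the power-law form and both coefficients below are hypothetical placeholders, not the Jones equation or any calibrated commercial model.

```python
def estimate_effort(size_fp, productivity_rate=0.4, scale_exponent=1.05):
    """Estimate effort in person-months from functional size.

    With an exponent slightly above 1, effort grows a bit faster than
    linearly with size, reflecting the coordination overhead larger
    projects incur. Both coefficients are illustrative and would need
    calibration against an organization's own historical data.
    """
    return productivity_rate * (size_fp ** scale_exponent)

# A 190 function point project under these assumed coefficients:
effort = estimate_effort(190)
```

A real parametric tool would combine the size input with many more behavioral and technical factors before producing an estimate.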

Denominator: Size is a descriptor that is generally used to add interpretive information to other attributes or as a tool to normalize other attributes. When used to normalize other measures or attributes, size is usually used as a denominator. Effort per function point is an example of using function points as a denominator. Using size as a denominator helps organizations make performance comparisons between projects of differing sizes.  For example, if two projects each discovered ten defects after implementation, which had better quality?  The size of the delivered functionality would have to be factored into the discussion of quality.
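A minimal sketch of size as a denominator, using the two-project quality question above (the project sizes are assumed for illustration):

```python
def defect_density(defects, size_fp):
    """Defects per function point; lower is better."""
    return defects / size_fp

# Both projects found ten defects after implementation, but they
# delivered different amounts of functionality:
project_a = defect_density(10, 100)  # 0.10 defects per function point
project_b = defect_density(10, 500)  # 0.02 defects per function point
# Normalized by size, project B delivered the better quality.
```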

Reporting: Many measures and metrics are collected and used in most organizations to paint a picture of project performance, progress or success. Organizational report cards may also be leveraged, again with many individual metrics, any one of which may be difficult to compare individually.  Using function points as a denominator synchronizes many disparate measures so that they can be compared and reported.

Control: Understanding performance allows project managers, team leaders and project team members to understand where they are in an overall project or piece of work and therefore take action to change the trajectory of the work. Knowledge allows the organization to control the flow of work in order to influence the delivery of functionality and value in a predictable and controlled manner.

Organizations that have found the greatest value use the counting process as an analysis technique; Agile organizations can also use function points to size stories and review sprint efficiency. Estimation and reporting are uses for function points that can generate value for all organizations.  IFPUG Function Points (or any functional metric variation) only have value if used.



Putting Things Together!


The process for counting IFPUG Function Points culminates with the counter translating the sized data and transaction functions into a number.

Using our examples from ‘Counting IFPUG Function Points: Small, Medium and Large Logical Files?’ and ‘Counting IFPUG Function Points: Sizing Transactions,’ our function point count would be:

  • Employee ILF: 2 RETs and 15 DETs – Low
  • Zip Code EIF: 1 RET and 1 DET – Low
  • Add Employee EI: 2 FTRs and 10 DETs – Average
  • Inquire on Employee EQ: 1 FTR and 10 DETs – Low

The count can be translated into a simple matrix as follows:

Component                 Low   Average   High
Internal Logical File     1
External Interface File   1
External Input                  1
External Output
External Inquiry          1


IFPUG Function Points provide a weight for each component/size combination. The weight translates the low, average and high representations of size into a number that can be used for estimation and other metrics.  We can create an unadjusted function point count by adding the weights to the count matrix and then multiplying each component count by the weight.  The sum of all of the extended weights yields the unadjusted count:






Component                 Low      Average   High      Total
Internal Logical File     1 x 7    __ x 10   __ x 15   7 fp
External Interface File   1 x 5    __ x 7    __ x 10   5 fp
External Input            __ x 3   1 x 4     __ x 6    4 fp
External Output           __ x 4   __ x 5    __ x 7    0 fp
External Inquiry          1 x 3    __ x 4    __ x 6    3 fp

Total:  19 fp

If we are doing a project comprised only of changes to the four components in the example, the total unadjusted function point count would be 19 function points.  The International Organization for Standardization compliant version (ISO/IEC 14143-1:2007) of IFPUG Function Points uses only this unadjusted count.  The classic version of IFPUG function point counting includes two further steps.
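The arithmetic behind the unadjusted count can be sketched as follows, using the standard IFPUG component weights and the four sized components from the example:

```python
# Standard IFPUG weights for each component/size combination.
WEIGHTS = {
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7,  "high": 10},
    "EI":  {"low": 3, "average": 4,  "high": 6},
    "EO":  {"low": 4, "average": 5,  "high": 7},
    "EQ":  {"low": 3, "average": 4,  "high": 6},
}

# The four sized components from the example count.
components = [
    ("ILF", "low"),      # Employee
    ("EIF", "low"),      # Zip Code
    ("EI",  "average"),  # Add Employee
    ("EQ",  "low"),      # Inquire on Employee
]

# Sum each component's weight: 7 + 5 + 4 + 3 = 19 unadjusted fp.
unadjusted = sum(WEIGHTS[ctype][size] for ctype, size in components)
```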

Converting an unadjusted count into an adjusted function point count requires an assessment of fourteen General System Characteristics (GSCs). The GSCs are a set of typical features that applications exhibit that are not generally counted as function points. The features that the GSCs evaluate were originally identified to correct for the differences seen between batch and on-line applications.  For example, GSC #1 (Data Communications) is rated on a scale ranging from 0 (pure batch application) to 5 (the application is more than a front-end and supports more than one type of TP communication protocol).  Each of the 14 GSCs is evaluated using guides and a similar 0 to 5 scale.  Summing all 14 GSC ratings for an application yields a value between 0 and 70; that sum is referred to as the Total Degree of Influence (TDI).  The TDI is then used to create the value adjustment factor (VAF) using the following expression: VAF = (TDI * 0.01) + 0.65.  The product of the VAF and the original unadjusted count is the adjusted function point count. The adjustment factor can adjust a function point count by plus or minus 35%.  So, in the example above, the adjusted count could range from 12 to 26 function points.
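The VAF expression above can be sketched directly in code; the two extreme GSC profiles reproduce the 12-to-26 range for the 19 fp example:

```python
def value_adjustment_factor(gsc_ratings):
    """Compute the VAF from the 14 GSC ratings (each rated 0-5).

    The sum of the ratings is the Total Degree of Influence (TDI),
    which ranges from 0 to 70, giving a VAF between 0.65 and 1.35.
    """
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    tdi = sum(gsc_ratings)
    return tdi * 0.01 + 0.65

unadjusted = 19
lowest  = unadjusted * value_adjustment_factor([0] * 14)  # 19 * 0.65 = 12.35
highest = unadjusted * value_adjustment_factor([5] * 14)  # 19 * 1.35 = 25.65
# The adjusted count for the example therefore ranges from roughly 12 to 26.
```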

Every application will have its own unique VAF.  VAFs generally do not change significantly after an application is initially developed.  However, the ISO version of the counting process does not include this step, and IFPUG now judges the VAF optional.  Furthermore, most commercial estimation tools use the unadjusted count and apply their own criteria to adjust it for the other factors that impact development and support effort.  In the long run, the unadjusted count will likely become the norm because the process is simpler and quicker to use.




Waves come in multiple sizes just like transactions.


Each of the three types of transactions identified in the IFPUG Function Point methodology is classified into one of three categories: low, average and high (or small, medium and large).  The process for sizing transactions is similar to the process we used to size data functions. The size of a transaction is based on the interplay between file types referenced (FTRs) and data element types (DETs).  An FTR is an internal logical file read or updated, or an external interface file read.  The function point counter will review each transaction and count the number of ILFs read or updated and the EIFs read.  The total FTRs will be used to determine the size (remember, IFPUG uses the word ‘complexity’).

In our example of an HR system, we described the human resource clerk sitting in front of a computer entering the data needed to add a new employee (Employee Number, Name, Address, City, State, Zip, Work Location, and Job Code); after entering the data the clerk hits the enter key and the data is committed to the employee ILF. Upon review it was pointed out that the Zip Code entered is checked against the Zip Code file provided by the US Post Office.  The number of FTRs for this external input transaction would therefore be two (Employee and Zip/Postal Code).  The counting rules for FTRs are the same whether the transaction is an EI, EO or EQ, with the exception that an EQ can never update a logical file; therefore the FTRs for an EQ should only reflect files that are read.

DETs are defined as unique, user-recognizable, non-repeated attributes.  This is the same definition of a DET that we used when discussing sizing data functions.  Counting DETs for transactions is similar to counting DETs for data functions, with a few more transaction-related rules.  The rules:

  • Count “one” for each DET that enters or exits the boundary during the processing of the transaction.
  • Count “one” DET per transaction for the ability to send a response message (only one per transaction).
  • Count “one” DET per transaction for the ability to make the transaction happen (only one per transaction).

Using our example of entering an employee, the clerk types in 8 fields, so the counter would count 8 DETs entering the boundary of the application.  When the clerk finishes typing, he or she clicks the post icon (or presses enter) and the Zip Code is validated.  A message is returned if the Zip Code is wrong; if it is correct and the employee does not already exist, a message is displayed saying that the employee has been added.  In this case we would count one DET for the message and one DET for the ability to make the transaction happen. In our example the total number of DETs would be 10.

Just as with the data functions, IFPUG provides a simple matrix to derive the size of external inputs.

FTRs 1 – 4 DETs 5 – 15 DETs 16 + DETs
0 – 1 Low Low Average
2 Low Average High
3+ Average High High

Using the matrix is a matter of counting the number of FTRs a transaction uses, finding the corresponding row and then finding the corresponding column for the number of DETs that you counted for the transaction.  In the example, two FTRs and 10 DETs equates to an average external input.

The size/complexity matrix for external outputs and external inquiries is a little different.

FTRs 1 – 5 DETs 6 – 19 DETs 20 + DETs
0 – 1 Low Low Average
2 – 3 Low Average High
4+ Average High High

A quick example of an external inquiry using our HR example would be our mythical HR clerk needing to look up an employee (with the same 8 fields noted before).  To accomplish this, the clerk types in an employee number and then presses enter. If the employee number is bad (or the employee does not exist) a message is returned; if found, all eight fields are displayed.  We would count 10 DETs: one DET for the employee number entering the boundary, one DET for pressing enter (the ability to make the transaction happen), one DET for the ability to send a message, and seven DETs for the rest of the employee data returned (the employee number both enters and exits and is therefore counted only once).  The Zip Code would not be validated on the external inquiry, so the transaction would have one FTR and 10 DETs and would therefore be a low external inquiry.

The process is repeated for each transaction.
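The two matrices above can be expressed as small lookup functions. This is a sketch only; the function names are my own, but the row and column boundaries come straight from the matrices:

```python
# Shared complexity grid: rows are FTR bands, columns are DET bands.
COMPLEXITY = [["low",     "low",     "average"],
              ["low",     "average", "high"],
              ["average", "high",    "high"]]

def ei_size(ftrs, dets):
    """External input: FTR rows 0-1 / 2 / 3+, DET columns 1-4 / 5-15 / 16+."""
    row = 0 if ftrs <= 1 else (1 if ftrs == 2 else 2)
    col = 0 if dets <= 4 else (1 if dets <= 15 else 2)
    return COMPLEXITY[row][col]

def eo_eq_size(ftrs, dets):
    """External output/inquiry: FTR rows 0-1 / 2-3 / 4+, DET columns 1-5 / 6-19 / 20+."""
    row = 0 if ftrs <= 1 else (1 if ftrs <= 3 else 2)
    col = 0 if dets <= 5 else (1 if dets <= 19 else 2)
    return COMPLEXITY[row][col]

# Add Employee EI (2 FTRs, 10 DETs) and the employee lookup EQ (1 FTR, 10 DETs):
add_employee = ei_size(2, 10)        # "average"
employee_lookup = eo_eq_size(1, 10)  # "low"
```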

Whether an External Output or an External Inquiry, the goal is to present data to the user!


As noted in Counting IFPUG Function Points, The Process and More, after classifying and counting the data functions our attention turns to the transaction functions.  There are three types of transactions: external inputs (EI), external inquiries (EQ) and external outputs (EO). The person who taught me IFPUG Function Points more than a few years ago pointed out that you can recognize the transaction functions because they move data.

The precise definition of an external input is “an elementary process that processes data or controls information sent from outside the boundary[1].”  The definition goes on to say that an EI must either update one or more ILFs and/or alter the behavior of the system.  The former is more typical and the latter more esoteric. The EI transaction can bring data into an application from a screen, a file, a feed from another application or a sensor.  An EI can be batch or online.  Here are a few examples of external inputs:

Common:  A human resource clerk sits in front of a laptop and enters the data needed to add a new employee (Employee Number, Name, Address, City, State, Zip, Work Location, and Job Code) and after entering the data the clerk hits the enter key and the data is committed to the employee ILF.

Less Common:  A temperature sensor reads the temperature from a pressure reactor in a chemical process.  The data is sent to a control application that raises or lowers the temperature in the reactor by regulating the heating coils.  The input is used by the software to adjust the behavior of the application.

The precise definition of an external inquiry is “an elementary process that sends data or control information outside the boundary[2].”   The EQ must retrieve the data from a logical file, and can’t contain derived data, perform math, change the behavior of the system or update an internal logical file. The easiest way to imagine an EQ is a simple, direct retrieval of data.  Using our human resource system example, a simple EQ would be for the HR clerk to type an employee name into a search field, press the enter key and then see the information retrieved.

The third transaction is an external output and is defined as “an elementary process that sends data or control information outside the boundary and includes additional processing beyond that of an external inquiry[3].”  The processing is one or more of the things that an EQ can’t do, i.e. contain derived data, perform math, change the behavior of the system or update an internal logical file.  Examples of EOs abound.  Every morning I run and review a report of the downloads of the Software Process and Measurement Cast.  The report for each podcast includes the monthly downloads for the past three months, an overall total and a calculated grand total since the beginning of the podcast.  A report with a calculated total is an external output.

All three definitions use the term ‘elementary’, which just means that the transaction must represent the smallest whole unit of work that is meaningful to the user (any person or thing that interacts with the application). IFPUG function points include three basic transactions that move data to and from internal logical files and external interface files.

Like the data functions, the transaction functions come in three distinct sizes, which we discussed here.

[1] Function Point Counting Practice Manual 4.0, Part 2 7-3

[2] Function Point Counting Practice Manual 4.0, Part 2 7-3

[3] Function Point Counting Practice Manual 4.0, Part 2 7-3

Small, Medium and Large or Low, Average and High?


As noted in Counting IFPUG Function Points, The Process and More, there are two data functions: Internal Logical Files (ILF) and External Interface Files (EIF).  ILFs are logical groups of user-recognizable data maintained within the boundary of the application being sized; EIFs are logical groups of user-recognizable data referenced within the boundary of the application being sized, but maintained in another application.  Both of these data functions come in three sizes.  IFPUG uses the labels low, average and high, but because of my personal background, small, medium and large feel better. The different sizes represent file complexity. I feel that using the word complexity when determining the size of a file (or a transaction) is confusing, as complexity in development and maintenance generally applies to a set of concepts much broader than size.

How do we size ILFs and EIFs?

The size of logical files is determined by the number of data element types and the number of record element types.  Clear as mud?  A data element type (DET) is defined as a user-recognizable, non-repeated attribute[1].  The IFPUG Counting Practices Manual goes into depth on rules to help recognize and count DETs in a standard manner.  If you ever do a count with me you might hear me ask whether a DET is maintained or retrieved, whether it is unique, whether it should be counted as part of another DET, or whether it is needed to create a relationship with another group of data.  Once we have a handle on the rules to ensure we only count unique, user-recognizable attributes, counting DETs is as simple as making tick marks on a piece of paper and then summing them up.

While DETs are a simple concept, record element types are more difficult. A record element type (RET) is a user-recognizable subgroup of data elements within an ILF or EIF.  Classifying groups of data as RETs requires understanding the relationships between groups of data.  For example, consider an HR system that has a group of DETs used to define employees and another group used to define employee dependents.  Looking at the relationship between the logical groups “employees” and “dependents,” it is apparent that a dependent can’t exist without an employee, but an employee does not need to have a dependent.  We would consider employee and dependents to be two RETs; employee is a mandatory subgroup and dependent is an optional subgroup.  Understanding basic data modeling and normalization techniques is useful for identifying RETs.

Once you have worked through counting DETs and RETs for an ILF or an EIF, you can determine the size of the logical group.  IFPUG provides a simple matrix to derive size.


RETs 1 – 19 DETs 20 – 50 DETs 51 + DETs
1 Low Low Average
2 – 5 Low Average High
6 + Average High High
Using the matrix is a matter of counting the number of RETs in a logical group, finding the corresponding row and then finding the corresponding column for the number of DETs that you counted for the logical group.

Using the employee and dependent example from above, let’s assume that employee has 8 DETs (Employee Number, Name, Address, City, State, Zip, Work Location, and Job Code) and dependent has 8 DETs (Employee Number, Name, Address, City, State, Zip, Relationship and Birth Date).  The total count of DETs would be 15 (the employee number is not counted twice even though it is repeated in both groups).  We would find the row and column in the matrix that correspond to two RETs and 15 DETs, which tells us that the logical group is classified as low. We would then repeat the process for each ILF and EIF identified during the function point count.
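The data function matrix can be expressed as a small lookup function. A sketch only; the function name is my own, but the RET and DET bands come straight from the matrix:

```python
# Complexity grid: rows are RET bands, columns are DET bands.
COMPLEXITY = [["low",     "low",     "average"],
              ["low",     "average", "high"],
              ["average", "high",    "high"]]

def data_function_size(rets, dets):
    """ILF/EIF sizing: RET rows 1 / 2-5 / 6+, DET columns 1-19 / 20-50 / 51+."""
    row = 0 if rets <= 1 else (1 if rets <= 5 else 2)
    col = 0 if dets <= 19 else (1 if dets <= 50 else 2)
    return COMPLEXITY[row][col]

# The employee ILF: 2 RETs (employee, dependent) and 15 DETs.
employee_ilf = data_function_size(2, 15)  # "low"
```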

[1] Function Point Counting Practice Manual 4.0, Part 3 2-24

The CPM includes all of the IFPUG Function Point Rules

The CPM includes all of the IFPUG Function Point Rules

We defined IFPUG Function Points as a measure of the functionality delivered by a project or application. We also talked about their uses and criticisms in the Metrics Minute. What this blog has not explored is how to count function points.  Function points are generated by counting five basic components that represent the features and functions of the project or application, based on a set of rules. The rules for counting IFPUG Function Points are documented in the IFPUG Counting Practices Manual (CPM). The following counting process has been slightly modified from the IFPUG CPM:

  1. Determine the type of count and the counting scope.  IFPUG recognizes three types of counts.
    1. An application count, whose scope includes all of the functionality of a specific application.
    2. An enhancement count, whose scope includes the functionality a project adds to, changes or deletes from an existing application, plus any conversion functionality.
    3. A development count, whose scope includes all the functionality built for a new application and any supporting conversion functionality.
  2. Determine the boundary(ies). The boundary defines what we are interested in sizing and represents a line in the sand between the application and user domains.
  3. Gather the available documentation. Once the count type, counting scope and boundaries have been established, the counter will gather the documentation available for the count.  It may include demonstrations of the system, interviews with developers and architects, requirements, user stories, data flow diagrams and models, design documents and user manuals.  This list is incomplete, and depending on when in the flow of development the count is performed, a subset of the listed documentation may exist.  Sometimes you will need to use a less conventional approach. For example, in an Agile project I recently witnessed, the counter sized the delivered stories as part of each demo (I detail this process here), rather than relying on project documentation.  Note that the IFPUG CPM calls for counters to do this step first; however, I find it more efficient to gather the documentation after the type of count and scope are determined.
  4. Identify the functional user requirements. IFPUG function points are an interpretation of the functional user requirements based on the rules identified in the IFPUG CPM.  The CPM defines functional user requirements as “what the software shall do, in terms of tasks and services[1].”
  5. Measure the data functions.  IFPUG recognizes two types of data functions.  The first is an Internal Logical File (ILF).  An ILF is a user-recognizable group of logically related data maintained within the boundary of the application being measured.  Employee is an ILF typically found in a human resources application.  The logical grouping of data needed to define an employee would be grouped together as a single logical group (even if it were in multiple tables or objects).  The logical group of data named “employee” would be easily recognizable by users of the system. This logical group of data can have records added, changed and deleted by an HR system; therefore the ILF is maintained. The second type of data function is called an External Interface File (EIF).  An EIF is an ILF from another application that is used as a reference.  For example, the HR system may require the entry of a health benefit package when adding or changing an employee. Let’s assume that the definition of the health benefits package is maintained in the benefits application. The health benefits package would be an ILF in the benefits application and an EIF if used for validation or reference in the HR application.  A counter would identify the ILFs and EIFs in the scope of the count. When doing an enhancement count, the counter would count all of the ILFs or EIFs that were added (new), changed (modified structure) or deleted.  Application counts would count all ILFs within the boundaries and any EIFs referenced in the application.  Finally, a development project would count any ILFs added (by definition a development project is equivalent to a new application).  Once the counter has identified the data functions, he or she will determine their size.  We explore the science of sizing ILFs and EIFs here.

The process continues with:

  1. Measure the transaction functions:  There are three types of transactions: External Inputs, External Outputs and External Inquiries.
  2. Getting to a number and some optional bits:  Once all of the data and transaction functions have been identified and sized, generating a count is simple mathematics.  The outcome of a count at this point is called an unadjusted function point count.  Optionally, a set of 14 general system characteristics can be appraised for the application being counted, yielding an adjustment factor; applying the adjustment factor to the unadjusted count yields an adjusted function point count.

[1] IFPUG Counting Practices Manual 4.0, January 2010, Part 2, Section 1-3