A CAI State of the Practice Interview with Charles Symons (published June 2006)

Founder of COSMIC and creator of the Mark II Function Point method

Charles Symons has 45 years’ experience in the use of computers for business and scientific purposes, in both the public and private sectors, in all the major disciplines of the Information Systems function.

He is currently joint project leader of COSMIC, the Common Software Measurement International Consortium (www.cosmicon.com).  COSMIC – an informal grouping of software metrics experts – developed a method of software functional size measurement, applicable to business, real-time and infrastructure software.  The COSMIC-FFP was the first such ‘new generation’ method to become an International Standard (ISO/IEC 19761:2003).

Before leading the development of COSMIC-FFP, Charles invented the Mk II Function Point Analysis technique for sizing software requirements, which became the UK Government mandated method for software sizing and estimating.

Charles is also the author of Software Sizing and Estimating (Wiley, 1991).

This interview between Charles Symons and Michael Milutis, Executive Director of the IT Metrics and Productivity Institute, was conducted in June of 2006.

CAI: Could you tell us a little bit about what you are working on today?

CHARLES SYMONS: I am semi-retired and employed occasionally as a consultant.  However, the one thing that absorbs a significant amount of my time these days is the COSMIC project, which keeps me quite busy.  Although the method is stable, there is still a lot of work to be done on the finer details.  As with any new method or language, there are always words and phrases that confuse people, and I am always looking for ways to refine these definitions.  I am also trying to expand the case studies, and I do occasional marketing for the project, too.

CAI: Why is software measurement so important?  Why should IT executives pay attention to this subject?

CHARLES SYMONS: Many people in my position simply offer the cliché “you can’t manage what you can’t measure.”  However, if you look around you will find that this statement is simply not true.  People who do not measure do seem to manage.  They find all sorts of creative ways of doing this.  They have no past measurements to rely on; yet somehow, they get by.  

In light of this, why should IT executives be interested in software measurement? The answer is that properly measuring your software activities will help you improve and do a better job.  If a CIO is serious about process and process improvement then he has got to get serious about software measurement, too.

In my view, software metrics is relatively primitive compared to other areas within software engineering.  The quality of our metrics is quite poor.  Consequently, in order to make advances, the industry urgently needs better metrics.  The COSMIC size measure was developed partly as a means toward that end.

CAI: What are function points? Could you give us a succinct definition?

CHARLES SYMONS: I prefer to use the more generic term – functional size measure.  I would define a “functional size measure” as the amount of information processing that a piece of software can achieve.

There is no definitive way to measure functional size, but there are several plausible theories.  The first hypothesis was the Albrecht method, more commonly known as the IFPUG (International Function Point Users Group) method.  The Mark II method was a second alternative that was developed a decade later.  The COSMIC method that I am currently involved with represents the next generation of functional size measurement.

CAI: Could you illustrate the importance of functional size metrics? How can they be beneficial?

CHARLES SYMONS: In order to examine productivity, or to compare productivity across two different projects, you need to measure.  The most basic measure of productivity is size divided by effort.  Two other important metrics are speed of delivery, which is size delivered per unit of time, and defect density, which is the number of defects per unit of size.

These three variables are strongly correlated.  For example, if a project’s schedule must be compressed, how much additional effort will be needed?  What might be the corresponding effect on quality?  While most manufacturers understand these trade-offs, very few software people know how to quantify them.  And in my opinion, those who view themselves as software engineers must learn to make these calculations.
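To make the arithmetic concrete, here is a minimal sketch of the three metrics in Python.  The project figures are hypothetical, and function points stand in for whichever functional size unit is in use:

```python
# Illustrative calculation of the three core metrics described above.
# All figures are hypothetical; "FP" stands in for any functional size unit.

size_fp = 400         # functional size delivered
effort_hours = 3200   # total project effort
duration_weeks = 20   # elapsed calendar time
defects = 48          # defects found after delivery

productivity = size_fp / effort_hours         # FP per hour (higher is better)
speed_of_delivery = size_fp / duration_weeks  # FP per week (higher is faster)
defect_density = defects / size_fp            # defects per FP (lower is better)

print(f"Productivity:      {productivity:.3f} FP/hour")
print(f"Speed of delivery: {speed_of_delivery:.1f} FP/week")
print(f"Defect density:    {defect_density:.3f} defects/FP")
```

Compressing the schedule would raise speed of delivery while, as Symons notes, typically lowering productivity or quality; recomputing these three numbers before and after a change is how the trade-off becomes visible.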

CAI: What are the relative advantages of functional size metrics versus more traditional size metrics, such as lines of code? 

CHARLES SYMONS: When measuring the size of software functionality it is advantageous to have metrics that are independent of the technology being used.  A traditional metric, such as lines of code, will obviously depend upon the technology being used.  What I mean is that different programming languages have different expressive powers.  A line of COBOL, for example, will not equal a line of C++ or a line of Assembler.  In this respect, functional size measures are quite useful.  They essentially allow one to measure the size of software functionality independent of the technology utilized.

CAI: Can software practitioners use function points to estimate size if they lack sufficient requirements detail; for instance, during the early stages of a project?

CHARLES SYMONS: In the IFPUG method, you must decide how simple or complex your inputs, outputs and logical files are.  If it is too early to decide how complex these components are, you can still identify the components and assume they are all of average complexity.  This will give you an approximate size.  The same argument applies to the COSMIC method as well.  If functional processes cannot yet be broken down into data movements, the list of functional processes is already something to work with.  Using comparable software, an average size can be determined and assigned to those processes.  Thus, there are ample ways to approximate size even when there is not yet enough detail to perform more accurate measurements.
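As a sketch of this kind of early approximation in the IFPUG style: the component counts below are hypothetical, every component is assumed to be of average complexity, and the weights are the published IFPUG average-complexity values for each component type:

```python
# Early-lifecycle approximation: components have been identified, but their
# complexity is not yet known, so each is assigned the published IFPUG
# weight for average complexity.  The counts themselves are hypothetical.

AVERAGE_WEIGHTS = {
    "external_input": 4,
    "external_output": 5,
    "external_inquiry": 4,
    "internal_logical_file": 10,
    "external_interface_file": 7,
}

component_counts = {          # identified so far in the requirements
    "external_input": 12,
    "external_output": 8,
    "external_inquiry": 6,
    "internal_logical_file": 5,
    "external_interface_file": 2,
}

approx_size = sum(AVERAGE_WEIGHTS[kind] * count
                  for kind, count in component_counts.items())
print(f"Approximate size: {approx_size} unadjusted function points")  # 176
```

The COSMIC analogue would be to list the functional processes and assign each one an average size drawn from measurements of comparable software.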

CAI: Could you give us a specific case where simple metrics such as hours per function point might be insufficient or even incorrect?

CHARLES SYMONS: I have a nice example of this from an outsourcing case from a few years ago.

One of Britain’s largest retail chains had initiated an outsourcing contract with one of the world’s largest outsourcing services suppliers.  One of the goals written into the contract was to raise their current level of productivity to the top quartile in five years based upon this particular supplier’s benchmarks.

I came in and did some productivity measures and found them to be very low, somewhere in the bottom quartile.  However, when I measured speed of delivery I found it to be extremely fast.

At this point, I told my client that instead of five years, I would be able to raise them into the top quartile of productivity in one day!

As I explained earlier, productivity, speed and quality are all interrelated.  By sacrificing speed of delivery, I would be able to raise them into the top quartile of productivity overnight.  Assigning one employee to each project might make delivery take forever, but the work would be done far more efficiently.  I could have accomplished the same thing by sacrificing quality.

This was not exactly what my client wanted from their outsourcing contract or from their IT department.  This was a fast-moving retail business that was constantly looking for innovations.  Moreover, in order to roll out products to several hundred stores, quality needed to be very high.

However, simply demanding in the contract that productivity be raised, without putting any constraints on speed of delivery or quality, could have opened this company up to some horrendous problems.  Luckily, I was able to get the contract changed for them.  I strongly believe that if the contract had not been revised, my client would have had a mess on their hands.

CAI: You mentioned three different types of function points – Albrecht function points (IFPUG), Mark II and COSMIC. What are the major differences between each of these variations?

CHARLES SYMONS: The first major difference between the three methods is their ease of use for developers.  This difference can be traced back to the time period in which each of these methods was first developed.

The function point method, or IFPUG method, originated with Allan J. Albrecht’s work at IBM in the mid-1970s.  The items one has to identify and count for this method – such as logical files – are items and concepts that were around in the 1970s.  I will give you an example.  People today talk about object classes.  They graduate from university, go into development teams and end up working with objects and relational databases.  So right away there is a communication problem; namely, the translation of archaic measurement concepts – such as logical files – into the concepts that are actually being built.

The Mark II Method was designed in the late 1980s.  It is based on logical transactions and entity relationship modeling, concepts that were quite popular at that time.

In contrast to Mark II and IFPUG, the COSMIC method was designed in the 21st century.  It was designed on basic software engineering principles that will be valid into the 22nd century and beyond.

The second major difference between the three methods is their scope of applicability.

Mark II, which is referred to as a first generation measure, was designed to work specifically with business application software.  The method’s underlying models cannot be used with real-time systems, such as those found in operating system software, process-control embedded software, or the telecom industry.

The COSMIC method was designed at the outset to work for both business and real-time software.  In fact, the only software that is incompatible with the COSMIC method is any kind of software that is saturated with algorithms, e.g. weather forecasting or aerodynamic software.  No one has yet developed a good method for measuring the size of algorithms.

The third important difference between the three methods is the measurement scale itself.

In the IFPUG method, one must identify what are called “elementary processes.” These processes include inputs, outputs, and inquiries.

After identifying these elementary processes, the IFPUG method requires that they be categorized as simple, average or complex.  Moreover, one of the rules of the IFPUG method is that a complex process may only be about twice as big as the simplest one.  This is because the set of data for which Albrecht calibrated required only a basic scale with a very limited range.  Nevertheless, we have known for quite a while that there are very complex transactions in the business application world that require a measurement scale with a much broader range.

Consider the software that operates the avionics of the Eurofighter, the new fighter aircraft flown by the Royal Air Force.  It has single functional processes with sizes as large as 120.  This is due to the vast amount of diverse data processed in single transactions.  For example, if a missile is approaching, the plane must alter course, send information to the pilot, relay other information to ground control, and so on.  Many complex things are happening simultaneously.

Perhaps arbitrary cut-off points for measurement scales worked well with simple software, such as what Albrecht was dealing with 30 years ago.  However, in today’s world of complex software, it is impractical to have a scale of one, two, three – and big.
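To make the range limitation concrete, the sketch below contrasts the published IFPUG transaction weights, whose complex values are capped, with COSMIC’s open-ended count; the 120-unit avionics figure is the one quoted above:

```python
# The bounded IFPUG scale versus the open-ended COSMIC scale.
# The triples are the published IFPUG (simple, average, complex) weights
# for the three transaction types, so a complex transaction is at most
# about twice the size of a simple one.

IFPUG_TRANSACTION_WEIGHTS = {
    "external_input":   (3, 4, 6),
    "external_output":  (4, 5, 7),
    "external_inquiry": (3, 4, 6),
}

# No single IFPUG transaction can score more than 7 function points.
ifpug_ceiling = max(w for triple in IFPUG_TRANSACTION_WEIGHTS.values()
                    for w in triple)

# COSMIC has no such ceiling: a functional process scores one unit per
# data movement, so the avionics process above simply counts as 120.
cosmic_avionics_process = 120

print(f"Largest possible IFPUG transaction: {ifpug_ceiling} FP")
print(f"Eurofighter avionics process:       {cosmic_avionics_process} COSMIC units")
```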

CAI: Please briefly explain what COSMIC is and why you originally established it?

CHARLES SYMONS: About 12 years ago, I was in a working group under the Joint Technical Committee of the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission).  There were several subcommittees serving software and systems engineering.  One subcommittee was working specifically on developing international standards for functional size measurement.

Advocates for IFPUG wanted to make IFPUG the international standard, but others felt it was not good enough.  The subcommittee struggled with this.  By November of 1998, the IFPUG and Mark II methods were still the only ones available.  Faced with this, several members of the subcommittee came up with the idea of starting a new project and, as a result, COSMIC-FFP was born.  The new functional size measure was developed by the Common Software Measurement International Consortium (COSMIC).

Our consortium was charged with the task of producing a functional size measure for estimation and performance purposes.  It had to be practical and useful – not just a theoretical exercise.  We wanted it to be applicable to business as well as real-time software.  Our group would claim today that we largely met those objectives.  We now have a reliable method that is an international standard.  And it has been adopted by a number of well-known users who can attest to its effectiveness.

CAI: Could you briefly outline the main stages of the COSMIC method?

CHARLES SYMONS: There is a three-stage process: strategy, mapping and measurement.

The purpose of the strategy stage is to define the scope and viewpoint of the measurement.  For example, let’s say you have a distributed application in which a PC talks to a server, which in turn talks to an old mainframe system.  The end user is only interested in the functionality available on their PC.  The developer, on the other hand, must write software for each of these three components, each in its own language.  Thus, it is logical that the developer’s model of the software should be different from the end user’s model.

This is an advantage of the COSMIC method, and it is what enables COSMIC users to measure real-time software and various infrastructure components.  First generation methods had only one model, which showed the functionality as seen by the end user.  With COSMIC, we have introduced a measurement method that provides for multiple viewpoints.

No matter what other viewpoints exist, the end user’s model – which we refer to as the COSMIC generic software model – is the one that identifies the events that trigger functional processes.  An example for real-time software would be a process-control system that regulates temperature under the control of a clock.  With each tick of the clock, a functional process is triggered that reads the temperature and compares it to the target temperature.  This repetitive process determines whether the heating switch will turn on, turn off, or remain as it is.

Functional processes are then broken down into sub-processes, which are counted in units of data movements.  Any further explanation would be difficult without the use of a diagram.  The important point, though, is that the data movement is the fundamental unit that is counted; data manipulation is assumed to be accounted for by the data movements it accompanies.

Next, the software that is to be measured must be mapped back to the COSMIC generic software model.  Very rarely are requirements written in such a way that the mapping is straightforward.  There are many possible types of mapping processes. These mapping processes cannot be described by the method, but must be identified by the organization.  All we can do is carefully define the end model that the organization must map to.

Once an organization has placed its software into the COSMIC generic software model, the rest is elementary.  What remains is simply to identify the functional processes of the software and begin counting the data movements.  The unit of measure is the individual data movement.  Add up the number of data movements and you have your size measurement.
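As a minimal sketch of this final counting step, here is the earlier temperature-control example expressed in Python; the breakdown of the process into four data movements is an illustrative assumption:

```python
# COSMIC counting in miniature: each functional process is a list of data
# movements (Entry, Exit, Read or Write), and the size is simply the total
# number of movements.  The thermostat breakdown below is illustrative.

from enum import Enum

class Movement(Enum):
    ENTRY = "E"   # data enters from a functional user (e.g. a sensor)
    EXIT = "X"    # data exits to a functional user (e.g. a switch)
    READ = "R"    # data is read from persistent storage
    WRITE = "W"   # data is written to persistent storage

functional_processes = {
    "check_temperature": [
        Movement.ENTRY,  # clock tick triggers the process
        Movement.ENTRY,  # current temperature arrives from the sensor
        Movement.READ,   # target temperature is read from storage
        Movement.EXIT,   # on/off command is sent to the heating switch
    ],
}

size = sum(len(movements) for movements in functional_processes.values())
print(f"Measured size: {size} COSMIC units")  # 4 for this single process
```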

CAI: Do you have any statistics on the relative usage of the COSMIC method?

CHARLES SYMONS: I do not know of any statistics, but I am quite certain that less than 1% of all IT organizations use any type of functional size measure.  Of those who do, I would say that 99% of them are probably using the IFPUG method.

There is a strong inertial force in this business.  If an organization with business applications is using a method and has already built up an impressive amount of measurements, it will not be inclined to throw everything away and jump to a new method.  Even if you are a real-time software user and you’ve set up an entire measurement process based on lines of code, it is going to be difficult to approach your boss and suggest a change.

When the Albrecht method was originally launched, IBM sent out 20,000 salesmen across the country to promote it.  Well, we do not have any salesmen right now.  We grow by word-of-mouth and we survive on our excellence.

The COSMIC method has been more attractive to the real-time software community, primarily because, until COSMIC came along, their only alternative was the lines of code approach.  However, since today’s software involves heavy infrastructure, such as web interfaces, that cannot be measured properly with the IFPUG method, we are starting to attract more business users.

Overall I am very pleased.  Our users consist of several major European corporations and a lot of Canadian organizations as well.  Not much in terms of US companies, though.

CAI: Do you have any final advice for organizations that are just getting started with functional size measures?

CHARLES SYMONS: One point I would stress is not to start a metrics program until there are repeatable processes in place.  An organization will not be able to learn anything if all of its projects are completely different. It will simply not be possible to analyze why a particular project’s productivity was high or low.  Only when there is some sort of repeatability in an organization’s processes can you begin utilizing measurements to determine what is helping or hindering productivity.

A second piece of advice is to understand that software metrics is a long term investment.  Size measurement must almost always be done manually, and that means that an organization must make a sizable investment before seeing any real results.  Management is always happy to adopt new methods or cut out wasteful processes if the results are immediate and highly tangible.  However, these measurements will not contribute directly to the short-term bottom line. Consequently, unless management is committed to the program and the method, it could very easily find its way into the next budget cut.  

Once an organization has measured approximately ten projects and they have all been implemented in a highly similar way, there should then be enough good data to measure productivity, speed of delivery and quality.  The organization can then begin to analyze the correlation of the various productivity measures and to understand why some projects were more productive than others.  However, if your projects are all different or you have only measured one or two of them, there is very little that you will be able to learn from your measurements.
