I find the trend in the naming of software metrics just a little bit irritating, in that metrics are often named according to what the inventor wants them to measure, rather than what they actually measure. So what do I mean by this?
For the most part, our interest in metrics in the software world is to help us describe the quality attributes of software (that is, no different from other engineering disciplines). Quality attributes consist of such properties as how easy it is to change something (Modifiability), how easy it is to add resources (Scalability), how easy it is for a user to use it (Usability), how easy it is to check its behaviour (Testability), how well we can divide it up into independent bits (Modularity), and so on.
We would like to be able to measure quality attributes, but generally they are external attributes - we cannot measure them considering only the system itself, but have to take into consideration the environment the system is in. So, we cannot measure the modifiability of a software system without knowing something about the modifications that are to be made (some are easy, some are hard), or the language it is written in (some languages make some changes easier than other languages), or the skill or experience of the developer.
With some quality attributes, we're not even sure what they are. "Complexity" is a good example. It seems generally believed that a software system's complexity is the best predictor of many properties of the system. For example, it is generally held that the more complex the system, the more fault-prone it is.
The usual approach to dealing with measuring external attributes, or attributes we don't fully understand, is to identify internal attributes — attributes that can be measured by considering only the entity itself — that tell us something useful about the external attribute we're interested in. So, rather than measure health directly, we measure blood pressure, height, weight, and so forth, and from those measurements come up with some measurement of health.
The problem in the software world is that there is this tendency to come up with quite reasonable metrics for some internal attribute, but then name them as if they measure the quality attribute we actually care about. Not only is this misleading, but it is very confusing for the uninitiated.
A classic example of this is McCabe's Cyclomatic Complexity Number (CCN) [McC76]. This is a metric for the number of linearly independent paths through the source code of a procedure (ignoring, as most people do, the difficulties caused by procedure calls within that procedure). Now if McCabe had named his metric "Number of Linearly Independent Paths" (NLIP), and then postulated a connection between the internal attribute being measured and the attribute he cared about, namely "complexity", then the world would have been a much less confusing place. The fact is, CCN, despite its name, does not measure "complexity" (whatever that is). It's not hard to come up with pairs of examples where CCN indicates one is "more complex" than the other, but any human looking at them would rank them the other way (that is, CCN does not meet the representation condition). Furthermore, at least one metrics luminary argues that there is no possibility of a single metric for complexity [Fen94].
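To make that concrete, here is a small (and entirely invented) pair of Python functions. Taking CCN as the usual "number of decision points plus one", the first function, a flat lookup, scores 5, while the second, which I suspect most readers would find much harder to follow, scores 2:

```python
# Two invented functions illustrating why CCN can violate the
# representation condition. CCN here is counted as "decisions + 1".

def classify(n):
    # Four branch conditions: CCN = 5, yet this is a flat,
    # easily-understood lookup.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    elif n < 10:
        return "small"
    elif n < 100:
        return "medium"
    else:
        return "large"

def strange_sum(xs):
    # One loop and no branches: CCN = 2, yet the interleaved state
    # mutations make this the harder function to reason about.
    total, carry = 0, 0
    for x in xs:
        total, carry = total + (x ^ carry), (total * x + carry) % 7
    return total - carry
```

If CCN really measured "complexity", it would have to rank these the way a human reader does; it ranks them the other way.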
Other examples come from the CK metric suite [CK91], such as Coupling Between Objects (CBO). This is a metric that, by its name, is supposed to measure "coupling". In my opinion, coupling is one of those attributes that we don't really understand. A common definition is [SMC74, p117]:
"Coupling is the measure of the strength of association established by a connection from one module to another."

This definition is not specific about what "strength" means or what "association" means, and few discussions about coupling ever are. What CBO does is, by necessity, make choices for strength and association. But are they the right ones? A fairly good indication that they may not be is the number of other metrics for "coupling" that have been proposed [BDW99]. Again, we see a metric named for what someone hopes it is measuring, rather than what it really measures. (Furthermore, what it really measures is something to do with connections between classes, not objects.)
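To illustrate the sort of choice CBO has to make, here is a sketch of what such a count actually computes. The class names and the coupling relation are invented, and this is just one plausible reading of the CK definition (class A is coupled to class B if a method of A uses a method or field of B, or vice versa):

```python
# One plausible reading of a CBO-style count, with invented class
# names. The "references" relation stands in for what a real tool
# would extract from source code.

references = {
    # class -> classes whose methods or fields its methods use
    "Order":    {"Customer", "Invoice", "Logger"},
    "Customer": {"Logger"},
    "Invoice":  {"Order", "Logger"},
    "Logger":   set(),
}

def cbo(cls):
    # Count distinct other classes coupled to cls, in either direction.
    outgoing = references.get(cls, set())
    incoming = {c for c, refs in references.items() if cls in refs}
    return len((outgoing | incoming) - {cls})

for c in references:
    print(c, cbo(c))  # Order 3, Customer 2, Invoice 2, Logger 3
```

Every line of this sketch embodies a decision the definition above doesn't make: whether to count both directions, whether field access counts the same as a method call, whether repeated uses count once or many times.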
Lack of Cohesion in Methods (LCOM) is another CK metric that I believe has an unfortunate name. Again, it is intended to measure an attribute we don't have a good understanding of, and again there are lots of similar metrics (both in terms of how they are defined and what their inventors hope they measure) [BDW98]; a sketch of what the CK version actually computes follows below. I will note that the other CK metrics (DIT, NOC, RFC, WMC) are much more reasonably named.
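That sketch: the CK version of LCOM reduces each method to the set of instance variables it uses, counts the method pairs that share no variables, subtracts the pairs that share at least one, and floors the result at zero (the method and variable names here are invented):

```python
from itertools import combinations

# The CK LCOM computation, sketched: each method is reduced to the
# set of instance variables it uses.

uses = {
    "open":  {"handle", "path"},
    "read":  {"handle", "buffer"},
    "close": {"handle"},
    "log":   {"log_file"},   # shares no variables with the others
}

def lcom(uses):
    # P = method pairs sharing no instance variables,
    # Q = method pairs sharing at least one; LCOM = max(P - Q, 0).
    p = q = 0
    for a, b in combinations(uses.values(), 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

print(lcom(uses))  # P = 3, Q = 3, so LCOM = 0
```

Note that this example comes out at zero even though one method is plainly unrelated to the rest, which gives some idea of how far the metric is from any intuitive notion of "cohesion".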
So how should metrics be named? They should be named according to what they actually measure, rather than what we hope they measure. Thus, we get "number of linearly independent paths" rather than CCN, "number of line-feeds" or "number of executable statements" instead of LOC, "number of member references" instead of CBO, or "number of top-level user-defined modules" instead of "number of classes".
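And the distinction between those two "LOC" names is not pedantry: applied to the same (invented) snippet, the two measurements don't even agree with each other.

```python
# Two different "LOC" measurements of the same snippet, to show why
# naming the actual quantity matters.

source = '''\
# compute the absolute value
def my_abs(x):

    if x < 0:
        return -x
    return x
'''

line_feeds = source.count("\n")
executable = sum(
    1 for line in source.splitlines()
    if line.strip() and not line.strip().startswith("#")
)

print(line_feeds)   # 6: every line, including the blank and the comment
print(executable)   # 4: def, if, and the two returns
```

Give each count its own honest name, and there is nothing left to be confused about.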