Pairwise comparison
revision 1 — 2007/06/20 14:41:04 — Fil Salustri
A tool to rank a set of decision-making criteria and rate them on a relative scale of importance.
Introduction

Making decisions requires comparing alternatives with respect to a set of criteria. If there are more than two criteria, determining which criteria are more important can itself be a serious problem. One would like to be able to rank the criteria in order of importance, and to assign each criterion a relative rating indicating its degree of importance with respect to the other criteria. Here are some examples:

Planning a Vacation: your criteria might be cost, number of locations visited, quality of the locations, travel time as a fraction of total vacation time, and closeness to foreign friends and family. Which one of these is most important? How can you plan your vacation to optimize your enjoyment?

Choosing a Job Offer: your criteria might be salary, benefits, opportunity for advancement, location, travel times, stock options and other incentives, viability of the company, and working atmosphere. Which is most important? How can you increase the likelihood of choosing the best job?

Designing a Toaster: your criteria are cost, quality, expected product life, reliability, manufacturability, aesthetics, and safety. Which is most important? How do you choose the best design with respect to all these criteria?

It can be very difficult to rank and weight criteria. It can become insurmountable in complex problems because every criterion must be weighted with respect to every other criterion; the number of comparisons grows quadratically with the number of criteria. (For example, given 5 criteria, there are 4+3+2+1=10 relationships to consider, and for 10 criteria there are 45.) Pairwise comparison is one way to determine how to evaluate alternatives by providing an easy and reliable means to rate and rank decision-making criteria. It is often used to assign weights to design criteria in concept development. Pairwise comparison is implemented in two stages: first the criteria are ranked, then weights are assigned to them consistent with that ranking.
Pairwise Comparison in Design

In design, we can use the product characteristic set of a design problem. The enabling product characteristics are the foundation of an initial product design specification; they are the most fundamental characteristics of a design. All other requirements and constraints flow from them. Each enabling product characteristic can be thought of as a tree of information, rooted at the characteristic and blossoming out to functional requirement sets, then constraint sets, and finally performance metric sets. Basically, this means that when we say usability, we're not just using a generic word; rather, we are using it as a label that stands for all the information in all the functional requirement sets, constraint sets, and performance metric sets that blossom out from it. So long as we remember this, we can use the enabling product characteristics as the basic criteria that we want to rank with pairwise comparison.

The Pairwise Comparison Method

Identify the Criteria to be Ranked

For our approach, this step has already been done, through the definition of an initial product design specification. The criteria we will use are the enabling product characteristics. Remember that each characteristic is really just a label for all the information flowing from it in the initial product design specification.

Arrange the Criteria in an NxN Matrix

The same criteria are listed, in the same order, along both the rows and the columns of a square matrix. Using the enabling product characteristic set gives:
Obviously, we only need one triangle of the matrix. That is, since the rows and columns contain exactly the same things in the same order, one triangle of the matrix will contain just a mirror image of the other. Furthermore, the diagonal of the matrix is irrelevant – it simply doesn't make sense to consider how important one criterion is with respect to itself! So now we have:
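The remaining triangle, excluding the diagonal, holds exactly n(n-1)/2 cells, which matches the comparison counts quoted in the introduction. A quick check in Python:

```python
# Number of cells in one triangle of an n-by-n matrix, excluding
# the diagonal -- i.e. the number of pairwise comparisons needed.
def comparisons(n: int) -> int:
    return n * (n - 1) // 2

print(comparisons(5))   # 10
print(comparisons(10))  # 45
```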
Compare Pairs of Criteria Across Rows

For each row, consider the criterion in the row with respect to each criterion in the rest of the row. The first comparison in our matrix is functionality versus durability. Which is more important? Discuss this in your teams and reach a consensus on that question. In the corresponding cell of the matrix, put the letter of the criterion that is more important. Then go on to the next pair (in our example, functionality versus quality). If we really, really think that two criteria are of equal importance, then we put both letters in the corresponding cell. Note that the individual comparisons are pairwise – we completely ignore the other criteria. Say we decide that functionality is more important than durability. We would then put an A into cell (2, 4) of the matrix. We continue doing this until all the empty cells have been filled. We might end up with something like the following:
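The filling-in step above can be sketched in code. This is a minimal illustration with hypothetical flag letters and placeholder decisions; in practice each entry comes from the team's consensus, not from a lookup table.

```python
# Sketch of filling the upper triangle of the comparison matrix.
# The letters and the "decisions" dictionary are hypothetical
# stand-ins for the team's actual pair-by-pair judgements.
from itertools import combinations

criteria = ["A", "B", "C", "D"]  # flag letters for four criteria

# Which letter "wins" each pair; a tie is recorded as both
# letters, following the double-letter convention in the matrix.
decisions = {("A", "B"): "A", ("A", "C"): "AC", ("A", "D"): "A",
             ("B", "C"): "C", ("B", "D"): "B", ("C", "D"): "C"}

cells = {}
for pair in combinations(criteria, 2):  # upper triangle only
    cells[pair] = decisions[pair]

print(len(cells))          # 6 comparisons for 4 criteria
print(cells[("A", "C")])   # AC -- a tie
```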
Notice the double letters in some cells. We have used this convention to indicate that there is no difference in importance between the items being compared.

Create the Ranking

Now we simply make a sorted list of the criteria, ranked by the number of cells containing their flag letter. For example, functionality is marked by the letter A, so functionality gets a rank value of 4, because there are 4 As in the matrix. This leads to:
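The counting step above is mechanical, so it can be sketched directly in code. The cell values here are hypothetical; a two-letter entry records a tie and counts once for each letter.

```python
# Rank criteria by how many cells contain their flag letter.
# Cell contents are hypothetical; "AC" is a tie, counting once
# for A and once for C.
from collections import Counter

cells = ["A", "A", "AC", "A", "C", "B", "C", "B", "BD", "D"]

counts = Counter(letter for cell in cells for letter in cell)
ranking = sorted(counts, key=counts.get, reverse=True)

print(counts["A"])  # 4 -- A's rank value
print(ranking)      # most important criterion first
```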
Assign Weights

Finally, we want to associate weights with our criteria so that the relative ranking from the pairwise comparison is satisfied. There are two basic constraints on how we assign the weights: the weights must sum to 100%, and a higher-ranked criterion must never receive a lower weight than a lower-ranked one.
We can either begin to wrestle with the problem in a strictly ad-hoc manner, or we can try to structure our solution. It's inevitable that some iteration will be required, so there's no point in looking for a method that will give us final weights in one pass. However, we can try to set up an initial set of values that does satisfy the constraints, and then tweak the values until they are satisfactory for all stakeholders. One very easy way to set that initial set of values is to assume a linear proportion between all the weights and solve the following equation (for our example): 100 = 7x + 6x + 6x + 5x + 5x + 4x + 4x + 2x + 0x = 39x, so x ≈ 2.56, where the coefficients in the equation are the number of occurrences of each criterion in the pairwise comparison matrix. This leads to:
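The linear-proportion calculation above can be reproduced in a few lines, using the occurrence counts from the example:

```python
# Scale the occurrence counts 7,6,6,5,5,4,4,2,0 so the resulting
# weights sum to 100%, as in the equation 100 = 39x.
counts = [7, 6, 6, 5, 5, 4, 4, 2, 0]
x = 100 / sum(counts)               # sum(counts) == 39
weights = [round(c * x) for c in counts]

print(round(x, 2))   # 2.56
print(weights)       # rounded percentage weights
print(sum(weights))  # 99 -- the missing 1% is round-off error
```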
NOTE: the "1%" for maintainability arose by gathering up all the round-off error from the other calculations.

There are obvious problems with this approach. For example, the lowest-ranked item in a pairwise comparison can often turn out to have a weight of zero, as we found in the example above. However, we cannot assume that zero importance means we can omit the criterion altogether: if we do, we have to redo the entire pairwise comparison, and then some other criterion may end up having zero importance. So, when a criterion appears to have zero importance, we borrow a fractional amount from the other criteria and give it to that last-ranked criterion, just to make sure it counts for something.

Other strange effects can occur. For example, given three criteria, say A, B, and C, it is possible to find that A is more important than B, that B is more important than C, and yet that C is more important than A. This paradoxical situation is known as Arrow's Paradox (it is closely related to the Condorcet paradox of cyclic preferences). Generally, it happens when people work individually rather than in teams.

No matter what else happens, it is essential that all stakeholders in a design agree to the actual weights. Pairwise comparison is therefore best done in teams and not individually.
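The intransitive cycle described above (A beats B, B beats C, C beats A) can be detected mechanically. A minimal sketch, assuming comparison results are recorded as (winner, loser) pairs:

```python
# Detect a three-way preference cycle among pairwise results.
# The pair encoding is an assumption of this sketch, not part
# of the method itself.
from itertools import permutations

def has_cycle(prefers):
    """prefers: set of (winner, loser) pairs."""
    items = {x for pair in prefers for x in pair}
    return any(
        (a, b) in prefers and (b, c) in prefers and (c, a) in prefers
        for a, b, c in permutations(items, 3)
    )

print(has_cycle({("A", "B"), ("B", "C"), ("C", "A")}))  # True
print(has_cycle({("A", "B"), ("B", "C"), ("A", "C")}))  # False
```

If a cycle is found, the team should revisit the comparisons involved and resolve them by consensus before assigning weights.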