After defining the criteria (step 2), the decision-maker is ready to begin evaluating the alternatives (step 3). Before doing so, the decision-maker should try to improve the choices available. A new choice can be invented simply by picking the best features of all the alternatives and rolling them up into a new alternative, a process called "cherry picking". First, search through the existing set of alternatives to find their most attractive features (the cherries). Second, invent new alternatives by cobbling together as many of the cherries as possible (in different feasible combinations) into new alternatives. Also attempt to add these attractive features to the default option. By "cherry picking" the best features of the competing alternatives, the decision-maker builds new and possibly better alternatives to select from.
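The cherry-picking step can be sketched in a few lines of code. This is a minimal illustration, assuming each alternative is represented as a dictionary of feature values; the feature names and numbers are purely illustrative, and any combined alternative must still be checked for real-world feasibility.

```python
# Toy sketch of "cherry picking": take the most attractive value of
# each feature across the existing alternatives and roll them into a
# candidate new alternative. Feature names are illustrative assumptions.
alternatives = {
    "A": {"salary": 90, "commute_min": 50, "remote_days": 1},
    "B": {"salary": 75, "commute_min": 20, "remote_days": 3},
}

# Pick the best value per feature: more salary and remote days are
# desirable, less commute time is desirable.
cherry_picked = {
    "salary": max(a["salary"] for a in alternatives.values()),
    "commute_min": min(a["commute_min"] for a in alternatives.values()),
    "remote_days": max(a["remote_days"] for a in alternatives.values()),
}
print(cherry_picked)  # {'salary': 90, 'commute_min': 20, 'remote_days': 3}
```

Whether such a combination is actually attainable (a job with alternative A's salary and alternative B's commute) is for the decision-maker to verify.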
To reduce the number of comparisons, the decision-maker should remove the obviously inferior choices. Alternatives should be eliminated for either of two reasons. First, if an alternative does not satisfy the key needs identified in step 1 (solve the problem), it should be eliminated. Second, if an alternative is not as attractive as the default option - which is always one of the alternatives under consideration - it should be eliminated. Just ask these two questions of each alternative as part of a pre-selection screening action. Question one, "Will this alternative solve my primary need?" If the answer is no, then eliminate this alternative. Before eliminating an alternative, investigate whether there is a way to capture or transfer any of its attractive features to either the default option or another alternative. Question two, "Is this alternative better than the default option?" If the answer is no, then eliminate this alternative. Again, extract and transfer any desirable features to another alternative, the default option if possible.
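The two screening questions amount to a simple filter. Here is a hedged sketch in Python, assuming each alternative carries a yes/no answer to the primary-need question and a rough overall-appeal number to compare against the default option; both fields are illustrative assumptions, not part of the original method.

```python
def screen_alternatives(alternatives, default_appeal):
    """Keep only alternatives that solve the primary need AND are
    more attractive than the default option."""
    survivors = []
    for alt in alternatives:
        # Question one: will this alternative solve my primary need?
        if not alt["solves_primary_need"]:
            continue  # eliminate (after salvaging any good features)
        # Question two: is this alternative better than the default option?
        if alt["overall_appeal"] <= default_appeal:
            continue  # eliminate
        survivors.append(alt)
    return survivors

candidates = [
    {"name": "A", "solves_primary_need": True,  "overall_appeal": 7},
    {"name": "B", "solves_primary_need": False, "overall_appeal": 9},
    {"name": "C", "solves_primary_need": True,  "overall_appeal": 3},
]
print([a["name"] for a in screen_alternatives(candidates, default_appeal=5)])
# ['A']  (B fails the primary need; C is less attractive than the default)
```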
The alternatives that remain after the cherry picking and the screening out of inferior choices are the ones the decision-maker must now evaluate to find the one best suited to satisfying his or her needs.
The alternatives will be evaluated through relative comparisons to each other (for this decision context and decision-maker). The decision-maker will make the comparisons. Each of the alternatives will be scored against each of the evaluation criteria. The easiest way to do this is to use a relative scale. For each criterion, sort the list of alternatives and pick the one that best meets the criterion and the one that least meets it. For example, in a choice for selecting a new job with one of the criteria being the shortest commute distance, one alternative will have the shortest commute time and one will have the longest. The other alternatives will have commute times in-between. In all scoring, the alternative that best satisfies the criterion is given a score of 1.0, and the alternative that least satisfies the criterion is given a score of 0. Note that this is a "forced spread" scale, designed to improve the decision-maker's ability to discriminate between alternatives. For quantitative criteria, the "forced spread" scale from 0 to 1.0 is easy to use. However, for qualitative criteria, the process requires a few more steps, because the decision-maker must create a new scale.
For each criterion, whether the measurement scale is quantitative or qualitative, the first step is to rank-order all the alternatives from most preferred to least preferred. On a quantitative scale, either more is better than less (salary increase) or less is better than more (commute time), depending on whether the attribute is desirable or undesirable. On a qualitative scale, the process is somewhat more difficult, but still doable. In the process of rank-ordering all the alternatives on a single criterion, the decision-maker must be able to explain why one alternative is better than another. Using these descriptions or explanations of goodness, the decision-maker needs to describe the best and worst alternatives (why they are the best and the worst). Then, intermediate descriptions need to be defined between the worst and the best. At a minimum, the decision-maker should have at least three intermediate descriptions between the worst and best for each criterion. It doesn't matter so much what the descriptions are, so long as the decision-maker can assign a relative weighting to them. The description of the best outcome is rated 1.0. The description of the worst outcome is rated 0. All the descriptions in-between are given progressively lower numbers (from best to worst), depending on how much worse each is than the description above it.
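For a quantitative criterion, the "forced spread" scale described above amounts to min-max scaling: the best value maps to 1.0, the worst to 0, and the rest fall proportionately in between. A minimal sketch in Python (the function name and flag are my own, chosen for illustration):

```python
def forced_spread(values, lower_is_better=False):
    """Map a list of quantitative criterion values onto the 0-to-1.0
    "forced spread" scale. Set lower_is_better=True for undesirable
    attributes such as commute time."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [1.0 for _ in values]  # identical values: no spread to force
    if lower_is_better:
        return [(hi - v) / (hi - lo) for v in values]
    return [(v - lo) / (hi - lo) for v in values]

# Commute times in minutes (less is better): 20 min is best, 60 min worst.
print(forced_spread([20, 60, 40], lower_is_better=True))  # [1.0, 0.0, 0.5]
```

The same helper works for desirable attributes like salary increase by leaving the flag at its default.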
In the example shown in the figure below for a quantitative Criterion 1 (Salary Increase), the alternatives are ranked from highest to lowest expected salary increase (Alternative A is ranked first and the default option is ranked last). The points awarded to each alternative based on its rank order are up to the decision-maker. Normally, the top-ranked alternative gets a score of 1.0 and the lowest-ranked alternative is assigned a score of 0. The alternatives in-between are scored proportionately. If alternative C has a salary increase that is halfway between the best and the worst, it would receive half the score of the best alternative, or 0.5.
For non-quantitative criteria like positive work environment (Criterion 2), the approach is the same. Each alternative is ranked in terms of desirability. As explained above, the decision-maker writes a short description of the expected outcome for each alternative, and then ranks the desirability of each of the descriptions. Again, the best and lowest-ranked alternatives are selected first and given scores of 1.0 and 0 respectively. Note that a score of zero does not mean that the alternative fails to satisfy the criterion. It simply means that it offers the lowest level of satisfaction for this group of alternatives. Based on the description written for each alternative, the decision-maker must assign a relative or comparative score to all the in-between alternatives. The decision-maker asks how much better any particular alternative is compared to the worst and how much worse it is compared to the best. The decision-maker must give the outcome of each alternative a score between 1.0 and 0, using the best and the worst alternatives as reference points. These comparisons are not an exact science and depend solely on the decision-maker's value system.
After all the alternatives are ranked and scored against each of the criteria, the scores are consolidated. Some criteria have already been determined to be more important than others (the decision-maker previously assigned weights of importance to them, with 10 being the highest weight - see values). A final score is computed for each alternative with the "weight and rate" method shown below. For each alternative (row), the score in each cell (how well the alternative satisfies the criterion defined at the top of the column) is multiplied by the criterion weight. For example, Alternative A is ranked the best outcome on Criterion 1, so it is given a comparative score of 1.0. But Criterion 1 is the most important criterion to the decision-maker, with a weight of 10. So, the rating of 1.0 is multiplied by 10. Likewise, Alternative A is also ranked as the best outcome on Criterion 5. But Criterion 5 is not as important as Criterion 1. In comparison to Criterion 1, which was assigned a weight of 10, Criterion 5 was only assigned a relative weight of 4. So, the rating of 1.0 is multiplied by 4. The same process is used to assign weighted scores in each of the other cells.
The sum of the products in each row (alternative) gives the total score, which is placed in the column to the right. For example, the evaluation score for alternative A is derived by adding 1.0*10 (alternative A's score of 1.0 on Criterion 1 times its importance weight of 10) + .8*8 (alternative A's score of .8 on Criterion 2 times its importance weight of 8) + 0*8 (alternative A's score of 0 on Criterion 3 times its importance weight of 8) + 0*6 (alternative A's score of 0 on Criterion 4 times its importance weight of 6) + 1.0*4 (alternative A's score of 1.0 on Criterion 5 times its importance weight of 4) for a grand total score of 20.4. As a reminder, all evaluations are relative. Criteria were previously weighted, during the value assessment, based on each criterion's importance relative to the others. Alternatives are likewise scored by comparison against each other, by how well they satisfy the same criterion.
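The "weight and rate" arithmetic above can be checked with a few lines of Python; the criterion weights and alternative A's comparative scores below are taken directly from the worked example:

```python
# "Weight and rate" consolidation for alternative A.
weights  = [10, 8, 8, 6, 4]           # importance weights, Criteria 1-5
scores_a = [1.0, 0.8, 0.0, 0.0, 1.0]  # alternative A's comparative scores

# Multiply each score by its criterion weight, then sum across the row.
total = sum(w * s for w, s in zip(weights, scores_a))
print(round(total, 1))  # 20.4
```

Running the same sum over each alternative's row of scores produces the total-score column, and the highest total identifies the preferred choice.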
The highest-scoring alternative is C, with a total score of 25.4. The default option scored 16.0, almost as well as alternative D with a score of 16.8. Unless new criteria or new alternatives are identified, alternative C is the best choice, and it now becomes the default option. Unless the decision-maker can find a better alternative between now and when it comes time to make the decision (or to implement the decision), alternative C remains the decision-maker's preferred choice. Based on all the alternatives considered, it best satisfies the decision-maker's set of wants and needs. It may not satisfy them completely, but the highest-scoring alternative is the best choice available.
Website last updated on 10/19/08
Copyright ©2005 Charles W. Sooter. All rights reserved.