Assigning Weights Using the AHP Methodology

In this section, we illustrate how we determine the weights assigned to categories in one meta-category by referring to the example mentioned above of measuring auditing performance in local government in terms of its effectiveness (Mizrahi and Ness-Weisman 2007). Consider meta-category 1, which includes three categories as presented in Appendix 5.1. In the first stage, we calculate the geometric mean for each category based on the answers of the content experts. The results are: A = 5.34, B = 4.95, C = 1.04. In the next stage, we compare each pair of categories, following the method described above, in order to assign a relative weight, wi, to each one of the categories. Table 5.2 presents this comparison. Finally, we calculate the relative weight, wi, of each one of the categories included in this meta-category using the equation wi = si/Σsi, where si is the geometric mean of the entries in row i of the comparison matrix (the Si column in Table 5.2). Table 5.2 presents the results.

Table 5.2 shows that the weight of category A - findings and deficiencies in the area of decision making - is 0.72, meaning that it is the most important factor in auditing the policy-making area. The weights of categories B and C are both 0.14. Similar calculations can be made for each of the other meta-categories and the categories included in them.
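To make the calculation in Table 5.2 concrete, the following short Python sketch (not part of the original framework) reproduces the row geometric means si and the normalized weights wi from the pairwise comparison matrix; the matrix entries are those reported in the table.

```python
# Sketch: computing the row geometric means (s_i) and normalized
# weights (w_i) of Table 5.2 from the pairwise comparison matrix.
# Values are those reported for meta-category 1 (policy making).

pairwise = [
    [1.00,     5.34,     4.95],  # A compared with A, B, C
    [1 / 5.34, 1.00,     1.04],  # B compared with A, B, C
    [1 / 4.95, 1 / 1.04, 1.00],  # C compared with A, B, C
]

n = len(pairwise)

# Geometric mean of each row: s_i = (product of row entries)^(1/n)
s = []
for row in pairwise:
    prod = 1.0
    for value in row:
        prod *= value
    s.append(prod ** (1 / n))

total = sum(s)
w = [s_i / total for s_i in s]  # w_i = s_i / sum of all s_j

for name, s_i, w_i in zip("ABC", s, w):
    print(f"{name}: s = {s_i:.2f}, w = {w_i:.2f}")
# Rounded output matches Table 5.2: A: 2.98, 0.72; B: 0.58, 0.14; C: 0.58, 0.14
```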

As explained in the previous section referring to stage 3b in the framework, based on such calculations, we then verify the weights and eliminate the categories with very low weights. This procedure involves testing the influence of professional orientation, as expressed by education and experience, on the answers given to the full questionnaire. We basically distinguish between an economic and a social orientation. Economically oriented experts are those with an educational background and professional experience in economics, finance, and accounting, while socially oriented experts are those with an educational background and experience in areas such as the social sciences, law, and the humanities.

Table 5.2 A comparison between each pair of criteria for meta-category 1 - policy making

        A              B              C       Si                                Wi
A       1.00           5.34           4.95    (1 x 5.34 x 4.95)^(1/3) = 2.98    2.98/4.13 = 0.72
B       1/5.34 = 0.19  1.00           1.04    (0.19 x 1 x 1.04)^(1/3) = 0.58    0.58/4.13 = 0.14
C       1/4.95 = 0.20  1/1.04 = 0.96  1.00    (0.20 x 0.96 x 1)^(1/3) = 0.58    0.58/4.13 = 0.14
Total                                         Σsi = 2.98 + 0.58 + 0.58 = 4.13   Σwi = 1

Note: Since there are three criteria in this meta-category (n = 3), the superscript (1/3) equals (1/n).

Clearly, there may be other variables, such as the experience of a specific expert with a particular player or political biases, that influence the experts’ evaluations. However, such variables are very specific and, therefore, hard to measure, while a professional orientation is sufficiently general and permits measurement and comparison. Since the criteria and alternatives that comprise the role of the internal audit are intangible in essence, when comparing them, experts tend to incorporate their own experience, insight, and intuition into the process, whether consciously or not. The incorporation of these factors is actually considered one of the advantages of the AHP (Sinuani-Stern et al. 1995).

With this distinction in mind, we reexamine the questionnaires filled out by the content experts using the AHP methodology. If the results show that in several areas the weights differ between the two groups, it means that the experts’ professional orientation may have had an impact on the importance attributed to different issues.

The next stage of the verification process includes the isolation of those issues that receive relatively low weights within a given meta-category. These issues are eliminated from the questionnaire in order to reduce the number of comparisons and allow its distribution among a larger group of experts. The threshold that determines the elimination of issues in each category is calculated by the equation: 100 divided by the number of issues in the category. This calculation was chosen because it represents the result that would have been obtained if all issues were weighted equally. For example, if there are 10 issues in the category, the equal weight of each issue is 10%, and any issue that scores under this threshold should be eliminated. Based on this threshold determination, we eliminate those categories that have been assigned low weights by both economically and socially oriented experts. Categories that have been assigned low weights by only one group of experts - either economically or socially oriented - are not eliminated at this stage.
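The threshold rule can be expressed in a few lines of code. The sketch below is illustrative only: the function name, the percentage weights, and the ten hypothetical issues are our own assumptions, but the logic (drop an issue only when it falls below 100/n for both groups of experts) follows the rule described above.

```python
# Sketch: eliminating issues whose weight falls below the equal-weight
# threshold of 100 / (number of issues). Weights are in percent, and
# the issue names and values below are hypothetical.

def surviving_issues(weights_by_group):
    """weights_by_group maps each issue to (economic %, social %).
    An issue is dropped only if it is below the threshold for BOTH groups."""
    n = len(weights_by_group)
    threshold = 100 / n  # the weight each issue would get if all were equal
    kept = {}
    for issue, (econ_w, social_w) in weights_by_group.items():
        if econ_w < threshold and social_w < threshold:
            continue  # low weight for both orientations -> eliminate
        kept[issue] = (econ_w, social_w)
    return kept

# Example with 10 hypothetical issues -> threshold = 100 / 10 = 10%
example = {
    "issue_1": (18, 15), "issue_2": (4, 3),  "issue_3": (12, 14),
    "issue_4": (9, 11),  "issue_5": (6, 5),  "issue_6": (14, 12),
    "issue_7": (8, 13),  "issue_8": (11, 9), "issue_9": (10, 10),
    "issue_10": (8, 8),
}
print(sorted(surviving_issues(example)))
# issue_2, issue_5, and issue_10 fall below 10% for both groups and are dropped;
# issue_4 and issue_7 are kept because only one group rated them below 10%.
```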

Given a reduced number of criteria, the framework suggests constructing a shorter version of the experts’ questionnaire and distributing it to a larger group of experts, equally divided according to economic and social orientation. The completed questionnaires should be examined by the transitivity criterion, and those questionnaires that do not fulfill this condition are eliminated. In addition, in the questionnaires of the respondents who were identified as economically oriented, the categories that had been assigned a low weight in the first round by the economically oriented experts are eliminated, and the same is done for the socially oriented respondents. As a result, we have different categories for each professional orientation. After this verification process, the remaining questionnaires are analyzed using the AHP methodology.
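The transitivity screening of completed questionnaires could be operationalized in several ways; the sketch below shows one plausible ordinal reading (if item i is preferred to item j and j to k, then i must also be preferred to k), applied to a respondent's full pairwise comparison matrix. The function name and this strict interpretation are our assumptions, not necessarily the exact test used by the framework.

```python
from itertools import permutations

def ordinally_transitive(matrix):
    """One possible reading of the transitivity criterion: if item i is
    preferred to j (entry > 1) and j is preferred to k, then i must also
    be preferred to k. `matrix` is a full pairwise comparison matrix."""
    n = len(matrix)
    for i, j, k in permutations(range(n), 3):
        if matrix[i][j] > 1 and matrix[j][k] > 1 and matrix[i][k] <= 1:
            return False
    return True

# A respondent whose judgments cycle (A > B, B > C, but C > A) would be
# screened out before the AHP analysis.
cyclic = [[1,     3,    1 / 2],
          [1 / 3, 1,    4],
          [2,     1 / 4, 1]]
print(ordinally_transitive(cyclic))  # False
```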

As mentioned earlier, the AHP method provides a tool for measuring and improving the consistency of the data collected in the questionnaires. In principle, as the number of respondents and the number of items in the questionnaires increases, the chances that the data will be inconsistent also increase. However, this is not necessarily the case, and we may find consistency even in very complicated situations. In order to calculate the consistency index (CI), we apply the equation CI = (λmax − n)/(n − 1), and the consistency ratio (CR) is calculated by CR = CI/RI, where RI is the random consistency index for the nine items. In our case, the calculations yield CI = 0.072 and CR = 0.05, which implies good consistency (CR < 0.1).
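As a rough illustration of these formulas, the sketch below computes CI and CR for the three-item matrix of Table 5.2 (not the nine-item questionnaire that produced the values above), using numpy for the principal eigenvalue λmax and Saaty's commonly cited random index values for RI.

```python
# Sketch: consistency index (CI) and consistency ratio (CR) for a
# pairwise comparison matrix. The matrix here is the Table 5.2 example,
# not the nine-item questionnaire that yielded CI = 0.072 and CR = 0.05.
import numpy as np

# Saaty's commonly cited random index (RI) values by matrix size n
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
                7: 1.32, 8: 1.41, 9: 1.45}

def consistency(matrix):
    a = np.asarray(matrix, dtype=float)
    n = a.shape[0]
    lambda_max = max(np.linalg.eigvals(a).real)  # principal eigenvalue
    ci = (lambda_max - n) / (n - 1)              # CI = (λmax − n)/(n − 1)
    cr = ci / RANDOM_INDEX[n]                    # CR = CI/RI
    return ci, cr

ci, cr = consistency([[1.00,     5.34,     4.95],
                      [1 / 5.34, 1.00,     1.04],
                      [1 / 4.95, 1 / 1.04, 1.00]])
print(f"CI = {ci:.3f}, CR = {cr:.3f}")  # CR below 0.1 indicates acceptable consistency
```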

The complex process of evaluation and verification described earlier has a strong potential for reducing gaming strategies in the planning of a performance management system simply because the respondents cannot be sure what type of evaluations may serve their organizational interests. They therefore have difficulty manipulating their answers toward specific ends. Such uncertainty is required in order to minimize the use of gaming strategies (Bevan and Hood 2006; Hood 2006). It also integrates employees into the process, thus enhancing learning processes.

 