In our case studies the availability of data was an important problem. Even where data existed, it took effort to find out how and where to access it. The problem of data availability has been reported in other studies as well, e.g. dealing with environmental indicators (Stein et al., 2001), evaluating tourism sustainability (O’Mahony et al., 2009), or discovering information about the local community (Ballinger et al., 2010). One method to overcome the data availability gap is standard, repeatable, and cost-effective information-gathering surveys (O’Mahony et al., 2009). According to the SUSTAIN partnership (2012b), ‘the approach to score through ranges instead of using precise values, provides the method with flexibility: even data which could not be specifically identified or might be considered imprecise or give just an approximation can be used if identified within a range.’ Table 2 shows an example spread-sheet for the issue ‘Economic opportunity’.
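The core of this scoring-through-ranges step can be sketched in a few lines of Python. The indicator, class boundaries, and scores below are hypothetical placeholders chosen only for illustration, not values taken from Table 2.

# Minimal sketch of "scoring through ranges": a raw indicator value is
# assigned the score of the class whose range contains it. The indicator
# (unemployment rate in %) and the non-equidistant class boundaries are
# invented for this example, not taken from Table 2.
UNEMPLOYMENT_CLASSES = [
    # (lower bound, upper bound, score)
    (0.0, 2.0, 10),
    (2.0, 5.0, 8),
    (5.0, 10.0, 5),
    (10.0, 20.0, 2),
    (20.0, 100.0, 0),
]

def score_indicator(value, classes):
    """Return the 0-10 score of the class whose range contains `value`."""
    for lower, upper, score in classes:
        if lower <= value < upper:
            return score
    raise ValueError("value %s lies outside the defined total range" % value)

# An imprecise estimate such as "somewhere between 6 and 9 %" still maps
# to a single class, which is what gives the method its flexibility.
print(score_indicator(7.5, UNEMPLOYMENT_CLASSES))   # -> 5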

In detail, the approach includes several subjective pre-definitions that have significant influence on the results: the definition (boundaries) of the classes, the choice of non-equidistant classes, the definitions of the minimum and maximum of the total range, and the allocation of scores from 0 to 10 to each class. Further, the approach has mathematical weaknesses. If no data is available, the score for an indicator is zero; it is not removed from the calculation but included in the average, reducing the result. In addition, indicators that depend on each other, such as the percentage of employment in the primary, secondary, and tertiary sectors of the economy (Table 2), are treated as independent indicators in the average calculations, effectively over-weighting the indicator ‘employment by sector’.
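A small numerical illustration of these two weaknesses is given below; the indicator names and scores are assumed for the example and are not the values used in the case studies.

# Sketch of the two averaging problems described above, using made-up
# indicator scores for one issue.
scores = {
    "employment_primary_sector": 3,
    "employment_secondary_sector": 4,
    "employment_tertiary_sector": 8,
    "median_income": None,          # no data available
}

# Behaviour described for the spread-sheet: missing data counts as 0.
as_zero = [s if s is not None else 0 for s in scores.values()]
issue_score_with_zeros = sum(as_zero) / len(as_zero)            # 3.75

# Alternative: drop missing indicators from the average instead.
available = [s for s in scores.values() if s is not None]
issue_score_without_missing = sum(available) / len(available)   # 5.0

# The three sector shares are not independent (they sum to 100 %), yet
# each enters the average separately, so "employment by sector"
# effectively carries three times the weight of "median_income".
print(issue_score_with_zeros, issue_score_without_missing)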

Scoring through classes is a simple approach that is easy to understand and allows different kinds of data (e.g. relative, classified, and numerical data) to be combined, but it entails a problematic loss of information and reduces the overall quality of the indicator performance. It can hardly be regarded as an advantage in cases where data is uncertain or has to be estimated. Based on these experiences, we thoroughly revised several parts of the scoring spread-sheet. Indicator scores are averaged to calculate issue scores, and these are further aggregated into pillar scores. Does aggregation stabilise the results and improve reliability? The average scores for every issue are shown for Warnemünde (Fig. 2) and for Neringa (Fig. 3). For every issue the results differ strongly between the 4 (5) groups of evaluators. The total average over all issues in Warnemünde is five; the averaged minimum scores are two scores lower and the averaged maximum two scores higher than this average. The same is true for Neringa (Fig. 3). The differences between aggregated results at both the issue and pillar levels are very high.
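For clarity, the two-step aggregation referred to above (indicator scores averaged into issue scores, issue scores averaged into a pillar score) amounts to plain averaging, as the following sketch shows; the issue names and scores are invented for illustration and do not reproduce the SUSTAIN structure.

# Sketch of the indicator -> issue -> pillar aggregation as plain averaging.
pillar = {
    "Economic opportunity": [5, 7, 2],   # indicator scores for one issue
    "Employment":           [6, 4],
    "Tourism pressure":     [3, 8, 8, 1],
}

def mean(values):
    return sum(values) / len(values)

issue_scores = {issue: mean(indicators) for issue, indicators in pillar.items()}
pillar_score = mean(list(issue_scores.values()))

print(issue_scores)   # per-issue averages
print(pillar_score)   # single aggregated pillar score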
