Performance Indicators and University Distance Education Providers

Doug Shale and Jean Gomes

VOL. 13, No. 1, 1-20

Abstract

Higher education systems throughout the world are coming under increasing public and governmental scrutiny with respect to what they do, how well they do it, and at what cost. Distance education has always been especially accountable because it has generally been viewed as outside the mainstream of university education. However, identifying performance measures is even more problematic for distance education than for conventional education. This is due partly to the many different forms distance education can take and partly to the unique processes used for organizing and delivering education at a distance. Many of the metrics used for measuring traditional education do not transfer well to distance education practices. The experiences of two university distance education providers involved in system-wide performance measurement are used to illustrate this claim and to serve as a vehicle for reviewing major measurement issues faced by distance education. We include a discussion of the kinds of indicators we think would be more appropriate and effective.

Higher education systems in many countries have come under increasing public and governmental scrutiny with respect to what they do, how well they do it, and at what cost. In some instances, a formalized accountability exercise has been implemented, usually based on the notion of “performance indicators.” In some cases, attempts may have been made to formulate a theoretical basis from which to derive performance indicators. More often, though, performance indicators are defined operationally and are often arrived at through some political process involving negotiations between government agencies and educational institutions. As a result, sets of performance measures may differ somewhat from jurisdiction to jurisdiction. In general, though, performance indicators are developed for application to conventional campus-based institutions.

In the case of university distance education providers, these indicators have not generally been appropriate. For example, counts of students in conventional universities assume that the norm is a full-time student carrying a full course load, whereas distance education providers, by their nature, typically serve students who must study part-time. Moreover, the mandates of distance education providers may be antithetical to the mandates presumed by performance indicators formulated for conventional universities. For example, many distance education institutions have a stated commitment to lifelong, continuous learning. Numbers of degrees awarded, graduation rates, and time to degree completion might (arguably) indicate how effective or efficient a conventional university is. However, these kinds of indicators are less meaningful for most distance education providers because many of their students do not have a start-to-end degree program as an objective.

Other issues arise because the teaching/learning process in distance education can be different from the usual lecture/laboratory/tutorial structuring found in conventional classroom based instruction. Contact hours, materials preparation, student load, and even office hours can take on many variant forms in distance education. The identification of an appropriate set of measures becomes a major part of the challenge in formulating performance indicators for distance education. Moreover, the considerable variation in the forms distance education can take means that we need more than one standard set of measures. For example, distance education that is largely materials-based (as is the case in traditional correspondence education) functions differently from distance education offered by interactive videoconferencing. An efficiency measure, for instance, can be formulated for the one form that is not appropriate for the other.

This article describes the dilemmas faced by uni-mode distance education providers because they do not generally fit conventional standards well. Illustrations are provided from efforts to apply key performance indicators formulated for conventional universities to two university-level distance education institutions: the Open University/Open College of British Columbia and Athabasca University in Alberta. The article concludes with a discussion of what kinds of indicators would be more appropriate and effective for distance education providers.

The Concept of Performance Indicators

As Borden and Bottrill (1994) point out, “The term performance indicators may seem straightforward, but even a brief examination of the literature reveals that many shades of meaning have been attached to this concept” (p. 11). A useful (albeit less than definitive) way to characterize performance indicators is according to their primary use. For example, Kaufman (1988) argues that performance indicators should be linked to specific processes or activities because such a link is essential for determining whether a process or method is performed correctly. The relevance of this view to conventional classroom instruction is arguable, as evidenced by the prolonged debate over how to improve teaching and reengineer university curricula. However, many forms of distance education (and especially correspondence education) are based on distinct, industrial-like processes that can benefit from detailed management information.

An alternative view of performance indicators is to use them to guide institutional resource allocation and institutional planning. Typically, performance indicators at this level condense detailed operational data into simpler, summative measures that derive much value-added information because of their more direct relationship to a specific organizational context.

A third view of performance indicators is their role in addressing issues of political accountability and funding priorities. Dochy, Segers, and Wijnen (1990) characterize performance indicators in this context as a “public sector surrogate for the information generated elsewhere by the market system” (p. 48). Inherent in this concept of performance indicators as accountability measures is a sense of expectation of what educational institutions are supposed to do, how they do it, and how efficiently and effectively they function. However, institutions providing university-level education have consistently had to contend with ill-defined, often inconsistent expectations that vary according to the audience to be addressed. Even within the walls of the university academy there is considerable ambiguity about the role of universities in our modern world. This state of affairs is complicated because of the various constituencies served by the universities (with their differing expectations) and the fact that most universities are largely funded with public money (which implies another confounded set of expectations).

Even to the extent that expectations may be clarified and agreed on, there is still the considerable challenge of formulating an appropriate measurement and obtaining the requisite data. In many instances, some sub-optimal or proxy measure must suffice. Often there is just plain disagreement about what expectations are reasonable and how best to measure any given expectation. When governments, policy-making bodies, the institutions themselves, and sometimes miscellaneous other agencies are involved, formulating performance indicators and obtaining measures for them becomes a political process. As a result, there are not necessarily absolutes with regard to the formulation and measurement of performance indicators in the public accountability context. That said, at least so far as conventional universities are concerned, there does seem to be a surprising commonality in practice.

This article addresses only performance indicators in this accountability sense for two reasons. One is that each level of interest, as described here, requires its own detailed treatment of performance indicators (PIs) because PIs are so context-dependent and can vary according to the general purpose they are meant to serve. The other reason is that the two organizations available for purposes of illustration clearly have had to respond to pressures of “political accountability and funding priorities,” and the process whereby the indicators were arrived at and the specific form they took have been shaped by these pressures. As we see from an examination of these PIs, the pressures of “political accountability and funding priorities” have led to an emphasis on measuring input and to a lesser degree outcome, not process.

The issue of performance indicators in university-level distance education is a somewhat different matter. At present, there appears to have been no comparable process in this sector of higher education for formulating performance indicators and their measurement, although papers by Landstrom, Mayer, and Shobe (1997) and Madan (1997) have considered performance measurement in distance education from a more hypothetical perspective.

In the two situations elaborated on in this article, the distance education providers have simply been incorporated into the same process used for the conventional institutions. However, distance education has explicitly been set up to meet particular social and educational objectives that may have no analogues in conventionally offered higher education. As a result, performance indicators unique to the context of distance education are required. But as we see in the next section, there can also be substantial variability among distance education providers with respect to what these special features are.

Performance indicators and related measures are only a necessary first stage in assessing institutional performance. There remains the issue of deciding what meaning and implications should be attached to the numerical values. This takes us into the realm of “benchmarking” (in the sense of performance standards). Unlike the situation with performance indicators, benchmarking presents the same generic problems for both conventional higher education and distance education: with what do we compare a given measure and how much of a difference is significant? However, the dearth of comparable distance education providers does mean that benchmarking is more difficult for such institutions. The topic of benchmarking, important though it is, is beyond the scope of this article.

The Concept of Distance Education

In its most basic form, distance education is characterized by a situation in which teacher and learner are in physically separate locations. This may be due to the inability or unwillingness of the learner to attend classes at a designated site—or, as a corollary, students may not be able to accommodate themselves to the fixed scheduling required for on-site classes. Contact between teacher and learner is mediated by some form of technology. In the earliest form of distance education (generally known as correspondence study in North America and external studies in Australia and New Zealand), the mediating technology was print- and mail-based. More recent mediating technologies include audioteleconferencing, videoconferencing, and computer-mediated conferencing.

At one time, this would have served as an adequate and comprehensive characterization of distance education. More recently, however, distance education has come to be viewed as one aspect of an open learning system that provides access for people seeking further education who might otherwise not be able to avail themselves of such an opportunity. In this context, freedom from the constraints of time and freedom from the constraints of place of study are seen as two important dimensions of an open learning system. Often, though, open learning systems are also expected to provide a means of integrating specific components of the educational systems of which they are a part. For example, an open learning system may be a nexus for collaborative degree programs with other universities because such a system is more flexible regarding the recognition and coordination of credits granted by other institutions. This is a role the Open University/Open College fills in the British Columbia open learning system.

Other important features of “openness” may include any one or more of the following: open (non-selective) admissions; year-round continuous enrollment; self-paced study; generous provisions for suspending and resuming study; and the assessment, coordination, and banking of credits earned elsewhere.

These features often exist in distance education institutions in a mix-and-match fashion. Few distance education providers will possess all (or even most) of these characteristics, although both Athabasca University and the BC Open University/Open College do. Each of these functions requires one or more specialized indicators. Consequently, a comprehensive set of performance indicators for a given distance education institution ought to be tailor-made in accordance with those features of openness associated with it. The roles of distance education providers in lifelong, continuous learning and in collaborative programming are also important features of these institutions. As such, their importance should be reflected in appropriate, specialized measures of performance. As we see in the cases described here, it is difficult to institute specialized measures given the bureaucratic context in which accountability-oriented performance measures are usually embedded.

The Distance Education Providers

In this section we describe two provincially initiated performance measurement exercises and their implications for the two major Canadian distance education institutions affected: Athabasca University in Alberta and the Open University/Open College component of the Open Learning Agency in British Columbia.

Athabasca University (AU) is an autonomous institution with a mandate to offer primarily baccalaureate-level studies through distance study. It does so mainly using course materials distributed by mail and supported by a system of telephone tutors. AU has an open admissions policy, meaning that no formal educational qualifications are required for admission. It is Alberta’s fourth and newest university; the other three are campus-based, traditional institutions. In Alberta, the colleges and technical institutes have been regarded as one subsystem in the performance indicator accountability exercise; the universities (including AU) have been another. The processes used for developing and implementing key performance indicators differed somewhat between these two sectors (as did some of the indicators themselves).

The Open Learning Agency in British Columbia is a unique organization that comprises four distinct suborganizations: the Open University/Open College (OU/OC), the Knowledge Network (an educational broadcasting facility), Workplace Training Systems, and Open Schools. We are concerned here only with the OU/OC operation. In addition, as a centralized service function, the Open Learning Agency runs a “credit bank” (called the BC Educational Credit Bank). The OU/OC is a course materials/telephone-tutor-based, open admissions institution similar to Athabasca University. The Open Learning Agency is considered to be a part of the BC College, Institute and Agency System. The BC universities are regarded as a separate component and are not a part of the accountability-reporting exercise described here. This difference in the peer groups to which AU and the OU/OC belong is at least partly responsible for the substantial emphasis in Alberta on research-related performance indicators and faculty workload—and in BC on employment rate and employment satisfaction.

The Performance Indicators

Interestingly, a similar consultative process involving government and institutional representatives was used in both BC and Alberta. In both instances, collaborative working groups identified aspects of performance of interest and formulated associated measures and technical specifications. Draft documents were circulated throughout the higher education system for review and comment. In both instances, a trial or pilot implementation phase was introduced to further refine the operationalization of the measures. Table 1 summarizes the performance measures arrived at in the two jurisdictions as of December 1997. The performance indicator initiatives were both still underway at this time, and additional modifications may be made as experience is gained in both jurisdictions. Although there are some differences in categorization and language, there is a surprising degree of similarity in the two sets of indicators. In the interests of brevity—and because of the similarities between the BC OU/OC and Athabasca University—comments regarding the appropriateness of the 10 indicators for these institutions are offered jointly.

Participation/Access

The participation/access indicator is meant to provide a measure of the total number of students served during a reporting period—hence an emphasis on head counts and registrations. In the case of head counts, this is a straight count of bodies served. In the case of registrations, this is a count of students served within program areas.

However, participation and access will necessarily mean something different with respect to open learning institutions. Conventionally, these notions relate to the extent to which the higher educational system serves the 18- to 24-year-old cohort (the conventional university population base). The mandates of most distance education providers specify that such institutions are to provide access to university-level education to those who might otherwise be excluded. These institutions are also expected to provide a means for supporting lifelong learning. Issues of participation and access for distance education providers should be considered in this context. Many distance education students are working adults with families (Powell, 1997; Bernier, 1995). This implies quite a different comparative basis and interpretation. Both AU and the OU/OC report course registrations as their response to this indicator, but this is an inadequate reflection of their success in meeting their mandates regarding access and participation. (It may be worth noting that course registrations are quite different from the program registrations mentioned above; because students may take more than one distance education course, there will be an indeterminate amount of duplicate counting.)
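To make the duplicate-counting point concrete, the following is a minimal sketch (with hypothetical records and field names) of how a head count and a course registration count diverge:

```python
# Hypothetical registration records: one row per course registration.
registrations = [
    {"student_id": 101, "course": "ENGL 155"},
    {"student_id": 101, "course": "PSYC 289"},  # same student, second course
    {"student_id": 102, "course": "ENGL 155"},
]

course_registrations = len(registrations)                   # 3
head_count = len({r["student_id"] for r in registrations})  # 2

# Reporting course registrations as a participation measure counts
# student 101 twice; a head count of distinct students does not.
```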

Another consideration is that both the OU/OC and AU provide a substantial academic program service through their credit assessment, credit coordination, and credit banking functions. In many instances, this may be the only service used by students (both the OU and AU offer Bachelor of General Studies degrees that may be awarded on the basis of credits earned elsewhere). Credit review and assessment is a time-consuming process requiring staff with specialized skills and so it is expensive. There is no recognition of this service in any of the indicators used.

Finally, the reporting requirement for Course Hour Equivalents (CHE) in the BC context and student FTEs in both jurisdictions deserves comment. The Course Hour Equivalent is meant to be an extension of the classical Student Contact Hour (SCH) that provides a measure independent of delivery mode. A CHE is deemed to be a “learning experience” equivalent to one hour of scheduled class experience. The CHE for a course is assessed as the “estimated average effort” required to complete it, based on the judgment of the course designer and the institution. Clearly there is a serious conceptual problem in attempting to equate the number of hours an average student would spend working through a course with SCHs. We return to this point below.

FTEs present a different dilemma. There are a number of quite different formulas for calculating full-time equivalents. In general, though, all the formulations attempt to represent total instructional load (part-time together with full-time students) in terms of the load carried by a full-time student. A standardized way to do this is to define a normal full course load and make that the full-time norm. Differing instructional loads are then prorated relative to the normal load (the units can be numbers of courses, credit hours, or even CHEs). The complication is determining an appropriate notional normal instructional load for a full-time (and hence FTE) distance education student. It probably does not represent the same resource expenditure as a classroom-based FTE, and it will vary according to the primary method of course delivery and how course development and delivery are organized.
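In symbols, the common proration reduces to something like the following (a generic rendering; neither jurisdiction's exact formula appears in this article):

$$\mathrm{FTE} = \sum_{i} \frac{\ell_i}{L}$$

where $\ell_i$ is the instructional load carried by student $i$ (in numbers of courses, credit hours, or even CHEs) and $L$ is the notional normal full-time load in the same units. The whole difficulty for distance education lies in choosing a defensible value for $L$.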

Completion/Retention

Historically, distance education students have not generally undertaken full-time distance-based study or enrolled in full distance-delivered degree programs (Powell, 1997; Spanard, 1990). This is not surprising given their educational backgrounds, educational aspirations, and personal situations. In addition, earning a degree through distance study is a daunting task. Even a notional three-year degree (requiring, say, thirty 3-credit courses for completion) can take six years, assuming the student completes five courses per calendar year—a crushing workload for most students because of work and/or family responsibilities. Consequently, it is not surprising that relatively few students complete distance-delivered degree programs (let alone at a rate deemed efficient by the usual performance indicator specifications). For a variety of reasons, a majority of students do not persist beyond the first course they take. Recently, for example, a growing number of distance education course registrations at AU and the OU/OC have been due to “visiting” students (Powell, 1997; Open University Consortium, 1995). These are students enrolled in programs at other institutions who take distance education courses for transfer credit in those programs. In other instances, students take only one or two courses because that is the extent of their interest or need.

For these reasons, distance education institutions like the BC OU/OC and AU often report course completions and use course completion rates as a measure of performance. However, even this limited characterization is complicated: generous provisions for suspending study, year-round continuous enrollment, and self-pacing confound the simple calculation and interpretation of completion rates. In addition, a much higher proportion of students than at conventional institutions are “unclassified,” meaning they have not been admitted into programs of study (Powell, 1997; Open University Consortium, 1995). Indicators of completion and retention are usually formulated on the assumption that the students are program students. A modified indicator used by AU is the ratio of graduates to FTEs. However, this is not a program completion measure in the conventional sense; there is a considerable risk that it might be too readily interpreted as one, and these rates will look low when put alongside conventional graduation rates.

Historically, distance education institutions have cited anecdotal information that some distance education students say they get as much as they want from a course without formally completing it. In fact, both AU and the OU/OC report having large numbers of “unofficial students”—people who view the television programming for some courses (and who may even write in to obtain the course material prepared for the programs). This kind of service is difficult to document, let alone reflect in the quantitative procedures required by performance indicators. In conventional institutions, there is a student registration category called audit status, which characterizes students who follow a course out of interest but do not wish to complete the required assignments and exams to obtain credit. The drop-in/tune-in student is a sort of audit student, but the two types can be quite different in principle. For example, there is probably no direct resource commitment to the distance audit student, whereas conventional audit students, by virtue of coming on campus and occupying seats in class, imply some resource commitment, which is why they are assessed a course fee (albeit at a reduced level).

Completion/retention is an indicator that potentially can be damaging to distance education institutions. Despite all reasonable explanations concerning students’ expectations, personal circumstances, and the invaluable services rendered to such students, there is a strong sociopolitical view that degree-granting institutions are effective and efficient only to the extent that they graduate students at the degree level in some optimal period of time.

Transfer Student Performance

The transfer student performance indicator pertains to how well institutions prepare students to transfer to programs at the universities. Usually, this implies that such students originate from formally designated transfer programs at the colleges and that the students move immediately from college to university. Strictly speaking, then, this indicator is not particularly relevant to the distance education institutions themselves (and in fact AU does not report data for this indicator). However, variations on this theme are potentially important. One is the extent to which distance education institutions provide for the needs of visiting students. As noted above, an increasing number of students enrolled at the conventional colleges and universities are taking courses at the OU/OC and at AU as visiting students, which permits them to transfer credit for such courses directly into their regular degree programs. Viewed from this perspective, this service is an important contribution to the provincial postsecondary systems and should be reflected in an appropriate indicator of performance. Ironically, the provision of courses to other universities’ students by the distance education providers stands to improve the perceived performance of the students’ home institutions as measured by the indicators and procedures that have been implemented.

Another transfer-like, value-added feature of the uni-mode distance education institutions is their role in collaborative programming. Although students take a variety of courses from both distance education and conventional institutions, the students might be regarded as “shared” in a programmatic sense, rather than transferring—even though students are individually enrolled with the institution from which the course originates. This kind of arrangement has been formalized in the BC system, where the OU/OC is part of the provincial Open University Consortium (which comprises the OU/OC and the public universities in BC that offer university-level distance education courses and programs). Yet another variant of the shared student concept is the kind of collaborative “laddering” program offered by AU in conjunction with the Alberta public colleges. The effectiveness of the distance education provider in this role is a quite distinct and unique feature that would need a performance measure of a different kind.

Financial Indicators

The financial indicators are multidimensional with respect to what they might show. Although some aspects of the financial indicators (and how they are operationalized) differ in the approaches taken in BC and in Alberta, there are many points of similarity. For example, Direct Education Costs as a proportion of Total Operating Cost is a concern in both jurisdictions because this measure is taken as an indication of the amount of resources allocated to instruction and the priorities that allocation reflects. From a political point of view, the optimal allocation would be for most funding to be spent on instruction. However, the simple arithmetic in the ratio of Direct Education Costs to Total Operating Cost would lead to higher ratios for institutions like AU and the OU/OC—and this in turn could too easily lead to a mistaken interpretation that teaching costs are higher in these two institutions.
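A hypothetical numerical illustration of this classification effect (all figures invented for the purpose): if tutoring and course production that a campus institution would carry in other expenditure categories are booked by a distance provider as direct education costs, the ratio rises even when total spending is identical.

```python
# Hypothetical operating budgets, in $000s; figures are illustrative only.
campus = {"direct_education": 50_000, "other": 50_000}
distance = {"direct_education": 65_000, "other": 35_000}

def direct_share(budget):
    """Direct Education Costs as a proportion of Total Operating Cost."""
    return budget["direct_education"] / (budget["direct_education"] + budget["other"])

print(direct_share(campus))    # 0.50
print(direct_share(distance))  # 0.65 -- same total spending, higher ratio
```

The higher ratio here reflects where costs are booked, not higher teaching costs.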

Another aspect of the financial indicators is the Operating Revenue by Source as a percentage of Total Revenue. This is meant to reflect the extent to which institutions rely on different sources of funding—and presumably on how successful they are in diminishing their reliance on government funding. This indicator will be particularly problematic for AU and the OU/OC because as innovative, nontraditional forms of education, they are less likely to attract the kind of industry and private support enjoyed by the conventional “brand name” institutions.

Program costing is invariably considered an important financial performance indicator. However, the approaches taken by Alberta and BC differ markedly. In BC, Direct Instructional Cost per Course Hour Equivalent is intended to be the measure of the costs of courses (and, by extension, of programs). Conceptually, this indicator is relatively straightforward. Methodologically, there is the difficulty mentioned above of clarifying the measurement of Course Hour Equivalents. The procedures for allocating expenditures in the costing may also require special, idiosyncratic adaptations from institution to institution. The Alberta approach is to attribute instructional costs on a per-course basis to a student in a program, averaged over the students in the program (hence it is referred to as cost per student). In addition, the Alberta indicator requires a cost per graduate measure, computed by applying the program costing algorithm to each graduate and averaging over all graduates. Different costing approaches will have to be taken depending on how the distance education function is organized (which in turn depends on what technologies are used). It should be noted that any differences in program costs for similar programs offered by AU and the OU/OC may be as much a reflection of the costing methodology as of any real differences in costs.
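The three costing measures described above can be sketched as follows (one plausible reading of the text; the jurisdictions' actual allocation algorithms are more elaborate, and all names here are illustrative):

```python
def cost_per_che(direct_instructional_cost, total_che):
    """BC: Direct Instructional Cost per Course Hour Equivalent."""
    return direct_instructional_cost / total_che

def cost_per_student(attributed_course_costs, program_enrollment):
    """Alberta: instructional costs attributed per course to a program,
    averaged over the students enrolled in the program."""
    return sum(attributed_course_costs) / program_enrollment

def cost_per_graduate(total_program_cost, graduates):
    """Alberta: the program costing applied over each graduate and
    averaged over all graduates. Where few enrollees graduate, as is
    common in distance education, this figure balloons."""
    return total_program_cost / graduates
```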

The economic structures of institutions like AU and the OU/OC can also cause complications. Because much of the teaching function can be contracted to external consultants (for both course development/revision and student tutorial support), there is a relatively small number of teaching faculty, and the centralized nature of many administrative functions can cause administrative overheads to look disproportionately large. In conventional universities and colleges, a significant amount of administrative overhead is embedded in faculty and departmental cost structures. Preliminary indications from the AU KPI submissions are that AU is at about 20% for administrative overheads compared with 8% for the other three universities. In addition, at AU only 20-30% of the operating budget flows through to the teaching units, whereas it is about 75% at the other universities. Some care needs to be taken to tease out this effect and reflect it properly in the costs reported.

Distance education is often touted as a less expensive way to deliver higher education. In the case of correspondence-based distance education, it has also been argued that economies of scale should reduce unit costs. The KPI data reported by AU indicate this may be the case. As the costing indicators are implemented, the experience should provide some interesting information on this point.

Space Utilization

The space utilization indicator is meant to be a measure of how effectively conventional institutions use their costly physical plants. However, the status of this indicator in both BC and Alberta is unresolved at the time of writing. In any event, there is no direct analogue of this indicator for distance education institutions, and AU has indicated it will not report on space utilization. Often the absence of this type of substantial overhead cost is used as an argument that society must look more toward distance education as a cost-effective way of addressing the ever-growing demand for higher education.

Student Satisfaction

Data for student satisfaction are collected by survey methodology. In the case of the conventional universities, the survey is usually (but not necessarily) conducted on graduating students. In the case of the distance education institutions, this is more likely to be a survey of all their current students at some designated time. Some of the questions asked of students are reasonably generic and apply equally well to both campus-based and distance education institutions. That said, clearly it will be inappropriate to ask distance education students about some on-campus services and facilities. Conversely, such features as course packages, credit assessment, and specialized student support are unique to the distance education setting.

Distance education institutions will need to modify their survey procedures for evaluating collaborative programs, because students may be satisfied or dissatisfied with aspects of the program that are the responsibility of other institutions.

Employment Indicator

The employment indicator is also determined through a survey. For traditional on-campus students, the issues generally are whether they have found employment after graduating, whether that employment is related to their university work, and how well their university education serves them in the world of work.

The emphasis on Employer and Employee Satisfaction in the case of the BC indicators is a reflection of the current emphasis on the performance of the colleges and technical institutes in producing employable, job-ready graduates. This emphasis has quite a different flavor for the distance education institutions.

As noted above, most students studying with the OU/OC and with AU are adults already in the workforce (or who are homemakers). Hence employment is not quite the same issue for them. A minority of students will be trying to enter the full-time workforce on the basis of their distance study. However, a good many others study part-time for a variety of other reasons (Wallace, 1996). If the study is for employment-related reasons, these students wish to add to their skills and knowledge and advance themselves professionally. Thus one aspect of the employment indicator is the students’ view of the relevance of their courses and programs of study to their jobs. Because many distance students select distance study for just this reason, distance education institutions should have a built-in advantage on this indicator—and in fact AU ranked first on it (in relation to the three other universities). Given that this version of an employability indicator favors the distance education institution, perhaps an “employment value-added” measure might offer a more equitable basis for comparison. Such a measure would be intended to assess the economic and professional benefits that students derive from their distance-based study. The data for the measure would be self-reported and gathered by surveying the students.

Employer Satisfaction

Employer satisfaction has been identified as an indicator in the BC system, but not in the KPI exercise in the Alberta system (at least not as yet, although there has been a recent initiative to add such an indicator). The early inclusion of employer satisfaction in BC probably reflects the fact that the BC accountability exercise is directed predominantly at the colleges and technical institutes. The OU/OC is caught somewhat out of context here. Generally, the university functions of institutions are (arguably) not appropriately reflected by employer satisfaction and employment indicators. In addition, because many distance education students are already employed while they do their courses, the context is different. An alternative approach that looks at returns to employers would seem warranted here: returns in the sense that employees need not leave employment, on either a long- or short-term basis, to achieve professional development, or in the sense of improving the value of the employee to the company.

Research Indicators

Research indicators have not proven to be an issue at the OU/OC, even though the Open Learning Agency is a degree-granting institution. Most of the OU/OC staff are academic/administrative. Because the OU/OC does not have resident teaching faculty as such, there is no requirement for a major institutional commitment to research.

This is not quite the case for AU. As a statutory university with continuing teaching faculty, AU has a research mandate (as well as the usual teaching and service roles). Because the faculty are expected to do disciplinary research, the question arises as to how successful they are. However, because AU’s teaching is provided through distance education and it is primarily an undergraduate teaching institution with limited programs and a small faculty complement, expectations regarding research intensity must necessarily be scaled down appropriately. How this is decided and what benchmarks are reasonable are important, but as yet unresolved questions. At present, Athabasca University reports research activity in a more descriptive format that summarizes the research activities of the faculty.

Moreover, as part of its mission statement, AU has a commitment to research and development that advances the state of understanding and practice of distance education. AU has argued the case that its specialized contributions in this area should count as a measure of its performance as a distance education university.

Community Service and Economic Impact

Community service has traditionally been considered part of the mandate of universities, and this is why this indicator appears in the Alberta set. However, describing community service, let alone measuring it, has proven to be difficult in practice. The compromise in Alberta was to determine and report the economic impact of a university on its local community. The methodology for doing this has been around for a long time and is well known (Caffrey & Isaacs, 1971). It is based on determining the amount of local expenditures that can be attributed to the university. The greatest portion of this expenditure is derived from the operating revenues of the university.

However, there are two other major components: the related expenditures attributed to students while they are living in the local community and expenditures attributed to people from outside the community visiting the university.
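In simplified form, the classical calculation is (a generic rendering of the approach, not Caffrey and Isaacs’ exact model):

$$I = m\,(E_u + E_s + E_v)$$

where $E_u$ is local expenditure attributable to the university’s operations, $E_s$ is expenditure by students living in the local community, $E_v$ is expenditure by outside visitors, and $m$ is a regional spending multiplier.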

These latter two components do not, however, contribute materially to the local economies of distance education institutions. No students are brought into the local economy (they study at a distance), nor are there typically many outside visitors (the facilities of distance education providers are not designed to support the usual kinds of visitor activities, such as conferences).

Because the classical economic impact methodology is not appropriate to distance education, AU is not required to report on this indicator. Although the BC indicators do not include an impact assessment, it is interesting to consider the challenge presented by the unique configuration of the OLA because it comprises such disparate but interrelated components. For example, how could one assess the economic impact of the Knowledge Network and factor it into a consideration of the economic impact of the OU/OC?

Interestingly, the full economic impact of a distance education provider is a kind of mirror image of the conventional view. Much of the service, and even a considerable amount of physical facilities, are regionalized in distance education systems in order to reach and support the geographically dispersed students. In the traditional calculation of economic impact, these components would not be included. Moreover, an argument can be made that a value-added multiplier should be applied to these local economic effects because the students also contribute to the economy by virtue of being employed as they pursue their studies. Another aspect to such a multiplier would be the social and political value derived because students studying in their local communities would enhance regional development—rather than possibly undermining it as would be the case if students had to leave to pursue further studies.

Discussion

As is apparent from the item-by-item account given above, adapting conventional performance indicators to distance education operations yields mixed results. In some cases, there simply are no ready analogues in distance education. In others, the indicators transfer passably well, although the difference in context should lead to different interpretations and judgments. In still other cases, specialized indicators are clearly required for distance education.

In summary, the following are some aspects of distance education we think merit special consideration.

Student Contact Hours (SCHs) have traditionally been used as a proxy for how much work a course of study requires of “the average student.” The concept of the SCH needs to be replaced with something that better reflects all the work required of a student, including all study time and time spent on assignments as well as the time subsumed in an SCH. The BC indicators attempt to adjust for this through the Course Hour Equivalent. An alternative is to assess (or measure) the total hours expended by the average student in completing a course and to adjust course content accordingly. Some institutions have been trying such a system and have coined the term Student Effort Hours (SEHs) for the units of work. Finding some basis for the equivalence of courses taught at a distance and on campus has always been a crucial issue in establishing the academic credibility of distance education. Typically, distance education courses have overcompensated in the work they require of students (Henderson, Hodgson, & Nathenson, 1977), and their workloads are notorious for being heavy. Experience with distance education courses and feedback from students have helped make workloads more comparable with on-campus study, but a more systematic, empirically based approach to the problem would be helpful.
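A minimal sketch of how SEH accounting might be operationalized (the effort categories and hour values are hypothetical; the article does not prescribe a formula):

```python
# Hypothetical Student Effort Hours (SEH) tally for one 3-credit course.
# An SCH counts only scheduled class time; an SEH counts all student work.
effort_hours = {
    "working_through_course_materials": 60,  # replaces scheduled class time
    "readings_and_independent_study": 45,
    "assignments": 30,
    "exam_preparation": 10,
}

seh = sum(effort_hours.values())
print(seh)  # 145 total effort hours

# For comparison, a conventional 3-credit course scheduled three hours a
# week over 13 weeks carries about 39 SCHs. Calibrating course content so
# that measured SEHs match a stated target would address the chronic
# overloading problem noted above.
```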

Faculty workload has traditionally been measured by a variety of units: numbers of courses taught; average section size; course enrollees per FTE faculty; numbers of contact hours. None of these adapts well to the distance education context. We need the equivalent of the Student Effort Hour. One approach, which has actually been tried at some conventional universities, is to formulate a teaching workload hour (or unit). In this approach, every faculty member would be deemed to have a specified number of hours available for teaching-related activities. In distance education, these could be course development, course revision, or delivery of existing courses (with a concomitant student load). The number of teaching workload hours required for each of these activities should vary, of course, and this could be worked out on the basis of past experience and refined on an ongoing basis. Different workload units would need to be assigned according to whether a person was developing a course (with variation according to the estimated intensity required depending on subject, whether the course is being adapted or written from the ground up, or wrapped around an existing text), revising a course, or delivering a course. Another type of measure is likely to be required when dealing with student load, particularly if a system of tutors is used to support students in courses. And yet another measure is needed to account for the supervision of such a system of tutors.
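The following is a sketch of how such a teaching workload unit scheme might be set up (all activity categories and hour weights are hypothetical placeholders, to be calibrated from experience as suggested above):

```python
# Hypothetical workload-hour weights per teaching-related activity.
WORKLOAD_HOURS = {
    "develop_course_from_scratch": 400,
    "adapt_existing_course": 250,
    "wrap_course_around_existing_text": 150,
    "revise_course": 80,
    "deliver_course_offering": 60,   # excludes per-student tutoring load
    "tutor_enrolled_student": 3,     # per student supported
    "supervise_tutor": 20,           # per tutor supervised
}

ANNUAL_TEACHING_HOURS = 800  # deemed hours available for teaching duties

def assigned_load(activities):
    """Sum workload hours for a slate of activities given as
    {activity_name: count}, e.g. {"revise_course": 1, "supervise_tutor": 2}."""
    return sum(WORKLOAD_HOURS[name] * count for name, count in activities.items())

load = assigned_load({"revise_course": 2, "deliver_course_offering": 3,
                      "tutor_enrolled_student": 90, "supervise_tutor": 4})
print(load, "of", ANNUAL_TEACHING_HOURS)  # 690 of 800
```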

Indicators of cost need further refinement. Even in distance education institutions themselves, the different technologies used to support distance education have different cost structures, and these need to be differentiated. This will be particularly important for institutions that offer both on-campus and distance study courses.

Specialized features of distance education such as credit banking and the research and development of effective distance education practice require recognition. This may be as simple a matter as identifying such functions and deciding what should be counted. In the case of lifelong learning and collaborative offering of programs, mere counts may not be adequate to assess how well the distance education providers do. Perhaps a testimonial style of qualitative account obtained from student surveys would be more appropriate.

Participation and access need to be thought of in different terms for distance education. Because the extension of access to otherwise excluded groups is such an important feature of distance education, the effectiveness of providing such an opportunity to the various subgroups should be an important indicator of performance. One way to recognize this feature is to describe and argue for its value. However, in the past, this kind of special pleading has not been effective. Perhaps a value weighting applied to enrollments in targeted subgroups might result in an indicator with more impact.

A value-added economic indicator is perhaps one of the most useful indicators distance education institutions could have. A tremendous economic benefit is realized through people continuing their employment and staying in their communities while studying. A methodology for assessing this effect would be most useful. In addition, the incremental benefits of distance education study to people attempting to extend or advance their existing careers are a related value-added effect that would be well worth documenting in some way (particularly as distance study may be the only choice available to them).

Conclusion

For the foreseeable future, performance indicators mandated by governments are a reality with which educational institutions will increasingly have to contend. Evidence regarding the sensibleness and efficacy of this kind of centralized planning initiative has yet to materialize, although implementing the indicators may prove informative and useful because institutions are forced to look critically at various aspects of what they do. However, ends invariably overwhelm intentions in bureaucratic exercises like those described: how the indicators are used will most assuredly affect how measures are formulated and reported. Experience will soon tell us how meaningful the exercise has been.

Distance education institutions in particular will probably find the summative kinds of performance indicators described here of limited use in effecting improvements. Specialized indicators appropriate to distance education are required with a much wider scope than is displayed in those presently used.

References

Advanced Education and Career Development. (1997). Key performance indicators reporting manual for Alberta post-secondary institutions. Edmonton, AB: Author.

Bernier, R. (1995). Distance learning—An idea whose time has come. Education Quarterly Review, 2(3), 35-49. Ottawa: Statistics Canada (Catalogue No. 81-003).

Borden, V.M.H., & Bottrill, K.V. (1994). Performance indicators: History, definitions, and methods. In V.M.H. Borden & T.W. Banta (Eds.), Using performance indicators to guide strategic decision making: New directions for institutional research (pp. 5-21). San Francisco, CA: Jossey-Bass.

Caffrey, J., & Isaacs, H.H. (1971). Estimating the impact of a college or university on the local economy. Washington, DC: American Council on Education.

Dochy, F.J.R.C., Segers, M.S.R., & Wijnen, W.H.F.W. (Eds.). (1990). Management information and performance indicators in higher education: An international issue. The Netherlands: Van Gorcum.

Henderson, E., Hodgson, B., & Nathenson, M. (1977). Developmental testing: The proof of the pudding. Teaching at a Distance, 10, 77-82.

Kaufman, R. (1988, September). Preparing useful performance indicators. Training and Development Journal, 80-83.

Landstrom, M., Mayer, D., & Shobe, C. (1997). Indicators to measure performance in distance education: A double-edged sword. Paper presented at the International Council for Distance Education, Pennsylvania State University.

Madan, V.D. (1997). Systemic research and performance indicators in open and distance learning. Paper presented at the International Council for Distance Education, Pennsylvania State University.

Office of Research and Planning, Association of Colleges of Applied Arts and Technology of Ontario. (1996). Accountability in a learning-centred environment. Discussion paper prepared for the Colleges of Applied Arts and Technology of Ontario.

Open University Consortium. (1995). The British Columbia University open learning system, 1994-95. Burnaby, BC: Author.

Performance measurement in BC’s college, institute and agency system: Implementation bulletin #4. (1996, August 12).

Powell, R. (1997). Athabasca University student/registration profile, 1992-93 to 1996-97. Athabasca, AB: Department of Institutional Studies, Athabasca University.

Spanard, J-M.A. (1990). Beyond intent: Reentering college to complete the degree. Review of Educational Research, 60(3), 309-344.

Wallace, L. (1996). Changes in the demographics and motivations of distance education students. Journal of Distance Education, XI(1), 1-31.

Doug Shale is currently an institutional researcher in the Office of Institutional Analysis at the University of Calgary. He has been extensively involved in the Key Performance Indicator reporting exercise mandated for Alberta postsecondary institutions and as a result wishes he had become a dentist.

Jean Gomes has been an institutional researcher with the University of Calgary and has also worked for the Open Learning Agency. While at the OLA she was involved with developing and reporting key performance indicators for the Open University/Open College sector. She is presently a graduate student in the Department of Sociology at the University of Calgary.

ISSN: 0830-0445