
Effectiveness and improvement: school and college research compared

Dr Paul Martinez

Paper presented at the Annual Conference of the British Educational Research Association, Leeds University, 13-15 September 2001

I am indebted to Tim Simkins for his detailed comments on earlier drafts of this article.

 

Dr Paul Martinez
Development Adviser
Learning and Skills Development Agency
Robins Wood House, Robins Wood Road
NOTTINGHAM NG8 3NH

Telephone: Nottingham (0115) 929 9121
Fax: Nottingham (0115) 929 3505

E-Mail: pmartinez@lsda.org.uk

 

 

Abstract

There are relatively well established traditions of research in both school effectiveness and school improvement. These traditions are quite mature in that they have distinct research methodologies, agreed terms of reference and discourse and, not infrequently, some consensus around advice offered to both policy makers and practitioners. There are, moreover, some signs of convergence between the hitherto distinct traditions.

The position of colleges is quite different. For a variety of reasons, colleges have provided less of a focus for educational research. At first sight, moreover, there is no equivalent to the school effectiveness and improvement traditions.

This article reviews the research that is currently available to argue that:

 

Background

Over the last thirty years there has been an explosion of interest in two related areas of research: school effectiveness and school improvement. The first attempts to answer the question: what are the factors that are associated with effective schools? The second focuses on the ways that schools improve to become more effective. Both traditions are inspired by the belief that schools "can make a difference". In other words, they are premised on the belief that some schools and some departments within schools perform better than others, and that teachers, heads of departments and head teachers can develop and implement measures to improve performance or, alternatively, to maintain existing high levels of performance.

This research effort is substantial, growing in volume, supported by a number of dedicated research centres in universities and international in scope. Similar research efforts are being undertaken throughout the English-speaking world and beyond. Recent and comprehensive introductions to this work can be found in Macbeath and Mortimore (2001), Gray et al (1999), Sammons (1999), Mortimore (1998), Reynolds (1990), Sammons et al (1997) and Reynolds et al (1997). A convenient introduction and overview written for an intended further education audience has been produced by Somekh et al (1999).

In the context of this considerable research effort in the school sector, a number of FE researchers have identified the absence of a research culture in further education (Brotherton 1998; Elliot 2000). Indeed, a recent article noted the "dearth of research to date on college effectiveness" (Cunningham 1999, p 403).

The purpose of this article is to compare and contrast the literatures of school effectiveness and improvement with similar research in the college sector. It does so by way of:

In terms of scope, the article concentrates on British research on schools, colleges, adult education services and government funded, work based training schemes. It excludes research that looks at improvement and effectiveness issues within higher education.

 

School effectiveness research

The literature of school effectiveness is firstly concerned with the outcomes of education and making comparisons between schools. In the words of Somekh:

"a more effective institution is typically defined as one whose students make greater progress over time than comparable students in comparable institutions" (1999, p 25).

School effectiveness studies generally:

There appears to be a certain amount of agreement over issues thrown up by research to date, and how these might be addressed. Thus, problems associated with a "top-down", whole institutional focus with its implicit reliance on the views of the head and senior teachers (Ouston 1999) are being addressed by work at departmental and subject level (Sammons, Thomas, Mortimore 1997, Harris et al 1997, Busher and Harris 2000). The difficulty caused by the failure of quantitative research to replicate findings from qualitative research concerning the role of school leaders (Scheerens and Bosker 1997) appears to have been resolved by the development of more sophisticated conceptual models and the application of robust quantitative research techniques (Hallinger and Heck 1998).

The growing maturity of the field can be illustrated by reference to the increasingly nuanced discussion of issues such as school context and pupil intakes. In the domain of research, at least, the old arguments around the impact of economic and social deprivation on school performance have given way to more fruitful discussions of how best to make like-for-like comparisons which will control for independent variables as far as possible (Mortimore 1998, Sammons 1999). Indeed, the school effectiveness field might be said to be mature in that meaningful discussions of methodology can take place on the basis of a substantial amount of shared theory (eg Ouston 1999).

Effectiveness research has paid attention to all phases of schooling from nursery to sixth form. It has an international dimension with British researchers increasingly engaged in a dialogue and in collaborative research with researchers from abroad, notably North America, Europe and Australasia. Further evidence of the relative maturity of school effectiveness research can be found in the meta-studies that have begun to appear. These studies typically propose theoretical frameworks based on syntheses and evaluations of scores and sometimes hundreds of research projects (Creemers 1994, Scheerens and Bosker 1997, Wang, Haertel and Walberg 1993).

In terms of the messages coming out of this research, there appears to be a broad consensus around a number of factors that have a significant impact on school outcomes. Different researchers place their emphasis slightly differently but the following list of factors drawn from Reynolds et al (1997) and Mortimore (1998) is reasonably typical:

Finally, and perhaps most importantly, the fundamental issue raised by Reynolds over a decade ago, that effectiveness research "has had much more to say about what makes a 'good' school than about how to make schools 'good'" (1999, p 23), is being addressed by a growing convergence between school effectiveness and school improvement research (eg Sammons 1999, Gray et al 1999, Macbeath and Mortimore 2001).

 

School Improvement Research

School improvement research is concerned with the question of how schools might become better or more effective.

"School improvement is about raising student achievement through enhancing the teaching and learning process and the conditions that support it" (Hopkins et al 1994 p xx).

School improvement too has an extensive literature to which Stoll and Fink provide a convenient introduction (1996).

The great strength of school improvement research and presumably the source of its continuing attraction for practitioners is that it is:

There are some indications, however, that school improvement is a less mature field of research than school effectiveness. Many of the publications take the form of case studies of action research (eg Bowring-Carr and West-Burnham 1999) and suffer the problems associated with such an approach, notably difficulties of generalisation. More generally, some school improvement research has been criticised for being "lightly empirical and naively quantitative", too reliant on ex post facto explanations and poorly predictive in terms of "what works" (Gray 2000, p 1).

These weaknesses are currently being addressed through efforts to:

What improving schools seem to have in common are that they share:

 

Research on college effectiveness

There is quite a substantial volume of research which addresses issues of college effectiveness (CE), but researchers and practitioners may have to look quite hard to discover it, since most of it is either not published at all or exists only in the "grey" literature of unpublished research dissertations and of presentations and papers at conferences.

In comparison with effectiveness research on schools, most of this research addresses issues of effectiveness only indirectly. Indeed, it could be said to be addressing two more preliminary questions: what are the causes of student failure and, by extension, what variables are most significant in accounting for student outcomes (attendance, retention, achievement and attainment)? Notable exceptions to this general observation can be found in work on value added and on differential achievement in colleges whose students are recruited predominantly from areas of high social deprivation.

The discussion which follows attempts to:

Research within individual institutions is by far the largest category of research. It is being generated by individual colleges and adult education services as they attempt to identify the reasons for drop-out and exam failure in order to develop improvement strategies. Because it is usually intended for internal consumption and use, the work is largely unknown and unseen outside the originating institutions.

Institutionally based research takes many and diverse forms, but the most common include: research undertaken as part of a programme of postgraduate study (MA, MEd, MBA and, occasionally, DEd); surveys of withdrawn students, often accompanied by staff surveys; reports produced for management purposes (typically combining analyses of data and student surveys); and research commissioned by a college from an external agency.

The questions addressed most frequently in college effectiveness research are:

At the beginning of the 1990s, the prevailing view, faithfully reflected in an authoritative report from HMI (1991), was that drop-out was largely due to factors external to colleges. The main thrust of CE research since then has been to displace that view.

 

Demographic factors

In Britain, withdrawn students do not have a markedly different profile from completing students in terms of age, ethnicity or gender (Martinez 1995, Martinez 1997a, Martinez and Munday 1998, Stack 1999).

Unlike schools, colleges cannot use entitlement to free school meals as a convenient indicator of social class. The proxy indicator used in the further education sector is therefore the relative economic and social deprivation of the electoral ward where a student lives. Research has demonstrated, however, that social deprivation measured in this way correlates poorly with retention and achievement across the college sector as a whole (Davies and Rudden 2000, p 2). A relationship has been identified, but only in the 10% of colleges which recruit the highest proportion of their students from deprived postcodes. Even in this minority of colleges, variations in the demographic composition of the student intake seem to account for no more than 50% of the variation in college performance as measured by the achievement of qualification aims (Davies 2001). The only study which has asserted a significant "postcode impact" (Vallender 1998) did not consider any intervening variables, notably mode of attendance, level of programme or subject/curriculum area.
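The kind of like-for-like check described above can be sketched in a few lines. The figures below are invented purely for illustration (no real college data are used); the point is simply that a Pearson coefficient computed across colleges is how such a weak deprivation/retention correlation would be detected.

```python
# Illustrative sketch: does ward-level deprivation correlate with retention?
# All figures are hypothetical; the helper implements the standard
# Pearson product-moment correlation coefficient.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented figures: % of students recruited from deprived wards,
# against each college's retention rate (%).
deprivation = [12, 25, 33, 41, 55, 62, 70]
retention = [84, 86, 81, 85, 83, 80, 84]

r = pearson(deprivation, retention)
print(f"correlation: {r:.2f}")  # weak (|r| < 0.5) on these invented figures
```

A real analysis would, as the text notes, also control for intervening variables such as mode of attendance and level of programme before drawing any conclusion.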

 

Student Motivation

While college improvement (CI) research has shown consistently that efforts to improve or maintain student motivation can lead to better retention and achievement (Martinez 1997 and 2000), research on college effectiveness suggests strongly that the initial motivations of students as expressed by their reasons for enrolling, aspirations, expectations of college etc, do not vary significantly between students who subsequently stay and students who leave (Martinez 1995, Martinez 1997a, Lamping and Ball 1996, FEDA 1998, Kenwright 1997).

A detailed study that explored student self-esteem and action-control beliefs with a relatively small sample of successful and unsuccessful students, did not find any marked differences between them (Stack 1999).

 

Student Decision Making

Medway and Penney (1994) were among the first to suggest that the student decision making process could be characterised as a continuous weighing of the costs of continuing with or abandoning a programme of study and that decisions to leave resulted "from rational decisions to respond to the difficulties [students] faced" (ibid p38). These early conclusions have been borne out by subsequent research. Using a variety of methods and with samples of up to 9,000 students, CE research has shown that college students have complex and multiple reasons for withdrawing from programmes of study and that decisions to withdraw can be seen as rational and positive from the point of view of students (Searle 1998, Crossan 1996, Martinez 1995, Freeman 2000, Bloomer and Hodkinson 1999, FEDA 1998, Adamson and McAleavy 2000, Adamson et al 1998, Martinez and Munday 1998).

Several studies show that students usually leave courses for several reasons (Vick 1997, Medway and Penney 1994, Martinez 1995, Kenwright 1997). One implication of this finding is that the widespread practice of recording only one, or the "main" reason for student withdrawal by colleges, officially sanctioned by FEFC (FEFC 1996 p4), misrepresents the student decision making process and gives a false picture of reasons for withdrawing (Martinez 1995, Hooper et al 1999, Kenwright 1996).

In terms of the reasons given by students for withdrawing, the conclusions of a number of different studies are remarkably consistent.

Causes of drop-out fall into three broad categories: college-, work- and personal/family-related (Martinez 1995, Strefford 1999, Kenwright 1996, FEDA 1998, Adamson and McAleavy 2000, Davies et al 2000, CSET Lancaster University 1994, Bale 1990, BTEC 1993).

 

College Related Issues

Studies which limit themselves to surveys of withdrawn students can indicate the range of causative factors and can identify those factors that are college-related (Adamson and McAleavy 2000, Adamson et al 1998, Bannister 1996, Barrett 1996, Davies et al 2000, Freeman 2000, Gill 1998, North Tyneside College 1997, Hall and Marsh 1998, Longhurst 1999, Strefford 1999). Depending on the sample size and the degree of sophistication of the research design, this can produce valuable information and insights. However, the absence of control groups of completing and successful students can make it difficult to:

These issues have been addressed in a number of larger scale studies (Martinez 1997a, Martinez and Munday 1998, FEDA 1998, Davies 1999, Responsive College Unit 1998, Medway and Penney 1994, Kenwright 1997). These studies show that withdrawn students are most strongly differentiated from completing students by:

Specifically, withdrawn students tend to be less satisfied than completing students with:

Withdrawn students are, moreover, much less willing to recommend the college to others (Martinez and Munday 1998, Martinez 1997a, Davies 1999, FEDA 1998, Kenwright 1997, Medway and Penney 1994).

The same studies demonstrate that withdrawn students are not strongly differentiated from completing students by:

Further, the incidence of financial hardship does not seem to be strongly associated with decisions to drop out in order to gain employment (Martinez and Munday 1998 p 29). The Responsive College Unit with a sample of almost 6,000 students came to virtually identical conclusions using a longitudinal research design (1998). It found, in addition, that neither part-time work nor "external time commitments" correlated strongly with drop out.

The research in the Isle of Wight College (Medway and Penney 1994) and the research conducted by Davies with a large sample of colleges and school sixth forms (FEDA 1998) suggest that the same sorts of factors which are closely associated with withdrawal are also associated with unsuccessful completion (where students complete their programme but fail to gain the intended qualification). The earlier study concluded that:

"...the factors affecting non-completion were the same factors which led to unsuccessful completion. Half of unsuccessful completers would have left before completion if an acceptable alternative opportunity had arisen" (Medway and Penney 1994 p 36).

Most CE researchers have found significant differences between the views of students and staff. With some notable exceptions, staff tend to emphasise those factors associated with student withdrawal over which they feel they have little or no control, including the nature of the student intake, resources and college policies (Gill 1998, Martinez 1995, CSET Lancaster University 1994, Davies et al 2000, Kenwright 1996, FEDA 1998).

 

Advice and Guidance

The more quantitative and larger scale research tends to emphasise the importance of teaching, learning and support processes. The smaller scale, often more qualitative CE research provides some more detailed findings about these processes. This research points to the importance of information, advice and guidance processes to help place students appropriately on courses. For younger full-time students, the issue does not appear to be lack of access to advice but rather its quality (Lea 2000). According to Wardman and Stevens (1998 p 5):

"Most of this group [of withdrawn students] had experienced some elements of careers education, at least one careers interview and had completed at least one career action plan....the research suggests that there is scope to improve the quality of guidance...".

The leading longitudinal study of drop-out found that students who felt well informed about their course were less likely to withdraw (Responsive College Unit 1998). Conversely, studies of withdrawn students have found evidence of:

 

Teaching and Learning

Student withdrawal and unsuccessful completion appear to be associated with a number of different aspects of teaching and learning. In no particular order of priority, these would include:

 

Value added research

The college research which most closely resembles the school effectiveness research can be found in discussions of value added approaches and in work on colleges which recruit their students from particularly deprived catchment areas.

Value added research in colleges is based on the exploration of significant relationships between prior attainment at GCSE and subsequent performance at A/AS Level. This work is still in its infancy in the sense that the only large scale studies to date have concentrated on methodological issues, particularly different ways of calculating value added scores, and the consequent implications for the construction of "league tables" of institutional performance expressed in value added terms. Some preliminary work has also been undertaken on the relative performance of different types of educational institutions and on the performance of students by gender (O'Donoghue et al 1997, Yang and Woodhouse 2000). Some further work on the relationships between GCSE and GNVQ will be published shortly (Martinez and Rudden 2001). Such studies have not progressed to a consideration of the processes that give rise to the value added outcomes. Indeed, the only work to date which moves beyond a consideration of patterns of performance to the reasons for such patterns can be found in occasional small scale studies based on individual A Level subjects (Holy Cross College 1999, Solihull Sixth Form College 2000, Little 2000, Park College 2001).
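A minimal sketch of the value added calculation described here, assuming the simplest possible model: ordinary least squares of A-level points on prior GCSE attainment, with an institution's value added score taken as the mean residual of its students. All colleges, students and figures below are invented; published value added methods are considerably more sophisticated.

```python
# Hedged illustration of a value-added score: actual minus predicted
# attainment, averaged per institution. Data are entirely fictitious.
from collections import defaultdict

def ols(xs, ys):
    """Slope and intercept of a simple least-squares regression line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (college, mean GCSE score, A-level points) for hypothetical students
students = [
    ("A", 5.0, 18), ("A", 6.0, 24), ("A", 7.0, 28),
    ("B", 5.0, 14), ("B", 6.0, 20), ("B", 7.0, 26),
]

slope, intercept = ols([s[1] for s in students], [s[2] for s in students])

# Residual = actual points minus the points predicted from prior attainment;
# the value-added score is the mean residual for each college.
residuals = defaultdict(list)
for college, gcse, alevel in students:
    residuals[college].append(alevel - (slope * gcse + intercept))
scores = {c: sum(r) / len(r) for c, r in residuals.items()}
print(scores)  # college A sits above the regression line, college B below it
```

This is also why the text stresses league-table construction as a methodological issue: rankings built from such scores are sensitive to exactly how the prediction line is fitted.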

The other college research which closely resembles school effectiveness research can be found in recent work by the Learning and Skills Development Agency which explores issues of effectiveness in colleges serving particularly deprived catchment areas. Based on analyses of the detailed student data set contained in the Individualised Student Record, Davies and Rudden (2000) conclude that a "college effect" accounts for around 50% of the variation in college performance and that the main distinctive feature of the more effective colleges in the sample is the relative maturity and sophistication of their improvement strategies (Davies 2001).

 

Work based training

There is a much smaller body of work devoted to work based training schemes (DfEE 1999 a, b and c; DfEE 2000). Given the paucity of this research, it is unfortunate that its methods are relatively unsophisticated. All of these reports place great reliance on interview evidence from staff and managers. One of the reports (DfEE 1999 c) is based solely on such interviews. Another (DfEE 1999 a) is based largely on focus group discussions with just 85 former trainees. A third (DfEE 1999 b) includes a telephone survey of some former trainees but its methodological information is so scant (it does not even give the number of telephone interviews), that it is impossible to form any judgement concerning the validity and reliability of its findings. Only the most recent study (DfEE 2000) includes a relatively large survey of non-completers (772 respondents). None of these reports attempts to improve the interpretation of information from withdrawn trainees by making comparisons with evidence drawn from a control group of successful or continuing trainees.

 

The limits of college effectiveness research

With the exceptions noted above, the dominant model of CE research is that it:

Leaving aside the lack of research on adult learners, the main limitation of this research is that it does not give rise to the ability to make robust like-for-like comparisons between colleges or indeed component parts of colleges. It is not, therefore, possible to identify in a systematic way the variables which colleges control and which distinguish high from low performing colleges, nor to identify the variables which are most critical.

This has implications for both policy makers and practitioners. Uncertainty remains concerning key processes and variables that colleges need to focus on in order to be or indeed become more effective. Specific questions that have yet to be answered include:

 

Research on college improvement

There is now a substantial body of college improvement (CI) research that quite closely resembles school improvement research. We have already noted that the field of school improvement research is less mature than that of school effectiveness research. The same applies to college improvement research, which takes the form of case studies and occasional syntheses of case studies.

One of the largest bodies of institutional research comprises a group of over 160 case studies from English colleges. The case studies present and reflect on the experience of projects to improve achievement and/or retention undertaken as part of the Raising Quality and Achievement (RQA) programme led by the Learning and Skills Development Agency. The case studies are drawn from a wide cross section of colleges and their content is equally varied. Some strategies have been developed and implemented across a whole college; others are implemented within departments or programme areas; and still others are based in individual courses. Case studies from the first and second round of projects can be searched and downloaded from the RQA website (www.rqa.org.uk). Two hundred further case studies will be added to the website over the next two years.

Three reports are available which provide syntheses of the RQA case studies (Martinez 2000 and 2001b, Cousin 2001). There are a small number of other syntheses based on college case studies outside the RQA programme (Kenwright 1997, Martinez 1996 and 1997, National Audit Office 2001).

The case studies have all the weaknesses and strengths associated with an action research approach. They are empirically based, collaborative ventures led by practitioners which are intended to make a difference in the real world. They have a strong focus on improvement and the transfer of successful practice, together with a level of detail sufficient to facilitate replication and transfer and to allow practitioners to make their own judgements about relevance. Almost all of the RQA case studies have been evaluated against measures of effectiveness (retention, achievement and attainment). They are, however, variable in their method, the rigour of their analysis and sometimes the robustness of their data. It is difficult, moreover, to derive generalisations concerning "good" (still less "best") practice, transferability to different contexts and, sometimes, cause and effect relationships. Nor do they easily distinguish those strategies that have had a particularly large impact on retention/achievement from those with a relatively small impact. Finally, because of a reluctance to report on improvement efforts that have failed to achieve their objectives, improvement case studies invariably lack a control group of colleges where improvement efforts have failed.

Two broad sorts of findings emerge from this work concerning the content of improvement strategies and the process of designing and implementing such strategies.

 

Content of improvement strategies

In terms of content, there is a high degree of complementarity between CE and CI research. The former identifies problems and implicitly predicts that their resolution will give rise to greater effectiveness as expressed by student retention and achievement. In many ways, the CI research validates these predictions.

Whether implicitly or explicitly, most CI research assumes a process model of the student experience which extends from initial contact, advice and guidance, to recruitment and selection, student preparation, induction, initial assessment, teaching and assessment, learning support, tutoring and on-programme support, and which ends with progression. The syntheses of college improvement strategies cited above largely agree in their conclusions that colleges can improve by:

While many of the improvement strategies in colleges are broadly similar to those identified in the literature on schools, three stand out as being different (Martinez 2000):

The particular emphasis on threshold strategies is easily explained by differences between schools and colleges. For most schools, recruitment, course placement, induction, and starting pupils off on a programme of learning are seen as relatively unproblematic. In colleges, by contrast, CE research has shown consistently that many students experience substantial difficulties both before and immediately after enrolment.

The role of the personal tutor seems to be more fully articulated in colleges than in schools. It involves oversight of the whole of the student's progress, close liaison with subject tutors and teachers, and both pastoral and academic responsibilities. In terms of curriculum, CI research has tended to emphasise the greater degree of discretion and choice that colleges have in respect of their curriculum framework.

CI research also differs from that of schools in terms of the unit of analysis. Whilst school research tends to take the school as its focus, CI research has a rather heterogeneous and eclectic focus on a variety of different levels within the college, from individual courses to the whole college by way of programmes, departments, schools and faculties.

Notwithstanding the general agreement about the sorts of strategies which have been most successful in securing improvements, CI researchers have emphasised that there are no "magic bullets", "single solutions", "one best way" or "golden rules" (Martinez 2000 and 2001b, Cousin 2001, Kenwright 1997).

 

Process of college improvement

Common features of college improvement processes seem to include:

Beyond these generalisations, the processes by which colleges improve are almost as varied as the ways in which they improve. Improvement strategies can be top-down, bottom-up or indeed shared. They can be led by a variety of different post holders, from teachers and team leaders to student services managers, to quality assurance directors, to principals and deputy principals. Indeed, "the way that strategies to raise achievement are inspired, researched, designed, implemented and evaluated varies considerably from college to college and even within the same college" (Martinez 2000 p 90).

 

Conclusions: Research and Policy Issues

This brief comparison of school and college effectiveness and improvement research suggests that there are some significant differences. Each has its strengths, but each might also have something that could be usefully shared with the other. It is notoriously difficult to "sing the music of the future", but the comparison may help to suggest some fruitful priorities for research. The remainder of this article identifies some of the ways that first school and then college research might be enriched and extended in the light of the comparisons made in this article. Where relevant, it also reflects on implications for policy and practice.

Given the longer history and greater maturity of schools research, particularly school effectiveness research, it seems likely that most of the potential for transfer will be from school to college research. There are, however, a number of areas where the flow could run in the other direction.

 

School Research and Policy Issues

A significant difference between school effectiveness (SE) and CE research is that the former has been developed largely by academic researchers, the latter largely by practitioners. This has two main implications. First, there is a wider gap between SE and SI research than between their equivalents in the college sector. Second, SE research tends to focus on the whole school and to rely heavily on evidence derived from senior staff. These considerations suggest that school research could be enriched by:

The heterogeneity and wealth of college practitioner research suggests strongly that much could be done by research sponsors and, for that matter, by head teachers to encourage more practitioner research in school effectiveness. Indeed, it is difficult to see how school projects to raise standards can be undertaken without some initial internal research of this sort. Universities and research agencies, moreover, could provide wide access to the outcomes of such research, perhaps on the model of the RQA website of college development projects mentioned above.

There is evidence from both CE and CI research that variable patterns of performance can persist over time not only between different curriculum areas in the same college but between different courses in the same curriculum area in the same college. CI research, moreover, demonstrates that many successful improvement strategies are highly specific to their own context. College research would therefore lend weight to the argument that even greater school research efforts need to be concentrated on individual subjects and departments. Presumably, this would apply particularly to secondary schools.

The reasonably sophisticated and well developed methods devised in the college sector to research student attitudes and experience could enrich research in schools.

Colleges have demonstrated that data from student surveys and focus groups can be of great value, particularly in the investigation and problem-formulation stages of school improvement projects (Cousin 2001).

 

College Research and Policy Issues

The limits of CE research to date, noted above, suggest that college researchers need to extend their methods to embrace some of the approaches developed and applied successfully in the school sector. The increasing size of colleges, the diversity of their student populations and the wide variety of their institutional missions suggest that something akin to the research framework developed within the school effectiveness tradition is required to provide answers to questions which are becoming increasingly pressing:

Comparisons with school research also suggest that college researchers need to be more willing to engage in discussions of methodology. This conclusion actually bears more on policy than on the dispositions of individual researchers. The DfES, Learning and Skills Council and the Learning and Skills Development Agency between them sponsor most CE and CI research. If more methodological discussion and rigour are required, it falls to the sponsors of research to include that requirement in their research programmes.

Value added analysis offers a well tried and tested method that could be extended to enrich both CE and CI research. More widespread use of value added methods will depend primarily on two policy decisions: the type of value added reporting which the DfES plans to introduce in the post-16 sector, and the extent to which the DfES (or LSC) is prepared to support college and school sixth forms in their improvement activity through the provision of value added data. In this respect, the very detailed staff development manual produced by the Scottish Executive (2000) in both paper and electronic forms could serve as a model for England.

The scope of college research also needs to be extended, to embrace not only the part-time, mainly adult students who make up the great bulk of college populations, but also the increasing number of students who access college through work based, distance and open learning routes and through outreach centres. The responsibility to ensure that this occurs falls partly on the major sponsors of research and partly on college managements, which until now have tended to focus their improvement efforts on younger, full-time students.

This brief consideration of the volume of CE and CI research, and of its limited accessibility (particularly CE research), indicates a need both to:

Again, these are as much policy as research issues. The major sponsors of post-16 research need to commission summaries of available research for use by practitioners. With the ready availability of Internet publishing, moreover, there is really no excuse for so much college research to remain inaccessible.

Two apparently technical issues, finally, need to be addressed. Each has a mixture of policy and research aspects. In terms of college administrative systems, the current policy which requires colleges to capture and record a single reason for student withdrawal is indefensible. Colleges need to establish valid ways of identifying any and every reason which might contribute to a student's decision to withdraw. Secondly, the research and policy communities need to establish a convenient indicator of social deprivation that is at least as valid as the free school meals indicator used in school research.

To summarise, there is quite a substantial body of effectiveness and improvement research in colleges which bears comparison with research in the school sector. The improvement literatures are broadly comparable in terms of their methodologies and findings. The effectiveness literatures are different. CE research is more closely linked to efforts to improve colleges but, compared with SE research, it is less capable of making like-for-like comparisons and of systematically identifying factors contributing to effectiveness in different types of college, for different types of students and at different levels of college organisation. The comparison suggests not only that there is scope for school and college researchers to learn from each other, but also that, in a number of areas, research and policy issues are linked. Progress on some of the research issues suggested here will either not be accomplished at all, or will be accomplished much more slowly, without the implementation of the changes recommended to universities, sponsors of college research, the Learning and Skills Council, the DfES and college and school managers.

 

REFERENCES

Adamson, G. and McAleavy, G. (2000) Withdrawal from vocational courses in colleges of further and higher education in Northern Ireland, Journal of Vocational Education and Training 52(3)

Adamson, G., Archibold, J., McAleavy, G., McCrystal, P. and Trimble, T. (1998) An investigation designed to improve the effectiveness of further education provision by reducing non-completion rates on GNVQ/NVQ courses in colleges of further education in Northern Ireland (Belfast, Department of Education for Northern Ireland)

Askham Bryan College (2000) Raising quality and achievement development project case study (published at www.rqa.org.uk)

Bale, E. (1990) Student drop out from part-time courses to non-advanced further education (Bedford CHE, M Phil thesis)

Bannister, V. (1996) At-risk students and the 'feel good factor' (MBA Dissertation, FEDA)

Barrett, J. (1996) Lifelong learning: creating successful adult learners (Southend, South East Essex College)

Basic Skills Agency (1997) Staying the course: the relationships between basic skills support, drop-out, retention and management (London, BSA)

Blaire, T. and Woolhouse, M. (2000) Learning styles and retention and achievement on a two year A Level programme in a further education college (Paper presented at the annual Further Education Research Network Conference, December)

Bloomer, M. and Hodkinson, P. (1999) College life: the voice of the learner (London, FEDA)

Borrow, L. (1996) Staying power: a customer service view of retention in further education (MBA Dissertation)

Bowring-Carr, C. and West-Burnham, J. (1999) Managing learning for achievement (London, Financial Times)

Brotherton, B. (1998) Developing a culture and infrastructure to support research-related activity in further education institutions, Research in Post-Compulsory Education 3(3), pp 311-328

Brown, H. (1998) Nature or nurture? Does tutoring increase retention on GNVQ Foundation programmes in FE colleges (MA Dissertation, Oxford Brookes University)

BTEC (1993) Staying the course (London, BTEC)

Busher, H. and Harris, A. (2000) Subject leadership and school improvement (London, Paul Chapman)

Clarke, C. (1997) Can colleges make any inroads into improving the retention of students on community courses? (MEd Dissertation, University of Warwick)

Cousin, S. (2001) Improving colleges through action research (London, LSDA, in preparation)

Creemers, B. (1994) The effective classroom (London, Cassell)

Crossan, A. (1996) Retention or "Now we've got 'em, let's keep 'em" (unpublished internal report, Ridge Danyers College)

CSET Lancaster University (1994) Quitting: a survey of early leaving carried out at Knowsley Community College (Lancaster, unpublished report)

Cunningham, B. (1999) Towards a college improvement movement, Journal of Further and Higher Education 23(3), pp 403-413

Davies, P. and Rudden, T. (2000) Differential achievement: what does the ISR profile tell us? (London, Learning and Skills Development Agency)

Davies, P. (1999) What makes for a satisfied student? (London, FEDA)

Davies, P., Mullaney, L. and Sparkes, P. (1998) Improving GNVQ retention and completion (London, FEDA)

Davies, P. (2000) Closing the achievement gap: colleges making a difference (London, LSDA)

Davies, T., Bamber, R., Rudge, L. and Stobo, M. (2000) When the going gets tough... An investigation into patterns of student withdrawals in Pembrokeshire College during 1998/99 (unpublished report, Pembrokeshire College)

DfEE (2000) Modern apprenticeships: exploring the reasons for non-completion in five sectors (London, DfEE)

DfEE (1999a) Youth trainees: early leavers' study (London, DfEE)

DfEE (1999b) Leaving training for work: trainees who do not achieve a payable positive outcome (London, DfEE)

DfEE (1999c) Tackling early leaving from youth programmes (London, DfEE)

Elliot, G. (2000) Riding the storm: practitioner research in FE colleges (Presentation to postgraduate students in the Faculty of Education, University of Central England, 10 January)

FEDA (1998) Improving GNVQ retention and achievement (London, FEDA)

FEFC (1996) Students' destinations: college procedures and practices (Coventry, FEFC)

Foreman-Peck, L. (1999) Choice, support and accountability: issues raised from the experiences of non-completing GNVQ students, Westminster Studies in Education 22

Freeman, B. (2000) Survey of student non-completers October 1999 - May 2000 (Reigate College, unpublished report)

Gill, J. (1998) Small scale research into student withdrawal at Stafford College in the Department of General Education (unpublished report, Stafford College)

Gray, J. (2000) The future of research on school improvement (Presentation at BERA School Improvement Research Symposium, University of Nottingham, 6 July 2000)

Gray, J., Hopkins, D., Reynolds, D., Wilcox, B., Farrell, S. and Jesson, D. (1999) Improving schools: performance and potential (Milton Keynes, Open University Press)

Gray, J., Reynolds, D., Fitz-Gibbon, C. and Jesson, D. (1996) Merging traditions: the future of research on school effectiveness and school improvement (London, Cassell)

Hall, D. and Marsh, C. (1998) Retaining full-time students at Calderdale College, interim report (unpublished internal report, Calderdale College)

Hallinger, P. and Heck, R. (1998) Exploring the principal's contribution to school effectiveness 1980-95, School Effectiveness and School Improvement 9(2)

Harris, A., Jameson, I. and Russ, J. (1997) A study of "effective" departments in secondary schools, in Harris, A. et al, Organisational effectiveness and improvement (London, Cassell)

HM Inspectorate (1991) Student completion rates in further education courses (London, DES)

Holy Cross College (1999) Raising quality and achievement development project case study (published at www.rqa.org.uk)

Hooper, I., Fields, E., Ham, J., Williams, T. and Wolderufael, H. (1999) Investigating retention of full-time 16-19 year old students 1997-98 (unpublished internal report, Tower Hamlets College)

Hopkins, D., Ainscow, M. and West, M. (1994) School improvement in an era of change (London, Cassell)

Kenwright, H. (1997) Holding out the safety net: retention strategies in further education (York, York College of Further and Higher Education)

Kenwright, H. (1996) Developing retention strategies: did they fall or were they pushed? (York, York College of Further and Higher Education)

Lamping, A. and Ball, C. (1996) Maintaining motivation: strategies for improving retention rates in adult language classes (London, CILT)

Lea, S. (2000) Improving retention and achievement at Level 2 (unpublished report, South Essex Lifelong Learning Partnership)

Little, E. (2000) Managing students' achievement (Work based Dissertation, Middlesex University)

Longhurst, R. (1999) Why aren't they here? Student absenteeism in a further education college, Journal of Further and Higher Education 23(1), pp 61-80

Macbeath, J. and Mortimore, P. (eds) (2001) Improving school effectiveness (Buckingham, Open University Press)

Martinez, P. (2001a) Great expectations: setting targets for students (London, The Learning and Skills Development Agency)

Martinez, P. (2001b) College improvement: the voice of teachers and managers (London, The Learning and Skills Development Agency)

Martinez, P. (2000) Raising achievement: a guide to successful strategies (London, FEDA)

Martinez, P. (1998) Staff development for student retention in further and adult education (London, FEDA)

Martinez, P. (1997) Improving student retention: a guide to successful strategies (London, FEDA)

Martinez, P. (1997a) Student persistence and dropout (paper presented at BERA annual conference in York, available at www.leeds.ac.uk/educol)

Martinez, P. (1996) Student retention: case studies of strategies that work (London, FEDA)

Martinez, P. (1995) Student retention in further and adult education: the evidence (Blagdon, Staff College)

Martinez, P. and Rudden, T. (2001) Value added in vocational qualifications (London, The Learning and Skills Development Agency, forthcoming)

Martinez, P. and Munday, F. (1998) 9,000 Voices: student persistence and drop-out in further education (London, FEDA)

McGivney, V. (1996) Staying or leaving the course (Leicester, NIACE)

McHugh, J. (1996) Application, enrolments and withdrawals at High Peak College (unpublished report, High Peak College)

Medway, J. and Penney, R. (1994) Factors affecting successful completion: the Isle of Wight College (unpublished report, The Further Education Unit)

Mortimore, P. (1998) The road to improvement: reflections on school effectiveness (Lisse, Swets and Zeitlinger)

National Audit Office (2001) Improving student performance. How English further education colleges can improve student retention and achievement (London, Stationery Office)

National Council for Vocational Qualifications (1997) Investigation of non-completion research (London, NCVQ, unpublished internal report)

North Tyneside College (1997) The causes of student drop-out from courses: college executive summary (Newcastle, North Tyneside College and University of Northumbria, mimeo)

O'Donoghue, C., Thomas, S., Goldstein, H. and Knight, T. (1997) 1996 Study on value added for 16-18 year olds in England (London, DfEE)

Ouston, J. (1999) School effectiveness and school improvement: critique of a movement, in Bush, T. et al (eds) Educational management: redefining theory, policy and practice (London, Paul Chapman)

Responsive College Unit (1998) National retention survey report (Preston, RCU)

Reynolds, D., Sammons, P., Stoll, L., Barber, M. and Hillman, J. (1997) School effectiveness and school improvement in the United Kingdom, in Harris, A. et al, Organisational effectiveness and improvement (London, Cassell)

Reynolds, D. (1990) Research on school/organisational effectiveness: the end of the beginning?, in Saran, R. and Trafford, V., Research in education management and policy: retrospect and prospect (Brighton, Falmer Press)

Sammons, P. (1999) School effectiveness: coming of age in the twenty-first century (Lisse, Swets and Zeitlinger)

Sammons, P., Thomas, S. and Mortimore, P. (1997) Forging links: effective schools and effective departments (London, Paul Chapman)

Scheerens, J. and Bosker, R. (1997) The foundations of educational effectiveness (London, Pergamon)

Scottish Executive (2000) Standard tables and charts: staff development manual (Edinburgh, Scottish Executive, and also published at www.scotland.gov.uk)

Searle, A. (1998) Increasing completion and success rates for young people on full-time programmes (unpublished internal report, Exeter College)

Solihull Sixth Form College (2000) Raising quality and achievement development project (published at www.rqa.org.uk)

Somekh, B., Convery, A., Delaney, J., Fisher, R., Gray, G., Gunn, S., Henworth, A. and Powell, L. (1999) Improving college effectiveness (London, FEDA)

Sommerfield, D. (1995) An investigation into retention rates and causes of drop-out of students on full-time engineering courses within a college of further education (MA Dissertation, University of Greenwich)

Stack, J. (1999) Investigation into student retention and achievement at Bradford College (MSc Dissertation, University of Manchester)

Stoll, L. and Fink, D. (1996) Changing our schools (Buckingham, Open University Press)

Strefford, D. (1999) Halton College research results: interviews with students who have left the college (Cheshire Guidance Partnership, mimeo)

Vallender, J. (1998) Attendance patterns: an instrument of management (MBA Dissertation, Aston University)

Vick, M. (1997) Student drop-out from adult education courses: causes and recommendations for reductions (MA Dissertation, University of Wolverhampton)

Wang, M., Haertel, G. and Walberg, H. (1993) Towards a knowledge base for school learning, Review of Educational Research 63(3)

Wardman, M. and Stevens, J. (1998) Early post-16 course switching (London, DfEE)

Yang, M. and Woodhouse, G. (2000) Progress from GCSE to A and AS Level: institutional and gender differences and trends over time (Paper delivered at BERA annual conference, September)

 

This document was added to the Education-line database on 25 September 2001