
Time to say goodbye? The future of school performance tables

Jodie Reed and Joe Hallgarten
Institute for Public Policy Research

December 2003


The Coming of Performance Tables
Introduced by the Right...
Extended by the Centre-Left...
The Effects of Performance Tables
Effects on pupil performance
Effects on parental attitudes and choice
Effects on school behaviour
Effects on accountability
Effects on the future school improvement agenda
The Future of Performance Tables
Abolish all national testing
Keep testing but discontinue all national publication
Publish value-added alongside raw results tables
Publish value-added only (or alongside other achievement measures)
About the Authors
About the Paper
About ippr

Has the national publication of school performance tables outlived its usefulness in England? Jodie Reed and Joe Hallgarten argue that it has and explore ideas for how the Government might go about ending the league tables culture.


Having supported an ippr seminar entitled School Performance Tables: What Next? on 19 November 2003, the Secondary Heads Association is now pleased to endorse this report.

In terms of data, our schools have been the public service market leaders, introducing national performance tables in 1993. Now performance tables are beginning to be used in hospitals, police forces, benefits agencies and universities.

Yet ippr's challenging and useful contribution comes at a critical time in the debate about the accountability of public services, and of schools in particular. Policy makers are being forced to open their ears to increasing calls for reform of performance tables. In a Secondary Heads Association policy paper, entitled Towards Intelligent Accountability, published in March 2003, we argued that national performance tables have become an obstacle to legitimate government aims, such as raising the achievement of all young people and educational inclusion. The Royal Statistical Society, the National Audit Office and the Public Administration Select Committee have all expressed deep concerns over the fairness and usefulness of the tables in recent months.

A better balance must be sought between accountability of schools and the legitimate need for performance information to be in the public domain. For too long, parents have not been asked what they think. It is now time for an informed public consultation.

John Dunford
General Secretary
Secondary Heads Association


On 19 November 1992 then Education Secretary John Patten told The Times he was 'consigning to the dustbin of educational history' a system which 'denied parents the right to know how schools are performing and prevented them from making informed choices about where they want their children educated'1. This was the date of the first publication of national school performance tables at GCSE for England - a move which was replicated throughout the UK. In 1996 it was followed up with the introduction of tables reporting Key Stage 2 results in primary schools in England.

After consultation, Northern Ireland, Wales and Scotland have abolished the national publication of school performance tables reflecting the raw results of individual schools. Welsh Assembly Education Minister Jane Davidson declared 'they do not have the support of either the teaching profession or members of the public'2. This alone is not justification for a reassessment in England. What is right for some parts of the UK may not be right for others and indeed, if decision making in the four nations does not reflect differences in political, social and economic climate, what was the point of devolution at all?

In contrast, performance tables in England are becoming, if anything, more entrenched in the education system. 2003 saw the addition of national tables for Key Stage Three. Announcing the Primary Strategy in May 2003, Charles Clarke asserted that 'testing, targets and performance tables are very much here to stay'.3

However, the campaign against the tables in England, and in particular against the inevitable culture of media-produced league tables showing the 'best and worst schools in England', has been strong from the start. Jack Straw was one of the first to condemn them in 1991, as Shadow Education Secretary, calling the tables 'crude' and demanding publication of a value-added measure instead4. Having worked on this since their arrival in office, Labour finally published value-added tables reflecting progress between KS2 and KS3 and between KS3 and GCSE/GNVQ alongside (rather than instead of) the traditional raw data tables in 2002.

Many are now questioning whether tables have served their purpose and become barriers to more profound change in schools. Recent shifts in curriculum design, progress in the Government's relationship with the teaching profession and new thinking about the purposes and processes of learning and teaching are forcing people to reassess how we use performance data. David Miliband in particular has made repeated calls for 'intelligent accountability'. As the media focus on raw data 'league tables' continues despite the introduction of higher quality value-added data, this paper explores whether it is time for the Government to take a lead from the rest of the UK after all. It examines closely whether performance tables have fulfilled the objectives they were originally set to meet, how these objectives have evolved and whether, in the light of new ways of thinking about learning, they are likely to be effective in the future.


The Coming of Performance Tables

Comparisons between local schools were nothing new. Schools had always used standardised tests, the results of which were collated by LEAs, who used them to offer additional (if often insufficient) support. It must also be remembered that no government has ever introduced school league tables; governments only publish alphabetically ordered lists of every school's exam results. The ranking of these results to produce the 'best and worst schools in Britain' is a media invention.

Introduced by the Right...

The demands for national information came from a government that saw all public services, and state schools and LEAs in particular, as inefficient, unresponsive and prone to producer capture. The solution was seen as the introduction of market principles, including Local Management of Schools, per capita funding, the encouragement of parental choice, open enrolment and a new inspection regime. The introduction of performance tables can only be seen in the wider context of the attempt to create a competitive, accountable quasi-market in state education.

The publication of every school's exam results became compulsory in 1981 with the requirement to publish results in 'a common and consistent form' dating from 1991. Performance tables were a key strand of the 1993 Parent's Charter which promised parents the 'Right to Know'. The couching of choice in the language of rights illustrates that, for the Conservative Government, choice was partially seen as an end in itself.

However, these reforms were centrally driven, done for rather than with the consumer. Research on parental attitudes at the time consistently revealed a desire to know more about the achievements of individual children, not of schools. The Parent's Charter was not based on any broad consultation of parents and support for the national publication of results was thought to be limited.

Extended by the Centre-Left...

The Labour government has shown faith in market mechanisms, bolting some initiatives to promote collaboration between schools onto a framework that continues to incentivise competition.

Labour's relationship with performance tables typifies this approach. Although it is continually attempting to improve them, for instance through introducing points scores so that all children count, and through value added data, it has also ratcheted up the importance of examination results as key performance indicators, particularly through target setting at national, local, school and pupil levels. The information contained in performance tables and associated data have to some extent been used to determine whether:

However, there is a difference in stress between the two Governments. Rather than a demonstration of faith in the quasi-market, Labour use exam data as a trigger to give particular focus to schools in the greatest difficulties: in other words, as a means by which to pinpoint and act on market failure. Improved exam performance is seen by Labour as the simplest means to justify increases in education spending, giving the appropriate bang for the Treasury's buck.

Tables and targets are seen as a crucial tool to improve performance and thus have been pushed to the top of a hierarchy of overlapping accountability mechanisms. Their inflated importance has polarised the debate over their efficacy.


The Effects of Performance Tables

Clearly, the effects of performance tables are difficult to disentangle from those of the other simultaneous and far-reaching reforms described above. Had performance tables not existed, these other reforms would undoubtedly have influenced the internal and external practices of schools. However, with the move away from publication of national tables in the rest of the UK, at last we have three control groups in Northern Ireland, Wales and now Scotland - approximate as they may be.

It is also important to avoid what could be described as a London view of the world, where opinions are shaped by the blinkered views of columnists and policy advisers who, for instance, perpetuate the myth of an exodus to private education. Performance tables have had some effects, yet many schools are confident and robust enough to withstand their more pernicious influences.

Effects on pupil performance

Ostensibly, alongside the implementation of other market mechanisms, the effect of tables on school and pupil performance overall has been positive. The number of pupils gaining five A*-Cs at GCSE has risen year on year during the past ten years and primary school results have improved markedly since the publication of their performance tables.

The Government would argue that the literacy and numeracy strategies and target setting were more responsible for this improvement than league tables. In fact, Key Stage Two results have followed the classic exam pattern: first, artificially low results as teachers and pupils confront the unfamiliar; second, an artificially rapid rate of improvement, as schools adapt their teaching techniques to fit the tests; finally, a levelling off of improvement, the point at which accurate judgements can be made about year on year changes.

More pertinent to social justice goals, if performance tables have provided an incentive for this improvement, they must also take some responsibility for the widening of gaps in the performances of individual schools and pupils. The focus on five A*-Cs has given an incentive for the continuing neglect of the 'tail end of failure'. This persists: the percentage of pupils achieving at least five A*-Cs has increased at a sharper rate than the percentage achieving at least five A*-Gs over the same period. This is significant in international terms: the OECD reports that despite levels of performance at 15 years being relatively high in the UK overall, variation between pupils within schools is amongst the highest in the industrialised countries.5

Targets based on A*-G grades introduced in 1998 (but subsequently dropped) and the inclusion of points scores for GCSE and Key Stage Two results, so that all results (and all children) count, should have had some counter-effect. Yet comparing progress over 2000-2003, the percentage of pupils achieving five A*-Gs including English and Maths has actually gone down slightly whilst the percentage achieving five A*-Cs has slowly risen. It seems the incentives were not strong enough, the culture was already too ingrained in schools, and the media emphasis on A*-Cs too overwhelming.

There are other reasons to question this apparent positive effect. First, it may be that the narrow conception of learning promoted by league tables produces what Harvard's David Perkins calls 'Tactical Learning', rather than deep learning6. This is likely to occur as schools and teachers feel pressurised to concentrate their efforts on exams and teaching to the test. John Gray7 identifies three distinct forms of school improvement: the tactical, the strategic and capacity-building. His team found many English schools following the tactical route, and other schools pursuing a more strategic path, seeking to raise student achievement over time. Only a few schools showed evidence of seeing improvement as building the capacity of the organisation: developing its staff, increasing its resilience, sparking its creativity, consolidating its inner strength. This reflects wider concerns about whether the exams culture, for all its superficial impact on pupil performance, is having a negative impact on pupil motivation, particularly in the context of this government's aspirations to promote a culture of lifelong learning.

Effects on parental attitudes and choice

To some extent, parents could be described as 'reluctant consumers' in the new education quasi-market. In one 'education perception monitor' which surveyed 60,000 parents, exam results come tenth in the order of parental priorities over what makes a good school8. Recent research on secondary school choice commissioned by the DfES found that only 4 out of 10 parents used performance tables9 whilst in a Guardian poll 81% of parents said they had not used league tables to select their child's school.10

None the less, it is fair to say that parents, and other education stakeholders, have to some extent bought into this philosophy. A COI survey in 1995 showed 71% awareness of league tables and this has undoubtedly risen since then. Their influence over parental preference may be increasing, particularly amongst some parents in the primary sector and in urban areas. Recent research for the Government's London Challenge Unit shows that for parents sending their children to a school in a different borough, a higher place in performance tables is the top reason - the reputation of local schools and the child's happiness come fifth and sixth respectively11. This contrasts with the figures for all parents, where 'league tables' came only twelfth in a list of factors considered important when selecting a secondary school.

However, preference does not always equate to choice for all. The Welsh consultation highlighted the fact that tables are rarely used for choice purposes outside the large conurbations, where distance means that choice is frequently not a reality. Frustrated choice is also illustrated by the rise in appeals over places in primary and secondary schools, especially acute in and around London. Furthermore, a 2002 study by Stephen Machin and Steve Gibbons shows a correlation between league tables and house prices, indicating that many parents may be priced out of choice, especially in urban areas12. They estimate a premium on postcode-sector house prices of between 2.6% in the West and South West and 4.2% in the East, South East and North for each 5% improvement in the proportion of children reaching Level Four at the end of Key Stage Two. In London, for example, that translates to an average of £17,600 on house prices in the local area.

Whilst schools had reputations and varying popularity long before league tables were introduced, the polarisation of school reputations fed by league tables (particularly in cities) has led to massive oversubscription of some schools and undersubscription of others. If this trend continues, access in some areas will become increasingly limited to those with the financial means, and the future pressures on popular and unpopular schools could be significant.

Meanwhile, fundamental questions must be asked about how useful national tables, especially reflecting raw results, can ever be even for those parents who are in a position to 'choose'. The limits of tables are discussed further below with relation to accountability.

Effects on school behaviour

Performance tables may have impacted upon parental behaviour far less than they have impacted upon school behaviour. In one respect, since the introduction of tables and accompanying market mechanisms, schools have used performance data to become far more outcome-focussed, leading to improvements in teaching and learning. Yet anecdotes abound about school practices that border on obsession: from rising accusations of malpractice in the SATs to schools asking ill children to be present for SATs week.

It should come as no surprise that performance indicators change an organisation's priorities. That is precisely their purpose. The concern occurs when such indicators skew priorities to the extent that other, normally less measurable, goals are relegated or jettisoned. As the Public Administration Select Committee argued:

'Whereas improvement requires knowledge and awareness of where best practice can be found, simplistic interpretation by the media among others, distorts this objective, emphasising a crude form of accountability rather than helping to improve services'13

Examples might include train operating companies that miss stations to meet punctuality targets, or doctors who refuse to treat high-risk patients due to the publication of morbidity rates. In schooling, for missed stations, read whole curriculum areas. For high-risk patients, read disaffected pupils. It is not just the 'disaffected' who are impacted upon by the skewing of priorities. The importance attached by teachers to exam results is believed by many to have had a damaging impact for conscientious pupils across all ability ranges.

League tables have certainly skewed the responsiveness of schools. Whether this has been in a positive direction is open to debate.

Effect on admissions and exclusions

After consultation with secondary headteachers, West et al argued that 'there is little doubt that the most effective strategy for improving GCSE performance is to change the student intake.'14 Whether headteachers use this strategy is another question. The 'selecting in' and 'selecting out' of pupils does occur, and not only through overt selection methods. League tables have given a clear incentive to a practice that has always happened. Studies have shown that many schools take steps to promote themselves to high-achieving pupils15. More disturbingly, the significant rise in exclusions during the 1990s must in part be blamed on a combination of league tables and open enrolment.

This Government has recognised the link between league tables and the explicit neglect of low ability and disaffected pupils. There have been repeated attempts to modify performance tables to break this link such as the inclusion of points scores and the publication of the proportion of pupils with Special Educational Needs alongside the results data. However, although this government is giving more attention and resources to disaffected pupils than any previous administration, truancy targets have not been met and recorded permanent exclusions are on the rise again. Particular concern exists about levels of fixed-term and informal exclusion where no national data is available. There is certainly scope for performance data to be used as a far more effective mechanism to raise the achievement of all underachieving pupils.

Effect on school-to-school relationships

One area where market mechanisms have been uniquely successful, and league tables have played their part, is in fostering a competitive culture between schools. There may have been benefits gained from this culture. However, the same culture may have altered admissions and exclusion policies, so that schools collectively neglect their responsibilities to children who are less easy to teach. It has also created a barrier to the collaboration upon which school self-improvement is so reliant, inhibiting the traditional sharing of best (and learning from worst) practice which Tim Brighouse has described as the 'interdependence' between schools. Education Action Zones aimed to promote such interdependence, but have largely been unable to erode this competitive culture. It remains to be seen whether new initiatives such as Diversity Pathfinders, Federations, Leadership Incentive Grants and Excellence Clusters will prove more effective.

The schools most damaged by this competition, those at the bottom of league tables whose spiral of decline is exacerbated by a concentration of disaffected children, are more conspicuous than ever. Although this may contribute to problems with pupil intake and teacher morale, they are, at least, easy to identify, and they can receive additional targeted resources and support. Whether this is sufficient compensation for the effects of public castigation needs careful monitoring.

Effect on teaching and learning

One effect has been the over-targeting of resources on pupils who are on the borderline of whatever success measure is given most prominence in performance tables. In recent years, Year Six pupils on Level Three and Year Eleven pupils at the GCSE grade C/D interface may have had more money spent on them than any private school pupil. As Michael Barber argued three years before initiating booster classes and other catch up initiatives, 'these measures can certainly help if you have a few months to go before the exams; they are not to be sniffed at, but they don't amount to a long-term improvement strategy. They won't make up for weaknesses in the core job of the school.'16

Another major concern is that curricula and pedagogies have been narrowed to focus on the measurements that performance tables have chosen to value. At Key Stage Two, there is a sustained anxiety, not only that non-core subjects are being marginalised, but that areas within the core subjects, for instance drama in English, or enquiry and practical experiments in science, are also suffering. This is a situation that the most confident schools can always avoid and may be a short-term problem (if the literacy and numeracy strategies are effective, most pupils may achieve Level 4 without such intense cramming). As Key Stage Three results are published for the first time, lower secondary schools may be about to undergo the same painful process.

A final complaint is that performance tables ensure that summative assessment or 'freeze-frame education' is prioritised over formative 'assessment for learning', despite our knowledge that the latter, if done well, contributes far more to children's learning.

In changing teaching and learning, performance tables have of course changed teachers. Like Ofsted, performance tables were never designed to placate teachers, nor should they have been. They may in some way be contributing to teacher supply difficulties and adding to the problems of recruitment in poorly performing schools. Whatever boost they give to the schools at the top of the tables, they probably do more damage to the morale of those teachers in the so-called 'worst schools in Britain'.

Effect on accountability

Although the 2001 White Paper was full of praise that 'teachers have embraced accountability', there is an increasing acceptance, best asserted in the SHA document on 'intelligent accountability', that accountability mechanisms are blurred, overlapping and excessive, forming a blanket weed that hinders bottom-up innovation.17 Where do league tables fit into this picture?

On one level, they could be seen as the key to 'earned autonomy': when success is demonstrated by good exam performances, a school will be left alone. However, it remains to be seen what this government really means by 'earned autonomy'. Command and control can be habit forming.

However, there are well rehearsed arguments about the tendency of raw data to be misleading for accountability purposes (and indeed for those who are in a position to make consumer choices about schools).

Crude measures that focus on overall exam results say little about whether a school is good for any particular child. They ignore, for example, differences between subject areas, do not take account of broader definitions of achievement and say nothing about school culture. Ofsted reports may give a far better indication of all of these elements.

Crucially however, the tables do not even do the job of reflecting attainment accurately or fairly. At least 80% of the variance in pupil performance is due to factors lying outside the school, and the data takes no account of this. The tables therefore tell us more about the social make-up of schools than the quality of teaching18. The National Audit Office has emphasised the degree of contextualisation needed for performance data to accurately reflect the performance of schools and, hence, the considerable misrepresentation in the current data19. Based on 2002 Key Stage Three and GCSE/GNVQ results, they analysed secondary school performance adjusting for external factors including prior attainment, gender, ethnicity and eligibility for free school meals. Their results show that variations between schools diminished substantially once many of these adjustments had been made. Applying this to the newspaper league tables for GCSEs, they found, for example, that of the 621 schools ranked in the bottom 20%, just 272 remained in this category once performance had been adjusted for the influence of external factors. Further, 60 of the schools moved up to the top 20%. The converse occurred for some high-ranking schools.

For the Government, the focus on value-added data would partially resolve this issue. Unlike raw data, value-added data is a measure of pupil improvement and therefore takes into consideration some contextual factors. For parents looking to assess the impact of a school on academic performance, rather than the social make-up of the school, value-added is much more honest and useful. Despite statistical limitations relating to sample size and the inability of the most easily digestible value-added data to reflect school performance for pupils of different ability ranges, it is a more accurate proxy. For parents who are concerned with social make-up, local social networks have always been the prime source of information anyway.
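The mechanics of a value-added measure can be illustrated with a toy calculation. The sketch below is purely illustrative and is not the DfES methodology: the pupil scores and the 'expected' national medians are invented for demonstration. The idea is that each pupil's end-of-key-stage score is compared with the score achieved nationally by pupils who started from the same point, and the school's value-added score is the median of those differences.

```python
# Illustrative sketch only: a toy median-based value-added measure,
# loosely in the spirit of the KS2-to-KS3 value-added tables.
# All figures below are invented for demonstration purposes.

from statistics import median

# Hypothetical national picture: for each KS2 points score,
# the median KS3 points score achieved nationally by pupils
# who started from that point.
expected_ks3 = {21: 31, 24: 34, 27: 37, 30: 40}

# Hypothetical pupils at one school: (KS2 points, KS3 points).
pupils = [(21, 33), (24, 34), (27, 36), (30, 42)]

# Each pupil's value-added is actual KS3 score minus the score
# expected from their KS2 starting point; the school's score is
# the median of these differences.
differences = [ks3 - expected_ks3[ks2] for ks2, ks3 in pupils]
school_va = median(differences)
print(school_va)  # 1.0: pupils progress slightly faster than expected
```

A positive score indicates pupils progressing faster than pupils with similar starting points nationally; a negative score indicates slower progress. The attraction for accountability purposes is that prior attainment, the largest external factor, is netted out by construction.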

In terms of general Government accountability to the public, as opposed to school accountability to parents, school performance tables serve no unique purpose. It is perfectly possible to continue using exam results as a justification for increases or redirection of education spending without publishing nationally collated tables. The information put out annually summarising total outcomes at GCSE and Key Stage Two sufficiently highlights improvements and overall weaknesses in the system. This data could be further backed up by inspection evidence.

On another level, the more sophisticated and streamlined other accountability mechanisms become, from performance management to Ofsted to school self-evaluation, the more meaningless, crude and flawed league tables may seem. In the long term, the best way to abolish league tables could be to render them unnecessary through higher quality information about individual schools and, above all, individual children.

Effect on the future school improvement agenda

Crucially, looking to the future, the publication of performance tables seems to be acting as an obstacle to other parts of the drive to raise achievement.

First, the Government is now trying to move from what David Hopkins describes as an age of 'informed prescription' to an age of 'informed professional judgement' with school self-improvement at its heart. On one level, performance tables could be seen as the key to earned autonomy; when success is demonstrated by good exam performances, a school will be left alone. However, it is arguable that a much more radical assertion of trust is needed if the next phase of reform is genuinely to be led by schools. Indeed the rumblings of a revolution in Government target setting have been spearheaded in the education sector with the Primary Strategy which made primary teachers the first public servants to be responsible for setting their own targets. Whether this revolution takes root however seems largely dependent on whether teachers have the confidence to use new powers to set targets which produce the best overall outcomes for their pupils. It is questionable whether such an outcome is achievable in a league tables culture which continues to emphasise narrowly defined academic outcomes above all else.

Second, there are clear tensions between the Government's ambitions for more individualised, tailor-made learning and the demands for national, comparable performance data. An inappropriate curriculum has been recognised as a major demotivator for underachieving 14-19 year olds, and the Government have responded by giving greater emphasis to a range of 'vocational' courses and a reduction of prescriptive elements of the curriculum for this age group. The new 14-19 agenda is an example of how personalised learning is causing a shift in emphasis from parental choices between schools towards choice and flexibility for individual young people in the education system as a whole.

This throws up a significant problem: mix-and-match qualifications do not fit into the boxes so loved by number crunchers. The difficulty of applying a uniform marking system as modes of curriculum and assessment diversify is already in evidence. The Government's decision to give intermediate GNVQs an equivalency rating of four A*-C GCSEs has led to a surge of schools taking advantage of what is seen as easy league table success. Thomas Telford School in Shropshire, for example, has embraced GNVQs since its inception. Yet now all of its pupils take at least one GNVQ and some leave with a total equivalent of nineteen GCSEs, contributing in large part to its outstanding league table performance. If the full potential of the personalised learning agenda is realised, and curriculum and assessment continue to diversify, the Government and the public will have to accept a decreasing number of points of comparison. National performance tables in the future will then reflect differences in achievement in core subjects only at best.

Third, the importance of school-to-school collaboration is increasingly recognised as crucial for driving up standards overall through generating local solutions and spreading best practice. It has been emphasised and incentivised through a range of measures from provisions for federations in the Education Act to the setting of collaboration as a precondition for schools being awarded Specialist Status (and attached funding). But are performance tables, the most crude mechanism of competition, compatible with collaboration or is a more substantial culture shift again required? Whilst it may be possible to reconcile a certain degree of competition with collaboration, 'good schools' will always be reluctant to link up with those which are struggling. Also, as collaboration takes off and several institutions become responsible for parts of any individual's learning, it will become increasingly difficult to hold one single institution responsible for the exam successes or failures of any individual learner.


The Future of Performance Tables

Performance tables have rapidly become central to the public debate on education. They are increasingly used by the public to make snap judgements, by Ofsted to make pre-judgements, and by newspapers to make headline-grabbing judgements of schools. They have become highly politicised instruments: witness Tony Blair's election-time description of the results of tests that have existed for only seven years as 'the best primary results ever'. The performance culture has come to dominate the management strategies of a large number of schools.

Above all, at a time when deep thinking is required about the purpose of schooling, and the what, where and how of children's learning, league tables have narrowed and oversimplified the debate about the future of learning. It is true that some of their effects, taken in concert with other mechanisms to measure and demonstrate performance, may have been benign. However, many of these benefits could continue without the national publication of information.

Abolish all national testing

The campaign to abolish national testing, particularly at Key Stages One and Two, has long existed, but it is now having a wider impact. A recent NUT survey of three thousand teachers reported that 94.5 per cent believed the tests had adverse effects and 22 per cent said they led to excessive pupil stress21. These areas of concern demand attention, and our pupils deserve decent research to discover the full effects of exams and exam pressure on their motivation, stress levels and behaviour.

However, the question remains whether it is exams themselves that damage children's learning and motivation, or the associated pressures that this paper has discussed, which lead to schools' over-prioritisation of 'high-stakes tests'. Of the 'three Ts' of tests, targets and tables, the tests themselves may be the most benign. Re-engineer targets so that they are locally driven, change or abolish tables in the ways described below, and the more damaging effects of tests may disappear.

National performance examinations and data have a positive role to play in the education system. Despite challenges, they are in one sense the key to the concept of 'earned autonomy', and the rich data now being collected could make the UK a world leader in evidence-led school improvement at the individual as well as the institutional level. The best way for the Government to resist the pressure to abolish tests altogether, and to avoid the potential damage of a SATs boycott, would be to take a more open view on how the data produced by exams should be used. This requires consideration of the options below.

Keep testing but discontinue all national publication

Would discontinuing the national publication of performance tables reflecting individual schools' results have an impact? Exam results would still be published at local level, and might be published at LEA level too, as they are in Northern Ireland, Scotland and Wales, but there would be no national ranking, including of value-added data.

Many would contend that this is the only way of removing the harmful effects of a league tables culture; it would also eliminate a major obstacle to realising the full potential of value-added data as an internal tool for school improvement. The Government has introduced new software with this end in mind. The 'Pupil Achievement Tracker', sent out to headteachers earlier this year, enables all schools to set individual pupil targets more easily and accurately using the vast, rich banks of information now held. The concern is that allowing the public to access this data could lead to its misuse, as parents seek to lay blame on individual teachers. In the context of the shift from parental choice between schools toward pupil choice within schools discussed above, public access to this data could lead to unrealistic demands from education 'consumers' wishing to assert their right to choose a specific teacher within a school.

Whilst this discontinuation approach seems to have been successful (so far) in the rest of the UK, we must bear in mind that non-English UK settings were never very receptive to the introduction of market-based solutions. The other three countries all have much higher proportions of their populations living in rural areas and small towns, and hence 'choice' was never a concept which yielded much meaning22. Should this approach be adopted in England, it may be difficult, especially since the Freedom of Information Act, to prevent any newspaper or organisation from collating local data into national statistics. Such tables would lack any centrally approved status or reliability. Even without newspapers constructing tables, parents would continue to make judgements on schools within communities, and social networks would continue to perpetuate and exaggerate reputations.

In this context, would a retreat from nationally collated performance information really mark progress or would it simply have a negative impact on the quality of parents' judgements? It could be argued that abolishing tables reduces transparency and privileges those with the social and cultural capital to find out the results themselves.

Further, in England, it is impossible to imagine a government which has committed itself to a 'four years to deliver on public services' mission scrapping the very tables which could demonstrate that delivery. Governments would fear being seen to resurrect an age of producer capture, in which information, if gathered at all, was deliberately withheld from a public deemed too ignorant to need it. At the same time, if primary and secondary exam performances improve too slowly over the next four years, the Government may be led to use other performance indicators to demonstrate its success.

Publish value-added alongside raw results tables

'More information, not less' is clearly the popular political answer, and this is the tack the Government in England has been taking for some time, as reflected in the ongoing development of the value-added project alongside the continued publication of raw-data performance tables.

Many believed that the publication of this data would lead to a natural withering of the league tables culture, and there is a certain logic to this argument. The contextualisation offered by value-added data clearly tells us more about performance than raw data (as discussed), yet it also shows that in reality there is no great distinguishable difference between most schools. Because of small sample sizes, there are only a relatively tiny number of outliers which analysts can say, with any degree of statistical certainty, are 'good' or 'bad' performers. As Harvey Goldstein points out, even once one has accounted for large confidence intervals, there are empirical limits. For example, because value-added is a measure of improvement, the cohorts being used are by definition several years out of date. Using a deeper sample of data which more fairly reflected progress since primary school (which Goldstein suggests is the biggest indicator of later achievement)23 would mean using an even older cohort.
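The small-sample point can be illustrated with a simple sketch (all numbers hypothetical): a normal-approximation 95 per cent confidence interval for a school's mean value-added score widens rapidly as cohort size falls, so a modestly above-average school is often statistically indistinguishable from the national average.

```python
import math

def value_added_ci(mean_va: float, pupil_sd: float, cohort_size: int,
                   z: float = 1.96) -> tuple:
    """Approximate 95% confidence interval for a school's mean
    value-added score (0.0 = national average), using the normal
    approximation: mean +/- z * sd / sqrt(n)."""
    half_width = z * pupil_sd / math.sqrt(cohort_size)
    return (mean_va - half_width, mean_va + half_width)

# Hypothetical school: value-added of +0.3 (in pupil-level standard
# deviation units), measured on a single year group of 30 pupils.
low, high = value_added_ci(mean_va=0.3, pupil_sd=1.0, cohort_size=30)
# The interval straddles zero, so this school cannot confidently be
# distinguished from the national average.
print(low < 0.0 < high)  # True
```

With a cohort of 30 the interval is roughly ±0.36, swamping the school's apparent +0.3 advantage; only schools with very large cohorts or very extreme scores emerge as clear outliers, which is precisely Goldstein's point.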

However, the publication of value-added has, as it turns out, failed to render raw-data league tables obsolete. Producing results that are easily digestible for the public while also reflecting their limits is not a challenge relished by the media, though ten years ago the Guardian did make an arguably successful attempt through the use of diagrams. There seems little appetite for another such trial now, and meanwhile the media continue to use English raw scores to produce overall rankings of schools. Most publish value-added information, but none rank schools according to it.

The withering-away idea seems to have been based on an over-optimistic view which, in particular, underestimated the impact of the panic over school choice in London. The polarisation of school reputations in the capital has led to a situation where London parents are twice as likely as the national average to have their first choice of school rejected24. As the recent furore over left-wing MP Diane Abbott sending her child to a private school illustrates, a high number of our political elite (including journalists and politicians) are competing in this particularly cut-throat schools market. It may be that the influence they wield disproportionately feeds the Government's impression of parents' hunger for mechanisms that appear to enhance choice, however crude.

Publish value-added only (or alongside other achievement measures)

The continued national publication of value-added data may, at the very least, encourage parents to think about what the figures really mean. If backed by a simultaneous drive to improve other methods of public accountability, such as Ofsted inspection, it could maintain all the supposed benefits of performance tables whilst removing many of the perverse outcomes.

The most commonly used argument against publishing value-added data alone is that the public would find it difficult to understand. The same argument was used against the publication of any performance tables a decade ago. Value-added data is more complex and produces less stark results, but it is ultimately far more illuminating.

There could be a case for such data to be included as part of a broader school 'national priorities' system such as has been introduced in Scotland, which publishes school data on a range of characteristics measured both quantitatively and qualitatively. As well as exam achievement, schools could thus be rated on their inclusiveness, parent and community links, levels of collaboration with other schools, and commitment to innovation and creativity. A star rating system that is easy for the public to understand but, like Local Authorities' Comprehensive Performance Assessment ratings, includes ratings on a broad range of criteria could be a useful quid pro quo for the abolition of raw league tables.


Reform of league tables is long overdue. The annual national publication of performance tables has grown from an irritating splinter into a poisonous thorn wedged in the side of educational progress. As the personalised learning agenda pushes forward, the thorn will become increasingly brittle and weak, but until someone pulls it out, it will remain.

Whether the Government has the strength to take the lead in dispelling myths in the face of media opposition is doubtful, however, especially with an election looming in 2005. The public must be brought on board, and the role of the media in this is crucial. Journalists must begin to follow the recommendations of the Royal Statistical Society and report measures of uncertainty in performance data so that the public can form their own judgements25. Further, editors must question their motives over the publication of raw-data league tables and ask whether this really does represent some sort of public service.

The logical next step is a public consultation, but it must be an informed consultation. Any such consultation must set out clear options and advise on the real strengths and limitations of the different alternatives for public accountability, including value-added data, publishing data at local level only, and broader achievement measures. It must canvass the opinions of parents, pupils and teachers nationally. Finally, it must be supported by the media, who could play a crucial role in outlining what the different options might really mean in practice. Only with media and non-partisan political support can we build public understanding that such a move would engender a shift not to lesser accountability but to a more intelligent accountability.

About the Authors

Jodie Reed is Research Fellow in the Institute for Public Policy Research's Education Team. Jodie has worked with the Education team at ippr since March 2002 and has researched for a variety of projects including Schooling in London, SchooLets and Toward Zero-Exclusion. Her published work includes a contribution to From Victims of Change to Agents of Change and articles for the TES, Education Review and the International Journal on School Disaffection. Previously she worked as a Specialist Adviser to the Joint Committee on Human Rights, assisting it in its inquiries into the case for a Human Rights Commission and a Children's Commissioner for England.

Joe Hallgarten is Senior Research Fellow in the Institute for Public Policy Research's Education Team. Prior to this, he was a primary school teacher in London and Manchester, and a researcher at the National Union of Teachers' Education Action Zone Unit. He has recently co-edited a collection on the future of the teaching profession, From Victims of Change to Agents of Change. He has also written a report on parent-school relationships, Parents Exist OK!? (IPPR, 2000), and co-edited Tomorrow's Citizens (IPPR, 2000), a collection of essays on citizenship and education. Joe's current projects include SchooLets, which is initiating community currencies in schools, and I was a Teenage Governor, which is working with secondary schools to pilot the use of pupils as 'Associate Governors'. From January 2004 Joe will be leaving ippr to become Learning Director at Creative Partnerships.

About the Paper

Time to Say Goodbye? The Future of School Performance Tables was initially prepared for a seminar funded by the Secondary Heads Association (SHA) at the ippr on 19 November 2003. Presentations were also given by Professor Harvey Goldstein, Institute of Education, and John Dunford, SHA. The paper builds on arguments previously made by Joe Hallgarten in December 2001 in an article entitled 23,000 League Tables Under the Sea, written for the ippr journal New Economy, Vol 8: 4.

The authors would like to thank Harvey Goldstein, Sheila Dainton, David Watson and Alan Wells in particular for their helpful comments. They would also particularly like to thank Susan Steed for her extremely valuable research assistance and support in the drafting process.

About ippr

The Institute for Public Policy Research is an independent charity whose purpose is to contribute to public understanding of social, economic and political questions through research, discussion and publication. It was established in 1988 by leading figures in the academic, business and trade union communities to provide an alternative to the free market think tanks.

For information on ippr's current education projects, visit our website at


  1. Patten J (1992) 'Education League Tables; School Report' in The Times, 19 November 1992
  2. Quoted in News Wales (2001) 'Welsh School league tables scrapped' in News Wales, 20 July 2001
  3. DfES (2003) 'Excellence and Enjoyment: A Strategy for Primary Schools', DfES
  4. Press Notice 2003/0095, 20 May 2003
  5. Quoted in BBC News (22 November 2001) 'Analysis: Adding value to the tables' [Online], accessed 1 November 2003
  6. OECD (2003) Education at a Glance Paris: OECD
  7. Perkins D. (1992) Smart Schools: From Training Memories to Educating Minds, Free Press: USA

  8. Gray, J, Hopkins, D, Reynolds, D, Wilcox, B, Farrell, S and Jesson D (1999) Improving Schools: Performance and Potential. Open University Press: Milton Keynes
  9. Kirkland Rowall Marketing (2003) Education Perception Monitor: Summary of National Averages to Year 2002, available online at
  10. Office for National Statistics (2001) A nationally representative survey of parents of children in school years 5, 6 and 7

  11. Carvel J (2000) 'A Good Report' in The Guardian 29th February 2000
  12. Batterham J, Educational Communities & ICM (2003) DfES London Challenge: First Survey of London Parents' Attitudes to London Secondary Schools, DfES: London
  13. Gibbons S & Machin S (2002) Valuing Primary Schools
  14. House of Commons Public Administration Select Committee (2003) On Target? Government By Measurement, Fifth Report of Session 2002-03, Volume I
  15. West A, Pennell H and Noden, P (1998) School admissions: increasing equity, accountability and transparency, British Journal of Educational Studies, 46, 2, 188-200

  16. For example Woods, P A, Bagley, C and Glatter, R (1998) School Choice and Competition: Markets in the Public Interest London: Routledge

  17. Pyke N (1996) 'Schools focus on D-graders to boost rankings' in Times Educational Supplement, 22 November 1996
  18. SHA (2003) Towards Intelligent Accountability for Schools: A policy statement on School Accountability, Policy Paper 5 SHA: Leicester
  19. Lupton R. (2002) School Quality, Free School Meals And Area Deprivation : Reading Between The Lines, Paper presented at the LSE Research Laboratory All-Centre Event. July 3rd 2002.

  20. National Audit Office (2003) Making a difference: Performance of maintained secondary schools in England, Report by the Comptroller and Auditor General, The Stationery Office: London
  21. NUT (2002) National Curriculum Tests, NUT: London
  22. Adams J & Robinson P (2002) Devolution in Practice, IPPR: London
  23. Goldstein, H. and Sammons, P. (1997). 'The influence of secondary and junior schools on sixteen year examination performance: a cross-classified multilevel analysis' in School effectiveness and school improvement. 8: 219-230
  24. Williams J, Coldon J, Stephenson K, Logie A & Smith N (2001) Parents' Experiences of the Process of Choosing a Secondary School Research Report 278, Sheffield Hallam University DfES: London
  25. The Royal Statistical Society (23 October 2003) Performance Indicators: Good, Bad and Ugly, Statement of the Royal Statistical Society's Working Party on Performance Monitoring in the Public Services

This document was added to the Education-line database on 19 February 2004