Evaluating the impact of widening participation initiatives

Lee Elliot Major argues for a more evidence-based approach to university access work.

It is nothing short of a scandal that the vast majority of work in our universities and colleges aimed at opening doors to students from low- and middle-income homes is not evaluated properly. We spend over £1 billion a year on programmes to widen participation and broaden access to our academic elites; yet we know very little about what impact most of these efforts are having. Well-intentioned efforts to aid social mobility – from school outreach programmes to financial support for students – are effectively operating in the dark, uninformed by any hard evidence of what has worked before.

The problem has come to light again with the release of a report for the Higher Education Funding Council for England (Hefce) which “found little evidence that impact is being systematically evaluated by institutions”. Previous reports have revealed a lack of even the most basic monitoring of data and outcomes across the sector, prompting the English funding council to issue guidance on evaluation.

The national strategy unveiled by Hefce and the Office for Fair Access (Offa) meanwhile has recommended a light-touch network of regional coordinators to facilitate collaboration between universities and schools. This sounds suspiciously like ‘Aimhigher light’ – a slim-line version of the previous national outreach programme in England. Aimhigher was cut in the last Whitehall spending review for lack of evidence of its impact; a great deal of good work was undermined by the absence of hard data.

The gathering of robust evidence remains the Achilles heel of the sector. It seems tragic that this should be so in our respected seats of learning. Once when the Sutton Trust offered to evaluate an outreach scheme at a highly prestigious UK university, the head of access declined, arguing that they would rather use the extra money to help more students.

The problem with this response is twofold. First, we didn’t (and still don’t) know whether the programme was actually having any impact on the students taking part. Second, if we did evaluate it, the lessons could enable many thousands more students to be helped properly in the future. The current default – simply surveying participants to see if they enjoyed the experience – is no longer good enough. The question that must be asked is whether the programme had the desired impact on students, an impact that would not have happened had the programme not existed: did it enable students from poorer backgrounds to enter university who otherwise wouldn’t have done so?

But there are signs that the tide is at last turning. To its credit Offa is urging institutions to adopt a more ‘evidence based’ approach. What is now needed is the full mix of evaluation and monitoring – local pilot studies as well as national randomised trials – to measure the outcomes of access work.

Universities can look to the work we have been doing with schools on classroom interventions to learn some of the basic principles. The DIY evaluation guide published by the Education Endowment Foundation (EEF) offers simple advice on how to evaluate the impact of a programme at a local level. The approach combines professional judgment with knowledge of previous evidence to devise a programme, and then monitors the outcomes of participating students against those of similar students not on the programme. The Trust is currently developing a common evaluation framework for all of its programmes, which will enable small projects that lack the resources to commission an independent evaluation to be assessed nonetheless.

The Government recently designated the Sutton Trust and EEF as the ‘What Works centre’ for education following the publication of our highly successful toolkit for schools. The Trust is currently developing an ‘HE access toolkit’, which we hope will summarise current evidence on the impact of access work in an accessible format – although it is not clear how much this will be able to say, given the paucity of research in the field.

Undertaking ‘gold standard’ evaluations, which involve selecting participants at random to ascertain genuine impact, remains a tricky task. But the Sutton Trust has already funded a feasibility study on how a proper randomised controlled trial (RCT) might be undertaken for an access programme. We are now considering commissioning a fully fledged RCT.

Even if RCTs are currently a step too far for others, then evaluations need at least to involve the use of comparison groups. Two examples of such usage can be seen in recent evaluations commissioned by the Trust. Our review of summer schools used UCAS university admissions data to compare the outcomes of summer school students against similar students not on the programme. The Reach for Excellence programme meanwhile constructed a comparison group from students who qualified but didn’t enrol on the programme.
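The logic of a comparison-group evaluation can be sketched in a few lines of code. This is purely illustrative: the data, function name and effect size below are invented for the example, not drawn from the Trust’s studies, which used matched UCAS records rather than toy numbers.

```python
def progression_rate(outcomes):
    """Share of students who progressed to university (1 = entered, 0 = did not)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical outcomes for illustration only.
participants = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # students on the programme
comparison = [1, 0, 0, 1, 0, 1, 0, 1, 0, 1]    # similar students not on it

# The estimated effect is the gap in progression rates between the two groups.
effect = progression_rate(participants) - progression_rate(comparison)
print(f"Estimated programme effect: {effect:+.0%} points")
```

The crucial point the sketch makes is that the comparison group supplies the counterfactual: without it, a high progression rate among participants tells us nothing about what the programme itself achieved.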

If I had my way, every access programme would require an evaluation that met these basic standards. Robust evaluation is not easy to do, costs time and money, and often produces awkward and humbling results. But failing to do it is, in the end, failing the students we are trying to help.

This blog post first appeared on Westminster Briefing.

The American Revolution in Teacher Evaluation

Lee Elliot Major on the American revolution in teacher evaluation, and the lessons for the UK

From Colorado to Tennessee, from Florida to New Jersey, all across the United States a revolution in education is taking place. And it is likely to hit British schools soon. Teachers are for the first time being evaluated on how effective they are in the classroom. Gone are the age-old assumptions that teachers should be left to get on with their important work and tenured for life. This is a brave new world of pupil progress measures, classroom observations and student feedback.

The talisman for these bold reforms is one Jeb Bush, former Florida Governor and chairman of the Foundation for Excellence in Education, which this week hosted a major summit in Washington. Bush delivered a powerful oration on the demise of the American dream and the US’s slide down the international rankings of education performance. It was stirring stuff. One can see why Democrats both respect and fear Bush, who is already being touted as the next Republican presidential candidate.

His belief is that education, and in particular teachers, hold the best hope for the nation to recapture its founding principle of upward social mobility. “We need to have a teacher evaluation system that is based on teachers being professionals, not part of some collective trade union bargaining process,” he said. “There are incredibly fine teachers that get paid less even though they’re doing the Lord’s work consistently over time, and there are teachers that are mediocre that get paid more because they’ve been there longer.”

Improving teacher effectiveness has become the priority of education policy makers across the world. A recent Sutton Trust report demonstrated why. Over a school year, poorer pupils gain 1.5 years’ worth of learning with very effective teachers, compared with 0.5 years with poorly performing teachers. Teacher impact dwarfs all other influences on learning within school.

Fledgling teacher evaluation systems, partly stimulated by President Obama’s Race to the Top (RTTT) Fund, are now being developed in 30 states – and are increasingly supported by Democrats and Republicans alike. US Education Secretary Arne Duncan also spoke at the summit, praising states for their new assessment regimes.

Most systems combine teacher observations with data on pupil progress to assess teachers. But this has been a bitter battle with the teacher unions. One education commissioner likened it to a ‘knife fight in a dark room’.

Even those who have implemented reforms are struggling to translate them into genuine change. Tennessee has gone to great pains to train all its school principals as evaluators. But when it came to the crunch, few principals were willing to assess their teachers as less than average, rendering the assessments fairly meaningless.

Despite these difficulties the reforms will continue, and many states are awaiting the results of the $45 million Measures of Effective Teaching (MET) project, due to be released by the Gates Foundation early next year.

What are the lessons for this side of the Pond? Here the opening salvoes of the battle have already been fired. Education Secretary Michael Gove has introduced more freedoms for schools to adopt their own appraisal systems, and abolished the limits on teacher observations. School inspectors meanwhile have more powers to scrutinise the pay and performance of teachers. These add to earlier attempts by the Labour Government to introduce performance related pay.

There is no perfect evaluation model in education – or elsewhere, for that matter. But the hope must still be that we can create evaluation systems for the teachers, by the teachers. The Sutton Trust plans to review the evidence on teacher evaluation from the US and around the world, and to work alongside schools to develop best practice. John Podesta, a former aide to Bill Clinton, warned this week’s conference in Washington: “If you go to war on your workforce, sooner or later you’re going to lose.”

A Thousand Flowers Wilt?

James Turner suggests that more coherence in social mobility programmes would benefit everyone

One of the privileges of working at the Sutton Trust is the chance to meet people who have been inspired to dedicate their working lives to improving educational opportunity. Many have given up more conventional (and higher-paying) careers and taken the risk of setting up new programmes to make their ideas a reality.

Over its fifteen-year history the Trust has funded many of these. In the last year, we are proud to have supported the start-up of The Brilliant Club, which uses PhD students to tutor non-privileged pupils, and to have provided seed-corn funding to Spire Hubs, which harnesses the expertise of retired teachers for the cause of social mobility.

Many of the small projects we helped to support in the past have now blossomed into significant programmes, reaching thousands of young people.

But things have changed a lot in those fifteen years – and in my eight-year tenure at the Trust. As the term ‘social mobility’ has gained political currency, so the landscape has become more crowded, with more and more organisations springing up.

This has been accompanied by a dismantling – sometimes for good reason, sometimes not – of the structures which provided a framework for this activity, whether that is the demise of Connexions and Aimhigher, or the diminished role of local government. The worlds of university access and information, advice and guidance seem busier, but more fragmented, than ever.

But does this matter?

My view is that it does for three main reasons. Firstly, it’s inefficient for a sector already starved of resources to duplicate efforts. We should be much better at linking complementary initiatives rather than sowing the seeds of new projects which substantially overlap with existing ones.

Secondly – and perhaps most importantly – fragmentation makes the area harder to navigate for teachers, parents and pupils. Where should a headteacher go for, say, university access work – to their local HE provider, a national charity, a local not-for-profit provider, a commercial outfit or one of the many consultants working in this space?

And finally, how is a teacher to know which of these will make the most difference to young people? Fragmentation makes evaluation of what works – already shamefully lacking – even harder.

Others are also concerned by the lack of coordination. Accessprofessions.com makes it easier for students to benefit from the multiplicity of aspiration-raising activities out there. And PRIME is bringing coherence to work experience schemes, initially in the legal sector, with the aim of boosting quality and equity.

In the schools sphere, the Education Endowment Foundation, which the Sutton Trust set up with support from Impetus, is providing a valuable framework for activities to raise the achievement of the poorest students. It focusses on evaluating what works and funding disciplined innovation – work which is grounded in evidence, rather than a flight of fancy.

In the Trust’s own work, we are developing our existing proven programmes, linking them with other initiatives, and building in evaluation. Our summer schools, for instance, are expanding and we are developing wrap-around activities, such as mentoring and teacher events.

Of course, the Trust will continue to fund new programmes where they are needed and to take justified risks. We are not afraid of being bold – as our ambitious US summer school programme has shown.

None of this should discourage social entrepreneurs who have a vision and who spot an opportunity to improve education for the better.

But shiny and new isn’t always the answer. As in life, the most heroic thing to do might, in fact, be the most mundane.