Paving the way for Pathways to the Professions

James Turner reflects on the new phase of Pathways to Law and hopes of extending the model to other areas

Today we announced a £2.4m extension to Pathways to Law, backed by a grant from the Legal Education Foundation, which means that the programme will reach 1200 more students. This is great news for the Trust, for the legal profession and for the young people who will benefit.

Since Pathways was originally developed in 2006/7, based on a model at Edinburgh University, the access to the professions landscape has changed considerably. As I have written before, there are now many more projects and organisations working in this space, and the issue has gained greater policy and media prominence. In fact, it was the work the Trust published in 2005 drawing attention to the social exclusivity of top lawyers – and, subsequently, those in other leading professions – which paved the way for Alan Milburn’s report in 2009.

The challenge has been to keep Pathways fresh and relevant – and to show that, despite all the other good work going on, there is still an important place for it. Pathways’ scale and reach, for one thing, set it apart: in its next phase it will be delivered by around a dozen universities all over the country. Importantly, a large swathe of provision will be outside the capital, where the biggest problems of social mobility lie.

Pathways also offers a sustained programme throughout the sixth form, rather than a one-off hit, which experience suggests is more likely to have a transformative and lasting impact. And Pathways’ twin approach – offering university access support alongside professional skills and work experience – remains the ‘best bet’ for increasing access to the profession for low and middle income young people. It is the combination of soft skills with the right academic credentials which is so important in securing a job in this highly competitive area.

This employability dimension will be greatly strengthened when the seventh cohort of students joins us in the autumn. A new component – Pathways Plus – will support the most engaged Pathways graduates and other top students through their undergraduate years. We will offer them a range of mentoring, skills development and networking opportunities – feeding into law firms’ own talent pipelines. This should be a tremendous boost to the Pathways students’ prospects of gaining a training contract – whether they are aiming for the Magic Circle or a legal aid practice.

Crucially – and all too often forgotten – we are putting in place an independent academic evaluation of Pathways to add to the promising tracking and survey data we have collected to date.  After all, changing trajectories and showing impact is what the scheme is all about.

The legal profession has grasped the nettle when it comes to Pathways and realised the added value it can bring to firms’ own CSR work – and their graduate recruitment efforts. But there’s no reason for it to stop at law. We know that other professions – the media, medicine, accountancy, the City – all face similar problems and can ill afford to fish in a shallow pool of talent.

Pathways is already up and running in real estate and property at Reading University and we are keen to consider other opportunities. The model is an important way of cultivating the talents of bright students in state schools who are forging their education and career paths. Time and again, the Trust’s programmes prove that exceptional low and middle income students are out there – let’s make it easier for the professions to access them.

Evaluating the impact of widening participation initiatives

Lee Elliot Major argues for a more evidence-based approach to university access work.

It is nothing short of a scandal that the vast majority of work in our universities and colleges aimed at opening doors to students from low and middle income homes is not evaluated properly. We spend over £1 billion a year on programmes to widen participation and broaden access into our academic elites; yet we know very little about what impact most of these efforts are having. Well-intentioned efforts to aid social mobility – from school outreach programmes to financial support for students – are effectively operating in the dark, uninformed by any hard evidence of what has worked before.

The problem has come to light again with the release of a report for the Higher Education Funding Council for England (Hefce) which “found little evidence that impact is being systematically evaluated by institutions”. Previous reports have revealed a lack of even the most basic monitoring of data and outcomes across the sector, prompting the English funding council to issue guidance on evaluation.

The national strategy unveiled by Hefce and the Office for Fair Access (Offa) meanwhile has recommended a light-touch network of regional coordinators to facilitate collaboration between universities and schools. This sounds suspiciously like ‘AimHigher light’ – a slim-line version of the previous national outreach programme in England. AimHigher was cut in the last Whitehall spending review for lack of evidence of its impact. A lot of good work was undermined by the absence of hard data.

The gathering of robust evidence remains the Achilles heel of the sector. It seems tragic that this should be so in our respected seats of learning. Once when the Sutton Trust offered to evaluate an outreach scheme at a highly prestigious UK university, the head of access declined, arguing that they would rather use the extra money to help more students.

The problem with this response is twofold. Firstly, we didn’t (and still don’t) know if the programme was actually having any impact on the students taking part. Secondly, if we did evaluate it, the lessons could enable many thousands more students to be helped properly in the future. The current default – simply surveying participants to see if they enjoyed the experience – is no longer good enough. The question that must be asked is whether the programme had an impact on students that would not have occurred without it. Did the programme enable students from poorer backgrounds to enter university who otherwise wouldn’t have done so?

But there are signs that the tide is at last turning. To its credit, Offa is urging institutions to adopt a more evidence-based approach. What is now needed is the full mix of evaluation and monitoring – local pilot studies as well as national randomised trials – to measure the outcomes of access work.

Universities can look to the work we have been doing with schools on interventions in the classroom to learn some of the basic principles. The DIY evaluation guide published by the Education Endowment Foundation (EEF) offers simple advice on how to evaluate the impact of a programme at a local level. This is about combining professional judgment with knowledge of previous evidence to devise a programme, and then monitoring the outcomes of participating students against those of similar students not on the programme. The Trust is currently developing a common evaluation framework for all of its programmes. This will enable small projects that lack the resources to commission an independent evaluation to still be assessed robustly.

The Government recently designated the Sutton Trust and EEF as the ‘What Works centre’ for education following the publication of our highly successful toolkit for schools. The Trust is currently developing an ‘HE access toolkit’, which we hope will summarise current evidence on the impact of access work in an accessible format – although it is not clear how much this will be able to say, given the paucity of research in the field.

Undertaking ‘gold standard’ evaluations, which involve selecting participants at random to ascertain genuine impact, remains a tricky task. But the Sutton Trust has already funded a feasibility study on how a proper randomised controlled trial (RCT) might be undertaken for an access programme. We are now considering commissioning a fully fledged RCT.
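The core of an RCT is random assignment of eligible applicants to treatment and control groups, so that the two arms differ only by chance. A minimal sketch of that step in Python (the names and the fifty-fifty split are illustrative assumptions, not the design of any actual Trust trial):

```python
import random

def assign_to_arms(applicants, seed=42):
    """Randomly split eligible applicants into treatment and control arms."""
    rng = random.Random(seed)      # fixed seed so the allocation is reproducible
    shuffled = list(applicants)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

applicants = [f"student_{i}" for i in range(10)]
treatment, control = assign_to_arms(applicants)
print(len(treatment), len(control))  # 5 5
```

Because only chance decides who receives the programme, any later difference in outcomes between the two arms can be attributed to the programme itself rather than to differences in who signed up.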

Even if RCTs are currently a step too far for others, then evaluations need at least to involve the use of comparison groups. Two examples of such usage can be seen in recent evaluations commissioned by the Trust. Our review of summer schools used UCAS university admissions data to compare the outcomes of summer school students against similar students not on the programme. The Reach for Excellence programme meanwhile constructed a comparison group from students who qualified but didn’t enrol on the programme.
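The comparison-group logic behind both evaluations can be sketched in a few lines of Python. All figures below are invented for illustration; they are not the results of the summer schools or Reach for Excellence evaluations:

```python
# Illustrative comparison-group evaluation: university progression rates
# for programme participants versus a matched comparison group.
# All figures are hypothetical.

participants = {"entered_university": 190, "total": 250}
comparison   = {"entered_university": 138, "total": 250}

def progression_rate(group):
    """Share of the group who entered university."""
    return group["entered_university"] / group["total"]

# The estimated effect is the gap between the two rates.
effect = progression_rate(participants) - progression_rate(comparison)
print(f"Participants: {progression_rate(participants):.0%}")
print(f"Comparison:   {progression_rate(comparison):.0%}")
print(f"Estimated effect: {effect:.1%} points")
```

The strength of the estimate rests entirely on how similar the comparison group really is: using applicants who qualified but did not enrol, as Reach for Excellence did, is one way of holding motivation roughly constant.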

If I had my way, every access programme would require an evaluation that met these basic standards. Robust evaluation is not easy to do, costs time and money, and often produces awkward and humbling results. But not to do so is, in the end, to fail the students we are trying to help.

This blog post first appeared on Westminster Briefing.

Access and the avalanche

Conor Ryan considers a new report suggesting that the days of many traditional universities are numbered in the face of online and mass delivery challenges.

In 1926, John Clarke Stobart, the classical scholar and Children’s Hour creator who was also the first BBC Director of Education, had the idea that there might be a ‘wireless university’, bringing learning to the masses in a way that traditional universities, then the preserve of a small elite, could not achieve. What followed was rather less ambitious: a series of 25-minute talks supplemented by study aid pamphlets.

It would be another four decades before Jennie Lee started to develop her ideas for what would become the Open University in 1969.   Those of us old enough to remember the late night OU broadcasts will forever have the image of the typical OU lecture from the 1970s imprinted on our minds.

Nevertheless, despite an initial lack of technical sophistication, the Open University helped over 1.6 million people to gain a higher education. More recently, it has embraced the Internet with enthusiasm, as a medium well suited to its ambitious approach to access, and it now boasts some 250,000 students worldwide with 1,200 academic staff and 7,000 tutors. Its model of delivery has been picked up across the world, not least in fast-growing large nations like India and China.

Reading the fascinating new report from Michael Barber and his colleagues for the IPPR this week, one couldn’t help but think of the profound changes that the Open University made in providing access to higher education for many people, initially on TV and latterly via the Internet.

At the same time, the model did not prove as disruptive as it might have to traditional universities which now educate nearly half the young adult population in ways not so different from the approach taken when J C Stobart was expounding his Reithian mission. Nor, despite its often impressive academic credentials, has it managed to challenge the grip of the elite universities in the UK.

Barber and his colleagues argue persuasively that an ‘avalanche’ is coming in higher education which will completely transform the delivery and – in many respects – the nature of higher education. They say all universities face key challenges, including to the traditional degree structure, the need for specialisation, their links to employability and the devaluing of an ordinary primary degree.

Of course, we have had some false starts before. I remember all too well what happened to the ill-fated e-university initiative, which was perhaps ahead of its time. Yet, with the growth of Massive Open Online Courses – bearing the unattractive acronym of MOOCs – the world could become a smaller place for students. A relatively small but growing number of UK students now prefer to study in the US – some with the support of Sutton Trust summer schools.

But some US universities, including Harvard, MIT and Berkeley, using the EdX platform, are putting many courses and lectures online, opening them up to mass audiences. In developing countries, online may be the only way to achieve mass higher education, but how much will it affect traditional universities in developed nations?

Barber et al argue that it will require universities to adopt one of five models: the elite, the mass, the niche, the local or the lifelong learning. That may well be true. Equally, they point to the impact of rising fees on students as consumers, and their rising expectations as a result. Students may start to demand more contact time and fewer enforced holidays.

Already there are concerns that few students complete MOOCs, with dropout rates as high as 90 per cent, though that could also reflect differing motivations for signing up. It may well be that students without a higher education tradition at home are the least likely to be able to sustain such course options. However, universities cannot afford to be complacent, and must acquire far more flexibility in their approach if they are to remain relevant in this brave new world, both in their traditional and online delivery.

Universities will have to make the case for an experience that is collaborative, and which opens students up to networks that still feel more real than the social media alternatives that are supposed to act as substitutes. As importantly, they will need to show that they are delivering it.

Of course, that may mean new ways of doing things. Warwick University, which ran some excellent summer schools for gifted and talented school students in the first decade of this century, has recently created a new online network – IGGY  – that it wants to blend with face-to-face activities and use that as a way to encourage able students of all backgrounds to network.

Whatever the mode of delivery, access will surely be as important an issue to all the new types of university as it is to traditional institutions. MOOCs must not become the poor man or woman’s alternative to a place at Harvard or Cambridge, which seem unlikely to forfeit their prestige or their role in developing leaders in all fields. Unless we are careful, there is a real danger they will do so.

If elite institutions are here to stay, as Barber et al believe they are, new levels of global competition for talent will make it more important than ever to harness brainpower from the whole of society, not just a narrow elite. That social mobility challenge seems no more destined to disappear than the great universities of the world and their formidable brands.

Evidence is just the start of finding what works

Lee Elliot Major and Steve Higgins, Professor of Education at Durham University, argue that effective implementation matters once the evidence for what works has been identified.

In England we spend around £2 billion a year on 170,000 teaching assistants in schools. The total impact of this money on the attainment of our children is zero.  The best evidence we have indicates that for every TA who improves a pupil’s progress there is another whose deployment has a negative impact.

This is a powerful example of why we need evidence based policy and practice, but it also highlights the difficulties of promoting changes to improve practice – because finding that TAs are deployed ineffectively does not tell you what to do about it.

Such issues are soon to be faced across government, following the launch last week of a network of What Works centres to champion evidence-based social policy.

At the launch, Oliver Letwin, Minister of State at the Cabinet Office, said that the biggest question was why government hadn’t done something like this before. But if government hasn’t, others have, and the centres will be building on existing practices of evidence use in health and education.

In health, the Cochrane Collaboration this year celebrates two decades of producing systematic research reviews. Its approach has shaped advice offered by the National Institute for Health and Clinical Excellence on NHS treatments.

The cultural shifts that introduced evidence-based medicine to surgeries and hospitals 20 years ago are now playing out in classrooms and schools. The What Works education centre will use a toolkit we developed three years ago summarizing thousands of studies to show the best and worst bets for improving pupils’ results. It is the model now being advocated by the Cabinet Office for other policy areas.

Since 2011, the Education Endowment Foundation, a charity that aims to improve the educational achievement of disadvantaged children, has developed this toolkit into a resource for disseminating evidence across the sector. The EEF has overseen a quiet revolution in England’s schools, commissioning some 40 randomized control trials so far.

The toolkit shows that working on teacher-learner interaction – improving feedback to pupils, for example –  gives the biggest bang for the education buck.  Yet our surveys reveal that most head teachers prioritise actions that evidence suggests, on average, have little impact: reducing class sizes or recruiting TAs.

In education, the route to evidence-based policy is particularly challenging, because the navigation instruments are less powerful and predictable than in medicine.  Imagine a world where the laws of nature vary through time and space. That is the reality across the thousands of different classrooms, teachers and children that make up our education system. Over the last 30 years curriculum and assessment have changed many times, and variation in schools and teachers has a profound impact on how an intervention works or doesn’t work.

We have little idea how to help schools implement the best bets for improvement. Some may need a highly prescriptive programme; others, general principles to shape and evaluate a tailored programme.

To return to TAs, for example, the evidence does not mean that they should be scrapped. There are many ways in which TAs might be better recruited, trained, deployed and evaluated. Some approaches will be more effective in lower-performing schools, schools serving a high proportion of children with special needs, or schools with particular teachers. Knowing what works where and for whom could improve a school’s choices about TAs and everything else.

A commitment to what works (strictly, what’s worked) in education must also consider the constantly changing pedagogical landscape. Take phonics teaching: if the current emphasis on phonics becomes routine, then remedial support based on phonics is likely to become less effective than research currently suggests. Children who have failed in a phonics-rich pedagogy may benefit more from a different remedial style.

These are important lessons for the other four planned What Works centres. Evidence can be boring or inconvenient for politicians more interested in an immediate and popular policy fix. But, as Letwin stressed, “this is only the start of a journey, not the destination”, and the outlook at this early stage is promising. This programme has the potential to revolutionise public policy in areas as diverse as ageing, education and policing, replacing dogma and tradition with research and randomised trials. Billions of pounds of public money could be saved.

This post first appeared on Research Fortnight

What Works – A winning formula

Peter Lampl welcomes the designation of the Sutton Trust and EEF as the What Works Centre for Education

This week the Sutton Trust was, together with the Education Endowment Foundation, designated the What Works evidence centre for education by the Government. There will be six leading evidence centres and we and the National Institute for Health and Clinical Excellence (NICE) have been selected to lead on education and health respectively. The centres will be the first port of call for advice on the latest research on the impact of Government programmes.

This is recognition of the Sutton Trust’s focus on evaluation and research in all the work it does. We have always aspired to subject our programmes to robust review. And as an independent foundation we have used evidence to challenge or support the Government’s education policies.

The Trust has funded over 120 research studies in the areas of social mobility and education. But it is primarily a ‘do tank’. Our flagship summer school programme for example is now the largest national university access scheme – but it is also the most scrutinised programme in this field.

We know they have impact: over three quarters (76%) of summer school attendees go on to a leading university, compared with only 55% of students with similar backgrounds who aren’t on the programme. We also know they are highly cost-effective: when Boston Consulting Group did a cost-benefit analysis of the Trust’s programmes – comparing the lifetime earnings benefits for the individuals on the schemes with the money spent – summer schools were among the programmes resulting in returns of over 20:1.
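The cost-benefit logic behind a figure like 20:1 is straightforward: total lifetime earnings uplift divided by programme cost. A hedged sketch follows; the numbers are invented purely to illustrate the arithmetic, and are not from the Boston Consulting Group analysis:

```python
# Sketch of the benefit-cost ratio calculation described above.
# All figures are hypothetical placeholders, not BCG's data.

def benefit_cost_ratio(earnings_uplift_per_student, students_helped, programme_cost):
    """Total lifetime earnings benefit divided by money spent on the programme."""
    total_benefit = earnings_uplift_per_student * students_helped
    return total_benefit / programme_cost

# e.g. a £40,000 lifetime uplift for each of 500 students, on a £1m programme
ratio = benefit_cost_ratio(40_000, 500, 1_000_000)
print(f"Return ratio: {ratio:.0f}:1")  # → 20:1 with these invented numbers
```

The ratio is only as credible as the estimated earnings uplift, which is why the comparison against similar students not on the programme matters so much.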

It was these disciplines – assessing the evidence on what works, assessing cost-benefit, but also ensuring that the research results are presented in a clear accessible way – that underpinned the Teaching and Learning Toolkit the Trust developed for schools on what works best at improving the results of children from poorer backgrounds. The Toolkit has now been used by thousands of schools across the country, and underpins the work of the Education Endowment Foundation.

When we established the EEF in 2011, as lead foundation with Impetus, our vision was that it would embrace the Sutton Trust’s principles and become a gigantic do tank. The aim was to improve the results of the poorest children in our most challenging schools. But it would also have the freedom to experiment, innovate and rigorously evaluate projects, and to scale up those that were cost effective.

Two years on I am pleased to say that this has become the reality. To date the EEF has awarded £24.4 million to 55 projects working with over 275,000 pupils in over 1,400 schools across England. It has commissioned over 40 randomised research trials in our schools – the gold standard for evaluations on what works. Over the coming years these studies will add greatly to our knowledge of what interventions are successful in the classroom.

But with research, you have to take the rough with the smooth. Not all the Sutton Trust’s research findings have been welcome. In 2005 the Trust jointly funded a five-year study with the Department for Business, Innovation and Skills and the College Board into the US SAT aptitude test as a potential additional tool in the selection of candidates for universities.

In particular, the National Foundation for Educational Research study aimed to find out whether the SAT could identify highly able non-privileged students whose potential was not being reflected in A-levels because of their circumstances. After five years tracking the results of thousands of sixth formers who then attended university, the study concluded that the SAT added little extra information to that provided by A-levels.

If the Government is true to its word on ‘evidence-based policy’ then it will have to face up to this reality. The research may not always confirm prior convictions or favoured policies, and almost always throws up some unexpected results. That’s why I think it is important the EEF and the Sutton Trust remain fiercely independent and make public all the evidence we produce. As the Government’s What Works evidence centre for education, these will be our guiding principles.

How good is a teacher? Check the exam results

Conor Ryan on why improved test scores are a far better measure of success than student surveys

Good teaching is at the heart of good schools. We have done a lot to improve the quality of new teachers, but there has been much less focus on the quality of the existing workforce. Yet, while 35,000 new teachers enter the profession each year, the teacher workforce is 440,000-strong.

Schools need to make the most of teachers’ talents if young people are to get a decent education. For a disadvantaged pupil, an excellent teacher can deliver the equivalent of 1.5 years’ learning in a year, whereas a poor teacher contributes just half a year: the difference is a whole year of a child’s education.

That’s why it is important we evaluate the contribution that teachers are making and can make with the right support. A new Sutton Trust study, Testing Teachers, published today, shows that the contribution that teachers make to improving exam and test results is the most reliable way to predict a teacher’s long-term success.

The study, by Richard Murphy of the London School of Economics, drawing on the latest international research, shows that improved test scores are nearly twice as effective as student surveys and nearly three times as effective as classroom observations.

But schools can’t simply look at a single year’s test scores to assess performance. A reliable and fair approach requires a sensible combination of these and other measures taken over several years, and might also include teachers’ contributions to sports and school trips.

When Labour introduced performance related pay in 1999, it did so within a very bureaucratic framework that didn’t work as intended in most schools. By contrast, the education secretary Michael Gove is hoping that leaving schools to develop their own systems will improve results and see the best teachers more effectively rewarded.

But without the right systems in place, schools may be no readier to do so now than they were in the past. So what are the characteristics of an effective system of teacher appraisal?

Most importantly, it should involve clear standards, fairly and consistently applied. External advice can be helpful in getting this right, and could assure staff of its fairness and governors of its robustness.

Teachers or school leaders involved in evaluation should be properly trained, and should discuss their evaluation fully with the teachers concerned.

When using exam or test results, it is important to focus on value added rather than absolute results, as they are the most objective and comparable assessment of a teacher’s contribution. It is also important that the baseline for such comparisons is sufficiently robust.
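The value-added idea can be made concrete with a short sketch: predict each pupil's score from their baseline, then average the gap between actual and predicted scores across a teacher's pupils. The prediction rule and the figures below are invented for illustration; real value-added models are statistically far more sophisticated:

```python
# Toy value-added calculation: a teacher's contribution is the average
# amount by which their pupils beat the score predicted from baseline
# attainment. The prediction rule and numbers are hypothetical.

def predicted_score(baseline, slope=1.0, intercept=5.0):
    """Toy linear prediction from a prior-attainment baseline."""
    return slope * baseline + intercept

def teacher_value_added(pupils):
    """Mean (actual - predicted) across a teacher's pupils."""
    residuals = [p["actual"] - predicted_score(p["baseline"]) for p in pupils]
    return sum(residuals) / len(residuals)

pupils = [
    {"baseline": 50, "actual": 58},
    {"baseline": 60, "actual": 64},
    {"baseline": 40, "actual": 47},
]
print(f"Value added: {teacher_value_added(pupils):+.1f} points")  # → +1.3 points
```

Because the measure is relative to predicted progress rather than raw results, a teacher of low-attaining pupils can score just as well as a teacher of high-attainers — which is exactly why the baseline must be robust.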

With classroom observations – where teachers or school leaders witness teaching in practice – the report suggests that those designed to help a teacher improve should be carried out separately from those used for appraisal, as this is more likely to promote honest feedback.

Pupil surveys can also be used – particularly with older pupils – as they are the ones in most day-to-day contact with teachers, but when they are they should be clearly structured, be age appropriate, and should complement other measures.

Getting all this right can have real benefits for pupils and teachers alike. Earlier research for the Sutton Trust has shown that if we were to raise the performance of the poorest performing tenth of teachers to the average, we would move into the top rank of the OECD’s PISA tables internationally.

But there is a more compelling reason: by improving the quality of our teachers collectively, we can ensure that every child has a decent education, and is not held back by poor teaching. That is a goal well worth pursuing.

This blog post first appeared on Independent Voices