Last week I attended the NORRAG roundtable Learning from Learning Assessments: The politics and policies of attaining quality education in Geneva’s Maison de la paix. The event was an outcome of collaboration among NORRAG, the Center for Universal Education (CUE) at Brookings, and CONFEMEN’s Programme for the Analysis of Education Systems (PASEC). With around 40 participants – from universities, OECD, UNESCO and other UN agencies, Education International, consultancies, and foundations – it was an intense day with a packed agenda.
As pointed out by NORRAG managing director Joost Monks in his introduction, the central theme was the possibilities and limitations of ‘governance by numbers’ in global educational governance. Here are some of the questions that were discussed:
- Do assessment regimes actually capture the information that they intend to?
- How has the reliance on quantifying learning outcomes influenced – both positively and negatively – policy-making and policy delivery at the national level?
- How do different large-scale assessments, such as PASEC and OECD’s PISA for Development, relate to one another and to the national context in terms of content and capacity?
There is broad agreement that it is deeply problematic that the remarkable increase in access to education in low- and middle-income countries over recent decades has often been accompanied by less than basic levels of competence among students. In this sense, the UN Sustainable Development Goal (SDG) 4 on quality education makes perfect sense. Yet major points of contention remain concerning how to address the issue, who is involved in developing indicators and putting international large-scale assessment (ILSA) frameworks into place, and which learning domains are to be assessed.
One of the key initiatives for understanding why SDG 4 came to focus on quality is the Learning Metrics Task Force (LMTF), a collaboration between the UNESCO Institute for Statistics and the CUE at Brookings launched in mid-2012. LMTF is currently in its 2.0 phase, advocating for the improvement of assessment systems at the local and country level.
At the NORRAG event, the participants held different views on how to proceed and, indeed, on the very value of initiatives like LMTF. Some argued for the benefits that come with harnessing international expertise in assessment tools for educational development. Others were critical of the perverse effects that sometimes accompany ILSA and worried about the influence of an assessment industry – both private and publicly funded – that currently appears to have great momentum in extending its reach globally.
The former argument is based on the acknowledged fact that expertise, or capacity, in ILSA is limited in many countries. So, now that learning outcomes are to be measured in a comparable manner across the world, expertise from abroad is required. Representatives of three of the LMTF Learning Champions – Zambia, Palestine and Kenya (organisations from fifteen countries take part in total) – all stressed at the roundtable the importance of the capacity-building and collaboration that come with LMTF and PISA for Development, and noted that the very engagement in these ILSA projects helps to secure budgets for education. Another point the three Learning Champions emphasised is that it is a major challenge to engage and train schools and teachers in assessment practices and in the use of results, so that assessment can become a driver for learning.
The capacity debate spells out that the learning assessments associated with SDG 4, PISA for Development, and LMTF involve a sort of assessment learning for those engaged in education – those designing and administering policies, researchers, and teachers. What was not addressed at the roundtable was that assessment learning also involves pupils, students and parents, in the sense that they have to learn how to deal with ILSA in ways that make sense to them.
ILSA are both means and outcomes of politics
At the NORRAG roundtable, the debate between those endorsing the virtues of ILSA in global educational development and those more critical seemed to revolve around different conceptions of the political dimensions of ILSA. It would probably have been fruitful for the debate if somebody had been able to spell out these differences at the event. As it were, the different orientations remained largely implicit, with the result that the debate remained somewhat polarised throughout the day.
CUE at Brookings in particular started from the notion that, within the SDG framework, they are delivering what governments call for with LMTF. So, no need to get lost in ephemeral speculation about the political legitimacy of their expertise-based endeavours. They are merely engaged in research for progressive political objectives. Valid and reliable research data are means to inform policy.
The other, broader conception of politics suggests that education, assessment, and research have a political dimension per se; ILSA are in themselves political outcomes. This view was implicit in the arguments put forward especially by the university-based researchers present: the experiences of ILSA, and the ways they tend to transform educational objectives into a simplistic set of measures, call for critical self-reflection in the epistemic community – often with close links to edu-business – engaging with ILSA. Moreover, what do we make of a Washington-based think tank like Brookings leading the design of potentially global Learning Metrics and associated ILSA practices within the SDG 4 framework? Is this really the right way to go about it? What wider implications does the ensuing market-making in assessment have for the politics and policies of education? Do we dare go down that path, considering the implications for education, teaching and learning?
The different perceptions of where the boundaries of politics lie in education also appeared to mark the line between those with a hands-on, administrative approach and those trying to understand the background and wider implications of ILSA, with the former being active and the latter reactive in terms of the rolling out of assessment regimes.
While it’s hard to adopt both perspectives simultaneously, SDG 4, LMTF, PASEC, PISA for Development, and other assessment instruments are both means and outcomes in terms of the politics and policies of assessment. For example, that LMTF is coordinated by a well-established think tank based in the US and that PISA for Development is on offer from the OECD ‘club of rich countries’ are hardly coincidences. There are rich political backstories to be told about each of them, stories that continue to unfold and that call for analysis and documentation.
Yet, ILSA are also means of governance. At the roundtable, Pablo Zoido, OECD Technical Lead of PISA for Development, suggested that ILSA might be democratising by contributing to public debate with scientific evidence. Esther Care from CUE at Brookings pointed out that LMTF co-exists with and complements other initiatives in national and local contexts. Indeed, the stories on LMTF in Kenya, Palestine, and Zambia seemed to confirm those arguments. In the end, it comes down to politics; as Gita Steiner-Khamsi pointed out, ILSA results are used by governments to generate reform pressure. The debate on quality education, ILSA and capacity-building through assessment learning is harnessed for very different objectives at various scales.
What about the students?
During the wrap-up, NORRAG Executive Director Michel Carton reminded participants that there had been very little mention of students during the day, except as objects of assessment and efficient learning. Perhaps one of the most significant outcomes of ILSA, SDG 4, LMTF, etc., is that the associated rolling out of standards-based assessment regimes fixes students in the role of objects in their own education, in the sense that they find themselves assessed on the basis of performance frameworks over which they have no influence whatsoever.
With the reinforced emphasis on quality education, global education governance is thickening. The education debate is becoming increasingly politicised with scores of organisations making a living by advocating their ideas. This might be good news in those locations where there is relative consensus and long-term commitment to let students pursue what is also quality education in their eyes. However, short-term politics and policies, market-making and spin might easily corrupt that potential.
The abundance of quantitative data and correlations that comes with ILSA is no guarantee of informed decision-making; indeed, ILSA are probably so popular worldwide because they lend legitimacy to policies that treat symptoms and provide reasons for constant, attention-generating reform interventions. This might be perfectly alright for the burgeoning assessment industry, and while it makes for a frustrating experience for administrators and educators, pupils and students are those with the most to lose. In the end, it’s their education that might be corrupted, without any measure of capacity-building being able to prevent it.
Editor’s Note: Tore Bernt Sorensen is a doctoral candidate (Education) in the Centre for Globalisation, Education & Social Futures, Graduate School of Education, University of Bristol. Tore’s PhD project concerns contemporary trends in the global educational policy field. Focusing on the main political actors involved in OECD’s Teaching and Learning International Survey (TALIS), the project discusses the implications for the teaching profession on a global scale and in selected countries such as Australia, England and Finland. Contact: email@example.com