
Worlds of Education

Education International

The simplimetrification of educational research

Published 11 May 2016 | Updated 11 May 2016

Two teachers and their 35 students won free tickets for a hot-air balloon excursion. After a pleasant liftoff, the winds changed, their luck turned, and the balloon drifted off the planned route. The guides started to look concerned as menacing-looking clouds appeared on the horizon. The kids grew anxious, and quite a few began screaming and crying. After a few minutes of floating aimlessly, the teachers spotted someone down below and started calling out, “Hello! Can you help us?”

The person on the ground replied, “Hello, of course!”

“Where are we?” one of the teachers called down.

Up came the reply: “You’re in a balloon; I can see two female adults and 35 screaming children!”

The wind changed again and the balloon continued to drift, but soon the lucky balloonists spotted their destination, and the children and teachers relaxed. After a few minutes, one of the teachers said to the other, “Who was that?”

The other responded, “That was obviously an educational researcher.”

“An educational researcher? How can you tell?” the first asked.

“Because what he said was very, very precise, but utterly irrelevant.” [1]

Globally, colleges of education at research universities are converging around the idea that it is of the utmost importance to be more impactful, turning the very notion of impact into a new fetish. This fetishistic adoration has generalized the ritualistic use of metric-based reward and punishment models to accomplish three simultaneous and elusive goals: increase research impact, gain institutional prestige, and demonstrate high levels of scholarly productivity and innovation. Today it is rare to find a college of education that does not ask faculty to provide annual reports of every available metric: the number of articles published in so-called “High Impact Factor Journals”, h-index scores, chapters in books published by university presses, citations obtained, and funding, grants, and research awards.

Clearly, this model of holding university professors accountable is not exclusive to colleges of education; it is internationally pervasive across all areas and fields. Amanda Cooper reports that 19 countries use performance-based research funding models, models better described as following the “Metric Tide” and the “Ranking Mania” of contemporary universities. To be clear, I don’t oppose the use of clear metrics to assess research, and I don’t believe that a nostalgic opposition to the current models, with calls to return to an idealized “golden era” of academia, is the best way to address concerns about how best to measure faculty impact. I do believe that we need to be careful and identify, resist, and replace any simplistic policies that use poorly constructed metrics and incentives to improve educational research. Colleges of education would greatly benefit from remembering Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.”

The goal of increasing the impact of educational scholarship has broad support, inside and outside colleges of education. It responds to long-standing traditions in the field as well as to social demands and policy mandates to produce impactful research: effective scholarship addressing educational problems in real schools and classrooms. The problem faculties of education face is that, even when there is ample consensus about the desirability of producing more engaged research or research with a public purpose, there is no easy and effective system for capturing scholarly production and its relevance in a field as diverse as education. Instead of adopting cautious and measured models, most colleges of education adopted simplified systems and metrics that are taken at face value as objective and appropriate indicators of impact, relevance, and influence. Yet the metrics that are easiest to quantify do not seem to promote the most impactful research. The results are quite evident: far from encouraging the exploration of new areas and deepening the connections between educational researchers and other relevant potential users, such as teachers, administrators, policymakers, journalists, and the public in general, the systematic use of simplifying/reductive measuring mechanisms consolidates the habit of following simplifying/reductive procedures. When researchers respond to narrow systems that couple simple incentives to simple targets, the effects are quite problematic.

We are not seeing a metric tide in the field but a metric tsunami, with gigantic waves of simplistic proposals and plenty of debris. I call this phenomenon the “simplimetrification of educational research” because it is one of those bad ideas that have the ironic effect of allowing researchers and their institutions to feel good about themselves while doing something other than what they intended. Simplimetrification confuses continuous increases in countable items (more articles, more citations in harder-to-publish journals) with impact, yet there are no clear and compelling indicators that the quality, access, relevance, and usability of educational scholarship have significantly improved.

Why has simplimetrification gained so much acceptance among colleges of education? As in any complex process, there are multiple aspects to consider. Simplimetrification channels the aspiration of achieving the badge of “impactful scholarship” so powerfully that most traces of usability, accountability, and innovation in the new and highly productive educational research fall away. This model rewards people based on metrics and measurement systems in which there is literally no difference between publishing research that systematically concludes with the statement “more research is needed” and producing knowledge that may eventually bring value to a scholarly field, help educators improve their practices, or provide rigorous evidence to policymakers. Using these simplistic models is analogous to confusing the delivery of calories with feeding people: if the main goal is feeding people but all we can effectively incentivize is the delivery of calories, we will conclude that junk food is more efficient than an apple.

The previous analogy is not so far-fetched. Between 1980 and 2015 there was a rapid increase in research articles published in all areas; in 2014 alone, over 2.5 million articles were published, at an estimated cost of 15 billion dollars. The field of education shows a very similar trend: every year we publish more. Hopefully the scholarly output also shows healthy indications of increased scientific rigor, but indications of the usability of so much production are harder to come by. The simplimetrification of the field of education research generates new rituals and produces results, and there is no doubt that educational researchers have learned some forms of innovation, but not in asking better and more relevant questions for the advancement of the field or in producing more usable knowledge. The crucial innovation over the past 20 years has been the massive adoption of the “publish and perish game”: follow pre-established, tidy paths of exploration, because only those can be properly measured and rewarded. Moreover, the obsessive search for the “research impact” of discrete findings can only assign value to the final output of the knowledge produced, knowledge that cannot but be treated as a discrete thing to be counted and immediately discarded as just another unit in an increasingly unusable scholarly production. Very, very precise and utterly irrelevant.

[1] This version adapts, to the field of education, a joke about economists from the Planet Money podcast.

The opinions expressed in this blog are those of the author and do not necessarily reflect any official policies or positions of Education International.