Get to know our work

Publications

Here’s a selection of outputs from the grant so far. Get in touch if you’d like to learn more.


  • Download here

    The objective of this study is to investigate the application of machine learning techniques to the large-scale human expert evaluation of the impact of academic research. Using publicly available impact case study data from the UK’s Research Excellence Framework (2014), we trained five machine learning models on a range of qualitative and quantitative features, including institution, discipline, narrative style (explicit and implicit), and bibliometric and policy indicators. Our work makes two key contributions. Based on accuracy in predicting high- and low-scoring impact case studies, it shows that machine learning models can process information to make decisions that resemble those of expert evaluators. It also provides insights into the characteristics of impact case studies that would be favoured if a machine learning approach were applied to their automated assessment. The results of the experiments showed a strong influence of institutional context, selected metrics of narrative style, and the uptake of research by policy and academic audiences. Overall, the study demonstrates promise for a shift from descriptive to predictive analysis, but suggests caution around the use of machine learning for the assessment of impact case studies.
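The general approach described in this abstract, training classifiers on mixed institutional, stylistic, and bibliometric features to separate high- from low-scoring case studies, can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn is available; the feature names, labels, and model choice are hypothetical and not the study’s actual pipeline.

```python
# Minimal sketch (not the study's code): fit a classifier on mixed
# features and score its accuracy at predicting a binary high/low label.
# All data below is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(0, 5, n),   # institution group (encoded; hypothetical)
    rng.integers(0, 4, n),   # discipline panel (encoded; hypothetical)
    rng.random(n),           # narrative-style metric (hypothetical)
    rng.poisson(3, n),       # policy-document mentions (synthetic)
    rng.poisson(10, n),      # citations (synthetic)
])
# Synthetic "high-scoring" label loosely tied to two of the features
y = (X[:, 4] + 5 * X[:, 2] > 12).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Accuracy on a held-out split is the metric the abstract refers to; the study itself compares several model families rather than a single classifier.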

  • Download here

    Research in the global field of artificial intelligence is increasingly hybrid in orientation. Researchers are beholden to the requirements of multiple intersecting spheres, such as the scholarly, public, and commercial, each with its own language and logic. Relatedly, collaboration across disciplinary, sector, and national borders is increasingly expected, or even required. Using a dataset of 93,482 artificial intelligence publications, this article operationalises the scholarly, public, and commercial spheres through citations, news mentions, and patent mentions, respectively. High-performing publications (99th percentile) for each metric were separated into eight categories of influence: four blended categories (news, patents, and citations; news and patents; news and citations; patents and citations), three single categories (citations; news; patents), and an ‘Other’ category of non-high-performing publications. The article develops and applies two components of a new hybridity lens: evaluative hybridity and generative hybridity. Using multinomial logistic regression, selected aspects of knowledge production – research context, focus, artefacts, and collaborative configurations – were examined. The results elucidate key characteristics of knowledge production in the artificial intelligence field and demonstrate the utility of the proposed lens.
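The categorisation step described above can be illustrated in a few lines: each publication is binned by which of its metrics exceed a top-percentile cut-off. This is a rough sketch, not the paper’s code; the function name, metric names, and threshold values are hypothetical.

```python
# Illustrative sketch of binning publications into influence categories
# by which metrics sit in the top percentile. Thresholds and names are
# hypothetical, not taken from the paper.

def influence_category(citations, news, patents, thresholds):
    """Return the combination of metrics exceeding their cut-offs,
    or 'Other' when none do (the non-high-performing category)."""
    high = [name for name, value in
            (("citations", citations), ("news", news), ("patents", patents))
            if value >= thresholds[name]]
    return " + ".join(high) if high else "Other"

# Hypothetical 99th-percentile cut-offs
th = {"citations": 250, "news": 30, "patents": 5}

influence_category(300, 40, 6, th)  # 'citations + news + patents' (blended)
influence_category(10, 40, 0, th)   # 'news' (single)
influence_category(1, 0, 0, th)     # 'Other' (non-high-performing)
```

The three metric flags yield 2³ = 8 outcomes, matching the eight categories in the abstract: four blended, three single, plus ‘Other’.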

  • Download here

    A key goal of public policy and public administration research is to inform policy decisions. It is not clear, however, to what extent this is the case. In this study, therefore, citations from policy documents to public policy and administration research were analyzed to identify which research contributed most to policy reports and decisions. Additionally, we identified which policy institutions used research literature more than others to justify their policy decisions. Our findings show that think tanks use public policy and administration research literature more often than governmental organizations when justifying policy reports and decisions.

  • Download here

    There is no singular way of measuring the value of research. There are multiple criteria of evaluation given by different fields, including academia but also others, such as policy, media, and application. One measure of value within the academy is citations, while indications of wider value are now offered by altmetrics. This study investigates research value using a novel design focused on the World Bank, illuminating the complex relationship between valuations given by metrics and by peer review. Three theoretical categories, representing the most extreme examples of value, were identified: ‘exceptionals’, highest in both citations and altmetrics; ‘scholars’, highest in citations and lowest in altmetrics; and ‘influencers’, highest in altmetrics and lowest in citations. Qualitative analysis of 18 interviews using abstracts from each category revealed key differences in ascribed characteristics and judgments. This article provides a novel conception of research value across fields.

  • Download here

    Before problems can be solved, they must be defined. In global public policy, problems are defined in large part by institutions like the World Bank, whose research shapes our collective understanding of social and economic issues. This article examines how research is produced at the World Bank and deemed to be worthwhile and legitimate. Creating and capturing research on global policy problems requires organisational configurations that operate at the intersection of multiple fields. Drawing on an in-depth study of the World Bank research department, this article outlines the structures and technologies of evaluation (i.e., the measurements and procedures used in performance reviews and promotions) and the social and cultural processes (i.e., the spoken and unspoken things that matter) in producing valuable policy research. It develops a theoretically informed account of how the conditions of measurement and evaluation shape the production of knowledge at a dominant multilateral agency. In turn, it unpacks how the internal workings of organisations can shape broader epistemic infrastructures around global policy problems.

  • Download here

    Academics undertaking public policy research are committed to tackling interesting questions driven by curiosity, but they generally also want their research to have an impact on government, service delivery, or public debate. Yet our ability to capture the impact of this research is limited because impact is under-theorised, and current systems of research impact evaluation do not allow for multiple or changing research goals. This article develops a conceptual framework for understanding, measuring, and encouraging research impact for those who seek to produce research that speaks to multiple audiences. The framework brings together message, medium, audience, engagement, impact, evaluation, and affordance within the logics of different fields. It sets out a new way of considering research goals, measurements, and incentives in an integrated way. By accounting for the logics of different fields, which encompass disciplinary, institutional, and intrinsic factors, the framework provides a new way of harnessing measurements and incentives towards fruitful learning about the contribution diverse types of public policy research can make to wider impact.
