
Research outputs, money, and stories: The view from BBSRC and EPSRC

Between them, the BBSRC and EPSRC fund a significant proportion of natural sciences research in the UK, ranging from engineering, physics, and computer science through biochemistry to ecology and agriculture. Sue Smart is head of evaluation at EPSRC and Mari Williams leads the parallel group at BBSRC. In conversation with both of them, some important points emerged that are worth discussing.

Sue emphasised in discussion that, in terms of research outcomes information, there is an important distinction between the research outputs themselves (such as publications, datasets, and other physical outputs) and the data used for the overall assessment and reporting of those outputs as the outcomes of investment. There is also a distinction between measuring actual outcomes and impact and tracking intermediate indicators of impact (such as levels of collaboration and the accessibility of research outputs). Particularly for research at a greater remove from application, it may be more appropriate to reward the efforts made to ensure that outputs are available in an optimal form for future exploitation, even if they have not yet been exploited.

Mari noted the practical issues of taking a long-term perspective. As preparation for the next UK government spending review starts, the research community is going to have to demonstrate that it can deliver on the promise implicit in the government’s recent funding settlement: that public investment in research is essential for addressing current economic problems. This means there is a real need to gather evidence of past impact now, as well as to show how we are optimising systems to deliver impact in the future. Given that much of the campaign to protect science funding was built on a case of economic impact, we are now in the position of having to develop the evidence to justify spending on those grounds. This may require case studies of research funded quite a way in the past, which is a challenging exercise in and of itself. US funders have recent experience of this kind of evidence-gathering process. Preparing to deliver this information better in the future is crucial.

In this respect it is interesting, as Sue noted, that concerns over quantitative measures have been accepted both in the conduct of the UK Research Excellence Framework and in the work of the Department for Business, Innovation and Skills (which manages UK research funding), and that there has been a move towards narrative description. This is particularly the case for the REF, where impact cases submitted as part of the case for funding from each university department or unit will be based on a narrative description. The impact case studies in the pilot REF exercise that were viewed as most successful also included quantitative and qualitative evidence, but there seems to be a sense that these numbers are not well enough understood, and perhaps never will be, to provide a quantitative basis for comparing different projects. It is worth asking whether it is even appropriate to look forward to a situation where such quantitative assessments of impact might be used for comparative assessment. Is this technically feasible? And is it an appropriate way to assess research outcomes?

One Response to “Research outputs, money, and stories: The view from BBSRC and EPSRC”

  1. David Baker (CASRAI): The different roles played by quantitative (counting) and qualitative (narrative) evidence are important. One without the other will be an incomplete assessment methodology as we move forward. I think we will need to develop a continuum along the following lines (a rough code sketch follows the list):

    1) Capture (researcher and institutional source data harnessed – from private data and public data).

    2) Connect (simple linking of this source data, through standard unique IDs, to people and projects and funders)

    3) Count (automate the combination, aggregation and analysis of this linked source data into the agreed performance indicators that are ‘countable’)

    4) Communicate (tell the narrative stories from teams and people that are the final layer on top of the quantitative)

    5) Combine (bring it all together into modular ‘impact reports’ that can be independently aggregated and rolled up along different comparative groupings – team, institution, funder, state/province, country, etc. – but can also be distilled down to any single person’s impact)
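As a concrete illustration of that continuum, here is a minimal sketch in Python. Everything in it is an assumption made for illustration: the record fields, the ORCID-style person identifiers, and the indicator names are hypothetical, not an existing CASRAI or funder system.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# 1) Capture: raw output records harnessed from private and public sources.
#    The fields and ID schemes below are illustrative assumptions.
@dataclass
class OutputRecord:
    output_id: str   # unique ID for the output (e.g. a DOI)
    person_id: str   # unique ID for the researcher (e.g. ORCID-style)
    funder_id: str   # unique ID for the funding body
    kind: str        # "publication", "dataset", ...
    citations: int = 0

# 2) Connect: index records by shared unique IDs so people, projects
#    and funders can be joined without fuzzy string matching.
def connect(records):
    by_person, by_funder = defaultdict(list), defaultdict(list)
    for r in records:
        by_person[r.person_id].append(r)
        by_funder[r.funder_id].append(r)
    return by_person, by_funder

# 3) Count: aggregate linked records into agreed, countable indicators.
def count(records):
    return {
        "n_outputs": len(records),
        "n_publications": sum(1 for r in records if r.kind == "publication"),
        "total_citations": sum(r.citations for r in records),
    }

# 4) Communicate + 5) Combine: a modular report pairing the indicators
#    with narrative stories; the same structure rolls up along any
#    grouping or distills down to a single person.
@dataclass
class ImpactReport:
    scope: str
    indicators: dict
    narratives: list = field(default_factory=list)

def report_for(scope, records, narratives=()):
    return ImpactReport(scope, count(records), list(narratives))

if __name__ == "__main__":
    captured = [
        OutputRecord("doi:10.1/a", "0000-0001", "EPSRC", "publication", citations=12),
        OutputRecord("doi:10.1/b", "0000-0002", "BBSRC", "dataset"),
    ]
    by_person, by_funder = connect(captured)
    # Roll up by funder, then distill down to one person's impact.
    print(report_for("funder:EPSRC", by_funder["EPSRC"],
                     ["Underpinned a follow-on industrial collaboration."]))
    print(report_for("person:0000-0001", by_person["0000-0001"]))
```

The point of the modular report structure is that the same counting step serves any comparative grouping: roll up by funder, institution, or country, or distill to a single person, without re-deriving the indicators.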
