
Total Impact – building a tool to understand ‘my’ influence

Rebecca Lawrence and Neil Chue Hong (on behalf of the group)

An overarching theme of the Beyond Impact workshop has been to get past the existing, standard definition of ‘impact’ and look at other research outputs and measures. The problems with existing measures are well known, but there has been little broad movement towards newer forms of impact measurement covering blogs, presentations, datasets and so on.

At the level of an individual, we have the standard CV, used in many places: job applications, researcher homepages, various forms within institutions, and grant applications.  However, the information it provides tends to be fairly standard and limited: publications (journal articles, books), oral and poster presentations at conferences, high-level awards, and editorial board memberships.

Researchers contribute a great deal outside of this that benefits science but is rarely reflected anywhere, and the importance of these contributions (both standard and non-standard) is not being measured at the level of the individual, the institution, or the funder. Beyond that, a standard CV tends to tell you what the researcher themselves has done, but not what others have been able to do with it. Capturing the latter could give a much broader measure of impact than is easy to achieve today, and an idea of how much ‘externalised mindshare’ an individual’s output has accumulated.

This led to the concept of an enhanced personal impact dashboard, which the group has named Total Impact. The idea is that the researcher inputs a list of research outputs to create a ‘live CV’ with a number of indicators associated with each. From this, one could create an aggregate score, providing a ‘Total Impact’ for that individual. The philosophy is still very much that the researcher is in control of which research outputs they choose to add (thereby also reducing duplication from blogs referring to other blogs, co-tweets and so on); however, the tool works towards enabling a standardised comparison.

Additionally, better recognising the value of the time and effort spent on sharing datasets, taking part in discussion of research (e.g. on blogs) and other similar activities may encourage researchers to be more willing to take part in these valuable activities.

What’s already been done

This idea has been examined before. David Baker from CASRAI has written previously on the idea of a “personal impact dashboard” and there are several tools which allow the calculation of specific metrics that might form part of an aggregated set of information for an individual.

Tools like ReaderMeter provide an easy interface for finding readership metrics, such as the h-index, for a researcher. It is limited to information from one system at present, Mendeley, but could be extended to CiteULike and others in the future. Other prototypes have been developed, particularly targeting collections with rich APIs, such as the work on alt-metrics based on PLoS (PLoS Altmetrics Crawler, PLoS ALM) and many others (CitedIn, pypub, Reseval).
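As a side note on what such readership metrics involve, the h-index itself is simple to compute once per-item counts are available. The following Python sketch is purely illustrative (it is not ReaderMeter's code): it finds the largest h such that at least h items each have h or more readers or citations.

    def h_index(counts):
        """Return the largest h such that at least h items have a count of h or more."""
        ranked = sorted(counts, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Example: five items with these reader counts give an h-index of 3.
    print(h_index([10, 5, 3, 2, 1]))  # -> 3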

However, none of these gives the user control in two important respects: choosing what is included, and combining non-traditional artefacts with traditional ones.

What we have done at the workshop

During the course of the Beyond Impact workshop, we developed the idea of a “Total Impact” tool to take these ideas forward. The tool not only allows users to record the things they have done, but also gives them a sense of what others have done with those things.

To make this implementable, the tool specifically targets categories of research artefact with two properties: there is a form of unique identifier for the artefact; and there is a framework, with an API, that allows the tool to recognise when others do something with that artefact.

For instance, we could take journal papers (which are commonly identified by DOIs) and look at the number of times they have been bookmarked in Mendeley, the number of times they have been tweeted (using the BackTweets tool) and the number of citations to the paper (e.g. using information from PubMed Central). These two properties turn out to be a really useful way of defining scope, as we can see how the tool applies to the following (a rough code sketch of this artefact-and-provider structure follows the list):

  • presentations on Slideshare (number of views, replies and tweets/likes/shares)
  • blog posts (number of views/shares using info from PostRank and BackType)
  • datasets (number of downloads/mentions via accession numbers in PubMed Central by non-submission authors)
  • and many others…
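As a rough illustration of how these two properties might translate into code, the sketch below models an artefact as an identifier plus a type, and a metrics provider as anything that can turn that identifier into a set of counts. The names, the stub provider and the identifier are hypothetical; this is not the structure of the proof-of-concept code itself.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Artefact:
        """A research output with a unique identifier, e.g. a DOI or a Slideshare URL."""
        identifier: str     # e.g. "10.1234/example.doi" (made-up identifier)
        artefact_type: str  # e.g. "paper", "slides", "dataset"

    # A provider maps an artefact to named metrics, typically by calling an
    # external API (Mendeley, BackTweets, PubMed Central, ...).
    Provider = Callable[[Artefact], Dict[str, int]]

    def collect_metrics(artefact: Artefact, providers: List[Provider]) -> Dict[str, int]:
        """Ask every registered provider what others have done with this artefact."""
        metrics: Dict[str, int] = {}
        for provider in providers:
            metrics.update(provider(artefact))
        return metrics

    # Hypothetical stub; a real provider would call the Mendeley API here.
    def mendeley_readers(artefact: Artefact) -> Dict[str, int]:
        if artefact.artefact_type != "paper":
            return {}
        return {"mendeley_readers": 0}  # placeholder value

    paper = Artefact("10.1234/example.doi", "paper")
    print(collect_metrics(paper, [mendeley_readers]))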

These are displayed so that the metrics gathered for each artefact are shown alongside it. Additionally, we can add summary statistics to the list of indicators, either by aggregating the statistics for each artefact, or by calculating other measures based on a wider sense of how different types of feedback interact with each other. For instance, it would be simple to create a “Conference Presence” measure by combining primary outputs (presentations) with secondary outputs (blog posts, comments, sharing, and linking to conference artefacts).
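To make the “Conference Presence” idea concrete, such a composite measure could be as simple as a weighted sum over the per-artefact metrics. The metric names and weights below are illustrative assumptions, not values agreed at the workshop.

    def conference_presence(per_artefact_metrics):
        """Toy composite score: weight secondary discussion (blog mentions) and
        primary views differently. Weights here are purely illustrative."""
        weights = {"slideshare_views": 1.0, "blog_mentions": 2.0, "tweets": 0.5}
        return sum(
            weights.get(name, 0.0) * value
            for metrics in per_artefact_metrics
            for name, value in metrics.items()
        )

    print(conference_presence([
        {"slideshare_views": 120, "tweets": 8},   # the presentation itself
        {"blog_mentions": 3, "tweets": 2},        # a blog post discussing it
    ]))  # -> 131.0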

Finally a persistent URL can be assigned to the page itself, associated to that list of inputs and generated scores, which can then be shared and its presence tracked. This URL always points to the latest update based on the inputs; it is the URL that is persistent, not the set of scores.
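One minimal way to get this behaviour is to make the persistent token resolve to the stored inputs and recompute the scores on each visit, rather than storing the scores themselves. The URL scheme and function names below are hypothetical sketches of that idea, not the proof-of-concept implementation.

    import uuid

    # Maps a persistent token (embedded in the shared URL) to the *inputs*,
    # never to a frozen set of scores.
    profiles = {}

    def create_profile(artefact_identifiers):
        token = uuid.uuid4().hex[:8]
        profiles[token] = list(artefact_identifiers)
        return "https://example.org/impact/" + token   # hypothetical URL scheme

    def fetch_latest_metrics(identifier):
        # Stub: a real implementation would query the registered providers.
        return {"tweets": 0, "mendeley_readers": 0}

    def resolve_profile(token):
        """Recompute metrics on every visit, so the URL always shows the latest scores."""
        return {identifier: fetch_latest_metrics(identifier)
                for identifier in profiles[token]}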

The proof-of-concept code is available here: https://github.com/mhahnel/Total-Impact

Future potential

The prototype has been designed to show what can be done with some low-hanging fruit: easily accessible outputs with simple APIs that are associated with a range of types of measure. Building it has also helped scope out many of the requirements for the tool. This provides a framework to which other types of outputs and measures can easily be added, for example:

  • Grants and Funding (associated knowledge transfer, patents, publications and grants)
  • Software e.g. GitHub (number of downloads, number of project contributions, code reuse)
  • Data repositories e.g. Protein Data Bank (number of citations)
  • Figures e.g. FigShare (number of shares)
  • Videos e.g. YouTube (number of favorites, times playlisted)
  • Workflows e.g. myExperiment (number of shares, number of reuses)
  • Wikipedia (citations to publications within Wikipedia articles, contributions to articles on Wikipedia)
  • Conference papers, posters and presentations (which traditionally are not assigned identifiers)
  • Whitepapers e.g. Nature Precedings
  • Additional measures (both qualitative and quantitative), e.g. F1000 evaluations, embeds, tweets, Facebook likes, shared on LinkedIn

Although this initial prototype is being built in its simplest form, i.e. at the researcher level, the same principles can be extrapolated for use at the research group level.  Indeed, if structured correctly, the same system works at the institution and funder level, enabling these organisations to view a live dashboard of the latest information: an effective, real-time aggregate of impact both at the individual level and as a total output for the organisation, bringing together the full range of outputs available and accessible.  By providing the raw ‘data’ (and the more outputs and measures the better) in an interoperable format, the information can then be analysed and used as appropriate to meet the specific requirements of different researchers, institutions and funders, circumventing some of the problems of trying to reach agreement on how to interpret that information.

Key to the success of an approach like this is making the information easily transferable: downloadable as a CSV or XML file, embeddable in a webpage, easy to export into a more standard CV, and easy to transfer into other regular documents such as institutional researcher or research group homepages, or funder biosketch pages. It is also important that the tool itself has a documented API that enables other tools to quickly and easily access the information for export and analysis.
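As an illustration of the kind of export this implies, the sketch below flattens the collected metrics into CSV and JSON; the data shown is made up, and a real documented API would serve the same structures over HTTP.

    import csv
    import io
    import json

    def export_csv(per_artefact_metrics):
        """Flatten {identifier: {metric: value}} into CSV rows for reuse elsewhere."""
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        writer.writerow(["identifier", "metric", "value"])
        for identifier, metrics in per_artefact_metrics.items():
            for name, value in metrics.items():
                writer.writerow([identifier, name, value])
        return buffer.getvalue()

    def export_json(per_artefact_metrics):
        """The same data as JSON, e.g. for a read-only API endpoint."""
        return json.dumps(per_artefact_metrics, indent=2)

    data = {"10.1234/example.doi": {"tweets": 4, "mendeley_readers": 12}}
    print(export_csv(data))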

Data import should be customisable at the level of the entity being reported on (e.g. researcher, institution) and it will be important that import from existing systems and tools is equally straightforward. For example, entering the URL of a relevant page in, say, an institutional repository would let the system automatically extract the DOIs listed on that page.
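DOI extraction of this kind can be sketched in a few lines: fetch the page and match anything that looks like a DOI. The regular expression is a common approximation of DOI syntax, and the repository URL in the comment is made up.

    import re
    import urllib.request

    DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

    def extract_dois(page_url):
        """Fetch a repository listing page and pull out anything that looks like a DOI."""
        with urllib.request.urlopen(page_url) as response:
            html = response.read().decode("utf-8", errors="replace")
        return sorted(set(DOI_PATTERN.findall(html)))

    # Hypothetical repository page listing a researcher's outputs:
    # print(extract_dois("https://repository.example.ac.uk/profile/jbloggs"))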

Enabling (and in fact encouraging) the community at its broadest level to play with the data should also provoke debate about which measures are more relevant and how best to combine them in a sensible way.  The measures should be clickable so that you can drill down and view, say, the 5 tweets related to a particular output. Aggregation metrics certainly need to be explored; for example, using the baseball scorecard metaphor that became a recurring theme in the workshop, you could come up with the research equivalent of a batting average, i.e. the number of times you get a hit divided by the number of times you go up to bat.
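As a sketch of what that could look like, the function below treats an output as a ‘hit’ if it attracted at least a threshold number of recorded events; both the threshold and the idea of summing all events are illustrative assumptions.

    def batting_average(outputs, hit_threshold=1):
        """Toy research 'batting average': fraction of outputs with at least
        hit_threshold recorded events (tweets, readers, mentions, ...)."""
        if not outputs:
            return 0.0
        hits = sum(1 for metrics in outputs if sum(metrics.values()) >= hit_threshold)
        return hits / len(outputs)

    print(batting_average([
        {"tweets": 3, "mendeley_readers": 5},   # a hit
        {"tweets": 0},                          # not a hit
        {"blog_mentions": 1},                   # a hit
    ]))  # -> 0.666...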

Looking further into the future, the use of author IDs such as ORCID within the listed outputs (i.e. IDs for all co-authors on papers, datasets and so on) would let us start to do much more complex and interesting analyses, for example using network graphs to show how your Total Impact compares with that of your collaborators, and analysing your influence within collaboration networks.
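As a hint of what such analyses might look like, the sketch below builds a weighted co-authorship graph from outputs that list contributor ORCID iDs (the iDs shown are made-up placeholders) and uses degree centrality as one simple proxy for influence. It assumes the networkx library is available.

    import itertools
    import networkx as nx

    def coauthor_graph(outputs):
        """Build a weighted co-authorship graph from outputs listing ORCID iDs."""
        graph = nx.Graph()
        for output in outputs:
            for a, b in itertools.combinations(output["orcids"], 2):
                weight = graph.get_edge_data(a, b, {"weight": 0})["weight"] + 1
                graph.add_edge(a, b, weight=weight)
        return graph

    outputs = [  # made-up placeholder ORCID iDs
        {"orcids": ["0000-0001-0000-0001", "0000-0001-0000-0002"]},
        {"orcids": ["0000-0001-0000-0001", "0000-0001-0000-0002", "0000-0001-0000-0003"]},
    ]
    graph = coauthor_graph(outputs)
    print(nx.degree_centrality(graph))  # one simple proxy for network influence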

Many of the major publishers and other suppliers of key information for these outputs do not currently provide much (if any) measure information. However, once real benefit is gained from the information provided by those that do (e.g. PLoS), and especially once it is clear that institutional promotion and tenure committees, administrators, and funders are actively using this information, there will be an incentive to make a much broader range of measure information around the different outputs accessible, along with an understanding of which types of reuse can be measured to provide robust metrics.


 
