Papers by Nadine Rons
Journal of Informetrics, 2018
Rons, N. (2018). Bibliometric approximation of a scientific specialty by combining key sources, title words, authors and references. Journal of Informetrics, 12(1), 113–132.
PDF/DOI: http://arxiv.org/abs/1712.07087, or https://doi.org/10.1016/j.joi.2017.12.003
Bibliometric methods for the analysis of highly specialized subjects are increasingly investigated and debated. Information and assessments well-focused at the specialty level can help make important decisions in research and innovation policy. This paper presents a novel method to approximate the specialty to which a given publication record belongs. The method partially combines sets of key values for four publication data fields: source, title, authors and references. The approach is founded in concepts defining research disciplines and scholarly communication, and in empirically observed regularities in publication data. The resulting specialty approximation consists of publications associated with the investigated publication record via key values for at least three of the four data fields. This paper describes the method and illustrates it with an application to publication records of individual scientists. The illustration also successfully tests the focus of the specialty approximation in terms of its ability to connect and help identify peers. Potential tracks for further investigation include analyses involving other kinds of specialized publication records, studies for a broader range of specialties, and exploration of the potential for diverse applications in research and research policy contexts.
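The combination rule described in this abstract — a publication joins the specialty approximation when it shares key values in at least three of the four data fields — can be sketched as follows. The field names, key values and data structures are illustrative placeholders, not the paper's actual selection criteria:

```python
# Sketch of the "at least three of four fields" rule. Candidates and key
# values are represented as sets per field; all example values are invented.

FIELDS = ("source", "title_words", "authors", "references")

def matches_specialty(candidate, key_values, min_fields=3):
    """Count the fields in which the candidate shares at least one key value."""
    hits = sum(
        1 for field in FIELDS
        if candidate.get(field, set()) & key_values.get(field, set())
    )
    return hits >= min_fields

# Hypothetical key values derived from an investigated publication record.
keys = {
    "source": {"J. Informetrics"},
    "title_words": {"bibliometric", "specialty"},
    "authors": {"Rons N"},
    "references": {"ref-A", "ref-B"},
}

# This candidate matches on source, title words and references (3 of 4).
candidate = {
    "source": {"J. Informetrics"},
    "title_words": {"specialty", "delineation"},
    "authors": {"Someone Else"},
    "references": {"ref-A"},
}

print(matches_specialty(candidate, keys))  # → True
```

Requiring three of four fields, rather than all four, is what the abstract calls "partially" combining the key-value sets: no single field is mandatory.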
Rons, N. (2012). Partition-based Field Normalization: An approach to highly specialized publication records. Journal of Informetrics, 6(1), 1-10. PDF/DOI: http://arxiv.org/abs/1307.6804, or http://dx.doi.org/10.1016/j.joi.2011.09.008 & corrigendum http://dx.doi.org/10.1016/j.joi.2012.09.001
Field normalized citation rates are well-established indicators for research performance from the broadest aggregation levels such as countries, down to institutes and research teams. When applied to still more specialized publication sets at the level of individual scientists, a more accurate delimitation is also required of the reference domain that provides the expectations to which a performance is compared. This necessity for sharper accuracy challenges standard methodology based on predefined subject categories. This paper proposes a way to define a reference domain that is more strongly delimited than in standard methodology, by building it up out of cells of the partition created by the pre-defined subject categories and their intersections. This partition approach can be applied to different existing field normalization variants. The resulting reference domain lies between those generated by standard field normalization and journal normalization. Examples based on fictive and real publication records illustrate how the potential impact on results can exceed or be smaller than the effect of other currently debated normalization variants, depending on the case studied. The proposed Partition-based Field Normalization is expected to offer advantages in particular at the level of individual scientists and other very specific publication records, such as publication output from interdisciplinary research.
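The partition idea can be illustrated with a small, fictive sketch: each distinct combination of subject categories forms one cell of the partition, and reference values are then computed per cell rather than per broad category. Category names and citation counts below are invented for illustration only:

```python
# Group fictive publications into partition cells (one cell per distinct
# combination of subject categories) and compute a per-cell reference value.

from collections import defaultdict

publications = [
    {"categories": {"Physics"}, "citations": 10},
    {"categories": {"Physics"}, "citations": 6},
    {"categories": {"Physics", "Mathematics"}, "citations": 2},
    {"categories": {"Physics", "Mathematics"}, "citations": 4},
    {"categories": {"Mathematics"}, "citations": 1},
]

cells = defaultdict(list)
for pub in publications:
    # frozenset makes the category combination usable as a dictionary key.
    cells[frozenset(pub["categories"])].append(pub["citations"])

# Mean expected number of citations per partition cell.
expected = {cell: sum(cites) / len(cites) for cell, cites in cells.items()}

for cell, mean in sorted(expected.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(cell), mean)
```

Note how the publications assigned to both Physics and Mathematics form their own cell with its own expectation (3.0), instead of being pooled into either broad category — this is the sharper delimitation the abstract describes.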
Rons, N. (2011). Interdisciplinary Research Collaborations: Evaluation of a Funding Program. Collnet Journal of Scientometrics and Information Management, 5(1), 17-32. PDF/DOI: http://arxiv.org/abs/1307.6784, or http://dx.doi.org/10.1080/09737766.2011.10700900
Innovative ideas are often situated where disciplines meet, and socio-economic problems generally require contributions from several disciplines. Ways to stimulate interdisciplinary research collaborations are therefore an increasing point of attention for science policy. There is concern that ‘regular’ funding programs, involving advice from disciplinary experts and discipline-bound viewpoints, may not adequately stimulate, select or evaluate this kind of research. This has led to specific policies aimed at interdisciplinary research in many countries. There is, however, at this moment no generally accepted method to adequately select and evaluate interdisciplinary research. In the vast context of different forms of interdisciplinarity, this paper aims to contribute to the debate on best practices to stimulate and support interdisciplinary research collaborations. It describes the selection procedures and results of a university program supporting networks formed 'bottom up', integrating expertise from different disciplines. The program’s recent evaluation indicates that it is successful in selecting and supporting the interdisciplinary synergies aimed for, responding to a need experienced in the field. The analysis further confirms that potential for interdisciplinary collaboration is present in all disciplines.
Rons, N. and Amez, L. (2009). Impact vitality: an indicator based on citing publications in search of excellent scientists. Research Evaluation, 18(3), 233-241. PDF/DOI: http://arxiv.org/abs/1307.7035, or http://dx.doi.org/10.3152/095820209X470563
This paper contributes to the quest for an operational definition of 'research excellence' and proposes a translation of the excellence concept into a bibliometric indicator. Starting from a textual analysis of funding program calls aimed at individual researchers and from the challenges for an indicator at this level in particular, a new type of indicator is proposed. The Impact Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a researcher's publication output, based on the change in volume over time of the citing publications. The introduced metric is shown to possess attractive operational characteristics and to meet a number of criteria which are desirable when comparing individual researchers. The validity of one of the possible indicator variants is tested using a small dataset of applicants for a senior full time Research Fellowship. Options for further research involve testing various indicator variants on larger samples linked to different kinds of evaluations.
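The indicator itself is defined in Rons & Amez (2008). As a rough, hypothetical stand-in for "the change in volume over time of the citing publications", the sketch below fits a least-squares slope to yearly citing-publication counts — this is not the published Impact Vitality formula, only an illustration of the underlying idea that growing citing volume signals vital impact:

```python
# Fictive proxy for impact vitality: the trend (least-squares slope) of the
# number of publications citing a researcher's work, per year.

def citing_volume_trend(citing_counts_per_year):
    """Least-squares slope of yearly citing-publication counts."""
    n = len(citing_counts_per_year)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(citing_counts_per_year) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, citing_counts_per_year))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Fictive data: citing publications per year over five years.
print(citing_volume_trend([4, 6, 9, 13, 18]))  # → 3.5 (growing citing volume)
```

A positive slope indicates a still-expanding citing audience, a negative one a fading impact — the temporal dimension that a plain citation count misses.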
Rons, N., De Bruyn, A. and Cornelis, J. (2008). Research evaluation per discipline: a peer-review method and its outcomes. Research Evaluation, 17(1), 45-57. PDF/DOI: http://arxiv.org/abs/1307.7033, or http://dx.doi.org/10.3152/095820208X240208
This paper describes the method for ex-post peer review evaluation per research discipline used at the Vrije Universiteit Brussel (VUB) and summarizes the outcomes obtained from it. The method produces pertinent advice and triggers responses - at the level of the individual researcher, the research team and the university’s research management - for the benefit of research quality, competitiveness and visibility. Imposed reflection and contacts during and after the evaluation procedure modify the individual researcher’s attitude, improve the research teams' strategies and allow for the extraction of general recommendations that are used as discipline-dependent guidelines in the university’s research management. The deep insights gained in the different research disciplines, and the substantial data sets on their research, support the university management in its policy decisions and in building policy instruments. Moreover, the results are used as a basis for comparison with other assessments, leading to a better understanding of the possibilities and limitations of different evaluation processes. The peer review method can be applied systematically in a pluri-annual cycle of research discipline evaluations to build up a complete overview, or it can be activated on an ad hoc basis for a particular discipline, based on demands from research teams or on strategic or policy arguments.
A new way to calculate doublet P-Cygni profiles in the comoving frame was used to calculate theoretical C IV profiles for early-type stars in the Magellanic Clouds. The calculated profiles were compared to the profile fits presented by Patriarchi et al. (1992). Two examples are shown: HD 268605 and HD 169698.
Many stars continuously eject material and thus surround themselves with a stellar wind. We can observe this wind indirectly through its characteristic effects on the stellar spectrum. In particular, this mass flow influences the line profiles. These so-called P-Cygni profiles can be observed in the visual part of the spectrum of early-type stars and Wolf-Rayet stars, as well as - and even more clearly - in the ultraviolet part. In order to study the stellar wind structure, a computer programme was written which calculates theoretical P-Cygni profiles using the Comoving Frame Method. As input, this code can use the results obtained from other programmes, such as a non-LTE code. Through parameter adjustments a fit to an observed profile can be made.
Conference Presentations by Nadine Rons
Rons, N. (2016). Publication and citation patterns can vary significantly between related disciplines or narrower specialties, even when sharing journals. Journal-based structures are therefore not accurate enough to approximate certain specialties, neither subject categories in global citation indices, nor cell sub-structures (Rons, 2012). This paper presents first test results of a new methodology that approximates the specialty of a highly specialized seed record by combining criteria for four publication metadata fields, thereby broadly covering conceptual components defining disciplines and scholarly communication. To offer added value compared to journal-based structures, the methodology needs to generate sufficiently distinct results for seed records in related specialties (sharing subject categories, cells, or even sources) with significantly different characteristics. This is tested successfully for the sub-domains of theoretical and experimental particle physics. In particular, analyses of specialties whose characteristics deviate from those of the broader discipline they are embedded in can benefit from an approach that discerns down to the specialty level. Such specialties are potentially present in all disciplines, for instance as cases of peripheral, emerging, frontier, or strategically prioritized research areas.
Rons, N. (2014). Investigation of Partition Cells as a Structural Basis Suitable for Assessments of Individual Scientists. In: Proceedings of the science and technology indicators conference 2014 Leiden 'Context Counts: Pathways to Master Big and Little Data', 3-5 September 2014, Leiden, the Netherlands, Ed Noyons (Ed.), 463-472. (Full paper) PDF: http://arxiv.org/abs/1409.2365 PRESENTATION: http://www.slideshare.net/NadineRons/rons-partition-cellssti14ppt
Individual, excellent scientists have become increasingly important in the research funding landscape. Accurate bibliometric measures of an individual's performance could help identify excellent scientists, but still present a challenge. One crucial aspect in this respect is an adequate delineation of the sets of publications that determine the reference values to which a scientist's publication record and its citation impact should be compared. The structure of partition cells formed by intersecting fixed subject categories in a database has been proposed to approximate a scientist's specialty more closely than can be done with the broader subject categories. This paper investigates this cell structure's suitability as an underlying basis for methodologies to assess individual scientists, from two perspectives:
(1) Proximity to the actual structure of publication records of individual scientists: The distribution and concentration of publications over the highly fragmented structure of partition cells are examined for a sample of ERC grantees;
(2) Proximity to customary levels of accuracy: Differences in commonly used reference values (mean expected number of citations per publication, and threshold number of citations for highly cited publications) between adjacent partition cells are compared to differences in two other dimensions: successive publication years and successive citation window lengths.
Findings from both perspectives support partition cells, rather than the larger subject categories, as the journal-based structure on which to construct and apply methodologies for the assessment of highly specialized publication records such as those of individual scientists.
Rons, N. (2013). Groups of Highly Cited Publications: Stability in Content with Citation Window Length. In: Proceedings of ISSI 2013, 14th International Society of Scientometrics and Informetrics Conference, Vienna, Austria, 15-19 July 2013, Juan Gorraiz, Edgar Schiebel, Christian Gumpenberger, Marianne Hörlesberger, Henk Moed (Eds.), Vol. 2, 1998-2000. (Poster paper) PDF: http://arxiv.org/abs/1307.6797 POSTER: http://www.slideshare.net/NadineRons/rons-hi-cicontentstabilityissi13poster
The growing focus in research policy worldwide on top scientists makes it increasingly important to define adequate supporting measures to help identify excellent scientists. Highly cited publications have long been associated with research excellence. At the same time, the analysis of the high end of citation distributions is still a challenging topic in evaluative bibliometrics. Evaluations typically require indicators that generate sufficiently stable results when applied to recent publication records of limited size. Highly cited publications have been identified using two techniques in particular: pre-set percentiles, and the parameter-free Characteristic Scores and Scales (CSS) (Glänzel & Schubert, 1988). The stability required in assessments of relatively small publication records concerns the size as well as the content of groups of highly cited publications. Influencing factors include domain delineation and citation window length. Stability in size is evident for the pre-set percentiles, and has been demonstrated for the CSS methodology beyond an initial citation period of about three years (Glänzel, 2007). Stability in content is less straightforward, considering for instance that more highly cited publications can have a later citation peak, as observed by Abt (1981) for astronomical papers. This paper investigates the stability in content of groups of highly cited publications, i.e. the extent to which individual publications enter and leave the group as the citation window is enlarged.
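The CSS technique referenced above (Glänzel & Schubert, 1988) derives parameter-free thresholds by repeatedly truncating the citation distribution at its mean: the first score is the overall mean, the next is the mean of the papers at or above it, and so on. A minimal sketch with fictive citation counts:

```python
# Characteristic Scores and Scales: successive means of the truncated
# citation distribution serve as thresholds for "highly cited" classes.
# Citation counts below are fictive.

def css_scores(citations, levels=3):
    """Return up to `levels` successive mean-truncation thresholds."""
    scores = []
    subset = list(citations)
    for _ in range(levels):
        if not subset:
            break
        mean = sum(subset) / len(subset)
        scores.append(mean)
        # Keep only the papers cited at least as often as the current mean.
        subset = [c for c in subset if c >= mean]
    return scores

cites = [0, 0, 1, 2, 3, 5, 8, 13, 21, 34]
print(css_scores(cites))  # → [8.7, 22.666666666666668, 34.0]
```

Because the thresholds adapt to the distribution itself rather than being fixed percentiles, no cut-off parameter has to be chosen in advance — which is exactly why the stability of the resulting groups under a growing citation window is worth investigating.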
Rons, N. (2012). Characteristics of International versus Non-International Scientific Publication Media in Team- and Author-Based Data. Proceedings of STI 2012 Montréal, 17th International Conference on Science and Technology Indicators, Montréal, Québec, Canada, 05-08 September 2012, Eric Archambault, Yves Gingras and Vincent Larivière (Eds.), Vol. 2, 888-889. (Poster paper) PDF: http://arxiv.org/abs/1307.6792 POSTER: http://www.slideshare.net/NadineRons/rons-internat-publmediasti12poster
The enlarged coverage of the international publication and citation databases Web of Science and Scopus towards local media in social sciences was a welcome response to an increased usage of these databases in evaluation and funding systems. The mostly international journals available earlier were the basis for the development of current standard bibliometric indicators. The same indicators may no longer measure exactly the same concepts when applied to newly introduced or extended media categories, with possibly different characteristics than those of international journals. This paper investigates differences between media with and without international dimension in publication data at team and author level. The findings relate the international publication categories to research quality, important for validation of their usage in evaluation or funding models that aim to stimulate quality.
Rons, N. (2011). Research Excellence Milestones of BRIC and N-11 Countries. In: Proceedings of ISSI 2011, 13th Conference of the International Society for Scientometrics and Informetrics, Durban, South Africa, 04-07 July 2011. Ed Noyons, Patrick Ngulube and Jacqueline Leta (Eds.), Vol. 2, 1049-1051. (Poster paper) PDF: http://arxiv.org/abs/1307.6791 POSTER: http://www.slideshare.net/NadineRons/rons-research-excellencemilestonesissi11poster
While scientific performance is an important aspect of a stable and healthy economy, measures for it have yet to gain their place in economic country profiles. As useful indicators for this performance dimension, this paper introduces the concept of milestones for research excellence, as points of transition to higher-level contributions at the leading edge of science. The proposed milestones are based on two indicators associated with research excellence, the impact vitality profile and the production of review-type publications, both applied to a country's publications in the top journals Nature and Science. The milestones are determined for two distinct groups of emerging market economies: the BRIC countries, which outperformed the relative growth expected at their identification in 2001, and the N-11 or Next Eleven countries, identified in 2005 as potential candidates for a BRIC-like evolution. Results show how these two groups at different economic levels can be clearly distinguished based on the research milestones, indicating a potential utility as parameters in an economic context.
Rons, N. (2010). Interdisciplinary Research Collaborations: Evaluation of a Funding Program. In: Proceedings of the Sixth International Conference on Webometrics, Informetrics and Scientometrics (WIS) and Eleventh COLLNET Meeting (CD-rom and online), Mysore, India, 19-22 October, 2010, 692-704. PDF: http://arxiv.org/abs/1307.6784v1 PRESENTATION: http://www.slideshare.net/NadineRons/rons-interd-researchcollabcollnet2010ppt
Innovative ideas are often situated where disciplines meet, and socio-economic problems generally require contributions from several disciplines. Ways to stimulate interdisciplinary research collaborations are therefore an increasing point of attention for science policy. There is concern that ‘regular’ funding programs, involving advice from disciplinary experts and discipline-bound viewpoints, may not adequately stimulate, select or evaluate this kind of research. This has led to specific policies aimed at interdisciplinary research in many countries. There is, however, at this moment no generally accepted method to adequately select and evaluate interdisciplinary research. In the vast context of different forms of interdisciplinarity, this paper aims to contribute to the debate on best practices to stimulate and support interdisciplinary research collaborations. It describes the selection procedures and results of a university program supporting networks formed 'bottom up', integrating expertise from different disciplines. The program’s recent evaluation indicates that it is successful in selecting and supporting the interdisciplinary synergies aimed for, responding to a need experienced in the field. The analysis further confirms that potential for interdisciplinary collaboration is present in all disciplines.
Rons, N. (2010). Output and citation impact of interdisciplinary networks: Experiences from a dedicated funding program. In: Book of Abstracts, 11th International Conference on Science and Technology Indicators "Creating Value for Users", Leiden, The Netherlands, 9-11 September 2010, 227-228. (Poster paper) PDF: http://arxiv.org/abs/1307.6778, also in http://www.cwts.nl/pdf/BookofAbstracts2010_version_15072010.pdf POSTER: http://www.slideshare.net/NadineRons/rons-output-impactinterdnetwsti10poster
In a context of ever more specialized scientists, interdisciplinarity receives increasing attention, as innovating ideas are often situated where the disciplines meet. In many countries science policy makers have installed dedicated funding programs and policies. This induces a need for specific tools for their support. There is, however, not yet a generally accepted quantitative method or set of criteria to recognize and evaluate interdisciplinary research outputs (Tracking and evaluating interdisciplinary research: metrics and maps, 12th ISSI Conference, 2009). Interdisciplinarity also takes on very different forms, as distinguished in overviews from the first codifications (Klein, 1990) to the latest reference work (Frodeman et al., 2010). In the specific context of research measurement and evaluation, interdisciplinarity was discussed e.g. by Rinia (2007) and Porter et al. (2006). This empirical study aims to contribute to the understanding and the measuring of interdisciplinary research at the micro level, in the form of new synergies between disciplines. Investigation of a specialized funding program shows how a new interdisciplinary synergy and its citation impact are visible in co-publications and co-citations, and that these are important parameters for assessment. The results also demonstrate the effect of funding, which is clearly present after about three years.
Rons, N. and De Bruyn, A. (2010). Quality related publication categories in social sciences and humanities, based on a university's peer review assessments. In: Book of Abstracts, 11th International Conference on Science and Technology Indicators "Creating Value for Users", Leiden, The Netherlands, 9-11 September 2010, 229-230. (Poster paper) PDF: http://arxiv.org/abs/1307.6773, also in http://www.cwts.nl/pdf/BookofAbstracts2010_version_15072010.pdf POSTER: http://www.slideshare.net/NadineRons/rons-debruyn-qualrelpubcatsshsti10poster
Bibliometric analysis has firmly conquered its place as an instrument for evaluation and international comparison of performance levels. Consequently, differences in coverage by standard bibliometric databases created a dichotomy between on the one hand the well covered 'exact' sciences, and on the other hand most of the social sciences and humanities with a more limited coverage (Nederhof, 2006). The latter domains also need to be able to soundly demonstrate their level of performance and claim or legitimate funding accordingly. An important part of the output volume in social sciences appears as books, book chapters and national literature (Hicks, 2004). To proceed from publication data to performance measurement, quantitative publication counts need to be combined with qualitative information, for example from peer assessment or validation (European Expert Group on Assessment of University-Based Research, 2010), to identify those categories that represent research quality as perceived by peers. An accurate focus is crucial in order to stimulate, recognize and reward high quality achievements only. This paper demonstrates how such a selection of publication categories can be based on correlations with peer judgments. It is also illustrated that the selection should be sufficiently precise, to avoid subcategories negatively correlated with peer judgments. The findings indicate that, also in social sciences and humanities, publications in journals with an international referee system are the most important category for evaluating quality. They are followed by book chapters with an international referee system and contributions in international conference proceedings.
Rons, N. and Amez, L. (2008). Impact Vitality - A Measure for Excellent Scientists. In: Book of Abstracts, 10th International Conference on Science and Technology Indicators, Vienna, Austria, 17-20 September 2008, 211-213. PDF: http://arxiv.org/abs/1307.6770 PRESENTATION: http://www.slideshare.net/NadineRons/rons-amez-impactvitalitysti08ppt
In many countries and at European level, research policy increasingly focuses on 'excellent' researchers. The concept of excellence, however, is complex and multidimensional. For individual scholars it involves talents for innovative knowledge creation and successful transmission to peers, as well as management capacities. Excellence is also a comparative concept, implying the ability to surpass others [TIJSSEN, 2003]. Grants are in general awarded based on assessments by expert committees. While peer review is a widely accepted practice, it is nevertheless also subject to criticism. At higher aggregation levels, peer assessments are often supported by quantitative measures. At individual level, most of these measures are much less appropriate and there is a need for new, dedicated indicators.
Amez, L. and Rons, N. (2008). Composing a Publication List for Individual Researcher Assessment by Merging Information from Different Sources. In: Book of Abstracts, 10th International Conference on Science and Technology Indicators, Vienna, Austria, 17-20 September 2008, 435-437. (Poster paper)
Citation and publication profiles are gaining importance for the evaluation of top researchers when it comes to the appropriation of funding for excellence programs or career promotion judgments. Indicators like the Normalized Mean Citation Rate, the h-index or other distinguishing measures are increasingly used to picture the characteristics of individual scholars. Using bibliometric techniques for individual assessment is known to be particularly delicate, as the chance of errors being averaged away becomes smaller, whereas a minor incompleteness can have a significant influence on the evaluation outcome. The quality of the data thus becomes crucial to the legitimacy of the methods used.
Rons, N. and De Bruyn, A. (2007). Quantitative CV-based indicators for research quality, validated by peer review. In: Proceedings of ISSI 2007, 11th International Conference of the International Society for Scientometrics and Informetrics, CSIC, Madrid, Spain, 25-27 June 2007, 930-931. (Poster paper) PDF: http://arxiv.org/abs/1307.6760 POSTER: http://www.slideshare.net/NadineRons/rons-debruyn-cvbasedindissi07poster
In a university, research assessments are organized at different policy levels (faculties, research council) in different contexts (funding, council membership, personnel evaluations). Each evaluation requires its own focus and methodology. To conduct a coherent research policy, however, the data on which different assessments are based should be well coordinated. A common set of core indicators for any type of research assessment can provide a supportive and objectifying tool for evaluations at different institutional levels and at the same time promote coherent decision-making. The same indicators can also form the basis for a 'light touch' monitoring instrument, signalling when and where a more thorough evaluation could be considered.
This poster paper shows how peer review results were used to validate a set of quantitative indicators for research quality for a first series of disciplines. The indicators correspond to categories in the university's standard CV format. For each discipline, specific indicators are identified that correspond to its own publication and funding characteristics. More globally valid indicators are also identified after normalization for discipline-characteristic performance levels. The method can be applied to any system where peer ratings and quantitative performance measures, both reliable and sufficiently detailed, can be combined for the same entities.
Uploads
Papers by Nadine Rons
PDF/DOI: http://arxiv.org/abs/1712.07087, or https://doi.org/10.1016/j.joi.2017.12.003
Bibliometric methods for the analysis of highly specialized subjects are increasingly investigated and debated. Information and assessments well-focused at the specialty level can help make important decisions in research and innovation policy. This paper presents a novel method to approximate the specialty to which a given publication record belongs. The method partially combines sets of key values for four publication data fields: source, title, authors and references. The approach is founded in concepts defining research disciplines and scholarly communication, and in empirically observed regularities in publication data. The resulting specialty approximation consists of publications associated to the investigated publication record via key values for at least three of the four data fields. This paper describes the method and illustrates it with an application to publication records of individual scientists. The illustration also successfully tests the focus of the specialty approximation in terms of its ability to connect and help identify peers. Potential tracks for further investigation include analyses involving other kinds of specialized publication records, studies for a broader range of specialties, and exploration of the potential for diverse applications in research and research policy context.
Field normalized citation rates are well-established indicators for research performance from the broadest aggregation levels such as countries, down to institutes and research teams. When applied to still more specialized publication sets at the level of individual scientists, also a more accurate delimitation is required of the reference domain that provides the expectations to which a performance is compared. This necessity for sharper accuracy challenges standard methodology based on predefined subject categories. This paper proposes a way to define a reference domain that is more strongly delimited than in standard methodology, by building it up out of cells of the partition created by the pre-defined subject categories and their intersections. This partition approach can be applied to different existing field normalization variants. The resulting reference domain lies between those generated by standard field normalization and journal normalization. Examples based on fictive and real publication records illustrate how the potential impact on results can exceed or be smaller than the effect of other currently debated normalization variants, depending on the case studied. The proposed Partition-based Field Normalization is expected to offer advantages in particular at the level of individual scientists and other very specific publication records, such as publication output from interdisciplinary research."
Innovative ideas are often situated where disciplines meet, and socio-economic problems generally require contributions from several disciplines. Ways to stimulate interdisciplinary research collaborations are therefore an increasing point of attention for science policy. There is concern that ‘regular’ funding programs, involving advice from disciplinary experts and discipline-bound viewpoints, may not adequately stimulate, select or evaluate this kind of research. This has led to specific policies aimed at interdisciplinary research in many countries. There is, however, at this moment no generally accepted method to adequately select and evaluate interdisciplinary research. In the vast context of different forms of interdisciplinarity, this paper aims to contribute to the debate on best practices to stimulate and support interdisciplinary research collaborations. It describes the selection procedures and results of a university program supporting networks formed 'bottom up', integrating expertise from different disciplines. The program’s recent evaluation indicates that it is successful in selecting and supporting the interdisciplinary synergies aimed for, responding to a need experienced in the field. The analysis further confirms that potential for interdisciplinary collaboration is present in all disciplines.
This paper contributes to the quest for an operational definition of 'research excellence' and proposes a translation of the excellence concept into a bibliometric indicator. Starting from a textual analysis of funding program calls aimed at individual researchers, and from the challenges for an indicator at this level in particular, a new type of indicator is proposed. The Impact Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a researcher's publication output, based on the change in volume over time of the citing publications. The introduced metric is shown to possess attractive operational characteristics and to meet a number of criteria which are desirable when comparing individual researchers. The validity of one of the possible indicator variants is tested using a small dataset of applicants for a senior full-time Research Fellowship. Options for further research involve testing various indicator variants on larger samples linked to different kinds of evaluations.
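The abstract characterizes the Impact Vitality indicator only qualitatively, as the change in volume over time of the citing publications; the exact formula is given in Rons & Amez (2008). As a loose illustration of the underlying idea only, and not the published indicator, one could measure the trend in yearly citing-publication counts:

```python
# Illustrative sketch only: a least-squares slope over yearly counts of
# citing publications. A positive slope suggests a growing ("vital")
# citation impact. This is NOT the published Impact Vitality formula.

def citing_volume_trend(citing_counts):
    """Least-squares slope of yearly citing-publication counts."""
    n = len(citing_counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(citing_counts) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, citing_counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical yearly counts of publications citing a researcher's output.
print(citing_volume_trend([5, 8, 12, 15]))  # 3.4: citing volume is growing
```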
This paper describes the method for ex-post peer review evaluation per research discipline used at the Vrije Universiteit Brussel (VUB) and summarizes the outcomes obtained from it. The method produces pertinent advice and triggers responses - at the level of the individual researcher, the research team and the university’s research management - for the benefit of research quality, competitiveness and visibility. Imposed reflection and contacts during and after the evaluation procedure modify the individual researcher’s attitude, improve the research teams' strategies and allow for the extraction of general recommendations that are used as discipline-dependent guidelines in the university’s research management. The deep insights gained into the different research disciplines and the substantial data sets on their research support the university management in its policy decisions and in building policy instruments. Moreover, the results are used as a basis for comparison with other assessments, leading to a better understanding of the possibilities and limitations of different evaluation processes. The peer review method can be applied systematically in a pluri-annual cycle of research discipline evaluations to build up a complete overview, or it can be activated on an ad hoc basis for a particular discipline, based on demands from research teams or on strategic or policy arguments.
Conference Presentations by Nadine Rons
Individual excellent scientists have become increasingly important in the research funding landscape. Accurate bibliometric measures of an individual's performance could help identify excellent scientists, but still present a challenge. One crucial aspect in this respect is an adequate delineation of the sets of publications that determine the reference values to which a scientist's publication record and its citation impact should be compared. The structure of partition cells formed by intersecting fixed subject categories in a database has been proposed to approximate a scientist's specialty more closely than the broader subject categories can. This paper investigates this cell structure's suitability as an underlying basis for methodologies to assess individual scientists, from two perspectives:
(1) Proximity to the actual structure of publication records of individual scientists: The distribution and concentration of publications over the highly fragmented structure of partition cells are examined for a sample of ERC grantees;
(2) Proximity to customary levels of accuracy: Differences in commonly used reference values (mean expected number of citations per publication, and threshold number of citations for highly cited publications) between adjacent partition cells are compared to differences in two other dimensions: successive publication years and successive citation window lengths.
Findings from both perspectives support partition cells rather than the larger subject categories as a journal-based structure on which to construct and apply methodologies for the assessment of highly specialized publication records such as those of individual scientists.
The growing focus of research policy worldwide on top scientists makes it increasingly important to define adequate supporting measures to help identify excellent scientists. Highly cited publications have long been associated with research excellence. At the same time, the analysis of the high end of citation distributions is still a challenging topic in evaluative bibliometrics. Evaluations typically require indicators that generate sufficiently stable results when applied to recent publication records of limited size. Highly cited publications have been identified using two techniques in particular: pre-set percentiles, and the parameter-free Characteristic Scores and Scales (CSS) (Glänzel & Schubert, 1988). The stability required in assessments of relatively small publication records concerns the size as well as the content of groups of highly cited publications. Influencing factors include domain delineation and citation window length. Stability in size is evident for the pre-set percentiles, and has been demonstrated for the CSS methodology beyond an initial citation period of about three years (Glänzel, 2007). Stability in content is less straightforward, considering for instance that more highly cited publications can have a later citation peak, as observed by Abt (1981) for astronomical papers. This paper investigates the stability in content of groups of highly cited publications, i.e. the extent to which individual publications enter and leave the group as the citation window is enlarged.
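The parameter-free CSS technique mentioned above derives successive citation thresholds from the distribution itself: the first score is the mean of the full distribution, the next is the mean of the publications cited at least that often, and so on. A minimal sketch, with invented citation counts:

```python
# Sketch of Characteristic Scores and Scales (CSS) thresholds following
# Glänzel & Schubert (1988): repeatedly take the mean and restrict the
# distribution to publications at or above it. Citation counts below
# are invented for illustration.

def css_thresholds(citations, k=3):
    """Return up to k CSS threshold scores for a citation distribution."""
    scores = []
    subset = list(citations)
    for _ in range(k):
        mean = sum(subset) / len(subset)
        scores.append(mean)
        reduced = [c for c in subset if c >= mean]
        # Stop when the distribution can no longer be reduced.
        if not reduced or len(reduced) == len(subset):
            break
        subset = reduced
    return scores

cites = [0, 1, 1, 2, 3, 5, 8, 20]
print(css_thresholds(cites))  # [5.0, 11.0, 20.0]
```

Publications can then be classified by these scores, e.g. those below the first score as poorly cited and those at or above the last score as outstandingly cited.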
The expanded coverage of the international publication and citation databases Web of Science and Scopus to include local media in the social sciences was a welcome response to the increased use of these databases in evaluation and funding systems. The mostly international journals available earlier were the basis for the development of current standard bibliometric indicators. The same indicators may no longer measure exactly the same concepts when applied to newly introduced or extended media categories, which may have different characteristics than international journals. This paper investigates differences between media with and without an international dimension in publication data at the team and author level. The findings relate the international publication categories to research quality, which is important for validating their usage in evaluation or funding models that aim to stimulate quality.
While scientific performance is an important aspect of a stable and healthy economy, measures for it have yet to gain their place in economic country profiles. As useful indicators for this performance dimension, this paper introduces the concept of milestones for research excellence, as points of transition to higher-level contributions at the leading edge of science. The proposed milestones are based on two indicators associated with research excellence, the impact vitality profile and the production of review-type publications, both applied to a country's publications in the top journals Nature and Science. The milestones are determined for two distinct groups of emerging market economies: the BRIC countries, which outperformed the relative growth expected at their identification in 2001, and the N-11 or Next Eleven countries, identified in 2005 as potential candidates for a BRIC-like evolution. Results show how these two groups at different economic levels can be clearly distinguished based on the research milestones, indicating a potential utility as parameters in an economic context.
Innovative ideas are often situated where disciplines meet, and socio-economic problems generally require contributions from several disciplines. Ways to stimulate interdisciplinary research collaborations are therefore an increasing point of attention for science policy. There is concern that ‘regular’ funding programs, involving advice from disciplinary experts and discipline-bound viewpoints, may not adequately stimulate, select or evaluate this kind of research. This has led to specific policies aimed at interdisciplinary research in many countries. There is, however, at this moment no generally accepted method to adequately select and evaluate interdisciplinary research. In the vast context of different forms of interdisciplinarity, this paper aims to contribute to the debate on best practices to stimulate and support interdisciplinary research collaborations. It describes the selection procedures and results of a university program supporting networks formed 'bottom up', integrating expertise from different disciplines. The program’s recent evaluation indicates that it is successful in selecting and supporting the interdisciplinary synergies aimed for, responding to a need experienced in the field. The analysis further confirms that potential for interdisciplinary collaboration is present in all disciplines.
In a context of ever more specialized scientists, interdisciplinarity receives increasing attention, as innovative ideas are often situated where the disciplines meet. In many countries, science policy makers have installed dedicated funding programs and policies. This induces a need for specific tools for their support. There is, however, not yet a generally accepted quantitative method or set of criteria to recognize and evaluate interdisciplinary research outputs (Tracking and evaluating interdisciplinary research: metrics and maps, 12th ISSI Conference, 2009). Interdisciplinarity also takes on very different forms, as distinguished in overviews from the first codifications (Klein, 1990) to the latest reference work (Frodeman et al., 2010). In the specific context of research measurement and evaluation, interdisciplinarity was discussed e.g. by Rinia (2007) and Porter et al. (2006). This empirical study aims to contribute to the understanding and the measuring of interdisciplinary research at the micro level, in the form of new synergies between disciplines. Investigation of a specialized funding program shows how a new interdisciplinary synergy and its citation impact are visible in co-publications and co-citations, and that these are important parameters for assessment. The results also demonstrate the effect of funding, which is clearly present after about three years.
Bibliometric analysis has firmly conquered its place as an instrument for evaluation and international comparison of performance levels. Consequently, differences in coverage by standard bibliometric databases have installed a dichotomy between, on the one hand, the well-covered 'exact' sciences and, on the other hand, most of the social sciences and humanities, which have more limited coverage (Nederhof, 2006). The latter domains also need to be able to soundly demonstrate their level of performance and claim or legitimize funding accordingly. An important part of the output volume in the social sciences appears as books, book chapters and national literature (Hicks, 2004). To proceed from publication data to performance measurement, quantitative publication counts need to be combined with qualitative information, for example from peer assessment or validation (European Expert Group on Assessment of University-Based Research, 2010), to identify those categories that represent research quality as perceived by peers. An accurate focus is crucial in order to stimulate, recognize and reward only high-quality achievements. This paper demonstrates how such a selection of publication categories can be based on correlations with peer judgments. It is also illustrated that the selection should be sufficiently precise, to avoid subcategories negatively correlated with peer judgments. The findings indicate that, also in the social sciences and humanities, publications in journals with an international referee system are the most important category for evaluating quality, followed by book chapters with an international referee system and contributions in international conference proceedings.
In many countries and at the European level, research policy increasingly focuses on 'excellent' researchers. The concept of excellence, however, is complex and multidimensional. For individual scholars it involves talents for innovative knowledge creation and successful transmission to peers, as well as management capacities. Excellence is also a comparative concept, implying the ability to surpass others [TIJSSEN, 2003]. Grants are in general awarded based on assessments by expert committees. While peer review is a widely accepted practice, it is nevertheless also subject to criticism. At higher aggregation levels, peer assessments are often supported by quantitative measures. At the individual level, most of these measures are much less appropriate, and there is a need for new, dedicated indicators.
Citation and publication profiles are gaining importance for the evaluation of top researchers when it comes to the allocation of funding for excellence programs or career promotion judgments. Indicators like the Normalized Mean Citation Rate, the h-index or other distinguishing measures are increasingly used to picture the characteristics of individual scholars. Using bibliometric techniques for individual assessment is known to be particularly delicate, as the chance of errors being averaged away becomes smaller, whereas a minor incompleteness can have a significant influence on the evaluation outcome. The quality of the data thus becomes crucial to the legitimacy of the methods used.
In a university, research assessments are organized at different policy levels (faculties, research council) and in different contexts (funding, council membership, personnel evaluations). Each evaluation requires its own focus and methodology. To conduct a coherent research policy, however, the data on which different assessments are based should be well coordinated. A common set of core indicators for any type of research assessment can provide a supportive and objective tool for evaluations at different institutional levels and at the same time promote coherent decision-making. The same indicators can also form the basis for a 'light touch' monitoring instrument, signalling when and where a more thorough evaluation could be considered.
This poster paper shows how peer review results were used to validate a set of quantitative indicators for research quality for a first series of disciplines. The indicators correspond to categories in the university's standard CV format. For each discipline, specific indicators are identified that correspond to its own publication and funding characteristics. More globally valid indicators are also identified after normalization for discipline-characteristic performance levels. The method can be applied to any system where peer ratings and quantitative performance measures, both reliable and sufficiently detailed, can be combined for the same entities.
PDF/DOI: http://arxiv.org/abs/1712.07087, or https://doi.org/10.1016/j.joi.2017.12.003
Bibliometric methods for the analysis of highly specialized subjects are increasingly investigated and debated. Information and assessments well-focused at the specialty level can help make important decisions in research and innovation policy. This paper presents a novel method to approximate the specialty to which a given publication record belongs. The method partially combines sets of key values for four publication data fields: source, title, authors and references. The approach is founded in concepts defining research disciplines and scholarly communication, and in empirically observed regularities in publication data. The resulting specialty approximation consists of publications associated to the investigated publication record via key values for at least three of the four data fields. This paper describes the method and illustrates it with an application to publication records of individual scientists. The illustration also successfully tests the focus of the specialty approximation in terms of its ability to connect and help identify peers. Potential tracks for further investigation include analyses involving other kinds of specialized publication records, studies for a broader range of specialties, and exploration of the potential for diverse applications in research and research policy context.
Field normalized citation rates are well-established indicators for research performance from the broadest aggregation levels such as countries, down to institutes and research teams. When applied to still more specialized publication sets at the level of individual scientists, also a more accurate delimitation is required of the reference domain that provides the expectations to which a performance is compared. This necessity for sharper accuracy challenges standard methodology based on predefined subject categories. This paper proposes a way to define a reference domain that is more strongly delimited than in standard methodology, by building it up out of cells of the partition created by the pre-defined subject categories and their intersections. This partition approach can be applied to different existing field normalization variants. The resulting reference domain lies between those generated by standard field normalization and journal normalization. Examples based on fictive and real publication records illustrate how the potential impact on results can exceed or be smaller than the effect of other currently debated normalization variants, depending on the case studied. The proposed Partition-based Field Normalization is expected to offer advantages in particular at the level of individual scientists and other very specific publication records, such as publication output from interdisciplinary research."
Innovative ideas are often situated where disciplines meet, and socio-economic problems generally require contributions from several disciplines. Ways to stimulate interdisciplinary research collaborations are therefore an increasing point of attention for science policy. There is concern that ‘regular’ funding programs, involving advice from disciplinary experts and discipline-bound viewpoints, may not adequately stimulate, select or evaluate this kind of research. This has led to specific policies aimed at interdisciplinary research in many countries. There is however at this moment no generally accepted method to adequately select and evaluate interdisciplinary research. In the vast context of different forms of interdisciplinarity, this paper aims to contribute to the debate on best practices to stimulate and support interdisciplinary research collaborations. It describes the selection procedures and results of a university program supporting networks formed 'bottom up', integrating expertise from different disciplines. The program’s recent evaluation indicates that it is successful in selecting and supporting the interdisciplinary synergies aimed for, responding to a need experienced in the field. The analysis further confirms that potential for interdisciplinary collaboration is present in all disciplines."
This paper contributes to the quest for an operational definition of 'research excellence' and proposes a translation of the excellence concept into a bibliometric indicator. Starting from a textual analysis of funding program calls aimed at individual researchers and from the challenges for an indicator at this level in particular, a new type of indicator is proposed. The Impact Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a researcher's publication output, based on the change in volume over time of the citing publications. The introduced metric is shown to posses attractive operational characteristics and meets a number of criteria which are desirable when comparing individual researchers. The validity of one of the possible indicator variants is tested using a small dataset of applicants for a senior full time Research Fellowship. Options for further research involve testing various indicator variants on larger samples linked to different kinds of evaluations."
This paper describes the method for ex-post peer review evaluation per research discipline used at the Vrije Universiteit Brussel (VUB) and summarizes the outcomes obtained from it. The method produces pertinent advice and triggers responses - at the level of the individual researcher, the research team and the university’s research management - for the benefit of research quality, competitivity and visibility. Imposed reflection and contacts during and after the evaluation procedure modify the individual researcher’s attitude, improve the research teams' strategies and allow for the extraction of general recommendations that are used as discipline-dependent guidelines in the university’s research management. The deep insights gained in the different research disciplines and the substantial data sets on their research, support the university management in its policy decisions and in building policy instruments. Moreover, the results are used as a basis for comparison with other assessments, leading to a better understanding of the possibilities and limitations of different evaluation processes. The peer review method can be applied systematically in a pluri-annual cycle of research discipline evaluations to build up a complete overview, or it can be activated on an ad hoc basis for a particular discipline, based on demands from research teams or on strategic or policy arguments. "
Individual, excellent scientists have become increasingly important in the research funding landscape. Accurate bibliometric measures of an individual's performance could help identify excellent scientists, but still present a challenge. One crucial aspect in this respect is an adequate delineation of the sets of publications that determine the reference values to which a scientist's publication record and its citation impact should be compared. The structure of partition cells formed by intersecting fixed subject categories in a database has been proposed to approximate a scientist's specialty more closely than can be done with the broader subject categories. This paper investigates this cell structure's suitability as an underlying basis for methodologies to assess individual scientists, from two perspectives:
(1) Proximity to the actual structure of publication records of individual scientists: The distribution and concentration of publications over the highly fragmented structure of partition cells are examined for a sample of ERC grantees;
(2) Proximity to customary levels of accuracy: Differences in commonly used reference values (mean expected number of citations per publication, and threshold number of citations for highly cited publications) between adjacent partition cells are compared to differences in two other dimensions: successive publication years and successive citation window lengths.
Findings from both perspectives are in support of partition cells rather than the larger subject categories as a journal based structure on which to construct and apply methodologies for the assessment of highly specialized publication records such as those of individual scientists.
The growing focus in research policy worldwide on top scientists makes it increasingly important to define adequate supporting measures to help identify excellent scientists. Highly cited publications have since long been associated to research excellence. At the same time, the analysis of the high-end of citation distributions still is a challenging topic in evaluative bibliometrics. Evaluations typically require indicators that generate sufficiently stable results when applied to recent publication records of limited size. Highly cited publications have been identified using two techniques in particular: pre-set percentiles, and the parameter free Characteristic Scores and Scales (CSS) (Glänzel & Schubert, 1988). The stability required in assessments of relatively small publication records, concerns size as well as content of groups of highly cited publications. Influencing factors include domain delineation and citation window length. Stability in size is evident for the pre-set percentiles, and has been demonstrated for the CSS-methodology beyond an initial citation period of about three years (Glänzel, 2007). Stability in content is less straightforward, considering for instance that more highly cited publications can have a later citation peak, as observed by Abt (1981) for astronomical papers. This paper investigates the stability in content of groups of highly cited publications, i.e. the extent to which individual publications enter and leave the group as the citation window is enlarged."
The enlarged coverage of the international publication and citation databases Web of Science and Scopus towards local media in social sciences was a welcome response to an increased usage of these databases in evaluation and funding systems. The mostly international journals available earlier were the basis for the development of current standard bibliometric indicators. The same indicators may no longer measure exactly the same concepts when applied to newly introduced or extended media categories, with possibly different characteristics than those of international journals. This paper investigates differences between media with and without international dimension in publication data at team and author level. The findings relate the international publication categories to research quality, important for validation of their usage in evaluation or funding models that aim to stimulate quality."
While scientific performance is an important aspect of a stable and healthy economy, measures for it have yet to gain their place in economic country profiles. As useful indicators for this performance dimension, this paper introduces the concept of milestones for research excellence, as points of transition to higher-level contributions at the leading edge of science. The proposed milestones are based on two indicators associated with research excellence, the impact vitality profile and the production of review type publications, both applied to a country's publications in the top journals Nature and Science. The milestones are determined for two distinct groups of emerging market economies: the BRIC countries, which outperformed the relative growth expected at their identification in 2001, and the N-11 or Next Eleven countries, identified in 2005 as potential candidates for a BRIClike evolution. Results show how these two groups at different economic levels can be clearly distinguished based on the research milestones, indicating a potential utility as parameters in an economic context."
Innovative ideas are often situated where disciplines meet, and socio-economic problems generally require contributions from several disciplines. Ways to stimulate interdisciplinary research collaborations are therefore an increasing point of attention for science policy. There is concern that ‘regular’ funding programs, involving advice from disciplinary experts and discipline-bound viewpoints, may not adequately stimulate, select or evaluate this kind of research. This has led to specific policies aimed at interdisciplinary research in many countries. There is however at this moment no generally accepted method to adequately select and evaluate interdisciplinary research. In the vast context of different forms of interdisciplinarity, this paper aims to contribute to the debate on best practices to stimulate and support interdisciplinary research collaborations. It describes the selection procedures and results of a university program supporting networks formed 'bottom up', integrating expertise from different disciplines. The program’s recent evaluation indicates that it is successful in selecting and supporting the interdisciplinary synergies aimed for, responding to a need experienced in the field. The analysis further confirms that potential for interdisciplinary collaboration is present in all disciplines."
In a context of ever more specialized scientists, interdisciplinarity receives increasing attention as innovating ideas are often situated where the disciplines meet. In many countries science policy makers installed dedicated funding programs and policies. This induces a need for specific tools for their support. There is however not yet a generally accepted quantitative method or set of criteria to recognize and evaluate interdisciplinary research outputs (Tracking and evaluating interdisciplinary research: metrics and maps, 12th ISSI Conference, 2009). Interdisciplinarity also takes on very different forms, as distinguished in overviews from the first codifications (Klein, 1990) to the latest reference work (Frodeman et al., 2010). In the specific context of research measurement and evaluation, interdisciplinarity was discussed e.g. by Rinia (2007) and Porter et al. (2006). This empirical study aims to contribute to the understanding and the measuring of interdisciplinary research at the micro level, in the form of new synergies between disciplines. Investigation of a specialized funding program shows how a new interdisciplinary synergy and its citation impact are visible in co-publications and cocitations, and that these are important parameters for assessment. The results also demonstrate the effect of funding, which is clearly present after about three years."
Bibliometric analysis has firmly conquered its place as an instrument for evaluation and international comparison of performance levels. Consequently, differences in coverage by standard bibliometric databases installed a dichotomy between on the one hand the well covered 'exact' sciences, and on the other hand most of the social sciences and humanities with a more limited coverage (Nederhof, 2006). Also the latter domains need to be able to soundly demonstrate their level of performance and claim or legitimate funding accordingly. An important part of the output volume in social sciences appears as books, book chapters and national literature (Hicks, 2004). To proceed from publication data to performance measurement, quantitative publication counts need to be combined with qualitative information, for example from peer assessment or validation (European Expert Group on Assessment of University-Based Research, 2010), to identify those categories that represent research quality as perceived by peers. An accurate focus is crucial in order to stimulate, recognize and reward high quality achievements only. This paper demonstrates how such a selection of publication categories can be based on correlations with peer judgments. It is also illustrated that the selection should be sufficiently precise, to avoid subcategories negatively correlated with peer judgments. The findings indicate that, also in social sciences and humanities, publications in journals with an international referee system are the most important category for evaluating quality. Book chapters with international referee system and contributions in international conference proceedings follow them."
In many countries and at the European level, research policy increasingly focuses on 'excellent' researchers. The concept of excellence, however, is complex and multidimensional. For individual scholars it involves talents for innovative knowledge creation and successful transmission to peers, as well as management capacities. Excellence is also a comparative concept, implying the ability to surpass others (Tijssen, 2003). Grants are generally awarded based on assessments by expert committees. While peer review is a widely accepted practice, it is nevertheless also subject to criticism. At higher aggregation levels, peer assessments are often supported by quantitative measures. At the individual level, most of these measures are much less appropriate, and there is a need for new, dedicated indicators.
Citation and publication profiles are gaining importance in the evaluation of top researchers for the allocation of funding in excellence programs and in career promotion decisions. Indicators such as the normalized mean citation rate, the h-index and other distinguishing measures are increasingly used to characterize individual scholars. Using bibliometric techniques for individual assessment is known to be particularly delicate: the chance of errors being averaged away becomes smaller, and a minor incompleteness can have a significant influence on the evaluation outcome. Data quality thus becomes crucial to the legitimacy of the methods used.
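The two indicators named above have compact textbook definitions, sketched below. These are generic formulations (a sorted citation-count list, and a mean citation rate divided by an assumed field baseline); real implementations on bibliometric databases involve more careful field delineation and citation-window choices:

```python
def h_index(citations):
    """Largest h such that h publications each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def normalized_mean_citation_rate(citations, field_baselines):
    """Mean citation rate of a record divided by the mean expected
    (field-specific) rate; 1.0 means citedness at field average."""
    return (sum(citations) / len(citations)) / (
        sum(field_baselines) / len(field_baselines)
    )

print(h_index([10, 8, 5, 4, 3]))                      # → 4
print(normalized_mean_citation_rate([4, 6], [5, 5]))  # → 1.0
```

The sensitivity the abstract warns about is visible here: dropping a single well-cited paper from `citations` can shift both numbers noticeably for a small individual record.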
In a university, research assessments are organized at different policy levels (faculties, research council) and in different contexts (funding, council membership, personnel evaluations). Each evaluation requires its own focus and methodology. To conduct a coherent research policy, however, the data on which different assessments are based should be well coordinated. A common set of core indicators for any type of research assessment can provide a supportive and objectifying tool for evaluations at different institutional levels and at the same time promote coherent decision-making. The same indicators can also form the basis for a 'light touch' monitoring instrument, signalling when and where a more thorough evaluation could be considered.
This poster paper shows how peer review results were used to validate a set of quantitative indicators of research quality for a first series of disciplines. The indicators correspond to categories in the university's standard CV format. For each discipline, specific indicators are identified that correspond to its own publication and funding characteristics. More globally valid indicators are also identified after normalization for discipline-characteristic performance levels. The method can be applied to any system in which peer ratings and quantitative performance measures, both reliable and sufficiently detailed, can be combined for the same entities.
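One common way to normalize for discipline-characteristic performance levels, in the spirit of what the abstract describes, is to express each indicator value relative to its own discipline's distribution. The z-score sketch below is an assumption about the form such a normalization could take, not the poster's actual method:

```python
from statistics import mean, pstdev

def normalize_by_discipline(scores, discipline_of):
    """Express each entity's indicator value as a z-score relative to
    its own discipline's mean and standard deviation, making values
    comparable across disciplines with different baseline levels."""
    by_disc = {}
    for entity, value in scores.items():
        by_disc.setdefault(discipline_of[entity], []).append(value)
    stats = {d: (mean(v), pstdev(v)) for d, v in by_disc.items()}
    return {
        e: (v - stats[discipline_of[e]][0]) / stats[discipline_of[e]][1]
        for e, v in scores.items()
    }
```

After this step, an entity publishing one standard deviation above its own discipline's mean gets the same score whether the discipline's raw counts are high (e.g. biomedicine) or low (e.g. history).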
In this paper, peer review reliability is investigated based on peer ratings of research teams at two Belgian universities. It is found that outcomes can be substantially influenced by the different ways in which experts attribute ratings. To increase the reliability of peer ratings, procedures creating a uniform reference level should be envisaged. At a minimum, one should check for signs of low reliability, which can be obtained from an analysis of the outcomes of the peer evaluation itself.
The peer review results are compared to outcomes from a citation analysis of publications by the same teams, in subject fields well covered by citation indexes. It is illustrated how, besides reliability, the comparability of results depends on the nature of the indicators, on the subject area and on the intrinsic characteristics of the methods. The results further confirm what is currently considered good practice: presenting results for not one but a series of indicators.