2015, Journal of Informetrics
Citation metrics are becoming pervasive in the quantitative evaluation of scholars, journals and institutions. More than ever before, hiring, promotion, and funding decisions rely on a variety of impact metrics that cannot disentangle quality from quantity of scientific output, and are biased by factors such as discipline and academic age. Biases affecting the evaluation of single papers are compounded when one aggregates citation-based metrics across an entire publication record. It is not trivial to compare the quality of two scholars who, during their careers, have published at different rates, in different disciplines, and in different periods of time. We propose a novel solution based on the generation of a statistical baseline specifically tailored to the academic profile of each researcher. Our method can decouple the roles of quantity and quality of publications to explain how a certain level of impact is achieved. The method is flexible enough to allow for the evaluation of, and fair comparison among, arbitrary collections of papers: scholar publication records, journals, and entire institutions; and can be extended to simultaneously suppress any source of bias. We show that our method can capture the quality of the work of Nobel laureates irrespective of number of publications, academic age, and discipline, even when traditional metrics indicate low impact in absolute terms. We further apply our methodology to almost a million scholars and over six thousand journals to measure the impact that cannot be explained by the volume of publications alone.
2014
Citation metrics are becoming pervasive in the quantitative evaluation of scholars, journals and institutions. More than ever before, hiring, promotion, and funding decisions rely on a variety of impact metrics that cannot disentangle quality from productivity, and are biased by factors such as discipline and academic age. Biases affecting the evaluation of single papers are compounded when one aggregates citation-based metrics across an entire publication record. It is not trivial to compare the quality of two scholars who, during their careers, have published at different rates, in different disciplines, and in different periods of time. We propose a novel solution based on the generation of a statistical baseline specifically tailored to the academic profile of each researcher. By decoupling productivity and impact, our method can determine whether a certain level of impact can be explained by productivity alone, or whether additional ingredients of scientific excellence are necessary. The method is flexible enough to allow for the evaluation of, and fair comparison among, arbitrary collections of papers: scholar publication records, journals, and entire institutions; and can be extended to simultaneously suppress any source of bias. We show that our method can capture the quality of the work of Nobel laureates irrespective of productivity, academic age, and discipline, even when traditional metrics indicate low impact in absolute terms. We further apply our methodology to almost a million scholars and over six thousand journals to quantify the impact required to demonstrate scientific excellence for a given level of productivity.
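The abstract describes the tailored baseline only at a high level, so the details below are assumptions rather than the paper's method: a minimal sketch of one way such a baseline could be generated, by Monte Carlo sampling synthetic publication records matched to a scholar's productivity, discipline, and period, then locating the observed impact within the resulting null distribution. All names (percentile_of_impact, field_pool) are hypothetical.

    import random

    def percentile_of_impact(scholar_citations, field_pool, n_trials=10_000, seed=0):
        """Locate a scholar's total citations within a null distribution of
        synthetic records of equal size, drawn from the same field-and-period
        pool of papers (an illustrative stand-in for a tailored baseline).

        scholar_citations : list of citation counts, one per paper
        field_pool        : citation counts of papers from the same
                            discipline and time window
        """
        rng = random.Random(seed)
        n_papers = len(scholar_citations)
        observed = sum(scholar_citations)

        # Null model: records with the same productivity, with citations
        # drawn at random from the scholar's own discipline and period.
        null_totals = [
            sum(rng.choices(field_pool, k=n_papers)) for _ in range(n_trials)
        ]

        # Fraction of synthetic records the scholar outperforms: impact
        # beyond what productivity alone would explain.
        return sum(t < observed for t in null_totals) / n_trials

    if __name__ == "__main__":
        pool = [0, 1, 1, 2, 3, 5, 8, 13, 40, 120]  # toy field citation counts
        print(percentile_of_impact([40, 13, 8, 5], pool))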
Publication statistics are ubiquitous in the ratings of scientific achievement, with citation counts and paper tallies factoring into an individual's consideration for postdoctoral positions, junior faculty positions, and tenure. Citation statistics are designed to quantify individual career achievement, both at the level of a single publication and over an individual's entire career. While some academic careers are defined by a few significant papers (possibly out of many), other academic careers are defined by the cumulative contribution made by the author's publications to the body of science. Several metrics have been formulated to quantify an individual's publication career, yet none of these metrics account for the collaboration group size and the time dependence of citation counts. In this paper we normalize publication metrics in order to achieve a universal framework for analyzing and comparing scientific achievement across both time and discipline. We study the publication careers of individual authors over the 50-year period 1958-2008 within six high-impact journals, among them Cell and Science. Using the normalized metrics (i) "citation shares" to quantify scientific success and (ii) "paper shares" to quantify scientific productivity, we compare the career achievement of individual authors within each journal, where each journal represents a local arena for competition. We uncover quantifiable statistical regularity in the probability density function of scientific achievement in all journals analyzed, which suggests that a fundamental driving force underlying scientific achievement is the competitive nature of scientific advancement.
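The abstract names "citation shares" and "paper shares" without giving formulas. The sketch below encodes one plausible reading of "accounting for collaboration group size and the time dependence of citation counts": shares split equally among coauthors, with citations deflated by the journal-year average. Both definitions are assumptions for illustration, not the paper's published formulas.

    def career_shares(papers):
        """Aggregate normalized career metrics for one author.

        papers: list of dicts with keys
          'citations' - citation count of the paper
          'n_authors' - collaboration group size
          'avg_cites' - mean citations of papers from the same journal and
                        year (deflates the time dependence of citations)

        Assumed definitions (not stated in the abstract):
          paper share    = 1 / n_authors
          citation share = (citations / avg_cites) / n_authors
        """
        paper_shares = sum(1 / p["n_authors"] for p in papers)
        citation_shares = sum(
            (p["citations"] / p["avg_cites"]) / p["n_authors"] for p in papers
        )
        return paper_shares, citation_shares

    if __name__ == "__main__":
        career = [
            {"citations": 50, "n_authors": 2, "avg_cites": 25.0},
            {"citations": 10, "n_authors": 5, "avg_cites": 20.0},
        ]
        print(career_shares(career))  # (0.7, 1.1)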
Scientific Reports, 2014
The impact factor (IF) of scientific journals has acquired a major role in the evaluation of the output of scholars, departments and whole institutions. Typically, papers appearing in journals with large values of the IF receive a high weight in such evaluations. However, at the end of the day one is interested in assessing the impact of individuals, rather than papers. Here we introduce the Author Impact Factor (AIF), which is the extension of the IF to authors. The AIF of an author A in year t is the average number of citations given by papers published in year t to papers published by A in a period of Δt years before year t. Due to its intrinsic dynamic character, the AIF is able to capture trends and variations of the impact of the scientific output of scholars in time, unlike the h-index, which is a growing measure that takes the whole career path into account.
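The verbal definition above translates directly into a formula; the notation below is a transcription of that sentence, with symbols of my choosing (the paper may use different ones):

    % AIF of author A in year t (symbols assumed):
    %   P_A(t, \Delta t): set of papers published by A in the window
    %                     [t - \Delta t, t - 1]
    %   c_p(t): citations received by paper p from papers published in year t
    \mathrm{AIF}_A(t) = \frac{\sum_{p \in P_A(t,\Delta t)} c_p(t)}{\lvert P_A(t,\Delta t) \rvert}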
PLoS ONE, 2012
Authorship and citation practices evolve with time and differ by academic discipline. As such, indicators of research productivity based on citation records are naturally subject to historical and disciplinary effects. We observe these effects on a corpus of astronomer career data constructed from a database of refereed publications. We employ a simple mechanism to measure research output using author and reference counts available in bibliographic databases to develop a citation-based indicator of research productivity. The total research impact (tori) quantifies, for an individual, the total amount of scholarly work that others have devoted to his/her work, measured in the volume of research papers. A derived measure, the research impact quotient (riq), is an age-independent measure of an individual's research ability. We demonstrate that these measures are substantially less vulnerable to temporal debasement and cross-disciplinary bias than the most popular current measures. The proposed measures of research impact, tori and riq, have been implemented in the Smithsonian/NASA Astrophysics Data System.
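The abstract does not spell out the formulas, so the sketch below is one illustrative reading, not the published definitions: each citation is weighted by the citing paper's reference-list length and the cited paper's author count (matching "author and reference counts available in bibliographic databases"), and riq is taken as an age-normalized variant of tori.

    def tori(papers):
        """Total research impact (illustrative reading): each citing paper
        contributes the share of its reference list pointing at the cited
        work, split among the cited paper's authors.

        papers: list of dicts with keys
          'n_authors'         - authors of the cited paper
          'citing_ref_counts' - reference-list lengths of its citing papers
        """
        return sum(
            sum(1.0 / r for r in p["citing_ref_counts"]) / p["n_authors"]
            for p in papers
        )

    def riq(papers, career_years):
        """Research impact quotient (illustrative reading): square root of
        tori normalized by career length, giving an age-independent rate."""
        return tori(papers) ** 0.5 / career_years

    if __name__ == "__main__":
        record = [{"n_authors": 2, "citing_ref_counts": [20, 40, 25]}]
        print(tori(record), riq(record, career_years=10))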
Proceedings of The National Academy of Sciences, 2008
We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited c times has large variations between different disciplines, but all distributions collapse onto a universal curve when the relative indicator c_f = c/c_0 is considered, where c_0 is the average number of citations per article for the discipline. In addition we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of c_f as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields.
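A minimal sketch of the rescaling, assuming one has per-discipline citation counts. The generalized h-index at the end ranks papers by c_f instead of raw citations; that particular reading of the generalization is an assumption.

    def relative_indicators(citations_by_field):
        """Rescale raw citation counts by the field average c_0,
        yielding the relative indicator c_f = c / c_0 for each paper."""
        rescaled = {}
        for field, citations in citations_by_field.items():
            c0 = sum(citations) / len(citations)  # average citations per article
            rescaled[field] = [c / c0 for c in citations]
        return rescaled

    def generalized_h(cf_values):
        """h-index computed on relative indicators: the largest h such that
        h papers have c_f >= h (an assumed reading of the generalization)."""
        ranked = sorted(cf_values, reverse=True)
        return sum(1 for rank, cf in enumerate(ranked, start=1) if cf >= rank)

    if __name__ == "__main__":
        data = {"physics": [10, 5, 1], "biology": [60, 30, 6]}
        # Both fields collapse onto the same rescaled values:
        # [1.875, 0.9375, 0.1875]
        print(relative_indicators(data))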
Using bibliometric data artificially generated through a model of citation dynamics calibrated on empirical data, we compare several indicators for the scientific impact of individual researchers. The use of such a controlled setup has the advantage of avoiding the biases present in real databases, and allows us to assess which aspects of the model dynamics and which traits of individual researchers a particular indicator actually reflects. We find that the simple citation average performs well in capturing the intrinsic scientific ability of researchers, whatever the length of their career. On the other hand, when productivity complements ability in the evaluation process, the well-known h and g indices reveal their potential, yet their normalized variants do not always yield a fair comparison between researchers at different career stages. Notably, the use of logarithmic units for citation counts allows us to build simple indicators with performance equal to that of h and g. Our analysis may provide useful hints for the proper use of bibliometric indicators. Additionally, our framework can be extended by including other aspects of the scientific production process and citation dynamics, with the potential to become a standard tool for the assessment of impact metrics.
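For reference, the two indices named above are easy to state in code; these are the standard textbook definitions, not anything specific to the model in the abstract:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    def g_index(citations):
        """Largest g such that the g most cited papers together
        have at least g**2 citations."""
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, c in enumerate(ranked, start=1):
            total += c
            if total >= rank * rank:
                g = rank
        return g

    if __name__ == "__main__":
        cites = [25, 8, 5, 3, 3, 1, 0]
        print(h_index(cites), g_index(cites))  # 3 6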
Scientometrics, 2014
The citation potential is a measure of the probability of being cited. It obviously differs among fields of science, social science, and humanities because of systematic differences in publication and citation behaviour across disciplines. In the past, the citation potential was studied at the journal level, considering the average number of references in established groups of journals (for example, the crown indicator is based on the journal subject categories in the Web of Science database). In this paper, some characterizations of an author's scientific research through three different research dimensions are proposed: production (journal papers), impact (journal citations), and reference (bibliographical sources). We then propose different measures of the citation potential for authors based on proportions of these dimensions. An empirical application, on a set of 120 randomly selected highly productive authors from the CSIC Research Centre (Spain) in four subject areas, shows that the ratio between the production and impact dimensions is a normalized measure of the citation potential at the level of individual authors. Moreover, this ratio reduces the between-group variance in relation to the within-group variance in a higher proportion than the rest of the indicators analysed. Furthermore, it is consistent with the type of journal impact indicator used. A possible application of this result is in selection and promotion processes.
Highlights:
1. We provide different characterizations of the research area at the author level based on three dimensions: production (journal papers), impact (journal citations), and reference (bibliographical sources).
2. We propose measures of the citation potential for authors, based on proportions between dimensions.
3. We compare the dimensions and proportions in a set of 120 randomly selected highly productive authors from the CSIC Research Centre (Spain) in four subject areas.
4. The ratio between the production and impact dimensions reduces the between-group variance in relation to the within-group variance in a higher proportion than the rest of the measures analysed. Furthermore, it is consistent with the type of journal impact indicator used.
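A small sketch of the evaluation criterion used above: an indicator normalizes well across disciplines when it shrinks between-group variance relative to within-group variance. The grouping and data here are illustrative; the abstract's specific ratio indicator is not reproduced.

    def between_within_ratio(groups):
        """Ratio of between-group to within-group variance for an indicator
        computed over several subject areas. Lower values mean the indicator
        is better normalized across disciplines.

        groups: dict mapping subject area -> list of indicator values
        """
        all_values = [v for vs in groups.values() for v in vs]
        grand_mean = sum(all_values) / len(all_values)

        # Between-group: variance of group means around the grand mean,
        # weighted by group size.
        between = sum(
            len(vs) * (sum(vs) / len(vs) - grand_mean) ** 2
            for vs in groups.values()
        ) / len(all_values)

        # Within-group: average squared deviation from each group's own mean.
        within = sum(
            (v - sum(vs) / len(vs)) ** 2 for vs in groups.values() for v in vs
        ) / len(all_values)

        return between / within

    if __name__ == "__main__":
        raw = {"physics": [30, 40, 50], "history": [2, 3, 4]}
        print(between_within_ratio(raw))  # large: raw citations are field-biased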
"Rons, N. and Amez, L. (2009). Impact vitality: an indicator based on citing publications in search of excellent scientists. Research Evaluation, 18(3), 233-241. PDF/DOI: http://arxiv.org/abs/1307.7035, or http://dx.doi.org/10.3152/095820209X470563 This paper contributes to the quest for an operational definition of 'research excellence' and proposes a translation of the excellence concept into a bibliometric indicator. Starting from a textual analysis of funding program calls aimed at individual researchers and from the challenges for an indicator at this level in particular, a new type of indicator is proposed. The Impact Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a researcher's publication output, based on the change in volume over time of the citing publications. The introduced metric is shown to posses attractive operational characteristics and meets a number of criteria which are desirable when comparing individual researchers. The validity of one of the possible indicator variants is tested using a small dataset of applicants for a senior full time Research Fellowship. Options for further research involve testing various indicator variants on larger samples linked to different kinds of evaluations."
Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c_i(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c_i(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different c_i(r) profiles, our results demonstrate the utility of the β_i scaling parameter in conjunction with h_i for quantifying individual publication impact. We show that the total number of citations C_i tallied from a scientist's N_i papers scales as C_i ~ h_i^(1+β_i). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
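A rank-citation profile is simply the author's citation counts sorted in decreasing order, and h_i is the rank at which the profile crosses the diagonal. A minimal sketch of both, using toy data (the fitting of a common distribution function and the β_i exponent are beyond this illustration):

    def rank_citation_profile(citations):
        """c_i(r): the author's citation counts sorted in decreasing order,
        indexed by paper rank r = 1, 2, ..., N_i."""
        return sorted(citations, reverse=True)

    def h_from_profile(profile):
        """h_i is the largest rank r at which the profile still satisfies
        c_i(r) >= r, i.e. where c_i(r) crosses the diagonal."""
        return sum(1 for r, c in enumerate(profile, start=1) if c >= r)

    if __name__ == "__main__":
        # Two toy scientists with the same h-index but different tails,
        # illustrating why h_i alone cannot distinguish their profiles.
        a = rank_citation_profile([90, 40, 10, 3, 2, 1])
        b = rank_citation_profile([4, 4, 3, 3, 2, 1])
        print(h_from_profile(a), h_from_profile(b))  # 3 3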
PLOS ONE, 2021
The pursuit of simple yet fair, unbiased, and objective measures of researcher performance has occupied bibliometricians and the research community as a whole for decades. However, despite the diversity of available metrics, most are either complex to calculate or not readily applied in the most common assessment exercises (e.g., grant assessment, job applications). The ubiquity of metrics like the h-index (h papers with at least h citations) and its time-corrected variant, the m-quotient (h-index ÷ number of years publishing), therefore reflects ease of use rather than their capacity to differentiate researchers fairly among disciplines, career stages, or gender. We address this problem here by defining an easily calculated index based on publicly available citation data (Google Scholar) that corrects for most biases and allows assessors to compare researchers at any stage of their career and from any discipline on the same scale. Our ε′-index violates fewer statistical assumptions…
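The two metrics named in parentheses above are simple enough to state directly. This sketch uses the standard definitions; the inclusive year count (+1) is my choice, and the truncated ε′-index itself is not reproduced here.

    def h_index(citations):
        """h papers with at least h citations."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    def m_quotient(citations, first_pub_year, current_year):
        """Time-corrected h-index: h divided by the number of years
        publishing (counted inclusively here, an assumption)."""
        years_publishing = current_year - first_pub_year + 1
        return h_index(citations) / years_publishing

    if __name__ == "__main__":
        print(m_quotient([25, 8, 5, 3, 3, 1, 0], 2015, 2021))  # 3/7 ≈ 0.43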