2014
Citation metrics are becoming pervasive in the quantitative evaluation of scholars, journals, and institutions. More than ever before, hiring, promotion, and funding decisions rely on a variety of impact metrics that cannot disentangle quality from productivity, and that are biased by factors such as discipline and academic age. Biases affecting the evaluation of single papers are compounded when citation-based metrics are aggregated across an entire publication record. It is not trivial to compare the quality of two scholars who, over their careers, have published at different rates in different disciplines in different periods of time. We propose a novel solution based on the generation of a statistical baseline tailored to the academic profile of each researcher. By decoupling productivity and impact, our method can determine whether a certain level of impact can be explained by productivity alone, or whether additional ingredients of scientific excellence are necessary. The method is flexible enough to allow for the evaluation of, and fair comparison among, arbitrary collections of papers (scholar publication records, journals, and entire institutions) and can be extended to simultaneously suppress any source of bias. We show that our method can capture the quality of the work of Nobel laureates irrespective of productivity, academic age, and discipline, even when traditional metrics indicate low impact in absolute terms. We further apply our methodology to almost a million scholars and over six thousand journals to quantify the impact required to demonstrate scientific excellence for a given level of productivity.
Journal of Informetrics, 2015
Citation metrics are becoming pervasive in the quantitative evaluation of scholars, journals, and institutions. More than ever before, hiring, promotion, and funding decisions rely on a variety of impact metrics that cannot disentangle quality from quantity of scientific output, and that are biased by factors such as discipline and academic age. Biases affecting the evaluation of single papers are compounded when citation-based metrics are aggregated across an entire publication record. It is not trivial to compare the quality of two scholars who, over their careers, have published at different rates in different disciplines in different periods of time. We propose a novel solution based on the generation of a statistical baseline tailored to the academic profile of each researcher. Our method can decouple the roles of quantity and quality of publications to explain how a certain level of impact is achieved. The method is flexible enough to allow for the evaluation of, and fair comparison among, arbitrary collections of papers (scholar publication records, journals, and entire institutions) and can be extended to simultaneously suppress any source of bias. We show that our method can capture the quality of the work of Nobel laureates irrespective of number of publications, academic age, and discipline, even when traditional metrics indicate low impact in absolute terms. We further apply our methodology to almost a million scholars and over six thousand journals to measure the impact that cannot be explained by the volume of publications alone.
Publication statistics are ubiquitous in ratings of scientific achievement, with citation counts and paper tallies factoring into an individual's consideration for postdoctoral positions, junior faculty appointments, and tenure. Citation statistics are designed to quantify individual career achievement, both at the level of a single publication and over an individual's entire career. While some academic careers are defined by a few significant papers (possibly out of many), other academic careers are defined by the cumulative contribution made by the author's publications to the body of science. Several metrics have been formulated to quantify an individual's publication career, yet none of these metrics account for collaboration group size or the time dependence of citation counts. In this paper we normalize publication metrics in order to achieve a universal framework for analyzing and comparing scientific achievement across both time and discipline. We study the publication careers of individual authors over the 50-year period 1958-2008 within six high-impact journals, from CELL to Science. Using the normalized metrics (i) "citation shares" to quantify scientific success, and (ii) "paper shares" to quantify scientific productivity, we compare the career achievement of individual authors within each journal, where each journal represents a local arena for competition. We uncover quantifiable statistical regularity in the probability density function of scientific achievement in all journals analyzed, which suggests that a fundamental driving force underlying scientific achievement is the competitive nature of scientific advancement.
PLoS ONE, 2012
Authorship and citation practices evolve with time and differ by academic discipline. As such, indicators of research productivity based on citation records are naturally subject to historical and disciplinary effects. We observe these effects on a corpus of astronomer career data constructed from a database of refereed publications. We employ a simple mechanism to measure research output using author and reference counts available in bibliographic databases to develop a citation-based indicator of research productivity. The total research impact (tori) quantifies, for an individual, the total amount of scholarly work that others have devoted to his/her work, measured in the volume of research papers. A derived measure, the research impact quotient (riq), is an age independent measure of an individual's research ability. We demonstrate that these measures are substantially less vulnerable to temporal debasement and cross-disciplinary bias than the most popular current measures. The proposed measures of research impact, tori and riq, have been implemented in the Smithsonian/NASA Astrophysics Data System.
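Read schematically, the tori measure described above can be sketched in a few lines. The sketch below is an assumption based only on the abstract's wording (each citing paper devotes a fraction of its bibliography to the cited work, and that credit is shared among the cited paper's co-authors); the exact normalization used in the paper and in the Smithsonian/NASA Astrophysics Data System may differ.

```python
def tori(cited_papers):
    """Sketch of a 'total research impact'-style tally (an assumption
    from the abstract, not the paper's exact formula).

    cited_papers: list of (n_authors, citing_ref_counts) pairs, one per
    paper by the researcher; citing_ref_counts holds the number of
    references in each paper that cites it.
    """
    total = 0.0
    for n_authors, citing_ref_counts in cited_papers:
        # Each citing paper "spends" 1/refs of its bibliography on this
        # paper; that credit is split among the cited paper's co-authors.
        total += sum(1.0 / r for r in citing_ref_counts) / n_authors
    return total
```

For example, a two-author paper cited by two papers with 10 and 20 references would accrue (1/10 + 1/20) / 2 = 0.075 units of tori under this reading.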
Scientometrics, 2014
The citation potential is a measure of the probability of being cited. It differs among fields of science, social science, and the humanities because of systematic differences in publication and citation behaviour across disciplines. In the past, the citation potential was studied at the journal level by considering the average number of references in established groups of journals (for example, the crown indicator is based on the journal subject categories in the Web of Science database). In this paper, some characterizations of the author's scientific research through three different research dimensions are proposed: production (journal papers), impact (journal citations), and reference (bibliographical sources). We then propose different measures of the citation potential for authors based on proportions of these dimensions. An empirical application, on a set of 120 randomly selected highly productive authors from the CSIC Research Centre (Spain) in four subject areas, shows that the ratio between the production and impact dimensions is a normalized measure of the citation potential at the level of individual authors. Moreover, this ratio reduces the between-group variance relative to the within-group variance in a higher proportion than the rest of the indicators analysed. Furthermore, it is consistent with the type of journal impact indicator used. A possible application of this result is in selection and promotion.
Highlights:
1. We provide different characterizations of the research area at the author level based on three dimensions: production (journal papers), impact (journal citations), and reference (bibliographical sources).
2. We propose measures of the citation potential for authors, based on proportions between dimensions.
3. We compare the dimensions and proportions in a set of 120 randomly selected highly productive authors from the CSIC Research Centre (Spain) in four subject areas.
4. The ratio between the production and impact dimensions reduces the between-group variance in relation to the within-group variance in a higher proportion than the rest of the measures analysed. Furthermore, it is consistent with the type of journal impact indicator used.
Scientific reports, 2014
The impact factor (IF) of scientific journals has acquired a major role in evaluations of the output of scholars, departments, and whole institutions. Typically, papers appearing in journals with large values of the IF receive a high weight in such evaluations. However, at the end of the day one is interested in assessing the impact of individuals, rather than papers. Here we introduce the Author Impact Factor (AIF), which is the extension of the IF to authors. The AIF of an author A in year t is the average number of citations given by papers published in year t to papers published by A in a period of Δt years before year t. Due to its intrinsically dynamic character, the AIF is capable of capturing trends and variations in the impact of a scholar's scientific output over time, unlike the h-index, which is a growing measure that takes into account the whole career path.
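The AIF definition in the abstract translates directly into code. The sketch below follows that definition; the data layout (a dict of publication years and a list of citation events) is an illustrative assumption, not the paper's implementation.

```python
def author_impact_factor(paper_years, citations, t, delta_t=5):
    """AIF of an author in year t, per the abstract's definition.

    paper_years: {paper_id: publication year} for the author's papers.
    citations:   list of (citing_year, cited_paper_id) citation events.
    """
    # Papers the author published in the Delta-t years preceding year t.
    window = {p for p, y in paper_years.items() if t - delta_t <= y < t}
    if not window:
        return 0.0
    # Citations given in year t to those papers, averaged over the window.
    cites_in_t = sum(1 for year, p in citations if year == t and p in window)
    return cites_in_t / len(window)
```

For instance, an author with two papers in the window receiving three citations in year t has an AIF of 1.5 for that year.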
PLOS ONE, 2021
The pursuit of simple, yet fair, unbiased, and objective measures of researcher performance has occupied bibliometricians and the research community as a whole for decades. However, despite the diversity of available metrics, most are either complex to calculate or not readily applied in the most common assessment exercises (e.g., grant assessment, job applications). The ubiquity of metrics like the h-index (h papers with at least h citations) and its time-corrected variant, the m-quotient (h-index ÷ number of years publishing), therefore reflects ease of use rather than the capacity to differentiate researchers fairly across disciplines, career stages, or genders. We address this problem here by defining an easily calculated index based on publicly available citation data (Google Scholar) that corrects for most biases and allows assessors to compare researchers at any stage of their career and from any discipline on the same scale. Our ε′-index violates fewer statistical assumptions...
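The two baseline metrics this abstract parenthetically defines, the h-index and the m-quotient, are simple enough to compute directly from a list of per-paper citation counts:

```python
def h_index(citation_counts):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def m_quotient(citation_counts, years_publishing):
    """m-quotient: h-index divided by the number of years since first publication."""
    return h_index(citation_counts) / years_publishing
```

A researcher with papers cited [10, 8, 5, 4, 3] times has h = 4; after 8 years of publishing, the m-quotient is 0.5.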
"Rons, N. and Amez, L. (2009). Impact vitality: an indicator based on citing publications in search of excellent scientists. Research Evaluation, 18(3), 233-241. PDF/DOI: http://arxiv.org/abs/1307.7035, or http://dx.doi.org/10.3152/095820209X470563 This paper contributes to the quest for an operational definition of 'research excellence' and proposes a translation of the excellence concept into a bibliometric indicator. Starting from a textual analysis of funding program calls aimed at individual researchers and from the challenges for an indicator at this level in particular, a new type of indicator is proposed. The Impact Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a researcher's publication output, based on the change in volume over time of the citing publications. The introduced metric is shown to posses attractive operational characteristics and meets a number of criteria which are desirable when comparing individual researchers. The validity of one of the possible indicator variants is tested using a small dataset of applicants for a senior full time Research Fellowship. Options for further research involve testing various indicator variants on larger samples linked to different kinds of evaluations."
Using bibliometric data artificially generated through a model of citation dynamics calibrated on empirical data, we compare several indicators for the scientific impact of individual researchers. The use of such a controlled setup has the advantage of avoiding the biases present in real databases, and allows us to assess which aspects of the model dynamics and which traits of individual researchers a particular indicator actually reflects. We find that the simple citation average performs well in capturing the intrinsic scientific ability of researchers, whatever the length of their career. On the other hand, when productivity complements ability in the evaluation process, the notorious h and g indices reveal their potential, yet their normalized variants do not always yield a fair comparison between researchers at different career stages. Notably, the use of logarithmic units for citation counts allows us to build simple indicators with performance equal to that of h and g. Our analysis may provide useful hints for a proper use of bibliometric indicators. Additionally, our framework can be extended by including other aspects of the scientific production process and citation dynamics, with the potential to become a standard tool for the assessment of impact metrics.
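Of the indicators compared above, the g-index and the logarithmic-unit treatment of citations can be sketched as follows. The g-index formula (the largest g such that the g most-cited papers together have at least g² citations) is standard; taking the mean of log(1 + c) as the "logarithmic units" indicator is an assumption for illustration, since the abstract does not spell out the exact variant the paper uses.

```python
import math

def g_index(citation_counts):
    """g-index: largest g such that the top g papers have >= g^2 citations in total."""
    ranked = sorted(citation_counts, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def mean_log_citations(citation_counts):
    """Average citation count in logarithmic units; log(1 + c) dampens outliers
    (an assumed variant of the paper's log-unit indicator)."""
    if not citation_counts:
        return 0.0
    return sum(math.log1p(c) for c in citation_counts) / len(citation_counts)
```

On the citation record [10, 8, 5, 4, 3], the cumulative counts 10, 18, 23, 27, 30 stay ahead of 1, 4, 9, 16, 25, so g = 5, one more than the h-index of 4 on the same record.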
Methodology
When evaluating the publication performance of a scientist, one has to consider not only the differing publication norms of different scientific fields, but also the length of the academic career of the researcher under investigation. Here, our goal was to establish a database suitable as a reference for ranking scientific performance by normalizing a researcher's output to that of researchers with the same academic career length who are active in the same scientific field. Using the complete publication and citation data of 17,072 Hungarian researchers, we established a framework enabling the quick assessment of a researcher's scientific output by comparing four parameters (h-index, yearly independent citations received, number of publications, and number of high-impact publications) to the age-matched values of all other researchers active in the same scientific discipline. The established online tool, available at www.scientometrics.org, could be an invaluable help for faster and more evidence-b...
International journal of advanced Information technology, 2013
In this paper, we introduce a new measure for evaluating and characterizing the scientific output of a researcher over time. The PACIFIFA is a measure that takes different attributes into account when evaluating the productivity, research quality, and research activity of a researcher. This measure takes into account the number of the author's publications, the sum of the citations of those publications, the H-Index of co-authors, the average impact factor of the host journals, the average impact factor of the author's research field, and the time interval between the author's first and last publications. The advantage of this measure is that it gives an overview of the author's productivity, quality of publications, publication frequency over time, and the quality of the researchers he/she worked with. In comparison to other measures, our measure showed higher accuracy and reliability.
Proc. SPIE 13212, Tenth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2024), 2024
2022
IEEE Transactions on Power Electronics
Journal of Astronist Studies, 2024
Jure Ramšak, Gašper Mithans, and Mateja Režek, eds. Christian Modernity and Marxist Secularism in East Central Europe, 2023
derstandard.at, 2023
Historiografia da independência: síntese bibliográfica comentada, 2022
De opkomst van autoritaire regimes en dictaturen, 2024
Cuaternario y Geomorfología, vol. 32 (3-4). , 2018
Publìčne upravlìnnâ ta regìonalʹnij rozvitok, 2024
Journal of Clinical Nursing, 1993
International journal of aging & human development, 2018