For nearly a decade, several national exercises have been implemented to assess Italian research performance from the viewpoint of universities and other research institutions. The penultimate one – i.e., the VQR 2004–2010, which adopted a hybrid evaluation approach based on bibliometric analysis and peer review – suffered heavy criticism at national and international level. The architecture of the subsequent exercise – i.e., the VQR 2011–2014, still in progress – is partly similar to that of the previous one, apart from a few presumed improvements. Nevertheless, this exercise is attracting heavy criticism too. This paper presents a structured discussion of the VQR 2011–2014, collecting and organizing some of the critical arguments that have emerged so far, and developing them in detail. The major vulnerabilities of the VQR 2011–2014 include: (1) the fact that evaluations cover a relatively small fraction of the scientific publications produced by the researchers involved in the evaluation, (2) the incorrect and anachronistic use of journal metrics (i.e., the ISI Impact Factor and similar indicators) to assess individual papers, and (3) conceptually misleading criteria for normalizing and aggregating the bibliometric indicators in use.
In July 2013, ANVUR published the results of the 2004–2010 Italian evaluation exercise (VQR 2004–2010, or simply VQR hereafter). The VQR Report presented aggregate results concerning the quality of the scientific publications submitted for evaluation by Italian universities and research bodies; the final objective of the Report was to rank Italian scientific institutions on the basis of the quality of their research, so as to provide the Italian Ministry of Education, University and Research (MIUR) with information to be used to assign part of the public funding. The aim of this paper is to provide a more disaggregated analysis of the evaluation outcomes, specifically looking at possible correlations between scientific quality and a number of product- and researcher-specific variables.
Scientometrics, 2016
Journal of Pharmacology and Pharmacotherapeutics, 2013
The worthiness of any scientific journal is measured by the quality of the articles published in it. The Impact Factor (IF) is one popular tool that analyses the quality of a journal in terms of the citations received by its published articles. It is usually assumed that journals with a high IF carry meaningful, prominent, and high-quality research. Since the IF does not assess a single contribution but the whole journal, the evaluation of research authors should not be influenced by the IF of the journal. The h index, g index, m quotient, and c index are some alternatives for judging the quality of an author. These address the shortcomings of the IF, viz. the number of citations received by an author, active years of publication, length of academic career, and citations received for recent articles. Quality being the most desirable aspect for evaluating an author's work over the active research phase, various indices have attempted to accommodate different possible variables. However, each index has its own merits and demerits. We review the available indices, identify their fallacies and, to correct these, propose the Original Research Performance Index (ORPI) for the evaluation of an author's original work, which can also take care of the bias arising from self-citations, gift authorship, inactive phases of research, and the length of non-productive periods in research.
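The standard definitions of the h index, g index, and m quotient mentioned above can be sketched as follows (a minimal illustration using their textbook definitions; the function names and the sample citation counts are our own, not taken from the paper):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """g-index: the largest g such that the top g papers together have >= g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

def m_quotient(h, years_active):
    """m quotient: h-index divided by the number of years since the first publication."""
    return h / years_active

# Example: five papers with 10, 8, 5, 4, and 3 citations.
papers = [10, 8, 5, 4, 3]
print(h_index(papers))        # 4 (four papers have at least 4 citations)
print(g_index(papers))        # 5 (the top 5 papers total 30 >= 25 citations)
print(m_quotient(4, 10))      # 0.4 for a 10-year career
```

Because the g-index rewards a few highly cited papers more than the h-index does, the two can diverge noticeably for the same author, which is part of why the abstract treats them as complementary rather than interchangeable.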
Scientometrics, 2016
Predicting the long-term impact of a scientific article is a challenging task, which the bibliometrician addresses by resorting to a proxy whose reliability increases with the breadth of the citation window. In national research assessment exercises using metrics, the citation window is necessarily short, but in some cases it is sufficient to advise the use of simple citation counts. For the Italian VQR 2011–2014, the choice was instead made to adopt a linear weighted combination of citations and journal-metric percentiles, with weights differentiated by discipline and year. Given the strategic importance of the exercise, whose results inform the allocation of a significant share of resources for the national academic system, we examined whether the predictive power of the proposed indicator is stronger than that of the simple citation count. The results show the opposite, for all disciplines in the sciences and for citation windows above two years.
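The indicator described in this abstract can be sketched as a convex combination of two percentile scores (the weight value below is purely illustrative; the official VQR weights vary by discipline and year and are not reproduced here):

```python
def vqr_indicator(citation_percentile, journal_percentile, w=0.5):
    """Linear weighted combination of a paper's citation percentile and the
    percentile of its journal's metric (e.g. Impact Factor), in the spirit
    of the VQR 2011-2014 indicator. The weight w is discipline- and
    year-specific in the actual exercise; 0.5 is only a placeholder."""
    return w * citation_percentile + (1.0 - w) * journal_percentile

# A well-cited paper (90th citation percentile) in a mid-ranked journal
# (60th journal percentile), with a citation-heavy weight of 0.7:
score = vqr_indicator(90.0, 60.0, w=0.7)  # 0.7*90 + 0.3*60 = 81.0
```

The paper's finding can be restated in these terms: for citation windows above two years, dropping the journal term entirely (w = 1, i.e., plain citation counts) predicts long-term impact better than any mixed weighting.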
The Journal of Contemporary Dental Practice, 2014
The quality and quantity of publications can be evaluated using a set of statistical and mathematical indices called bibliometric indicators. Two major categories of indicators are (1) quantitative indicators, which measure the research productivity of a researcher, and (2) performance indicators, which evaluate the quality of publications. Bibliometric indicators are important for both individual researchers and organizations. They are widely used to compare the performance of individual researchers, journals, and universities. Many appointments, promotions, and allocations of research funds are based on these indicators. This review article describes some of the currently used bibliometric indicators, such as the journal impact factor, the crown indicator, and the h-index and its variants. It is suggested that, when comparing the scientific impact and scientific output of researchers, due consideration should be given to the various factors affecting these indicators.
Medical Journal Armed Forces India, 2018
From time immemorial, the body of scientific knowledge has grown through incremental additions of research. Metrics-based research evaluation provides crucial information on research credibility that would be difficult to obtain through individual expertise alone. The h-index and its modifications give an approximate quantitative measure of research output. Furthermore, the g-index, e-index, ħ-index, and i10-index address various intricacies involving authorship. Altmetrics and PlumX metrics are newer usage metrics that put additional weight on social-media impact, usage, captures, and scholarly networking. An indirect evaluation of research can also be obtained from the Impact Factor of the journal in which the research is published, albeit with certain limitations. While the scientific community is still waiting for a unique one-stop solution, based on a high-quality, robust process, for exercising judgement on research, the Leiden Manifesto, comprising ten principles for research assessment, can act as a guiding tool for the development of a comprehensive evaluation system.
The success of research projects funded by various agencies can be evaluated by studying the research publications generated by those projects, and the publications themselves can be evaluated using impact factors and citation indices. Several citation indices are commonly used to assess the value and quality of a research publication, or the research impact of an author or a journal. Research indices are calculated from either the citation counts of a scholar's publications or the number of papers the scholar has published in a given period. Many such indices have been developed, including the h-index, i10-index, g-index, h(2)-index, hg-index, q2-index, AR-index, m-quotient, m-index, w-index, hw-index, e-index, a-index, r-index, and j-index. Of these citation-based indices, the h-index, g-index, and i10-index are commonly used in some citation databases. Researchers have also studied the problems and limitations associated with these indices. In this paper, we discuss the most popular research indices presently in use – the h-index, g-index, and i10-index – along with their advantages, benefits, constraints, and disadvantages. Most research indices are calculated from the number of citations a paper receives. The major limitation of this model is that citations usually increase with time: even after a researcher dies, the citations, and hence the indices, continue to grow. It is also argued that, for various reasons, a publication may attract no citations for some years, while some papers begin to attract citations only after ten to twenty years. A better method of identifying the contribution to research is to calculate an annual research index for an author, based on the publications of each year.
Accordingly, based on the annual research index of an author, his or her average research contribution over five years, ten years, twenty years, or any desired period can be determined. Here, we suggest some new research indices for calculating the research productivity of individuals as well as of teams within an organization. The paper also presents some of our newly proposed indices, including the ARP-index (Annual Research Publication index), RC-index (Research Continuation index), RE-index (Research Expansion index), Project Productivity index, and Cost index, together with the method of calculating them and their advantages and limitations.