Module 6 Research Metrics
Content
6B.0 Objectives
6B.1 Introduction
6B.2 Impact Factors of journal as per Journal Citation Report, SNIP, SJR, IPP, Cite Score
6B.3 Metrics: h-index, g-index, i10-index, Altmetrics
6B.4 Summary
6B.5 Questions/ Self Assessment Questions
6B.6 References/ Bibliography/ Select Reading
6B.0 Objectives
6B.1 Introduction
Research metrics are quantitative tools used to help assess the quality and impact of research outputs.
Metrics are available for use at the journal, article, and even researcher level. However, any one metric
only tells a part of the story and each metric also has its limitations. Therefore, a single metric should
never be considered in isolation. https://editorresources.taylorandfrancis.com/understanding-research-metrics/
For a long time, the only tool for assessing journal performance was the Impact Factor – more on that in
a moment. Now there are a range of different research metrics available, from the Impact Factor to
altmetrics, h-index, and more.
But what do they all mean? How is each metric calculated? Which research metrics are the most relevant
to your journal? And how can you use these tools to monitor your journal’s performance?
Keep reading for a more in-depth look at the range of different metrics available.
In March 2021 Taylor & Francis signed the San Francisco Declaration on Research Assessment (DORA),
which aims to improve the ways in which researchers and the outputs of scholarly research are evaluated.
Researchers should be assessed on the quality and broad impact of their work. While research metrics can
help support this process, they should not be used as a quick substitute for proper review. The quality of
an individual research article should always be assessed on its own merits rather than on the metrics of
the journal in which it was published. https://newsroom.taylorandfrancisgroup.com/taylor-francis-signs-up-to-principles-outlined-in-dora-supporting-balanced-and-fair-research-assessment/
It is advisable that authors always quote at least two different metrics, to give researchers a richer
view of journal performance. They should also accompany this quantitative data with qualitative
information that helps researchers assess the suitability of the journal for their research, such as its
aims & scope.
The publisher's researcher guide to understanding journal metrics explains in more detail how authors can use
metrics as part of the process of choosing a journal:
https://authorservices.taylorandfrancis.com/publishing-your-research/choosing-a-journal/journal-metrics/
How to use metrics to monitor a particular journal (example of Taylor & Francis journals)
Metrics can help you to assess your journal’s standing in the community, raise its profile, and support
growth in high-quality submissions. But only if you know how to interpret and apply them.
Most journals on Taylor & Francis Online display a range of metrics, to help give a rounded view of
a journal’s performance, reach, and impact. These metrics include usage, citation metrics, speed
(review and production turnaround times), and acceptance rate.
The guide to Taylor & Francis Online journal metrics explains how they're calculated and the advice given to
researchers about their use.
Target audience
For journals with a practitioner focus, academic citations may be less valuable than mentions in policy
documents (as reported by Altmetric). If a journal is for a purely academic audience, traditional citation
metrics like the Impact Factor are more relevant. If a journal has a regional focus, then geographical usage
might be important.
Achieving targets
If the objective is to publish more high-quality, high-impact authors, consider analyzing the h-indices
of authors in recent volumes to assess progress toward this goal. If the aim is to raise a journal's profile
within the wider community, it makes sense to consider altmetrics in the analysis. Perhaps the goal is to
generate more citations from high-profile journals within your field; in that case, looking at the Eigenfactor
rather than the Impact Factor would be helpful.
6B.2 Impact Factors of journal as per Journal Citation Report, SNIP, SJR, IPP,
Cite Score
It’s easy to damage the overall picture of your research metrics by focusing too much on one specific
metric. For example, if you wanted to boost your Impact Factor (IF) by publishing more highly-cited
articles, you might be disregarding low-cited articles used extensively by your readers. Therefore, if you
chose to publish only highly-cited content for a higher Impact Factor, you could lose the value of your
journal for a particular segment of your readership. Generally, the content most used by practitioners,
educators, or students (who don’t traditionally publish) is not going to improve your Impact Factor, but
will probably add value in other ways to your community.
Fundamentally, it’s important to consider a range of research metrics when monitoring your journal’s
performance. It can be tempting to concentrate on one metric, like the Impact Factor, but citations are not
the be-all and end-all. Think about each research metric as a single tile in a mosaic: you need to piece
them all together to see the bigger picture of journal performance.
So that the Impact Factor doesn’t penalize journals that publish rarely-cited content like book reviews,
editorials, or news items, these content types are not counted in the denominator of the calculation (the
total number of publications within the two-year period). However, citations to this kind of content are still
counted.
This creates two main problems. Firstly, the classification of content is not objective, so content such as
extended abstracts or author commentaries falls into an unpredictable gray area. Secondly, if such articles
are cited, they increase the Impact Factor without any offset in the denominator of the equation.
Research metrics
Research metrics are sometimes controversial, especially when in popular usage they become proxies for
multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis
based on its underlying data source, method of calculation, or context of use.
For this reason, Elsevier promotes the responsible use of research metrics encapsulated in two “golden
rules”. Those are: always use both qualitative and quantitative input for decisions (i.e. expert
opinion alongside metrics), and always use more than one research metric as the quantitative input. This
second rule acknowledges that performance cannot be expressed by any single metric, as well as the fact
that all metrics have specific strengths and weaknesses. Therefore, using multiple complementary metrics
can help to provide a more complete picture and reflect different aspects of research productivity and
impact in the final assessment.
This page presents some of the most popular citation-based metrics employed at the journal level. Where
available, they are featured in the "Journal Insights" section on Elsevier journal homepages, which links
through to an even richer set of indicators on the Journal Insights homepage, for example:
https://www.journals.elsevier.com/global-environmental-change
Journal Citation Reports
Journal Citation Reports provides ranking for journals in the areas of science, technology, and
social sciences. For every journal covered, the following information is collected or calculated:
Citation and article counts, Impact factor, Immediacy index, Cited half-life, citing half-life, Source
data listing, Citing journal listing, Cited journal listing, Subject categories, Publisher information.
You can enter a journal title in the Search box under "Go to Journal Profile". Because impact
factors mean little on their own, it's best to view the journal you are interested in in comparison
with the other journals in the same category. To determine the impact factor for a particular journal,
select a JCR edition (Science and/or Social Science), year, and categories, found on the left of
the screen. Click Submit. Scroll the list to find the journal you are interested in. The list can be
re-sorted by journal title, cites, Impact Factor, and Eigenfactor.
Indexing Parameters
To understand a journal's standing and quality, certain parameters are used; these are called indexing
parameters. Bibliometrics and scientometrics are used for the measurement of all aspects related to the
publication and reading of books and documents (IJARIIE, Vol. 5, Issue 5, 2019, ISSN(O) 2395-4396,
www.ijariie.com).
Impact Factor is a measure of the number of citations to a journal's articles in a year. It stands as a
value for measuring the quality of an article and how much it has contributed to the academic community.
It is also one of the strongest aspects determining the rank of a journal, and it can be calculated only
after two years of the journal's establishment.
The Impact Factor is calculated over either a two-year or a five-year window. The higher the Impact Factor
a journal has, the higher its quality is taken to be, which makes it useful as an objective measure of
quality. Impact Factors reflect the changing status of a journal in a particular discipline, as they are
recalculated and updated each year. The calculation takes into account two major figures: the number of
articles a journal has published and the number of citations each of those articles has received. So even
a journal which has published very few articles might have very high citations per article and hence a
high Impact Factor.
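To make the arithmetic concrete, here is a minimal sketch in Python of the two-year calculation described above; all counts are hypothetical.

# Illustrative two-year Impact Factor calculation (hypothetical numbers).
# IF(2023) = citations received in 2023 to items published in 2021-2022,
#            divided by the number of citable items published in 2021-2022.

citations_2023_to_2021_22 = 480   # hypothetical citation count
citable_items_2021_22 = 160      # hypothetical count of articles and reviews

impact_factor_2023 = citations_2023_to_2021_22 / citable_items_2021_22
print(f"2023 Impact Factor: {impact_factor_2023:.1f}")  # prints 3.0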
Review Methods
Deborah, in her article, discusses the efficiency of the journal review process. It mostly depends on the
time taken for the journal to review and publish an article from the date the article is received. Peer
review is very critical when it comes to publishing. There are different review methods that each journal
follows to review the articles sent to it, usually to prevent bias and plagiarism.
The most commonly followed peer-review process is double-blind; journals also follow single-blind,
triple-blind, and open review processes. [8]
Single-Blind Review is the method where the reviewer's identity remains anonymous to the author,
but the author's identity is disclosed to the reviewer. Every reviewing process has its own merits and
demerits. Since the reviewer's identity is not disclosed, the author will not be influenced by the
reviewer's reputation or critical attitude. However, because the author's identity is revealed to the
reviewer, the review may be subject to bias, or the reviewer might opt to be overly critical of the
article since his/her own identity is not revealed.
In a Double-Blind Review process, the identity of the author and the reviewer is kept from each other.
This is done with a primary focus of encouraging unbiased review process. [4] The author is supposed to
submit a manuscript that doesn’t reveal his/her identity in any way. If the author has a concern against a
particular person reviewing their work, then they can let the editorial board know about it through their
conflict of interest. Most of the journals follow the Double-Blind process to have a healthy peer-review
environment and provide the space required for knowledge dissemination.
In the Triple-Blind Review process, the author, the reviewers, and the editors are all anonymous to one
another. Everybody's identity is concealed from the others to encourage an unbiased peer-review process.
This type of review is rarely used, because concealing the identities of the author, reviewers, and
editors involves very complex logistics. [4]
The last one is the Open Review process, where the author and the reviewer know each other. This
review process is adopted by very few journals and is considered a very open process. Here the
reviewer and the author may even discuss the manuscript and work together in the comments. The
names of the reviewers are also published along with the authors' in the article or the journal.
(IJARIIE, Vol. 5, Issue 5, 2019, ISSN(O) 2395-4396, www.ijariie.com, p. 320)
Scholarly Publishing Resources for Faculty: Scopus Metrics (CiteScore, SNIP & SJR, h-index)
https://liu.cwp.libguides.com/c.php?g=45770&p=4417804
The Impact Factor only considers the number of citations, not their nature or quality.
An article may be highly cited for many reasons, both positive and negative. A high Impact Factor only
shows that the research in a given journal is being cited. It doesn’t indicate the context or the quality of
the publication citing the research.
You can’t compare Impact Factors like-for-like across different subject areas.
Different subject areas have different citation patterns, which is reflected in their Impact Factors. Research in
subject areas with typically higher Impact Factors (cell biology or general medicine, for example) is not
better or worse than research in subject areas with typically lower Impact Factors (such as mathematics or
history).
The difference in Impact Factor is simply a reflection of differing citation patterns, database coverage,
and dominance of journals between the disciplines. Some subjects generally have longer reference lists
and publish more articles, so there’s a larger pool of citations.
Impact Factors can show significant variation year-on-year, especially in smaller journals.
Because Impact Factors are average values, they vary year-on-year due to random fluctuations. This
change is related to the journal size (the number of articles published per year): the smaller the journal,
the larger the expected fluctuation.
Eigenfactor
In 2007, the Web of Science JCR grew to include Eigenfactors and Article Influence Scores. Unlike the
Impact Factor, these metrics don’t follow a simple calculation. Instead, they borrow their methodology
from network theory.
What is an Eigenfactor?
The Eigenfactor measures the influence of a journal based on whether it’s cited within other reputable
journals over five years. A citation from a highly-cited journal is worth more than from a journal with few
citations.
To adjust for subject areas, the citations are also weighted by the length of the reference list that they’re
from. The Eigenfactor is calculated using an algorithm to rank the influence of journals according to the
citations they receive. A five-year window is used, and journal self-citations are not included.
This score doesn’t take journal size into account. That means larger journals tend to have larger
Eigenfactors as they receive more citations overall. Eigenfactors also tend to be very small numbers as
scores are scaled so that the sum of all journal Eigenfactors in the JCR adds up to 100.
Very roughly, the Eigenfactor calculation is based on the number of times articles from the journal
published in the past five years have been cited in the JCR year, but it also considers which journals have
contributed these citations so that highly cited journals will influence the network more than lesser cited
journals.
For example, all else being equal, journal A, which publishes 1,000 articles annually, would have twice
the Eigenfactor of journal B, which puts out 500 articles annually, if each article is cited the same
number of times.
Eigenfactor is meant to measure the importance of a journal throughout the scientific community and
rewards large journals that publish a variety of topics. It’s no surprise that the journal Nature, a large
journal which publishes on pretty much everything in science, has the highest eigenfactor. But this is true
only because its contents are considered valuable and are much read and cited.
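The exact JCR algorithm is more involved, but the underlying network-theory idea, that a citation from an influential journal counts for more, can be sketched with a simple power iteration. Everything below (the journal names and the citation matrix) is invented for illustration; this is not the official Eigenfactor calculation.

import numpy as np

# Toy Eigenfactor-style ranking (not the official JCR algorithm).
# C[i][j] = citations from journal j to journal i; self-citations excluded.
journals = ["A", "B", "C"]
C = np.array([[0, 30, 10],
              [20, 0, 40],
              [5, 15, 0]], dtype=float)

# Normalize each column so every citing journal distributes one unit of
# influence across its reference list (long lists dilute each citation).
M = C / C.sum(axis=0)

# Power iteration: a journal's influence is the influence of the journals
# citing it, weighted by how those journals spread their citations.
v = np.ones(len(journals)) / len(journals)
for _ in range(100):
    v = M @ v

for name, score in zip(journals, v):
    print(f"Journal {name}: {score:.3f}")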
CiteScore
What is CiteScore?
CiteScore is the ratio of citations to research published. It’s currently available for journals and book series
which are indexed in Scopus.
The CiteScore calculation only considers content that is typically peer reviewed, such as articles, reviews,
conference papers, book chapters, and data papers.
CiteScore metrics
CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading abstract and
citation database of peer-reviewed literature.
The CiteScore calculation is based on the number of citations received in a four-year window by documents
(articles, reviews, conference papers, book chapters, and data papers) published by a journal in that
window, divided by the number of the same document types indexed in Scopus and published in those same
four years. For more details, see this FAQ:
https://service.elsevier.com/app/answers/detail/a_id/14880/supporthub/scopus
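A minimal sketch of that ratio in Python, with hypothetical counts:

# Illustrative CiteScore calculation (hypothetical numbers).
# CiteScore 2023 = citations received in 2020-2023 by documents published
#                  in 2020-2023, divided by documents published in 2020-2023.

citations_2020_2023 = 1200   # hypothetical citations to the 4-year window
documents_2020_2023 = 300    # hypothetical count of indexed documents

cite_score = citations_2020_2023 / documents_2020_2023
print(f"CiteScore 2023: {cite_score:.1f}")  # prints 4.0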
CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in
May the following year, permitting a real-time view on how the metric builds as citations accrue. Once
fixed, the other CiteScore metrics are also computed and contextualise this score with rankings and other
indicators to allow comparison.
The scores and underlying data for nearly 26,000 active journals, book series and conference proceedings
are freely available at www.scopus.com/sources or via a widget (available on each source page on
Scopus.com) or the Scopus API.
The SNIP indicator was later revised, leading to some changes in the way it is calculated. These changes
are explained in another paper (an open access preprint is available).
Indicators
CWTS Journal Indicators currently provides four indicators:
1. P. The number of publications of a source in the past three years.
2. IPP. Impact Per Publication, calculated as the number of citations given in the present year
to publications in the past three years divided by the total number of publications in the past three
years. IPP is fairly similar to the well-known journal impact factor. Like the journal impact factor,
IPP does not correct for differences in citation practices between scientific fields. IPP was
previously known as RIP (Raw Impact per Publication).
3. SNIP. Source Normalized Impact per Publication, calculated as the number of citations
given in the present year to publications in the past three years divided by the total number of
publications in the past three years. The difference with IPP is that in the case of SNIP citations
are normalized in order to correct for differences in citation practices between scientific fields.
Essentially, the longer the reference list of a citing publication, the lower the value of a citation
originating from that publication. A detailed explanation is offered in our scientific paper.
4. % self cit. The percentage of self citations of a source, calculated as the percentage of all citations
given in the present year to publications in the past three years that originate from the source itself.
In the calculation of the above indicators, only publications that are classified as article, conference paper,
or review in Scopus are considered. Publications of other document types are ignored. Citations
originating from such publications are ignored as well. Furthermore, citations are not counted if they
originate from special types of sources (referred to as non-citing sources), in particular trade journals and
sources with very few references to other sources (which includes many sources in the arts and
humanities).
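As a worked illustration of the IPP definition above (all counts hypothetical):

# Illustrative IPP (Impact Per Publication) calculation, following the
# CWTS definition above; all counts are hypothetical.

def ipp(citations_this_year: int, publications_last_3_years: int) -> float:
    """Citations given in the present year to publications of the past
    three years, divided by the number of those publications."""
    return citations_this_year / publications_last_3_years

# A journal with 210 publications in 2021-2023 that received 340 citations
# to those publications during 2024:
print(f"IPP: {ipp(340, 210):.2f}")  # prints 1.62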
Stability intervals
IPP and SNIP are provided with stability intervals. A stability interval reflects the stability or reliability
of an indicator. The wider the stability interval of an indicator, the less reliable the indicator. If for a
particular source IPP and SNIP have a wide stability interval, the indicators have a low reliability for this
source. This for instance means that the indicators are likely to fluctuate quite significantly over time.
CWTS Journal Indicators employs 95% stability intervals constructed using a statistical technique known
as bootstrapping.
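The construction of such an interval can be sketched roughly as follows; the per-paper citation counts are invented, and CWTS's actual bootstrapping procedure differs in detail.

import random
import statistics

# Hypothetical citation counts for one journal's papers.
citations_per_paper = [0, 1, 1, 2, 3, 3, 4, 5, 8, 25]

def resampled_mean(data):
    # Resample papers with replacement and recompute the indicator.
    return statistics.mean(random.choices(data, k=len(data)))

resamples = sorted(resampled_mean(citations_per_paper) for _ in range(10_000))

# 95% stability interval: the 2.5th and 97.5th percentiles of the resamples.
lower = resamples[int(0.025 * len(resamples))]
upper = resamples[int(0.975 * len(resamples))]
print(f"Indicator: {statistics.mean(citations_per_paper):.2f}, "
      f"95% stability interval: [{lower:.2f}, {upper:.2f}]")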
The SNIP and SJR calculations are as follows:
SNIP -- Source Normalized Impact per Paper (SNIP) normalizes its sources to allow for cross-disciplinary
comparison. In practice, this means that a citation from a publication with a long reference list has a
lower value.
SNIP only considers citations to specific content types (articles, reviews, and conference papers), and
does not count citations from publications that Scopus classifies as “non-citing sources”. These include
trade journals, and many Arts & Humanities titles.
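The core idea of the normalization can be sketched as follows; the reference-list lengths and the field-average figure are hypothetical, and the real SNIP calculation is considerably more involved.

# Sketch of SNIP's core idea: a citation is worth less when it comes from
# a publication with a long reference list. All numbers are hypothetical.

citing_reference_list_lengths = [12, 45, 8, 120, 30]  # one entry per citation
field_average_list_length = 30                        # hypothetical field norm

# Weight each citation by the field-average list length divided by the
# citing paper's own list length.
weighted = sum(field_average_list_length / length
               for length in citing_reference_list_lengths)

print(f"Raw citations: {len(citing_reference_list_lengths)}, "
      f"normalized value: {weighted:.2f}")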
SJR - Scimago Journal Rank
The SJR aims to capture the effect of subject field, quality, and reputation of a journal on citations. It
calculates the prestige of a journal by considering the value of the sources that cite it, rather than
counting all citations equally.
Each citation received by a journal is assigned a weight based on the SJR of the citing journal. So, a
citation from a journal with a high SJR value is worth more than a citation from a journal with a low
SJR value.
Usage
How is it calculated? As an example, the figure shown on Taylor & Francis Online is the total number
of times articles in the journal were viewed by users in the previous calendar year, rounded to the nearest
thousand. This includes all of the different formats available on Taylor & Francis Online, including
HTML, PDF, and EPUB. Usage data for each journal is updated annually in February.
There are other online platforms which provide journal access, including aggregator services such as
JSTOR and EBSCO. Of course, some readers still prefer print over online, so it’s important you consider
these sources when building a broader picture of usage. The limitations of this metric, which can guide
researchers, are set out at https://authorservices.taylorandfrancis.com/publishing-your-research/choosing-a-journal/journal-metrics/#usage
Speed metrics
The following speed metrics, which are available for many journals on Taylor & Francis Online, indicate
how long different stages of the publishing process might take. The speed metrics published on Taylor &
Francis Online are for the previous full calendar year and are updated in February.
All of these metrics have limitations, which authors should consider when using them to choose a journal.
These limitations are set out in the researcher guide to understanding journal metrics:
https://authorservices.taylorandfrancis.com/publishing-your-research/choosing-a-journal/journal-metrics/
How is it calculated? Speed from submission to first decision is the median number of days from submission
to decision for all peer-reviewed articles which received a first decision in the previous calendar year.
How is it calculated? Speed from acceptance to online publication is, on Taylor & Francis Online, the
median number of days from acceptance to online publication of the Version of Record, for articles
published in the previous calendar year.
Acceptance rate
A journal’s acceptance rate is an indication of the number of submissions it receives for every article that’s
eventually published.
How is it calculated? This figure represents the articles accepted by the journal for publication in the
previous calendar year as a percentage of all papers receiving a final decision in that calendar year. It
includes all article types submitted to the journal, including those that are rejected without being peer
reviewed (desk rejects).
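A worked illustration of this calculation (hypothetical numbers):

# Illustrative acceptance-rate calculation (hypothetical numbers).
accepted_2023 = 120          # articles accepted in the calendar year
final_decisions_2023 = 800   # all papers receiving a final decision,
                             # including desk rejects

acceptance_rate = accepted_2023 / final_decisions_2023 * 100
print(f"Acceptance rate: {acceptance_rate:.0f}%")  # prints 15%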
The acceptance rates published on Taylor & Francis Online are for the previous full calendar year and are
updated in February. Publishers have set out the limitations of this metric in their guides for researchers.
Sources of Research Metrics https://guides.library.mun.ca/researchvisibility/metrics
Specialized Tools
SciVal - SciVal is Elsevier's research benchmarking software, which is based on Scopus data.
Other Sources
Dimensions (free version) - another citation index, which includes research metrics such as citation
counts.
Publish or Perish - a free downloadable software program that retrieves and analyzes academic
citations from sources such as Google Scholar or Microsoft Academic.
Research metrics are dependent on database coverage, so metrics should be drawn from the same
source whenever possible and the data source should be named when metrics are cited.
The h-index encompasses in a single number both the quantity of a researcher's work and its impact in a
particular field of study. This makes it easy for researchers to grasp both aspects at once, so it is not a
single-aspect criterion, and it covers different dimensions such as total citations, the number of highly
cited papers, etc. It also allows one to gauge academic output and often becomes an influencing factor in
terms of honors and awards.
The h-index cannot be used to compare researchers from two different disciplines; it might work well for
comparing two researchers of the same discipline, but there are inter-disciplinary differences. It is also
dependent on each researcher's career duration, because the longer the duration, the higher the number of
citations. Papers with high citation numbers are very important in calculating the h-index value, but there
is a disadvantage: once papers are counted among the top h papers, the citations they receive after that
are not taken into account. There is also a high risk of researchers indulging in extensive self-citation
to increase their h-index. Sometimes it becomes the only determinant of a researcher's value, and other
aspects are side-lined.
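The h-index itself is straightforward to compute once a researcher's per-paper citation counts are in hand; a minimal sketch, with hypothetical counts:

def h_index(citations: list[int]) -> int:
    """h is the largest number such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with 7 papers and these citation counts:
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # prints 4 (four papers with >= 4 citations)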
CiteScore is a journal metric introduced by Elsevier. Its major aim is to measure the impact of a
journal or article in a particular field of study. It draws on an extensive range of peer-reviewed
literature from across 50,000 journals. The calculation is very simple and easy to do, and it is very
clear and accessible. Access to CiteScore metrics is free of cost; users are not charged for access. It is
updated every month, unlike other metrics that are updated annually or once every 3 or 5 years; the
CiteScore Tracker displays the CiteScore of journals every month.
CiteScore takes everything into account: articles, conference papers, letters, and editorials. This might
cause a little dilution in quality. It also seems to favour journals published by the same publishing
house, and it doesn't include journals that are not indexed in Scopus. As a result, journals that carry
document types like reviews and editorials may get a lower CiteScore than those that don't include them.
The m-index, though similar to the h-index, takes into consideration the length of the author's academic
career.
g-index is a variant of the h-index that also takes into account the growth in citations of the most-cited
papers. To calculate the g-index, rank the papers by citations received; g is the highest rank such that
the top g papers together have at least g² citations.
i10-index refers to the number of articles that are cited at least 10 times.
h5-index refers to the h-index calculated over articles published in the last 5 complete years. For
example, if a journal's h5-index is 4, then of the articles it published in the last 5 years, 4 have been
cited at least 4 times each.
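Minimal sketches of the g-index and i10-index calculations described above, with hypothetical citation counts:

def g_index(citations: list[int]) -> int:
    """g is the highest rank such that the top g papers together
    have at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    g, running_total = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

def i10_index(citations: list[int]) -> int:
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

papers = [50, 18, 12, 6, 4, 2, 1, 0]  # hypothetical per-paper citations
print(g_index(papers))    # prints 6 (top 6 papers have 92 >= 36 citations)
print(i10_index(papers))  # prints 3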
SCImago Journal Rank (SJR) ranks journals. It is a scientific measure of scholarly journals that
takes into account both the number of citations and the quality of the journals in which the citations are
made. SJR is a numeric value that depicts the weighted citations received in a particular year by documents
published in the previous three years. The higher the value, the greater the journal's prestige. This value
can be of great use when comparing journals in the evaluation process.
Citing half-life counts all the journal's citations in a particular year and determines the median
publication date of the cited articles. An article can be cited many times, sometimes in the very year it
was published, or maybe even after decades; the half-life therefore shows how many of the articles cited
from that particular journal were published before a particular year and how many after it. These are a
few parameters a research scholar must take into account when he or she is sending an article to a journal.
A journal's prestige, the quality of the articles published, the impact factor, the authors' h-index:
everything needs to be taken into consideration.
i10-index. The i10-index is the newest in the line of journal metrics and was introduced by Google
Scholar in 2011. It is a simple and straightforward indexing measure, found by tallying a journal's total
number of published papers with at least 10 citations (Google Scholar Blog, 2011). Webology has an
i10-index score of 52 according to Google Scholar (see Table 2).
Table 2. i10-index of Webology
Database          i10-index
Google Scholar    52
Scopus            8
i20-index. The i20-index, proposed in this editorial note, is obtained by tallying a journal's total number
of published papers with at least 20 citations (see Table 3).
Table 3. i20-index of Webology
Database          i20-index
Google Scholar    24
Scopus            5
Note that the total number of citations to Webology papers on Google Scholar was 2,423. It is interesting
to note that the total number of citations received by i20 papers (i.e., 24 papers out of all published
papers) was 1,693. This means that i20 papers received 70 percent of all citations to Webology.
The i20-index helps shift editors' focus and encourages journals to accept more relevant papers that
can be used and cited by peers.
Citations in patents. The number of citations to a journal in patents indicates to what extent a journal
is technology-oriented (Noruzi & Abdekhoda, 2014).
Webology is cited 16 times by patents on Google Patents and 13 times by patents issued by the USPTO.
Number of citations to Webology in patents
Database          No. of citations
Google Patents    16
USPTO             13
Note that to retrieve the USPTO patents citing Webology, we used the search command (OREF/Webology)
in the advanced search, and to identify the number of citations on Google Patents, we conducted a keyword
search for Webology. The OREF (Other References) field on the USPTO database contains other references
cited as prior art, including journals, books, and conference proceedings.
https://www.webology.org/2016/v13n1/editorial21.pdf
What are the advantages of the Altmetric Attention Score?
Altmetric starts tracking online mentions of academic research from the moment it’s published. That
means there’s no need to wait for citations to come in to get feedback on a piece of research.
Author metrics
h-index
What is the h-index?
The h-index is an author-level research metric, first introduced by Hirsch in 2005. The h-index attempts
to measure the productivity of a researcher and the citation impact of their publications.
Although the basic calculation of the h-index is clearly defined, it can still be calculated using different
databases or time-frames, giving different results. Normally, the larger the database, the higher the h-index
calculated from it. Therefore, an h-index taken from Google Scholar will nearly always be higher than one
from Web of Science, Scopus, or PubMed. (It’s worth noting here that as Google Scholar is an uncurated
dataset, it may contain duplicate records of the same article.)
The main differences between the indicators provided by CWTS Journal Indicators, in particular the IPP
and SNIP indicators, and the journal impact factor (JIF) can be summarized as follows:
- Based on Scopus (IPP and SNIP) vs. based on Web of Science (JIF).
- Correction for field differences (SNIP) vs. no correction for field differences (IPP and JIF).
- Three years of cited publications (IPP and SNIP) vs. two years of cited publications (JIF).
- Citations from selected sources and selected document types only (IPP and SNIP) vs. citations from all sources and document types (JIF).
- Citations to selected document types only (IPP and SNIP) vs. citations to all document types (JIF).
In the interpretation of this indicator, one should keep in mind that in general larger journals can be
expected to have a higher percentage of self citations than smaller journals.
Small journals. IPP and SNIP are less reliable for small journals with only a limited number of publications
than for larger journals. For this reason, CWTS Journal Indicators by default displays statistics only for
journals with at least 50 publications. Notice that smaller journals also tend to have wider stability intervals
than larger journals.
Non-citing sources. As explained above, some journals have been classified as non-citing sources. This
applies in particular to many journals in the arts and humanities. Citations originating from these journals
are not counted in the calculation of IPP and SNIP, but these journals may have IPP and SNIP values
themselves. These IPP and SNIP values should be interpreted with extreme caution. These values for
instance do not include journal self citations, and therefore the values tend to be artificially low.
SCImago Journal Rank (SJR) (Elsevier)
“The SCImago Journal & Country Rank is a portal that includes the journals and country scientific
indicators developed from the information contained in the Scopus® database (Elsevier B.V.).” Scopus
contains more than 15,000 journals from over 4,000 international publishers as well as over 1000 open
access journals. SCImago's "evaluation of scholarly journals is to assign weights to bibliographic citations
based on the importance of the journals that issued them, so that citations issued by more important
journals will be more valuable than those issued by less important ones." (SJR indicator).
SNIP values are accompanied by stability intervals that indicate the reliability of the SNIP value of a
journal. SNIP was created by Professor Henk F. Moed at the Centre for Science and Technology Studies
(CWTS), Leiden University.
Time
Time is needed for publications to receive citations, so citation-based metrics are less useful when
applied to early career researchers.
Citations accrue at different rates in different disciplines and for different publication types.
Size
The value of some metrics tends to increase as the size of the entity being considered increases. For
example, a small research group will tend to have fewer publications and citations than a large
department; in such cases, a metric such as citations per publication may be more appropriate than a
total citation count.
Normalization https://guides.library.mun.ca/researchvisibility/metrics
Since publication and citation patterns vary across disciplines and over time, bibliometric indicators
are often normalized to enable comparisons across different fields, time periods, document types, or
other factors.
A field-normalized citation score, for example, compares the total number of citations received by a
publication (or author) to the expected number of citations of a publication (or author) in the same
field.
A value of 1.00 indicates that the publication has received an average number of citations for
publications in that field. A value >1.00 indicates that the number of citations this publication has
received is greater than the world average for that field.
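A worked illustration of this ratio (values hypothetical; in practice the expected citation count comes from a bibliometric database):

# Illustrative field-normalized citation score (hypothetical values).
# Score = actual citations / expected citations for publications of the
# same field, publication year, and document type.

actual_citations = 18
expected_citations_in_field = 12.0  # hypothetical field/year/type baseline

score = actual_citations / expected_citations_in_field
print(f"Field-normalized citation score: {score:.2f}")  # 1.50: 50% above average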
6B.4 Summary
This course material provides an overview of finding, gathering, and using metrics and other information
about your research and scholarship to tell the story of your scholarly impact.
Research in the sciences and the humanities will continue to develop in the way it is conducted and
organized, as well as in the way it is embedded in society. This, in turn, will lead to evolving views on
good research practices. From time to time, the standards for good research practices and the related duties
of care must be reviewed and the Code updated. Some areas of research practices are subject to change;
for example, the growing importance of the way data is used and managed and the developments in the
area of open science. It is to be expected that these and other advances will require additions and
adjustments to the Code in the future.
Publication is the final step in the research process, and indexing is a crucial, primary aspect of
publishing articles and research work. It helps a researcher filter out predatory journals, find a good
journal, and send the work to be validated, reviewed, and published. The better a researcher's knowledge
of this, the better his/her chance of producing quality work that will benefit his/her research,
discipline, and community.
An author's impact on their field or discipline has traditionally been measured using the number of times
their academic publications are cited by other researchers. There are numerous algorithms that account
for such things as the recency of a publication or poorly and highly cited papers. While citation metrics
may reflect the impact of research in a field, there are many potential biases in these measurements and
they should be used with care.
Journal impact measurements reflect the importance of a particular journal in a field and take into account
the number of articles published per year and the number of citations to articles published in that
journal. Like author impact measurements, journal impact measures can be only so informative, and
researchers in a discipline will have the best sense of the top journals in their field.
There are many tools available for measuring and tracking your research impact. Be aware that
comparisons across different tools are not advised, as they may use different algorithms and different
citation data to calculate impact which have been discussed.
Today, there are many new forms of scholarly publishing, networking and collaborating. Beyond the more
traditional means of scholarly communication, researchers can now reach vast and distant audiences well
beyond the borders of their research communities. In addition, there are now tools that can be used to
measure research impact in these non-traditional forms of scholarly communication, also referred to
as 'altmetrics'.
Answer Key:
1(a) Almind and Ingwerson
2(b) Total Citations Received/Total Papers
3(c) Impact factor
4(d) Highlighting issues and depiction of the status
5(b) Quoted in another paper by another author.