Module 6 Research Metrics


E-text

Research and Publication Ethics (RPE)


Module-6B: Research Metrics

Content
6B.0 Objectives
6B.1 Introduction
6B.2 Impact Factors of journals as per Journal Citation Reports, SNIP, SJR, IPP, CiteScore
6B.3 Metrics: h-index, g-index, i10-index, Altmetrics
6B.4 Summary
6B.5 Questions/ Self Assessment Questions
6B.6 References/ Bibliography/ Select Reading

6B.0 Objectives

After going through this unit, learners will be able:


 To understand a journal's standing and quality and its implications
 To find different indexing parameters.
 To know about the Impact Factors of journals as per Journal Citation Reports.
 To know about the significance of SNIP, SJR, IPP, and CiteScore in regard to research metrics.
 To know different research metrics -- h-index, g-index, i10-index, Altmetrics -- and their implications in
research.

6B.1 Introduction

Research metrics are quantitative tools used to help assess the quality and impact of research outputs.
Metrics are available for use at the journal, article, and even researcher level. However, any one metric
only tells a part of the story and each metric also has its limitations. Therefore, a single metric should
never be considered in isolation. https://editorresources.taylorandfrancis.com/understanding-research-metrics/

For a long time, the only tool for assessing journal performance was the Impact Factor – more on that in
a moment. Now there are a range of different research metrics available, from the Impact Factor to
altmetrics, h-index, and more.
But what do they all mean? How is each metric calculated? Which research metrics are the most relevant
to your journal? And how can you use these tools to monitor your journal’s performance?
Keep reading for a more in-depth look at the range of different metrics available.

In March 2021 Taylor & Francis signed the San Francisco Declaration on Research Assessment (DORA),
which aims to improve the ways in which researchers and the outputs of scholarly research are evaluated.

Researchers should be assessed on the quality and broad impact of their work. While research metrics can
help support this process, they should not be used as a quick substitute for proper review. The quality of
an individual research article should always be assessed on its own merits rather than on the metrics of
the journal in which it was published. https://newsroom.taylorandfrancisgroup.com/taylor-francis-signs-up-to-principles-outlined-in-dora-supporting-balanced-and-fair-research-assessment/

Using metrics to promote a journal


Journal metrics can be a useful tool for researchers when they're choosing where to submit their research.
One may therefore be asked by prospective authors about the journal's metrics. One might also want to
highlight certain metrics when talking about the journal, to illustrate its reach or impact.

It is advisable to always quote at least two different metrics, to give researchers a richer view of journal
performance. This quantitative data should also be accompanied by qualitative information that will help
researchers assess the suitability of the journal for their research, such as its aims & scope.

The publisher's researcher guide to understanding journal metrics explains in more detail how authors can
use metrics as part of the process of choosing a journal.
https://authorservices.taylorandfrancis.com/publishing-your-research/choosing-a-journal/journal-metrics/

How to use metrics to monitor a particular journal (example: Taylor & Francis journals)
 Metrics can help you to assess your journal’s standing in the community, raise its profile, and support
growth in high-quality submissions. But only if you know how to interpret and apply them.
 Most journals on Taylor & Francis Online display a range of metrics, to help give a rounded view of
a journal’s performance, reach, and impact. These metrics include usage, citation metrics, speed
(review and production turnaround times), and acceptance rate.
 See the guide to Taylor & Francis Online journal metrics for how they're calculated and the advice
given to researchers about their use.

How to identify the right metrics for the journal


To monitor a journal's performance, we first need to identify which research metrics are the most
appropriate. To do this, we need to think about the journal and its objectives.

It may help to structure this thinking around some key questions:

Target audience
For journals with a practitioner focus, academic citations may be less valuable than mentions in policy
documents (as reported by Altmetric). If the journal is for a purely academic audience, traditional citation
metrics like the Impact Factor are more relevant. If the journal has a regional focus, then geographical
usage might be important.

Achieving targets
If the objective is to publish more high-quality, high-impact authors, consider analyzing the h-indices of
authors in recent volumes to assess progress towards this. If the aim is to raise the journal's profile within
the wider community, it makes sense to include altmetrics in the analysis. Perhaps the goal is to generate
more citations from high-profile journals within your field -- then looking at the Eigenfactor rather than
the Impact Factor would be helpful.

Subject area


The relevancy of different research metrics varies hugely between disciplines. Is the two-year Impact Factor
appropriate, or would the 5-year Impact Factor be more representative of citation patterns in your field?
Which metrics are your competitors using? It might be more useful to think about your journal’s ranking
within its subject area, rather than considering specific metrics in isolation.

Business model of journal


For journals following a traditional subscription model, usage can be particularly crucial. It’s a key
consideration for librarians when it comes to renewals.

How to interpret research metrics?


 It’s tempting to reach for simple numbers and extrapolate meaning, but be careful about reading too
closely into metrics. The best strategy is to see metrics as generating questions, rather than answers.
 Metrics simply tells us “what”. What are the number of views of the work? What are the number of
downloads from the journal? What are the number of citations?
 To interpret metrics effectively, we should think less about “what” and use your metrics as a starting
point to delve deeper into “who”, “how”, and “why”:
 Who is reading the journal? Where are they based, what is their role, how are they accessing it?
 Who are the key authors in your subject area? Where are they publishing now?
 How are users responding to your content? Are they citing it in journals, mentioning it in policy
documents, talking about it on Twitter?
 How is your subject area developing? What are the hot topics, emerging fields, and key conversations?
 Why was a specific article successful? What made the media pick up on it, what prompted citations
from other journals, who was talking about it?

6B.2 Impact Factors of journals as per Journal Citation Reports, SNIP, SJR, IPP,
CiteScore

It’s easy to damage the overall picture of your research metrics by focusing too much on one specific
metric. For example, if you wanted to boost your Impact Factor (IF) by publishing more highly-cited
articles, you might be disregarding low-cited articles used extensively by your readers. Therefore, if you
chose to publish only highly-cited content for a higher Impact Factor, you could lose the value of your
journal for a particular segment of your readership. Generally, the content most used by practitioners,
educators, or students (who don’t traditionally publish) is not going to improve your Impact Factor, but
will probably add value in other ways to your community.

Fundamentally, it’s important to consider a range of research metrics when monitoring your journal’s
performance. It can be tempting to concentrate on one metric, like the Impact Factor, but citations are not
the be-all and end-all. Think about each research metric as a single tile in a mosaic: you need to piece
them all together to see the bigger picture of journal performance.

So that the Impact Factor doesn’t penalize journals that publish rarely-cited content like book reviews,
editorials, or news items, these content types are not counted in the denominator of the calculation (the
total number of publications within the two-year period). However, citations to this kind of content are still
counted.

This creates two main problems. Firstly, the classification of content is subjective, so content such as
extended abstracts or author commentaries falls into an unpredictable gray area. Secondly, if such articles
are cited, they increase the Impact Factor without any offset in the denominator of the equation.

Research metrics
Research metrics are sometimes controversial, especially when in popular usage they become proxies for
multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis
based on its underlying data source, method of calculation, or context of use.

For this reason, Elsevier promotes the responsible use of research metrics encapsulated in two “golden
rules”. Those are: always use both qualitative and quantitative input for decisions (i.e. expert
opinion alongside metrics), and always use more than one research metric as the quantitative input. This
second rule acknowledges that performance cannot be expressed by any single metric, as well as the fact
that all metrics have specific strengths and weaknesses. Therefore, using multiple complementary metrics
can help to provide a more complete picture and reflect different aspects of research productivity and
impact in the final assessment.

This section covers some of the most popular citation-based metrics employed at the journal level. Where
available, they are featured in the "Journal Insights" section on Elsevier journal homepages (for example),
which links through to an even richer set of indicators on the Journal Insights homepage.
https://www.journals.elsevier.com/global-environmental-change

Impact Factor -- What is it? Why use it? (https://researchguides.uic.edu/if/impact)


The impact factor (IF) is a measure of the frequency with which the average article in a journal has been
cited in a particular year. It is used to measure the importance or rank of a journal by calculating the
number of times its articles are cited.

How is the Impact Factor calculated?


The calculation is based on a two-year period and involves dividing the number of times articles were
cited by the number of articles that are citable.

Journal Citation Reports

Journal Citation Reports provides ranking for journals in the areas of science, technology, and
social sciences. For every journal covered, the following information is collected or calculated:
Citation and article counts, Impact factor, Immediacy index, Cited half-life, citing half-life, Source
data listing, Citing journal listing, Cited journal listing, Subject categories, Publisher information.

 Limited to the citation data of Journals indexed in Web of Science


 Process to determine journals included in the tool
 Indexes over 12,000 journals in arts, humanities, sciences, and social sciences

You can enter a journal title in the Search box under "Go to Journal Profile". Because impact
factors mean little on their own, it's best to view the journal you are interested in in comparison with
the other journals in the same category. To determine the impact factor for a particular journal,
select a JCR edition (Science and/or Social Science), year, and Categories, found on the left of
the screen. Click Submit. Scroll the list to find the journal you are interested in. The list can be
re-sorted by Journal title, Cites, Impact Factor, and Eigenfactor.

Indexing Parameters
To understand a journal's standing and quality, certain parameters are used, and these are called indexing
parameters. Bibliometrics and scientometrics are used for the measurement of all aspects related to the
publication and reading of books and documents. (IJARIIE, Vol-5, Issue-5, 2019, ISSN(O)-2395-4396,
10866, www.ijariie.com)

These are some of the indexing parameters:

Impact Factor is a measure of the number of citations the articles of a journal receive in a year. It stands as
a value for measuring the quality of the articles and how much they have contributed to the academic
community. It is also one of the strongest aspects determining the rank of a journal, and it can be
calculated only after two years of the journal's establishment.

How to calculate Impact Factor?


The first important aspect is that the journal should have been publishing articles for a minimum of three
continuous years.
D = No. of articles indexed in the year 2018 and 2019
N = No. of citations of D in the year 2020
So, N/D = Impact factor of the year 2020 [5]

This is how the impact factor for any journal is calculated. It is calculated over either two years or five
years. The higher the impact factor a journal has, the higher its quality. It is very useful for finding an
objective measure of quality. Impact Factors reflect the changing status of a journal in a particular
discipline, as they are recalculated over a rolling two-year window and updated. They take into account two
major criteria: the number of articles the journal has published and the number of citations each of those
articles has received.

So, even a journal which has published very few articles might have the chance of having very high
citations and hence ensures quality.
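The N/D calculation above can be sketched in a few lines of Python. The counts below are hypothetical, purely for illustration:

```python
# Hypothetical counts, for illustration only (not real journal data).
articles_indexed = {2018: 120, 2019: 135}   # D: citable items indexed in 2018 and 2019
citations_in_2020 = 612                     # N: citations received in 2020 by those items

D = articles_indexed[2018] + articles_indexed[2019]
N = citations_in_2020

impact_factor_2020 = N / D
print(round(impact_factor_2020, 2))  # 612 / 255 = 2.4
```

The same ratio with a five-year window of articles and citations gives the 5-year Impact Factor discussed later.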

Limitation of Impact Factors


Though the impact factor is a quality parameter, it also has certain drawbacks. For example, it cannot be
used as a standard of comparison between different disciplines. Because it depends on the subject area, the
impact factor of a journal might be high in one discipline and low in another. It also cannot be used to
gauge the success of individual research papers. And it does not give an impact number for journals that
have less than 3 years of existence. [9]

Review Methods
Deborah, in her article, talks about the efficiency of the journal review process. It mostly depends on the
time taken for a journal to review and publish an article from the date the article is received.
Peer-review is very critical when it comes to publishing. There are different review methods that journals
follow to review the articles sent to them. They usually follow these to prevent bias and plagiarism.

The most commonly followed peer-review process is double-blind; journals also follow single-blind,
triple-blind, and open review processes. [8]

Single Blind Review is the method where the reviewer's identity remains anonymous to the author,
but the author's identity is disclosed to the reviewer. Every reviewing process has its own merits and
demerits. The author need not worry about the reviewer's area of expertise and will not be influenced by
the critical attitude of the reviewer, since the identity of the reviewer is not disclosed to the author.
However, because the author's identity is revealed to the reviewer, this may result in bias, or the reviewer
might opt to be overly critical about the article since his/her identity is not revealed.

In a Double-Blind Review process, the identities of the author and the reviewer are kept from each other.
This is done with the primary aim of encouraging an unbiased review process. [4] The author is supposed to
submit a manuscript that doesn’t reveal his/her identity in any way. If the author has a concern against a
particular person reviewing their work, then they can let the editorial board know about it through their
conflict of interest. Most of the journals follow the Double-Blind process to have a healthy peer-review
environment and provide the space required for knowledge dissemination.

In the Triple Blind Review process, neither the author, the reviewers, nor the editors know each other's
identities. Everybody's identity is concealed from the others to encourage an unbiased peer-review process.
This type of review is rarely used because concealing the identities of the author, reviewers, and editors
involves very complex logistics. [4]

The last one is the Open Review Process, where the author and the reviewer know each other. This
review process is opted for by very few journals and is considered to be a very open process. Here the
reviewer and the author can even discuss the manuscript together and work together in the comments. The
names of the reviewers are also published along with the authors in the article or the journal. (IJARIIE,
Vol-5, Issue-5, 2019, ISSN(O)-2395-4396, 10866, www.ijariie.com, p. 320)

Scholarly Publishing Resources for Faculty: Scopus Metrics (CiteScore, SNIP & SJR, h-index)
https://liu.cwp.libguides.com/c.php?g=45770&p=4417804

Journal-level metrics (https://www.elsevier.com/authors/tools-and-resources/measuring-a-journals-impact)

Metrics have become a fact of life in many - if not all - fields of research and scholarship. In an age of
information abundance (often termed 'information overload'), having shorthand signals for where in the
ocean of published literature to focus our limited attention has become increasingly important.

How can I get an Impact Factor for my journal?


 Only journals selected to feature in the Science Citation Index Expanded (SCIE) and Social
Sciences Citation Index (SSCI) receive an official Impact Factor.
 To be eligible for coverage in these Web of Science indices, journals must meet a wide range of
criteria. You can find out more about the journal selection process on the Clarivate website.
 For many journals, the first step to receiving an Impact Factor is to feature in the Emerging Sources
Citation Index (ESCI). More information on the ESCI is available on the Clarivate website.

What are the disadvantages of the Impact Factor?


 The Impact Factor is an arithmetic mean and doesn’t adjust for the distribution of citations.
 This means that one highly-cited article can have a major positive effect on the Impact Factor,
skewing the result for the two years. Most journals have a highly-skewed citation distribution, with
a handful of highly-cited articles and many low- or zero-cited articles.
 The JCR doesn’t distinguish between citations made to articles, reviews, or editorials.
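The skew problem described above is easy to demonstrate. In this hypothetical set of per-article citation counts, one highly-cited article pulls the mean (the quantity an Impact Factor-style average reflects) far above what a typical article receives:

```python
import statistics

# Hypothetical citation counts for 10 articles in one journal (illustration only):
# one highly-cited article, many low- or zero-cited ones.
citations = [95, 4, 3, 2, 1, 1, 0, 0, 0, 0]

mean = statistics.mean(citations)      # what an arithmetic-mean metric reports
median = statistics.median(citations)  # what a "typical" article actually receives

print(mean)    # 10.6 -- dominated by the single highly-cited article
print(median)  # 1.0
```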

The Impact Factor only considers the number of citations, not the nature or quality.
An article may be highly cited for many reasons, both positive and negative. A high Impact Factor only
shows that the research in a given journal is being cited. It doesn’t indicate the context or the quality of
the publication citing the research.

You can’t compare Impact Factors like-for-like across different subject areas.

Different subject areas have different citation patterns, which reflects in their Impact Factors. Research in
subject areas with typically higher Impact Factors (cell biology or general medicine, for example) is not
better or worse than research in subject areas with typically lower Impact Factors (such as mathematics or
history).
The difference in Impact Factor is simply a reflection of differing citation patterns, database coverage,
and dominance of journals between the disciplines. Some subjects generally have longer reference lists
and publish more articles, so there’s a larger pool of citations.

Impact Factors can show significant variation year-on-year, especially in smaller journals.

Page 7 of 23
Because Impact Factors are average values, they vary year-on-year due to random fluctuations. This
change is related to the journal size (the number of articles published per year): the smaller the journal,
the larger the expected fluctuation.

What is the 5-year Impact Factor?


The 5-year Impact Factor is a modified version of the Impact Factor, using five years’ data rather than
two. A journal must be covered by the JCR for five years or from Volume 1 before receiving a 5-year
Impact Factor.

The 5-year Impact Factor calculation is:

5-year Impact Factor = (citations in the JCR year to items published in the previous five years) / (number of citable items published in the previous five years)

The 5-year Impact Factor is more useful for subject areas where it takes longer for work to be cited, or
where research has more longevity. It offers more stability for smaller titles as there are a larger number
of articles and citations included in the calculation. However, it still suffers from many of the same issues
as the traditional Impact Factor.

Eigenfactor
In 2007, the Web of Science JCR grew to include Eigenfactors and Article Influence Scores. Unlike the
Impact Factor, these metrics don’t follow a simple calculation. Instead, they borrow their methodology
from network theory.

What is an Eigenfactor?
The Eigenfactor measures the influence of a journal based on whether it’s cited within other reputable
journals over five years. A citation from a highly-cited journal is worth more than from a journal with few
citations.
To adjust for subject areas, the citations are also weighted by the length of the reference list that they’re
from. The Eigenfactor is calculated using an algorithm to rank the influence of journals according to the
citations they receive. A five-year window is used, and journal self-citations are not included.
This score doesn’t take journal size into account. That means larger journals tend to have larger
Eigenfactors as they receive more citations overall. Eigenfactors also tend to be very small numbers as
scores are scaled so that the sum of all journal Eigenfactors in the JCR adds up to 100.
Very roughly, the Eigenfactor calculation is based on the number of times articles from the journal
published in the past five years have been cited in the JCR year, but it also considers which journals have
contributed these citations so that highly cited journals will influence the network more than lesser cited
journals.
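The network idea behind the Eigenfactor can be sketched with a toy example. This is not the official algorithm (which applies further adjustments, such as weighting by article counts); it is a minimal power-iteration sketch over a hypothetical three-journal cross-citation matrix, with self-citations zeroed out as the Eigenfactor rules require:

```python
import numpy as np

# Hypothetical cross-citation counts: C[i, j] = citations from journal j to
# journal i over a five-year window. The diagonal is zero because journal
# self-citations are excluded.
C = np.array([[0.0, 30.0, 10.0],
              [20.0, 0.0, 40.0],
              [5.0, 15.0, 0.0]])

# Normalize each column so every citing journal distributes one unit of
# influence, diluting citations from journals with long reference lists.
H = C / C.sum(axis=0)

# Power iteration: influence flows repeatedly through the citation network,
# so citations from influential journals end up counting for more.
v = np.ones(3) / 3
for _ in range(100):
    v = H @ v

print(v.round(3))  # relative influence of the three journals (sums to 1)
```

The converged vector is the dominant eigenvector of the citation network, which is where the name "Eigenfactor" comes from.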

Eigenfactor vs. Impact Factor: How are They Different?


Eigenfactor and impact factor are two widely used measures of a journal’s value. However, they measure
two different factors and cannot be used interchangeably.

Eigenfactor: How Many People Read this Journal?


Although the calculation is complicated, a journal eigenfactor is basically a measure of how many people
read a journal and think its contents are important. Since this cannot be directly calculated, it is measured
by counting the total number of citations a journal receives over a five-year period. Note that eigenfactor
measures the total number of citations. Therefore, journal A, which publishes 1000 articles a year, will
have twice the eigenfactor of journal B, which puts out 500 articles annually, if each article is cited the
same number of times.
Eigenfactor is meant to measure the importance of a journal throughout the scientific community and
rewards large journals that publish a variety of topics. It’s no surprise that the journal Nature, a large
journal which publishes on pretty much everything in science, has the highest eigenfactor. But this is true
only because its contents are considered valuable and are much read and cited.

Impact Factor: How Many People Read My Article?


In contrast, a journal impact factor measures how many citations an individual article receives on average
if published in a certain journal. Thus it is an indirect measure of how many people read an article and
think it’s important. Here journal A and journal B may have identical impact factors even though their
eigenfactors differ by a factor of two. But suppose the two journals had the same eigenfactor: then the
smaller journal would have twice the impact factor, suggesting either that it has more readers or that
readers consider its contents to be of higher quality than the bigger journal's.

Which Metric to Use?


It seems to me that a measure of citations per article is a more useful metric for an individual researcher
to consider when choosing a journal to submit to. A library may be more interested in a journal’s overall
importance (so as to know which ones to stock) but a researcher wants to know how many people will
read his particular article and think it’s important. Impact factor attempts to measure this; eigenfactor does
not. This does not mean that every researcher should submit every article to Nature. Statistical metrics
must be used with care. An article on the kinetics of nitroglycerin decomposition might be ignored by the
readers of Nature but have an avid following in Propellants, Explosives, and Pyrotechnics. Each article
has an ideal target journal and impact factor won't tell you what it is. The most it can do is help make a
choice between two journals that seem equally suitable.
https://www.enago.com/academy/eigenfactor-vs-impact-factor/

Article Influence Score


What is an Article Influence Score?
The Article Influence Score is a measure of the average influence of a journal’s articles in the first five years after
publication. A score greater than 1.00 shows above-average levels of influence.

The Article Influence Score calculation is:

Article Influence Score = (0.01 × Eigenfactor) / (the journal's share of all articles in the JCR over the five-year window)

These are then normalized so that the average journal in the JCR has a score of 1.
Like 5-year Impact Factors, journals don’t receive an Article Influence Score unless they have been
covered by the JCR for at least five years, or from Volume 1.

CiteScore
What is CiteScore?
CiteScore is the ratio of citations to research published. It’s currently available for journals and book series
which are indexed in Scopus.
The CiteScore calculation only considers content that is typically peer reviewed, such as articles, reviews,
conference papers, book chapters, and data papers.

The CiteScore calculation is:

CiteScore (year Y) = (citations received in years Y-3 to Y by documents published in years Y-3 to Y) / (number of peer-reviewed documents published in years Y-3 to Y)
CiteScore metrics
CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading abstract and
citation database of peer-reviewed literature.

Calculating the CiteScore is based on the number of citations to documents (articles, reviews, conference
papers, book chapters, and data papers) by a journal over four years, divided by the number of the same
document types indexed in Scopus and published in those same four years. For more details, see this FAQ.
https://service.elsevier.com/app/answers/detail/a_id/14880/supporthub/scopus
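The four-year calculation described above can be sketched with hypothetical counts:

```python
# Hypothetical Scopus-style counts (illustration only, not real data).
# CiteScore 2023 uses peer-reviewed documents from 2020-2023 and the
# citations those documents received in 2020-2023.
docs_by_year = {2020: 180, 2021: 195, 2022: 210, 2023: 205}
citations_2020_2023 = 3555

citescore_2023 = citations_2020_2023 / sum(docs_by_year.values())
print(round(citescore_2023, 1))  # 3555 / 790 = 4.5
```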

CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in
May the following year, permitting a real-time view on how the metric builds as citations accrue. Once
fixed, the other CiteScore metrics are also computed and contextualise this score with rankings and other
indicators to allow comparison.

CiteScore metrics are:


 Current: A monthly CiteScore Tracker keeps you up-to-date on the latest progression towards the next
annual value, which makes the next CiteScore more predictable.
 Comprehensive: Based on Scopus, the leading scientific citation database.
 Clear: Values are transparent and reproducible down to individual articles in Scopus.

The scores and underlying data for nearly 26,000 active journals, book series and conference proceedings
are freely available at www.scopus.com/sources or via a widget (available on each source page on
Scopus.com) or the Scopus API.

What are the differences between CiteScore and Impact Factor?


 CiteScore is based on the Scopus database rather than Web of Science. This means the number of
citations and journal coverage in certain subject areas is notably higher.
 CiteScore uses a four-year citation window, whereas Impact Factor uses a two-year citation
window.
 CiteScore covers all subject areas, whereas the Impact Factor is only available for journals
indexed in the SCIE and SSCI.
 CiteScore suffers from some of the same problems as Impact factor; namely that it isn’t
comparable across disciplines and it is a mean calculated from a skewed distribution.

CWTS Journal Indicators https://www.journalindicators.com/methodology, https://www.cwts.nl/


The Centre for Science and Technology Studies (CWTS) studies scientific research and its connections to
technology, innovation, and society. Our research, bibliometric and scientometric tools, and evaluation
expertise provide a solid basis for supporting research assessment and strategic decision making and for
developing science policy.
CWTS Journal Indicators offers a number of bibliometric indicators on scientific journals. These
indicators have been calculated based on the Scopus bibliographic database produced by Elsevier. A key
indicator offered by CWTS Journal Indicators is the SNIP indicator, where SNIP stands for source
normalized impact per paper. The original version of the SNIP indicator was developed by Henk Moed
in 2009 and is documented in a scientific paper (an open access preprint is available here). In 2012, SNIP
was revised, leading to some changes in the way it is calculated. These changes are explained in
another paper (an open access preprint is available here).

Indicators
CWTS Journal Indicators currently provides four indicators:
1. P. The number of publications of a source in the past three years.
2. IPP. The Impact per Publication, calculated as the number of citations given in the present year
to publications in the past three years divided by the total number of publications in the past three
years. IPP is fairly similar to the well-known journal impact factor. Like the journal impact factor,
IPP does not correct for differences in citation practices between scientific fields. IPP was
previously known as RIP (Raw Impact per publication).
3. SNIP. The Source Normalized Impact per Paper, calculated as the number of citations
given in the present year to publications in the past three years divided by the total number of
publications in the past three years. The difference with IPP is that in the case of SNIP citations
are normalized in order to correct for differences in citation practices between scientific fields.
Essentially, the longer the reference list of a citing publication, the lower the value of a citation
originating from that publication. A detailed explanation is offered in our scientific paper.
4. % self cit. The percentage of self citations of a source, calculated as the percentage of all citations
given in the present year to publications in the past three years that originate from the source itself.

In the calculation of the above indicators, only publications that are classified as article, conference paper,
or review in Scopus are considered. Publications of other document types are ignored. Citations
originating from such publications are ignored as well. Furthermore, citations are not counted if they
originate from special types of sources (referred to as non-citing sources), in particular trade journals and
sources with very few references to other sources (which includes many sources in the arts and
humanities).
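Of the four indicators, IPP and % self cit follow directly from the counts described above, as this sketch with hypothetical numbers shows (SNIP's field normalization is omitted, since it needs the reference-list data of every citing publication):

```python
# Hypothetical counts for one source over the indicator windows (illustration only).
P = 240              # publications (articles, conference papers, reviews) in the past three years
citations = 540      # citations given in the present year to those publications
self_citations = 81  # of those, citations originating from the source itself

ipp = citations / P                              # IPP: Impact per Publication
pct_self_cit = 100 * self_citations / citations  # % self cit

print(round(ipp, 2))           # 540 / 240 = 2.25
print(round(pct_self_cit, 1))  # 81 / 540 = 15.0 %
```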

Stability intervals
IPP and SNIP are provided with stability intervals. A stability interval reflects the stability or reliability
of an indicator. The wider the stability interval of an indicator, the less reliable the indicator. If for a
particular source IPP and SNIP have a wide stability interval, the indicators have a low reliability for this
source. This for instance means that the indicators are likely to fluctuate quite significantly over time.
CWTS Journal Indicators employs 95% stability intervals constructed using a statistical technique known
as bootstrapping.
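Bootstrapping, as used for these stability intervals, can be sketched as follows: resample the source's publications with replacement many times, recompute the citations-per-publication average for each resample, and take the 2.5th and 97.5th percentiles. The citation counts here are hypothetical:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical per-article citation counts for one source (illustration only).
citations = [12, 0, 3, 7, 1, 0, 25, 2, 4, 1, 0, 6, 3, 9, 2]

# Resample with replacement and recompute the mean each time.
boot_means = sorted(
    statistics.mean(random.choices(citations, k=len(citations)))
    for _ in range(10_000)
)

# The middle 95% of the bootstrap distribution forms the stability interval.
lower = boot_means[int(0.025 * len(boot_means))]
upper = boot_means[int(0.975 * len(boot_means))]
print(f"mean = {statistics.mean(citations):.2f}, "
      f"95% stability interval = [{lower:.2f}, {upper:.2f}]")
```

A small or volatile source yields a wide interval, matching the interpretation above: the wider the interval, the less reliable the indicator.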

SNIP - Source Normalized Impact per Paper


SNIP is a journal-level metric which attempts to correct for subject-specific characteristics, simplifying
cross-discipline comparisons between journals. It measures citations received against citations expected
for the subject field, using Scopus data. SNIP is published twice a year and looks at a three-year period.

In summary:
 SNIP -- Source Normalized Impact per Paper (SNIP) normalizes its sources to allow for cross-disciplinary
comparison. In practice, this means that a citation from a publication with a long reference list has a
lower value.
 SNIP only considers citations to specific content types (articles, reviews, and conference papers), and
does not count citations from publications that Scopus classifies as “non-citing sources”. These include
trade journals, and many Arts & Humanities titles.
SJR - Scimago Journal Rank
 The SJR aims to capture the effect of subject field, quality, and reputation of a journal on citations. It
calculates the prestige of a journal by considering the value of the sources that cite it, rather than
counting all citations equally.
 Each citation received by a journal is assigned a weight based on the SJR of the citing journal. So, a
citation from a journal with a high SJR value is worth more than a citation from a journal with a low
SJR value.
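A toy sketch of this prestige-weighting idea. The real SJR is computed iteratively over the whole Scopus citation network; here the citing journals' SJR values are simply given as inputs, as an assumption for illustration:

```python
def weighted_citations(citations_received, journal_sjr):
    """Each citation is worth the SJR of the journal it comes from, so one
    citation from a high-SJR journal can outweigh several citations from
    low-SJR journals. Unknown sources contribute zero weight."""
    return sum(journal_sjr.get(src, 0.0) for src in citations_received)
```

With this weighting, a single citation from a journal with SJR 10.0 counts for more than five citations from a journal with SJR 0.5.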

As with SNIP and CiteScore, SJR is calculated using Scopus data.

Journal metrics: usage, speed, and acceptance rate
As explained above, citations aren’t the only way to monitor the performance of a journal.

What does it measure?


A journal’s usage is the number of times its articles are viewed or downloaded. It gives a quick impression
of the journal’s size and reach.

How is it calculated? For example, the figure shown on Taylor & Francis Online is the total number
of times articles in the journal were viewed by users in the previous calendar year, rounded to the nearest
thousand. This includes all of the different formats available on Taylor & Francis Online, including
HTML, PDF, and EPUB. Usage data for each journal is updated annually in February.

How can we access our journal’s usage data?


We can easily access article-level usage data via the “Metrics” tab, for example on Taylor & Francis Online.
Publishers also provide more detailed annual usage reports to their journal editors, detailing the
COUNTER-compliant data available: https://editorresources.taylorandfrancis.com/understanding-research-
metrics/counter-compliant-data/

There are other online platforms which provide journal access, including aggregator services such as
JSTOR and EBSCO. Of course, some readers still prefer print over online, so it’s important you consider
these sources when building a broader picture of usage. The limitations of this metric, which can guide
researchers, are set out at https://authorservices.taylorandfrancis.com/publishing-your-
research/choosing-a-journal/journal-metrics/#usage

Speed metrics
The following speed metrics, which are available for many journals on Taylor & Francis Online, indicate
how long different stages of the publishing process might take. The speed metrics published on Taylor &
Francis Online are for the previous full calendar year and are updated in February.
All of these metrics have limitations, which authors should consider when using them to choose a journal.
These limitations are set out in a researcher guide to understanding journal metrics.
https://authorservices.taylorandfrancis.com/publishing-your-research/choosing-a-journal/journal-
metrics/

Speed from submission to first decision


What does it measure? This metric indicates how long after submission it may take before you receive a
decision about your article.
How is it calculated? This is the median number of days from submission to first decision for all
manuscripts which received a first decision in the previous calendar year.

Speed from submission to first post-review decision


What does it measure? This metric only considers those articles that are sent out for peer review by experts
in the field. It indicates how long it may take before you receive a decision on your peer reviewed article.

How is it calculated? This is the median number of days from submission to decision for all peer reviewed
articles which received a first decision in the previous calendar year.

Speed from acceptance to online publication


What does it measure? This metric tells you about the journal’s production speed, indicating how long
you are likely to wait to see your article published online once the journal’s editor has accepted it.

How is it calculated? On Taylor & Francis Online this figure is the median number of days from
acceptance to online publication of the Version of Record, for articles published in the previous calendar
year.
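All three speed metrics above are medians of day counts over a calendar year, which can be sketched as follows (a simplified illustration; journal submission systems track many more states than the two dates used here):

```python
import statistics
from datetime import date

def median_days(date_pairs):
    """Median number of days between two publishing milestones (for example
    submission -> first decision, or acceptance -> online publication) over
    all articles receiving that milestone in a calendar year."""
    return statistics.median((end - start).days for start, end in date_pairs)
```

Feeding in (submission, decision) date pairs gives submission-to-decision speed; feeding in (acceptance, publication) pairs gives production speed.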

Acceptance rate
A journal’s acceptance rate is an indication of the number of submissions it receives for every article that’s
eventually published.

How is it calculated? This figure represents the articles accepted by the journal for publication in the
previous calendar year as a percentage of all papers receiving a final decision in that calendar year. It
includes all article types submitted to the journal, including those that are rejected without being peer
reviewed (desk rejects).
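The acceptance-rate calculation above can be sketched as:

```python
def acceptance_rate(accepted: int, rejected: int) -> float:
    """Acceptance rate: articles accepted for publication as a percentage of
    all papers receiving a final decision (including desk rejects) in the
    calendar year."""
    decided = accepted + rejected
    return 100.0 * accepted / decided if decided else 0.0
```

For example, a journal that accepts 30 papers and rejects 70 in a year has an acceptance rate of 30%.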

The acceptance rates published on Taylor & Francis Online are for the previous full calendar year and are
updated in February. Publishers have set out the limitations of this metric in their guides for researchers.

6B.3 Metrics: h-index, g-index, i10 Index, Altmetrics

Sources of Research Metrics https://guides.library.mun.ca/researchvisibility/metrics

6B.3.1 Citation-Tracking Databases


Research metrics are typically derived from a citation index (a database that tracks citations among papers,
in addition to bibliographic details). The most commonly used sources are:
 Scopus
 Web of Science
 Google Scholar

Specialized Tools
SciVal - SciVal is Elsevier's research benchmarking software, which is based on Scopus data.

Other Sources
 Dimensions (free version) - another citation index, which includes research metrics such as citation
counts.
 Publish or Perish - a free downloadable software program that retrieves and analyzes academic
citations from sources such as Google Scholar or Microsoft Academic.
 Research metrics are dependent on database coverage, so metrics should be drawn from the same
source whenever possible and the data source should be named when metrics are cited.

6B.3.2 Eigenfactor Score


The Eigenfactor Score is similar in purpose to the Impact Factor. It weights each citation according to a
measure of the amount of time researchers spend reading the journal, and it excludes journal self-citations.
The h-index was created by Jorge E. Hirsch, a physicist; he and his colleagues began ranking authors and
journals using this indexing parameter.

6B.3.3 The h-index


The h-index is short for the Hirsch index, which was introduced by Jorge E. Hirsch (2005) as a way to
quantify the productivity and impact of an individual author. Just as the Impact Factor (IF) is used to
measure a journal's standing in its scientific field, the h-index has become another measure of the relative
impact of scientific publications. While the IF is derived from the quotient of total citations and
total papers in a two-year span, the h-index is simply the largest number of papers (h) from a
journal or author that have at least h citations each. For example, Webology has an h-index of 21
based on Google Scholar, which indicates that the journal has published 21 papers with at least 21 citations each.

Table 1. h-index of Webology
Database        h-index
Google Scholar  21
Scopus          9

The h-index is calculated from the number of papers published by the author and the number of times each
of those papers was cited. For example, if an author has published 3 papers and all three papers were
cited three times each, then the author’s h-index is 3.
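The h-index calculation described above can be sketched as:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each.
    `citations` is a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:       # the paper at this rank still has enough citations
            h = rank
        else:
            break           # counts are sorted, so no later rank can qualify
    return h
```

Using the example from the text, three papers cited three times each give an h-index of 3.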

In a single number, it encompasses both the quantity of a researcher's work and its impact in that
particular field of study. This makes it easy for researchers to understand both at once, so it is not a
single-aspect criterion: it covers different aspects such as citations and the total number of highly
cited papers. It also allows one to gauge academic output accurately and becomes an influencing
factor in terms of honors and awards.

The h-index cannot be used to compare researchers from two different disciplines; it might work well for
comparing two researchers of the same discipline, but there are inter-disciplinary differences. It is also
dependent on each researcher's career duration, because the longer the duration, the higher the number of
citations. Papers with high citation numbers are very important in calculating the h-index value, but there
is a disadvantage: once papers are listed among the top h papers, the citations they receive after that are
not taken into account. There is also a high risk of researchers indulging in extensive self-citation to
increase their h-index. Sometimes it becomes the only determinant of a researcher's value and other
aspects are side-lined.

CiteScore is a journal metric introduced by Elsevier. Its major aim is to measure the impact of a
journal or article in a particular field of study. It draws on an extensive range of peer-reviewed literature
from across 50,000 journals. The calculation is simple, clear, and accessible, and access to CiteScore
metrics is free of cost: users are not charged to view a CiteScore. It is updated every month, unlike other
metrics that are updated annually or once every 3 or 5 years; the CiteScore Tracker displays the CiteScore
of journals every month.
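As an illustration, a CiteScore-style ratio can be sketched as follows. The four-year window reflects Elsevier's post-2020 CiteScore methodology; treat the exact window and document types as assumptions here:

```python
def citescore(citations_in_window: int, documents_in_window: int) -> float:
    """CiteScore-style ratio: citations received over a four-year window to
    peer-reviewed documents published in that same window, divided by the
    number of those documents."""
    if documents_in_window == 0:
        return 0.0
    return citations_in_window / documents_in_window
```

For example, 600 citations to 200 documents published in the window gives a CiteScore-style value of 3.0.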

CiteScore takes everything into account: articles, conference papers, letters, and editorials. This might
cause a little dilution in quality. It also seems to favour journals published by the same house, and it does
not include journals that are not indexed in Scopus or Web of Science. As a result, journals that include
document types such as reviews and editorials seem to get a lower CiteScore than those that don't. The
m-index, though similar to the h-index, takes into consideration the length of the author's academic
career.

If N = the number of years since the author first published a paper,

then m-index = h/N (h being the h-index of the author).

The g-index is a variant of the h-index which also takes into account further increases in the citations of
the most cited papers. To calculate the g-index, g is the largest rank such that the top g papers together
have at least g² citations. The i10-index refers to the number of articles that have been cited at least 10 times.

The h5-index is the h-index calculated over articles published in the last 5 complete years. For example,
if the h5-index is 4, then 4 of the articles published in the last 5 years have been cited at least 4 times
each.
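The g-index, i10-index, and m-index described above can be sketched as follows (the m-index uses the h/N formula given in the text):

```python
def g_index(citations):
    """g-index: the largest g such that the top g papers together
    have at least g**2 citations."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c                   # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

def i10_index(citations):
    """i10-index: number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

def m_index(h: int, years_since_first_paper: int) -> float:
    """m-index: the h-index divided by career length in years (h/N)."""
    return h / years_since_first_paper
```

For citation counts [10, 8, 5, 4, 3] the h-index is 4, but the g-index is 5, because the top five papers together have 30 citations, which exceeds 5² = 25.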

SCImago Journal Rank (SJR) ranks journals. It is a scientific measure of scholarly journals that takes
into account both the number of citations and the quality of the journals in which the citations are made.
SJR is a numeric value depicting the weighted citations received in a particular year by documents
published in the previous three years. The higher the value, the greater the journal's prestige. This value
can be of great use when comparing journals during the evaluation process.

Cited half-life counts all the citations a journal receives in a particular year and determines the median
publication date of the cited articles. An article can be cited many times, sometimes in the very year it
was published or even after decades, so the cited half-life shows how many of that journal's cited articles
were published before and after the median year. These are a few parameters a research scholar must take
into account when sending an article to a journal. A journal's prestige, the quality of the articles
published, the impact factor, and the authors' h-index all need to be taken into consideration.

i10-index. The i10-index is the newest in the line of journal metrics and was introduced by Google
Scholar in 2011. It is a simple and straightforward indexing measure found by tallying a journal’s total
number of published papers with at least 10 citations (Google Scholar Blog, 2011). Webology has an
i10-index score of 52 according to Google Scholar (see Table 2).

Table 2. i10-index of Webology
Database        i10-index
Google Scholar  52
Scopus          8

i20-index. The i20-index, proposed in this editorial note, is obtained by tallying a journal’s total number
of published papers with at least 20 citations (see Table 3).

Table 3. i20-index of Webology
Database        i20-index
Google Scholar  24
Scopus          5

Note that the total number of citations to Webology papers on Google Scholar was 2423, and the total
number of citations received by the i20 papers (i.e., 24 papers out of all published papers) was 1693.
This briefly means that the i20 papers received 70 percent of all citations to Webology.

The i20-index helps shift editors' focus and encourages journals to accept more relevant papers that can
be used and cited by peers.

Citations in patents. The number of citations to a journal in patents indicates to what extent a journal is
technology-oriented (Noruzi & Abdekhoda, 2014).

Webology is cited 16 times by patents on Google Patents and 13 times by patents issued by the USPTO.

Table 4. Number of citations to Webology in patents
Database        No. of citations
Google Patents  16
USPTO           13

Note that to retrieve the USPTO patents citing Webology, we used the search command
(OREF/Webology) in the advanced search, and to identify the number of citations on Google Patents, we
conducted a keyword search for Webology. The OREF (Other References) field in the USPTO
database contains other references cited as prior art, including journals, books, and conference
proceedings. https://www.webology.org/2016/v13n1/editorial21.pdf

Article metrics: Altmetric Attention Score


The Altmetric Attention Score tracks a wide range of online sources to capture the conversations
happening around academic research.

How is the Altmetric Attention Score calculated?


 Altmetric monitors each online mention of a piece of research and weights the mentions based on
volume, sources, and authors. A mention in an international newspaper contributes to a higher
score than a tweet about the research, for example.
 The Altmetric Attention Score is presented within a colorful donut. Each color indicates a different
source of online attention (ranging from traditional media outlets to social media, blogs, online
reference managers, academic forums, patents, policy documents, the Open Syllabus Project, and
more). A strong Altmetric Score will feature both a high number in the center, and a wide range
of colors in the donut.
 Discover the different ways you can make Altmetric data work for you by reading this
introduction from Altmetric’s Head of Marketing, Cat Chimes.

What are the advantages of the Altmetric Attention Score?
Altmetric starts tracking online mentions of academic research from the moment it’s published. That
means there’s no need to wait for citations to come in to get feedback on a piece of research.

Get a holistic view of attention, impact and influence


The data Altmetric gathers provides a more all-encompassing, nuanced view of the attention, impact, and
influence of a piece of research than traditional citation-based metrics. Digging deeper into the Altmetric
Attention Score can reveal not only the nature and volume of online mentions, but also who’s talking
about the research, where in the world these conversations are happening, and which online platforms
they’re using.

What are the disadvantages of the Altmetric Attention Score?

Biases in the data which Altmetric collects


There’s a tendency to focus on English-speaking sources (there’s some great thinking around this by Juan
Pablo Alperin). There’s also a bias towards Science, Technology and Medicine (STM) topics, although
that’s partly a reflection of the activity happening online around research.

Limited to tracking online attention


The Altmetric Attention Score was built to track digital conversations. This means that attention from
sources with little direct online presence (like a concert, or a sculpture) is not included. Even for online
conversations, Altmetric can only track mentions when the source either references the article’s Digital
Object Identifier (DOI) or uses two pieces of information (i.e. article title and author name).

Author metrics
h-index
What is the h-index?
The h-index is an author-level research metric, first introduced by Hirsch in 2005. The h-index attempts
to measure the productivity of a researcher and the citation impact of their publications.

The basic h-index calculation is: the number of articles published (h) that have each received at least h
citations. For example, if you’ve published at least 10 papers that have each been cited 10 times or more,
you will have an h-index of 10.

What are the advantages of the h-index?


Results aren’t skewed
The main advantage of the h-index is that it isn’t skewed upwards by a small number of highly-cited
papers. It also isn’t skewed downwards by a long tail of poorly-cited work.
The h-index rewards researchers whose work is consistently well cited. That said, a handful of well-placed
citations can have a major effect.

What are the disadvantages of the h-index?


Results can be inconsistent

Although the basic calculation of the h-index is clearly defined, it can still be calculated using different
databases or time-frames, giving different results. Normally, the larger the database, the higher the h-index
calculated from it. Therefore, an h-index taken from Google Scholar will nearly always be higher than one
from Web of Science, Scopus, or PubMed. (It’s worth noting here that as Google Scholar is an uncurated
dataset, it may contain duplicate records of the same article.)

Results can be skewed by self-citations


Although some self-citation is legitimate, authors can cite their own work to improve their h-index.

Results aren’t comparable across disciplines


The h-index varies widely by subject, so a mediocre h-index in the life sciences will still be higher than a
very good h-index in the social sciences. We can’t benchmark h-indices because they are rarely calculated
consistently for large populations of researchers using the same method.

Results can’t be compared between researchers


The h-index of a researcher with a long publication history including review articles cannot be fairly
compared with a post-doctoral researcher in the same field, nor with a senior researcher from another
field. Researchers who have published several review articles will normally have much higher citation
counts than other researchers.

Differences with journal impact factor

The main differences between the indicators provided by CWTS Journal Indicators, in particular the IPP
and SNIP indicators, and the journal impact factor (JIF) can be summarized as follows:
 Based on Scopus (IPP and SNIP) vs. based on Web of Science (JIF).
 Correction for field differences (SNIP) vs. no correction for field differences (IPP and JIF).
 Three years of cited publications (IPP and SNIP) vs. two years of cited publications (JIF).
 Citations from selected sources and selected document types only (IPP and SNIP) vs. citations
from all sources and document types (JIF).
 Citations to selected document types only (IPP and SNIP) vs. citations to all document types (JIF).

Guidelines for interpretation


In the interpretation of the indicators provided by CWTS Journal Indicators, in particular the IPP and
SNIP indicators, please take into account the following considerations:
Review articles. IPP and SNIP do not distinguish between ordinary research articles and review articles.
Review articles tend to be cited substantially more frequently than ordinary research articles. Journals that
publish many review articles therefore tend to have higher IPP and SNIP values than journals that publish
mainly ordinary research articles.

Journal self citations.


Some journals may try to increase their citation impact by increasing their number of self citations,
sometimes in questionable ways (e.g., coercive citing). IPP and SNIP do not correct for this. However,
the percentage of self citations of a journal is reported as a separate indicator in CWTS Journal Indicators.

In the interpretation of this indicator, one should keep in mind that in general larger journals can be
expected to have a higher percentage of self citations than smaller journals.
Small journals. IPP and SNIP are less reliable for small journals with only a limited number of publications
than for larger journals. For this reason, CWTS Journal Indicators by default displays statistics only for
journals with at least 50 publications. Notice that smaller journals also tend to have wider stability intervals
than larger journals.

Skewness of citation distributions.


The distribution of citations over the publications in a journal tends to be highly skewed, with many
uncited and lowly cited publications and only a small number of highly cited ones. Because of this
skewness, the average citation impact of a journal, as measured using indicators such as IPP and SNIP
(but also the journal impact factor), is not very representative of the citation impact of individual
publications in the journal. One should therefore be careful in assessing individual publications based on
the journal in which they have appeared (see for instance the San Francisco Declaration on Research
Assessment).

Outliers. IPP and SNIP are sensitive to ‘outliers’, that is, these indicators may sometimes be strongly
influenced by one or a few very highly cited publications. If for a particular journal IPP and SNIP are
largely determined by a few very highly cited publications, this is reflected by wide stability intervals. In
the interpretation of IPP and SNIP, it is therefore important to take into consideration not only the value
of the indicator but also the width of the stability interval.

Non-citing sources. As explained above, some journals have been classified as non-citing sources. This
applies in particular to many journals in the arts and humanities. Citations originating from these journals
are not counted in the calculation of IPP and SNIP, but these journals may have IPP and SNIP values
themselves. These IPP and SNIP values should be interpreted with extreme caution. These values for
instance do not include journal self citations, and therefore the values tend to be artificially low.

SCImago Journal Rank (SJR) (Elsevier)

“The SCImago Journal & Country Rank is a portal that includes the journals and country scientific
indicators developed from the information contained in the Scopus® database (Elsevier B.V.).” Scopus
contains more than 15,000 journals from over 4,000 international publishers as well as over 1000 open
access journals. SCImago's "evaluation of scholarly journals is to assign weights to bibliographic citations
based on the importance of the journals that issued them, so that citations issued by more important
journals will be more valuable than those issued by less important ones." (SJR indicator).

SNIP (Source Normalized Impact per Paper) https://researchguides.uic.edu/if/impact


Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations
based on the total number of citations in a subject field. The impact of a single citation is given higher
value in subject areas where citations are less likely, and vice versa. Unlike the well-known journal impact
factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing for
more accurate between-field comparisons of citation impact. CWTS Journal Indicators also provides

stability intervals that indicate the reliability of the SNIP value of a journal. SNIP was created by
Professor Henk F. Moed at the Centre for Science and Technology Studies (CWTS), Leiden University.

Limitations of Research Metrics


Disciplinary Differences
Publication and citation practices vary widely by discipline; bibliometric comparisons between disciplines
are generally not recommended (unless normalized metrics are used).
Books, book chapters, and other non-article formats (e.g. exhibits, performances, etc.) are not well
represented (if at all) in citation tracking databases; thus, metrics based on journals and journal articles
have limited applicability for some disciplines (particularly in the arts and humanities).

Scope (Coverage) and Accuracy of Source Data


 Research metrics are usually calculated based on data contained in commercial citation indices
(citation-tracking databases), such as Scopus, Web of Science, and Google Scholar. The measures are
therefore tied to the content indexed in each of those databases. It is likely that the value of an
author's h-index, for example, will be different in Scopus, Web of Science, and Google Scholar due to
differences in coverage of the author's works and of citing documents.
 Coverage of different languages, geographic regions, and disciplines varies across databases, as does
data accuracy.
 Citation-tracking databases have an English-language bias and provide limited coverage of non-
English works. Similarly, their geographic coverage reflects the locations of major publishers; they
may have limited coverage of works that are of national or regional importance.

Time
 Time is needed for publications to receive citations, so citation-based metrics are less useful when
applied to early career researchers.
 Citations accrue at different rates in different disciplines and for different publication types.

Size
 The value of some metrics tends to increase as the size of the entity being considered increases. For
example, a small research group will tend to have fewer publications and citations than a large
department; in such cases, a metric such as citations per publication may be more appropriate than a
total citation count.

Normalization https://guides.library.mun.ca/researchvisibility/metrics
 Since publication and citation patterns vary across disciplines and over time, bibliometric indicators
are often normalized to enable comparisons across different fields, time periods, document types, or
other factors.
 A field-normalized citation score, for example, compares the total number of citations received by a
publication (or author) to the expected number of citations of a publication (or author) in the same
field.
 A value of 1.00 indicates that the publication has received an average number of citations for
publications in that field. A value >1.00 indicates that the number of citations this publication has
received is greater than the world average for that field.
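A field-normalized citation score as described above can be sketched as follows, assuming the "expected" number of citations is taken to be the field's average citation count (real implementations also normalize by year and document type):

```python
import statistics

def field_normalized_score(pub_citations: int, field_citations: list) -> float:
    """Field-normalized citation score: a publication's citation count divided
    by the average citation count of comparable publications in the same field.
    A value of 1.00 means the publication is cited at the field average."""
    expected = statistics.fmean(field_citations)
    return pub_citations / expected if expected else 0.0
```

For example, a paper with 20 citations in a field where comparable papers average 10 citations scores 2.0, i.e. twice the field average.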

6B.4 Summary

This course material provides an overview of finding, gathering, and using metrics and other information
about your research and scholarship to tell the story of your scholarly impact.

Research in the sciences and the humanities will continue to develop in the way it is conducted and
organized, as well as in the way it is embedded in society. This, in turn, will lead to evolving views on
good research practices. From time to time, the standards for good research practices and the related duties
of care must be reviewed and the Code updated. Some areas of research practices are subject to change;
for example, the growing importance of the way data is used and managed and the developments in the
area of open science. It is to be expected that these and other advances will require additions and
adjustments to the Code in the future.

Publication is the final step in the research process, and indexing is a crucial and primary aspect of
publishing articles and research work. It helps a researcher filter out predatory journals, find a good
journal, and send the work to be validated, reviewed, and published. The better a researcher's knowledge
of this, the better his or her chance of producing quality work that will benefit his or her research,
discipline, and community.

An author's impact on their field or discipline has traditionally been measured using the number of times
their academic publications are cited by other researchers. There are numerous algorithms that account
for such things as the recency of the publication, or poorly or highly cited papers. While citation metrics
may reflect the impact of research in a field, there are many potential biases with these measurements and
they should be used with care.

Journal impact measurements reflect the importance of a particular journal in a field and take into account
the number of articles published per year and the number of citations to articles published in that
journal. Like author impact measurements, journal impact measures can be only so informative, and
researchers in a discipline will have the best sense of the top journals in their field.

There are many tools available for measuring and tracking your research impact. Be aware that
comparisons across different tools are not advised, as they may use different algorithms and different
citation data to calculate impact, as has been discussed.
Today, there are many new forms of scholarly publishing, networking and collaborating. Beyond the more
traditional means of scholarly communication, researchers can now reach vast and distant audiences well
beyond the borders of their research communities. In addition, there are now tools that can be used to
measure research impact in these non-traditional forms of scholarly communication, also referred to
as 'altmetrics'.

Research metrics are quantitative tools used to help assess the quality and impact of research outputs.
Metrics are available for use at the journal, article, and even researcher level. However, any one metric
only tells a part of the story and each metric also has its limitations. Therefore, a single metric should

never be considered in isolation.

6B.5 QUESTIONS/ SELF ASSESSMENT QUESTIONS

1. Who coined the term webometrics?


a. Almind and Ingwerson
b. Alan Pritchard
c. Vassily V. Nalimov and Z. M. Mukchenco
d. None of the above

2. How to calculate Citation Per Publication (CPP)?


a. Total Citations Sent /Total Papers
b. Total Citations Received/Total Papers
c. Total Citations Received/Citation Per Paper
d. Total Citations Sent/Citation Per Paper
3. Research journals with a high ____________ are commonly considered to be more important than
those with lower ones.
a. Eigen factor
b. h-index
c. impact factor
d. i10 score
4. Position papers are:
a. Compiling of academic articles
b. Providing review articles
c. Offering a synopsis of an extended research
d. Highlighting issues and depiction of the status

5. Citation means that a particular paper has been:


a. Sold to another publisher.
b. Quoted in another paper by another author.
c. Reproduced elsewhere.
d. Discussed orally by another author.

Answer Key:

1(a) Almind and Ingwerson
2(b) Total Citations Received/Total Papers
3(c) Impact factor
4(d) Highlighting issues and depiction of the status
5(b) Quoted in another paper by another author.

6B.6 References/ Bibliography/ Select Reading


1. http://ijariie.com/AdminUploadPdf/An_Introduction_to_Indexing_and_Peer_review_Process_ijariie10866.pdf
2. https://liu.cwp.libguides.com/c.php?g=45770&p=4417804
3. https://researchguides.uic.edu/if/impact
4. https://www.journalindicators.com/methodology
5. https://egyankosh.ac.in/bitstream/123456789/11125/1/Unit-13.pdf
6. https://editorresources.taylorandfrancis.com/understanding-research-metrics/
7. https://www.webology.org/2016/v13n1/editorial21.pdf
8. https://newsroom.taylorandfrancisgroup.com/taylor-francis-signs-up-to-principles-outlined-in-dora-supporting-balanced-and-fair-research-assessment/
9. https://authorservices.taylorandfrancis.com/publishing-your-research/choosing-a-journal/journal-metrics/
10. https://www.journals.elsevier.com/global-environmental-change
11. https://www.enago.com/academy/eigenfactor-vs-impact-factor/
12. https://service.elsevier.com/app/answers/detail/a_id/14880/supporthub/scopus
13. https://guides.library.cornell.edu/c.php?g=32272&p=203388
14. https://guides.library.mun.ca/researchvisibility/metrics
