(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 15, No. 1, 2024
Transformative Automation: AI in Scientific
Literature Reviews
Kirtirajsinh Zala1 , Biswaranjan Acharya2 , Madhav Mashru3 ,
Damodharan Palaniappan4 , Vassilis C. Gerogiannis5 , Andreas Kanavos6 , Ioannis Karamitsos7
Department of Information Technology, Marwadi University, Rajkot, Gujarat 360003, India1,4
Department of Computer Engineering -AI & BDA, Marwadi University, Rajkot, Gujarat 360003, India2
Faculty of Engineering, Marwadi Education Foundation’s Group of Institutions, Rajkot, Gujarat 360003, India3
Department of Digital Systems, University of Thessaly, Larissa, Greece5
Department of Informatics, Ionian University, Corfu, Greece6
Research and Graduate Department, Rochester Institute of Technology, Dubai, UAE7
Abstract—This paper investigates the integration of Artificial
Intelligence (AI) into systematic literature reviews (SLRs), aiming
to address the challenges associated with the manual review
process. SLRs, a crucial aspect of scholarly research, often prove
time-consuming and prone to errors. In response, this work
explores the application of AI techniques, including Natural
Language Processing (NLP), machine learning, data mining, and
text analytics, to automate various stages of the SLR process.
Specifically, we focus on paper identification, information extraction, and data synthesis. The study delves into the roles of NLP
and machine learning algorithms in automating the identification
of relevant papers based on defined criteria. Researchers now
have access to a diverse set of AI-based tools and platforms
designed to streamline SLRs, offering automated search, retrieval,
text mining, and analysis of relevant publications. The dynamic
field of AI-driven SLR automation continues to evolve, with
ongoing exploration of new techniques and enhancements to
existing algorithms. This shift from manual efforts to automation
not only enhances the efficiency and effectiveness of SLRs but
also marks a significant advancement in the broader research
process.
Keywords—Artificial intelligence; systematic literature review;
scholarly data analysis; machine learning algorithms; natural
language processing; scientific publication automation
I. INTRODUCTION
Artificial intelligence (AI) has emerged to alleviate humans
from repetitive tasks that demand specific human skills. Like
any other field, scientific endeavors benefit from powerful
algorithms to expedite and enhance outcomes. Initiating a new
research project typically involves a thorough investigation of
relevant scholarly publications to comprehend the landscape
and identify activities significant for addressing similar or
related issues. The process of gathering documents, when
performed without prior training or well-defined parameters,
may lead to the omission of significant contributions [29]. A
comprehensive approach to searching and analyzing literature
can help reduce the likelihood of bias and inaccuracy in
research [24], [28].
A systematic literature review (SLR) is a secondary investigation that assesses existing research, employing a widely recognized procedure to identify related articles, extract pertinent
details, and present their main findings in an organized manner
[33]. It is anticipated that a published literature review will
deliver a comprehensive summary of a corresponding research
subject, often providing a historical perspective that facilitates
the identification of research trends and unresolved issues.
Literature reviews are now a fundamental component of many
scientific fields, including medicine (with 13,510 published
reviews) and computer science (with 6,342) [47].
Conducting a literature review is known to be time-consuming, especially when addressing a vast research subject.
In recent years, various systematic literature review (SLR)-related tools have been developed for diverse purposes [47].
These tools can automate digital database searches, designate
relevant outcomes based on inclusion criteria, and provide
visual support for analyzing information from works’ authors
and their citations, among other capabilities. Particularly, the
automation of the SLR process is gaining attention in the field
of computer science research, offering strategies to construct
search phrases and retrieve publications semi-automatically or
manually from relevant scientific databases [76]. The utilization of automated methods has proven to save time and costs in
selecting relevant articles [11], or providing a summary of the
findings [71]. However, some authors argue that the usefulness
of these automated tools is limited by their steep learning curve
and the lack of research analyzing the advantages they offer
[74].
This paper focuses on the computerized and automated
operation of SLR tasks, replacing manual labor with ML as
the primary driver. The goal is to enhance the capability of
automated review processes and technologies with some additional understanding and suggestions. The initial application
of AI methods to automate SLR tasks occurred in 2006 [12],
where it was suggested that neural networks could be used
to automate the selection of relevant articles. Initial resistance
to this idea stemmed from concerns regarding the use of data
gleaned from secondary sources through text mining [51].
Following this concept, previous works by other researchers have delved into powerful text mining techniques
[52], [58], [65]. Recent innovations in the field include the
integration of ML and natural language processing (NLP)
techniques [27], [76]. Considering the repetitive tasks involved
in an SLR methodology, the capabilities of AI for analyzing
scientific literature are vast. However, it is crucial not to devalue
the role of human involvement in this process, as humans bring
a holistic perspective that current AI techniques may lack.
An exciting development in the field of SLR is the relatively recent introduction of AI tools for automating the entire
procedure — a field anticipated to continue expanding in the
coming years. The increasing level of interest indicates that now
is an opportune time to analyze AI techniques presented as
solutions to various SLR tasks. This analysis includes a focus
on their intended use, sources of input and output, and the
need for human intervention. Several research efforts in the
field have incorporated AI techniques into their evaluations
of procedures and instruments for facilitating SLR tasks.
However, these investigations have taken either a more general
approach, considering any type of automation with or without
AI, or they have exclusively focused on AI [47], [76]. Some
experts have concentrated on the use of specific AI techniques,
such as ML methods, to address a particular problem [49] or
a specific SLR activity, like document selection [57], [58].
Despite these efforts, some investigations may lack a
comprehensive overview of the diverse ideas and procedures
involved in AI relevant to the entire SLR procedure. In
addition to providing a comprehensive overview of the field,
the present paper aims to expand on the significance of
human involvement—a perspective not fully addressed by the
partially autonomous SLR considered in the existing literature
reviews. Keeping these goals in mind, the following are a few
inquiry concerns, also known as Research Questions (RQs),
that inform our analysis of the current status of AI-based SLR
automation:
• RQ1: Which stages of the SLR process have been automated using artificial intelligence?
• RQ2: Which AI methods facilitate the automation of SLR tasks?
• RQ3: To what extent does the human factor enter into SLR automation with AI?
As part of our survey, we conducted a systematic literature
search to address the above RQs. We identified the latest
original research articles from an extensive collection of references retrieved through both automated and manual searches. Reviewing these articles was essential to comprehend
the motivation behind employing AI for specific tasks. We then
scrutinized the inputs, outputs, and algorithmic choices of the
proposed methods, along with information on the experimental
evaluation of the approaches, including benchmarking metrics
and sample articles.
Our analysis revealed that certain SLR tasks have been the subject of significantly more research compared to others, with some ML approaches introduced in the early phases still in use. However, we also identified more recent studies investigating novel ML approaches that incorporate the human dimension. Our findings in response to each RQ allowed us to pinpoint several unresolved concerns and difficulties related to the use of AI techniques for SLR tasks that they were not specifically designed for. Additionally, we identified issues related to experimental repeatability and other factors that have not yet been thoroughly addressed.

The remainder of this paper is organized as follows: Section II provides an overview of related work, highlighting existing literature and studies pertinent to the integration of AI in systematic literature reviews. In Section III, we delve into the methodology employed in our research, elucidating the approach and techniques used. Section IV explores the landscape of AI-based support for the literature review process, detailing advancements, tools, and strategies. Following this, Section V outlines open issues and challenges associated with AI-driven literature reviews. Section VI offers conclusions drawn from our exploration and proposes avenues for future research. These sections collectively contribute to a comprehensive understanding of the current state and potential future developments in the intersection of AI and systematic literature reviews.
II. RELATED WORK
A systematic examination of existing research, known as
a Systematic Literature Review (SLR), is a type of secondary investigation in a research field that systematically
combines and evaluates scientific research to synthesize recent
information, critically discuss current initiatives, and detect
research patterns. SLRs use established procedures for conducting empirical research [33]. In particular, within Software
Engineering (SE), researchers have made efforts to provide a
comprehensive summary of methods devised for automating
the SLR procedure. With a methodical approach to searching,
they have reviewed the literature to shed light on different
approaches used to automate various aspects of the SLR
process [20].
In this context, our focus shifts to the work presented
in [19], which demonstrates how computer languages can
facilitate unsupervised ML for the synthesis and abstraction of
data sets taken from an SLR. This article skillfully showcases
the complementary roles that AI and ML techniques play in
coding, categorization, and synthesis of SLR data, utilizing the
qualitative method Deductive Qualitative Analysis [5].
While SLRs offer a clear and concise format for summarizing expertise in a field, they are not without challenges, such as the time required to complete them and the difficult task of assessing the integrity of primary research [35]. Recent
analysis has highlighted prevalent hazards associated with SLR
replication, emphasizing issues resulting from the absence of a
defined methodology [38]. The approach [33] divides the SLR
procedure into the following stages:
1) Formulating phase: The first aspect involves making
a strategy. Justification for conducting an SLR in a research
area ensures it addresses a gap and contributes to knowledge.
Research queries are formulated to define the purview of
the SLR and guide its evolution. These queries may adhere to predetermined structures, such as PICO (Population, Intervention, Comparison, and Outcome) or SPICE (Setting, Perspective, Intervention, Comparison, and Evaluation) [15].
In this stage, an evaluation procedure is designed, including a
comprehensive review technique applicable to each stage. The
search technique and its sources, such as science resources and
journals, are detailed in the protocol. Eligibility requirements
for article selection, data extraction, and quality evaluation
guidelines are also established.
2) Conducting phase: The second phase involves the execution of automated searches in databases and digital libraries.
Search strings are obtained from either the formulated research
queries or constructed using a supplementary method [50].
Additional sources, including grey literature and snowballing, are considered [42]. The former includes materials not formally published, such as dissertations and presentations. Snowballing involves discovering new works by examining the references and citations of previously discovered papers.
Relevant studies are identified by removing duplicates, evaluating candidates based on their titles and abstracts, and applying inclusion and exclusion criteria. These criteria specify the quality standards each article must meet to be included in the scope [60]. The selected primary studies are then analyzed to extract data, and summary statistics are obtained to synthesize and visualize the collected data (a minimal sketch of this selection step is given after the description of the phases).
3) Reporting phase: The third phase focuses on the reporting process and the evaluation of the final report’s completeness and quality. Authors determine the manner in which
the material is discussed and presented, as well as whether
the evaluation result is suitable for publication. Criteria are
considered to evaluate whether necessary data can be found in
the SLR report [44].
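To make the conducting-phase steps above concrete, the following is a minimal sketch, not taken from any of the surveyed tools, of deduplicating retrieved records and applying a simple title/abstract inclusion check; the record fields, keywords, and year threshold are illustrative assumptions.

import re

def normalize_title(title: str) -> str:
    # Lower-case and strip punctuation so near-identical titles collide on one key.
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    # Remove duplicates retrieved from several databases, preferring DOI matches.
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def passes_criteria(rec: dict, keywords: list[str], min_year: int = 2010) -> bool:
    # Crude title/abstract screen: keep recent records that mention any keyword.
    text = (rec.get("title", "") + " " + rec.get("abstract", "")).lower()
    return rec.get("year", 0) >= min_year and any(k in text for k in keywords)

records = [
    {"doi": "10.1000/x1", "title": "AI for SLR Screening", "abstract": "...", "year": 2022},
    {"doi": "10.1000/x1", "title": "AI for SLR Screening", "abstract": "...", "year": 2022},
    {"doi": "10.1000/x2", "title": "Bird Migration Atlas", "abstract": "...", "year": 2021},
]
candidates = deduplicate(records)
selected = [r for r in candidates if passes_criteria(r, ["screening", "systematic review"])]
print(len(candidates), len(selected))  # 2 unique candidates, 1 selected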
III. METHODOLOGY
A. Search Strategy
The search strategy employs a combination of automated
and human searching methods. Automated searches were
conducted using the following sources: ACM Library, IEEE
Xplore, Scopus, SpringerLink, and Web of Science. The
search criteria, designed to retrieve publications, include a
range of terms incorporating systematic review keywords and
automation-related terms. General terms associated with automation were used rather than an exhaustive list of specific
AI methods for two distinct purposes: (1) to avoid skewing
the findings in favor of certain methods, ensuring that less prevalent approaches are included in the final tally; and (2) to prevent
the creation of lengthy and complicated search strings that
may be challenging for databases to process. Title, keywords,
and abstracts were considered in the search criteria. The
resulting search string was easily adaptable for each data
source. Additionally, a manual search was conducted using
reverse snowballing. After reviewing the titles and abstracts of
the initial eight candidate papers, six were added to the final
list.
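As an illustration of how such a search string can be assembled from a group of systematic-review terms and a group of automation-related terms, the sketch below builds a boolean query; the specific terms and the helper function are illustrative assumptions, not the exact string used in this survey.

slr_terms = ['"systematic literature review"', '"systematic review"', '"literature review"']
automation_terms = ['"automation"', '"artificial intelligence"', '"machine learning"', '"text mining"']

def build_query(group_a: list[str], group_b: list[str]) -> str:
    # Two OR-groups joined by AND; the title/abstract/keyword restriction is applied
    # in each database's own interface, since field syntax differs per source.
    return f"({' OR '.join(group_a)}) AND ({' OR '.join(group_b)})"

print(build_query(slr_terms, automation_terms))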
Fifty prospective papers were identified and underwent
further evaluation to ensure their alignment with our research
aims. For this purpose, both exclusion and inclusion criteria
were developed. Papers written in languages other than English, those with unavailable full content, and publications
lacking a demonstrated peer review process were excluded.
The inclusion criteria set specific requirements for paper content. Each research paper must focus on automating multiple
steps of an SLR and discuss the usage of AI-based methods
for inclusion in the current survey. This general criterion is
further subdivided into mutually exclusive options:

1) The paper explains a novel algorithm, instrument, or method facilitating full or partial mechanization of the SLR;
2) The paper provides an examination of the relevance of AI in SLRs, along with a critique of the latest developments in this field of study; and
3) The paper presents a summary of SLR tools applicable to one or more phases.

B. Data Extraction

Once each primary research article has been identified, data extraction is performed following the guidelines outlined in [33]. One author reviews each article, with the assistance of a second reviewer in cases of ambiguity. The data extraction form includes meta-information such as authors and affiliations, research type, and publication year. The form also includes categories to define the AI approach followed in each paper. Specifically, each paper's content is summarized based on the following criteria (an illustrative sketch of the resulting extraction record is given after the list):

1) Phase and aim of the SLR: Each paper is classified according to the phase of SLR automation, and each phase's categorization is followed by a description of the particular step(s) involved in that phase.

2) AI domain and technique: The paper is assigned to one or multiple AI subfields, together with a concise explanation of the employed algorithm or technique. We also record whether the human factor is involved.

3) Experimental framework: Types of primary research include empirical, theoretical, application, and review. For empirical investigations, we compile the datasets used and the indicators employed for performance evaluation.

4) Repeatability: We note whether tools, datasets, or algorithms are supplied, and we verify the accessibility of any websites or repositories cited as supplementary material to ensure repeatability.
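The extraction form described above can be pictured as a structured record. The sketch below is purely illustrative; the field names and category values are assumptions rather than the exact form used.

from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    authors: list[str]
    affiliation: str
    research_type: str                 # empirical | theoretical | application | review
    publication_year: int
    slr_phase: str                     # formulating | conducting | reporting
    slr_step: str                      # e.g., "study selection"
    ai_domain: str                     # e.g., "ML", "NLP", "knowledge representation"
    technique: str                     # concise description of the algorithm used
    human_in_the_loop: bool            # whether the human factor is involved
    artifacts_available: bool          # tools, datasets, or code supplied
    evaluation_metrics: list[str] = field(default_factory=list)

paper = ExtractionRecord(
    authors=["A. Author"], affiliation="Example University",
    research_type="empirical", publication_year=2021,
    slr_phase="conducting", slr_step="study selection",
    ai_domain="ML", technique="active learning with a naive Bayes classifier",
    human_in_the_loop=True, artifacts_available=False,
    evaluation_metrics=["recall", "precision"],
)
print(paper.slr_phase, paper.ai_domain)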
IV. AI-BASED SUPPORT FOR THE LITERATURE REVIEW PROCESS
A literature review process involves both intellectual and mechanical tasks, prompting the development of AI-based technologies to alleviate the workload for potential
authors. AI technologies aim to handle time-consuming and
repetitive tasks, allowing authors to focus on interpretation,
intuitive leaps, and skills [72].
To provide readers with insight into the current state of
knowledge in this domain, we systematically evaluate each
stage of the literature review process, highlighting existing
AI-based tools and discussing the potential for further AI
assistance. The following agenda can outline opportunities for
additional industrial development and enhancement [69].
Table I presents a concise overview, indicating the extent to which each stage can be assisted by AI and guiding the reader toward relevant tools. Each entry provides a succinct summary capturing the essence of that stage's AI-assistive potential.
In particular, we conducted a comprehensive review of relevant literature on AI-based tools, consulting sources such as [1], [21], [26], [37].
TABLE I. AI-BASED TOOLS FOR STEPS IN THE REVIEW PROCESS

Step 1: Problem Formulation
  AI-based tools: (1) Resources for software development supporting thematic analyses based on LDA models [3]; (2) GUI applications and programming libraries supporting scientometric analyses [67].
  Potential for AI support: Moderate, with AI potentially pointing researchers to promising areas and questions or verifying research gaps.

Step 2: Literature Search
  AI-based tools: (1) TheoryOn enables ontology-based searches for constructs and construct relationships in behavioral theories [43]; (2) Litbaskets supports researchers in setting a manageable breadth with regard to the journals covered [9]; (3) LitSonar allows syntax translation of queries between database servers, as well as creating (journal coverage) reports [66].
  Potential for AI support: Very high, since the most important search methods consist of steps that are repetitive and time-consuming, that is, amenable to automation.

Step 3: Screening for Inclusion
  AI-based tools: (1) ASReview offers screening prioritization [75]; (2) the ADIT approach for researchers capable of designing and programming ML classifiers [40].
  Potential for AI support: (1) A significant opportunity for partly automated assistance in the initial screen, which requires many repetitive decisions; (2) substantial possibility for the second screen, which requires substantial expert judgement (particularly for ambiguous cases).

Step 4: Quality Assessment
  AI-based tools: (1) Statistical software packages (e.g., RevMan); (2) RobotReviewer for experimental research [48].
  Potential for AI support: Low to considerable potential for partially automated quality control.

Step 5: Data Extraction
  AI-based tools: (1) Software for data extraction and qualitative content analysis (e.g., NVivo and ATLAS.ti) offering AI-based functionality for qualitative coding, named entity recognition, and sentiment analysis; (2) WebPlotDigitizer and Graph2Data for extracting data from statistical plots.
  Potential for AI support: (1) Moderate for reviews requiring formal data extraction (descriptive reviews, scoping reviews, meta-analyses, and qualitative systematic reviews); (2) elevated for quantitative and discrete data points (e.g., sample sizes), low for detailed information that is ambiguous and open to multiple interpretations (e.g., theorizing and main results).

Step 6: Data Analysis and Interpretation
  AI-based tools: (1) Descriptive synthesis: tools for text mining [36], scientometric techniques, topic models [55], [63], and computational reviews aimed at stimulating conceptual contributions [4]; (2) theory building: examples of inductive (computationally intensive) theory development [8], [45], [56]; (3) tools for meta-analysis, such as RevMan and dmetar, used for hypothesis testing.
  Potential for AI support: (1) Very high for descriptive syntheses; (2) moderate for (inductive) theory development and theory testing; (3) low to non-existent for reviews adopting traditional and interpretive approaches.
Given the dynamic nature of AI technologies and the rapid evolution of AI tools, we focused our evaluation on those most pertinent to our primary objective
in the realm of Information Systems (IS) research. It is also
important to note that our review is not exhaustive; rather, its
purpose is to spotlight potential examples that can benefit IS
researchers. In the upcoming paragraphs, we take a focused
approach, examining AI-supported tasks individually. This allows us to provide insights into tools that authors can seamlessly integrate into a comprehensive data-processing toolchain.
A. Step 1: Problem Formulation
In the initial phase of a comprehensive literature review, authors bear the responsibility of not only defining and elucidating research topics but also clarifying the fundamental ideas and theories within the relevant field [69]. Furthermore,
scholars are advised to conduct a preliminary assessment of the
research gap, determining whether the gap has been adequately
addressed. Evaluating if the study’s issue offers an opportunity
for a substantial contribution that surpasses previous works
and determining its significance in filling the existing void are
crucial considerations in this phase [54], [62].
We envisage that AI can significantly contribute to the
synthesis of research issues, particularly in the phase focused
on identifying and validating open questions. With substantial advancements in the scientific domain, researchers have
made significant strides in pinpointing gaps in the current
body of knowledge and formulating plausible hypotheses.
In this context, we anticipate that social science researchers
can leverage and adapt these findings to enhance their own
work. For instance, revolutionary developments in automated
hypothesis generation and experimental testing have emerged
in biochemistry, particularly within fully automated labs [34].
Additionally, ML strategies have been employed in scientometric approaches, facilitating literature-based discoveries in
computer science [70]. These advancements underscore the
potential for AI to play a transformative role in issue synthesis
across various scientific disciplines.
Significant strides in information technology, particularly in
the realm of database inquiries, have garnered attention. Three
noteworthy resources underscore these recent advancements.
Notably, TheoryOn’s search engine [43] stands out. While
these advancements hold promise, especially for studies in the
social sciences, their ultimate influence on research methodologies remains to be seen. In general, these technological
developments may prompt investigators to identify areas warranting further exploration or additional research. However,
we acknowledge that, for problematization-driven research
that generates queries, a continued reliance on deliberative
democracy, particularly in the phase of problem identification,
may be necessary [2].
In addition to recognizing voids within existing studies
and areas calling for extensive exploration, AI holds the
potential to assist researchers in assessing whether these gaps
persist by locating prior assessments with similar or identical
content. However, it’s important to acknowledge that utilizing
AI in this context may introduce a degree of unpredictability
during the phases of discovery and verification, particularly
when identifying prior assessments with content resembling
the current study.
In conclusion, the support for this pioneering initiative in
AI-backed research is still in its infancy, marked by a limited number of documented approaches. Notably, much of the existing software runs as standalone code rather than relying on established graphical user interface tools. For researchers
proficient in programming, inspiration can be drawn from
exploring the intersections of various literature or study areas.
This exploration opens avenues for cross-disciplinary research
and identifies areas where further investigation is warranted.
To facilitate this exploration, researchers can leverage scientometric approaches and enhance their development [18],
[67]. Additionally, employing tools such as LDA topic models can contribute to the identification of potential research
directions [3]. As the field evolves, there is significant room
for growth, and researchers are encouraged to embrace the
interdisciplinary nature of AI-backed research to unlock its
full potential.
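As a hedged illustration of the LDA-based direction-finding mentioned above (cf. [3]), the following sketch fits a small topic model over a handful of abstracts using scikit-learn; the abstracts, number of topics, and parameter choices are invented for demonstration.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "Active learning reduces screening effort in systematic reviews.",
    "Ontology-based search supports construct identification in behavioral theories.",
    "Text mining of citations reveals emerging research fronts.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    # Print the five highest-weighted terms per topic as candidate research themes.
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top_terms)}")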
B. Step 2: Literature Search
In this stage, researchers embark on constructing a comprehensive corpus of published works utilizing a diverse set of
search techniques, including database queries, perusal of table
of contents, citation inquiries, and additional searches [69].
The objective of the literature search can vary, with authors
striving for either comprehensive, representative, or selective
coverage based on the review’s purpose [13].
Complex search strategies, involving multiple iterations and collaboration with artificial intelligence-based technologies, are devised from the corresponding search methods. Given the diverse nature of the knowledge retrieval
process, encompassing various data sources such as journals,
conference proceedings, books, and various forms of grey
literature, alongside concerns about data quality, authors must
employ appropriate data management strategies. These strategies should not only facilitate transparent reporting [61] but
also contribute to repeatability and reproduction [14].
Recent advancements in information technology, particularly in the realm of database inquiries, have been noteworthy.
Three notable resources stand out in this regard. Firstly,
TheoryOn’s search engine [43] enables researchers to conduct
ontology-based inquiries for distinct elements and interactions
between themes across different behavioral hypotheses. This
offers an alternative to traditional databases and presents a
more sophisticated approach to information retrieval. Secondly, Litbaskets [9] contributes to the development of search
methodologies by remotely estimating the likely number of
responses from a database search based on predefined phrases
across various journals with editable entries. Thirdly, LitSonar [66] adds automation to the search process by translating search terms for multiple bibliographic systems, including databases like EBSCO Digital Library, AIS eLibrary, and LexisNexis. This tool is particularly promising as it provides real-time updates on service availability, potentially identifying database embargoes (periods during which articles are not searchable) and alleviating challenges associated with database deficiencies.
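The query-translation idea behind tools like LitSonar [66] can be illustrated with a small sketch that renders one conceptual query in several simplified target syntaxes; the templates below are rough approximations for illustration and do not reproduce any vendor's actual syntax rules.

QUERY_TEMPLATES = {
    "scopus_like": "TITLE-ABS-KEY({a}) AND TITLE-ABS-KEY({b})",
    "wos_like":    "TS=({a}) AND TS=({b})",
    "generic":     "({a}) AND ({b})",
}

def translate(concept_a: str, concept_b: str, target: str) -> str:
    # Render a two-concept boolean query in a (simplified) target syntax.
    return QUERY_TEMPLATES[target].format(a=concept_a, b=concept_b)

for db in QUERY_TEMPLATES:
    print(db, "->", translate('"systematic review"', '"machine learning"', db))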
In a broader context, the literature search phase holds the
potential for automation, addressing numerous technological
tasks that researchers encounter. The expanding volume of research output, coupled with the imperative need for efficiency
and accuracy, underscores the significance of incorporating
robotics and AI assistance in activities that are predominantly
mechanical in nature [10], [25]. The integration of these
technologies not only expedites the literature search process
but also mitigates the inefficient use of faculty time resulting
from manual tasks. This becomes especially critical in an era
marked by the rapid proliferation of scholarly content.
C. Step 3: Screening for Inclusion
In this pivotal phase of the literature review, the authors
employ a systematic screening process to differentiate between
relevant and irrelevant papers. Conventionally, this phase is
bifurcated into a preliminary assessment based on titles and
abstracts, followed by a more rigorous second screening based
on full-texts [69].
Manual screening, involving the meticulous examination of
hundreds or thousands of documents, can be mentally taxing,
potentially hindering the accurate identification of challenging
scenarios. To mitigate this challenge, researchers are advised to
conduct an initial screening where obviously irrelevant articles
(based on titles and abstracts) are excluded. Articles posing
difficulty are intentionally saved for a more comprehensive
evaluation in the second round of screening.
The second screening involves a smaller sample, allowing
for efficient screening (after excluding the majority in the
initial screen). This stage involves a thorough examination of
materials, application of predefined stringent exclusion criteria,
and simultaneous independent evaluations, with group decisions on borderline cases. The screening process, particularly
in hypothesis-testing reviews, requires stringent scrutiny, as
inclusion errors could significantly impact the study outcomes
[69].
Over time, the landscape of screening tools has evolved
with the incorporation of AI-based tools [21]. Among them,
ASReview [75], a tool with minimal limitations, stands out as a promising option for IS researchers. Unlike many screening tools, it is not tied to health sciences databases and does not require PubMed IDs. This recently developed tool is noteworthy for its transparency (released under the Apache-2.0 License), script accessibility (implemented in Python), and the
ability to easily integrate new features. ASReview employs various ML classifiers, including naive Bayes, logistic regression, and random forest classifiers. It leverages the researcher's initial screening decisions to enhance the accuracy of subsequent relevance rankings. Researchers receive a ranked
catalog of papers (titles and descriptions), facilitating efficient
processing of an ordered compilation. The tool even allows
for automated exclusion after screening a specific number
of papers consecutively, streamlining the screening process.
Papers with borderline relevance can be deferred for later
assessment, guided by their content [75].
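The ranking behaviour described above can be sketched, in a much simplified form that does not reproduce ASReview's implementation, as a classifier trained on a few initial human decisions and used to order the remaining records; the toy titles and labels below are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# A handful of human screening decisions (1 = relevant, 0 = irrelevant).
labeled_texts = [
    "machine learning for citation screening in systematic reviews",
    "automated study selection using active learning",
    "deep sea coral ecology survey",
    "volcanic rock classification field study",
]
labels = [1, 1, 0, 0]

unlabeled_texts = [
    "text mining to prioritise abstracts in systematic reviews",
    "bird migration patterns in northern europe",
]

vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_texts)
X_unlabeled = vectorizer.transform(unlabeled_texts)

clf = MultinomialNB().fit(X_labeled, labels)
scores = clf.predict_proba(X_unlabeled)[:, 1]   # estimated probability of relevance

# Present the remaining records in descending order of predicted relevance.
for text, score in sorted(zip(unlabeled_texts, scores), key=lambda t: t[1], reverse=True):
    print(f"{score:.2f}  {text}")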
For researchers with coding proficiency, customization of
the discourse method is possible [40]. In situations where the
evaluation of popular theories becomes impractical due to the
sheer volume of pertinent documents, the prospect of randomly
selecting theory-contributing publications is proposed. This
approach leverages algorithms used in ML to identify a subset
of relevant journals from the larger pool, thereby offering a
randomized sample typical of how scientists articulate their
work during literature reviews [40].
Anticipating the AI-support potential, the first screening demonstrates high efficacy, while the potential for the second screening is moderate. The initial screen, which involves many repetitive decisions, is more amenable to digitization and AI assistance. This presupposes computers with proficient reading and comprehension capabilities for brief descriptions and titles.
Conversely, the subsequent screen deals with the remaining
instances and may prove challenging due to the less standardized nature of IS research. Unlike fields like Medical
and Biological Sciences, IS research lacks commonly used
categories for constructs, standard keyword vocabulary (e.g.,
MeSH terms), and consistently descriptive paper titles, making
effective classification challenging [58]. This challenge is not
exclusive to machines and equally affects human reviewers.
Screening and search, treated as information retrieval tasks,
should primarily be evaluated based on recall, representing
the proportion of successfully retrieved relevant papers. Traditionally, literature reviews aimed for high recall, resulting
in exhaustive searches, low precision, and increased screening
burdens [43]. AI-supported ontology-based searches, such as
those facilitated by ASReview, hold the promise of efficiently
alleviating a portion of the screening load by increasing
precision.
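As a small worked example of this evaluation framing, the following computes recall and precision for a hypothetical screen; the counts are invented.

def recall(retrieved_relevant: int, total_relevant: int) -> float:
    # Share of all relevant papers that the search or screen actually captured.
    return retrieved_relevant / total_relevant

def precision(retrieved_relevant: int, total_retrieved: int) -> float:
    # Share of retrieved papers that turned out to be relevant.
    return retrieved_relevant / total_retrieved

# Hypothetical screen: 1,000 records retrieved, 48 of the 50 truly relevant ones among them.
print(f"recall    = {recall(48, 50):.2f}")       # 0.96
print(f"precision = {precision(48, 1000):.3f}")  # 0.048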
The screening processes remain among the most time-consuming aspects of a literature review process [10]. When considering the potential of AI assistance for these steps, it is crucial to recognize that the reliability of manual screening methods should not be overstated, as even screenings
conducted by experts exhibit a disagreement rate of 10%
on average [81]. Augmenting researchers’ screening activities
with AI tools can help identify inconsistent and potentially
erroneous screening decisions, enhancing the reliability of the
screening process.
D. Step 4: Quality Assessment

The evaluation of major empirical research for methodological flaws and potential sources of bias is an integral part of the quality assessment process [23], [33], [69]. This phase aims to gauge the extent to which the findings of evaluations, particularly those intended for theory testing, may be influenced by various types of bias, such as selection bias, mortality bias, and evaluation bias. Parallel and independent execution of these procedures is recommended to ensure high dependability [69].
The prospect of AI-based tools contributing to these processes is considered low to moderate for two primary reasons. Firstly, the task of judging the quality
of a method is challenging, requiring expert opinion and often
presenting difficulties in achieving high inter-coder agreement
[22]. Secondly, IS reviews, whether quantitative or qualitative,
typically involve manageable numbers of samples, making
manual assessments feasible.
For researchers conducting meta-analyses and systematic
literature searches, conventional tools like RevMan, adhering
to standards for evaluating qualitative research methodology
and risk of bias [7], or equivalent statistical application environments such as R and SPSS are commonly used. Additionally, AI-based applications like RobotReviewer [48] offer relevance to IS meta-analyses. RobotReviewer, focusing on risk of
bias assessment in randomized controlled trials within the life
sciences, serves as an exemplary instance of explainable AI.
It enables scholars to trace ratings in each bias area back to
their source within the full-text document. This transparency
contributes to the reliability and interpretability of the bias
assessment process.
E. Step 5: Data Extraction
The extraction of data, both qualitative and quantitative, involves the identification of relevant information and its categorization into a (semi-)structured code sheet [69], [79]. This step is more prominent in descriptive, scoping, and theory-testing reviews than in narrative reviews and assessments of theoretical development, which tend to be more selective and interpretive. Commonly utilized software for comprehensive qualitative data analysis includes ATLAS.ti and NVivo, which
are increasingly incorporating ML and NLP techniques. These
techniques include methods for information extraction from
data tables, automation of descriptive coding, Named Entity
Recognition (NER), sentiment analysis, and analysis of statistical plots. Examples of tools for extracting data from statistical
plots include WebPlotDigitizer and Graph2Data.
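For the discrete, quantitative elements judged most tractable here (e.g., sample sizes), even simple rule-based extraction can help; the following sketch uses a single illustrative regular expression and is far cruder than dedicated extraction tools.

import re

SAMPLE_SIZE = re.compile(r"\b[nN]\s*=\s*(\d+)")

def extract_sample_sizes(text: str) -> list[int]:
    # Return all "n = <number>" style sample sizes found in a passage.
    return [int(m) for m in SAMPLE_SIZE.findall(text)]

abstract = ("We surveyed software practitioners (n = 142) and replicated the "
            "instrument with students (N=57).")
print(extract_sample_sizes(abstract))  # [142, 57]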
The potential for AI support in this step is anticipated to
be moderate. Future advancements may focus on improving
the efficiency of information extraction, highlighting crucial
elements in an article, and facilitating the organization of information in suitable databases. However, complete automation
of more intricate data elements is not expected in the near
future. Despite the more standardized disclosure practices in
the medical professions, tools for extracting features such as
Population, Intervention, Comparison, and Outcome (PICO)
criteria are still in their early stages of development [26].
F. Step 6: Data Analysis and Interpretation
The concluding stage of the evaluation process in literature
reviews can take various forms depending on the type of
assessment [69]. Some literature reviews emphasize intricate
scenarios that provide insights and profound hermeneutic interpretations, while others aim to eliminate subjectivity that
might compromise the reliability of summary statistics and
generalizations.
IS researchers employ various instruments for data analysis, depending on the main objectives for knowledge development [64]. For comprehensive synthesis, several well-established techniques are available, including text-mining
tools [36] and instruments that utilize scientometric, computational, or Latent Dirichlet Allocation (LDA) models to analyze
and visualize themes, theories, and research communities [6],
[17], [41], [55], [68], [70], [77], [78]. For example, text-mining
tools can provide descriptive insights based on topic modeling,
offering a promising approach to conceptual contributions [39],
[53], [63]. In the realm of IS, meta-analysis programs and
libraries, such as RevMan and the R package dmetar [7],
are utilized for putting hypotheses to the test. Future AI-based technologies supporting data analysis should consider
the diverse approaches available. While AI can efficiently ease
certain aspects of descriptive evaluations through topic modeling, the creative and unstructured nature of theory development
poses challenges for AI-led theory-building efforts [8], [45],
[56].
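To make the hypothesis-testing use case concrete, the following is a minimal fixed-effect (inverse-variance) meta-analysis in plain Python, standing in for the kind of pooling RevMan or dmetar performs; the effect sizes and standard errors are invented for illustration.

import math

effects = [0.30, 0.45, 0.25]   # study effect sizes (e.g., standardized mean differences)
ses     = [0.10, 0.15, 0.12]   # their standard errors

weights = [1 / se**2 for se in ses]                                   # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)  # weighted mean effect
pooled_se = math.sqrt(1 / sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}  (95% CI {ci_low:.3f} to {ci_high:.3f})")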
The inductive method of IS theory development seems
most amenable to AI assistance, although current examples
in the behavioral research domain may not match the ingenuity and originality exhibited by exceptional theoretical and
historical context articles [4], [39], [53], [63]. It is crucial to
emphasize that AI’s contribution to theory development lies
not only in identifying connections but also in elucidating
the “why” behind these connections and establishing fundamental philosophical foundations of justification [25], [82].
This aspect remains an unrestricted challenge for future theory
advancement based on AI.
Fig. 1 illustrates the three layers of the SLR-centric
approach, emphasizing the goals for research, design, and
action within the Information Systems (IS) domain. This three-layered SLR-centric model within the IS domain underscores
the interplay between infrastructure, methodologies/tools, and
actual research practices, showcasing the integral components
for successful literature reviews.
Fig. 1. SLR-centric research, design, and action.

Quality assurance is integral to ensuring the accuracy, dependability, and consistency of data collected and analyzed during the literature review [23], [33], [69]. Smart technologies, incorporating automation and intelligent algorithms, enhance the efficiency and effectiveness of literature review operations [30], [31], [46], [59], [73], [80]. Improved databases are essential for efficiently storing, maintaining, and retrieving literature review data [14]. Integrating these features into the infrastructure supporting SLRs substantially enhances the quality, efficiency, and insights of the review process.
The standardization debate in the realm of Information Systems (IS) revolves around whether standardized approaches,
methodologies, and technology should be embraced across
multiple IS applications or whether variety and flexibility
should be prioritized [32]. The concept of sharing supplementary research outputs emphasizes the necessity of sharing not
only traditional research publications but also other significant
outputs contributing to the research process. This includes
datasets, code, methodology, negative outcomes, and other
relevant items [64].
V. OPEN ISSUES AND CHALLENGES
A. The Emphasis is Primarily on One Activity
Research on automating SLR processes with AI is heavily skewed towards the paper selection procedure, especially the screening of candidate papers. While this activity is certainly time-consuming, applying AI to the remaining tasks within the SLR process also requires attention. Preliminary tasks and AI-driven writing activities (e.g., drafting research questions, specifying exclusion/inclusion criteria, and presenting SLR reports) are areas that need further development.
B. More Research Needs to Be Done on AI Methods
While there is a broad range of AI fields and techniques,
certain ones have not yet been employed in SLR automation.
Methods for optimization and search, for instance, have not been thoroughly investigated as potential solutions for SLR-related tasks. These strategies, traditionally used to resolve planning issues, may find application in prioritizing resources during the initial stages, such as selecting the best databases or assigning papers to reviewers based on their skills. Compared to
ML, knowledge representation and NLP are less frequent, and
most proposals appear to be in early stages. Therefore, there
is a need for more tools and frameworks to develop solutions
based on these methods.
C. Additional Active Human Participation can Benefit Artificial Intelligence
The cooperation between humans and AI methods or
instruments is currently limited in terms of scope and nature.
Under an active learning strategy, the human role is primarily
focused on providing labels for paper selection. However,
the organizing and composing phases, which demand greater
human capabilities, might benefit from engaging artificial
intelligence. Involving people in this process could result in
additional positive outcomes, such as tailoring the results to
their preferences.
D. The Adoption of AI for SLR Automation can be Enhanced
Most current successful ideas are rooted in either the medical or technological domains, with specific domain taxonomies
or concepts sometimes used to construct the list of capabilities.
Full replicability of genuine systematic literature reviews is
not always achieved, and the lack of benchmarks remains a
significant obstacle. Evaluating AI techniques across a broader
range of SLRs and expanding the scope of discussed issues
requires additional development.
E. Users of SLR Automation may Lack Expertise in Artificial
Intelligence
Many of the ML techniques examined thus far, such as support vector machines (SVMs) and neural networks, are commonly referred to as “black-box” techniques. The challenge
arises from insufficient confidence in automated conclusions
due to the participation of scientists from diverse disciplines
in SLRs, who may not necessarily be experts in AI. There
has been limited exploration of models using human-readable
code, including simple decision trees and rule-based systems.
Furthermore, the utilization of contemporary explainable techniques has the potential to enhance the outcomes of black-box
artificial intelligence solutions developed in this field.
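As a sketch of the human-readable alternative mentioned above, the following trains a shallow decision tree on toy screening features and prints its rules; the features, data, and labels are invented for illustration and do not correspond to any tool surveyed here.

from sklearn.tree import DecisionTreeClassifier, export_text

# Toy binary features per candidate paper: [mentions_slr, mentions_ml, published_after_2015]
X = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = include, 0 = exclude

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["mentions_slr", "mentions_ml", "published_after_2015"]))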
VI. CONCLUSIONS AND FUTURE WORK
A. Conclusions
Literature reviews are a laborious and error-prone process that can be streamlined with the assistance of artificial intelligence (AI). It is not surprising that
not all tasks involved in Systematic Literature Review (SLR)
planning, execution, and reporting have been fully automated
to date. Our research indicates a strong inclination to leverage
AI, particularly Machine Learning (ML), to facilitate the paper
screening process, which entails sifting through thousands of
candidate papers to identify relevant ones. Natural Language
Processing (NLP) and ontologies prove especially beneficial
in handling semantic data for various tasks, although there is
a paucity of studies in these domains.
The findings highlight the need for a strategic roadmap
for AI-led research, development, and implementation across
different dimensions. Our primary objective is to foster the
growth of a vibrant AI-based Literature Reviews (AILR)
culture in the Information Systems (IS) field, offering enriching experiences for researchers at all stages of the research
process—from authors to reviewers to industry professionals.
The potential for extending AI-based tools and approaches
beyond the IS field, particularly for design science researchers,
holds great promise. We envision a future where IS researchers
actively engage in discussions and reflections on how to
optimally harness AI to advance their work.
B. Future Work
Future work encompasses a comprehensive and forward-thinking strategy for advancing the field of AI-based literature reviews. Initially, the focus is directed towards extending the application of AI across various phases of the systematic literature review (SLR) process. While the current emphasis is on paper screening, the call is for a more encompassing integration, suggesting
a desire to leverage AI capabilities throughout the entire
SLR workflow. This expansion could lead to more nuanced
and sophisticated automation, enhancing the efficiency and
effectiveness of literature review processes.
Additionally, we underscore the importance of exploring advanced technologies, such as Natural Language
Processing (NLP) and ontologies, for semantic data analysis
in the context of literature reviews [16]. This suggests an
aspiration to move beyond basic automation and delve into
the realms of semantic understanding, potentially enabling AI
systems to discern and interpret the underlying meaning of
academic content. Moreover, the envisioned future involves
broadening the application of AI-based tools and methodologies to domains beyond Information Systems (IS), emphasizing
the need for transferability and interdisciplinary collaboration.
This push for broader applicability aligns with the broader
trend of fostering cross-disciplinary knowledge exchange and
collaboration.
Furthermore, the future work envisions a significant enhancement of the user experience for researchers engaging
with AI tools in the SLR process. This user-centric approach
aims to make AI more accessible to individuals with varying
skill levels, fostering inclusivity and democratizing the use of
advanced technologies in academia. Simultaneously, there is a
recognition of the importance of establishing benchmarks and
evaluation criteria to assess the effectiveness and efficiency of
AI-driven SLR processes. This emphasis on standardization
reflects a commitment to ensuring robust and comparable
outcomes in the application of AI methodologies to literature
reviews. Finally, ethical considerations emerge as a crucial
aspect of the envisioned future work, with a call to address
ethical implications and develop guidelines for responsible
AI implementation in research processes. This reflects a conscientious approach towards deploying AI in a manner that
upholds ethical standards and promotes responsible conduct
in academic research. In summary, the future work outlined here is characterized by a holistic vision, encompassing
technological advancements, usability improvements, interdisciplinary applications, ethical considerations, and a commitment to standardization in the realm of AI-based literature
reviews.
REFERENCES
[1] A. Al-Zubidy, J. C. Carver, D. P. Hale, and E. E. Hassler. Vision
for slr tooling infrastructure: Prioritizing value-added requirements.
Information and Software Technology, 91:72–81, 2017.
[2] M. Alvesson and J. Sandberg. Generating research questions through
problematization. Academy of management review, 36(2):247–271,
2011.
[3] D. Antons and C. F. Breidbach. Big data, big insights? Advancing
service innovation and design with machine learning. Journal of Service
Research, 21(1):17–39, 2018.
[4] D. Antons, C. F. Breidbach, A. M. Joshi, and T. O. Salge. Computational
literature reviews: Method, algorithms, and roadmap. Organizational
Research Methods, 26(1):107–138, 2023.
[5] C. F. Atkinson. Cheap, quick, and rigorous: Artificial intelligence and
the systematic literature review. Social Science Computer Review, page
08944393231196281, 2023.
[6] B. Balducci and D. Marinova. Unstructured data in marketing. Journal
of the Academy of Marketing Science, 46:557–590, 2018.
[7] L. Bax, L.-M. Yu, N. Ikeda, and K. G. Moons. A systematic comparison
of software dedicated to meta-analysis of causal studies. BMC medical
research methodology, 7:1–9, 2007.
[8] N. Berente, S. Seidel, and H. Safadi. Research commentary—data-driven computationally intensive theory development. Information
Systems Research, 30(1):50–64, 2019.
[9] S. Boell and B. Wang. An IT artifact supporting exploratory literature searches. In Australasian conference on information systems.
http://www.litbaskets.io. Accessed, volume 21, 2021.
[10] J. C. Carver, E. Hassler, E. Hernandes, and N. A. Kraft. Identifying
barriers to the systematic literature review process. In 2013 ACM/IEEE
international symposium on empirical software engineering and measurement, pages 203–212. IEEE, 2013.
[11] A. L. Chapman, L. C. Morgan, and G. Gartlehner. Semi-automating
the manual literature search for systematic reviews increases efficiency.
Health Information & Libraries Journal, 27(1):22–27, 2010.
[12] A. M. Cohen, W. R. Hersh, K. Peterson, and P.-Y. Yen. Reducing
workload in systematic review preparation using automated citation
classification. Journal of the American Medical Informatics Association,
13(2):206–219, 2006.
[13] H. M. Cooper. Organizing knowledge syntheses: A taxonomy of
literature reviews. Knowledge in society, 1(1):104, 1988.
[14] W. A. Cram, M. Templier, and G. Paré. (Re)considering the concept
of literature review reproducibility. Journal of the Association for
Information Systems, 21(5):10, 2020.
[15] K. S. Davies. Formulating the evidence based practice question: a
review of the frameworks. Evidence Based Library and Information
Practice, 6(2):75–80, 2011.
[16] G. Drakopoulos, A. Kanavos, P. Mylonas, S. Sioutas, and D. Tsolis.
Towards a framework for tensor ontologies over neo4j: Representations
and operations. In 8th International Conference on Information,
Intelligence, Systems & Applications (IISA), pages 1–6. IEEE, 2017.
[17] E. Dritsas, M. Trigka, G. Vonitsanos, A. Kanavos, and P. Mylonas.
Aspect-based community detection of cultural heritage streaming data.
In 12th International Conference on Information, Intelligence, Systems
& Applications (IISA), pages 1–4. IEEE, 2021.
[18] J. A. Evans and J. G. Foster. Metaknowledge. Science, 331(6018):721–
725, 2011.
[19] K. R. Felizardo and J. C. Carver. Automating systematic literature
review. Contemporary empirical methods in software engineering,
pages 327–355, 2020.
[20] K. R. Felizardo, É. F. de Souza, B. M. Napoleão, N. L. Vijaykumar,
and M. T. Baldassarre. Secondary studies in the academic context:
A systematic mapping and survey. Journal of Systems and Software,
170:110734, 2020.
[21] H. Harrison, S. J. Griffin, I. Kuhn, and J. A. Usher-Smith. Software
tools to support title and abstract screening for systematic reviews in
healthcare: an evaluation. BMC medical research methodology, 20:1–
12, 2020.
[22] L. Hartling, M. Ospina, Y. Liang, D. M. Dryden, N. Hooton, J. K.
Seida, and T. P. Klassen. Risk of bias versus quality assessment of
randomised controlled trials: cross sectional study. Bmj, 339, 2009.
[23] J. P. Higgins, J. Thomas, J. Chandler, M. Cumpston, T. Li, M. J.
Page, and V. A. Welch. Cochrane handbook for systematic reviews
of interventions. John Wiley & Sons, 2019.
[24] M.-S. James, C. Marrissa, S. Mark, B. Anthea, et al. Systematic
approaches to a successful literature review. Systematic Approaches
to a Successful Literature Review, pages 1–100, 2021.
[25] C. D. Johnson, B. C. Bauer, and F. Niederman. The automation of management and business science. Academy of Management Perspectives,
35(2):292–309, 2021.
[26] S. R. Jonnalagadda, P. Goyal, and M. D. Huffman. Automating
data extraction in systematic reviews: a systematic review. Systematic
reviews, 4(1):1–16, 2015.
[27] A. Kanavos, N. Antonopoulos, I. Karamitsos, and P. Mylonas. A comparative analysis of tweet analysis algorithms using natural language
processing and machine learning models. In 18th International Workshop on Semantic and Social Media Adaptation and Personalization
(SMAP), pages 1–6. IEEE, 2023.
[28] A. Kanavos, C. Makris, Y. Plegas, and E. Theodoridis. Ranking web
search results exploiting wikipedia. International Journal on Artificial
Intelligence Tools, 25(3):1650018:1–1650018:26, 2016.
[29] A. Kanavos, E. Theodoridis, and A. K. Tsakalidis. Extracting knowledge from web search engine results. In 24th International Conference
on Tools with Artificial Intelligence (ICTAI), pages 860–867. IEEE
Computer Society, 2012.
[30] I. Karamitsos, M. Papadaki, and N. B. Al Barghuthi. Design of the
blockchain smart contract: A use case for real estate. Journal of
Information Security, 9(3):177–190, 2018.
[31] I. Karamitsos, M. Papadaki, K. Al-Hussaeni, and A. Kanavos. Transforming airport security: Enhancing efficiency through blockchain smart
contracts. Electronics, 12(21):4492, 2023.
[32] I. Karydis, A. Kanavos, S. Sioutas, M. Avlonitis, and N. I. Karacapilidis.
Multimedia content’s brokerage: An information system based on lesim.
International Journal of E-Services and Mobile Applications, 12(2):40–
58.
[33] S. Keele et al. Guidelines for performing systematic literature reviews
in software engineering, 2007.
[34] R. D. King, J. Rowland, S. G. Oliver, M. Young, W. Aubrey, E. Byrne,
M. Liakata, M. Markham, P. Pir, L. N. Soldatova, et al. The automation
of science. Science, 324(5923):85–89, 2009.
[35] B. Kitchenham and P. Brereton. A systematic review of systematic
review process research in software engineering. Information and
software technology, 55(12):2049–2075, 2013.
[36] V. B. Kobayashi, S. T. Mol, H. A. Berkers, G. Kismihók, and D. N.
Den Hartog. Text mining in organizational research. Organizational
research methods, 21(3):733–765, 2018.
[37] C. Kohl, E. J. McIntosh, S. Unger, N. R. Haddaway, S. Kecke,
J. Schiemann, and R. Wilhelm. Online tools supporting the conduct
and reporting of systematic reviews and systematic maps: a case study
on cadima and review of existing tools. Environmental Evidence, 7:1–
17, 2018.
[38] J. Krüger, C. Lausberger, I. von Nostitz-Wallwitz, G. Saake, and
T. Leich. Search. review. repeat? An empirical study of threats to
replicating slr searches. Empirical Software Engineering, 25:627–677,
2020.
[39] M. Kunc, M. J. Mortenson, and R. Vidgen. A computational literature
review of the field of system dynamics from 1974 to 2017. Journal of
Simulation, 12(2):115–127, 2018.
[40] K. R. Larsen, D. Hovorka, A. Dennis, and J. D. West. Understanding the
elephant: The discourse approach to boundary identification and corpus
construction for theory review articles. Journal of the association for
information systems, 20(7):15, 2019.
[41] C. Laurell, C. Sandström, A. Berthold, and D. Larsson. Exploring
barriers to adoption of virtual reality through social media analytics
and machine learning–an assessment of technology, network, price and
trialability. Journal of Business Research, 100:469–474, 2019.
[42] C. Lefebvre, E. Manheimer, and J. Glanville. Searching for studies.
Cochrane handbook for systematic reviews of interventions: Cochrane
book series, pages 95–150, 2008.
J. Li, K. Larsen, and A. Abbasi. Theoryon: A design framework and
system for unlocking behavioral knowledge through ontology learning.
MIS Quarterly, 44(4), 2020.
A. Liberati, D. G. Altman, J. Tetzlaff, C. Mulrow, P. C. Gøtzsche, J. P.
Ioannidis, M. Clarke, P. J. Devereaux, J. Kleijnen, and D. Moher. The
prisma statement for reporting systematic reviews and meta-analyses
of studies that evaluate health care interventions: explanation and
elaboration. Annals of internal medicine, 151(4):W–65, 2009.
A. Lindberg. Developing theory through integrating human and machine
pattern recognition. Journal of the Association for Information Systems,
21(1):7, 2020.
A. K. Lingaraju, M. Niranjanamurthy, P. Bose, B. Acharya, V. C.
Gerogiannis, A. Kanavos, and S. Manika. Iot-based waste segregation
with location tracking and air quality monitoring for smart cities. Smart
Cities, 6(3):1507–1522, 2023.
C. Marshall, P. Brereton, and B. Kitchenham. Tools to support
systematic reviews in software engineering: a feature analysis. In
Proceedings of the 18th international conference on evaluation and
assessment in software engineering, pages 1–10, 2014.
[48] I. J. Marshall, J. Kuiper, and B. C. Wallace. RobotReviewer: Evaluation of a system for automatically assessing bias in clinical trials. Journal of the American Medical Informatics Association, 23(1):193–201, 2016.
[49] I. J. Marshall and B. C. Wallace. Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews, 8:1–10, 2019.
[50] G. D. Mergel, M. S. Silveira, and T. S. da Silva. A method to support search string building in systematic literature reviews through visual text mining. In Proceedings of the 30th Annual ACM Symposium on Applied Computing, pages 1594–1601, 2015.
[51] A. Mohasseb, B. Aziz, and A. Kanavos. SMS spam identification and risk assessment evaluations. In 16th International Conference on Web Information Systems and Technologies (WEBIST), pages 417–424, 2020.
[52] A. Mohasseb, M. Bader-El-Den, A. Kanavos, and M. Cocea. Web queries classification based on the syntactical patterns of search types. In 19th International Conference on Speech and Computer (SPECOM), volume 10458 of Lecture Notes in Computer Science, pages 809–819. Springer, 2017.
[53] M. J. Mortenson and R. Vidgen. A computational literature review of the technology acceptance model. International Journal of Information Management, 36(6):1248–1259, 2016.
[54] C. Müller-Bloch and J. Kranz. A framework for rigorously identifying research gaps in qualitative literature reviews. 2015.
[55] S. Nakagawa, G. Samarasinghe, N. R. Haddaway, M. J. Westgate, R. E. O'Dea, D. W. Noble, and M. Lagisz. Research weaving: Visualizing the future of research synthesis. Trends in Ecology & Evolution, 34(3):224–238, 2019.
[56] L. K. Nelson. Computational grounded theory: A methodological framework. Sociological Methods & Research, 49(1):3–42, 2020.
[57] B. K. Olorisade, E. de Quincey, P. Brereton, and P. Andras. A critical analysis of studies that address the use of text mining for citation screening in systematic reviews. In Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering, pages 1–11, 2016.
[58] A. O'Mara-Eves, J. Thomas, J. McNaught, M. Miwa, and S. Ananiadou. Using text mining for study identification in systematic reviews: A systematic review of current approaches. Systematic Reviews, 4(1):1–22, 2015.
[59] T. Panagiotakopoulos, D. P. Vlachos, T. V. Bakalakos, A. Kanavos, and A. Kameas. A FIWARE-based IoT framework for smart water distribution management. In 12th International Conference on Information, Intelligence, Systems & Applications (IISA), pages 1–6. IEEE, 2021.
[60] D. Papaioannou, A. Sutton, and A. Booth. Systematic approaches to a successful literature review. pages 1–336, 2016.
[61] G. Paré, M. Tate, D. Johnstone, and S. Kitsiou. Contextualizing the twin concepts of systematicity and transparency in information systems literature reviews. European Journal of Information Systems, 25:493–508, 2016.
[62] S. Rivard. Editor's comments: The ions of theory construction. 2014.
[63] T. Schmiedel, O. Müller, and J. Vom Brocke. Topic modeling as a strategy of inquiry in organizational research: A tutorial with an application example on organizational culture. Organizational Research Methods, 22(4):941–968, 2019.
[64] G. Schryen, G. Wagner, A. Benlian, and G. Paré. A knowledge development perspective on literature reviews: Validation of a new typology in the IS field. Communications of the AIS, 46, 2020.
[65] C. Stansfield, A. O'Mara-Eves, and J. Thomas. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges. Research Synthesis Methods, 8(3):355–365, 2017.
[66] B. Sturm and A. Sunyaev. Design principles for systematic search systems: A holistic synthesis of a rigorous multi-cycle design science research journey. Business & Information Systems Engineering, 61:91–111, 2019.
[67] D. R. Swanson and N. R. Smalheiser. An interactive system for finding complementary literatures: A stimulus to scientific discovery. Artificial Intelligence, 91(2):183–203, 1997.
[68] W. L. Tate, L. M. Ellram, and J. F. Kirchoff. Corporate social responsibility reports: A thematic analysis related to supply chain management. Journal of Supply Chain Management, 46(1):19–44, 2010.
[69] M. Templier and G. Paré. Transparency in literature reviews: An assessment of reporting practices across review types and genres in top IS journals. European Journal of Information Systems, 27(5):503–550, 2018.
[70] M. Thilakaratne, K. Falkner, and T. Atapattu. A systematic review on literature-based discovery: General overview, methodology, & statistical analysis. ACM Computing Surveys (CSUR), 52(6):1–34, 2019.
[71] M. Torres Torres and C. E. Adams. RevManHAL: Towards automatic text generation in systematic reviews. Systematic Reviews, 6:1–7, 2017.
[72] G. Tsafnat, P. Glasziou, M. K. Choong, A. Dunn, F. Galgani, and E. Coiera. Systematic review automation technologies. Systematic Reviews, 3:1–15, 2014.
[73] G. Tsaramirsis, I. Karamitsos, and C. Apostolopoulos. Smart parking: An IoT application for smart city. In 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), pages 1412–1416. IEEE, 2016.
[74] A. Van Altena, R. Spijker, and S. Olabarriaga. Usage of automation tools in systematic reviews. Research Synthesis Methods, 10(1):72–82, 2019.
[75] R. Van De Schoot, J. De Bruin, R. Schram, P. Zahedi, J. De Boer, F. Weijdema, B. Kramer, M. Huijts, M. Hoogerwerf, G. Ferdinands, et al. An open source machine learning framework for efficient and transparent systematic reviews. Nature Machine Intelligence, 3(2):125–133, 2021.
[76] R. van Dinter, B. Tekinerdogan, and C. Catal. Automation of systematic literature reviews: A systematic literature review. Information and Software Technology, 136:106589, 2021.
[77] W. van Zoonen and G. Toni. Social media research: The application of supervised machine learning in organizational communication research. Computers in Human Behavior, 63:132–141, 2016.
[78] G. Vonitsanos, A. Kanavos, A. Mohasseb, and D. Tsolis. A NoSQL approach for aspect mining of cultural heritage streaming data. In 10th International Conference on Information, Intelligence, Systems and Applications (IISA), pages 1–4. IEEE, 2019.
[79] G. Vonitsanos, A. Kanavos, P. Mylonas, and S. Sioutas. A NoSQL database approach for modeling heterogeneous and semi-structured information. In 9th International Conference on Information, Intelligence, Systems and Applications (IISA), pages 1–8. IEEE, 2018.
[80] G. Vonitsanos, T. Panagiotakopoulos, A. Kanavos, and A. K. Tsakalidis. Forecasting air flight delays and enabling smart airport services in Apache Spark. In Artificial Intelligence Applications and Innovations (AIAI), volume 628 of IFIP Advances in Information and Communication Technology, pages 407–417. Springer, 2021.
[81] Z. Wang, T. Nayfeh, J. Tetzlaff, P. O'Blenis, and M. H. Murad. Error rates of human reviewers during abstract screening in systematic reviews. PLoS ONE, 15(1):e0227742, 2020.
[82] D. A. Whetten. What constitutes a theoretical contribution? Academy of Management Review, 14(4):490–495, 1989.