Generative Artificial Intelligence: Electronic Markets December 2023
Generative Artificial Intelligence: Electronic Markets December 2023
Generative Artificial Intelligence: Electronic Markets December 2023
net/publication/376238636
CITATIONS READS
2 914
2 authors:
All content following this page was uploaded by Gero Strobel on 06 December 2023.
FUNDAMENTALS
Abstract
Recent developments in the field of artificial intelligence (AI) have enabled new paradigms of machine processing, shift-
ing from data-driven, discriminative AI tasks toward sophisticated, creative tasks through generative AI. Leveraging deep
generative models, generative AI is capable of producing novel and realistic content across a broad spectrum (e.g., texts,
images, or programming code) for various domains based on basic user prompts. In this article, we offer a comprehensive
overview of the fundamentals of generative AI with its underpinning concepts and prospects. We provide a conceptual
introduction to relevant terms and techniques, outline the inherent properties that constitute generative AI, and elaborate
on the potentials and challenges. We underline the necessity for researchers and practitioners to comprehend the distinctive
characteristics of generative artificial intelligence in order to harness its potential while mitigating its risks and to contribute
to a principal understanding.
Keywords Generative AI · Artificial intelligence · Deep learning · Deep generative models · Large language models
13
Vol.:(0123456789)
63 Page 2 of 17 Electronic Markets (2023) 33:63
Deep Learning
Generative AI
13
Electronic Markets (2023) 33:63 Page 3 of 17 63
(Harmon, 1985; Patterson, 1990). Machine learning as a to understand complex data distributions, which allows
subfield of AI deals with the development of algorithms them to produce outputs that closely resemble real-world
capable to autonomously solve tasks through exposure to data. By leveraging statistics, the goal of DGM training is
data without being explicitly programmed—i.e., learning to learn high-dimensional probability distributions from a
(Brynjolfsson & Mitchell, 2017). In the realm of ML, there finite training dataset and create new, similar samples that
are several types of learning approaches based on the nature resemble an approximation to the underlying class of training
of the data and the desired outcome. Supervised learning is a data (Ruthotto & Haber, 2021). While discriminative models
common approach, e.g., for applications in commercial con- focus on modeling the relationship between input features
texts, as algorithms are trained on labeled datasets to clas- and output labels, generative models learn the inherent data
sify or forecast (business) data (Janiesch et al., 2021). The structure and generation processes (Jebara, 2004). Generative
algorithm learns to map inputs to outputs and, thus, is capa- models have been around for decades, with, for example, hid-
ble of making predictions on new, unseen data. Moreover, den Markov models or Bayesian networks aiming to model
unsupervised learning (i.e., discovering hidden structures or statistical problems involving time series or sequences (Gm
patterns within unlabeled data) and reinforcement learning et al., 2020). Nonetheless, DGMs relying on neural networks
(i.e., learning optimal decision-making by interacting with have paved the way for significantly higher-quality generated
an environment and maximizing cumulative rewards over content in recent advancements in the field of so-called gen-
time through trial and error) are further learning strategies in erative AI. Thus, the goals of DGMs differ from traditional
ML (Kühl et al., 2022). What ML algorithms share in com- discriminative AI models (e.g., in ML) because the focus
mon are their discriminative properties, i.e., the goal of pro- lies on the probabilistic generation of new data instead of
cessing data to conduct classification, regression, or cluster determining extant data’s decision boundaries (e.g., classi-
and determine decision boundaries. Exemplary algorithms fication, regression, or clustering) (Tomczak, 2022; Weisz
include decision trees, k-nearest neighbors, or support vector et al., 2023). In the following, we will focus on DGMs as
machines (Ray, 2019). Deep learning is a more advanced the underpinning of GAI and give an overview of four core
subset of ML and leverages artificial neural networks to DGMs that have shaped the evolution of GAI in Table 1.
model complex data representations and automatically detect To leverage DGMs in GAI applications, they can be
correlations and patterns in large datasets (Janiesch et al., trained to generate new data and enable a variety of use
2021; Samtani et al., 2023). Neural networks are computa- cases (we refer to DGMs implemented in GAI applica-
tional models inspired by the structure and function of the tions as GAI models). Training a GAI model can be differ-
human brain, consisting of interconnected layers of artificial ent than a discriminative AI model due to semi-supervised
neurons (Goodfellow et al., 2016). In DL, neural networks learning, a combination of learning techniques leveraging a
comprise of multiple hidden layers in a nested architecture small amount of labeled data (i.e., supervised) followed by
to learn hierarchical feature representations from the data, extensive unlabeled data (i.e., unsupervised) (Kingma et al.,
leading to improved performance on various tasks. Thus, DL 2014). For instance, recent GAI models apply techniques
is capable of processing high-dimensional data in various like supervised fine-tuning (SFT), reward models, and rein-
domains, ranging from one-dimensional data like signals forcement learning via proximal policy optimization (PPO)
and texts to multidimensional data such as images, video, or to achieve an alignment of the model with the developers’
audio (LeCun et al., 2015). These advances have enabled a intentions and values (OpenAI, 2023; Ouyang et al., 2022).
plethora of use cases across different domains, from societal This unique approach allows the training of very large data-
good, such as improving healthcare and environmental sus- sets required for GAI models without the need for difficult
tainability (Piccialli et al., 2021; Schoormann et al., 2023; complete labeling.
Strobel et al., 2023), to electronic markets, where DL can The application system functions as an interface for the
optimize pricing, serve as recommendation systems, forecast user to interact with a GAI model. Prompting is an interac-
demands, and detect fake consumer reviews (Ferreira et al., tion technique and unique GAI property that enables end
2016; M. Li et al., 2022; Zhang et al., 2023b). users using natural language to engage with and instruct GAI
application (e.g., LLMs) to create desired output such as
Toward generative AI text, images, or other types (Dang et al., 2022; Liu & Chil-
ton, 2022). Depending on the application, prompts vary in
Fueled by advancements in DL techniques, deep genera- their modality and directly influence the mode of operation.
tive models (DGMs) have emerged as a class of DL models For instance, text-to-image applications use textual prompts
to generate new content based on existing data, creating a describing the visuals of the desired image, while image-
variety of new possibilities for AI applications (Lehmann & to-image applications rely on an input image to steer the
Buschek, 2020; Tomczak, 2022). These models are trained generation process.
13
63 Page 4 of 17 Electronic Markets (2023) 33:63
Generative adversarial network (GAN) Generative adversarial networks consist of two competing neural networks: a generator and a discrimi-
nator (Goodfellow et al., 2020). The generator creates realistic data samples, while the discrimina-
tor distinguishes between real and generated samples (Pan et al., 2019). Both neural networks are
trained together until the discriminator is not able to differentiate both samples (Janiesch et al., 2021).
This adversarial competition results in the generator improving its data generation capabilities over
time, eventually producing high-quality, realistic outputs. Hence, GANs find various applications,
for instance, in image generation and manipulation, object detection and segmentation, and natural
language processing (Aggarwal et al., 2021; Gui et al., 2023)
Variational autoencoder (VAE) Variational autoencoders employ a neural network to learn encoding compressed input data into a
lower-dimensional latent space and then decode the data by reconstructing the original data from
the latent space representation (Kingma et al., 2014). By optimizing a variational lower bound on
the data likelihood in a probabilistic approach, VAEs can generate new samples that resemble the
original data distribution. Typical use cases for VAEs can be seen in the synthetic generation and
reconstruction of data such as images, in anomaly detection, and recommendation systems (Wei &
Mahmood, 2021)
Transformer Transformer models have become the basis for many state-of-the-art natural language processing tasks
and succeeding models. They are a specific type of neural network architecture that employ self-
attention mechanisms to capture long-range dependencies in the data, making them well-suited for
large-scale language modeling tasks (Vaswani et al., 2017)
Generative pre-trained transformers (GPT) build on the transformer architecture and were trained with
large datasets of unlabeled data (Brown et al., 2020). Due to their large size (i.e., a very large number
of trainable parameters), GPT trained on text data are often referred to as large language models
(LLMs) (Schramowski et al., 2022). The goal of LLMs is to generate novel, coherent, contextually
relevant human-like text by predicting which token is most likely to occur after the prior tokens in a
sentence (Brown et al., 2020; H. Li, 2022). Hence, LLMs can serve as the foundation for conversa-
tional AI tools like ChatGPT (Teubner et al., 2023). Besides conversing, the large amount of informa-
tion stored in LLMs can be used for text generation, writing, or even programming, e.g., to support
scholars (Cooper, 2023; Lund et al., 2023)
Latent diffusion model (LDM) Latent diffusion models are transformer based and build on the concepts of denoising score matching
and contrastive divergence to learn a stochastic data generation process (Rombach et al., 2022). In
LDMs, the generation process starts with a simple initial distribution, such as Gaussian noise. Then,
the data gets gradually refined through a series of noise-reduction steps following a predefined dif-
fusion process through a latent space (Ho et al., 2020). The key advantage of LDMs is their ability
to learn complex data distributions without requiring adversarial training (as in GANs) or optimiz-
ing variational lower bounds (as in VAEs). They also feature improved stability over other DGMs
during training to be less prone to issues like mode collapse (Kodali et al., 2017; Rombach et al.,
2022), making them well-suited for high-quality and detailed outputs, such as high-resolution image
synthesis (Ho et al., 2020)
By design, the outputs of generative AI models are prob- solved. The primary goal of generating new, probabilistically
abilistic and not replicable compared to the deterministic produced data (i.e., content) with varying outputs based on
outcomes of discriminative AI—i.e., variance (Weisz et al., the same input distinguishes generative AI from discrimina-
2023). For one exact input prompt, a GAI application will tive AI, which pursues boundary determination by analyzing
generate varying outputs each time it is prompted, but the data and making a decision (see Fig. 2). Hence, a primary
results remain valid and prompt fulfilling. On the other hand, difference lies in the role of data, as GAI leverages very large
different input prompts can lead to the same goal. Hence, datasets in its generative model to produce diverse content,
formulating a meaningful prompt that leads to the desired while discriminative AI processes user data based on a (pre-
outcome is based on trial-and-error process, e.g., by rephras- trained) algorithm.
ing textual prompts with the same keywords. The field of
prompt engineering deals with systematically constructing
prompts to improve the generated outputs (Liu & Chilton, Prospects and applications of generative AI
2022).
Based on the heuristic approach of prompt engineering Complementing discriminative AI, GAI has recently
and the inherent variance in the generated content, GAI emerged as a novel tool with a wide range of new possibili-
users continuously and iteratively specify their desired ties impacting multiple sectors, from education and health-
tasks as input prompts to generate outputs until their task is care (Brand et al., 2023; Burger et al., 2023; Cooper, 2023)
13
Electronic Markets (2023) 33:63 Page 5 of 17 63
Discriminative Model
Discriminative Boundary
Data Decision
AI Determination
Generativity
Generative
Prompt Specify Creation Generate Content
AI
Variance
Generative Model
to networked businesses (Dwivedi et al., 2023; Wessel et al., application systems. Depending on the training dataset, gen-
2023). These emerging applications inherit the generativity eral purpose models aim at solving a wide range of tasks in
and variance properties of GAI and, therefore, are capable of multiple domains (e.g., GPT-4), whereas customized models
producing unique and creative content, going beyond mere are designed for domain-specific tasks and were, therefore,
assistance. Hence, GAI becomes increasingly multidiscipli- trained on highly specific data (e.g., CodeBERT).
nary, enabling disruptive innovations and automating even Integrating these models into a system environment that
traditionally creative tasks, e.g., by generating customized affects people and organizations leads to the application
contextual texts or images, facilitating new opportunities for layer of generative AI. By providing a proper context for
businesses to innovate and differentiate themselves in the the artifact, users are able to leverage the capabilities of GAI
competitive economic landscape (Dwivedi et al., 2021; Lund models for a specific application use case. Observing the
et al., 2023; Pavlik, 2023). trend of various recently emerging GAI applications build-
Generative AI finds its utility across various modalities, ing on top of existing models, it becomes apparent to further
including the generation of text, image, video, code, sound, distinguish between end-to-end applications that are based
and other produced content, such as molecules or 3D ren- on undisclosed, proprietary models (e.g., Midjourney) and
derings (see Table 2). For example, GAI applications aim to open applications that are built around open-source models
create tailored marketing content, generate realistic (prod- or leverage publicly accessible pre-trained models (e.g., Jas-
uct) images or videos, and even assist in software devel- per and Elicit using OpenAI’s GPT-3 (Elicit, 2022; Jasper,
opment by generating code (Bakpayev et al., 2022; Elasri 2022)). Huang and Grady (2022) describe GAI applications
et al., 2022; Kowalczyk et al., 2023). Several modalities can as a “UI layer and ‘little brain’ [i.e., application layer] that
serve as the input for GAI models. Distinguishing the dif- sits on top of the ‘big brain’ that is the large general-purpose
ferent modality types, unimodal models generate the same models [i.e., model layer].” This perspective emphasizes that
output type as their input type, e.g., text-to-text or image- new business models and applications can be developed
to-image generation, whereas multi-modal models combine without the need to train large GAI models from scratch by
different input and output types, for instance, in a text-to- leveraging publicly available application programming inter-
image or code-to-text scenario. Different multi-modal mod- faces (API) or AI-as-a-service platforms (Burström et al.,
els can subsume as x-to-modality models (e.g., x-to-text or 2021; Janiesch et al., 2021; Lins et al., 2021). Indeed, the
x-to-image). accessibility and availability of pre-trained GAI models fos-
Examining the architecture of GAI-based systems, three ter value co-creation and can be leveraged via a connection
major component layers can be identified: model layer, con- layer (e.g., Hugging Face). Fully integrated GAI systems,
nection layer, and application layer (see Fig. 3). These parts on the other hand, employ their own, custom-trained propri-
embed generative AI in its information systems context and etary models. In many cases, end-to-end GAI applications
draw a boundary from external entities that can be inter- represent fully integrated systems (e.g., GitHub Copilot),
acted with its environment (e.g., users, organizations) and while open GAI applications leverage external models via
data (i.e., public and enterprise data) (Samtani et al., 2023). APIs (e.g., Stable Diffusion). Overall, GAI models may aim
Inside the boundaries of GAI-based systems, the preva- for general purposes or customized tasks regardless their
lent characteristics of generativity and variance persist and connection or application characteristics (see Fig. 3). To
affect all layers and processes. The model layer comprises enrich GAI-based systems with additional data beyond their
the pre-trained, deployable GAI artifact (i.e., a DGM) for training state and the GAI boundary, external data sources
13
63 Page 6 of 17 Electronic Markets (2023) 33:63
Text X-to-text applications are centered around text generation and natural language processing. The goal is to generate human-like writ-
ten text that fits the user’s input prompt by providing a meaningful answer within the context. For instance, chatbots like OpenAI’s
ChatGPT imitate textual conversations with the user and can be guided to output text artifacts as desired (OpenAI, 2023). Fur-
thermore, text-producing applications can be leveraged for content creation (e.g., copywriting or specific writing in e-commerce
contexts) (Bakpayev et al., 2022; Brand et al., 2023). Moreover, text generation can support processes in sales or support by provid-
ing the ability to produce customized texts tailored toward the requests (Mondal et al., 2023). Systems integrating GAI models
with further knowledge bases (e.g., enterprise data and Internet access) extend the available information beyond the model’s initial
training dataset
Image X-to-image applications generate images based on the user’s prompting. Relying on GANs or diffusion models as DGMs, synthetic
images are created that find use cases in marketing, design and fashion, or creative fields in the form of new visual art (Haase
et al., 2023; Mayahi & Vidrih, 2022; Zhang et al., 2023a). For instance, Stable Diffusion is an open-sourced x-to-image model that
enables the generation of images in multiple GAI applications (Rombach et al., 2022). Moreover, generated synthetic images can
act as training data for further ML models to train classifiers (e.g., medical images to detect diseases (Ali et al., 2023)). Besides a
text-to-image creation process, image editing capabilities are possible, e.g., via image-to-image systems that manipulate and extend
images according to the user’s prompting (Oppenlaender, 2022)
Video X-to-video applications deal with the creation of synthetic videos, i.e., dynamic motion images. New video clips are generated by
describing the content of the desired video footage (text-to-video) or applying the style and composition via text or image prompt
to a source video (video-to-video) (Esser et al., 2023). These prospects allow the fast and convenient creation and editing of videos
via natural language and other modalities (Zhan et al., 2021). Thus, not only videographers benefit from x-to-video applications
but also people without filming and editing skills are enabled to creatively express themselves due to an accessible creation process
(Anantrasirichai & Bull, 2022). Besides recreational and entertainment purposes, x-to-video GAI models find application in sales
and marketing (e.g., product marketing videos), onboarding and education (e.g., virtual avatars in training videos), or in customer
support (e.g., how-to videos) (Leiker et al., 2023; Mayahi & Vidrih, 2022). As an exemplary application, Synthesia is a video crea-
tion platform specialized in generating professional videos with virtual avatars and synthetic voiceovers (Synthesia, 2023)
Code In the realm of software development, x-to-code GAI applications offer transformative potential in how developers work and code
by providing x-to-text capabilities specific to programming languages. Models like CodeBERT (Feng et al., 2020) or GraphCode-
BERT (Guo et al., 2021) were trained on programming code to generate source code from natural language or modeling languages
for new software programs. Several x-to-text models also offer coding capabilities because general-purpose LLMs are trained with
increasingly large datasets that contain code (e.g., Stability.ai, 2023). Programmers using applications such as GitHub Copilot are
supported by automatically written chunks of code, ideas converted into actionable scripts, auto-completion functions, generated
unit tests, duplicate code detection, and bug fixing (Sun et al., 2022). These automation potentials allow developers to focus on
higher-level tasks and problem solving, enhancing their productivity and the final product’s overall quality, reducing time-to-market,
supporting rapid prototyping, and promoting continuous innovation for the product and business
Audio X-to-audio applications focus on audio content generation and comprise, for instance, the generation of speech with synthetically
generated human-like voices (Borsos et al., 2022; Wang et al., 2023). Especially text-to-speech and speech-to-speech models are
being heavily researched and can be used to power various applications, ranging from digital assistants and customer services to
audiobook and training narration and accessibility tools (Moussawi et al., 2021; Qiu & Benbasat, 2005). GAI models like Micro-
soft’s VALL-E (Wang et al., 2023) offer a more personalized and engaging user experience by enabling realistic voice modeling.
Moreover, x-to-sound models find application in music creation. By specifying genres or melodies via prompts, unique pieces of
music can be generated that respect the original intent (Agostinelli et al., 2023). GAI models such as MusicLM (Agostinelli et al.,
2023) help musicians in their creative process, offering inspiration and aiding the composition of complex pieces. Businesses in the
music industry can leverage high-fidelity music generation to create customized soundtracks for marketing, movies, or video games,
significantly reducing the cost and time associated with traditional music production (Anantrasirichai & Bull, 2022; Weng & Chen,
2020)
Other The applications of GAI extend beyond the stated modality types and domains, impacting multiple other, specific areas. For instance,
x-to-molecules models like AlphaFold (Jumper et al., 2021) and OpenBioML (Murphy & Thomas, 2023) generate viable protein
structures and design new molecules by generating valid, novel molecular structures, supporting drug discovery and bioengineer-
ing researchers (Walters & Murcko, 2020). 3D modeling is also impacted by GAI applications such as DreamFusion (Poole et al.,
2023), Nvidia GET3D (Gao et al., 2022), and Point-E (Nichol et al., 2022), which generate realistic and complex 3D models that
facilitate a range of applications from product design and architecture to virtual reality and game development
can be connected. Enterprise data (e.g., internal documents, Employing GAI in enterprises can extend the level of
enterprise resource planning (ERP) systems, knowledge assistance for workers and open up opportunities for aug-
bases) and public data (e.g., the Internet, libraries, social mentation and automation of the job, leading to new forms
media) may serve as complementary, contextual data that of collaborations between humans and machines (Einola &
GAI applications can further draw upon for more relevant Khoreva, 2023). Furthermore, GAI transforms the way busi-
and personalized results. nesses operate in their daily tasks, innovate, and interact
13
Electronic Markets (2023) 33:63 Page 7 of 17 63
Generative AI
Application
Layer
End-To-End Open
Connection
Layer
Fully-Integrated API
Model
Layer
Customized General Purpose
Social Knowledge
Libraries Internet Documents ERP
Media Base
with their customers (Brynjolfsson et al., 2023; Mondal due to its latest focus on data-driven efforts (Selz, 2020).
et al., 2023). Thus, the prospects of value co-creation come Outlining and emphasizing these challenges relevant for
in hand with potential changes in human work roles, requir- research and practice helps to raise awareness of the con-
ing workforces in various domains to adapt their tasks as straints as well as supports future efforts in developing,
a diverse set of tasks could be impacted by generative AI implementing, and improving GAI-based systems.
(Brynjolfsson & McAfee, 2016; Eloundou et al., 2023). The
ongoing diffusion of AI into businesses gets accelerated by Bias
GAI applications, resulting in a possible replacement of
human jobs on the one hand but also the creation of new jobs Because of GAI’s data-driven nature, data quality plays an
(e.g., for prompt engineers or with new business models) on essential role in how GAI-based systems perform and, thus,
the other hand (Einola & Khoreva, 2023). Hence, the effect how feasible their adoption for real-world scenarios in busi-
on the labor market by the disruption needs to be discussed, ness contexts is. Similar to their traditional discriminative AI
and businesses should seek to understand and embrace the relatives, GAI models are prone to bias causing biased deci-
potential of generative AI (Eloundou et al., 2023; Willcocks, sions, disadvantages, and discriminations (Ferrara, 2023;
2020). Schramowski et al., 2022). Biases manifest in different ways
and evolve primarily during two development phases of an
AI-based system: training and inference.
Challenges for generative AI‑based systems Data bias gets injected during the model’s training phase
and leads to biased results because of faulty datasets. Fac-
While generative AI holds transformative potential for tors such as non-representative, imbalanced sampling, incor-
individuals, organizations, and society due to its vast pos- rect labeling, and mismeasured features during the selection
sible application space, the technology also inherits vari- and processing of datasets hinder an unbiased training of
ous challenges that parallel those of traditional ML and the GAI model, ultimately leading to biased algorithmic
DL systems. The domain of electronic markets is a prime outcomes (Mehrabi et al., 2022; Ntoutsi et al., 2020). The
example that moved into the center of transformation development of large-scale training datasets is especially
13
63 Page 8 of 17 Electronic Markets (2023) 33:63
important for GAI models and often involves strategies of may not reach as far as discriminative AI use cases (e.g.,
scraping public-available data on the Internet (Schuhmann decision-making or dynamic pricing), research on explain-
et al., 2022). This approach is usually performed unsuper- able generative AI is still in its infancy, and the justification
vised and autonomously, which complicates the dataset’s for more transparency is without a doubt (Brasse et al., 2023;
quality assurance because of its large quantity of unstruc- Sun et al., 2022). Governments are already discussing the
tured data. Since GAI models are often trained to be gen- enforcement of AI regulations that include explainable AI to
eral-purpose and multi-modal, they require and rely even protect the general society and mitigate risks tied to the tech-
more on such training datasets. Hence, moderating potential nology (Hamon et al., 2020). Interpretability (i.e., the human
data bias is crucial for applications in business contexts like capability to understand the AI system’s processes and deci-
electronic markets due to the closeness to customers (e.g., sions) is key, especially for GAI-based systems employed in
points of contact via advertisements, social media, or cus- large-scale information systems that affect large user groups,
tomer support). Furthermore, social bias as a form of data such as in networked businesses and digital platforms. In
bias can cause distorted views in generated texts or images these cases, generated content has the potential to impact
and should be considered as well as mitigated (Baeza-Yates, individuals and society, for instance, when generative AI
2018). serves as an advisor based on user questions and provides
Algorithmic bias is introduced during the inference phase, unsophisticated answers that are difficult to verify. Inaccu-
independent from the model’s training dataset (Mehrabi racy in generated product recommendations can have vary-
et al., 2022). In this case, the models have been trained on ing consequences depending on the situation, ranging from
diverse, unbiased input data, and either the model’s algo- selecting the wrong product to taking the wrong medication.
rithm or the application around it introduces biases affecting Early studies have shown how the chatbot ChatGPT per-
users. Overfitting is a typical phenomenon that originates formed surprisingly well in medical exams, suggesting inher-
from the chosen learning strategies or optimization functions ent knowledge similar to medical students (Bhayana et al.,
and causes biased algorithmic outcomes (Danks & London, 2023; Gilson et al., 2023). However, the seemingly omnisci-
2017; Hooker, 2021). In this case, GAI models might intro- ent capabilities may be restricted because, in the case of the
duce biases not reflected in the data because they fail to learn medical exams, the GAI model might have been trained on
the data distribution correctly. Likewise, the presentation of the exam data and can reproduce its answers but is not able
and the user interaction with GAI-based systems can cause to comprehend the contextual state of an individual relevant
biases, such as when only selected generated content (e.g., for medical assessment. Therefore, understanding how the
one image out of multiple variants) is shown to the user system performs sensemaking and generates its data helps
(Baeza-Yates, 2018). users and businesses to achieve their goals responsibly and
Thus, generative AI applications exerting biased results effectively, satisfying stakeholders’ needs and expectations
influence users’ opinions and judgement and require control (Miller, 2019; Sun et al., 2022). Particularly for autonomous
mechanisms (Jakesch et al., 2023a). Strategies should be systems in critical business applications that interact with
developed to prevent, detect, and mitigate biases in order to human beings, supervision and explainability of the GAI-
safeguard users and ensure the service quality and reputa- generated content remain vital to ensure reliable, safe, and
tions of a company. One approach to steer the quality of trustworthy outputs (Brasse et al., 2023; Hamm et al., 2023).
outputs from GAI models is via reinforcement learning from Another angle of transparency concerns the debate
human feedback (RLHF) (Christiano et al., 2017; Griffith between open-source and closed-source models. Legal
et al., 2013). The technique involves feedback from human issues revolving around copyright, licenses, and intellectual
evaluators to guide the model’s training process, with evalu- property make it difficult for individuals and enterprises to
ators assessing and comparing the quality of generated out- deploy GAI-based systems, especially when the large train-
puts. This approach enables generative models to refine their ing data of closed-sourced GAI models is procured through
output generation process, aiming for better alignment with Internet scraping (Jin et al., 2023; Smits & Borghuis, 2022).
human expectations and objectives. Nevertheless, determin- Research initiatives revolving around open-source datasets
ing what content is “good” or “right” remains a difficult and (e.g., Schuhmann et al., 2022) and open-source models (e.g.,
bias-prone task (Teubner et al., 2023). Stability.ai, 2023) aim at increasing the transparency on data
provenance and highlight, for instance, the data sources as
Transparency well as the presence of watermarks on images (Schuhmann
et al., 2022). Although most system engineers will rely on
The need for explainability arises with the unpredictability of pre-trained models and perform fine-tuning for their specific
the inherent generative nature of GAI models and the over- use case, open-source efforts are facilitating a legally safer
all functionality of ML models as “black boxes” (Janiesch route for businesses to deploy GAI models, ensuring legal
et al., 2021; Meske et al., 2022). While the impact of GAI compliance, and mitigating associated risks.
13
Electronic Markets (2023) 33:63 Page 9 of 17 63
Hallucinations Misuse
Due to the probabilistic variance property of GAI, genera- The access to GAI-based content creation tools with realistic
tive models are not immune to output errors, or so-called outputs does not only enable new creative opportunities for
hallucinations, which manifest themselves in confidently good (e.g., novel automation and innovation prospects across
generated results that seem plausible but are unreasonable multiple domains for businesses and users), but can also be
with respect to the source of information (Ji et al., 2023; leveraged for malicious purposes to intentionally cause risk
Susarla et al., 2023). The underlying causes of hallucinations and harm to society (Weidinger et al., 2022). Deepfakes have
are still being researched, with early findings suggesting that become increasingly sophisticated over the past decade as a
the training data that might contain contradictory or fictional result of the low cost and ease of creating such media using
content besides factual information (Dziri et al., 2022). This x-to-image, x-to-video, and x-to-sound GAI models (Mirsky
combination of varied inputs can lead to the generation of & Lee, 2022; Vasist & Krishnan, 2022). They are authentic
outputs that deviate from reality and introduce false infor- media content designed to impersonate individuals, such
mation. As a result, the uncertainty of generation quality is as celebrities or politicians, and are created to entertain or
further fueled by closed-source models that do not disclose manipulate the viewers. For example, deepfakes depicting
any information on their training, making it crucial to care- Trump’s arrest have circulated on social media and in the
fully select appropriate datasets and models to mitigate the news, causing misinformation due to their hyperrealistic,
risk of hallucinations. almost unrecognizable appearance (BBC, 2023). This is just
To illustrate the occurrence of hallucinations, studies one example of how generative AI carries potential for abuse
have identified GAI-based image generators that facili- and can be extended to other areas of daily life, such as
tate anatomical inaccuracies in the generated images of fraudulent service offers, identity theft, or fake shops (Houde
humans (Choi et al., 2022; Hughes, 2023). These inaccu- et al., 2020; Weidinger et al., 2022). The availability of GAI
racies suggest that GAI models require further refinement models provides starting points for new applications and
and improvement before they can be reliably used for unsu- business models for misuse and criminals, ultimately being
pervised production tasks (e.g., advertisement production, leveraged to spread misinformation and influence the media
automated social media posts). Additionally, errors in sim- and politics, or to defraud individuals and businesses (Hart-
ple arithmetic operations have also been observed (Bubeck mann et al., 2023; Kreps et al., 2022; Mirsky & Lee, 2022).
et al., 2023), highlighting the limitations and potential Such issues, combined with low data quality and bias, pro-
shortcomings of current generative models in performing vide a preview of potential social and ethical harms that may
even basic computations accurately. Due to the seemingly result in discrimination, exclusion, toxicity, and information
realistic data produced by GAI models, the detection and hazards (Weidinger et al., 2022). This increases the urgency
evaluation of hallucinations are a challenging task. Current for disclosure and transparency of GAI models, along with
automatic evaluation includes statistical methods to measure the aforementioned calls for explainability (Brasse et al.,
discrepancies between ground-truth references and gener- 2023; Horneber & Laumer, 2023; Raj et al., 2023).
ated data and model-based metrics that leverage additional Generative AI researchers seek to develop measures for
DL models to detect content inconsistencies (Ji et al., 2023). safer and more responsible use (van Slyke et al., 2023).
However, both approaches can be subjected to errors and RLHF and carefully crafted open-source datasets are first
are still inferior to cumbersome human evaluation. These attempts at improvement, besides input filters restricting user
instances emphasize the importance of appropriate halluci- prompts to harmless content. However, applications can still
nation mitigation methods, such as human supervision, to be tricked into bypassing filters and safeguards of GAI mod-
ensure the quality and accuracy of generated content. els, for instance, through prompt injections that insert mali-
Moving forward, addressing the issue of hallucinations cious prompts to achieve misaligned outputs of generative
in generative AI requires ongoing research and development AI applications (Perez & Ribeiro, 2022). The collaborative
efforts. Enhancing the transparency of training data and efforts between researchers, organizations, and regulators
computation processes as well as promoting the adoption (e.g., by initiatives such as the European Union AI Act or
of open-source models can help mitigate the risk of gen- US National AI Initiative) serve as promising initial steps
erating misleading or flawed results. Furthermore, refining toward opening pathways for future research to effectively
the underlying algorithms and incorporating robust error- address these issues, ensuring that AI-generated content is
checking mechanisms can contribute to the overall reliability morally, ethically, and legally appropriate and cannot be mis-
and trustworthiness of GAI models (Zhou et al., 2023). used (Hacker et al., 2023).
13
63
13
Table 3 Future research questions regarding the challenges of generative AI-based systems
Generative AI perspective Environment perspective Data perspective
Page 10 of 17
Bias • How do potential biases from GAI-generated content • How can we prevent GAI-based systems from perpetu- • How can GAI help identify and address bias in user-
affect service offerings? ating biases or discrimination against certain groups generated data?
• What are implications of addressing bias in GAI mod- of people? • How can GAI be used to synthesize diverse and
els and data within electronic markets, and how can • What measures can be taken to ensure that GAI- representative datasets that minimize bias in decision-
they be measured and managed? generated content respects cultural values and societal making?
norms?
Transparency • How can we ensure that GAI-based systems are trans- • How can GAI-based systems be designed to respect • How can platforms involving GAI transparently disclose
parent and accountable? individual privacy while still delivering personalized the origins of data and preserve copyright?
• How and to what extent should GAI-based services content? • How can data transparency promote trust in GAI-based
declare the use of GAI technology? • How does transparency in GAI-generated content services, particularly when dealing with sensitive
• What measures can be taken to ensure that users are impact user trust and engagement within digital plat- information?
aware of when they are interacting with GAI-generated forms and ecosystems?
content, thus enhancing transparency?
Hallucinations • How can GAI-related hallucinations be mitigated on • What is the impact of hallucinations toward consumer • What role can explainable AI play in identifying and
business strategy level? behavior? addressing hallucinations in GAI-generated content?
• When and how should humans-in-the-loop be inte- • How do hallucinations in GAI-generated content affect • How can data preprocessing and validation methods be
grated in a GAI-based system to tackle hallucinations? user trust, engagement, and decision-making on digital improved to detect and mitigate hallucinations?
platforms?
Misuse • Which impact does the misuse of GAI has on current • What are the ethical implications of using GAI in • How can GAI models be designed to flag and reject
digital platforms and electronic markets? electronic markets? content that exhibits signs of potential misuse during the
• How can we detect and prevent the misuse of • How do instances of GAI misuse impact competition generation process?
(e-commerce) platforms by fraudulent providers and and market dynamics within platform ecosystems, • How can data sources be checked and authenticated to
customers who leverage GAI? and what regulatory frameworks can be established to ensure that training data for GAI is not compromised or
• How can businesses develop effective strategies and address this? manipulated to encourage misuse?
regulations to prevent the misuse of GAI and protect
the privacy and security of market participants?
Societal impact • Where should the boundary of liability be drawn when • How can we ensure that GAI-based systems do not • How can GAI models continuously adapt to evolving
GAI generates false content? lead to worker displacement or other negative social societal norms, ethics, and cultural sensitivities, particu-
• What are the potential implications of GAI-based impacts? larly in content generation and data handling?
systems on the future of work? • How can GAI enable individuals to perform tasks and • How can the generation of data be leveraged with GAI
Electronic Markets
• How can GAI empower users to understand and con- offer novel services they have not been trained for? models to achieve greater goods for society?
trol the content they interact with, promoting a more • How can businesses and platform ecosystems proac-
informed and empowered societal experience? tively engage with their user communities to address
societal concerns regarding AI-generated content and
adapt their practices accordingly?
(2023) 33:63
Electronic Markets (2023) 33:63 Page 11 of 17 63
13
63 Page 12 of 17 Electronic Markets (2023) 33:63
Funding Open Access funding enabled and organized by Projekt Brand, J., Israeli, A., & Ngwe, D. (2023). Using GPT for market
DEAL. research. Harvard Business School Marketing Unit Working
Paper. Advance online publication. https://doi.org/10.2139/
Open Access This article is licensed under a Creative Commons Attri- ssrn.4395751
bution 4.0 International License, which permits use, sharing, adapta- Brasse, J., Broder, H. R., Förster, M., Klier, M., & Sigler, I. (2023).
tion, distribution and reproduction in any medium or format, as long Explainable artificial intelligence in information systems: A
as you give appropriate credit to the original author(s) and the source, review of the status quo and future research directions. Electronic
provide a link to the Creative Commons licence, and indicate if changes Markets, 33, 26. https://doi.org/10.1007/s12525-023-00644-5
were made. The images or other third party material in this article are Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal,
included in the article’s Creative Commons licence, unless indicated P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agar-
otherwise in a credit line to the material. If material is not included in wal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child,
the article’s Creative Commons licence and your intended use is not R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., & Amodei, D.
permitted by statutory regulation or exceeds the permitted use, you will (2020). Language models are few-shot learners. In H. Larochelle,
need to obtain permission directly from the copyright holder. To view a M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances
copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. in neural information processing systems 33 (pp. 1877–1901).
Curran Associates Inc.
Brynjolfsson, E., & McAfee, A. (2016). The second machine age:
Work, progress, and prosperity in a time of brilliant technolo-
gies. W.W. Norton & Company.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at
Work. Cambridge MA. https://doi.org/10.3386/w31161
References Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning
do? Workforce implications. Science, 358(6370), 1530–1534.
Ågerfalk, P. J., Conboy, K., Crowston, K., Eriksson Lundström, J. https://doi.org/10.1126/science.aap8062
S. Z., Jarvenpaa, S., Ram, S., & Mikalef, P. (2022). Artificial Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E.,
intelligence in information systems: State of the art and research Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H.,
roadmap. Communications of the Association for Information Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of artifi-
Systems, 50(1), 420–438. https://d oi.o rg/1 0.1 7705/1 CAIS.0 5017 cial general intelligence: Early experiments with GPT-4. https://
Aggarwal, A., Mittal, M., & Battineni, G. (2021). Generative adver- doi.org/10.48550/arXiv.2303.12712
sarial network: An overview of theory and applications. Interna- Burger, B., Kanbach, D. K., Kraus, S., Breier, M., & Corvello, V.
tional Journal of Information Management Data Insights, 1(1), (2023). On the use of AI-based tools like ChatGPT to sup-
100004. https://doi.org/10.1016/j.jjimei.2020.100004 port management research. European Journal of Innova-
Agostinelli, A., Denk, T. I., Borsos, Z., Engel, J., Verzetti, M., Cail- tion Management, 26(7), 233–241. https://doi.org/10.1108/
lon, A., Huang, Q., Jansen, A., Roberts, A., Tagliasacchi, M., EJIM-02-2023-0156
Sharifi, M., Zeghidour, N., & Frank, C. (2023). MusicLM: Gen- Burström, T., Parida, V., Lahti, T., & Wincent, J. (2021). AI-enabled
erating Music From Text. https://doi.org/10.48550/arXiv.2301. business-model innovation and transformation in industrial eco-
11325 systems: A framework, model and outline for further research.
Ali, H., Murad, S., & Shah, Z. (2023). Spot the fake lungs: Generat- Journal of Business Research, 127, 85–95. https://doi.org/10.
ing synthetic medical images using neural diffusion models. In 1016/j.jbusres.2021.01.016
L. Longo & R. O’Reilly (Eds.), Communications in Computer Castelvecchi, D. (2016). Can we open the black box of AI? Nature,
and Information Science. Artificial Intelligence and Cognitive 538(7623), 20–23. https://doi.org/10.1038/538020a
Science (Vol. 1662, pp. 32–39). Springer Nature Switzerland. Choi, H., Chang, W., & Choi, J. (2022). Can we find neurons that cause
https://doi.org/10.1007/978-3-031-26438-2_3 unrealistic images in deep generative networks? In R. Dechter &
Anantrasirichai, N., & Bull, D. (2022). Artificial intelligence in the cre- L. de Raedt (Eds.), Proceedings of the thirty-first international
ative industries: A review. Artificial Intelligence Review, 55(1), joint conference on artificial intelligence (pp. 2888–2894). Inter-
589–656. https://doi.org/10.1007/s10462-021-10039-7 national Joint Conferences on Artificial Intelligence Organiza-
Baeza-Yates, R. (2018). Bias on the web. Communications of the ACM, tion. https://doi.org/10.24963/ijcai.2022/400
61(6), 54–61. https://doi.org/10.1145/3209581 Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., & Amo-
Bakpayev, M., Baek, T. H., van Esch, P., & Yoon, S. (2022). Pro- dei, D. (2017). Deep reinforcement learning from human pref-
grammatic creative: AI can think but it cannot feel. Australa- erences. In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach,
sian Marketing Journal, 30(1), 90–95. https://doi.org/10.1016/j. R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in
ausmj.2020.04.002 neural information processing systems 30 (Vol. 30). Curran
BBC. (2023). Fake Trump arrest photos: How to spot an AI-generated Associates, Inc.
image. https://www.bbc.com/news/world-us-canada-65069316 Cooper, G. (2023). Examining science education in ChatGPT: An
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Special issue exploratory study of generative artificial intelligence. Journal
editor’s comments: Managing artificial intelligence. MIS Quar- of Science Education and Technology, 32(3), 444–452. https://
terly, 45(3), 1433–1450. https://doi.org/10.25300/MISQ/2021/ doi.org/10.1007/s10956-023-10039-y
16274 Dang, H., Mecke, L., Lehmann, F., Goller, S., & Buschek, D. (2022).
Bhayana, R., Krishna, S., & Bleakney, R. R. (2023). Performance of How to prompt? Opportunities and challenges of zero- and few-
ChatGPT on a radiology board-style examination: Insights into shot learning for human-ai interaction in creative applications of
current strengths and limitations. Radiology, 307(5), e230582. generative models. In Generative AI and HCI Workshop: CHI
https://doi.org/10.1148/radiol.230582 2022, New Orleans, LA. https://doi.org/10.48550/arXiv.2209.
Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., 01390
Sharifi, M., Teboul, O., Grangier, D., Tagliasacchi, M., & Zeghi- Danks, D., & London, A. J. (2017). Algorithmic bias in autono-
dour, N. (2022). AudioLM: a Language Modeling Approach to mous systems. In F. Bacchus & C. Sierra (Eds.), Proceedings
Audio Generation. https://doi.org/10.48550/arXiv.2209.03143 of the twenty-sixth international joint conference on artificial
13
Electronic Markets (2023) 33:63 Page 13 of 17 63
intelligence (pp. 4691–4697). International Joint Conferences Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, & A.
on Artificial Intelligence Organization. https://doi.org/10.24963/ Oh (Eds.), Advances in Neural Information Processing Systems
ijcai.2017/654 35. Curran Associates, Inc.
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R.
Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Gala- A., & Chartash, D. (2023). How does ChatGPT perform on the
nos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., United States Medical Licensing Examination? The implications
Kizgin, H., Kronemann, B., Lal, B., Lucini, B., & Williams, M. of large language models for medical education and knowledge
D. (2021). Artificial intelligence (AI): Multidisciplinary per- assessment. JMIR Medical Education, 9, e45312. https://d oi.o rg/
spectives on emerging challenges, opportunities, and agenda for 10.2196/45312
research, practice and policy. International Journal of Informa- Gm, H., Gourisaria, M. K., Pandey, M., & Rautaray, S. (2020). A com-
tion Management, 57, 101994. https://doi.org/10.1016/j.ijinf prehensive survey and analysis of generative models in machine
omgt.2019.08.002 learning. Computer Science Review, 38, 100285. https://doi.org/
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, 10.1016/j.cosrev.2020.100285
A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning.
M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Bal- The MIT Press.
akrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D.,
D., & Wright, R. (2023). “So what if ChatGPT wrote it?” Multi- Ozair, S., Courville, A., & Bengio, Y. (2020). Generative adver-
disciplinary perspectives on opportunities, challenges and impli- sarial networks. Communications of the ACM, 63(11), 139–144.
cations of generative conversational AI for research, practice and https://doi.org/10.1145/3422622
policy. International Journal of Information Management, 71, Griffith, S., Subramanian, K., Scholz, J., Isbell, C. L., & Thomaz, A. L.
102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642 (2013). Policy shaping: Integrating Human feedback with rein-
Dziri, N., Milton, S., Yu, M., Zaiane, O., & Reddy, S. (2022). On forcement learning. In C. J. C. Burges, L. Bottou, Z. Ghahramani,
the origin of hallucinations in conversational models: Is it the & K. Q. Weinberger (Eds.), Advances in Neural Information Pro-
datasets or the models? In M. Carpuat, M.-C. de Marneffe, & I. cessing Systems 26 (Vol. 26). Curran Associates, Inc.
V. Meza Ruiz (Eds.), Proceedings of the 2022 Conference of the Gui, J., Sun, Z., Wen, Y., Tao, D., & Ye, J. (2023). A review on genera-
North American Chapter of the Association for Computational tive adversarial networks: Algorithms, theory, and applications.
Linguistics: Human Language Technologies (pp. 5271–5285). IEEE Transactions on Knowledge and Data Engineering, 35(4),
Association for Computational Linguistics. https://doi.org/10. 3313–3332. https://doi.org/10.1109/TKDE.2021.3130191
18653/v1/2022.naacl-main.387 Guo, D., Ren, S., Lu, S., Feng, Z., Tang, D., Liu, S., Zhou, L., Duan,
Einola, K., & Khoreva, V. (2023). Best friend or broken tool? Explor- N., Svyatkovskiy, A., Fu, S., Tufano, M., Deng, S. K., Clement,
ing the co-existence of humans and artificial intelligence in the C., Drain, D., Sundaresan, N., Yin, J., Jiang, D., & Zhou, M.
workplace ecosystem. Human Resource Management, 62(1), (2021). GraphCodeBERT: Pre-training code representations
117–135. https://doi.org/10.1002/hrm.22147 with data flow. 9th International Conference on Learning Rep-
Elasri, M., Elharrouss, O., Al-Maadeed, S., & Tairi, H. (2022). Image resentations 2021 (ICLR), Virtual.
generation: A review. Neural Processing Letters, 54(5), 4609– Haase, J., Djurica, D., & Mendling, J. (2023). The art of inspir-
4646. https://doi.org/10.1007/s11063-022-10777-x ing creativity: Exploring the unique impact of AI-generated
Elicit. (2022). Frequently asked questions: What is elicit? https://e licit. images. AMCIS 2023 Proceedings.
org/faq#what-is-elicit Hacker, P., Engel, A., & Mauer, M. (2023). Regulating ChatGPT and
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs other large generative AI models. 2023 ACM Conference on
are GPTs: An early look at the labor market impact potential Fairness, Accountability, and Transparency (pp. 1112–1123).
of large language models. arXiv. https://doi.org/10.48550/arXiv. ACM. https://doi.org/10.1145/3593013.3594067
2303.10130 Hamm, P., Klesel, M., Coberger, P., & Wittmann, H. F. (2023).
Esser, P., Chiu, J., Atighehchian, P., Granskog, J., & Germanidis, A. Explanation matters: An experimental study on explain-
(2023). Structure and content-guided video synthesis with diffu- able AI. Electronic Markets, 33, 17. https://doi.org/10.1007/
sion models. https://doi.org/10.48550/arXiv.2302.03011 s12525-023-00640-9
Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., Shou, L., Hamon, R., Junklewitz, H., & Sanchez, I. (2020). Robustness and
Qin, B., Liu, T., Jiang, D., & Zhou, M. (2020). CodeBERT: A explainability of artificial intelligence: From technical to
pre-trained model for programming and natural languages. In policy solutions. EUR: Vol. 30040. Publications Office of the
T. Cohn, Y. He, & Y. Liu (Eds.), Findings of the association European Union.
for computational linguistics: EMNLP 2020 (pp. 1536–1547). Harmon, P. (1985). Expert systems: Artificial intelligence in busi-
Association for Computational Linguistics. https://doi.org/10. ness. Wiley & Sons.
18653/v1%2F2020.findings-emnlp.139 Hartmann, J., Schwenzow, J., & Witte, M. (2023). The political
Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks ideology of conversational AI: Converging evidence on Chat-
of bias in large language models. arXiv. https://d oi.o rg/1 0.4 8550/ GPT's pro-environmental, left-libertarian orientation. https://
arXiv.2304.03738 doi.org/10.48550/arXiv.2301.01768
Ferreira, K. J., Lee, B. H. A., & Simchi-Levi, D. (2016). Analytics Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabil-
for an online retailer: Demand forecasting and price optimiza- istic models. In H. Larochelle, M. Ranzato, R. Hadsell, M. F.
tion. Manufacturing & Service Operations Management, 18(1), Balcan, & H. Lin (Eds.), Advances in Neural Information Pro-
69–88. https://doi.org/10.1287/msom.2015.0561 cessing Systems 33 (pp. 6840–6851). Curran Associates Inc.
Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2021). Will humans- Hooker, S. (2021). Moving beyond “algorithmic bias is a data prob-
in-the-loop become borgs? Merits and pitfalls of working with lem”. Patterns (New York, N.Y.), 2(4), 100241. https://doi.org/
AI. MIS Quarterly, 45(3), 1527–1556. https://doi.org/10.25300/ 10.1016/j.patter.2021.100241
MISQ/2021/16553 Horneber, D., & Laumer, S. (2023). Algorithmic accountability.
Gao, J., Shen, T., Wang, Z, Chen, W., Yin, K., Li, D, Litany, O., Business & Information Systems Engineering. Advance online
Gojcic, Z., & Fidler, S. (2022). GET3D: A generative model publication. https://doi.org/10.1007/s12599-023-00817-8
of high quality 3D textured shapes learned from images. In S.
13
63 Page 14 of 17 Electronic Markets (2023) 33:63
Houde, S., Liao, V., Martino, J., Muller, M., Piorkowski, D., Rich- Kodali, N., Abernethy, J., Hays, J., & Kira, Z. (2017).On convergence
ards, J., Weisz, J., & Zhang, Y. (2020). Business (mis)Use and stability of GANs. arXiv. https://doi.org/10.48550/arXiv.
Cases of Generative AI. In W. Geyer, Y. Khazaeni, & M. 1705.07215
Shmueli-Scheuer (Eds.), Joint Proceedings of the Workshops Kowalczyk, P., Röder, M., & Thiesse, F. (2023). Nudging creativity in
on Human-AI Co-Creation with Generative Models and User- digital marketing with generative artificial intelligence: Opportu-
Aware Conversational Agents co-located with 25th Interna- nities and limitations. ECIS 2023 Research-in-Progress Papers,
tional Conference on Intelligent User Interfaces (IUI 2020). Article 22.
CEUR. https://doi.org/10.48550/arXiv.2003.07679 Kreps, S., McCain, R. M., & Brundage, M. (2022). All the news that’s
Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing fit to fabricate: AI-generated text as a tool of media misinforma-
user base - Analyst note. Reuters. https://www.reuters.com/ tion. Journal of Experimental Political Science, 9(1), 104–117.
techno logy/chatgp t-s ets-r ecord-fastes t-g rowin g-u ser-b ase- https://doi.org/10.1017/XPS.2020.37
analyst-note-2023-02-01/ Kühl, N., Schemmer, M., Goutier, M., & Satzger, G. (2022). Artificial
Huang, S., & Grady, P. (2022). Generative AI: A Creative New intelligence and machine learning. Electronic Markets, 32(4),
World. Sequoia. https://w ww.s equoi acap.c om/a rticl e/gener 2235–2244. https://doi.org/10.1007/s12525-022-00598-0
ative-ai-a-creative-new-world/ LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature,
Hughes, A. (2023). Why AI-generated hands are the stuff of night- 521(7553), 436–444. https://doi.org/10.1038/nature14539
mares, explained by a scientist. BBC Science Focus. https:// Lehmann, F., & Buschek, D. (2020). Examining autocompletion as a
www.sciencefocus.com/future-technology/why-ai-generated- basic concept for interaction with generative AI. I-Com, 19(3),
hands-are-the-stuff-of-nightmares-explained-by-a-scientist/ 251–264. https://doi.org/10.1515/icom-2020-0025
Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. Leiker, D., Gyllen, A. R., Eldesouky, I., & Cukurova, M. (2023). Gen-
(2023a). Co-writing with opinionated language models affects erative AI for learning: Investigating the potential of synthetic
users’ views. In A. Schmidt, K. Väänänen, T. Goyal, P. O. learning videos. In 24th International Conference of Artificial
Kristensson, A. Peters, S. Mueller, J. R. Williamson, & M. Intelligence in Education (AIED 2023), Tokyo, Japan.
L. Wilson (Eds.), Proceedings of the 2023 CHI Conference Li, H. (2022). Language models. Communications of the ACM, 65(7),
on Human Factors in Computing Systems (pp. 1–15). ACM. 56–63. https://doi.org/10.1145/3490443
https://doi.org/10.1145/3544548.3581196. Li, J., Li, M., Wang, X., & Thatcher, J. B. (2021). Strategic directions
Jakesch, M., Hancock, J. T., & Naaman, M. (2023b). Human heuris- for AI: The role of CIOs and boards of directors. MIS Quarterly,
tics for AI-generated language are flawed. Proceedings of the 45(3), 1603–1644. https://doi.org/10.25300/MISQ/2021/16523
National Academy of Sciences of the United States of America, Li, M., Bao, X., Chang, L., & Gu, T. (2022). Modeling personalized
120(11), e2208839120. https://d oi.o rg/1 0.1 073/p nas.2 2088 representation for within-basket recommendation based on deep
39120 learning. Expert Systems with Applications, 192, 116383. https://
Janiesch, C., Zschech, P., & Heinrich, K. (2021). Machine learning and doi.org/10.1016/j.eswa.2021.116383
deep learning. Electronic Markets, 31(3), 685–695. https://doi. Lins, S., Pandl, K. D., Teigeler, H., Thiebes, S., Bayer, C., & Suny-
org/10.1007/s12525-021-00475-2 aev, A. (2021). Artificial intelligence as a service. Business &
Jasper. (2022). ChatGPT vs. Jasper: How it’s different from Jasper Information Systems Engineering, 63(4), 441–456. https://doi.
chat. https://www.jasper.ai/blog/what-is-chatgpt org/10.1007/s12599-021-00708-w
Jebara, T. (2004). Generative versus discriminative learning. In T. Liu, V., & Chilton, L. B. (2022). Design guidelines for prompt engi-
Jebara (Ed.), Machine Learning (pp. 17–60). Springer US. neering text-to-image generative models. In S. Barbosa, C.
https://doi.org/10.1007/978-1-4419-9011-2_2 Lampe, C. Appert, D. A. Shamma, S. Drucker, J. Williamson,
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. & K. Yatani (Eds.), CHI Conference on Human Factors in
J., Madotto, A., & Fung, P. (2023). Survey of hallucination in Computing Systems (pp. 1–23). ACM. https://doi.org/10.1145/
natural language generation. ACM Computing Surveys, 55(12), 3491102.3501825
1–38. https://doi.org/10.1145/3571730 Longoni, C., Fradkin, A., Cian, L., & Pennycook, G. (2022). News
Jin, Y., Jang, E., Cui, J., Chung, J.‑W., Lee, Y., & Shin, S. (2023). from generative artificial intelligence is believed less. In 2022
DarkBERT: A language model for the dark side of the Internet. ACM Conference on Fairness, Accountability, and Transpar-
In 61st Annual Meeting of the Association for Computational ency (pp. 97–106). ACM. https://doi.org/10.1145/3531146.
Linguistics (ACL’23), Toronto, Canada. 3533077
Johnson, D. G., & Verdicchio, M. (2017). AI Anxiety. Journal of the Lukyanenko, R., Maass, W., & Storey, V. C. (2022). Trust in arti-
Association for Information Science and Technology, 68(9), ficial intelligence: From a Foundational Trust Framework to
2267–2270. https://doi.org/10.1002/asi.23867 emerging research opportunities. Electronic Markets, 32(4),
Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, 1993–2020. https://doi.org/10.1007/s12525-022-00605-4
O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., &
Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, Wang, Z. (2023). ChatGPT and a new academic reality: Arti-
A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., & Has- ficial intelligence-written research papers and the ethics of the
sabis, D. (2021). Highly accurate protein structure prediction large language models in scholarly publishing. Journal of the
with AlphaFold. Nature, 596(7873), 583–589. https://doi.org/ Association for Information Science and Technology, 74(5),
10.1038/s41586-021-03819-2 570–581. https://doi.org/10.1002/asi.24750
Kingma, D. P., & Welling, M (2014). Auto-encoding variational Bayes. Lysyakov, M., & Viswanathan, S. (2022). Threatened by AI: Analyz-
International Conference on Learning Representations 2021 ing users’ responses to the introduction of AI in a crowd-sourc-
(ICLR), Banff, Canada. ing platform. Information Systems Research, 34(3). Advance
Kingma, D. P., Mohamed, S., Jimenez Rezende, D., & Welling, M. online publication. https://doi.org/10.1287/isre.2022.1184
(2014).Semi-supervised learning with deep generative models. Mayahi, S., & Vidrih, M. (2022). The impact of generative AI on the
In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, & K. Q. future of visual content marketing. https://doi.org/10.48550/
Weinberger (Eds.), Advances in Neural Information Processing arXiv.2211.12660
Systems 27 (Vol. 27). Curran Associates, Inc.
13
Electronic Markets (2023) 33:63 Page 15 of 17 63
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, Educator, 78(1), 84–93. https://doi.org/10.1177/1077695822
A. (2022). A survey on bias and fairness in machine learn- 1149577
ing. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10. Pentina, I., Hancock, T., & Xie, T. (2023). Exploring relationship
1145/3457607 development with social chatbots: A mixed-method study of
Meske, C., Abedin, B., Klier, M., & Rabhi, F. (2022). Explain- replika. Computers in Human Behavior, 140, 107600. https://
able and responsible artificial intelligence. Electronic doi.org/10.1016/j.chb.2022.107600
Markets, 32(4), 2103–2106. https:// d oi. o rg/ 1 0. 1 007/ Perez, F., & Ribeiro, I. (2022). Ignore previous prompt: Attack tech-
s12525-022-00607-2 niques for language models. In D. Hendrycks, V. Krakovna, D.
Microsoft. (2023). Microsoft and OpenAI extend partnership. https:// Song, J. Steinhardt, & N. Carlini (Chairs), Thirty-sixth Confer-
blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiex ence on Neural Information Processing Systems (NeurIPS),
tendpartnership/ Virtual.
Miller, T. (2019). Explanation in artificial intelligence: Insights from Piccialli, F., Di Cola, V. S., Giampaolo, F., & Cuomo, S. (2021). The
the social sciences. Artificial Intelligence, 267, 1–38. https://doi. role of artificial intelligence in fighting the COVID-19 pan-
org/10.1016/j.artint.2018.07.007 demic. Information Systems Frontiers : A Journal of Research
Mirbabaie, M., Brünker, F., Möllmann Frick, N. R. J., & Stieglitz, S. and Innovation, 23(6), 1467–1497. https://doi.org/10.1007/
(2022). The rise of artificial intelligence – Understanding the s10796-021-10131-x
AI identity threat at the workplace. Electronic Markets, 32(1), Poole, B., Jain, A., Barron, J. T., & Mildenhall, B. (2023). DreamFu-
73–99. https://doi.org/10.1007/s12525-021-00496-x sion: Text-to-3D using 2D diffusion. In Eleventh International
Mirsky, Y., & Lee, W. (2022). The creation and detection of deepfakes. Conference on Learning Representations (ICLR 2023), Kigali,
ACM Computing Surveys, 54(1), 1–41. https://doi.org/10.1145/ Rwanda.
3425780 Qiu, L., & Benbasat, I. (2005). An investigation into the effects of text-
Mondal, S., Das, S., & Vrana, V. G. (2023). How to bell the cat? A to-speech voice and 3D avatars on the perception of presence and
theoretical review of generative artificial intelligence towards flow of live help in electronic commerce. ACM Transactions on
digital disruption in all walks of life. Technologies, 11(2), 44. Computer-Human Interaction, 12(4), 329–355. https://doi.org/
https://doi.org/10.3390/technologies11020044 10.1145/1121112.1121113
Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How percep- Raj, M., Berg, J., & Seamans, R. (2023). Art-ificial intelligence: The
tions of intelligence and anthropomorphism affect adoption of effect of AI disclosure on evaluations of creative content. arXiv.
personal intelligent agents. Electronic Markets, 31(2), 343–364. https://doi.org/10.48550/arXiv.2303.06217
https://doi.org/10.1007/s12525-020-00411-w Ray, S. (2019). A quick review of machine learning algorithms. In 2019
Murphy, C., & Thomas, F. P. (2023). Generative AI in spinal cord International Conference on Machine Learning, Big Data, Cloud
injury research and care: Opportunities and challenges ahead. and Parallel Computing (COMITCon) (pp. 35–39). IEEE. https://
The Journal of Spinal Cord Medicine, 46(3), 341–342. https:// doi.org/10.1109/COMITCon.2019.8862451
doi.org/10.1080/10790268.2023.2198926 Riedl, R. (2022). Is trust in artificial intelligence systems related to user
Nichol, A., Jun, H., Dhariwal, P., Mishkin, P., & Chen, M. (2022). personality? Review of empirical evidence and future research
Point-E: A system for generating 3D point clouds from complex directions. Electronic Markets, 32(4), 2021–2051. https://d oi.o rg/
prompts. arXiv. https://doi.org/10.48550/arXiv.2212.08751 10.1007/s12525-022-00594-4
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, Rix, J., & Hess, T. (2023). From “handmade” to “AI-made”: Mitigat-
M.-E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, ing consumers’ aversion towards AI-generated textual products.
E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, AMCIS 2023 Proceedings.
F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B.
C., & Staab, S. (2020). Bias in data-driven artificial intelligence (2022). High-resolution image synthesis with latent diffusion
systems—An introductory survey. WIREs Data Mining and models. In 2022 IEEE/CVF Conference on Computer Vision and
Knowledge Discovery, 10(3), e1356. https://doi.org/10.1002/ Pattern Recognition (CVPR) (pp. 10674–10685). IEEE. https://
widm.1356 doi.org/10.1109/CVPR52688.2022.01042
OpenAI. (2023). GPT-4 technical report. arXiv. https://doi.org/10. Ruthotto, L., & Haber, E. (2021). An introduction to deep generative
48550/arXiv.2303.08774 modeling. GAMM-Mitteilungen, 44(2), e202100008. https://doi.
Oppenlaender, J. (2022). The creativity of text-to-image generation. org/10.1002/gamm.202100008
Proceedings of the 25th International Academic Mindtrek Con- Samtani, S., Zhu, H., Padmanabhan, B., Chai, Y., Chen, H., &
ference (pp. 192–202). ACM. https://doi.org/10.1145/3569219. Nunamaker, J. F. (2023). Deep learning for information systems
3569352 research. Journal of Management Information Systems, 40(1),
Ouyang, L., Wu, J, Jiang, X., Almeida, D., Wainwright, C. L., 271–301. https://doi.org/10.1080/07421222.2023.2172772
Mishkin, P., Zhang, C, Agarwal, S., Slama, K., Ray, A., Schul- Schneider, J., Seidel, S., Basalla, M., & vom Brocke, J. (2023). Reuse,
man, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., reduce, support: Design Principles for green data mining. Busi-
Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Train- ness & Information Systems Engineering, 65(1), 65–83. https://
ing language models to follow instructions with human feedback. doi.org/10.1007/s12599-022-00780-w
https://doi.org/10.48550/arXiv.2203.02155 Schoormann, T., Strobel, G., Möller, F., Petrik, D., & Zschech, P.
Pan, Z., Yu, W., Yi, X., Khan, A., Yuan, F., & Zheng, Y. (2019). Recent (2023). Artificial intelligence for sustainability - A systematic
progress on generative adversarial networks (GANs): A survey. review of information systems literature. Communications of the
IEEE Access, 7, 36322–36333. https://d oi.o rg/1 0.1 109/A
CCESS. Association for Information Systems, 52(1), 199–237. https://d oi.
2019.2905015 org/10.17705/1CAIS.05209
Patterson, D. W. (1990). Introduction to artificial intelligence and Schramowski, P., Turan, C., Andersen, N., Rothkopf, C. A., & Ker-
expert systems. Prentice Hall. sting, K. (2022). Large pre-trained language models contain
Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the human-like biases of what is right and wrong to do. Nature
implications of generative artificial intelligence for journal- Machine Intelligence, 4(3), 258–268. https://doi.org/10.1038/
ism and media education. Journalism & Mass Communication s42256-022-00458-8
13
63 Page 16 of 17 Electronic Markets (2023) 33:63
Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C. W., Wight- van Slyke, C., Johnson, R., & Sarabadani, J. (2023). Generative artifi-
man, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Worts- cial intelligence in information systems education: Challenges,
man, M., Schramowski, P., Kundurthy, S. R., Crowson, K., consequences, and responses. Communications of the Associa-
Schmidt, L., Kaczmarczyk, R., & Jitsev, J. (2022). LAION-5B: tion for Information Systems, 53(1), 1–21. https://doi.org/10.
An open large-scale dataset for training next generation image- 17705/1CAIS.05301
text models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Bel- Vasist, P. N., & Krishnan, S. (2022). Deepfakes An integrative review
grave, K. Cho, & A. Oh (Eds.), Advances in neural information of the literature and an agenda for future research. Communica-
processing systems 35. Curran Associates, Inc. tions of the Association for Information Systems, 51, 590–636.
Selz, D. (2020). From electronic markets to data driven insights. https://doi.org/10.17705/1CAIS.05126
Electronic Markets, 30(1), 57–59. https://d oi.o rg/1 0.1 007/ Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez,
s12525-019-00393-4 A. N., Kaiser, U., & Polosukhin, I. (2017). Attention is all you
Smits, J., & Borghuis, T. (2022). Generative AI and intellectual prop- need. In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R.
erty rights. In B. Custers & E. Fosch-Villaronga (Eds.), Informa- Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neu-
tion Technology and Law Series. Law and Artificial Intelligence ral information processing systems 30 (pp. 5999–6009). Curran
(Vol. 35, pp. 323–344). T.M.C. Asser Press. https://doi.org/10. Associates Inc.
1007/978-94-6265-523-2_17 Walters, W. P., & Murcko, M. (2020). Assessing the impact of genera-
Stability.ai. (2023). Stability AI launches the first of its StableLM suite tive AI on medicinal chemistry. Nature Biotechnology, 38(2),
of language models. https://s tabil ity.a i/b log/s tabil ity-a i-l aunch es- 143–145. https://doi.org/10.1038/s41587-020-0418-2
the-first-of-its-stablelm-suite-of-language-models Wang, C., Chen, S., Wu, Y., Zhang, Z., Zhou, L., Liu, S., Chen, Z., Liu,
Strobel, G., Banh, L., Möller, F., & Schoormann, T. (2024). Explor- Y., Wang, H., Li, J., He, L., Zhao, S., & Wei, F. (2023). Neural
ing generative artificial intelligence: A taxonomy and types. codec language models are zero-shot text to speech synthesizers.
In Hawaii International Conference on System Sciences 2024 arXiv. https://doi.org/10.48550/arXiv.2301.02111
(HICSS 2024), Hawaii, USA. Wanner, J., Herm, L.-V., Heinrich, K., & Janiesch, C. (2022). The
Strobel, G., Schoormann, T., Banh, L., & Möller, F. (2023). Artificial effect of transparency and trust on intelligent system acceptance:
intelligence for sign language translation – A design science Evidence from a user-based study. Electronic Markets, 32(4),
research study. Communications of the Association for Informa- 2079–2102. https://doi.org/10.1007/s12525-022-00593-5
tion Systems, 53(1), 42–64. https://doi.org/10.17705/1CAIS.05303 Wei, R., & Mahmood, A. (2021). Recent advances in variational
Sun, J., Liao, Q. V., Muller, M., Agarwal, M., Houde, S., Talamadu- autoencoders with representation learning for biomedical infor-
pula, K., & Weisz, J. D. (2022). Investigating explainability of matics: A survey. IEEE Access, 9, 4939–4956. https://d oi.o rg/1 0.
generative AI for code through scenario-based design. In 27th 1109/ACCESS.2020.3048309
International Conference on Intelligent User Interfaces (pp. Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P.‑S., Mel-
212–228). ACM. https://doi.org/10.1145/3490099.3511119 lor, J., Glaese, A., Cheng, M., Balle, B., Kasirzadeh, A., Biles,
Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus C., Brown, S., Kenton, Z., Hawkins, W., Stepleton, T., Birhane,
effect of generative AI: Charting the path for responsible con- A., Hendricks, L. A., Rimell, L., Isaac, W., Gabriel, I. (2022).
duct of scholarly activities in information systems. Information Taxonomy of risks posed by language models. In 2022 ACM
Systems Research, 34(2), 399–408. https://doi.org/10.1287/isre. Conference on Fairness, Accountability, and Transparency (pp.
2023.ed.v34.n2 214–229). ACM. https://doi.org/10.1145/3531146.3533088
Synthesia. (2023). Synthesia | #1 AI Video Generation Platform. Weisz, J., Muller, M., He, J., & Houde, S. (2023). Toward general
https://www.synthesia.io/ design principles for generative AI applications. In 4th Work-
Teubner, T., Flath, C. M., Weinhardt, C., van der Aalst, W., & Hinz, shop on Human-AI Co-Creation with Generative Models, Syd-
O. (2023). Welcome to the era of ChatGPT et al.: The pros- ney, Australia.
pects of large language models. Business & Information Weng, S.-S., & Chen, H.-C. (2020). Exploring the role of deep learning
Systems Engineering, 65, 95–101. https://d oi.o rg/1 0.1 007/ technology in the sustainable development of the music produc-
s12599-023-00795-x tion industry. Sustainability, 12(2), 625. https://doi.org/10.3390/
The Washington Post. (2022). The Google engineer who thinks the su12020625
company’s AI has come to life. https://w ww.w ashin gtonp ost.c om/ Wessel, M., Adam, M., Benlian, A., Majchrzak, A., & Thies, F. (2023).
technology/2022/06/11/google-ai-lamda-blake-lemoine/ Call for papers to the special issue: Generative AI and its tran-
Tomczak, J. M. (2022). Deep generative modeling. Springer Interna- formative value for digital platforms. Journal of Management
tional Publishing. https://doi.org/10.1007/978-3-030-93158-2 Information Systems. https://www.jmis-web.org/cfps/JMIS_SI_
Tomitza, C., Schaschek, M., Straub, L., & Winkelmann, A. (2023). CfP_Generative_AI.pdf
What is the minimum to trust AI?—A requirement analysis Willcocks, L. (2020). Robo-Apocalypse cancelled? Reframing the
for (generative) AI-based texts. Wirtschaftsinformatik 2023 automation and future of work debate. Journal of Information
Proceedings. Technology, 35(4), 286–302. https://d oi.o rg/1 0.1 177/0 26839 6220
van den Broek, E., Sergeeva, A., & Huysman Vrije, M. (2021). When 925830
the machine meets the expert: An ethnography of developing AI Winston, P. H. (1993). Artificial intelligence (3. ed., reprinted with
for hiring. MIS Quarterly, 45(3), 1557–1580. https://doi.org/10. corr). Addison-Wesley.
25300/MISQ/2021/16559 Yang, R., & Wibowo, S. (2022). User trust in artificial intelligence:
van Dun, C., Moder, L., Kratsch, W., & Röglinger, M. (2023). Pro- A comprehensive conceptual framework. Electronic Markets,
cessGAN: Supporting the creation of business process improve- 32(4), 2053–2077. https://doi.org/10.1007/s12525-022-00592-6
ment ideas through generative machine learning. Decision Sup- Zhan, F., Yu, Y., Wu, R., Zhang, J., Lu, S., Liu, L., Kortylewski, A.,
port Systems, 165, 113880. https://doi.org/10.1016/j.dss.2022. Theobalt, C., & Xing, E. (2021). Multimodal Image Synthesis
113880 and Editing: A Survey. arXiv. https://doi.org/10.48550/arXiv.
2112.13592
13
Electronic Markets (2023) 33:63 Page 17 of 17 63
Zhang, C., Zhang, C., Zhang, M., & Kweon, I. S. (2023a). Text-to- evaluating algorithmic and human solutions. In A. Schmidt, K.
image diffusion models in generative AI: A survey. arXiv. https:// Väänänen, T. Goyal, P. O. Kristensson, A. Peters, S. Mueller,
doi.org/10.48550/arXiv.2303.07909 J. R. Williamson, & M. L. Wilson (Eds.), Proceedings of the
Zhang, D., Li, W., Niu, B., & Wu, C. (2023b). A deep learning 2023 CHI Conference on Human Factors in Computing Systems
approach for detecting fake reviewers: Exploiting reviewing (pp. 1–20). ACM. https://doi.org/10.1145/3544548.3581318
behavior and textual information. Decision Support Systems, 166,
113911. https://doi.org/10.1016/j.dss.2022.113911 Publisher's Note Springer Nature remains neutral with regard to
Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & Choudhury, M. de (2023). jurisdictional claims in published maps and institutional affiliations.
Synthetic lies: Understanding AI-generated misinformation and
13