
The use of Generative AI in the domain of human creations – a case for co-evolution?


Karsten Böhm1 and Lisa-Maria Schedlberger1
1 FH Kufstein Tirol – University of Applied Sciences, Andreas-Hofer-Str. 7, 6330 Kufstein, Austria

Abstract
The appearance and sudden success of generative technologies, and within this domain Large Language Models (LLMs), have raised many open questions about the benefits and challenges of these technologies and about how to deal with them in the future. They bring a level of dialogue ability and contextual understanding that is new to the interaction between human users and machines. This contribution uses the concept of co-evolution, originally from the field of biology, to explore some of the implications of the new technologies, both at an individual and at a collective level. The perspective of co-evolution is reflected in the application areas of Software Development and the (Higher) Education context to provide more detailed insights.

Keywords
Generative Technologies, Human Creativity, Co-evolution, Software Development, Higher Education

1. Introduction
Generative technologies are a type of sub-symbolic, unsupervised machine learning algorithm that gained much attention recently due to their impressive ability to act on complex and heterogeneous input data: they not only process that information, for example to classify or interpret it, but also generate new data that is suitable for the given task. The general concept has been around for some years, initially described by Google researchers [1]. Designed originally for the machine translation of texts, the so-called transformer models follow the idea of learning the context of a given text sequence in the source language and mapping it to the target language. Since the models were trained on very large amounts of text data, they became known as Large Language Models, or LLMs for short. The researchers of the company OpenAI built on the initial idea and developed their Generative Pretrained Transformers (GPT) [2]; finally, the release of their GPT-3 model [3] with a chat interface brought LLMs into the consciousness of a broad community of users. Since then, further development has led to the current version GPT-4 from OpenAI. Similar models have been used for other data types, such as images, videos and sound [4]. In the meantime, several applications integrate these functionalities as assistance into existing tools (like search engines, e.g., Microsoft Bing, or photo editing software, e.g., Adobe Firefly). More advanced applications of the approach are already appearing, such as Vision-Language-Action models (VLA models), which use aggregated transformer models to chain prompts and simulate reasoning for complex robotics [5].
This contribution will focus on the socio-technical aspects of applying the technologies,
subsuming them under the term Generative Artificial Intelligence (Generative-AI).

Proceedings of The 9th International Conference on Socio-Technical Perspectives in IS (STPIS'23), 27–28 October 2023, Portsmouth, UK
[email protected] (K. Böhm); [email protected] (L.-M. Schedlberger)
0000-0002-2950-7433 (K. Böhm); 0009-0006-3560-4594 (L.-M. Schedlberger)
© 2023 Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)

The uniqueness of the new type of software systems can be summarized in three main qualities
that are unmatched compared to earlier ICT systems:

1) The ability of those software systems to interact with users in natural language (via so-called "prompts") at a level close to how humans communicate with each other, including errors, omissions and vague expressions. While natural language interfaces have been around for some time, this level of interaction and understanding was never reached before, thus representing a new quality.
2) The ability of generative technologies to take context into account. The contextual processing of information is often the key to understanding information correctly, e.g., resolving homonyms and references to objects or persons in human language. While context helps to understand natural language in general, contextual processing also extends to the discourse with the user. That conversational context enables a discourse with a history that both the machine and the human user can refer to, much like the communication between human actors (a minimal sketch of such a conversation history follows this list).
3) Finally, since the systems have been trained with massive amounts of data and are very large models, they can be used for non-trivial tasks and produce interesting, complex and original results. While not perfect and sometimes producing erroneous output (so-called "hallucinations"), the potential of the systems is already being demonstrated in several applications [6].
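
To make the second quality more concrete, the following minimal sketch illustrates one common way such conversational context can be kept: the full message history is re-sent with every turn, so later prompts can refer back to earlier ones. It is illustrative only and does not reproduce the API of any particular vendor; the function call_llm() is a hypothetical stand-in for whatever model backend is used.

from dataclasses import dataclass, field


def call_llm(history: list[dict]) -> str:
    # Hypothetical stand-in for a real model backend; an actual system would
    # send the whole history to an LLM service and return its reply.
    return f"(model reply that can refer to {len(history)} earlier message(s))"


@dataclass
class Conversation:
    # Alternating user/assistant turns, kept for the lifetime of the session.
    history: list = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        self.history.append({"role": "user", "content": prompt})
        # The complete history, not just the last prompt, is passed to the model;
        # this is what enables references to earlier parts of the discourse.
        answer = call_llm(self.history)
        self.history.append({"role": "assistant", "content": answer})
        return answer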

These three main qualities lead to a new type of socio-technical systems in which human users
can interact with ICT in a way and at a level that can lead to impressive results in a very short
time – it could be seen as an amplifier for human creative work. The technology is probably at the
beginning of its development and will build more momentum soon.
Based on those observations, the research question for this contribution is how those
technologies will impact the relation between the human user and the technical system both on
an individual and at a collective level (sociotechnical impacts). The authors are especially
interested in the impact on creative aspects of human activities, and here, the sub-question is how
those impacts can be estimated. To shed some light on the relation, the concept of co-evolution,
borrowed from biology, will be used to describe these sociotechnical impacts.
The sociotechnical impacts of the current developments of Generative AI are not yet fully understood and, in the context of co-evolution, challenging to visualize convincingly. As illustrated in Figure 1 below, using Generative AI tools to generate such an illustration yields a number of images that all share the common idea of two systems or worlds interacting with each other and almost melting into something new – which might, in the end, not be a bad metaphor to start the further discussion of the subject.

Figure 1: Some results of an experiment to create illustrations on the topic of co-evolution in the socio-technical context, generated by Microsoft Bing Image Search (based on Dall-E from OpenAI) using iterative prompting by the authors.

The rest of this position paper is structured as follows: In the following section, the concept of co-evolution is introduced and related to the domain of generative technologies. After that, two application areas are discussed that are considered knowledge-intensive and involve a high degree of creative work: the domain of software development and the education sector. The paper concludes with a discussion and a summary that also indicates the next steps of this research.

2. The concept of co-evolution in the domain of Generative AI


Co-evolution [7] is a concept that originally stems from the field of Biology and describes the phenomenon of reciprocal adaptation between populations of different species during evolution, which eventually leads to a mutual dependency. A typical example of co-evolution is the relationship between plants and butterflies: plants rely on butterflies for their reproduction process (pollination), and butterflies rely on (certain) plants as a (primary) food source. The positive development of the members of one species (evolution) will benefit the members of the other species and vice versa. Over time, a mutual dependency is created that connects their evolutionary processes; this effect was called co-evolution. Another example is hermit crabs living with Calliactis sea anemones on their shells, which provides protection (for the hermit crab) and a food source (for the anemone). Different types of such co-dependent development are known, e.g., parasitic/predatory, competitive or symbiotic/mutualistic co-evolution, occurring between species (interspecific) or within a species (intraspecific) [8].
The phenomenon has been studied intensively in the field, and a good overview can be found
in [8]. Interestingly, the concept has also been applied to other sectors, e.g., sociology, economy
and computer science. A general approach to the co-evolution of humankind and machines is provided in [9], where the author elaborates on this intertwined nature in chapter 14 of the book in a sense similar to how the authors of this contribution understand it. However, since the book was written in 2020, before the massive advent of Generative AI, many of the approaches mentioned there can be understood even better now that parts of that co-evolution process are actually unfolding.
It seems to be a universal pattern that the evolution (i.e., iterative development) of different types of systems can benefit in terms of quality and speed when the systems interrelate with each other – and this seems to be true for the relation between humans and machines, too. Those benefits come at the price of a (possibly very strong) mutual dependency of the two (or more) systems, which could be a constraint with existential consequences. If one system (not only an individual) fails, the other system (and all its individuals) might fail too (and go extinct in the field of Biology). The fact that co-evolution appears in nature again and again in very different ecosystems might indicate that the advantages outweigh the drawbacks. An interesting approach in the context of book authoring was taken by Chris Duffey in 2019 – before the advent of ChatGPT – in his book on superhuman innovation [10], in which he included responses of an algorithm that helped him during the research for the book as a sort of co-author. This approach made sense in some way, since the author claimed that technology would massively support human abilities for innovation in the near future. Still, it seemed an at least unusual approach to authoring back then. Now, only a couple of years later, it does not seem strange at all and might even be a good example in terms of transparency, since the author clearly distinguished between the algorithm's content and his own contributions.
In the domain of information technology, the co-evolutionary phenomenon can also be observed; one example is the Internet and the dependency of our information acquisition strategies on it, including sourcing (e-commerce) and retrieval services (search engines). Mobile technologies – most prominently the smartphone – have become an almost irreplaceable universal gadget of our daily lives for many activities. Navigation systems have (almost) replaced traditional maps and improved our ability to orient ourselves in unknown locations using digital systems. On the other hand, they have harmed our (intuitive) ability to orient ourselves without them. While the advantages and disadvantages of technology use have been widely discussed, they have rarely been put under the co-evolution scheme as a socio-technical explanation model [11].

On the other hand, up to now, all of those supportive technologies have operated at the level of assistive tooling, usually limited to one specific domain of our lives, and not at the level of advanced cognitive or creative tasks. This might be different for Generative AI due to the three general qualities mentioned in the Introduction above. Generative AI technologies are more universal and closer to human communication (language and explorative, iterative, trial-and-error behavior). This raises a general question about the generated content or artifact type: What are the qualities of an original creation? Such inventive, innovative or creative power has – up to now – been exclusively associated with human actors, but the borders get blurred by Generative AI, which urges new answers. Co-evolution might help to answer some of those questions or provide explanations for developments that can be observed in the pronounced use of the technology. This contribution does not fundamentally expand or refine co-evolution theory itself. To the best of the authors' knowledge, the connection between co-evolution as a concept and the relation between the human user and the Generative AI system has not been pointed out by others. Therefore, creating awareness of that connection might be helpful and is put up for discussion with this contribution.
One of those answers could be a co-evolutionary pattern that drives evolution in both
directions and creates original and creative work with a new quality:

• From the Generative AI to the human user: the use of Generative AI already influences the creative power of the human user by quantitative means (producing results faster) and by qualitative means (producing better results, especially in those areas in which the human user is not an expert). In this way, the use of Generative AI can improve (or amplify) his/her creative abilities (effectively and efficiently) in a way and to an extent that, over time, might become essential for the human user, e.g., the human user loses his/her capability to be creative in a similar way without it. The looming and increasing dependency would be an indication of a co-evolution.
• From the human user to the Generative AI: the human might influence the learning behavior of the Generative AI over time, especially if the interaction becomes more individual and more local (models being built/trained for the individual user and/or application). If that interaction and the adaptation that comes with it persist over a longer time, the specific Generative AI system might become dependent on the user (or user community or use case) it is interacting with, creating the opportunity to excel in the interaction with those users but also becoming dependent on them. Once again, this would indicate the pattern of co-evolution.

2.1. Requirements for a co-evolution effect of Generative AI

Currently, the use of Generative AI, which has gained traction with a massive user base, focuses on exploring and applying new functionalities in existing application areas. Following the idea of Oppermann [12], the application of Generative AI could be separated into two areas or spaces [13]; see Figure 2 below. 1) The Application Space, in which the new functionalities are mainly used to augment, replace or otherwise support problems and use cases that are already known – most application areas currently explore this space. That strategy is straightforward and expected. Results are already remarkable, but the more interesting results will probably evolve in another area. Development of the Generative AI system (e.g., how to interpret the context in an ongoing dialogue correctly) and of the user (e.g., how to apply the functionality of a Generative AI system effectively and efficiently) can occur in the context of the situational task but is likely to stay within the boundaries of the existing application framing. These developments could be considered an evolution but are probably not so tightly coupled that they can be called a co-evolutionary development that exhibits a mutual dependency of both systems.

Figure 2: Separation of the action space of Generative AI solutions into the area where co-
creational aspects are possible (“Novelty Space”) and the other area that focuses on the use of
the technology (“Application Space”) [13]

2) The Novelty Space, in which Generative AI is used to extend and expand the level of human creation in ways that were not possible or even unthinkable before the advent of Generative AI due to the limitations of previous technology. This space will evolve over time and probably represents the more innovative area, since it is not bound to existing use cases but to new ones that emerge while this space is being explored.
Co-evolution is more likely to occur in the Novelty Space since the intensified and open-
minded application will lead to the co-evolution of Generative AI technologies and human users
over time. It is important to stress that human users need to go beyond the pure application
within the existing frame of applications to preserve the creative potential and the power to
create original works that are not only an imitation or recombination of existing prior work. New
technology has often inspired (a new generation of) artists to express themselves or their ideas
in novel ways that were not possible before. As Lee nicely puts it: “It has never been the case that
the art is created by a paintbrush, but a good paintbrush can make a big difference.” [9].

2.2. Co-evolution and diversity of Generative AI

Current Generative AI systems are trained with massive amounts of data using large-scale and
often specialized infrastructures. Consequently, the resulting models are large and run on
centralized services of large cloud providers (such as OpenAI, Microsoft or Google). They are
provided as a service (SaaS) or a platform (PaaS) that can be extended or integrated, e.g., by using
APIs or providing customized services. Such a service model provides the advantages of rapid
use, good scalability and ready-to-use services. However, it also has the disadvantage of a
centralized provider that might lead to a vendor lock-in effect and the need to use always-online services.
Moreover, the centralized services use the same model for every user and every use case.
Learning and adaptation are only achieved based on the prompts and responses as contextual
information within the user session.
From a sociotechnical perspective, such an application model could drive toward centralized and monopolized infrastructures, which might lead to difficult situations as a critical mass of power could be concentrated in a single application/technology or organization. Co-evolution in this situation would only move in one or a few directions, which limits its evolutionary power – in the field of Biology, this would represent a monoculture, which is known to weaken an ecosystem and make it less resilient against change or negative influences. As a result, the diversity of such a world of centralized Generative AI systems would be low and the evolutionary potential limited.
A world with a limited number of models managed, controlled and owned by a few organizations could also be a problem, especially if human society cannot obtain any control over them (e.g., for the case of publicly owned organizations). Too little diversity in the models

leads to a monopoly of Generative AI that might cause social problems, e.g., if access to the technology is limited and thus the creative potential of some humans is significantly lowered – they are excluded from the benefits of a co-evolutionary development.
If, on the other hand, the technology scales down in terms of the resources needed to train and run Generative AI systems (locally), the result would be a higher number of systems that differ with respect to the model and the use case. The world of Generative AI would become more diverse and could explore more paths of evolution and co-evolution with the different user groups: decentralized models lead to a more diverse ecosystem of Generative AI systems.
Moreover, the users would gain control over the use of 'their' generative AI model, e.g., they could decide when and for how long to use it. The lifecycle of the model is determined by the user and not by the provider, which becomes more important if the model is more contextualized than a central model and if it has a longer lifecycle in which it can accumulate more input data that improves it over time: decentralized Generative AI systems can be more personal, and their lifecycle is under the control of their owners.
In a way, the decentralization of Generative AI can be understood as a democratization of the technology that is beneficial for society in the long term, as it lowers the danger of power misuse by centralized solutions while opening up the full potential of a co-evolutionary development. First movements in that direction can already be seen with the Llama and Llama 2 models of Meta [14], the genuinely open models of initiatives such as EleutherAI [15], and approaches that make it easier to run LLMs locally, such as GPT4All [16]. Although they cannot yet compete with their commercial counterparts, they also exhibit a fast development pace with significant improvements in short cycles.
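
As an illustration of this decentralization trend, the following minimal sketch shows how a locally run open model might be invoked. It assumes the Python bindings of the GPT4All project mentioned above [16]; the model file name is purely illustrative and depends on what the user has downloaded to local hardware.

from gpt4all import GPT4All  # assumes the gpt4all Python bindings are installed

# The model file name is illustrative; any locally downloaded model can be used.
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")

# Inference runs entirely on local hardware; no centralized service is involved,
# and the lifecycle of the model stays under the control of its owner.
response = model.generate("Explain co-evolution in one sentence.", max_tokens=60)
print(response)

In such a setup, the user decides when the model is updated, replaced or retired, which is exactly the kind of control discussed above.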

2.3. Social impacts of co-evolutionary Generative AI

Generative AI, at its current development speed and level of user adoption, is likely to initiate a co-evolutionary pattern that will have an impact on both the technology and the human users – individually but also at a collective level. Thus, it is likely that there will be social impacts soon, which are not predictable due to their disruptive nature. It might be useful to look back into the history of technology development to anticipate some of the societal reactions that are likely to occur or are already occurring.
One illustrating example was the machine-breaking movements during the Industrial Revolution in the 19th century, when the appearance of mechanical weaving looms represented a danger to the existing craft of hand weaving, endangering the work and economic existence of many people in that industry at the time. The so-called Luddites destroyed the (new) technology – the mechanical looms and the factories – to fight the technology's societal impacts (e.g., lost jobs, lower wages, lost opportunities for skilled workers). History tells us that the resistance of the Luddites did not succeed in the long term and that the negative effects of the introduction of the new technology were outweighed by its advantages (e.g., new jobs created, lowered prices that made products affordable for a larger group of customers).
Furthermore, while the historical example lies far in the past, the general role of a Luddite remains alive in our times at an individual level, at a group level and even at the level of a society. Today, skepticism towards and inactivity regarding the (possible) impacts of Generative AI also create the danger that inevitable developments will be recognized too late and possibly addressed in the wrong direction – against the technology and not against the social impacts that come with it.
Proper remediation will be the most appropriate way to deal with this, as technology available to humankind will always be used if it seems beneficial (at least to some). In that respect, it is also useful to revisit the critical remarks of Joseph Weizenbaum in [17], where he points out the differences between computer technologies and human intelligence; although many of his claims and questions date back some time, they are (again) very relevant in the context of the technologies in question.

The advent of Generative AI lets us enter an age in which, again, a certain set of assumptions, routines and skills is challenged; this time not on the level of mechanical skills but on the more advanced level of cognitive skills that we consider to be exclusively human – until now. With the understanding of a co-evolutionary development, we should look for ways that help humankind evolve with the technology in a manner that represents progress (in increasing our abilities) and that offers remediation, in the best way possible, for the negative social impacts that occur in the transition phase.

3. Application of the concept


After introducing the concept of co-evolution in the context of Generative AI, this section explores
the theory with a reflection in two distinct application areas: first in software development and
second in education, more specifically, the Higher Education sector. These sectors have been
chosen because they are distinctive in their perception of the originality and creativity of the
results they create: in both cases, it requires highly skilled human individuals to create the
outcome, the environment that they operate in is complex and dynamically changing, and it is
highly contextualized – different iterations seem to be similar from a more general point of view
but are very different when investigated more closely.

3.1. The domain of Software Development

Software development can be a very challenging field, even for proficient developers. Working as
a developer means finding creative solutions every day because every problem a developer faces
can be utterly different from those he or she has worked on before. Therefore, software
development is not only about writing code and working with programming languages and
control structures but more about “translating” existing real-life processes to a data model and
the associated business logic.
What makes the software development sector so interesting in the context of Generative AI is that software development has always been a professional sector that has driven and implemented digitalization. However, due to the need for highly creative work, it was less affected by digitalization than other sectors. Generative AI gained much popularity, especially in the last few months, because of the release of ChatGPT, and software development must now undergo extensive change because it has, for the first time, become a target of the digital transformation process itself. ChatGPT creates code snippets based on user input in natural language and completes existing unfinished code snippets. Besides ChatGPT, there are also many other Generative AI tools for developers. According to a recent paper by Ebert & Louridas, the most relevant tools currently are ChatGPT, CoPilot, Tabnine, and Hugging Face [18]. This paper focuses on CoPilot from GitHub because it is covered by the most recent research and is the most convenient tool to integrate into the developer's workflow.
CoPilot is available as an extension for various development tools like Visual Studio Code, Visual Studio, Neovim, and the JetBrains suite of integrated development environments (IDEs). CoPilot is developed by GitHub, Microsoft, and OpenAI and has been trained on publicly available code, including source code from public repositories published on GitHub. CoPilot works by providing suggestions for code auto-completion based on the existing code, and developers can adopt those suggestions easily [19]. To reach the point where co-evolution becomes possible, developers must integrate Generative AI tools, such as CoPilot or ChatGPT, into their workflows. Therefore, the most essential aspect is the acceptance of Generative AI, since it has the potential to transform existing working habits in the software development sector. Russo investigated how the acceptance of Generative AI in software development can be ensured and developed the Human-AI Collaboration and Adaptation Framework (HACAF) based on a mixed-methods investigation. Even though developers see advantages in using various Generative AI tools, seamless integration into existing workflows is sometimes the most decisive factor in accepting these tools [24].
Provided that those Generative AI tools are integrated, the next step is to ask what a potential co-evolution of software developers and Generative AI could look like. Recent research shows how software developers interact with CoPilot during development: the authors of [20] recruited 20 programmers and investigated how they interact with GitHub Copilot. Barke, James & Polikarpova also recommend how CoPilot should evolve to integrate best into the developer's workflow [20]. Among other things, the study revealed two different modes of interaction: acceleration mode and exploration mode. Acceleration mode is used when the developer has a concrete idea of what he or she wants to write next; CoPilot helps them to be more efficient and faster by completing the code based on function names or comments in natural language. When programmers are not yet proficient in a programming language or are unsure how to solve a problem, CoPilot is used in the so-called exploration mode. In this mode, CoPilot does not act as a mere programming assistant but as a partner who supports developers in finding solutions [20].
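
As a hedged illustration of acceleration mode (a sketch, not output recorded from the tool), the following example shows the kind of suggestion an assistant like CoPilot might make after a developer writes only a descriptive function name and docstring; the function and field names are hypothetical.

def average_order_value(orders: list[dict]) -> float:
    """Return the mean of the 'total' field across all orders."""
    # --- below is a plausible AI-suggested completion in acceleration mode ---
    if not orders:
        return 0.0
    return sum(order["total"] for order in orders) / len(orders)

In exploration mode, by contrast, the developer would typically start from a vaguer comment or question and inspect several alternative suggestions before settling on a solution.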
The co-evolution pattern, which is the focus of this research, would only occur in exploration
mode because, referring to Figure 2 in section 2.1., acceleration mode would be positioned in the
so-called "Application space". Only the exploration mode has the potential to reach the "Novelty
space", even if the tool has yet to reach that point.
However, before focusing on co-evolution in the collaboration of humans and Generative AI, initial steps need to be taken to familiarize human users with the tools and to gain experience with this new form of interaction. There are undoubtedly many things a machine or, in this case, a Generative AI tool can do better than humans; that machines can work with large amounts of data is a fact people need to accept. The essential contribution humans make to the collaboration between software developers and Generative AI will be investigated in future research by the authors. Until now, there has only been research about the potential Generative AI could have in the software development sector. According to Ebert & Louridas, Generative AI in software development can increase productivity by supporting developers during their creative processes and in solving problems where complex algorithms need to be implemented. Furthermore, it can be used for writing summaries of interviews, meetings, reviews or documentation [18].
Even if there is no empirical research evidence yet about the nature of irreplaceable human skills in the software development process, the fear that Generative AI will replace software developers entirely seems unfounded. Nevertheless, the field of software development is going to change significantly. Welsh even writes in a recent paper, "Programming will be obsolete." He is convinced that programming, the way people do it today, will be utterly irrelevant 20 years from now, and that basic programming tasks, such as writing sorting algorithms, will become irrelevant. He compares this to developments experienced in the past: for example, today's software developers do not need to understand the inner workings of a CPU, and future software developers will probably not need to understand how complex algorithms work [21].
In conclusion, Generative AI will allow software developers to focus on more abstract and complex subjects like software architecture, while repetitive tasks like configuring standard code blocks ("boilerplate code") or writing automated tests can be done by Generative AI (a hedged sketch of such generated tests follows below). If using those tools becomes a daily routine for software developers and an ordinary part of their toolbox, the authors believe that more advanced working practices will unfold that show a co-evolutionary pattern.
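
The following sketch illustrates the kind of unit tests a Generative AI assistant might draft from a one-line request such as "write pytest cases for average_order_value"; it reuses the hypothetical helper from the sketch above and assumes pytest is available. It is an illustration of the idea, not recorded tool output.

import pytest


def average_order_value(orders: list[dict]) -> float:
    """Function under test (the same illustrative helper as in the sketch above)."""
    if not orders:
        return 0.0
    return sum(order["total"] for order in orders) / len(orders)


def test_empty_order_list_returns_zero():
    assert average_order_value([]) == 0.0


def test_mean_of_two_orders():
    assert average_order_value([{"total": 10.0}, {"total": 30.0}]) == pytest.approx(20.0)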
If the skillset of software developers changes, education in the software development sector will also have to change. Educating software developers is a promising sector that needs to evolve because of tools like ChatGPT. However, not only education in software development will change; Higher Education as a whole will change significantly. This topic is the focus of the next section.

3.2. The domain of (Higher) Education

Edward Lee raises the question, "What should we teach the young?" ([9] p. 309), and this question becomes even more prominent with the rise of Generative AI. He argues that humans, unlike machines, have always had to learn everything from scratch, and that for them the transfer of knowledge is much more difficult than for machines, where data can be copied without loss or complicated learning processes. Nevertheless, maybe this process of (re-)constructing reality from generation to generation is also a form of interaction between the generations that enables us to think about and experience knowledge and wisdom in new ways – again and again. Lee has a point when he mentions that wisdom and the question of the how and why might take a back seat in favor of skills and knowledge in current education, but Generative AI might force us to think differently in this respect. It might become less important to have (all the) knowledge and to excel at (all the) skills that might be relevant, because those can be supported (or replaced?) by the application of Generative AI.
Instead, it becomes more important to have a good understanding of the technology's inner workings, the effects it causes for us at the individual and at the collective level, and the goals we are reaching for. This should be an educational goal for the engineering and Computer Science subjects in order to prepare the next generations with a profound understanding of the technology, ensuring a) the correct functioning of existing and newly developed systems and b) the foundation for the further development of the technology. All other subjects should be taught so as to develop a reflected and critical stance toward the application of Generative AI and an individual and independent position in which the technology remains a supportive tool and not a tool that the user ultimately depends on. That would enable us to estimate the potential and the limitations of Generative AI technologies – leading to more informed decisions and actions than just using the technology as it is without further questioning.
Providing that ability will become a cornerstone of education programs in Higher Education, probably with variations in depth depending on the subject. Similar to the distinction between Digital Natives and Digital Immigrants in the adoption and use of digital technologies in general [22],[23],[24], two different education paths can be identified:

• Skilling up the existing generations to make them fit for a future in which Generative AI will be part of everyday life and the working environment. Here, it will be important to connect to the existing knowledge and experience that the learners have acquired over their lifetime on the different paths they took in education and during their professional activities. Consequently, this path not only focuses on teaching insights into the technology but also addresses the needed changes to existing practices and the adoption of new practices and ways of thinking. This inertia in technology adoption is not new or specific to Generative AI, but it will probably play an important role because the application is not limited to a few specific use cases but will be rather broad and continuously expanding. In that sense, it might be compared to adopting the Internet, which started as a specialized piece of infrastructure and became a common and general tool for everyone over several decades, with substantial sociotechnical implications. While adopting Generative AI might not take that long, the sociotechnical effects might be no less prominent and thus affect the world significantly. If addressed correctly, the new technology can be seen as an opportunity rather than a danger, which will help with its adoption.
• Educating future generations with the skills that will be needed in the future of
Generative AI. Besides the education about the inner workings of the technology to
create transparency for learners, it will also be important to focus on tasks that are
automated or heavily supported by Generative AI to prevent an ultimate dependency
on the technology. Following the classic example that using calculators does not replace mathematical understanding, this will also address other skills like writing essays, preparing presentation materials or doing information research. While tools and technologies will support those activities, among others, in a substantial way, it remains important that learners possess the core elements of those skills in order to remain independent from the technology and to develop good judgment for distinguishing good from bad results – in a similar way that mathematical skills help to quickly estimate the results of calculations in order to spot major mistakes.

This path will also relate to shaping education and the educational system itself. With the advent of Generative AI as a support tool, assessment methods need to be revised, and it becomes apparent that problem-based learning and proving the applicability of a certain competence gain importance over pure knowledge acquisition and knowledge testing in exams.

The extensive survey by Zhang et al. [4] also investigates the (higher) education field. It asserts a high potential that could "…revolutionize education by improving quality and accessibility of educational content, increasing student engagement and retention, and providing personalized support for learners." [4]. It is interesting to note that several dimensions are affected, with content creation/improvement on the side of the lecturer and on the side of the learner as the most prominent example. However, the education process itself is also addressed, as a higher level of automation can be provided (e.g., with chat-based feedback) while, at the same time, a higher level of individualization can be reached. This would improve the scalability of the overall process by freeing time for the valuable direct and personal interaction of the lecturer and the student on more advanced or tricky parts of the educational content that require an intense dialogue to be mastered.
The study by Nah et al. [6] collected several challenges that come with the use of Generative AI, such as "…harmful or inappropriate content, bias, over-reliance, misuse, privacy and security, and the widening of the digital divide." [6]. From the perspective of educational use, over-reliance is probably one of the most severe dangers for the learning process. It often develops slowly and unnoticed but damages the learning process significantly. Tool use might even create the perception in the learner of having gained a better understanding of a topic, and of having gained it faster than by ordinary means. However, the understanding might remain at a superficial level if it is not practiced, e.g., through problem-based learning approaches.
Applying the model with the separate usage spaces for Generative AI technologies introduced in Section 2.1, a number of activities can be identified in the education domain; see Figure 3.

Figure 3: Using the model of the two different spaces in the realm of Generative AI ("Novelty space", "Application space") to position examples of educational activities in the different areas [13].

From the application perspective ("Application space"), the most obvious and foremost example is educating students to use Generative AI tools competently and knowledgeably, as appropriate for the individual domain. Over the different courses, this leads to a toolbox for the students and a growing knowledge of (and experience with) which tools and technologies to use for which application.
While incorporating those new tools into the educational process, it will be the responsibility of the Higher Education Institution (HEI) to balance tool use in such a way that the learning process for the specific subject remains intact and goes deeper than merely learning to use the tool(s). This area is expected to involve the adaptation of didactic methods and materials and iterative experimentation to find out which usages are effective and which are not.
The other space ("Novelty space") is determined by the co-evolutionary effects. It allows students to explore the inner workings of Generative AI at an advanced application or technology level. Here, the foremost goal would be to develop engineering skills for working with Generative AI by learning to use and adapt the technology and possibly also the models, provided that they are small and accessible enough to allow experimentation with the limited resources of an HEI. Over the long run, it should be the intention of the HEI to stimulate curiosity and serendipity within the students to develop and evolve the technologies further – either in terms of technology development or in terms of application areas. This could drive the co-evolutionary process between the technology and the students' learning process within the HEI.

4. Discussion & Conclusion


This contribution evaluated the well-known concept of co-evolution, originally coming from the field of Biology, in the context of the rapidly developing technology of Generative AI, which reached its breakthrough by combining Large Language Models with natural language interfaces in the style of a chat application (e.g., ChatGPT). While not being limited to that application area, the authors have pointed out that Generative AI is not (only) yet another assistive technology, but one that might help users develop their skills and knowledge to new levels and improve the technology while interacting with it – they co-evolve in the process. While this might not happen with every application of Generative AI, the authors anticipate that potential in areas in which the use of the technology leads to novel and original creations (called the Novelty Space). Conditions and implications of co-evolutionary patterns have been sketched in this contribution, and the reflection on two application areas showed the potential of the co-evolutionary pattern in the use of the technology.
It is certainly debatable whether framing human users and Generative AI systems as co-evolving is an appropriate metaphor. However, the authors' position is that awareness of the concept of co-evolution might help to gain insights into the nature of the interaction and the adaptive development of both systems. It might contribute to developing an explanation model of the nature of the relationship and the trajectories of change over time. The authors hope to propose co-evolution as a new way to analyze sociotechnical phenomena and to draw useful methodological and theoretical exchanges from it, which becomes more important now that the generative technologies provide a new quality of dialogue between humans and machines and open up an even more interesting field of research.
This contribution aims to prompt reflection on whether co-evolution is an appropriate pattern for the collaboration of humans and Generative AI. The goal is to highlight the potential of Generative AI in two exemplary sectors. Developing software and learning (especially in Higher Education) are highly creative and cognitive processes in which people craft something. While in software development the result is a working program, the result of the learning process is implicit. The authors pose the question of whether the use of Generative AI is changing not only the results of the mentioned processes but also the process of creating those results. Could the use of Generative AI in the software development and Higher Education sectors lead to irreversible process changes? Is it possible that developing software and learning will only be efficient or effective in collaboration with Generative AI? This contribution and the presented concepts are not fundamentally new. However, the authors would like to present these assumptions and concepts to encourage more research on the relationship between humankind and Generative AI.
Due to the novelty of Generative AI and the current breakthroughs, many of the effects can only be anticipated at this point. Empirical research will be needed to reflect on the conceptual work of this contribution; this will be the subject of future research by the authors. Still, using an established pattern might help shed some light on the effects that become visible in the months and years to come – a period that we might call the age of Generative AI when looking back on these days from the future.

Acknowledgements
The authors would like to thank FH Kufstein Tirol – University of Applied Sciences for supporting this research and the reviewers of an earlier version for their valuable suggestions that helped to improve the quality of the paper.

References
[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, I. Polosukhin,
Attention Is All You Need, (2023). https://doi.org/10.48550/arXiv.1706.03762.
[2] A. Radford, K. Narasimhan, Improving Language Understanding by Generative Pre-Training, 2018. https://www.semanticscholar.org/paper/Improving-Language-Understanding-by-Generative-Radford-Narasimhan/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035 (accessed August 7, 2023).
[3] OpenAI, GPT-3 powers the next generation of apps, (2021). https://openai.com/blog/gpt-3-apps (accessed August 7, 2023).
[4] C. Zhang, C. Zhang, S. Zheng, Y. Qiao, C. Li, M. Zhang, S.K. Dam, C.M. Thwal, Y.L. Tun, L.L. Huy, D. Kim, S.-H. Bae, L.-H. Lee, Y. Yang, H.T. Shen, I.S. Kweon, C.S. Hong, A Complete Survey on Generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 All You Need?, (2023). https://doi.org/10.48550/arXiv.2303.11717.
[5] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, X. Chen, K. Choromanski, T. Ding, D. Driess, A. Dubey, C. Finn, P. Florence, C. Fu, M. Gonzalez Arenas, K. Gopalakrishnan, K. Han, K. Hausman, A. Herzog, J. Hsu, B. Ichter, A. Irpan, N. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, I. Leal, L. Lee, T.-W.E. Lee, S. Levine, Y. Lu, H. Michalewski, I. Mordatch, K. Pertsch, K. Rao, K. Reymann, M. Ryoo, G. Salazar, P. Sanketi, P. Sermanet, J. Singh, A. Singh, R. Soricut, H. Tran, V. Vanhoucke, Q. Vuong, A. Wahid, S. Welker, P. Wohlhart, J. Wu, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, B. Zitkovich, RT-2: Vision-Language-Action Models, (2023). https://robotics-transformer2.github.io/assets/rt2.pdf (accessed August 7, 2023).
[6] F. Fui-Hoon Nah, R. Zheng, J. Cai, K. Siau, L. Chen, Generative AI and ChatGPT: Applications,
challenges, and AI-human collaboration, Journal of Information Technology Case and
Application Research. 0 (2023) 1–28. https://doi.org/10.1080/15228053.2023.2233814.
[7] P.R. Ehrlich, P.H. Raven, Butterflies and plants: a study in coevolution, Evolution. (1964)
586–608.
[8] D. Carmona, C.R. Fitzpatrick, M.T.J. Johnson, Fifty years of co-evolution and beyond:
integrating co-evolution from molecules to species, Molecular Ecology. 24 (2015) 5315–
5329. https://doi.org/10.1111/mec.13389.
[9] E.A. Lee, The Coevolution: The Entwined Futures of Humans and Machines, The MIT Press,
2020. https://doi.org/10.7551/mitpress/12307.001.0001.
[10] C. Duffey, Superhuman Innovation: Transforming Business with Artificial Intelligence,
Kogan Page Inspire, 2019.

[11] L. Roucoules, N. Anwer, Coevolution of digitalisation, organisations and Product
Development Cycle, CIRP Annals. 70 (2021) 519–542.
https://doi.org/10.1016/j.cirp.2021.05.003.
[12] A. Oppermann, Wen künstliche Intelligenzen über sich dulden, IT & Karriere. (2023). https://mediadaten.heise.de/wp-content/uploads/2023/04/ITKarriere_Ausgabe01_2023.pdf (accessed August 8, 2023).
[13] K. Böhm, On the implications for knowledge intensive activities in the presence of Large
Language Models, 2023.
[14] Llama 2, Meta AI. (n.d.). https://ai.meta.com/llama/ (accessed August 8, 2023).
[15] J. Phang, H. Bradley, L. Gao, L. Castricato, S. Biderman, EleutherAI: Going Beyond “Open
Science” to “Science in the Open,” (2022). https://doi.org/10.48550/arXiv.2210.06413.
[16] GPT4All, (n.d.). https://gpt4all.io (accessed August 8, 2023).
[17] J. Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, W. H.
Freeman & Co., USA, 1976.
[18] C. Ebert, P. Louridas, Generative AI for Software Practitioners, IEEE Software. 40 (2023) 30–
38. https://doi.org/10.1109/MS.2023.3265877.
[19] GitHub Copilot · Your AI pair programmer, GitHub. (n.d.).
https://github.com/features/copilot (accessed August 16, 2023).
[20] S. Barke, M.B. James, N. Polikarpova, Grounded Copilot: How Programmers Interact with Code-Generating Models, Proceedings of the ACM on Programming Languages. 7 (2023) 85–111. https://doi.org/10.1145/3586030.
[21] M. Welsh, The End of Programming, Communications of the ACM. 66 (2023) 34–35.
https://doi.org/10.1145/3570220.
[22] H. Elaoufy, Bridging the Gap between Digital Native Students and Digital Immigrant Professors: Reciprocal Learning and Current Challenges, American Journal of Education and Technology. 2 (2023) 23–33. https://doi.org/10.54536/ajet.v2i2.1522.
[23] C. Lau, C. Caires, An Attempt to Examine Digital Native and Digital Immigrant Macau Graphic
Designers’ Perceptions of Adopting Generative Design, International Journal of Creative
Interfaces and Computer Graphics. 13 (2022) 1–16.
https://doi.org/10.4018/IJCICG.311424.
[24] D. Buragohain, A Survey on Digital Immigrants’ Technology Usage and Practice in Teaching
Digital Natives, International Research in Education. 8 (2019).
https://doi.org/10.5296/ire.v8i1.15560.
