{ernestine.dickhaut, leimeister}@uni-kassel.de
2 University of St.Gallen, St.Gallen, Switzerland
{andreas.janson, janmarco.leimeister}@unisg.ch
Abstract. Higher legal standards with regard to the data protection of individuals, such as the General Data Protection Regulation (GDPR), are increasing the
pressure on developers of IT artifacts. Typically, when developing systems, we
subsequently evaluate them with users to elaborate aspects such as user experi-
ence perceptions. However, nowadays, other evaluation aspects such as legality
and data policy issues are also important criteria for system development. For
this purpose, we introduce LEGIT (legal design science evaluation), which pro-
vides developers with guidance when considering legal requirements. We use the
case of the GDPR to illustrate the feasibility, applicability, and benefit to the de-
velopment process. With this novel method adapted from law research, we are
able to derive actionable guidance for developers to evaluate developer efforts in
increasing legal compatibility. To illustrate our methodological approach, in this
paper, we describe the key steps of the method with respect to the evaluation of
a learning assistant. We develop an AI-based learning assistant for university stu-
dents to demonstrate the application of the novel evaluation method. We briefly
discuss how this procedure can serve as the foundation for a new evaluation
method of legally compatible systems in design science research.
1 Introduction
One major goal of design science research (DSR) is the development of innovative and
novel artifacts to solve real-world problems of business and society. However, these
novel IT artifacts bring new risks, e.g., legal risks, which are sometimes not anticipated
correctly beforehand [1]. The practical cases during the COVID-19 pandemic illustrate this area of conflict between useful IT artifacts and legal risks quite well: video conferencing tools such as Microsoft Teams or Zoom were (and still are) facing legal disputes questioning their legality and legal compatibility. Numerous COVID-19 tracing apps have dealt with conflicts over how to balance their usefulness in tracking with meeting regulations such as the GDPR. Thus, legal and data policy aspects have always been important for many companies seeking to avoid reputational risks, but they are gaining further importance due to new conditions such as novel IT artifacts, negative media reports, and increasing end-user interest in legal aspects.
Typically, when developing systems, we subsequently evaluate them with users to
elaborate on if our system design is appropriate, e.g., regarding usability, user experi-
ence perceptions, or outcomes of IT use, which we evaluate through evaluation frame-
works such as [2] or [3]. However, nowadays, other evaluation aspects, such as legality
and data policy issues, have also become important criteria for system development.
Nonetheless, we usually do not evaluate legal aspects when deploying these systems,
oftentimes caused by the lack of appropriate evaluation methodologies for legal aspects
when considering novel systems. In this context, simulations are a great support for IT development: they help to visualize and play through abstract content quickly and without great effort [4]. As a rule, individual parameters can easily be varied to achieve the best possible results. Consequently, we draw on these advantages for evaluating legal aspects by imitating the real-world usage of systems. Since different system development parameters can be played through under realistic conditions, changes can be made relatively easily during development. Thus, the simulation study introduced by the law discipline [5] provides a method-based foundation to evaluate technology in a practical manner concerning legal compatibility.
Therefore, we propose in the following a comprehensive evaluation methodology,
which we call LEGIT (legal design science evaluation), that provides developers with
guidance when considering the legal requirements in DSR, especially related to the
GDPR. For the application of the novel evaluation methodology, we develop an AI-
based learning assistant for university students, with two overarching but somewhat
conflicting design goals: (1) a high user experience that offers as much support during
learning processes as possible but (2) also considers legal compatibility, i.e., achieving
a higher legal standard than is required by law. LEGIT allows us to implement and
evaluate our ideas for a legally compatible AI-based assistant to get feedback at an early
stage, which can be used for the further development of the AI-based assistant.
To develop design science artifacts, we have to pay attention to requirements from various disciplines, such as user experience, ethical, and legal requirements. Requirements
such as user experience are given much attention during the development, while legal
requirements are often addressed to a minimum extent in order to be compliant with the
minimal requirements of law [6]. Today, higher legal standards with regard to the data protection of individuals increase the pressure on development [7]. Data protection is
gaining importance, and thus the storage and processing of personal data are becoming
an integral part of system design. Legality, i.e., the fulfillment of minimum legal requirements, decides on the market approval of novel technologies, and meeting only these minimum requirements is still common practice in many system development projects. Legal compatibility goes further than mere legality and is defined as the greatest possible compliance with higher-order legal goals to minimize the social risks of technical system use [6].
However, the technology neutrality of law always leaves some room for maneuver-
ing in the implementation and interpretation of the legal requirements, which leaves
developers and companies uncertain about whether they have achieved legality. In
times where data policy issues have been gaining in importance for developers, espe-
cially since 2018 due to the GDPR, a growing body of literature recognizes the relevance of considering data protection in technical systems, so we should keep legal aspects in mind early on in development [8–10].
In summary, the user study serves two purposes. First, by provoking critical situations, the user study provides insights into how users handle these situations. Second, through the interaction with the artifact in a real-world setting, critical situations can also arise that were not previously considered.
In the second part of LEGIT, we move slightly away from the users to get a reliable
judgment on the legal compatibility of the developed artifact. In this part, legal viola-
tions are derived on the basis of the user study. As described above, the violations are either provoked by the design of the evaluation setting, in which case some expected violations may not actually occur in reality, or they are derived from usage without being expected in advance. Thereupon, legal experts formulate claims from these violations to be negotiated in simulated court proceedings.
Fig. 1. The three parts of LEGIT: (1) Naturalistic User Study (goal: investigate technology use under natural conditions; actions: provoke critical situations, identify unexpected situations; stakeholder: potential users); (2) Deduction of Legal Violations (goal: derivation of claims; actions: deduction of legal violations based on the user study; stakeholders: development team, legal experts); (3) Legal Assessment (goal: capturing legal compatibility; actions: legal analysis, court case; stakeholder: legal experts).
The third part of LEGIT includes simulated court cases based on the deduced legal violations. Thus, the legal assessment can be conducted as realistically as possible, with claims that could arise from practical use. The simulated court cases build on the
outcomes of the first part of our study. The situations of conflict that were previously
provoked will be discussed and judged by legal experts, simulating a real court trial.
Selecting a range of cases to try during the legal assessment that are of high importance in the daily use of the technology is an important step toward the success of the evaluation. Early evaluations (especially of smaller projects) should involve at least one experienced legal expert for this purpose. Extensive and advanced projects, on the other
hand, should evaluate the simulated court cases as realistically as possible in several
proceedings in order to avoid subsequent legal violations. The legal experts should have
completed at least the second state examination and have initial practical experience.
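The three parts of LEGIT described above can be sketched as a simple data pipeline. The following Python sketch is purely illustrative; all class, field, and function names are our own assumptions and not part of the method itself:

```python
from dataclasses import dataclass

@dataclass
class CriticalSituation:
    """Part 1: a situation observed or provoked in the naturalistic user study."""
    description: str
    provoked: bool  # True if deliberately designed into the evaluation setting

@dataclass
class LegalViolation:
    """Part 2: a violation deduced from the user study by developers and legal experts."""
    situation: CriticalSituation
    legal_norm: str  # e.g., the GDPR provision the situation may conflict with

@dataclass
class SimulatedCourtCase:
    """Part 3: a claim negotiated by legal experts in a simulated trial."""
    violation: LegalViolation
    verdict: str = "pending"

def deduce_violations(situations, norm_lookup):
    """Map each critical situation to the legal norm it may violate (part 2)."""
    return [LegalViolation(s, norm_lookup(s)) for s in situations]

def build_court_cases(violations):
    """Turn deduced violations into simulated court cases for the legal assessment (part 3)."""
    return [SimulatedCourtCase(v) for v in violations]

# Hypothetical walk-through with a single provoked situation:
situations = [CriticalSituation("learning data disclosed beyond purpose", provoked=True)]
violations = deduce_violations(situations, lambda s: "GDPR Art. 5(1)(b)")
cases = build_court_cases(violations)
print(len(cases), cases[0].verdict)  # 1 pending
```

The sketch only fixes the flow of information between the three parts; in an actual evaluation, the deduction and the verdicts are produced by the development team and the legal experts, not by code.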
4 Application of LEGIT
In this section, we demonstrate the application of LEGIT, which was embedded in a larger AI-based assistant development project (see [14, 15]). The developed AI-based assistant
should support a university course by providing individual learning support. A special
feature of these systems is the individual adaptation to the user, which requires a large
amount of user data. Among the user data are also personal data that are considered
particularly worthy of protection according to GDPR guidelines. Consider, for exam-
ple, the case of Amazon’s Alexa, which activates itself when nobody is home or which, as a consequence of its data collection efforts, serves as a witness in court (see also [16]).
Thus, AI-based assistants are a good way to apply LEGIT to evaluate the legal compat-
ibility of this novel class of systems (see Figure 2).
Fig. 2. Stakeholders involved in the steps of LEGIT (system, users, developers, legal experts, judge, lawyers, plaintiff, defendant).
The goal of our evaluation is to evaluate an AI-based assistant. Our use case for
deploying the learning assistant was a course for business administration that was taken
by about 150 students. Thus, in the first part of our evaluation, we offered a course that, in addition to the lecture, allowed students to prepare for the upcoming exam together with the learning assistant. The user study allowed us to capture possible con-
flicts with the law beforehand. The deduction of legal violations included legal experts
as well as the developer team. One exemplary cause of action was the disclosure of individual students' learning data beyond the actual purpose (use in preparation for the exam) for a decision on a job posting at the university. The legal assessment
included four court cases in which the developed claims were negotiated.
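The exemplary cause of action above, disclosure of learning data beyond its collection purpose, corresponds to the GDPR principle of purpose limitation (Art. 5(1)(b)). As a minimal sketch of how a development team might guard against such a violation in code, consider the following; the class and method names are hypothetical and not taken from the project described here:

```python
class PurposeLimitationError(Exception):
    """Raised when data is requested for a purpose it was not collected for."""
    pass

class LearningDataStore:
    """Stores each record together with the purpose it was collected for."""
    def __init__(self):
        self._records = []  # list of (data, purpose) tuples

    def collect(self, data, purpose):
        """Record data along with its declared collection purpose."""
        self._records.append((data, purpose))

    def retrieve(self, requested_purpose):
        """Release only records whose collection purpose matches the request."""
        matching = [d for d, p in self._records if p == requested_purpose]
        if not matching:
            raise PurposeLimitationError(
                f"no records collected for purpose {requested_purpose!r}")
        return matching

store = LearningDataStore()
store.collect({"student": "anon-42", "progress": 0.8}, purpose="exam preparation")
store.retrieve("exam preparation")  # allowed: matches the collection purpose
try:
    store.retrieve("job posting decision")  # blocked: beyond the original purpose
except PurposeLimitationError as e:
    print("blocked:", e)
```

Such a technical safeguard addresses only one facet of legal compatibility; LEGIT is needed precisely because many violations, like the one negotiated in the simulated court cases, emerge from real-world use rather than from code paths alone.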
Acknowledgements
This paper presents research that was conducted in the context of the projects AnEkA (pro-
ject 348084924), funded by the German Research Foundation (DFG) and Nudger (grant
16KIS0890K), funded by the German Federal Ministry of Education and Research.
References
1. Pordesch, V., Roßnagel, A., Schneider, M.: Simulation Study: Mobile and Secure Communication in Healthcare. DuD, 76–80 (1999)
2. Sonnenberg, C., Vom Brocke, J.: Evaluations in the Science of the Artificial –
Reconsidering the Build-Evaluate Pattern in Design Science Research. DESRIST,
381–397 (2012)
3. Venable, J., Pries-Heje, J., Baskerville, R.: FEDS: a Framework for Evaluation in
Design Science Research. European Journal of Information Systems 25, 77–89
(2016)
4. Borges, G.: Legal Framework for Autonomous Systems, 977–982 (2018)
5. Roßnagel, A., Schuldt, M.: The Simulation Study as a Method of Evaluating So-
cially Acceptable Technology Design, 108–116 (2013)
6. Hoffmann, A., Schulz, T., Zirfas, J., Hoffmann, H., Roßnagel, A., Leimeister,
J.M.: Legal Compatibility as a Characteristic of Sociotechnical Systems. BISE
57, 103–113 (2015)
7. Barati, M., Petri, I., Rana, O.F.: Developing GDPR Compliant User Data Policies
for Internet of Things. Proceedings of the 12th IEEE/ACM International Confer-
ence on Utility and Cloud Computing, 133–141 (2019)
8. Bourcier, D., Mazzega, P.: Toward measures of complexity in legal systems. Pro-
ceedings of the 11th international conference on Artificial intelligence and law,
211–215 (2007)
9. Spiekermann, S.: The challenges of privacy by design. Communications of the
ACM, 38–40 (2012)
10. van der Sype, Y.S., Maalej, W.: On Lawful Disclosure of Personal User Data:
What Should App Developers Do? IEEE, Piscataway, NJ (2014)
11. Venable, J., Pries-Heje, J., Baskerville, R.: A Comprehensive Framework for
Evaluation in Design Science Research. DESRIST, 423–438 (2012)
12. Peffers, K., Rothenberger, M., Tuunanen, T., Vaezi, R.: Design Science Research
Evaluation. DESRIST, 398–410 (2012)
13. Roßnagel, A.: Simulationsstudien zur Gestaltung von Telekooperationstechnik.
GMD Spiegel Nr. 2 (1993)
14. Dickhaut, E., Janson, A., Leimeister, J.M.: The Hidden Value of Patterns – Using
Design Patterns to Whitebox Technology Development in Legal Assessments.
16th International Conference on Wirtschaftsinformatik (WI 2021) (2021)
15. Dickhaut, E., Li, M.M., Janson, A., Leimeister, J.M.: Developing Lawful Tech-
nologies – A Revelatory Case Study on Design Patterns. HICSS 54 (2021)
16. Rüscher, D.: Alexa, Siri and Google as digital spies on behalf of the investigation
authorities?, 687–692 (2018)
17. Li, H., Yu, L., He, W.: The Impact of GDPR on Global Technology Develop-
ment. Journal of Global Information Technology Management 22, 1–6 (2019)