
HIRING BY MACHINE
CASE STUDY 5

The development of artificial intelligence (AI) systems and their deployment in society give rise to ethical dilemmas and hard questions. This is one of a set of fictional case studies that are designed to elucidate and prompt discussion about issues at the intersection of AI and Ethics. As educational materials, the case studies were developed out of an interdisciplinary workshop series at Princeton University that began in 2017-18. They are the product of a research collaboration between the University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) at Princeton. For more information, see http://www.aiethics.princeton.edu


As the means of warfare have modernized, the US Army has placed increasing emphasis on
training new recruits in programming and computer engineering. This focus on tech not only
helps US military operations remain competitive on a world stage, but it also provides many
military professionals with skill sets that can later be leveraged into non-military environments.
This was the case for the small, enthusiastic group of Army veterans who co-founded the non-profit
company, Strategeion, after having been honorably discharged during the 2008 recession. Building on their
previous experiences supporting various military operations with IT solutions, this group of programmers
set out to create jobs for themselves and improve the lives of other ex-military personnel by producing
an online platform that would enable veterans to stay in touch with their cohorts and share experiences
dealing with civilian life.

The co-founders did not stop there, however. Having been instilled with a strong sense of civic
virtue, and having witnessed first-hand the problems of poverty, joblessness and homelessness that
many American communities were facing during the economic downturn, the developers knew they
wanted to use their programming skills to effect broad social change. The group was always looking
for interesting new technical problems to address, and vowed to develop services, platforms and
technical solutions for the benefit of all. Strategeion’s unofficial motto became “leave no one behind.”

In order to fulfill this pledge, Strategeion’s founders believed the best path forward was one of collaboration
and peer production. One of the company’s key commitments was to an open-source model, at least
regarding their public-facing products and services. This meant Strategeion would make the source code
for much of their software freely available to the public. The hope was that other organizations would
not only use Strategeion’s code to serve their own communities, but build upon and improve it such that
benefits would accrue to all. As an added bonus, the open-source model also ensured some measure
of transparency and public accountability. Over time, these features contributed to Strategeion’s growing
reputation as an honest, trustworthy company.

Discussion Question #1:


Trust is an increasingly important branding tool for tech companies in competitive markets. When
a company opts for an open-source model, as Strategeion did, that decision may increase public
perceptions of its trustworthiness. But does open sourcing necessarily imply trustworthiness? What
other factors, if any, go into determining a tech company’s trustworthiness? If a tech company
chooses not to share its source code, does that mean it is untrustworthy?

Strategeion’s business model proved successful. The once-small platform quickly expanded to include a
range of services—from social networking to personal blogging and even a location-based search app that
helped individuals moving to new communities discover local events and hotspots—which were popular
across many demographics. After only a few years in operation, the company could boast a steady growth
of both users and revenue.

Despite the expansion of Strategeion’s products to serve an increasingly broad market, the company
never abandoned its special commitment to addressing the needs of veterans. This was evident in certain
of its products, which were geared towards former military servicemembers—for example, a resume-writing
feature that translated military experience into civilian language—as well as in Strategeion’s
hiring practices. Whereas other innovative tech companies mostly employ young, recent graduates from
prestigious universities, Strategeion was to be staffed largely by ex-military personnel. The company
considered this to be a win-win. Strategeion was glad to provide job opportunities and support to veterans,
a group that had been particularly hard-hit by the recession. Many of these individuals returned home
bearing the scars of physical and/or psychological traumas, and often found it difficult adjusting to aspects
of the civilian workforce. Veterans generally fit in well at Strategeion, however, a company that prided itself
on maintaining certain aspects of the military’s social culture. Strategeion’s ex-military employees tended to
excel at the company and reported high levels of job satisfaction. Once hired, they were likely to remain at
the company for years, rising through the ranks.

In recognition of its employee satisfaction and high retention rates, Wealth magazine listed Strategeion
among its “100 Best Companies to Work For 2013.” This resulted in a surge in job inquiries. By the following
year, when Strategeion was once again featured on the list, the number of applications had far outpaced
the number of positions available, at a ratio of almost 100 to one. And despite minimal civilian outreach
efforts, more and more of these applications were coming from the kind of traditional candidates that
might have typically applied for jobs at larger, for-profit tech companies. At one point, Strategeion’s human
resources (HR) team became so overwhelmed with the number of resumes it received that they had to
cease hiring to deal with their backlog. HR representatives complained on Strategeion’s internal message
board that they now expended so much energy on the first-round selection process that they no longer
had enough time to perform other essential aspects of the job, such as conducting background checks and
processing new hires.

A group of Strategeion’s developers interpreted the messages from HR as a call for help. In keeping with
the company’s tradition of developing in-house solutions for internal problems, they offered to create a
bespoke resume vetting system to help HR deal with the influx of resumes. Diagnosing the problem as
a simple issue of information overload, this group of developers expected it could be easily solved by
implementing some clever technical tricks to automatically pre-sort resumes according to a candidate’s
desirability, optimizing especially for projected “fit” within the company.

After having weighed several options, the team decided to implement a system that utilized natural
language processing (NLP) and machine learning (ML) to look for markers in resumes that distinguished
the best candidates. They dubbed the system PARiS, in tribute to the Trojan hero who was tasked with
judging a contest to determine the most beautiful goddess among the deities of Mount Olympus. In order
to train the system, HR provided the engineering team with dozens of resumes from current and previous
employees who were deemed either exemplary or poor in terms of professional attributes and fit. PARiS
would rate incoming resumes according to their match with the ideal types and cast aside those that were
below a set threshold. In order to ensure PARiS’ continuing effectiveness, HR staff were given the ability
to “tweak” the system on an ongoing basis by highlighting words, sentences or key variables in new and
existing resumes that they thought modeled a good candidate for Strategeion.
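
To make the mechanics concrete, here is a minimal sketch of how a resume screener along these lines might be built. The case study does not disclose Strategeion’s actual model, features, or threshold; TF-IDF with logistic regression (via scikit-learn) and all the example data below are illustrative assumptions, not the real PARiS implementation.

```python
# A hedged sketch of a PARiS-like screener: TF-IDF features plus a
# logistic regression classifier, with an automatic discard threshold.
# All resumes, labels, and the threshold value are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: resume text from current/former employees, labeled by
# HR as exemplary (1) or poor (0) in terms of attributes and "fit".
train_resumes = [
    "US Army signal corps, led IT support team, marathon runner",
    "Infantry veteran, network engineering, company softball league",
    "Retail management, customer service, social media marketing",
    "Data entry clerk, filing, reception duties",
]
train_labels = [1, 1, 0, 0]  # 1 = exemplary hire, 0 = poor fit

# TF-IDF turns each resume into a weighted bag-of-words vector;
# the classifier learns which terms predict the "exemplary" label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_resumes, train_labels)

THRESHOLD = 0.5  # resumes scoring below this are discarded automatically

def screen(resume_text: str) -> bool:
    """Return True if the resume advances to human review."""
    score = model.predict_proba([resume_text])[0][1]  # P(exemplary)
    return score >= THRESHOLD

# An incoming application is rated against the learned "ideal type".
print(screen("CS degree, non-profit accessibility advocacy, wheelchair user"))
```

Note that nothing in this pipeline knows what any term means: whatever vocabulary distinguishes the labeled exemplars, sports included, becomes part of the learned “ideal type.”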

PARiS’ rollout was met with a collective sigh of relief from HR. Because poor matches could be discarded
automatically, HR no longer had to devote the overwhelming number of hours required for humans to read
each resume the company received. And while some members of the team were initially hesitant about
delegating the first-stage application sorting to an algorithm, skepticism about PARiS quickly abated. After
only a few weeks in operation, the lists of candidates PARiS suggested consistently reflected those that
would have been assembled by human HR agents, instilling confidence that the system had absorbed
Strategeion’s values. But PARiS was so much faster and more efficient! After the first six months, PARiS
was automatically discarding approximately 80 percent of received resumes.

Discussion Question #2:


PARiS promised to make the hiring process more efficient. But are there other values that might be
desirable in hiring? Diversity? Equity? Creativity? What, if anything, do companies risk losing when
hiring procedures are so singularly focused on maximizing efficiency?

Hermann, a promising and hard-working computer science major at one of Berlin’s most prestigious
universities, received an automated rejection email from Strategeion within hours of applying for a job through
its website. He was disappointed, as he had been convinced he was an ideal candidate for the company. He
had strong academic qualifications and he had carefully crafted his C.V. to reflect his civic commitments and
experience with non-profit organizations that advocated for wheelchair users such as himself. His ambitions
to develop transparent, responsible tech solutions to improve the lives of those with disabilities seemed a
perfect match for Strategeion’s mission to “leave no one behind.” Disappointed at his rejection, Hermann wrote
to the company asking for feedback on his application. He also published a blog post about the experience,
promising his readers to share any response from Strategeion.

Hermann promptly received a reply from one of Strategeion’s HR representatives. She explained that the
company had incorporated an AI system into its hiring processes and could now pinpoint factors that had
contributed to the decision to deny his application in the first round, which she would explain as openly
and honestly as possible. In Hermann’s case, it seemed PARiS had considered his lack of interest in
physically demanding activities to indicate a weak cultural fit for Strategeion. Having been trained on data
from Strategeion employees, which included a disproportionate number of veterans, PARiS had learned to
associate success and longevity at the company with those who had previous military experience. While such
individuals were not necessarily all that different from Hermann—indeed, many also used wheelchairs—
military experience and interest in athletics were highly correlated. PARiS interpreted this correlation in such a
way that individuals who did not care for sports were categorized as “risky” hires.

Hermann was surprised and dismayed to learn about PARiS and its decision-making processes, and
he published the company’s response on his website. Many of Hermann’s readers sympathized with his
disappointment, and several voiced their own ethical and legal objections to PARiS.

Ethical Objection #1: Fairness


Hermann’s frustration about his immediate rejection was rooted in his belief that he had been treated
unfairly. He felt he was a good fit for Strategeion across many dimensions, and thought he deserved
a shot at the job. Indeed, the HR agent indicated that the only red flag PARiS identified in his file was
a lack of sporting interests. But Hermann was sure that physical capabilities and interests are poor
indicators of one’s professional potential. To discriminate against his application on the basis of an
irrelevant characteristic (i.e., his athletic history) was unfair. All he wanted was to be judged on the
basis of his relevant characteristics and achievements. Allowing the system to exclude him because
of a lack of sporting experience meant that Hermann had not been treated the same as other equally
qualified candidates. Some of Hermann’s colleagues from the non-profit world went a step further,
arguing that Hermann’s situation demanded more than mere equality of opportunity. If anything, they
argued, given the history of marginalization and the lack of accommodations traditionally made for
persons with disabilities, the fairest thing for Strategeion to have done would have been to engineer
PARiS to positively discriminate in favor of those with physical disabilities. They pointed out that
helping those in need was supposedly one of Strategeion’s core principles and challenged the company
to use its hiring tools to correct injustice.

Ethical Objection #2: Equality and Discrimination By Proxy


Some of Hermann’s readers argued that, while it is all well and good to talk of abstract notions of
fairness, such pleas aren’t strictly necessary. One of Hermann’s close friends, a pre-law student in the
US, reminded him that there are laws in that country prohibiting discrimination against “protected
categories,” which include persons with disabilities. The US Constitution prohibits such discrimination
by federal and state governments against public employees, and even private corporations are subject
to a growing body of anti-discrimination law. Regarding discrimination against disabled persons in
hiring, specifically, private employers are required under both the Rehabilitation Act of 1973 and the
Americans with Disabilities Act (ADA) of 1990 to treat all prospective applicants equally. More recently,
the ADA Amendments Act of 2008 defined “equal treatment” clearly within the framework of the equal
opportunity principle, meaning that persons with disabilities cannot be placed at a disadvantage
in hiring by virtue of their disabilities. Hermann’s friend believed she could show that PARiS did not
adhere to these legal standards. Even if the algorithm was not intentionally discriminating against
resumes based on protected attributes, it seemed “redundant encodings” in Strategeion’s data
had allowed the system to infer such attributes from other, seemingly innocuous data, such as an
applicant’s record of participating in extra-curricular sports. Such a discriminatory practice might not
seem as blatantly demeaning as a blanket hiring policy against those with physical disabilities, she
conceded, but it was all the more dangerous for its insidiousness.
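
The mechanism behind such “redundant encodings” can be demonstrated on synthetic data: even when the protected attribute is withheld from the model entirely, a correlated proxy lets the model reproduce the disadvantage. The sketch below is purely illustrative; the variable names, correlation strengths, and labels are all invented.

```python
# Toy demonstration of proxy discrimination: disability status is never
# given to the model, but "sports participation" correlates with it, so
# the model re-encodes the disadvantage anyway. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (deliberately excluded from the features below).
disabled = rng.random(n) < 0.1
# Proxy: in this invented pool, non-disabled applicants mention sports
# far more often -- the "redundant encoding" of the attribute.
sports = np.where(disabled, rng.random(n) < 0.05, rng.random(n) < 0.7)
skill = rng.normal(size=n)  # the actual job-relevant qualification

# Historical labels reflect past hiring that favored sporty types.
hired = (skill + 2.0 * sports + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, sports])  # disability is NOT a feature
model = LogisticRegression().fit(X, hired)
scores = model.predict_proba(X)[:, 1]

# Applicants of equal skill are nonetheless scored very differently:
print("mean score, disabled:    ", scores[disabled].mean())
print("mean score, non-disabled:", scores[~disabled].mean())
```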

Ethical Objection #3: Consent And Contextual Integrity


Hermann was dismayed that automated decision-making tools had been used to evaluate his C.V.
without his explicit consent. And he wasn’t alone. Upon learning about PARiS, many of Strategeion’s
current and former employees were unhappy that their resumes might have been used to train the
underlying datasets without their knowledge or permission. These employees had provided Strategeion
with personal information under the reasonable expectation that it would be used in a limited context
(i.e., to inform hiring decisions about them, as individuals). While it is true that expectations regarding
what is right and proper for an employer to do vis-à-vis an employee’s resume might have been merely
implied, rather than explicit, Strategeion’s use of its employees’ personal information for unexpected
and undisclosed purposes left the company open to allegations that it had violated privacy norms and
standards. Moreover, several employees qualified their complaints by noting that it wasn’t just that they
had not consented to broad use of their personal information, but that they would not have consented
to this particular use, which had the effect of contributing towards discriminatory hiring practices.

Hermann responded to the HR representative’s letter by filing an official complaint with Strategeion,
incorporating many of these arguments. He accused the company of unfairness, illegal discrimination and
inappropriate use of personal information. Hermann believed all he would need to definitively prove his case
was access to the system’s source code and some basic information about Strategeion’s hiring procedures.
He argued that Strategeion should be willing to provide a full disclosure of its source code, as secrecy would
be inconsistent with the company’s commitment to openness and transparency. And in case the appeal to
Strategeion’s values didn’t work, Hermann also called upon the rights awarded to him, as a German citizen,
under the new European Union General Data Protection Regulation (GDPR), to demand an explanation of
automated decision-making tools.

Discussion Question #3:


While it is uncontroversial to state that American companies must follow American law, cross-jurisdictional
legal questions are trickier. To what extent are American companies and other entities
that process data bound by European law when designing data processing tools in the US? To what
extent should American organizations be bound by European law? Does it matter what the law aims to
achieve?

Strategeion’s advisory board convened a meeting to discuss the merits of Hermann’s claims. They began by
handing the legal questions to their in-house counsel who could assess whether the AI hiring system really
was in violation of US anti-discrimination law and determine the American company’s responsibilities under
GDPR. The lawyers could also ascertain whether Strategeion had committed a legal wrong by using their
employees’ resumes to train PARiS without their knowledge or explicit consent.

Even if the lawyers were able to show that Strategeion had not acted illegally and was not required to share
its hiring source code, however, it was clear that the company, which prided itself on providing transparent,
honest tech solutions in service of the public good, would need to take a long, hard look at itself. The advisory
board was distressed at having been accused of unfairness and wished to dispute the characterization.
Members pointed out that, through its history, Strategeion had consistently promoted a robust notion of
fairness through its positive efforts to recruit employees from a group they believed to be in dire need of help
(i.e., veterans). Indeed, many of those individuals had injuries and ailments that would qualify as disabilities.
Yet despite these efforts, the advisory board had no choice but to acknowledge that, in deploying PARiS,
Strategeion had failed to live up to its self-conception as an inclusive company that strives to “leave no one
behind.” The board wished to apologize to Hermann and express their regret that PARiS’ design and the
biases in the training data had led the hiring system to automatically deny interviews to him and similarly
qualified applicants. Something would need to be done to ensure that all strong applications were given a fair
shot. The question was: what?

A complete overhaul of the company’s hiring policies would be difficult. Strategeion wished to be a positive
force in the world, but it also wanted to hire individuals who would be in it for the long haul. Thus, projected
cultural fit was an extremely important part of the hiring criteria. In order for PARiS to make a determination
on fitness, the system’s engineers had decided to train it on samples from past and present Strategeion
employees. But while this approach meant that PARiS was adept at picking out resumes of people who most
resembled successful Strategeion employees, because of the company’s historical hiring practices that
favored military types, it also meant that people who did not fit that mold would be discriminated against. In
other words, given the system design, Strategeion’s biased data would produce biased results and promote
biased outcomes.

One option to address PARiS’ bias problem was to implore the system’s engineers to infuse more diversity
into their models. A second option called for rethinking the value of a homogeneous workforce. Recent reports
in management studies have shown that more diverse project teams are able to evaluate products and
services from a wider range of perspectives, typically resulting in all-round better output, as well as more
productive workplaces. Upon reading some of this literature, even Strategeion’s co-founders tentatively agreed
that it might be worth considering a change in hiring priorities.
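
The first option corresponds to well-known technical mitigations. One simple approach, sketched below on invented placeholder data, is to reweight training examples so that an over-represented group does not dominate what the model learns as the “ideal” employee. The use of scikit-learn’s sample_weight here is an assumption for illustration; the case study does not describe PARiS’ actual training procedure.

```python
# A minimal sketch of group reweighting to counteract a skewed
# training set. Features and labels are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1_000

# ~90% of training resumes come from veterans, mirroring the
# company's historical hiring skew described in the case.
veteran = rng.random(n) < 0.9
X = rng.normal(size=(n, 5))   # resume features (placeholder)
y = rng.random(n) < 0.5       # exemplary (True) vs. poor (False)

def balanced_weights(group: np.ndarray) -> np.ndarray:
    """Give each group the same total weight, regardless of its size."""
    w = np.ones(len(group))
    for g in np.unique(group):
        mask = group == g
        w[mask] = len(group) / (len(np.unique(group)) * mask.sum())
    return w

# With these weights, the ~900 veteran resumes and ~100 civilian
# resumes exert equal influence on what counts as a "good" candidate.
model = LogisticRegression().fit(X, y, sample_weight=balanced_weights(veteran))
```

Reweighting treats the symptom at training time; it does not, by itself, answer the deeper question of whether resemblance to past employees is the right target at all.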

Discussion Question #4:


Biased data sets pose a problem for ensuring fairness in AI systems. What could engineers at
Strategeion have done to counteract the gaps in employee data? To what extent are such proactive
efforts the responsibility of individual engineers or engineering teams?

Discussion Question #5:


Social science increasingly shows that there are advantages to a heterogeneous workforce, but there
are also advantages to homogeneity. A diverse workforce helps protect organizations against “group
think,” for example, but groups that share certain experiences and backgrounds may find it easier to
communicate with and understand one another, thereby reducing collective action problems. If you
were a manager in charge of hiring at Strategeion, for which position would you advocate? Would you
try to maintain the corporate culture by hiring people who resemble current employees, or would you
argue that PARiS should be realigned to optimize for a broader range of types?



Reflection & Discussion Questions
Fairness: As the discussion surrounding Hermann’s blog post illustrated, when people speak of “fairness,”
they are often drawing on several different concepts. Fairness may be based on meritocratic values, meaning
that people get what they deserve on the basis of relevant judging criteria (e.g., physical ability, intelligence,
hard work). Related to this conception of fairness is the idea of equal opportunity, which stipulates that all
individuals must be presented with the same opportunities to develop and display their merits so that they
can be judged and compensated correspondingly. Fairness may also refer to the opposite concept: equality of
outcomes. To satisfy this vision of fairness, the fruits of society must be distributed to everyone equally. Such
distributional fairness is justified, not on the basis of individuals’ merits, but simply by virtue of their humanity
or membership in a community. Going a step further, some conceptions of fairness imply that affirmative,
active efforts should be made to correct for inequalities. Within the social justice framework, fairness demands
that those who are systematically disadvantaged in society be given a leg up by those who have profited
from inegalitarian institutional arrangements. Finally, fairness may refer to the standard of treating all people
equally across various dimensions: procedural, interpersonal, etc. What any one individual considers to be
fair hinges on which of these (or other) definitions of fairness she chooses. Is it what I deserve? What we
deserve? What does anyone deserve? Is it for us to have the same or to be treated the same? What would
that even look like? These are difficult questions without clear, commonly accepted answers. And yet, despite
the lack of consensus about its meaning, fairness remains a prominent moral value – one that engineers may
be encouraged to reflect in the design of AI systems. To the extent that fairness must be articulated in order to
be coded, it becomes increasingly important that we understand the different values underlying this principle.
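
A small worked example, with invented numbers, shows why the choice of definition matters: the very same set of interview decisions can satisfy one common formalization of fairness (demographic parity) while violating another (equal opportunity).

```python
# Two applicant groups, A and B; 1 = invited to interview.
# "qualified" marks applicants an oracle would deem suitable.
# All values are invented for illustration.
import numpy as np

decision_A  = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])
qualified_A = np.array([1, 1, 0, 0, 1, 0, 0, 1, 1, 0])
decision_B  = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])
qualified_B = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 1])

# Demographic parity: both groups are invited at the same overall rate.
print("selection rate A:", decision_A.mean())  # 0.4
print("selection rate B:", decision_B.mean())  # 0.4 -> parity satisfied

# Equal opportunity: are qualified applicants invited at the same rate?
print("qualified invited, A:", decision_A[qualified_A == 1].mean())  # 0.6
print("qualified invited, B:", decision_B[qualified_B == 1].mean())  # 0.8
```

Both groups are selected at the same 40 percent rate, yet qualified applicants in group A are invited only 60 percent of the time versus 80 percent in group B. In general, the two criteria cannot both be enforced without changing the decisions themselves.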

○ A computer scientist recently showed that there are no fewer than 21 definitions of fairness used
in programming. Philosophers might consider that a low estimate in their own field. If we cannot
all agree on what fairness entails, we must accept that different notions of fairness will prevail
in different instances. If you were one of the engineers behind PARiS, which interpretation(s) of
fairness would you choose? What if you were a job applicant?

○ Given the goal of selecting job applicants at Strategeion, to what extent can and should PARiS be
programmed to reflect the value of fairness? How could this be operationalized technically?
○ Consider this famous thought experiment: Imagine you exist before the institution of society (“the
original position”) and you have no way of knowing who you are and how you will fit into that
particular society once it’s formed (“the veil of ignorance”). Given these conditions, what rules
would you choose to govern the distribution of social goods, such as jobs? Specifically, how would
you wish for an algorithm to determine an individual’s suitability for a job interview, if at all? How
does your response—which represents one way of thinking about fairness—align with the notions
of fairness you discussed in the previous question regarding the perspectives of engineers and
applicants?

Irreconcilability: Optimizing for fairness is difficult in part because the concept encompasses many
different values (see “Fairness”), but even more so because those values are often mutually exclusive. For
example, a company might wish to ensure fairness by providing that all job applicants be judged equally
according to their merits without regard to any ascriptive characteristics, such as race, sex, etc. However,
such an approach to fairness would mean that that company could not, at the same time, promote a notion
of fairness that corrects for historical disadvantages and social injustices that may have contributed towards
group differences. For someone who cares about both principles, their incompatibility can be frustrating. But it
is nothing new. Humans have always had to grapple with the irreconcilability of certain values, which they may
hold dear. This often entails making compromises and acting in philosophically inconsistent ways. In the case
of AI, it is unclear whether these systems can and should be developed to model human behavior regarding
multiple, irreconcilable values.

○ Some argue that AI systems cannot simultaneously hold and enact several competing principles
at once. As products of code that must adhere to the values written into them, AI systems are
accused by some of lacking the moral flexibility—or pragmatism—of humans. Others, however,
contend that AI systems are just as capable as humans in this regard. They argue that what humans think
of as their capacity for compromise is actually just arbitrariness, and that that trait can easily
be encoded into algorithms by incorporating randomness. Proponents of this view argue that
algorithmic decision-making isn’t making the irreconcilability problem any harder, but has merely
thrown the issue into sharper relief. Discuss which view you find most convincing and why.
○ How can programmers account for the multiple values for which they may wish to optimize when
those values are orthogonal to one another? Should machines be made to mimic humanity’s
capacity for moral inconsistency, or should a different approach be devised?

Diversity: Hiring is an inherently discriminatory process in the technical sense that some applicants receive
offers and others do not on the basis of certain criteria used to define a “good” job candidate. Companies
mostly decide for themselves which characteristics they wish to optimize for and the level of diversity with
which they are comfortable. Many, like Strategeion, may prefer a more homogenous model. This is why PARiS
was designed to compare new resumes to those of successful employees – the engineering team was trying
to minimize the uncertainty that accompanies difference. However, there are legal restrictions regarding
which criteria a company may use to distinguish between job candidates. Hermann had the law—both US
and international—on his side when he argued that his application should not be dismissed on the basis of
his physical disability. (And had PARiS excluded Hermann explicitly as a result of his disability, rather than
second-order characteristics, his case against Strategeion would have been a slam dunk.) Beyond law,
much contemporary research now shows that companies enjoy concrete advantages by promoting a diverse
workplace.
○ Should the values of inclusivity, diversity and tolerance be actively included as functional
requirements in AI systems for recruitment purposes? Does your answer change according to your
standpoint (e.g. HR representative, Strategeion advisory board member, job applicant, etc.)?

○ PARiS’ lists of suggested applicants closely resembled the lists that would have been drafted by
Strategeion’s human HR team. To the extent that PARiS was biased towards a particular kind
of applicant, this suggests that the human HR workers were as well. Indeed, it can be argued
that PARiS is merely an extension of the human biases already in existence at Strategeion. Are
computational biases necessarily worse than human biases in a recruitment context? Or are
humans just as bad? Even if humans are no better than machines, are there reasons we might
want to keep human actors involved in hiring, at least for people with protected characteristics
under the law?

Capabilities: It is unsurprising that humans may wish to use AI technologies designed to save them
tedious labor. This may be especially true in cases where an AI system is capable of replicating an
individual’s decision-making, but in a way that is more efficient and/or faster than a human could achieve.
For example, an expert runner who may have, over many years, developed a talent for calibrating his runs
to fit his needs at any given moment may choose to delegate that task to an AI-enabled technology if that
system seems to perform just as well as his own judgment. In such a case, the runner no longer has to
think about how fast to move or which direction to turn, and can funnel that mental energy towards other
tasks. Over time, however, his capability to navigate his surroundings while running may wither. In the
case of PARiS, once the system’s outputs consistently cohered with the HR team’s expectations of an ideal
candidate, agents seemed content to delegate the initial sorting of job applications to PARiS. This gave
them more time to worry about other aspects of the job, since they trusted that PARiS knew what it was
doing. Indeed, over time, they may have stopped thinking about how to perform first-round application
sorting altogether.



○ Reliance on systems like PARiS is likely to produce efficiency gains, as they are capable of
performing previously human tasks more quickly and with fewer errors. But what, if anything, do
organizations risk losing when they replace human judgment with that of AI systems?

○ What happens to individuals—and society—when they grow accustomed to trusting AI systems, either
more than or in lieu of their own intuitions and training? Is this necessarily problematic?

Contextual Integrity: Contextual integrity is a theory of privacy that evaluates the appropriateness
of some use of an individual’s information according to how well that use conforms to the reasonable
expectations the individual had when consenting to share her information. For example, if you told your
local banker about financial difficulties you might be experiencing, you would be shocked if the bank then
conveyed that information to local businesses, considering it a breach of trust. You may have agreed to
share this information in order to be considered for a bank loan, but that does not imply consent for the
banker to share that information with others. In the Strategeion case, previous and current employees
felt that the company’s use of their personal information to train PARiS violated their privacy. They had
entrusted the company with their personal information in order to be evaluated for a job, but any further
use of their resumes would have required their consent.

○ The theory of contextual integrity provides an alternative to the idea of “informed consent,” or the
requirement that an agent must be informed about which data will be collected and how it will
be used in order for consent to be binding. In the tech area, where informed consent may not
always be either possible or desirable, contextual integrity offers a more flexible way to think about
appropriate data use. Evaluate Strategeion’s use of its employees’ resumes using each of these
theories. Does one theory produce a more convincing conclusion than the other?

○ Had Strategeion replied to its employees’ concerns about the ways in which their resumes were
used to train PARiS, it may have argued that, once the resumes had been submitted, that data
belonged to the company and could be used as its agents saw fit. Do you agree or disagree with
this claim? What implications might stem from such a view of data ownership?

AI Ethics Themes:
Fairness
Irreconcilability
Diversity
Capabilities
Contextual integrity

This work is licensed under a Creative Commons Attribution 4.0 International License.
