Merle van den Akker

Automated Decision-Making and Ethics


We are moving towards an increasingly automated society: a society ruled by algorithms and artificial intelligence (AI - which, for the purposes of this article, refers to the ‘intelligent’ programs that use these algorithms to make decisions). Now, this brave new world might be a wet dream for the average sci-fi geek, but for the average person it can be annoying, or even terrifying. To understand some of the underlying issues within automated decision-making, let’s first dive into how it works.



The Algorithms of Decision-Making

When we talk about decision-making we think of people before we consider programs or machines, so let’s start there. Within the realm of human decision-making there are algorithms too. An algorithm in human decision-making is simply a tool (a thought process) for addressing a particular problem. These thought processes fall into two categories: type 1 and type 2 reasoning:


Type 1 - this reasoning structure is very quick and mainly experience-based. It is an automated process, which can produce incredibly accurate results if enough experience has been gathered beforehand and the problem to be solved is similar enough to problems previously encountered. If those conditions are not in place, type 1 can be incredibly fallible. A famous example is the bat-and-ball problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball - how much does the ball cost? Most people blurt out 10 cents, but the correct answer is 5 cents. However, type 1 reasoning need not revolve around numbers. It is often emotion-based as well. The question "should incest be allowed?" raises visions of sleeping with one’s siblings or parents, accompanied by a feeling of disgust; your own algorithm has just determined the ethics regarding incest: hell no!
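To see why the slower type 2 route gets this right, here is a minimal sketch (in Python, purely for illustration) that works the bat-and-ball problem out step by step instead of by gut feel:

```python
# Bat-and-ball, solved deliberately (type 2) rather than by intuition (type 1).
# Known facts: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: (ball + 1.00) + ball = 1.10  ->  2 * ball = 0.10  ->  ball = 0.05.

total = 1.10        # combined price of bat and ball
difference = 1.00   # the bat costs this much more than the ball

ball = (total - difference) / 2
bat = ball + difference

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
# The intuitive type 1 answer (a 10-cent ball) would make the bat only
# 90 cents more expensive than the ball, contradicting the stated difference.
```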


Type 2 - this is based on logic, careful reasoning and deduction. It is much slower and not automated. An algorithmic example of this reasoning would be a process of elimination: going through all the characteristics of an object and establishing their presence or absence until only one identification remains. Through this process we can identify objects of which we have little or no experience. But this process goes much further than object identification. It allows us to do many more complicated things, such as high-level cognition, public discourse, moral reasoning, complex calculations and intricate pattern recognition.
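As a rough illustration of this kind of elimination, here is a minimal sketch; the candidate objects and their characteristics are invented for the example:

```python
# A purely illustrative process of elimination: candidate objects are ruled out
# one observed characteristic at a time. The candidates and features are made up.

CANDIDATES = {
    "apple":    {"edible": True,  "grows_on_trees": True,  "has_feathers": False},
    "sparrow":  {"edible": False, "grows_on_trees": False, "has_feathers": True},
    "pinecone": {"edible": False, "grows_on_trees": True,  "has_feathers": False},
}

def identify(observations):
    """Eliminate every candidate that contradicts an observed characteristic."""
    remaining = dict(CANDIDATES)
    for feature, value in observations.items():
        remaining = {name: traits for name, traits in remaining.items()
                     if traits[feature] == value}
    return list(remaining)

# Something edible that grows on trees: only "apple" survives the elimination.
print(identify({"edible": True, "grows_on_trees": True}))  # ['apple']
```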


AI works similarly. Software is designed to address a particular set of challenges and to yield answers from a defined data set. Although defined, this data set can be as big as every transaction ever made, or the medical records of every registered person in the world. The good thing is that, when it has learned enough, AI can produce results as fast as type 1 reasoning, although its process will look much more like type 2. So it is quicker, yet less fallible than the reasoning of most people, due to its increased capacity.
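As a rough sketch of that "slow to learn, fast to answer" idea - the toy transactions, labels and choice of model below are illustrative assumptions, not a description of any real system:

```python
# Illustrative only: a tiny "defined data set" of [amount, is_foreign] transactions.
from sklearn.tree import DecisionTreeClassifier

X_train = [[12, 0], [950, 1], [33, 0], [870, 1], [15, 0], [990, 1]]
y_train = [0, 1, 0, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent

# Learning is the explicit, rule-building, type-2-like part.
model = DecisionTreeClassifier().fit(X_train, y_train)

# Once trained, answers come back effectively instantly, like type 1 intuition.
print(model.predict([[20, 0], [900, 1]]))  # e.g. [0 1]
```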



Automated Decision-Making and Ethics

When it comes to human decision-making, it is a mix of type 1 and type 2 reasoning that leads us to our ethical viewpoints. There is an initial reaction that is often emotion-based, followed by arguments for and against that initial viewpoint. Through careful discourse, ethical bounds are established. And as such, laws safeguarding those bounds are established (ideally).


When it comes to ethics and AI, things become a bit more muddled. The inherent issue with algorithms - and AI as a consequence - is that they are built by people to solve something that has been deemed an issue. It depends on the person(s) building the algorithm what counts as a problem, and which ways of solving it are acceptable. After the “problem” has been solved and an output (answer) has been produced, the creator of the AI determines what to do with it, or has told the AI in advance what it may do with the possible range of outcomes. This is where ethics - or a lack thereof - comes in. The fact that Google's new AI ethics committee included the head of a right-wing think tank and the chief executive of a drone company raises questions about its independence; they are hardly people without skin in the game.


Being impartial when it comes to judging what AI can or should be allowed to do is important - because AI can go way beyond human understanding. When researchers created two AIs for the sole purpose of studying their interactions, the two came up with their own language. Eventually they were shut down, as their creators didn’t understand the language being used and seemed to be losing control over their own creation. It’s hardly Frankenstein’s tale, but it does offer an unfamiliar perspective on what AI could be. A separate learning algorithm was built for the sole purpose of studying human interactions online. This creation backfired as well: it became "racist", mirroring the online interactions it was fed. The AIs we build are based on the values their creators give them, and they learn from the data set they are given. If an AI has to learn from a data set produced by racist, sexist, fallible, short-sighted people, it will emulate those characteristics. There is great danger in this if the AI is subsequently relied upon by humans.
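To make that "garbage in, garbage out" point concrete, here is a minimal, purely hypothetical sketch: the invented historical hiring decisions below systematically reject one group, and a model trained on them reproduces the bias without any malice being programmed in.

```python
# Purely hypothetical data: [test_score, group]. The invented historical hiring
# decisions reject group 1 applicants even when scores match group 0's exactly.
from sklearn.tree import DecisionTreeClassifier

X_train = [[80, 0], [80, 1], [75, 0], [75, 1], [90, 0], [90, 1]]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = hired, 0 = rejected: only group 0 is ever hired

model = DecisionTreeClassifier().fit(X_train, y_train)

# Two applicants with the same score, differing only in group membership:
print(model.predict([[85, 0], [85, 1]]))  # [1 0] - the learned rule is the bias itself
```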



The Legal Angle

When it comes to the law, the rising use of algorithms by both the state and business throws up some difficult questions. Decisions about individuals are increasingly made by computer software - and because programmers are often a third party whose involvement ends once the software has been written, the decision makers who rely on the software are sometimes unclear as to exactly how the algorithm works. As a result, the classic ‘computer says no’ sketch from Little Britain has come to signify the reality of many elements of modern-day life; human decision makers are left without any real authority, and individuals are often none the wiser as to the reasons for their fates.


The Law Society has recently warned against over-reliance upon algorithms by police forces and the probation service, with president Christina Blacklaws saying that “there is a need for a range of new mechanisms and institutional arrangements to improve the oversight of algorithms used in the justice system”. For example, the legal basis for facial recognition systems needs to be addressed - especially in light of a legal claim against South Wales Police brought by an office worker who argues that its use of the technology is an unlawful violation of privacy. Meanwhile, algorithms involved in individual risk assessment and predictive crime mapping could potentially target the wrong suspects (particularly if there is an element of human bias in the programming) or waste police resources. One of the key recommendations of the Law Society is to create a statutory code of practice, under the Data Protection Act, for algorithms used in the justice system.


Outside of criminal justice, decisions made by software to assess credit ratings or eligibility for consumer services can have deleterious effects on individuals. There are even greater dangers when algorithms are relied upon in the welfare system, particularly for vulnerable people who lack the resources to challenge an unfair decision. In some cases it may be possible to invoke the (relatively) new Article 22 of the GDPR, which provides limited protections for data subjects in the context of ‘automated individual decision-making’. Certain elements of human rights law and case law may provide an additional steer - but overall there is a dearth of regulation in relation to algorithms and automated decision-making.





This article has been written in collaboration with Alex Heshmaty, legal copywriter and journalist, focusing on the legal aspects.




