A Boston Dynamics robot.

The Ethical and Existential Threat of Artificial Intelligence

While it is unlikely that AI will destroy the world, the uncritical embrace of it could erode humans’ most important skills.

While experts say that Artificial Intelligence (AI) will make our lives better in the future, many have concerns about what developments in AI mean, not only for the job market and the environment, but also for what it means to be productive, and therefore, what it means to be human.

The social issues surrounding AI – bias, ethical concerns and overdependence on technology – are just some of those at stake in the debate. As these questions become more pressing, and the comfort of treating them as a distant, future problem disappears, it is clear that there is much to be worried about when it comes to AI, despite the great progress it promises. AI is already in use today, more widely than one might expect.

The European Parliament features a section on its website titled “Some AI applications that you may not realise are AI powered”, listing, among others, search engines, online shopping websites, language translation software and digital personal assistants. While many may think of AI as something futuristic, AI is simply the ability of a machine to display human-like capabilities, such as reasoning, learning, planning and creativity. Beyond its current uses, there are hopes of introducing AI more widely in sectors such as health care and education, thanks to its capacity for large-scale data and pattern recognition and its ability to increase efficiency.

However, many are worried about the consequences of AI. A survey conducted in 2017 showed that 88% of Europeans believed AI should be closely managed. In 2018, Marina Gorbis, executive director of the Institute for the Future, a nonprofit research and consulting organization based in Silicon Valley, said: “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human.”

Worries about AI culminated in May 2023, when the Center for AI Safety released a statement, signed by many key actors in the field, calling for the mitigation of “the risk of extinction from AI” to be made a priority on the same level of importance as other risks, such as pandemics and nuclear war.

These fears stem from the fact that AI can be hard to predict and may therefore accomplish goals in ways that the people who assigned the task would never expect. Such worries are captured by the “paper clip maximiser” – a thought experiment created by Nick Bostrom, a philosopher at Oxford University.

In this experiment, an AI is tasked with acquiring as many paperclips as possible. As it devotes all its energy to that single goal, it begins improving itself in new ways, expanding paperclip manufacturing capacity, and eventually transforming Earth itself to reach its target. The scenario Bostrom created is intended to illustrate that AIs do not have human-like ways of thinking.

In another, similar scenario, an AI tasked with securing a reservation at a popular restaurant shuts down cellular networks and causes traffic chaos in order to prevent anyone else from getting a table. These scenarios illustrate the worry that AI will become an intelligence we cannot understand: one that might be good at accomplishing goals, but that gets there in ways we would consider unethical.

However, to say AI is as dangerous as a pandemic or a nuclear war might be an overstatement, at least for now. The technology lacks the capacity for the judgement these scenarios require, and it lacks the infrastructure to cause such serious damage. In reality, the dangers of AI are far more philosophical and existential than apocalyptic. The common understanding among many key players in the AI industry is that the concerns lie mostly in the potential for misuse arising from algorithmic bias, and in AI’s capacity to alter the way people see themselves and their abilities.

Bias and Ethical Concerns

AI systems have already been introduced to make decisions on matters such as loan approvals and hiring. This is particularly concerning because of the possibility of algorithmic bias. Underneath the code and underneath the machine, there is still a team of humans who translate their biases, conscious or unconscious, into the code and into the way the AI operates.

One common source of algorithmic bias is unrepresentative training data. If, for example, designers train a human-detection algorithm but fail to include diverse enough images of what they define as human – say, only images of people with blonde hair – then the system may fail to recognise anyone without blonde hair as human. In practice, such mistakes have led to AI systems with racial or gender biases.
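To make this mechanism concrete, consider the following deliberately simplified sketch in Python. Everything in it is hypothetical – a single made-up “brightness” feature standing in for an image, toy data, and a one-threshold classifier rather than a real detection model – but it shows how a rule learned from unrepresentative data can look flawless in training and still fail an entire group:

import random

random.seed(0)

# Toy "images" reduced to a single invented feature: average brightness
# of the hair region. The training set contains ONLY light-haired humans
# (the unrepresentative data), plus non-human scenes of mixed brightness.
train = ([(random.gauss(0.85, 0.05), 1) for _ in range(200)]      # humans, all light-haired
         + [(random.uniform(0.0, 0.7), 0) for _ in range(200)])   # non-human scenes

def fit_stump(data):
    """Learn the brightness threshold with the highest training accuracy,
    predicting 'human' whenever brightness exceeds the threshold."""
    best_t, best_acc = 0.0, 0.0
    for t in (i / 100 for i in range(101)):
        acc = sum((x > t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

threshold = fit_stump(train)
print(f"Learned rule: human if brightness > {threshold:.2f}")

# Dark-haired humans were absent from training, so the learned rule,
# despite performing almost perfectly on the training set, rejects them.
dark_haired_humans = [random.gauss(0.25, 0.05) for _ in range(100)]
detected = sum(x > threshold for x in dark_haired_humans)
print(f"Dark-haired humans recognised as human: {detected}/100")

The learned rule achieves near-perfect accuracy on its own training data, which is exactly what makes this failure mode so easy to miss before deployment.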

Another ethical concern, one that goes largely unregulated, is the sheer number of opportunities AI has to influence our choices: the online shopping suggestions we see, the ads we receive, the recommendations for which show to watch next. Companies are free to develop algorithms whose goal is to maximise profit through our engagement. The reality we are left with is AI integrated into human-centred systems, often with many of us unaware of that integration.
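The design objective itself can be sketched just as simply. The following toy example – hypothetical item names and invented scores, not any company’s actual system – ranks content purely by predicted engagement; nothing in the score rewards truth, usefulness or the user’s wellbeing:

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float           # predicted probability the user clicks
    expected_minutes: float  # predicted minutes spent if clicked

def engagement_score(item: Item) -> float:
    # Expected minutes of attention captured - the proxy that drives
    # ad revenue. Nothing here measures truth, value, or wellbeing.
    return item.p_click * item.expected_minutes

catalog = [
    Item("calm documentary", p_click=0.10, expected_minutes=40.0),
    Item("outrage clip",     p_click=0.60, expected_minutes=12.0),
    Item("practical how-to", p_click=0.30, expected_minutes=8.0),
]

# The feed simply surfaces whatever maximises the score.
for item in sorted(catalog, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):5.1f}  {item.title}")

Under this objective the outrage clip outranks the documentary, not because it is better but because it is predicted to capture more attention – the pattern critics of engagement-driven recommendation point to.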

Increased Dependence on Technology and the Loss of Human Connection

With the power of AI comes the challenge of managing our reliance on it. The risk of having a machine that can make all of our decisions for us is that individuals’ critical thinking skills might diminish. Humans are judgment-making creatures: we reason through many daily decisions, such as whom to hire, who should get a loan, or what to watch next. But as more of these judgments become automated, people might gradually lose the ability, or the motivation, to make them for themselves.

A study conducted in 2023, which looked at the impact of AI on the loss of decision-making, laziness and privacy concerns among university students in Pakistan and China, showed that AI significantly impacts human decision-making and laziness. The study attributed a 27.7% reduction in decision-making among the participants to the impact of AI, and an even more alarming 68.9% of human laziness to it as well.

There is also the very real threat of AI being used for propaganda, which, combined with the diminished critical skills AI might create, could become a serious problem. AI has real potential to combine existing data in ways that mislead. This will further complicate the already difficult problem of misinformation, and of tailored for-you pages that create different versions of the “truth” on the internet. It will become even harder to distinguish fact from fiction as fiction becomes more convincing than ever. We already see this today, with deep-fake audio and video being used to undermine political opponents and sway elections.

Another important point to consider is how humans value the role of chance in their lives. Algorithmic engines reduce that kind of serendipity with their meticulous planning and prediction. In the quest to raise productivity and efficiency, AI erases the chance of coming across a piece of information, a place or an activity by accident – or of finding it through one’s own efforts.

Furthermore, as we begin to converse with AI more, we risk losing human interaction. Conversing with AI lacks the authenticity of human communication; however, it is possible that over time AI’s tailored responses will become so polished that we come to prefer their consistency over the unpredictable nature of human conversation.

This shift could lead to a decline in social and emotional skills and in the ability to form meaningful relationships. A study conducted at the University of California, Los Angeles, found that children who spent five days at a camp without access to screens showed improved recognition of nonverbal emotional cues compared to children with regular screen time. It is crucial to set boundaries on the use of AI so as not to diminish the essence of human existence and connection.

Lastly, AI tools’ creative and writing capabilities present another set of challenges. Not only do they pose a problem in education, where tools such as ChatGPT can replace the need to think critically, but they also threaten to change the ecosystem of our language use. One consequence of software that can write fairly comprehensively is that it now composes much of the content we consume: Amazon Web Services researchers suggested that 57% of content on the internet today is either AI-generated or translated using AI. An increased dependence on AI risks the loss of what makes us human, including our capacity for connection.

These philosophical issues surrounding AI, with very real consequences, raise necessary concerns. While it is unlikely that AI will destroy the world, the uncritical embrace of it could erode humans’ most important skills. Algorithms are eroding people’s ability to make judgments and think critically, as well as to enjoy chance encounters in their lives. Our very way of existence is threatened by AI, and while these concerns might not seem to demand action as swift as the threat of a pandemic or a nuclear war, the more subtle changes will cost us in the end if ignored.


↓ Image Attributions

“HM2_9609” by Web Summit // Licensed under CC BY 2.0