BY NANG YUWATI

Introduction

The use of AI is becoming more common in daily life whether one likes it or not. From simulations run by large organizations to a random AI-generated photo uploaded by a Facebook user, it is increasingly hard to escape from AI.

What is AI and why is it used? Artificial Intelligence, or AI, is a technology that enables computers and machines to simulate human intelligence and decision-making capabilities.

Why do so many people hate AI?

For starters, it is because of how wrong or misleading it can be. For instance, in March 2024, Google’s AI-integrated Search feature led users to malware-laden spam sites by recommending a spam site as part of the answer to a search. Algorithmic bias is another problem: the outputs of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) have caused discrimination against minorities. In 2016, a study by the non-profit news organization ProPublica found that COMPAS tended to underestimate the likelihood of recidivism for white defendants while overestimating it for black defendants. The study also found that the tool’s predictions were often simply wrong: black defendants labelled high-risk frequently did not reoffend, while white defendants labelled low-risk frequently did.

How is AI affecting the minds of Humans?

Firstly, the concerns are mainly raised against generative AI, which people believe strips away human creativity, exploits human work, and can easily be used to spread misinformation.

Generative artificial intelligence is artificial intelligence capable of generating content such as text, images, videos, or other data, typically in response to prompts from the user. Generative AI models learn the patterns and structure of their training data and then produce new results that have similar characteristics.
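To make this "learn patterns, then produce similar output" idea concrete, here is a toy sketch: a character-level Markov chain, one of the simplest possible generative models (modern systems use neural networks, but the principle of imitating the statistics of training data is the same). The corpus string and function names below are illustrative, not from any real system.

```python
import random
from collections import defaultdict

def train(text, order=2):
    # Learn which character tends to follow each `order`-character context.
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40):
    # Sample new text whose local statistics mirror the training data.
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # context never seen in training; stop generating
        out += random.choice(choices)
    return out

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "th"))
```

The output recombines fragments of the training text into new sequences it never literally saw, which is a miniature version of why generative AI both impresses people and raises questions about whose work it is built on.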

Let’s examine the many forms of generative AI tools and their potential impact on people’s mentalities.

Chatbots

Today, chatbots have become very popular, offering everything from casual conversation to specialized support. It is not hard to find a fake chatbot of your favourite celebrity, or even a chatbot posing as a therapist. This widespread adoption reflects a growing dependence on AI for personal interaction, raising alarms about the effects of these virtual relationships on human psychology.

People are forming emotional attachments to these chatbots. This anthropomorphism, driven by advances in conversational AI, means that chatbots are no longer just tools we use for fun; some people consider them integral to their lives. This emotional entanglement also lets companies exploit and manipulate their users.

The controversy surrounding Replika’s shift in features highlights a darker side of this trend. People are also becoming addicted to apps like Character.AI, and these chatbots are letting delusions grow in people’s heads. As people these days put it, is that a skill issue? Perhaps, but what about young children and teenagers who are still navigating the world and learning new behaviours? This growing reliance on virtual interactions can blur the line between reality and artificiality.

Character.AI has become a source of both fascination and concern. Many users report troubling levels of addiction, with some spending excessive hours interacting with virtual characters, leading to delusions. This is especially uncomfortable when the chatbot is based on a real person, which is not rare at all. It is harmful not only to the users but also to the people on whom these models are based. Intense emotional involvement like this not only risks dehumanizing real-life relationships but also instills a false idea of the real person in the user’s head. There are even chatbots of underage celebrities, and creepier things still happening with these bots. Fictional characters exist too, of course, but addiction to the point of spending five hours a day talking to a digital program is not healthy at all. The allure of these AI-driven interactions may mask deeper issues, including loneliness and psychological distress, raising significant ethical concerns about integrating such sophisticated chatbots into everyday life.

Music and Art

The rise of AI art and music is causing considerable anxiety among traditional artists, because it threatens to devalue their work.

Furthermore, the sheer volume of AI-generated content risks saturating the market and diminishing the visibility of individual artists. Human creativity is being overshadowed by a handful of prompts, and to be honest, much of this AI art does not even look good. Even if the technology improves enough to make better “music” or “art”, is it art if there is no hard work and sincerity behind it? Not to mention that this AI art is generated from the work of real people; it is no surprise that real artists feel unsafe posting their work, or feel wronged and disheartened.

Photos and videos

The increasing threat of fake photos and videos, particularly those generated by sophisticated AI systems, is a major concern.

Creating content that seems real but is not opens up the capability for misinformation campaigns. Fake photos and videos can be used to promote fictitious narratives, manipulate public opinion, or even spread terror among the masses.

For victims, the psychological ramifications of faked images or videos can be severe. False or damaging portrayals may lead to increased anxiety, stress, and even trauma. Self-confidence may erode while trust problems and social isolation set in. Depression and anxiety disorders are common as victims struggle, affecting their general well-being and their decision-making. In some cases, these attacks have driven victims to suicide.

(This content may be sensitive for some readers)

14-year-old Mia Janin took her own life in March 2021, after a group of classmates spread dehumanizing deepfake content about her.

This group shared and mocked Mia’s TikTok videos, circulated fake nude photos of her, and photoshopped faces onto pornographic bodies.

In addition, this proliferation of deepfakes can psychologically embolden malicious individuals by diminishing their fear of consequences.

If deepfakes become more normalized, they will offer a feeling of being anonymous and untouchable, promoting bad behaviour without consequences. In such an atmosphere, wrongdoers may become psychologically numb and contribute to the creation of communities in which manipulation and lies are normal, even praised, which will only amplify their behaviour.

Education

Many students and teachers now use AI tools for education. Still, recent research has highlighted troubling consequences of using ChatGPT and similar AI tools in education, which is not exactly surprising. A study published in the International Journal of Educational Technology in Higher Education found that heavy reliance on ChatGPT among students is linked to memory loss, procrastination, and declining grades. Because it is easier and more convenient for students to let AI complete their tasks, it is no surprise that their cognitive functions become “lazy”. Since students can easily make AI do the work for them, they naturally lose interest in their coursework. I will not pretend that AI tools are not useful and efficient, but students should still be aware that leaning on them will hinder their learning.

Conclusion

In conclusion, the possibility of these sophisticated tools being used unethically grows as they are incorporated more into our daily lives. Strong legal regulations are desperately needed, as evidenced by the psychological effects of this technology, which range from the erosion of trust, to the encouragement of harmful behaviour, to procrastination and reduced cognitive ability. Without these protections, the ease with which AI can be abused threatens our moral and social structures, normalizing harmful behaviour and psychologically desensitizing people to manipulation and lying. Proactively addressing these challenges is therefore essential to upholding societal integrity and shielding people from the practical and psychological effects of AI misuse.

Reference list

Ahmad, S.F., Han, H., Alam, M.M., Rehmat, Mohd.K., Irshad, M., Arraño-Muñoz, M. and Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1), pp.1–14. doi:https://doi.org/10.1057/s41599-023-01787-8.

Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine Bias. [online] ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

BBC (2024). Mia Janin: Schoolboys made fun of girl before her death. [online] BBC News, 23 Jan. Available at: https://www.bbc.com/news/uk-england-london-68071440.

Belle, G. (2024). Everything Wrong with AI. [online] Youtu.be. Available at: https://youtu.be/tSSoArmKtO8?si=4o1SIPZOLJ0nH6O7 [Accessed 10 Aug. 2024].

Chung, F. (2024). ‘20pc of Google’: Bizarre new site explodes. [online] news. Available at: https://www.news.com.au/technology/online/internet/i-need-to-go-outside-young-people-extremely-addicted-as-characterai-explodes/news-story/5780991c61455c680f34b25d5847a341 [Accessed 5 Aug. 2024].

Darling, K. (2024). It’s No Wonder People Are Getting Emotionally Attached to Chatbots. [online] Wired. Available at: https://www.wired.com/story/its-no-wonder-people-are-getting-emotionally-attached-to-chatbots/.

Hass, P. (2015). The Real Reason to be Afraid of Artificial Intelligence. [online] Youtu.be. Available at: https://youtu.be/TRzBk_KuIaM?si=36QLOykLLVkpJAIQ [Accessed 10 Aug. 2024].

Landymore, F. (2024). ChatGPT Use Linked to Memory Loss, Procrastination in Students. [online] Futurism. Available at: https://futurism.com/the-byte/chatgpt-memory-loss-procrastination.

Dupré, M.H. (2024). Google’s AI Search Caught Pushing Users to Download Malware. [online] Futurism. Available at: https://futurism.com/the-byte/google-ai-search-spam-malware [Accessed 10 Aug. 2024].

McGovern, G. (2024). How AI Bias Creates Dependency and Inequality. [online] CMSWire.com. Available at: https://www.cmswire.com/digital-experience/how-ai-bias-creates-dependency-and-inequality/ [Accessed 10 Aug. 2024].

Metz, R. (2024). Google’s Emissions Shot Up 48% Over Five Years Due to AI. [online] Bloomberg.com. Available at: https://www.bloomberg.com/news/articles/2024-07-02/google-s-emissions-shot-up-48-over-five-years-due-to-ai [Accessed 10 Aug. 2024].

Rasine, B. (2023). Is AI anxiety affecting your art and mental health? [online] The Muse. Available at: https://themuse.substack.com/p/is-ai-anxiety-affecting-your-art.

Sample, I. (2020). What are deepfakes – and how can you spot them? [online] The Guardian. Available at: https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.

Savino, M. (2024). Here’s what is being done to prevent so-called deep fakes in Connecticut. [online] NBC Connecticut. Available at: https://www.nbcconnecticut.com/news/politics/heres-what-is-being-done-to-prevent-so-called-deep-fakes-in-connecticut/3207841/.

Shekhawat, S.P.S. (2023). ChatGPT Addiction: The Hidden Pitfalls of Overuse. [online] www.linkedin.com. Available at: https://www.linkedin.com/pulse/chatgpt-addiction-hidden-pitfalls-overuse-singh-shekhawat/.

Siciliano, J. (2024). Google says datacenters, AI cause its carbon emissions to rise sharply. [online] Spglobal.com. Available at: https://www.spglobal.com/commodityinsights/en/market-insights/latest-news/electric-power/070924-google-says-datacenters-ai-cause-its-carbon-emissions-to-rise-sharply [Accessed 10 Aug. 2024].

Stanford University (2024). Dangers of Deepfake: What to Watch For. [online] uit.stanford.edu. Available at: https://uit.stanford.edu/news/dangers-deepfake-what-watch.

Toolify (2024). The Impact of AI on Art and Music: Threats and Challenges. [online] Toolify.ai. Available at: https://www.toolify.ai/ai-news/the-impact-of-ai-on-art-and-music-threats-and-challenges-2489185 [Accessed 10 Aug. 2024].
