AI in Psychology: Digital Revolution or Ethical Regression?

The human mind is a labyrinth of psychological motives, needs, and desires. With every new path we take, we acquire an unspoken duty—to meet our daily needs and contribute to the betterment of humanity. This community-centric mindset has driven many inventions that revolutionized society, from ancient stone boiling and stoves to cell phones and generative artificial intelligence (AI)—software that mimics human communication, intellect, and other cognitive abilities, producing outputs quickly to make life easier.

AI touches nearly every aspect of life, but its recent upsurge has an outsized influence on psychology, a discipline in which research and counseling play pivotal roles. Many in the field are naturally tempted to treat generative AI as a multipurpose solution bank because it is fast and user-friendly: it can act as a therapist, a teacher, a friend, and more. However, versatility does not indicate mastery. We must develop a nuanced understanding of AI’s limitations, strengths, and socio-cultural impacts to ensure that the machine is not falsely idolized or prioritized over humans in unethical ways.

Though generative AI is highly capable, it draws on a finite body of training data and does not always contain the answers to our queries. Last year, I asked ChatGPT to analyze data from my research study. It confidently responded with the wrong findings and made several statistical errors. This instance illustrates how AI sometimes ‘hallucinates,’ regurgitating wildly incorrect outputs or failing to generate accurate information. Such interactions call for cross-checking every AI output to avoid the mishap of skewed results. Ethics aside, this is a logical reason why plagiarism through AI is futile: because of the accuracy problem, we cannot depend on AI alone to write papers, find research articles, or analyze data.

Beyond spewing misinformation, AI can also deepen systemic divides if heavily incorporated into classrooms and workplaces. We live in a hierarchical society marked by socioeconomic, racial, and ethnic disparities, where the rise of one group almost always correlates with the decline of another. AI algorithms can mimic human biases and prejudices by catering to privileged groups over marginalized communities. This can manifest as an interface design that overlooks accessibility for people with disabilities, a chatbot that perpetuates elitism with unnecessarily complicated language, or a data processing issue in which AI draws only on data from specific geographical locations to formulate outputs. Classrooms and workplaces therefore face a major dilemma—use AI and further alienate some groups, or prohibit the groundbreaking tool entirely.

Even though these challenges persist, we can alleviate them by spreading proper awareness and regulating AI usage. Because AI interactions are highly personal, those in the psychology discipline can continue educating themselves and others on AI’s limitations and algorithmic biases. When one views AI as a tool for societal transformation rather than a faultless technical gadget, one can leverage emotional awareness to design more generalizable studies, think critically about class concepts, assist clients from diverse backgrounds, or partner with engineers to improve these flawed systems.

Another tactic is to permit AI as a supplementary tool that gently prompts creativity and original thought instead of producing it outright. People will always use AI since it is readily available to them, but encouraging them to cite AI and adequately document its outputs can address plagiarism concerns. People subconsciously draw inspiration from many sources. If we are mosaics of everything we interact with—books, media, people—shouldn’t AI also fall under this umbrella? We can harness artificial intelligence to receive positive encouragement, choose a topic for an upcoming essay, or summarize an assigned reading dense with technical jargon. These examples highlight AI’s role as a valuable decision-assisting tool.

As students and professionals, we share the responsibility to promote the ethical use of artificial intelligence in psychological domains. Tackling AI issues like over-reliance and systemic injustice requires returning to the basic skills that form the backbone of psychology—emotional intelligence, equity, and bias reflection. Ultimately, the lesson is never to take AI outputs at face value and to conduct additional research to confirm that the information received is accurate.