AI to Replace People: Wrong Philosophy

Nov 1, 2023 | Artificial Intelligence, Ethics, I-O Psychology, Workplace

Don’t discount the value people bring when working with artificial intelligence.

Very recently, I attended an information session about the use of generative AI — specifically ChatGPT. This presentation, originally billed as a facilitated discussion, quickly became a soapbox for the views of the presenter, who stated that ChatGPT would become your new favorite employee. He touted the software as a worker who never complains, works odd hours, will do anything requested, conducts tedious research, and costs only $20 per month (on the paid subscription).

My heart sank. Yes, artificial intelligence can provide some conveniences, costs very little, and will work without complaint. However, do we really view people who work for us and with us as bothersome? Have we become so self-involved and so myopic as to believe that a computer and a sole-preneur trump an organization of people? Are we willing to sacrifice jobs, others’ pursuit of meaningful work, and social interactions in exchange for what a computer program spits out with a prompt?

Apparently, we may just have reached that point in some people’s eyes.

People Make the Difference

Celebrated theoretical physicist and author Michio Kaku noted in an interview that AI such as ChatGPT, Bard, and Bing merely return the works of other people (JRE University, 2023). These programs do not use reasoning or critical thinking. They return results based on information aggregated from a compendium of sources indexed over the years. The software then uses pattern recognition to string relevant content together in the vernacular, mimicking the way people write.

Let’s break down, point by point, exactly what I found objectionable in this presenter’s statement about people:

Complaints = Feedback

Within team formation, one of the stages is “Storming” — a period when people experience conflict, harbor questions and doubts, and resist joining the group. Within change management, we learn that when people resist change, we should let them vent and express their negative emotions in a safe and productive way. During performance evaluations, 360-degree approaches — those that elicit feedback from multiple sources — provide opportunities for growth and advancement.

Complain to an AI, and you may train it in a biased manner because of its inability to reason, think, or feel.

Always On = Burnout

Yes, AI never sleeps. In fact, researchers have suggested that generative AI and other forms of artificial intelligence may provide respite for workers in highly affected roles such as those in healthcare and human resources (Zielinski, 2023).

However, for many people (including the aforementioned presenter, who claimed hundreds of hours of working with ChatGPT in a short amount of time), this may also lead the user to stay continuously plugged in. With endless possibilities, one can find oneself prompting one’s artificial-intelligence companion over and over again to quench the thirst for knowledge or accomplishment. Staying connected to devices can lead to a variety of physical and mental health issues, including interrupted sleep patterns, eye strain, and stress — not to mention the fatigue that leads to burnout.

Turn it off at some point; it’s good to unplug for a while.

Mindless Obedience = Complacency

Unless you ask artificial intelligence to do something beyond its capabilities and scope of programming, it will not refuse a request. Constant indulgence of requests without pushback breeds self-entitlement and a sense of power. Furthermore, a user who relies too heavily on AI may become complacent and stop doing the most important thing: THINK! ChatGPT and other generative AI have been shown to produce spurious results, particularly with regard to mathematical computations, academic source citations, and scientific concepts (Tyson, 2023).

As users, we must validate and check the results for accuracy.

AI Research = Questionable Validity and Reliability

On a related note, many institutions of learning prohibit, restrict, or provide guidelines on the use of generative AI in the classroom and for scholarly research (Purdue University, 2023). Students and researchers must not take all information gleaned from artificial intelligence as truth or fact. The programs pull information based on patterns to deliver results, and the quality of those results depends on how well the AI was trained. The adage “garbage in, garbage out” still prevails when working with AI.

Always double- and triple-check work before publishing anything, and always follow the ethical rules of the institution at which you study.

Cheap = Lack of Quality

I can’t help but think of the old adage, “You can have it cheap, fast, or of good quality… pick two.” The speed of AI affords convenience, and one can hardly beat the price. However, as discussed in the previous section, with speed and convenience a user of generative AI may sacrifice quality. One cannot fully trust the information returned; it may be spurious. Moreover, the language processing performed by AI produces syntax modeled on everyday public speech, which typically yields unimaginative, boring prose heavy with passive voice — again sacrificing quality.

Some believe and teach that translating the results of a prompt to another language and then translating it back to English will produce a more refined opus. However, wrong information remains wrong in any language, and poor grammar may yet beget poor grammar regardless of the tongue in which one writes it, even from a robotic author.

I do not speak against artificial intelligence. I do, in fact, feel that generative AI, among other AI tools, provides opportunities for growth and exploration. In truth, I used Grammarly to proofread this article, Consensus to help with my research, and Bard within Google to find answers to questions. However, you simply cannot convince me that bots will replace our human need to live, work, learn, and love with other humans. Our workplace and our home life — even as consultants, entrepreneurs, or solo professionals — rely on our interactions with other people for the sake of our psychological health and for the quality of work and life we wish to produce.

References

JRE University. (2023). Michio Kaku: ChatGPT Is Just A Chat Bot | Joe Rogan Experience. https://youtu.be/CSpv4zX9dF8?si=RVghf2q5tns5yNni

Purdue University. (2023). Considerations for Your Syllabus and Course. https://www.purdue.edu/innovativelearning/teaching/module/considerations-for-your-syllabus-and-course

Tyson, J. (2023). Shortcomings of ChatGPT. Journal of Chemical Education, 100(8), 3098. https://doi.org/10.1021/acs.jchemed.3c00361

Zielinski, D. (2023). Is technology the answer to HR’s growing burnout problem? HR Magazine, 1. Retrieved from https://www.proquest.com/trade-journals/is-technology-answer-hrs-growing-burnout-problem/docview/2865076068/se-2
