Why Artificial Intelligence Is Not As Intelligent As We Think It Is

Written by Muhammad Muneeb Ur Rehman

Artificial intelligence is often portrayed as something that came from outer space to take our jobs and eventually conquer the world, and people with little technical knowledge actually believe this theory. In reality, AI is just a collection of data and algorithms written by very smart humans, and its intelligence is limited by the intelligence of those algorithms, which ultimately depends on human intelligence.

The easiest way to prove these assumptions wrong is to note that AI can only function on the data it receives. Anything beyond that is more than it can handle; machines are not built that way. So when the data fed into a machine does not cover a new area of work, or its algorithm does not account for unforeseen circumstances, the machine becomes useless.
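The point above can be sketched in a few lines of code. This is a deliberately toy example (the vocabulary and categories are made up for illustration): a "model" that has only seen a handful of labeled examples can answer confidently within its data, but anything outside that data is simply unknown to it.

```python
# Toy illustration: a machine is bound by the data it was given.
# This lookup "model" has only ever seen four labeled examples.
training_data = {
    "cat": "animal",
    "dog": "animal",
    "rose": "plant",
    "oak": "plant",
}

def classify(word):
    # The machine can only answer from patterns it has already seen;
    # an unforeseen input yields no useful answer at all.
    return training_data.get(word, "unknown")

print(classify("dog"))    # a word from the training data: "animal"
print(classify("tulip"))  # never in the data, so: "unknown"
```

Real systems interpolate more cleverly than a dictionary lookup, but the underlying limitation is the same: no training data, no answer.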

Neural network models, used by computers, are loosely modeled on the functioning of the human brain and are excelling in previously unimaginable areas. This has led us to hope that AI will one day surpass our intelligence and solve all of our problems.

Language tools, such as virtual assistants and automatic translation services, are examples of AI's increasingly advanced capabilities. Because the underlying models can learn patterns from large amounts of data, these tools can mimic us. AI is also increasingly being used in decision-making processes in fields like human resources, insurance, and banking, to mention a few.

Machines are starting to understand us and our preferences better by analyzing human behavior through massive volumes of input data. Recommendation engines then filter content and make suggestions for us on social media: films to watch, news to read, or things to wear, helping us with decision-making.
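The basic idea behind such a recommendation engine can be sketched simply. The users and items below are invented for illustration, and real engines use far larger datasets and learned models, but the core mechanism is the same: find users whose behavior resembles yours, then suggest what they liked and you have not seen.

```python
# A minimal sketch of collaborative filtering over a made-up
# dataset of user "likes" (all names here are hypothetical).
likes = {
    "alice": {"film_a", "film_b", "news_x"},
    "bob":   {"film_a", "film_b", "film_c"},
    "carol": {"news_x", "news_y"},
}

def recommend(user):
    # Find the other user with the largest overlap of likes,
    # then suggest their items that the target user hasn't seen.
    mine = likes[user]
    best = max((u for u in likes if u != user),
               key=lambda u: len(likes[u] & mine))
    return sorted(likes[best] - mine)

print(recommend("alice"))  # bob overlaps most, so: ['film_c']
```

Notice that nothing here "understands" films or news; the engine only counts overlaps in past behavior, which is exactly why such systems feel intelligent without being so.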

Thus, when asked about it, the average person will point to these as examples of "artificial intelligence" in this era. However, there is a huge difference between "sounding like a human" and "being a human," and the former does not necessarily mean there is human intellect behind it. This is precisely the deception we are living in.

The major drawback of AI machines compared to human intellect is reasoning ability. Machines can readily offer adequate feedback and answers to questions; however, they are handicapped when it comes to providing logical reasoning and explaining how they reached a conclusion.

Emotional intelligence is one distinguishing factor that keeps humans forever relevant in the workplace. Its importance cannot be overemphasized, especially when dealing with clients.

Regardless of how well AI machines are programmed to respond to humans, it is unlikely that humans will ever develop such a strong emotional connection with these machines. Hence, AI cannot replace humans, especially as connecting with others is vital for business growth.

Artificial intelligence applications are indeed gaining ground in the workplace, and they will replace many jobs people perform today. However, the jobs they take are often limited to repetitive tasks requiring less intense reasoning. Additionally, evolving workplace demands will create new roles for humans as the world moves toward a more integrated tech landscape.

There is a gulf between what AI technologies do and what the average user understands them to do. This problem is not unique to AI; it plagues many modern technologies. We’ve learned to live with the comforts — and discomforts — of allowing black boxes to run our lives, from smartphones to video games, and now large language models.

But when it comes to AI, the gap between how it works and what we know has higher stakes. How might ChatGPT influence practices like education and clinical medicine, long defined by meaningful human interactions between experts (teachers and clinicians) and the people they serve (students and patients)? When ChatGPT creates presumptive medical facts on-the-fly — bullshitting, perhaps, about which drug regimen is best for a patient in an examination room next door — the consequences could be corporeal, and dire.

Kate Crawford studies the social and political implications of artificial intelligence. She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, looks at what it takes to make AI and what’s at stake as it reshapes our world. She says in her book that:

“AI is neither artificial nor intelligent.”
