News, Technology

Panic for Schools, Jealousy for Big Tech: ChatGPT Has Surely Moved the World

Written by Muhammad Muneeb Ur Rehman ·  2 min read >

Two months after its release, ChatGPT has surpassed 100 million monthly users, yet its potential impact remains complicated and unclear. The picture began to sharpen on Wednesday, when its creator announced that a paid subscription version will be launched in the United States.

It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.

So far, over a million users have tested the potential and limits of ChatGPT – a chatbot developed by AI research company OpenAI – to write emails, poems, or code, or even produce entire research papers. The chatbot recently passed a Wharton School MBA exam, prompting further admiration and alarm.

Several academic journal publishers have banned authors from using ChatGPT, and professors are changing exams and assignments in response to the tool.

While many of us are stunned by how impressive and natural the output can be, that does not mean it is useful for everything we do. Here is what we have found to be the major implications for research, teaching, and learning.

When it comes to assignments with open-ended components, I have started asking students to submit ChatGPT's response to the theme they picked, along with an appendix explaining how they used it.

I can’t be in the business of policing whether they use it or not, and this is a tool they need to learn how to use (my views would be different if I were teaching younger students – high schoolers, for instance). I tell them they should worry less about ChatGPT making them redundant and more about being made redundant by somebody who can effectively use such technologies.

ChatGPT seems to have three goals: Be helpful, be truthful, and be inoffensive. However, in its attempt to be helpful (and inoffensive), it occasionally makes stuff up. When it tries to be helpful and truthful, it can say things that are offensive. Will OpenAI’s reinforcement learning with human feedback catch and correct this? 

Punishing unhelpful answers may push the AI to give false answers; punishing false answers may make it give offensive ones; and punishing offensive answers may make it give unhelpful ones. OpenAI needs to grapple with this impossible trinity.

ChatGPT can be useful for compression, such as providing summaries of articles, emails, and books, but only if users apply critical thinking to weed out misinformation. It can also help generate “alternative perspectives” to understand how various groups of people perceive things such as product descriptions, political statements, mission statements, or the news. For example, you could ask ChatGPT for an ambitious, complimentary, cynical, or culture-specific summary of a piece of text as a way to discover new ways of thinking, raise new questions, and also improve the original text. 
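The "alternative perspectives" idea above can be sketched as a small prompt-building helper. This is a minimal illustration, not code from the article: the function name, the perspective list (taken from the examples in the text), and the prompt wording are all hypothetical choices.

```python
# Hypothetical helper: build one ChatGPT summary prompt per perspective.
# The perspectives mirror the article's examples (ambitious, complimentary,
# cynical, culture-specific); the prompt phrasing is illustrative.

PERSPECTIVES = ["ambitious", "complimentary", "cynical", "culture-specific"]


def perspective_prompts(text, perspectives=PERSPECTIVES):
    """Return a list of summary prompts, one per requested perspective."""
    return [
        f"Perspective: {p}. Summarize the following text from that "
        f"perspective:\n\n{text}"
        for p in perspectives
    ]


prompts = perspective_prompts("ChatGPT reached 100 million monthly users.")
for prompt in prompts:
    print(prompt)
```

Each prompt would then be sent to the chatbot separately; comparing the resulting summaries is what surfaces the new questions and framings the article describes.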

This has implications for our education system. Instead of answering questions, students of the future might be asked to write 10 questions for AI and assess its answers based on the different versions and perspectives requested. In this sense, AI may indeed prove valuable for education much like the printing press was. “Supercreativity” – a concept we outlined a few years ago – is around the corner. In the words of Sebastian Thrun, the academic, entrepreneur, and founder of Google X: “We have not even begun to understand how creative AI will become. If you take all the world’s knowledge and creativity and put it into a bottle, you will be amazed by what will come out of it.”

By using ChatGPT to generate multiple texts on a topic, one can also compare that body of machine-generated text with human-written texts to identify possible differences and potential gaps for research. This ability to work together with AI to create better content, ideas, and innovations will become increasingly important.

Going forward, we also need to develop innovative and strong processes for humans to work together with machines and oversee AI to ensure what it generates or does is safe and trustworthy.

Written by Muhammad Muneeb Ur Rehman
Muneeb is a full-time News/Tech writer. He is a passionate follower of the IT progression of Pakistan and the world and wants to educate the people of Pakistan about tech affairs. His favorite part of being a tech writer is writing tech reviews and giving an honest, clear verdict to his readers. Contact Muneeb on LinkedIn.