10 Things ChatGPT Still Can’t Do
Technology has advanced by leaps and bounds, but are there still things that even cutting-edge AI like ChatGPT cannot handle?
While ChatGPT has made significant strides in simulating human-like interaction, there are still areas where it falls short. Here are 10 things the popular chatbot still can’t do.
10 Things ChatGPT Can’t Do
1. Understand Emotions and Feelings
ChatGPT cannot interpret tone or emotion from text-based conversations.
For instance, it cannot discern whether someone is genuinely excited or being sarcastic when they say something like “Can you imagine what that feels like?”
This limitation in emotional intelligence suggests that companies may benefit from using human agents to interact with customers who need nuanced responses beyond what chatbots can provide.
2. Access the Internet
OpenAI’s ChatGPT cannot access the internet, meaning it cannot provide real-time information, use location-based data, or offer URLs or references to online content.
This limitation restricts its functionality and the range of services it can offer. The ability to crawl the web would enhance its capabilities, but keeping its answers current would still require continuous retraining, unless OpenAI adopts online learning methods.
3. Edit Previous Texts
ChatGPT can provide effective suggestions when you ask it to edit and improve a text. However, if you then ask it to rephrase or elaborate further, it may misunderstand the request, refer to text you never provided, or offer only minimal changes similar to its earlier responses.
4. Proper Language Support
If you experiment with mixing languages, you will find that ChatGPT’s support varies significantly depending on the language used.
Interestingly, issues with certain languages seem to have improved recently, suggesting further advances in the near future.
5. Answer Complex Questions
Chatbots excel at basic tasks like FAQs and product suggestions but can falter with complex issues such as billing disputes or product returns if not equipped with the necessary knowledge base. For these scenarios, companies should prioritize human agents trained in problem-solving and customer service to ensure effective resolution and customer satisfaction.
6. Have Judgement
ChatGPT can recognize qualitative aspects of language like stance and sentiment, but it cannot judge whether a text is offensive or weigh concepts like good versus evil.
This limitation may be beneficial in maintaining neutrality. However, there could be value in a feature where ChatGPT offers advice, such as suggesting you not send an email or recommending rephrasing to soften the language.
7. Lie
Hoping to get help making up an excuse for a week off? Well, ChatGPT definitely does not have the answer.
ChatGPT is more adept at answering straightforward questions, and it prefers not to deceive or lie. Yes, that includes helping you do so!
8. Avoid Plagiarism
If you prompt ChatGPT to compose an original poem in the style of Edgar Allan Poe, you may get a patchwork piece that merges excerpts from Poe’s works into a sonnet that does not follow all the formal rules. A plagiarism checker would likely flag it as copied content, reflecting the composition’s patchwork nature.
9. Understand Context
ChatGPT lacks the ability to grasp the context of conversations. For instance, if asked “Do you like the Titanic movie?”, it cannot discern whether the question seeks its opinion or is casual small talk. This limitation can result in misunderstandings and frustration for both users and the chatbot.
10. See Information After 2021
Lastly, one of ChatGPT’s biggest limitations is that its knowledge only extends to 2021.
Ask ChatGPT about anything that happened after 2021, and it will answer along these lines: “As of my last update in 2021, I don’t have access to information beyond that point. This means any events, developments, or changes that occurred after 2021 are not within my knowledge base. If you have questions or need information about more recent events, it’s best to consult up-to-date sources or news platforms.”
What Does This Mean for the Future of AI?
The statement highlights a broader issue: an AI model built on data from a specific point in time, like 2021, will become outdated as new information emerges and society changes.
If its knowledge base ended in 2019 instead, it would know nothing of the sweeping changes brought on by the 2020 pandemic, making it less accurate about current conditions. An AI that doesn’t evolve with new data therefore risks losing relevance as time passes and societal contexts shift.
What do you think? Does ChatGPT do a good job?
You might also want to read: Study Finds ChatGPT Outperforms Physicians