Written by Ezgi Cakirgoz
Chatbots are programs that combine artificial intelligence and natural language processing: they retrieve information like a computer but converse like a human. Just a few years ago, if someone had said that such programs would be accessible to everyone and used in so many different areas, we would have had a hard time believing it. Now, however, such applications, especially ChatGPT, have started to take an important place in our daily lives.
First, let's go back to when Apple first introduced Siri to its users. People were unsettled and half-joked that it was an alien artificial intelligence. Even though it was a program made to help people, it was not surprising that they were afraid of it: science fiction movies had long shown AIs like this trying to destroy humanity. Over time, however, people saw that Siri was harmless and started using it. A similar situation occurred when ChatGPT was recently opened to the public. People may currently be afraid to use ChatGPT and similar programs, but it is clear that these programs will take an important place in our lives in the future [1].
ChatGPT is the most advanced chatbot ever made, capable of answering complex questions and performing many sophisticated tasks. It can write a new work in the voice of a given author, write and explain complex code, play games with you, and much more. ChatGPT was developed by OpenAI and is built upon the company's foundational GPT models, specifically GPT-3.5 and GPT-4. Behind this artificial intelligence lies training on enormous amounts of text, including books and material written on social media [2].
ChatGPT launched on 30 November 2022 and received widespread praise. Elon Musk, one of OpenAI's co-founders, wrote that "ChatGPT is scary good. We are not far from dangerously strong AI". Kevin Roose of The New York Times called it "the best AI chatbot ever made public." Aaron Levie, co-founder and CEO of the enterprise cloud company Box, said, "ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward." Alex Kantrowitz of Slate magazine lauded ChatGPT's response to questions about Nazi Germany, including its observation that although Adolf Hitler built highways in Germany, they were built using forced labor [3].
Although ChatGPT received many positive reviews, the number of negative comments was also high. Time magazine revealed that OpenAI used outsourced Kenyan workers to build a safety system against harmful content such as racism, sexism, and violence. These workers were exposed to toxic and traumatic material; one described the task as "torture". Geoffrey Hinton, one of the "fathers of artificial intelligence", left Google in the wake of ChatGPT, citing concerns that future AI systems could surpass human intelligence. Beyond these issues, there have been many comments about ChatGPT's tendency toward "artificial hallucination" [4].
Hallucination in AI refers to the production of outputs that seem plausible but are factually incorrect or irrelevant to the given context. These outputs often result from the AI model's lack of grounding in the real world: the system "hallucinates" information it was never explicitly trained on, leading to unreliable or misleading responses. You may now ask why hallucination is a problem. There are four main reasons.
Erosion of trust: When AI systems produce misleading information, users can lose their trust in technology and AI programs.
Ethical concerns: Hallucinated outputs can potentially perpetuate harmful stereotypes or misinformation, making AI systems ethically problematic.
Impact on decision-making: AI systems are increasingly used to inform critical decisions in fields such as finance, healthcare, and law. Hallucinations can lead to bad choices with serious consequences.
Legal implications: Incorrect or misleading output may expose AI developers and users to potential legal liability [5].
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," said the letter issued by the Future of Life Institute. This begs the question: Is the human race ready for AI or do we need more time to prepare for its limitless consequences [6]?
References:
[1] Gross, D. (2011, October 4). Apple introduces Siri, web freaks out | CNN Business. CNN. https://edition.cnn.com/2011/10/04/tech/mobile/siri-iphone-4s-skynet/index.html
[2] OpenAI API. (n.d.). Models overview. OpenAI. https://platform.openai.com/docs/models/overview
[3] Piper, K. (2022, December 15). ChatGPT has given everyone a glimpse at AI’s astounding progress. Vox. https://www.vox.com/future-perfect/2022/12/15/23509014/chatgpt-artificial-intelligence-openai-language-models-ai-risk-google
[4] 8 big problems with OpenAI's ChatGPT. (2022, December 22). MUO. https://www.makeuseof.com/openai-chatgpt-biggest-probelms/
[5] Gungor, A. (2023, March 22). ChatGPT: What are hallucinations and why are they a problem for AI systems. Bernard Marr. https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/
[6] Kumar, D. (2023, March 30). Are humans ready for artificial intelligence (AI)? Times of India Blog. https://timesofindia.indiatimes.com/blogs/everything-under-the-sun/are-humans-ready-for-artificial-intelligence-ai/