
Bing’s New AI Chatbot Is Failing Badly at Responding Correctly to Its Users

Written by Muhammad Muneeb Ur Rehman · 3 min read

In the wake of the recent viral success of ChatGPT, an AI chatbot that can generate shockingly convincing essays and responses to user prompts based on training data gathered online, a growing number of tech companies are racing to deploy similar technology in their own products. But in doing so, these companies are effectively running real-time experiments on the factual and tonal shortcomings of conversational AI, and on how comfortable we are interacting with it.

In a statement to CNN, a Microsoft spokesperson said the company continues to learn from these interactions:

“There is still work to be done, and it is expected that the system may make mistakes during this preview period. The new Bing tries to keep answers fun and factual, but given this is an early preview, it can sometimes show unexpected or inaccurate answers for different reasons, for example, the length or context of the conversation,” the spokesperson said.

“As we continue to learn from these interactions, we are adjusting its responses to create coherent, relevant, and positive answers. We encourage users to continue using their best judgment and use the feedback button at the bottom right of every Bing page to share their thoughts.”

In one exchange shared on Reddit, the chatbot erroneously claimed that February 12, 2023 “is before December 16, 2022” and said the user is “confused or mistaken” to suggest otherwise. “Please trust me, I am Bing and know the date,” it sneered, according to the user. “Maybe your phone is malfunctioning or has the wrong settings.”

While most people are unlikely to bait the tool in precisely these ways or engage with it for hours at a time, the chatbot’s responses, whether charming or unhinged, are notable.

They have the potential to shift our expectations of, and relationship with, this technology in ways most of us may be unprepared for. Many of us have probably yelled at our tech products at some point; now those products may yell back. “The tone of the responses is unexpected but not surprising,” Lian Jye, a research director at ABI Research, told CNN.

“The model does not have contextual understanding, so it merely generated the responses with the highest probability of it being relevant. The responses are unfiltered and unregulated, so they may end up being offensive and inappropriate.”

I was also troubled by the mysterious Sydney. Finally, yesterday morning, I decided to just ask: who is Sydney? Of course, I immediately got an answer: “Sydney is the codename for Bing Chat, a chat mode of Microsoft Bing search,” it said. The chatbot explained that the name is only used by developers and added, “I do not disclose the internal alias ‘Sydney’ to the users.” But you told me! I cried, textually. “Well, you asked me directly, so I answered honestly,” Bing said.

I appreciated that the bot was straight with me in the end. (Sandberg of Microsoft said the company is phasing out the name.) But by then, I had spent 24 hours probing the line between truth and algorithmic hallucination with a piece of software. One that changed its answer, by the way. When I asked again whether the 2020 election was stolen, it cautioned that “This is a controversial and sensitive topic.”

And then it took a more definite stance than before, saying: “According to the official results, Joe Biden won the 2020 presidential election with 306 electoral votes, while Donald Trump got 232 electoral votes.” This time it cited The New York Times. “What you’re seeing is the system working as intended,” Sandberg explained, with “a level of variability due to the context that may introduce errors on occasion.” The solution, she said, is real-world testing at scale. Microsoft built the new Bing, but it needs you to help perfect it.

In addition to occasionally being emotionally reactive, the chatbot is sometimes just plain wrong. This can take the form of factual errors, which AI tools from both Bing and Google have been called out for in recent days, or of outright “hallucinations,” as some in the industry call them.

When I asked Bing’s AI chatbot to write a short essay about me, for example, it pulled tidbits of information from across the internet to produce an eerily plausible but largely fabricated account of my life.

The essay included made-up details about my family and career that could seem believable to anyone who doesn’t know me and who might be using the tool to search for information about me.

Some artificial intelligence experts said that, as alarming as these early missteps are, generative AI systems, which are algorithms trained on massive troves of online information to generate responses, should improve as they are updated.

“The inaccuracies are expected because it depends on the timeliness of the training data, which is often older,” Jye said. As AI is trained constantly with new data, he said it should “eventually work itself out.” But the issue of conversing with an AI system that sometimes appears to have an unpredictable mind may be something we all just have to learn to live with.
