Amazon has reportedly warned employees not to put confidential data into ChatGPT, the AI-powered chatbot capable of answering complex queries in seconds. According to messages shared in an internal Slack group and reviewed by Business Insider, Amazon employees have been using ChatGPT for research and to help with daily tasks.
ChatGPT has been making the tech industry sweat since its rise in popularity last year, and now Amazon is feeling the heat too. According to internal communications from the company as viewed by Insider, an Amazon lawyer has urged employees not to share code with the AI chatbot.
Insider reported earlier this week that the lawyer specifically requested that employees not share “any Amazon confidential information (including Amazon code you are working on)” with ChatGPT, according to screenshots of Slack messages reviewed by the outlet. The guidance comes after the company reportedly witnessed ChatGPT responses that mimicked internal Amazon data.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote further, according to Insider.
Amazon’s concern that ChatGPT has absorbed its data is plausible: in a related incident, ChatGPT allegedly answered interview questions for a software coding position at the company correctly. According to Slack channel transcripts also reviewed by Insider, the AI provided correct solutions to software coding questions and even made improvements to some of Amazon’s code.
“I was honestly impressed!” an employee reportedly wrote on Slack. “I’m both scared and excited to see what impact this will have on the way that we conduct coding interviews.” The novelty of ChatGPT has not worn off just yet, but questions surrounding how it may intersect with our daily lives have sprung up in recent weeks.
While the chatbot was able to pass a final exam in an MBA-level course at Wharton (though it struggled with some basic arithmetic), ChatGPT’s role in education, among other fields, remains contested. Some school systems, like the New York City Department of Education, have decided to ban the tech over fears of cheating, but OpenAI’s CEO simply believes school administrators need to get over it.
The report states that Amazon employees were impressed with the chatbot’s capabilities. After testing, team members of the Amazon Web Services cloud unit said ChatGPT was doing a “very good job” answering customer support questions and creating “very strong” training documents. Engineers also reportedly used the chatbot to review code, with favorable results. However, ChatGPT reportedly struggled with creating an “epic rap battle.”
This does not mean that ChatGPT cannot improve, and its developer, OpenAI, may add more capabilities in the coming months. Google is also reportedly working on a ChatGPT rival, as many believe the AI-powered chatbot poses a big threat to its search engine. The key difference between the two approaches is that ChatGPT offers a single synthesized answer rather than a list of links to sources available online.
Amazon’s internal Slack channel has seen many employee questions about how to use ChatGPT. Some employees asked Amazon if there were official guidelines for using ChatGPT on work devices. Others wondered if they were allowed to work with AI tools at all. One employee urged Amazon’s cloud computing division, AWS, to clarify its stance on using “generative AI (AIGC) tools.”
Soon, an Amazon corporate lawyer joined the discussion. A screenshot of the internal communication in the Slack channel shows the lawyer warning employees not to provide ChatGPT with “any Amazon confidential information,” including Amazon code they were writing. He also advised employees to follow the company’s existing non-disclosure policy, as some of ChatGPT’s responses looked very similar to internal Amazon material.
These exchanges suggest that the sudden emergence of ChatGPT has raised many new ethical questions. ChatGPT is a conversational AI tool that responds to queries with fluent, humanlike answers. Its rapid proliferation has the potential to disrupt several industries, including media, academia, and healthcare, prompting efforts to find new use cases for chatbots and to understand their possible impact.
How employees share confidential information with ChatGPT, and what its developer, OpenAI, does with it could become a thorny issue. That’s especially important for Amazon since archrival Microsoft has invested heavily in OpenAI, including a new funding round this week that reportedly totals $10 billion.
Emily Bender, who teaches computational linguistics at the University of Washington, said: “OpenAI is far from transparent about how it uses data, but if the data is used for training, I expect companies to think: After several months of widespread use of ChatGPT, is it possible to obtain confidential information of a private company through carefully crafted prompts?”
Amazon has some internal safeguards in place for employees using ChatGPT. For example, screenshots of the exchange show that when employees use work devices to access the ChatGPT website, a warning message pops up saying they are about to access a third-party service that “may not be approved for use by Amazon Security.”
Employees participating in the Slack channel chat said they could bypass the message simply by clicking the “Acknowledge” button. Staff speculated that the warning popup was meant to prevent employees from pasting confidential information into ChatGPT, especially since they hadn’t seen a company policy on internal use.