
Breach of Trust: Three Samsung Employees Reportedly Leaked Sensitive Data to ChatGPT

by admin

ChatGPT might look like a handy tool, but be careful what you share with it. Samsung employees learned this the hard way after feeding the chatbot information they assumed would stay private, only for it to become potential training data that can shape the bot's responses to other users. In short, anything you type into ChatGPT may not stay confidential.

Samsung recently allowed its engineers to use ChatGPT, and within a short time three of them reportedly leaked confidential information to the chatbot. One asked it to check sensitive database source code for errors, another requested code optimization, and a third uploaded a recording of an internal meeting and asked the bot to summarize its contents.

After learning of the leaks, Samsung tried to limit the damage from future incidents by capping employees' ChatGPT prompts at 1,024 characters. The company is also investigating the three employees involved and building its own in-house chatbot to prevent similar mishaps. BuyTechBlog has asked Samsung for more information.

ChatGPT uses the prompts people type into it as training data to improve its models. OpenAI, the company behind ChatGPT, warns users not to share sensitive or identifying information in conversations, because it cannot delete specific prompts from a person's history. The only way to remove personal data from ChatGPT is to delete the account entirely, a process that can take up to four weeks.

When you sign up for a service, you agree to follow the company's terms and privacy policies. Samsung's recent troubles are a reminder to be careful when using chatbots, and in any online activity: data you share carelessly won't necessarily stay secure.
