Samsung has reportedly banned the use of generative AI tools such as ChatGPT, Google Bard, and the Bing AI chatbot following a recent data leak. According to Bloomberg, the company has informed staff at one of its largest divisions of the new policy, which comes after Samsung engineers unintentionally exposed confidential information through ChatGPT.

Samsung is said to have prohibited the use of generative AI tools over concerns that data submitted to these platforms is stored on external servers and could fall into the wrong hands.

As per the report, Samsung’s memo to employees read, “Interest in generative AI platforms such as ChatGPT has been growing internally and externally. While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”

The memo further revealed that the new policy came after Samsung engineers accidentally leaked internal source code by uploading it to ChatGPT. “HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees' productivity and efficiency. However, until these measures are prepared, we are temporarily restricting the use of generative AI," the memo said.

Samsung’s semiconductor division had permitted its engineers to use ChatGPT to help resolve issues with source code. In the process, staff accidentally entered top-secret information, including the source code for a new programme and meeting notes about their hardware. Three such incidents were reported in a single month.

It was previously reported that Samsung Semiconductor is developing its own AI for internal use by staff in order to prevent such leaks. However, it will only be capable of handling prompts smaller than 1024 bytes.