Slack updates AI ‘privacy principles’ after user backlash
Slack has updated its “privacy principles” in response to concerns about the use of customer data to train its generative AI (genAI) models. 

The company said in a blog post Friday that it does not rely on user data — such as Slack messages and files — to develop the large language models (LLMs) powering the genAI features in its collaboration app. However, customer data is still used by default for its machine learning-based recommendation features, and customers must opt out to prevent that.

Criticism of Slack’s privacy stance apparently began last week, when a Slack user posted on X about the company’s privacy principles, highlighting the use of customer data in its AI models and the requirement to opt out. Others expressed outrage in a Hacker News thread.

On Friday, Slack responded to the frustrations by updating some of the language in its privacy principles, attempting to differentiate between its machine learning models and its use of LLMs.

Slack uses machine learning techniques for certain features, such as emoji and channel recommendations, as well as in search results. While these ML algorithms are indeed trained on user data, they are not built to “learn, memorize, or be able to reproduce any customer data of any kind,” Slack said. The ML models use “de-identified, aggregate data and do not access message content in DMs, private channels, or public channels.”

No customer data is used to train the third-party LLMs behind its Slack AI tools, the company said.

2024-05-23 10:51:01
Article from www.computerworld.com
