500K downloads achieved by OpenAI’s ChatGPT app for iPad and iPhone

OpenAI has released its ChatGPT app for iPads and iPhones, and it has already become one of the most popular applications of the past two years, topping half a million downloads in its first six days. That popularity also poses a challenge: it means half a million potential points of data exposure.

Despite this, OpenAI has made the app available in 41 additional countries, making it one of the most successful software and service launches of all time. IT leaders, however, must be cautious and warn staff not to enter valuable personal or corporate data into the service; the data OpenAI gathers has already been attacked once, and it is only a matter of time before someone gets at that information again.

Because digital security today is a matter of when an incident happens, not if, the best way to protect data online is not to put it there in the first place. That is why iPhones and other Apple products are built around data minimization: collecting less information and reducing the need to send it to servers for processing.

With the ChatGPT app, however, IT admins must rely almost entirely on trust to ensure their staff don't share confidential data with the bot. Despite stern warnings against such use, it is inevitable that some people will accidentally disclose confidential data through the app, treating it as the equivalent of searching the web.

IT must also consider the App Privacy label OpenAI has attached to its product in the App Store, which makes clear that when using the app, the following data is linked to the user:

- Contact info: email address, name, phone number
- User content: “other” user content
- Identifiers: user ID
- Usage data: product interaction
- Diagnostics: crash data, performance data, other diagnostic data

OpenAI’s own privacy policy is also worth reviewing, although the company has not disclosed the training data used for its latest models. The challenge for IT is to make sure staff are aware of the risks and take appropriate measures to protect confidential data.

2023-05-28 20:30:02
Original from www.computerworld.com