Amazon Q’s Alleged Hallucinations and Confidential Data Leaks Spark Concerns

Less than a week after its launch, Amazon Q, Amazon's Copilot competitor, is already facing a threat to its survival as a new report suggests the generative AI assistant could be hallucinating.

Q is grappling with inaccuracies and privacy issues, including hallucinations and data leaks, The Platformer reported, citing leaked documents. Significantly, the report comes as two major studies showed that large language models (LLMs) are highly inaccurate when connected to corporate databases and are becoming less transparent.

However, according to an Amazon spokesperson, Amazon Q has not leaked any confidential information.

"Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon. No security issue was identified as a result of that feedback. We appreciate all of the feedback we've already received and will continue to tune Q as it transitions from being a product in preview to being generally available," the spokesperson said.

Despite Amazon's claims that Q will serve as a work companion for millions of people, the assistant may not be ready for corporate use, according to analysts tracking the industry.

"If hallucinations are present, you cannot use it for decision-making in a corporate setting," said Pareekh Jain, CEO of EIIRTrend & Pareekh Consulting. "It's fine for personal use or obtaining information, but not for decision-making processes."

More testing needed

Amazon may face substantial testing challenges before its generative AI assistant is ready for commercial release. Jain emphasized the importance of conducting extensive internal trials to ensure readiness.

"I think they need to do more testing with internal employees first," Jain added. "Obviously, that is what they are doing now. In the end, nobody from external sources has reported these issues. There are two things here: one is the data, and the other is algorithms. They have to see if it's an issue with the data or with the algorithm."

Q leverages 17 years of AWS' data and development proficiency and is designed to work as a versatile tool for enterprises. Given the direction of the industry, much is at stake for Amazon with this AI offering.

While hallucinations don't undermine the potential of generative AI for consumer and enterprise use cases, proper training is essential, according to Sharath Srinivasamurthy, associate vice president at market research firm IDC.

"Training the models on better-quality data, prompt augmentation (guiding users with predefined prompts that the model can understand easily), ongoing fine-tuning of the models on organization- or industry-specific data and policies, and adding a layer of human review when a response seems suspicious are some of the steps that need to be taken to make the best use of this emerging technology," Srinivasamurthy said.

Will hallucinations prompt urgency to regulate?

Reports of…

2023-12-05 10:41:03
Link from www.computerworld.com
