Lack of Trust in AI Revealed by Pew Research

As time goes by, people are becoming less trusting of artificial intelligence (AI), according to the results of a recent Pew Research study. The study, which involved 11,000 respondents, found that attitudes had changed sharply in just the past two years.

In 2021, 37% of respondents said they were more concerned than excited about AI. That number stayed roughly the same last year (38%), but has now jumped to 52%. (The percentage of those who are excited about AI declined from 18% in 2021 to just 10% this year.)

This is a problem because, to be effective, generative AI tools have to be trained, and during training there are a number of ways data can be compromised or corrupted. If people do not trust something, they are not only unlikely to support it, they are also more likely to act against it. Why would anyone support something they don't trust?

This lack of trust could well slow the evolution of generative AI, possibly leading to more tools and platforms that are corrupted and unable to perform the tasks set out for them. Some of these issues appear to stem from users intentionally trying to undermine the technology, behavior that only deepens the problem.

What makes generative AI unique

Generative AI learns from users. It might initially be trained with large language models (LLMs), but as more people use it, the tools learn from how people use them. This is meant to create a better human interface that can optimize communication with each user. The tool then takes this learning and spreads it across its instances, much as a child learns from its parents and then shares that knowledge with peers. This can create cascading problems if the information being provided is incorrect or biased.

The systems do seem to be able to handle infrequent mistakes and adjust to correct them, but if AI tools are intentionally misled, their ability to self-correct from that type of attack has so far proven inadequate. And unlike an employee who acts out and destroys only their own work product, a misbehaving employee could corrupt the work of everyone using an AI tool once that employee's data is used to train other instances.

This suggests that an employee who undercuts genAI tools could do significant damage to their company beyond just the tasks they're doing.

Why trust matters

People who are worried about losing their job typically do not do a good job of training another employee, because they fear being terminated and replaced by the person they trained. If those same people are asked to train AI tools and fear they're being replaced, they could either refuse to do the training or sabotage it so the tool cannot replace them.

That's where we are now. There is little in the media about how AI tools will help users have a better work/life balance, become more productive without doing more work, and (if properly trained) make fewer mistakes. Instead, we get a regular litany of how AI will be taking jobs,…

Source: www.computerworld.com