Gender and age contribute to a trust gap in AI.

When it comes to trusting artificial intelligence (AI), men, millennials, and Gen Z workers generally have more faith in the technology than women, Gen Xers, or Baby Boomers, according to the results of a survey of more than 2,000 US adults.

The survey, the second of its kind conducted eight months apart, was performed by The Harris Poll on behalf of MITRE Corp., a nonprofit that manages research for US government agencies in the aviation, defense, healthcare, homeland security, and cybersecurity areas. The initial survey on AI trust took place just before the launch of OpenAI's ChatGPT last Nov. 30.

Most respondents expressed reservations about AI when applied to government benefits and healthcare, and the latest survey showed a notable decline in trust over the past year.

“Late last year into this year, there was overwhelming excitement about generative AI and what it can do,” said Rob Jekielek, Harris Poll’s managing director. “For much of 2023, there has been substantial discussion about the potential negative implications of AI and how that has been accelerated by generative AI. [There has also been] discussion around lack of, and need for, more regulation, which may have led to a decline in AI trust.”

Only 39% of survey respondents believe AI is safe and secure, down 9% from the November 2022 poll, and 78% worry AI can be used maliciously. The poll indicates more work needs to be done on AI assurance and government regulation.

Ozgur Eris, managing director of MITRE’s AI and Autonomy Innovation Center, said “AI assurance” refers to providing maximum value while protecting society from harm.

“From our perspective, AI has to satisfy expectations for technical, data, and scientific integrity, and produce desired and reliably effective outcomes. But this alone does not provide AI assurance,” Eris said. “For AI to be assured, it also has to permit organizational oversight and be safe and secure. It should also empower humans, enhance their capabilities, and augment their ability to achieve their goals, which means being interpretable by and answerable to those it empowers.”

AI should protect individual privacy, address inequities that might result from its use, and work in humanity’s best interests in ways that are consistent with human values, ethics, rights, and societal norms, Eris added. “Not assuring these AI capability needs is likely to result in negative impacts…, whereas assuring them is more likely to produce more trustworthy AI, and to humans being better positioned to calibrate their trust in useful technologies,” he said.

The survey also showed that more than half (52%) of respondents believe AI will replace their jobs; 80% worry about AI being used for cyberattacks; 78% fear it will be used for identity theft; and 74% are wary of it being used to create deceptive political ads.

Just 46% believe AI technologies are ready for mission-critical use for…
