Companies could face liability for anti-bias missteps caused by AI tools

As lawmakers and others work to address privacy, security, and bias problems with generative artificial intelligence (AI), experts warned companies this week that their tech suppliers won't be the ones left holding the bag when something goes wrong; they will.

A panel of three AI and legal experts held a press conference Wednesday in the wake of several government and private business initiatives aimed at holding AI creators and users more accountable.

Miriam Vogel, CEO of the nonprofit EqualAI, an organization founded five years ago to reduce unconscious bias and other "harms" in AI systems, joined two other experts to address potential pitfalls.

Vogel, who chairs the White House National AI Advisory Committee and is a former associate deputy attorney general, said that while AI is a powerful tool that can create tremendous business efficiencies, organizations using it must be "hypervigilant" that AI systems don't perpetuate existing discrimination or create new forms of it.

"When creating EqualAI, the founders realized that bias and related harms are age-old issues in a new medium. Obviously here, it can be harder to detect, and the consequences can be much graver," Vogel said. (EqualAI trains and advises companies on the responsible use of AI.)

Vogel was joined by Cathy O'Neil, CEO of ORCAA, a consulting firm that audits algorithms, including AI systems, for compliance and safety, and Reggie Townsend, vice president for data ethics at analytics software vendor SAS Institute and an EqualAI board member.

The panel argued that managing the safety and biases of AI is less about technical expertise and more about management frameworks that span technologies.

AI in many forms has been around for decades, but it wasn't until computer processors could support more sophisticated models and generative AI platforms such as ChatGPT that concerns over biases, security, and privacy escalated. Over the past six months, issues around bias in hiring, employee evaluation, and promotion have surfaced, spurring municipalities, states, and the US government to create statutes to address the issue.

Even though companies typically license AI software from third-party vendors, O'Neil said, legal liability will be more problematic for users than for AI tech suppliers.

O'Neil worked in advertising technology a decade ago, when, she said, it was easier to differentiate people based on wealth, gender, and race. "That was the normalized approach to advertising. It was pretty clear from the get-go that this could go wrong. It's not that hard to find examples. Now, it's 10 years later and we know things have gone wrong."

Looking for points of failure

Facial recognition algorithms, for example, often work far better…
