Why is Google so committed to hiring diverse AI teams?

While concerns about the existential threat generative AI might pose to the human race have been grabbing headlines of late, there's a much more immediate and arguably more real concern: discrimination.

Although major players in the AI market have affirmed their commitment to diversity and inclusion in the workplace, women and people of color remain underrepresented in the technology industry, and there's a fear that the training AI models receive will be inherently biased as a result.

It's a concern shared by industry professionals and political bodies alike. In June of this year, European Commissioner for Competition Margrethe Vestager argued that AI-fueled discrimination poses a greater risk to society than the prospect of human extinction.

Elsewhere, the UK's Equality and Human Rights Commission (EHRC) has expressed concern that current proposals for regulating AI in the country are inadequate to protect human rights, noting that while responsible and ethical use of AI can bring many benefits, "we recognize that with increased use of AI comes an increased risk of existing discrimination being exacerbated by algorithmic biases."

Helen Kelisky, managing director of Google Cloud UK and Ireland, believes that attracting and retaining a diverse workforce is the key to addressing this challenge. Teams made up of talent from different backgrounds and with different perspectives, she argues, are vital to training these systems in a way that safeguards models from problems such as replicating social biases.

Computerworld talked to Kelisky about the importance of having diverse AI teams. What follows are excerpts from the interview.

Why is it so important for AI companies to ensure they have a diverse workforce — particularly when it comes to their technical teams?

Helen Kelisky. (Image: Google)

As optimistic as I am about the potential of AI, we have to recognize that it must be developed responsibly. If AI technologies are to be truly successful, they cannot leave certain groups behind or perpetuate any existing biases. However, an AI system can only be as good as the data it is trained on, and with humans controlling the data and criteria behind every AI-enhanced solution, more diverse human input means better results.

The outputs of any AI system are shaped by the demographic makeup of its creators, and are therefore subject to the unintentional biases that team might have. If an AI tool is only able to recognize one accent, tone, or language, the number of people able to benefit from that tool is significantly reduced.

For example, if a technical team is made up predominantly of white men, facial recognition systems could be inadvertently trained to recognize this demographic more easily than anyone else.

What are the consequences of not having diverse teams?

Strong representation means stronger products. AI algorithms and data sets have the power to reflect…

2023-08-10 18:48:02
Source: www.computerworld.com