Sunday, December 4, 2022

Here’s What Google Covered During Its AI@ Event


Google AI Research Scientist Timnit Gebru speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California.
Photo: Kimberly White (Getty Images)

A recurring theme at Google’s AI@ event was one of supposed reflection and cautious optimism. That sentiment rang particularly true on the topic of AI ethics, an area Google has struggled with in the past and one that is increasingly important in the emerging world of generative AI. Though Google has touted its AI Principles and Responsible AI teams for years, it has faced fierce blowback from critics, particularly after firing multiple high-profile AI researchers.

Google Vice President of Engineering Research Marian Croak acknowledged some potential pitfalls presented by the technologies on display Wednesday. Those include fears of increased toxicity and bias heightened by algorithms, trust in news further degraded by deepfakes, and misinformation that can effectively blur the distinction between what is real and what isn’t. Part of addressing those risks, according to Croak, involves research that gives users more control over AI systems, so that they collaborate with the systems rather than cede full control to them.

Croak said she believed Google’s AI Principles put users, safety, and the avoidance of harm “above what our typical business considerations are.” Responsible AI researchers, according to Croak, conduct adversarial testing and set quantitative benchmarks across all dimensions of the company’s AI. The researchers conducting those efforts are professionally diverse, reportedly including social scientists, ethicists, and engineers.

“I don’t want the principles to just be words on paper,” Croak said. In the coming years, she said she hopes to see the capabilities of responsible AI embedded in the company’s technical infrastructure. Responsible AI, Croak said, should be “baked into the system.”

