AI systems claiming to ‘read’ emotions pose discrimination risks

Expert says the deployed technology is based on outdated science and is therefore unreliable

Artificial intelligence (AI) systems that companies claim can "read" facial expressions are based on outdated science and risk being unreliable and discriminatory, one of the world's leading experts on the psychology of emotion has warned.

Lisa Feldman Barrett, professor of psychology at Northeastern University, said that such technologies appear to disregard a growing body of evidence undermining the notion that basic facial expressions are universal across cultures. As a result, these technologies – some of which are already being deployed in real-world settings – run the risk of being unreliable or discriminatory, she said.
