That data is then used for a wide variety of applications, such as detecting when someone may be lying about insurance claims, helping diagnose depression, and tracking how students are responding to a teacher during online learning.
Gabi Zijderveld, the chief marketing officer of Smart Eye, an EAI company that claims its tech “understands, supports, and predicts human behavior,” said the firm had amassed vast amounts of emotion-based data by analyzing over 14.5 million videos of facial expressions recorded with voluntary participants.
In these cases, paid participants watch online videos, and their facial expressions are analyzed by EAI technology to determine whether a joke in an ad was funny or whether a particular moment in a movie trailer elicited the emotional response it was meant to.
The idea behind the tech was noble: The EAI was supposed to mitigate bias and spot ideal candidates by identifying soft skills, cognitive ability, psychological traits, and emotional intelligence during video interviews.
In response to public outcry, the company announced in 2021 that it would stop using EAI facial analysis, concluding it wasn’t worth the concern it generated, though it would continue analyzing speech, intonation, and behavior during interviews.
Kat Roemmich, a Ph.D. student at the University of Michigan School of Information who has conducted research on EAI, has found that employees frequently remain unaware that their company is using the tech while they’re on the job.
The original article contains 2,089 words, the summary contains 232 words. Saved 89%. I’m a bot and I’m open source!