More and more businesses are entering an era in which artificial intelligence (AI) is a component of every new initiative. One such tool, emotion AI, analyzes facial expressions based on a person's faceprint to infer their goals, attitudes, and inner emotions.
The basic emotions theory
This application, also known as emotion AI or affective computing, is founded on the "basic emotions" theory, which asserts that people all over the world express the same six basic internal emotional states (happiness, surprise, fear, disgust, anger, and sadness) through their facial expressions, and that those expressions are shaped by our biological and evolutionary origins.
This idea sounds logical on the surface because nonverbal communication heavily relies on facial expressions.
Emotion AI is an emerging technology that "allows a computer and systems to identify, process, and simulate human feelings and emotions," according to a recent report by tech industry research firm AIMultiple. It is an interdisciplinary field that combines cognitive science, psychology, and computer science to help organizations make better decisions, frequently with the aim of improving dependability, consistency, and efficiency. Although the application raises some questions about ethical AI, AstraZeneca, for example, recently discussed its guidelines on applicable regulations.
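To make that definition concrete, here is a minimal, purely illustrative sketch of the "identify" step: a small, untrained PyTorch model that maps a cropped face image to a probability for each of the six basic emotions. The architecture, input size, and labels are assumptions made for this example; production systems use far larger models trained on labeled expression datasets.

```python
# Illustrative sketch only: a tiny, untrained classifier that maps a face
# crop to probabilities over the six "basic emotions". Real emotion AI
# systems use much larger models trained on labeled expression datasets.
import torch
import torch.nn as nn

BASIC_EMOTIONS = ["happiness", "surprise", "fear", "disgust", "anger", "sadness"]

class TinyEmotionNet(nn.Module):
    def __init__(self, num_classes: int = len(BASIC_EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48x48 -> 24x24
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pool
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        x = self.features(face).flatten(1)
        return self.classifier(x)                 # raw logits

def classify_face(model: nn.Module, face: torch.Tensor) -> dict:
    """Map a (3, 48, 48) face crop to a probability per basic emotion."""
    with torch.no_grad():
        probs = torch.softmax(model(face.unsqueeze(0)), dim=1).squeeze(0)
    return dict(zip(BASIC_EMOTIONS, probs.tolist()))

if __name__ == "__main__":
    model = TinyEmotionNet().eval()
    fake_face = torch.rand(3, 48, 48)             # stand-in for a real face crop
    print(classify_face(model, fake_face))        # roughly uniform: the model is untrained
```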
How can emotion AI be utilized?
Emotion AI software is already widely used to score video interviews with job candidates for traits such as "enthusiasm," "willingness to learn," "conscientiousness and responsibility," and "personal stability." The software is also used by border patrol agents to identify threats at checkpoints, as a tool for diagnosing and treating patients with mood disorders, to monitor classrooms for disruption or boredom, and to observe people's behavior during video calls.
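As a rough illustration of how such interview-scoring tools might work, the sketch below aggregates per-frame emotion probabilities into a single "enthusiasm" score. The trait definition, weights, and frame data are invented for this example and do not represent any vendor's actual method.

```python
# Hypothetical sketch of turning per-frame emotion scores into an interview
# "trait" score. The weights and trait definition are invented for
# illustration and do not reflect any vendor's real scoring method.
from statistics import mean

# One dict per analyzed video frame: probability per basic emotion.
frames = [
    {"happiness": 0.55, "surprise": 0.20, "fear": 0.05,
     "disgust": 0.05, "anger": 0.05, "sadness": 0.10},
    {"happiness": 0.40, "surprise": 0.30, "fear": 0.10,
     "disgust": 0.05, "anger": 0.05, "sadness": 0.10},
]

# Invented weighting: treat happiness and surprise as positive signals for
# "enthusiasm" and fear/sadness as negative ones.
ENTHUSIASM_WEIGHTS = {"happiness": 1.0, "surprise": 0.5,
                      "fear": -0.5, "sadness": -0.5}

def enthusiasm_score(frame_scores: list[dict]) -> float:
    """Average a weighted sum of emotion probabilities across frames."""
    per_frame = [
        sum(weight * frame.get(emotion, 0.0)
            for emotion, weight in ENTHUSIASM_WEIGHTS.items())
        for frame in frame_scores
    ]
    return mean(per_frame)

print(f"enthusiasm: {enthusiasm_score(frames):.2f}")
```

The point of the sketch is that the trait score is derived entirely from the underlying emotion labels, so any unreliability in those labels propagates directly into the hiring signal.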
The use of this technology is becoming more widespread. In South Korea, for instance, emotion AI has become so common in job interviews that employment counselors now have their clients practice for AI-led interviews. The startup EmotionTrac sells software that lets attorneys analyze facial expressions in real time to determine which arguments will most appeal to jurors. Researchers at Tel Aviv University devised a method of detecting lies through facial muscle analysis, with a stated accuracy of 73 percent. And Apple has been granted a patent for "changing operation of an intelligent agent in response to facial expressions and/or emotions."
Emotion AI and pseudoscience
However, there is much uncertainty and debate surrounding emotion AI, not least because studies have shown that facial expressions vary widely across contexts and cultures. There is also considerable evidence that facial expressions are too variable to serve as reliable indicators of emotional meaning, and some contend that the so-called universal expressions underpinning recognition systems are really just cultural biases. Mounting evidence suggests that the scientific foundations of emotion detection are flawed: critics argue there is insufficient proof that facial expressions accurately, consistently, and specifically reflect emotional states.
Kate Crawford, an expert in AI ethics, goes a step further, arguing that there is no solid evidence that a person's facial expressions reliably convey their emotions. As a result, judgments based on emotion AI are unpredictable.
At least some companies are holding back on developing or deploying emotion AI because of these concerns. Microsoft recently updated its Responsible AI Standard framework to promote trustworthy AI and ensure more beneficial and equitable outcomes. The company used this framework to conduct an internal review of its AI products and services, and one result was the "retiring" of Azure Face capabilities "that infer emotional states and identity attributes."
The company says the decision was made because of privacy concerns and a lack of expert consensus on how to infer emotions from appearance, particularly across demographics and use cases. In other words, the organization is demonstrating how to use AI responsibly, or at the very least how to head off the technology's potentially harmful effects.
Despite these clear worries, the market for emotion AI is booming and is expected to grow at a 12 percent compound annual growth rate through 2028. The field continues to attract venture financing: Uniphore, for instance, recently closed a $400 million Series E round at a $2.5 billion valuation and now sells software that incorporates emotion AI.
Businesses have been using similar emotion AI technology to increase productivity for a number of years. According to an Insider story, Chinese businesses utilize “emotional surveillance technologies” to alter workflows, such as employee placement and breaks, in order to boost output and profitability.
This technology appeals to a wide range of people, not just corporations. Recent reports claim that the Institute of Artificial Intelligence at the Hefei Comprehensive National Science Center in China developed an AI program that can “discern the level of acceptance for ideological and political education” by analyzing facial expressions and brain waves.
Test subjects watched videos about the ruling party while the AI algorithm gathered and analyzed the data. It then assigned each subject a score and determined whether they were sufficiently loyal or in need of further political education. According to an article in The Telegraph, subjects were scored on their "determination to be grateful to the party, listen to the party, and follow the party."
Even if emotion AI could be developed to detect everyone's feelings perfectly, would we want such intimate surveillance in our lives? That is an important question posed by Neuroscience News, and it goes to the core concern about privacy. Even if emotion AI were grounded in sound science and had some useful applications, it would still risk sliding down a slippery slope toward an Orwellian Thought Police.
Every technological wave produces winners and losers and introduces features that can harm some segments of the population. Many applications of emotion AI combine intrusive surveillance with Taylorism, a dubious pairing. On top of that, the field rests on a shaky scientific foundation.