
Computers have emotions too: New research shows AI can teach technology to recognise emotions with 98% accuracy

[Image: A participant taking part in the study]

The rise of artificial intelligence is one of the world’s most influential and talked-about technological advancements. Its rapidly increasing capabilities have embedded it into everyday life, and it’s now sitting in our living rooms and, some say, threatening our jobs.

Although AI allows machines to operate with some degree of human-like intelligence, the one thing humans have always had over machines is the ability to exhibit emotions in response to the situations they find themselves in. But what if AI could be used to enable machines and technology to recognise emotions automatically?

New research from Brunel University London, together with Iran’s University of Bonab and Islamic Azad University, has combined EEG signals - the electroencephalogram, a test that measures the brain’s electrical activity - with artificial intelligence to develop an automatic emotion recognition model that classifies emotions with an accuracy of more than 98%.

Given training data and suitable algorithms, computers can be taught to process data in much the way a human brain does. This branch of artificial intelligence and computer science is called machine learning: computers are taught to imitate the way that humans learn.
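The study’s own code is not reproduced in this article, but the idea can be sketched in a few lines of Python. Below, a classifier from the scikit-learn library learns to separate two classes from labelled examples; the data is made up purely for illustration, standing in for features extracted from EEG signals.

```python
# Minimal supervised-learning sketch (illustrative only, not the study's code).
# A classifier is shown labelled examples, then tested on examples it has
# never seen - the essence of "learning from training data".
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in data: two classes of 8-dimensional feature vectors,
# drawn from slightly shifted distributions (think "positive" vs "negative").
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
               rng.normal(0.7, 1.0, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC().fit(X_train, y_train)                         # learn from examples
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # generalise to new data
```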

Dr Sebelan Danishvar, a research fellow at Brunel, said: “A generative adversarial network, known as a GAN, is a key algorithm used in machine learning that enables computers to mimic how the human brain works. The emotional state of a person can be detected using physiological indicators such as EEG. Because EEG signals are directly derived from the central nervous system, they have a strong association with various emotions.

“Through the use of GANs, computers learn how to perform tasks after seeing examples and training data. They can then create new data, which enables them to gradually improve in accuracy.”
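To make that concrete - and only as an illustration, not the study’s actual model - here is a minimal GAN sketch in Python using PyTorch: a generator learns to turn random noise into signals, while a discriminator learns to tell those fakes from real ones, and each improves by competing with the other. The network sizes and the stand-in “real” signals are assumptions made for the example.

```python
# Minimal GAN sketch in PyTorch (illustrative only, not the study's model).
import torch
import torch.nn as nn

SIGNAL_LEN, NOISE_DIM = 64, 16  # assumed toy sizes

G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, SIGNAL_LEN))                   # generator
D = nn.Sequential(nn.Linear(SIGNAL_LEN, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())              # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=32):
    """Stand-in for real EEG windows: noisy sine waves."""
    t = torch.linspace(0, 6.28, SIGNAL_LEN)
    return torch.sin(t) + 0.1 * torch.randn(n, SIGNAL_LEN)

for step in range(200):
    real = real_batch()
    fake = G(torch.randn(real.size(0), NOISE_DIM))

    # Discriminator step: label real signals 1, generated signals 0.
    d_loss = (bce(D(real), torch.ones(real.size(0), 1)) +
              bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as 1.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```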

The new study, published in the journal Electronics, used music to stimulate the emotions of 11 volunteers, all aged between 18 and 32.

The participants were instructed to abstain from alcohol, medications, caffeine, and energy drinks for 48 hours before the experiment, and none of them had any depressive disorders.

During the study, the volunteers each listened to 10 pieces of music through headphones. Happy music was used to induce positive emotions, and sad music was used to induce negative emotions.

While listening to the music, participants were connected to an EEG device, and the recorded signals were used to recognise their emotions.

In preparation for the study, the researchers built a GAN using an existing database of EEG signals recorded while emotions were stimulated with music; the signals the GAN generated could then be compared against the real EEG signals.

As expected, the music elicited positive or negative emotions according to the piece played, and the results showed a high similarity between the real EEG signals and the signals generated by the GAN. This indicates that the GAN was effective at generating realistic EEG data.
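The article does not say which similarity measure was used; one simple check - assumed here purely for illustration - is the correlation between a real signal and a generated one, which approaches 1 when the two move together.

```python
# Illustrative similarity check (the paper may use a different measure):
# Pearson correlation between a stand-in "real" signal and a "generated" one.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 256)

real_eeg = np.sin(8 * t) + 0.1 * rng.normal(size=t.size)   # stand-in real EEG
generated = np.sin(8 * t) + 0.2 * rng.normal(size=t.size)  # stand-in GAN output

r = np.corrcoef(real_eeg, generated)[0, 1]
print(f"correlation between real and generated signal: {r:.2f}")
```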

Dr Danishvar said: “The results show that the proposed method is 98.2% accurate at distinguishing between positive and negative emotions. When compared with previous studies, the proposed model performed well and can be used in future brain–computer interface applications. This includes a robot’s capacity to discern human emotional states and to interact with people accordingly.

“For example, robotic devices may be used in hospitals to cheer up patients before major operations and to prepare them psychologically.

“Future research should explore additional emotional responses in our GAN, such as anger and disgust, to make the model and its applications even more useful.”

Reported by:

Nadine Palmer, Media Relations
+44 (0)1895 267090
nadine.palmer@brunel.ac.uk