Are you happy now? The uncertain future of emotion analytics.

Elise Thomas, Author

Leonard Peng, Illustrator

A year before the launch of the first mass-produced personal computer, British academic David Collingridge wrote in his book The Social Control of Technology that "when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming." He was referring to the conundrum that the effects of a new technology cannot be fully understood until that technology becomes integrated into society, but that by the time its social and ethical implications are visible the technology is likely to be so much a part of daily life that it will be very hard to change.

Collingridge's dilemma is as relevant today as it was thirty-five years ago, if not more so. Many of us are less than thrilled that our smartphones routinely track our movements and transmit our data to multinational corporations, or that Facebook has the right to sell or use our photos and videos in any way it sees fit. As unhappy as we may be about it, few people go to the extent of deleting accounts or leaving the smartphone at home, because these technologies are now integral parts of our personal and professional lives. So long as consumers are unable or unwilling to walk away, corporations have little incentive to change a situation which is working in their interests.

Right now, in a handful of computing labs scattered across the world, new software is being developed which has the potential to completely change our relationship with technology. Affective computing is about creating technology which recognizes and responds to your emotions. Using webcams, microphones or biometric sensors, the software uses a person's physical reactions to analyze their emotional state, generating data which can then be used to monitor, mimic or manipulate that person’s emotions.
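To make that concrete, here is a rough sketch in Python of what such a pipeline might look like. The face-detection calls are standard OpenCV; the emotion classifier is a hypothetical placeholder, not any vendor's actual model.

```python
# A minimal sketch of an affective computing pipeline: webcam frame -> face crop -> emotion scores.
# Illustrative only; classify_emotion stands in for whatever trained model a real system would use.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_image):
    # Hypothetical stand-in for a trained expression model.
    return {"joy": 0.0, "surprise": 0.0, "contempt": 0.0}

camera = cv2.VideoCapture(0)      # default webcam
for _ in range(300):              # roughly ten seconds of video at 30 fps
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        scores = classify_emotion(gray[y:y + h, x:x + w])
        print(scores)             # downstream code could monitor, mimic or respond to these scores
camera.release()
```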

The implications are dazzling and daunting. Early affective computing programs such as the MindReader software developed by Rana el Kaliouby, who went on to cofound the industry-leading company Affectiva, have been used to help people with autism. This kind of technology could be used to monitor coma patients for signs of pain, or to help blind people know what expression is on the face of the person they're talking to. When combined with the rapidly growing Internet of Things, the possibilities expand even further. Affective computing could be integrated into medical alert technology to detect the warning signs of a seizure or panic attack, or possibly to monitor those on suicide watch, and automatically contact help. It could be used to improve online and long distance education programs, by providing professors with feedback on how many faces are registering confusion or notifying students when their attention is wandering.

As affective computing develops from a branch of computer science into a fledgling tech industry, the focus is shifting away from affective computing as an assistive technology and moving towards broader commercial, political and security-related applications. The ability of machines to monitor and manipulate human emotion comes with significant ethical implications.

The Ethical Issues of Emerging ICT Applications (ETICA), a research project funded by the European Commission which ran from 2009 to 2011, found a variety of ethical considerations linked to affective computing, spanning from interpretation errors (e.g. a person being stopped in an airport on suspicion of planning an attack, when in fact they are just a very nervous flyer) to privacy concerns, to "a range of new dangers concerning personal integrity" as a result of affective computing's ability to persuade and manipulate.

The question is, are we doing enough to engage with these questions now? Or, by the time we start to think seriously about the place of emotion in our society and the kind of role we want emotionally intelligent technology to play in our future, will it already be too late to change?

 


Humble beginnings

Affective computing grew out of a 1995 paper by MIT Professor Rosalind Picard, in which she argued that emotions play a crucial role in human thinking and decision-making ("Emotions pull the levers of our lives," as Picard rather poetically put it) and that the ability to recognize and respond to emotion was key to improving computer intelligence. The past 20 years of neuroscientific research have only confirmed Picard's belief in the importance of emotion to decision-making, a process which often extends far beyond what we are consciously aware of.

This is hardly news to advertisers. Corporations spend billions each year trying to build "authentic" emotional connections to their target audiences. Marketing research is one of the most prolific research fields around, conducting thousands of studies on how to more effectively manipulate consumers’ decision-making. Advertisers are extremely interested in affective computing and particularly in a branch known as emotion analytics, which offers unprecedented real-time access to consumers' emotional reactions and the ability to program alternative responses depending on how the content is being received.

For example, if two people watch an advertisement with a joke and only one person laughs, the software can be programmed to show more of the same kind of advertising to the person who laughs while trying different sorts of advertising on the person who did not laugh to see if it's more effective. In essence, affective computing could enable advertisers to create individually-tailored advertising en masse.
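As a purely illustrative sketch, that branching logic might look something like this in Python; the function, field names and 0.5 threshold are my assumptions, not any real ad platform's API.

```python
# Illustrative only: pick the next advertisement based on how the viewer reacted to the last one.
def next_advert(viewer_reaction, current_ad, similar_ads, alternative_ads):
    # viewer_reaction is a dict of emotion scores, e.g. {"smile": 0.7, "dislike": 0.1}
    if viewer_reaction.get("smile", 0.0) > 0.5:
        # The joke landed: keep serving advertising in the same style.
        return similar_ads[0] if similar_ads else current_ad
    # No laugh: experiment with a different style and measure the reaction again.
    return alternative_ads[0] if alternative_ads else current_ad

print(next_advert({"smile": 0.8}, "muppets_v1", ["muppets_v2"], ["puppies_v1"]))  # -> muppets_v2
```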

"Humans are emotional beings and will benefit from emotionally intelligent technology," Gabi Zijderveld, VP of marketing at Affectiva, tells Hopes&Fears. Founded in 2009 by Professor Picard and Rana El Kaliouby, Affectiva is now a forerunner in the small but growing emotion analytics industry. Uncomfortable with the company’s commercial emphasis, Picard left the company in 2013 to focus on medical applications of affective computing. Today, Affectiva uses its flagship platform Affdex, which scans and analyzes facial expressions, to test thousands of advertisements and videos per year.

 

 

Affdex and Emotient

I tried Affdex out online. The trial shows viewers a series of advertisements and analyzes their responses, offering read-outs in Surprise, Smile, Concentration, Dislike, Attention, Expressiveness and Valence (which is a combination of the other factors). Clearly, the singing Muppets on the left do nothing for me. Puppies and horses, on the other hand, obviously have an effect on me.



↑ Affdex Dashboard, Valence Trace

 

 

I also trialed Emotient, another company leading the charge into affective technology. Unlike the Affdex trial, Emotient asks users to upload their own video and offers a second by second breakdown of the analysis. I decided to test it out by video-recording myself listening to an old episode of The Bugle, a podcast with John Oliver and Andy Zaltzman. The Bugle’s humor is a bit hit or miss for me, so I thought it would offer a good range of emotional responses.

Generally, Emotient's analysis seems to be fairly accurate and sensitive to relatively small changes of expression. The software appears to rely heavily on the shape of the mouth—it stopped analyzing entirely when my mouth was obscured by a mug—which means that if you're like me and have a tendency to purse your lips when you’re amused, it's going to read that as contempt. I suspect this is why the results concluded my reaction was only 43% positive when I would definitely say I enjoyed the podcast more than that.
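To picture how a figure like that 43% might be produced, here is a crude roll-up of per-frame scores. The rule is my own illustration, not Emotient's actual method; frames where no face is detected are simply skipped, much as the trial appeared to do when the mug got in the way.

```python
# Illustrative roll-up of per-frame emotion scores into a single "percent positive" figure.
# Not Emotient's actual method; occluded frames (no face detected) are skipped entirely.
def percent_positive(frames):
    # frames: list of dicts like {"joy": 0.6, "contempt": 0.2}, or None when no face was found
    scored = [f for f in frames if f is not None]
    if not scored:
        return 0.0
    positive = sum(1 for f in scored if f.get("joy", 0.0) > f.get("contempt", 0.0))
    return 100.0 * positive / len(scored)

# A mug in front of the mouth yields None frames, which count neither for nor against.
print(percent_positive([{"joy": 0.6, "contempt": 0.1}, None, {"joy": 0.2, "contempt": 0.5}]))  # 50.0
```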

The Emotient video analysis also highlighted the importance of context for understanding emotional responses, and how easily they might be misconstrued. When the hosts began talking about the then-recent murder of Russian opposition leader Boris Nemtsov, my face took on a bored expression and my attention levels dropped significantly. An analyst might take this as a sign that they should try other topics and that I don't care about Russian politics when, in fact, the opposite is true. I follow Russian politics closely, and "Putin did it" jokes bore me not because I'm not interested, but because I believe that Putin probably didn't do it and the whole thing was way more complicated than that. It's not that I want fewer Russian politics jokes, it's that I want fewer lazy Russian politics jokes.

In my case, it's trivial, but as the ETICA study argued with the example of the possible terrorist vs. nervous flyer, these kinds of interpretive errors could have major impacts on people's lives. Finding a way to analyze not only emotional content but also emotional context may be one of the key challenges facing affective computing in the future.

It's a gap which Affectiva is determined to bridge. "Today, we live in a highly connected world, but there's no emotion sensing or emotional intelligence in this technology. We believe that, as we live more and more of our lives online, that’s a problem," says Zijderveld. "At the highest level, our goal is to bring emotional intelligence to the digital world." According to Affectiva, emotion analytics will facilitate not only better human to computer communication but also better human-to-human communication, by enabling the emotional exchange usually involved in face-to-face interactions to happen digitally.



↑ Emotient Analytics (see full video here)

 

 

 

Often, however, it's likely to be less a case of human-to-human communication than corporation-to-consumer. Corporations are not always known for putting their customers' rights and well-being ahead of their profit margins, and some industries, such as fashion and beauty, are infamous for basing their marketing strategies on making consumers feel bad about themselves.

Say that, 15 years from now, the marketers behind a particular brand of weight loss supplements obtain a particular girl's information and lock on. When she scrolls through her Facebook, she sees pictures of rail-thin celebrities, carefully calibrated to capture her attention. When she turns on the TV, it automatically starts on an episode of "The Biggest Loser," tracking her facial expressions to find the optimal moment for a supplement commercial. When she sets her music on shuffle, it "randomly" plays through a selection of the songs which make her sad. This goes on for weeks.

Now let's add another layer. This girl is 14, and struggling with depression. She's being bullied in school. Having become the target of a deliberate and persistent campaign by her technology to undermine her body image and sense of self-worth, she's at risk of making some drastic choices. 

This scenario is still a long way from being reality, but it raises issues which are worth thinking about now. The question of who bears responsibility for safeguarding the wellbeing of those targeted by marketing built on affective computing is a significant one. Should software developers build in pre-set "trip-wires" to stop the program if a high level of emotional distress is detected? Should there be limitations on affective marketing targeted at children, or at those likely to be more emotionally vulnerable?
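One version of such a trip-wire could be as simple as a rolling threshold check wired into the targeting loop. The sketch below is purely hypothetical; the metric, window and threshold are assumptions made for illustration.

```python
# Hypothetical "trip-wire": halt affective targeting if sustained distress is detected.
DISTRESS_THRESHOLD = 0.7   # illustrative value, not an industry standard
WINDOW = 10                # number of recent readings to average over

def should_halt_targeting(recent_distress_scores):
    # recent_distress_scores: per-reading distress estimates in [0, 1], most recent last
    window = recent_distress_scores[-WINDOW:]
    if not window:
        return False
    return sum(window) / len(window) > DISTRESS_THRESHOLD
```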

Affective computing has the potential to intimately affect the inner workings of society and shape individual lives. Access, an international digital rights organization, emphasizes the need for informed consent and the right of users to choose not to have their data collected. "All users should be fully informed about what information a company seeks to collect," says Drew Mitnick, Policy Counsel with Access. "The invasive nature of emotion analysis means that users should have as much information as possible before being asked to subject [themselves] to it."

Affectiva makes a point of asking for users' consent to use their webcams and record their data, but those who buy Affectiva's software development kit are not necessarily bound by the same code of conduct. "In our license agreement we make it very clear that we want this technology to be used with the opt-in of the person who is being recorded," says Zijderveld, but as it is not a legal obligation, there is little that can be done to enforce this.

Corporations and media companies aren't the only ones with an interest in affective computing. Emotion is a powerful factor in the democratic process, influencing not only how people vote in elections but also which issues capture public attention and make it onto the political agenda. Emotion analytics companies are already investigating the possible political applications of affective computing. A 2013 study by Affectiva and MIT's Media Lab used participants' webcams to analyze their facial expressions online as they watched clips of the 2012 US presidential debates.

Using this data, they developed a method for predicting independent voter preferences based on facial expressions, with an accuracy of 73%. During the first Republican primary debate in 2015, Emotient used its software to analyze the expressions of a live audience, finding that Donald Trump and Scott Walker evoked the strongest responses while Ted Cruz failed to make a significant emotional impact on the audience.
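The 73% figure points to a fairly standard supervised-learning setup: per-viewer expression features in, reported preference out. The sketch below shows what such a setup could look like with scikit-learn; the features, model choice and synthetic data are my assumptions, not details from the Affectiva/MIT study.

```python
# Hypothetical reconstruction of a preference-prediction setup, not the study's actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 4))      # e.g. mean smile, smirk asymmetry, attention, valence per viewer
y = rng.integers(0, 2, 200)   # stated candidate preference after the debate (synthetic here)

model = LogisticRegression()
accuracy = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.0%}")   # random data, so roughly chance level
```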

The use of affective computing in politics has implications for the democratic process. On the one hand, it could be argued that enabling political leaders to be better attuned to the emotional state and preferences of their constituents will lead to better, more democratic policy making. On the other, it could also contribute to an increase in short-term policy making designed to win votes, placing the focus on snappy one-liners which draw a smile rather than on sound long-term policy. In lobbying, affective computing could help important issues win public attention; it could also become one more tool in the arsenal of powerful interest groups whose agendas may not necessarily line up with the public interest.

Affective computing may also be intriguing to political leaders of a different, and less democratic, stripe. Digital surveillance software has been used to target activists and human rights defenders in authoritarian states all over the world. Despite sanctions and sales regulations, once a technology is widely available it's extremely difficult to keep it out of the wrong hands. The same program used to analyze responses to advertising could easily be used to monitor people's faces during propaganda films and identify potential dissenters. Surveillance using emotion analytics could be a reality in democratic and non-democratic states alike in the near future, and Affectiva says it has already rejected approaches by governments and security services for its products.

Access' Drew Mitnick observes that mass government surveillance runs counter to the principles of human rights. "Databases of emotion information would be particularly revealing, and [the] collection of this data should be highly protected under strict standards, including a restriction for the collection to occur on a targeted basis and a high burden of proof [that it is necessary]."

The potential for affective computing and emotion analytics to fundamentally change the way in which we relate to our technology, and to shape our personal, political and financial decisions, needs to be taken seriously. Rather than falling into Collingridge's dilemma and remaining ignorant of the implications until it's too late to change, we need an early and meaningful public dialogue about affective computing and digital rights, one which draws in the perspectives of industry, rights activists, advertisers, political leaders and civil society.

Technology should not be feared, but it should be respected for the powerful social tool that it is. When it comes to new technologies—especially those which touch upon our emotions, the very things which make us most human—we need to think carefully not just about the kind of technology we want to have, but about the kind of society we want to live in.