Emotional AI and its privacy implications
May 16, 2022
BY: MICHAEL IRENE, PhD
Emotional AI refers “to technologies that use affective computing and artificial intelligence techniques to sense, learn about and interact with human emotional life.” In this article, I assess the privacy implications of these technologies and how their applications make inferences about emotions and moods.
These technologies are gradually springing up and are, without a doubt, producing impressive results. For example, a room can read the body temperatures of the individuals inside it and adjust the heating or cooling to suit the conditions in the room. Such systems are becoming increasingly common in so-called modern buildings.
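To make the mechanism concrete, here is a minimal sketch of the kind of sense-and-adjust loop such a room might run. The names, thresholds and values are hypothetical assumptions for illustration, not any particular vendor's system.

```python
# Hypothetical sense-and-adjust loop: read occupants' body temperatures,
# then nudge the HVAC setpoint toward a comfortable target.

COMFORT_BODY_TEMP_C = 36.8   # assumed comfort baseline for occupants
ADJUST_STEP_C = 0.5          # how far to move the setpoint per cycle

def next_setpoint(current_setpoint_c: float, body_temps_c: list[float]) -> float:
    """Raise or lower the room setpoint based on average occupant body temperature."""
    if not body_temps_c:
        return current_setpoint_c          # empty room: leave the setpoint alone
    avg = sum(body_temps_c) / len(body_temps_c)
    if avg > COMFORT_BODY_TEMP_C:          # occupants run warm: cool the room
        return current_setpoint_c - ADJUST_STEP_C
    if avg < COMFORT_BODY_TEMP_C:          # occupants run cool: warm the room
        return current_setpoint_c + ADJUST_STEP_C
    return current_setpoint_c

# Example: three occupants running slightly warm pull the setpoint down.
print(next_setpoint(22.0, [37.1, 37.3, 36.9]))  # -> 21.5
```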
Cars, games, mobile phones and wearable tech like watches and bands now study our patterns with the sole intention of helping us live better lives. This tech is now also used to optimise what Andrew McStay calls "the emotionality of spaces" in workplaces, hospitals, prisons, classrooms, travel infrastructure, restaurants, retail and chain stores. To put it succinctly, these technologies can now understand the mood in a room and act to change it.
But what are the privacy implications of these emotional AI technologies? Will individuals be willing to let technology intrude on sensitive data daily and allow companies to commercialise these data sets? Empirical evidence suggests that fewer than fifty percent of individuals would allow such intrusiveness where it leads to no known harm or limitation of their freedoms, while over fifty percent say they would allow companies to use their emotional AI data if they knew it would help them live better lives, especially from a medical perspective.
The gaps embedded in this new processing of data become clear when cast against data protection regulations. The information used for emotional AI borders on special category data such as biometrics and health data. Yet these data sets are increasingly being used at scale without the privacy implications being considered.
Some privacy professionals argue that, since the data will be anonymised, all privacy concerns are resolved. However, new evidence suggests that some anonymisation techniques break down when the data is combined with other data sets, including publicly available information, allowing individuals to be re-identified.
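As a simple illustration, the sketch below (with made-up records and hypothetical field names) shows how a supposedly anonymised emotion data set can be re-identified by joining its quasi-identifiers, here postcode, age and gender, against a public register.

```python
# "Anonymised" emotion data that still carries quasi-identifiers.
released = [
    {"postcode": "SW1A 1AA", "age": 34, "gender": "F", "stress_score": 0.82},
    {"postcode": "E14 5AB",  "age": 51, "gender": "M", "stress_score": 0.35},
]

# A publicly available register containing names alongside the same attributes.
public_register = [
    {"name": "A. Example", "postcode": "SW1A 1AA", "age": 34, "gender": "F"},
]

def reidentify(released_rows, register_rows):
    """Join rows that share the same quasi-identifiers (postcode, age, gender)."""
    matches = []
    for r in released_rows:
        for p in register_rows:
            if (r["postcode"], r["age"], r["gender"]) == (p["postcode"], p["age"], p["gender"]):
                matches.append({"name": p["name"], "stress_score": r["stress_score"]})
    return matches

print(reidentify(released, public_register))
# -> [{'name': 'A. Example', 'stress_score': 0.82}]
```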
Given that companies would be using special category data at scale here, it is important to create explicit opt-in controls. Yet, despite the increasing role of emotion in data analytics and in facilitating human-machine interaction, clear opt-in methodologies are still absent.
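What an explicit opt-in control could look like in practice is sketched below, with hypothetical names: consent defaults to off, and emotional-AI processing is only permitted after an affirmative, purpose-specific opt-in has been recorded.

```python
from datetime import datetime, timezone

consent_records: dict[str, dict] = {}  # user_id -> per-purpose consent

def record_opt_in(user_id: str, purpose: str) -> None:
    """Store an affirmative, purpose-specific opt-in with a timestamp."""
    consent_records.setdefault(user_id, {})[purpose] = {
        "granted": True,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def may_process(user_id: str, purpose: str) -> bool:
    """Absent an explicit opt-in, the default answer is no."""
    return consent_records.get(user_id, {}).get(purpose, {}).get("granted", False)

print(may_process("u1", "emotion_analytics"))   # False: no processing by default
record_opt_in("u1", "emotion_analytics")
print(may_process("u1", "emotion_analytics"))   # True only after explicit opt-in
```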
Although most data protection laws make no reference to emotions or to how organisations should use these data sets, companies should begin to give serious consideration to data protection principles, especially privacy-by-design and privacy-by-default methodologies. These new technologies and artificial intelligence do portend new privacy risks, and only serious organisations will put data privacy controls and procedures in place to support the successful launch of such products and projects. The future of data analytics must be strictly guided by the regulatory perspective, meaning that individuals' security and safety remain under the close scrutiny of the organisations processing such data sets.