The EU doesn’t care about your feelings

The European Union’s decision to restrict emotion recognition technology like VERN AI under the forthcoming AI Act, specifically in workplaces and educational institutions except for medical or safety reasons, represents a significant oversight with far-reaching implications. This decision diverges sharply from the approaches of the United Kingdom and the United States, which advocate for a more balanced regulatory environment that fosters innovation while safeguarding against misuse.

The argument that technologies capable of emotional inference should be broadly restricted ignores the pervasive presence and acceptance of technologies such as webcams and microphones that carry similar, if not greater, risks for privacy invasion. These devices, integral to daily digital interactions, have the potential to be misused for snooping and spying on individuals’ private moments. Yet, the regulatory focus remains narrowly defined, overlooking the broader landscape of digital privacy and security concerns. The inconsistency in regulating these technologies suggests a misunderstanding of the nuanced risks and benefits inherent in digital innovation.

The EU’s stance also overlooks the transformative potential of emotion recognition technology in promoting mental health and well-being. VERN AI’s partnership with demonstrates the life-saving impact of such technologies. A survey of more than 8,000 users across four countries reported significant improvements in mental health outcomes: 97% of users reported less anxiety, less brain fog, and more motivation, and over half reported less negative self-talk and a more positive outlook on life. By restricting access to these technologies, the EU risks denying its citizens the benefits that advanced emotion recognition applications can deliver.

The regulation also fails to consider the sophisticated safeguards and ethical considerations that companies like VERN AI implement to protect users’ privacy and data. VERN AI doesn’t store information; it is stateless and compliant by design.
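To make the "stateless by design" idea concrete, here is a minimal sketch of what such a request handler can look like. This is an illustrative assumption, not VERN AI's actual implementation: the function names, the toy keyword lexicon, and the scoring are all hypothetical. The point is architectural: the input is analyzed in memory and discarded, with no logging, database write, or retained copy.

```python
# Hypothetical sketch of a stateless emotion-scoring endpoint.
# Names and the toy lexicon are illustrative assumptions only.

FEAR_WORDS = {"afraid", "scared", "worried"}
JOY_WORDS = {"glad", "happy", "relieved"}

def analyze(text: str) -> dict:
    """Score a message entirely in memory and return the result.

    Nothing is written to disk, no global state is mutated, and no
    copy of the input survives the call -- the handler is a pure
    function of its argument.
    """
    words = {w.strip(".,!?").lower() for w in text.split()}
    total = max(len(words), 1)
    return {
        "fear": len(words & FEAR_WORDS) / total,
        "joy": len(words & JOY_WORDS) / total,
    }

# The caller gets only the scores; once analyze() returns, the input
# text is garbage-collected and nothing persists server-side.
scores = analyze("I was worried, but now I'm relieved.")
```

A design like this is "compliant by construction": there is no stored personal data to breach, subpoena, or repurpose, because the system never holds any.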


Your cameras and microphones are being accessed without your knowledge. (Ever wonder why Zuckerberg tapes up his laptop’s camera?)

Unlike indiscriminate access to cameras and microphones, emotion recognition technologies can be designed with strict ethical guidelines, transparency, and user consent at their core. This ensures that the technology is used responsibly, focusing on enhancing human well-being rather than infringing on privacy.

The EU’s prohibitive stance on emotion recognition technology is a missed opportunity to embrace and regulate a promising field of innovation that holds the potential for profound positive impacts on society. By adopting a more nuanced and informed regulatory approach, similar to that of the UK and the US, the EU could lead the way in harnessing the benefits of emotion recognition technology while establishing robust safeguards against misuse.

The goal should be to foster an environment where technological advancements and human well-being coexist and reinforce one another, rather than to impose blanket restrictions that hinder progress and may cost constituents their lives.