A large majority of Europeans support the use of artificial intelligence (AI) in law enforcement and military operations, according to a recent report from Madrid’s IE University. The study, titled “European Tech Insights,” surveyed over 3,000 people across the continent and found that 75% support the use of AI technologies, including facial recognition and biometric data, by police and military forces for surveillance.
That level of support is notable given Europe’s stringent data privacy standards. In 2018, the European Union implemented the General Data Protection Regulation (GDPR), a comprehensive framework governing how organizations must handle and protect user data. Violators face fines of up to 4% of a company’s annual global revenue or 20 million euros (approximately $21.7 million), whichever is higher.
“It is not entirely evident that the public has contemplated the far-reaching implications of these AI applications,” cautioned Ikhlaq Sidhu, the dean of IE University’s School of Science and Technology.
Support for AI in public services runs even higher, with 79% in favor of its use in areas like traffic optimization. But on more sensitive matters, such as parole decisions, a majority of 64% are hesitant about AI’s involvement.
### AI’s Role in Election Manipulation
While support for AI in governmental and security-related tasks is high, respondents are far more apprehensive about its influence on the democratic process: 67% of Europeans expressed concern about potential AI-driven manipulation of elections.
Misinformation amplified by AI is a central worry, as some users may deliberately spread fabricated content to sway public opinion. Particularly concerning are deepfakes — AI-generated images, videos, or audio recordings that can misrepresent politicians’ statements or spread misleading information.
Generative AI tools, such as OpenAI’s Dall-E and Stability AI’s Stable Diffusion, can now create images from simple text prompts. CNBC reached out to both OpenAI and Stability AI for comment on the issue.
“AI and deepfakes epitomize a growing trend of misinformation and the erosion of verifiability,” Sidhu remarked. “This trend has burgeoned since the dawn of the Internet, social media, and AI-driven search algorithms.”
The report also found that roughly 31% of Europeans believe AI has already influenced their voting decisions — a notable finding as the Nov. 5, 2024, U.S. election approaches, in which Vice President Kamala Harris is set to face former President Donald Trump.
### Generational Gap in AI Trust
The report also revealed a pronounced generational divide in trust in AI. About one-third (34%) of those aged 18 to 34 would be willing to let an AI-powered application vote on their behalf. That figure falls to 29% among those aged 35 to 44, and to just 9% among those 65 and older.
As AI continues to shape public life in unexpected ways, the findings point to a complex mix of support, concern, and generational disparity that warrants close attention.
