How Kaspersky Leads AI Security Standards With EU AI Pact

Kaspersky joins the European Commission's AI Pact
Kaspersky's commitment to the European Commission's AI Pact sets new benchmarks for ethical AI deployment, risk management and threat detection

The integration of AI into corporate ecosystems underscores the pressing need for regulatory policies that harmonise technological innovation with cybersecurity mandates.

As AI becomes integral to various aspects of business operations, from refining customer interactions to fortifying cybersecurity measures, it's paramount that organisations deploy these technologies prudently and manage associated risks effectively.

This challenge is notably pronounced in the realm of cybersecurity. Here, AI technologies are increasingly becoming pivotal in threat detection and response initiatives.

Consequently, the EU has taken a significant step by proposing the AI Act, the first legislation of its kind, aimed at creating a comprehensive legal framework for AI usage.

In this context, global cybersecurity company Kaspersky has proactively signed the European Commission's AI Pact. Signing this agreement signifies a commitment to adapt to the forthcoming AI Act regulations before they become fully applicable in 2026.

Tackling AI's expanding role in business

The arrival of the AI Act is a response to global enterprises wrestling with AI's governance and ethical implications.


The legislation enforces strict guidelines that require businesses to modify their operational frameworks to align with new regulatory standards, while still encouraging tech advancements.

The pact requires members to implement stringent governance frameworks for AI systems, meaning systems that use algorithms to perform tasks which would typically require human intelligence.

Joining the pact commits organisations to several critical responsibilities: they are urged to develop AI governance strategies that not only promote technology adoption but also guarantee conformity with impending legal standards.

Additionally, organisations must map their AI mechanisms in sectors deemed high-risk by the act and bolster AI literacy among their workforce and associates.

AI governance: a core commitment by Kaspersky

Kaspersky is advancing its AI governance frameworks by identifying potential risks AI systems might pose to individuals and by maintaining clear communication about AI's role in workplaces. This builds on Kaspersky's two-decade-long engagement with AI in detecting cybersecurity threats.

Eugene Kaspersky, Founder and CEO of Kaspersky (image credit: Kaspersky)

Eugene Kaspersky, the company's founder and CEO, emphasised the crucial balance between rapid AI deployment and rigorous risk management: "As we witness the rapid deployment of AI technologies, it's crucial to ensure that the drive for innovation is balanced with proper risk management."

Setting industry standards: the European Commission's AI Pact

The agreement sets forth conditions requiring businesses to inform individuals when they interact directly with AI systems, ensuring that AI use respects safety and ethical norms. This harmonises with broader EU objectives to cultivate a reliable AI ecosystem, responsive to the potential pitfalls of AI deployment across Europe and beyond.

Additionally, the AI Act, formally adopted in 2024, epitomises the EU's proactive measures to address the multifaceted concerns regarding AI utilisation, promoting the development of trustworthy AI technologies.

Advancing AI security research and development

Kaspersky's AI Technology Research Centre has also been active in setting out guidelines for safe AI implementation. These guidelines, unveiled at the 2024 UN Internet Governance Forum, are designed to aid organisations as they adopt AI, drawing on Kaspersky's extensive experience in automated threat monitoring and data protection.

Key facts:
  • Kaspersky signed the European Commission's AI Pact on 13 January 2025
  • The AI Pact aims to prepare organisations for the implementation of the EU AI Act
  • The EU AI Act, enacted in 2024, will become fully applicable in mid-2026

As an industry leader in AI and cybersecurity standards, Kaspersky continues to champion ethical AI use, urging other organisations to adopt similar guidelines in a reflection of the industry's growing recognition of the need for coordinated AI security measures.

Eugene reiterated the company’s dedication to advancing transparent and ethical AI practices: "Having been an advocate for AI literacy and the sharing of knowledge about AI-related risks and threats for years, we're happy to join the ranks of organisations working to help companies responsibly and securely benefit from AI technologies."

"We'll be working to further advance transparent and ethical AI practices and contribute to building confidence in this technology."

