Juhani Hintikka

Amber Jackson speaks with Juhani Hintikka about the role of AI in cybersecurity, Finland’s NATO membership and the new EU AI Act's impact on security

The role of AI within the cybersecurity space is forever changing. In line with rising cyberattacks and data breaches, businesses have had to quickly adapt to ensure they are well protected.

With this in mind, we spoke to the CEO of cybersecurity company WithSecure, Juhani Hintikka. With more than 15 years in the software industry, Hintikka has extensive exposure to markets worldwide, including Asia, the Middle East and the Americas.

Here, he speaks about how WithSecure is working alongside its customers to better combat digital threats, as well as advising on how AI industry changes will impact the sector.

1. Please explain the work that WithSecure does in connection with your job role. 

As the CEO of WithSecure, my role involves driving the strategic vision of our company, which provides cybersecurity services to MSSPs, IT service providers and more than 100,000 corporate customers, including large financial institutions, manufacturers, and thousands of other technology and communication providers. My responsibilities also extend to overseeing the deployment and efficiency of our AI-driven solutions for endpoint and cloud collaboration security.

As the largest cybersecurity firm in the Nordics, I take pride in our flexible commercial models and comprehensive portfolio that grows in sync with our customers’ evolving needs. 

2. When considering the EU AI Act, how can businesses better protect themselves from cyberattacks? 

The EU AI Act introduces a regulatory framework that covers the broader applications of AI, including data protection and cybersecurity.  

Although AI has been around for some time, it comes with risks, particularly with regard to data privacy, and it's right that this is being addressed through regulation. The issue now is how we balance frameworks so that we don't 'over-regulate' to negative effect.

For example, one area to consider is the implications of how the Act could inadvertently slow down innovation in cybersecurity. Proposals to impose stringent requirements on high-risk AI systems could be quite restrictive in the context of critical infrastructure management or cybersecurity solutions. These requirements may conflict with the fast-paced, adaptive nature of cybersecurity, making it harder for companies like ours to swiftly counter emerging threats.

However, there are certain aspects of the legislation that can guide businesses on how to use AI technologies effectively without adding significant risk. The proposed law is based on levels of risk and calls for specific transparency obligations, even for AI systems not classified as high-risk. So, organisations should review their customer-facing AI tools, such as chatbots: customers must be informed that they are interacting with an AI, and appropriate disclosures need to be implemented.

Another important point is the focus on data protection and governance. Businesses should bolster their data management practices to ensure sensitive information is secure, especially when using AI tools for data analytics or customer engagement. This could include enhanced encryption and stringent data access controls.

It's also prudent for leaders to work closely with their cybersecurity vendors to understand how the new regulations will affect the development and deployment of AI-based security solutions. Vendor partnerships should be reviewed to ensure compliance with the Act's provisions, thereby mitigating the risk of heavy fines. Businesses should also explore policies to conduct comprehensive audits of their existing AI-driven cybersecurity tools to determine whether they fall under the category of 'high-risk' as per the Act.

3. As a relatively new NATO member, how do you see Finland working to combat cyber threats and rising ransomware attacks? Where does WithSecure fit into this? 

Finland's NATO involvement inevitably makes it a more appealing target for cyber threats, particularly from nation-state actors.

To combat these escalated threats, it has been reported that Finland's advanced defence sector is investing in technology that relieves personnel from routine tasks, allowing them to focus more on cybersecurity. In fact, there are – as of November 2023 – proposals in place to increase its cyber defence budget by 30% in 2024, specifically to tackle AI-based threats. 

As for WithSecure, we fit into this complex ecosystem as a leader in security innovation. Our AI-driven security solutions are designed to protect endpoints and detect threats proactively. This is particularly crucial given the warnings from the Finnish Security Intelligence Service (SUPO) about the heightened risk of cyber espionage and attacks on critical infrastructure. Our expertise allows us to work in conjunction with state agencies and private sector firms to bolster Finland's cybersecurity posture.

As such, there needs to be an increased focus on threat intelligence, early detection, and rapid response. Businesses and government agencies alike should invest in cybersecurity solutions that can adapt to the ever-evolving threat landscape. However, it’s important to understand that no one can do it alone. Both private and government security organisations need to embrace a co-security mindset, sharing information and intelligence from their experiences, discoveries, vulnerabilities, and research. 

The EU as a whole needs to take a stronger stance on cybersecurity, both to counter the increasing threats and to deter attacks from nation-state actors.

4. How does WithSecure ensure that its clients are well protected to combat cyber threats? 

At WithSecure, our core proposition is our focus on outcome-based security. This means we don't just offer products; we offer solutions designed to achieve specific business outcomes while acknowledging that cybersecurity is an ever-evolving landscape. Our mission is to develop the technologies, team expertise and delivery models that accelerate our customers' and partners' transition to an outcome-based security model, in which cybersecurity contributes to their overall business objectives.

Our AI-driven protection secures endpoints and cloud collaboration, and our intelligent detection and response are powered by experts who identify business risks by proactively hunting for threats and confronting live attacks. Our consultants partner with enterprises and tech challengers to build resilience through evidence-based security advice. With more than 30 years of experience in building technology that meets business objectives, we've built our portfolio to grow with our partners through flexible commercial models.

5. In your opinion, how can organisations use AI ethically in cybersecurity? 

For ethical AI deployment in cybersecurity, the starting point should be transparency. Businesses need to inform stakeholders, including employees and customers when AI models are being used to make decisions, particularly those that could significantly impact them. This aligns with the transparency obligations laid out in both the GDPR and the proposed EU AI Act.

Accountability must be embedded into your AI policy. This involves creating audit trails for AI decisions and building in mechanisms for human oversight. For example, when an AI system flags a potential security vulnerability, the process leading to that decision should be verifiable by cybersecurity experts, to ensure that decisions are both explainable and justifiable.

Data privacy is another cornerstone. When you’re feeding any kind of data into AI, the integrity and confidentiality of that data must be maintained. This is not just a compliance requirement but also an ethical obligation to safeguard sensitive information. Companies must also educate their staff on the safe use of AI-powered solutions, focusing on the type of data being input into these systems.

Overall, organisations need to be conscious of the potential for AI to be misused by adversaries. As AI technology advances, it's crucial to consider both the offensive and defensive implications. The ethical deployment of AI in cybersecurity would involve continuous monitoring and updating of AI models to ensure they are not co-opted for malicious purposes.

Cyber Magazine is a BizClik brand