HiddenLayer CSO Explains Why It Created an AI Security Council
Although the AI revolution has seen the technology spread rapidly across various industries, cybersecurity stands to be one of the sectors most affected by the shift.
That is because AI presents both unprecedented opportunities and unique challenges in the realm of digital security.
AI-powered systems can enhance threat detection, automate incident response, and improve overall security posture. However, these same systems can also introduce new vulnerabilities and become targets for malicious actors seeking to exploit or manipulate AI algorithms.
This evolving landscape demands a collaborative approach, bringing together experts from various fields to develop comprehensive strategies for securing AI applications and infrastructure. For just that reason, a number of these experts joined together to create the ‘Security for AI Council’.
But what exactly are its aims? To find out more, we spoke with Malcolm Harkins, Chief Security & Trust Officer at HiddenLayer, about why the company created the council and what they want it to achieve.
Malcolm is a security veteran, having served at a mix of enterprises, including Intel, before taking up his role at HiddenLayer, where he is responsible for enabling business growth through trusted infrastructure, systems, and peer outreach to evangelise best practices for mitigating AI risk.
The formation of the Security for AI Council stems from a pressing need to support the widespread adoption of AI security principles.
"We believe that when it comes to AI, any organisation can go from pause to possibilities,” explains Malcolm. “Those who can understand the risks AI can create and focus on practical solutions to mitigate those risks will be able to unlock AI's enormous potential to create worldwide social and economic benefits."
This statement underscores the council's belief in the transformative power of AI, whilst acknowledging the critical importance of understanding and mitigating associated risks. The council's formation is a proactive step towards ensuring that the cybersecurity industry can keep pace with the rapid deployment of AI technologies.
The Security for AI Council has set forth a clear mission to guide its efforts: revolutionise the cybersecurity industry by leading the secure adoption of AI.
“We want to empower the industry to fully realise its potential, ensuring its integration is secure and responsible,” explains Malcolm.
This mission statement reflects the council's commitment to not only addressing security concerns but also fostering an environment where AI can be leveraged to its fullest potential. By focusing on secure and responsible integration, the council aims to strike a balance between innovation and risk management.
The council has aligned on several key goals:
- Accelerate security for AI adoption
- Diminish the frequency, severity, and impact of attacks against AI
- Develop the CISO roadmap for securing AI
- Understand and influence emerging new regulations & industry frameworks
- Define success in securing this rapidly growing technology
These goals demonstrate the council's comprehensive approach to AI security, encompassing everything from practical adoption strategies to influencing regulatory frameworks. By focusing on these areas, the council aims to create a holistic ecosystem that supports secure AI implementation across various sectors.
As the council continues its work, certain themes have begun to emerge from discussions amongst its members.
"Two big themes have emerged. The first is whether AI efforts fit well into existing security development lifecycle/app security processes and privacy by design structures,” explains Malcom. “There's not a consensus here, but initial feedback is that data science teams are not sufficiently integrated into these existing processes."
This insight highlights the challenges organisations face in integrating AI development into established security frameworks. The lack of consensus on this issue underscores the complexity of the task at hand and the need for continued dialogue and exploration.
"Another big conversation driver is around which internal stakeholders are responsible for owning AI by shaping or directing the technology and its use cases,” he continues. “I think we'll continue to see this be a point of contention in many organisations."
This point illustrates the organisational challenges that come with AI adoption, particularly in determining ownership and responsibility for AI initiatives. As companies grapple with these issues, the council's insights and guidance will prove invaluable.
Steering the Conversation on AI Security
Malcolm believes the role of councils like the Security for AI Council in increasing secure AI adoption cannot be overstated. "We're in a position where every company globally is trying to put artificial intelligence into their strategy. Boards are demanding it, it's being deployed fast, and we're seeing adoption in every way possible.
“That's a fantastic thing for innovation—but it's also incredibly vulnerable. And it represents a very large expansion of the threat landscape. Security won't be able to keep up with threats without more standardised security frameworks and principles."
This statement encapsulates the dual nature of AI adoption—its immense potential for innovation and the significant security challenges it presents.
By bringing together industry leaders to discuss benchmarks, challenges, and solutions, the Security for AI Council aims to help steer the dialogue on the future of AI security, ensuring that AI is both kept secure and harnessed to improve security.