Elastic's Massimo Merlo Talks Securing Academic AI Research

Elastic's AVP UKIs and South, Massimo Merlo, explains how the vital AI research academic institutions are conducting is exposed to some unique risks

As AI research and development accelerates, universities find themselves at the forefront of innovation - and increasingly vulnerable to cyberattacks. 

These institutions play a crucial role in advancing AI technologies, fostering collaboration, and nurturing the next generation of AI experts. However, their position at the cutting edge of AI research also makes them prime targets for cybercriminals and state-sponsored actors seeking to exploit valuable intellectual property.

The open nature of academic environments, designed to facilitate knowledge sharing and collaboration, presents a unique set of challenges when it comes to cybersecurity. This balancing act is further complicated by budgetary constraints, diverse user bases, and the need to integrate cutting-edge AI tools with often outdated IT infrastructure.

But what can be done to secure this unique university ecosystem? To find out, we spoke with Massimo Merlo, AVP UKIs and South at Elastic, about the challenges universities face in safeguarding their AI assets.

Massimo Merlo bio
  • A dynamic leader with a proven track record, Massimo demonstrates his impact on corporate performance through the development, enhancement, and orchestration of business growth strategies that expand markets and maximise sales profitability.
Massimo Merlo, AVP UKIs and South at Elastic

"Often at the forefront of technological innovation, universities face unique challenges in protecting their AI research and development from cyberattacks," Massimo explains. "While knowledge sharing and open collaboration are essential for academic progress, they also expose sensitive data to potential attacks if strong security measures are not in place."

This openness, combined with limited resources for cybersecurity, creates a target-rich environment for attackers seeking valuable intellectual property. Massimo points out that the types of AI-related data at risk in university settings are diverse and critically important. "In university settings, the types of AI-related data and intellectual property most at risk include datasets for training AI models, ground-breaking research findings, and intellectual property," he notes.

The implications of such data being compromised extend far beyond academic setbacks. "If these datasets are compromised, they can be exploited for malicious purposes or give competitors an unfair advantage. The consequences of such breaches extend far beyond academic setbacks, leading to significant financial losses and damaging the university's reputation," says Massimo.

The stakes are particularly high for the UK's ambition to be a global leader in AI. Successful cyberattacks on UK universities could severely undermine that ambition and stifle primary research goals. Stolen research data can cripple advancements, leaving the UK struggling to compete with international rivals who are heavily investing in AI.

"Repeated attacks can erode trust, discouraging international collaboration, which is essential for knowledge exchange and innovation. This isolation would stifle progress and hinder the UK's ability to attract top researchers and secure funding," he explains.

Addressing the theft of AI research

To address these challenges, Massimo sees potential in the newly established AI Safety Institute. "The Institute can take the lead in developing clear, tailored guidelines for university AI research environments, empowering these institutions to strengthen their security posture and protect and accelerate research outcomes," he suggests. Massimo believes the Institute could play a crucial role in fostering collaboration among universities through a centralised information-sharing platform for cyber threats and security incidents.

Balancing open academic collaboration with the need to protect sensitive AI research is an ongoing challenge for universities. But it can be tackled by implementing multi-tiered data access controls and secure collaboration platforms while emphasising cybersecurity awareness among researchers and faculty.
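As a loose illustration of what multi-tiered data access control might look like in practice, the Python sketch below models dataset sensitivity tiers and role clearances. The tier names, roles, and `can_access` helper are hypothetical examples, not a description of any particular university's system or of Elastic's products.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Hypothetical sensitivity tiers for university research data."""
    PUBLIC = 0        # published papers, open datasets
    INTERNAL = 1      # working notes, intermediate results
    RESTRICTED = 2    # unpublished findings, model training datasets
    CONFIDENTIAL = 3  # patentable IP, partner-licensed data

# Example clearance map: each role is cleared up to a given tier.
ROLE_CLEARANCE = {
    "visiting_student": Tier.PUBLIC,
    "researcher": Tier.RESTRICTED,
    "principal_investigator": Tier.CONFIDENTIAL,
}

def can_access(role: str, dataset_tier: Tier) -> bool:
    """Grant access only when the role's clearance meets the dataset's tier."""
    return ROLE_CLEARANCE.get(role, Tier.PUBLIC) >= dataset_tier

# Usage: a visiting student can read open data but not training datasets,
# while a principal investigator can reach confidential IP.
assert can_access("visiting_student", Tier.PUBLIC)
assert not can_access("visiting_student", Tier.RESTRICTED)
assert can_access("principal_investigator", Tier.CONFIDENTIAL)
```

The appeal of a tiered scheme is that it preserves openness by default: public and internal material stays freely shareable, and friction is applied only where the sensitivity of the data warrants it.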

"By educating researchers and faculty on cyber threats and best practices, universities empower people in these innovative roles to proactively mitigate risks and protect sensitive AI assets without hindering open communication and collaboration," Massimo explains.

When asked about specific steps UK universities can take to enhance their cybersecurity posture for AI-related assets, Massimo advises a three-step approach: "First, prioritise your data security by implementing robust access controls and encryption for sensitive AI datasets. Second, provide cybersecurity training for researchers and faculty on common threats. Finally, universities should invest in modernising their approach to data and data management, and that's where a search AI platform can help."
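To make the first of those steps concrete, here is a minimal sketch of encrypting a dataset at rest using the widely used Python cryptography package. The file names are illustrative, and the in-script key generation is for demonstration only; in practice the key would live in a dedicated key-management service, never alongside the data it protects.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

# Demonstration only: in production, fetch the key from a
# key-management service rather than generating it in place.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a (hypothetical) training dataset before it lands on shared storage.
plaintext = Path("training_data.csv").read_bytes()
Path("training_data.csv.enc").write_bytes(fernet.encrypt(plaintext))

# Only holders of the key can recover the original bytes.
decrypted = fernet.decrypt(Path("training_data.csv.enc").read_bytes())
assert decrypted == plaintext
```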

Massimo's advice extends to the platforms universities use for cross-institutional work, where multiple universities are often collaborating on a joint project. Securing these collaboration platforms is therefore imperative.

"These platforms should incorporate strict access controls and encryption capabilities, enabling international collaboration while mitigating risks," he says.

Universities compete with private sector companies, which often have larger budgets, for cybersecurity specialists. Additionally, universities have a diverse user base and a wide range of devices, making it challenging to secure their extensive digital ecosystems.

How security yields better AI

Despite these challenges, Massimo remains optimistic about the potential for universities to enhance their cybersecurity measures. He points to Cranfield University as an example of successful implementation: "Cranfield University chose Elastic to safeguard its networks, data, and devices to gain visibility across its entire threat landscape. This comprehensive view allowed the team to proactively detect and mitigate potential threats like ransomware, zero-day attacks, and brute-force attempts."

By implementing these measures, universities can create a more secure environment for AI research while maintaining their commitment to open collaboration and innovation. As Massimo concludes, protecting university research from cyber threats will be crucial in maintaining the UK's competitive edge in the global race for AI leadership.

With the right approach to cybersecurity, UK universities can continue to drive AI innovation while safeguarding their valuable intellectual property and research findings.
