Darktrace addresses generative AI concerns
In response to the growing use of generative AI tools, Darktrace today announced new risk and compliance models to help its 8,400 customers around the world address the increasing risk of IP loss and data leakage. These new risk and compliance models for Darktrace DETECT™ and RESPOND™ make it easier for customers to put guardrails in place to monitor and, when necessary, respond to activity and connections involving generative AI and large language model (LLM) tools.
The announcement comes as Darktrace’s AI has observed employees using generative AI tools in the workplace across 74% of active customer deployments. In one instance, in May 2023, Darktrace detected and prevented an upload of more than 1GB of data to a generative AI tool at one of its customers.
New generative AI tools promise gains in productivity and new ways of augmenting human creativity. CISOs must balance the desire to embrace these innovations against the need to manage risk. Government agencies, including the UK’s National Cyber Security Centre, have already issued guidance on managing risk when using generative AI tools and other LLMs in the workplace. In addition, regulators in a variety of jurisdictions (including the UK, EU, and US) and across various sectors are expected to issue guidance to companies on how to make the most of AI without exacerbating its potential dangers.
“Since generative AI tools like ChatGPT have gone mainstream, our company is increasingly aware of how companies are being impacted. First and foremost, we are focused on the attack vector and how well prepared we are to respond to potential threats. Equally as important is data privacy, and we are hearing stories in the news about potential data protection and data loss,” said Allan Jacobson, Vice President and Head of Information Technology at Orion Office REIT. “Businesses need a combination of technology and clear guardrails to take advantage of the benefits while managing the potential risks.”