Sensitive data like passwords and PII shared with AI chatbots

The explosive growth of generative AI is driving up the sharing of sensitive data, from source code to financial & healthcare data, according to Netskope

Sensitive data such as source code and personally identifiable information (PII) is being shared with large language model chatbots at an alarming rate, according to research published by Secure Access Service Edge (SASE) leader Netskope.

Its report, Cloud & Threat Report: AI Apps in the Enterprise, reveals how the explosive growth of generative AI is driving up the sharing of sensitive data, with research suggesting the average enterprise experiences sensitive data being posted to generative AI apps like ChatGPT as much as eight times per working day.

For every 10,000 enterprise users, organisations experience approximately 183 incidents of sensitive data being posted to these apps each month, the report found.

“It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing,” said Ray Canzanese, Threat Research Director, Netskope Threat Labs. “Therefore, it is imperative for organisations to place controls around AI to prevent sensitive data leaks.”

“Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal,” Canzanese added. “The most effective controls that we see are a combination of DLP and interactive user coaching.”
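Netskope does not publish the internals of those controls, but the combination Canzanese describes can be sketched at a high level: inspect outbound text for sensitive patterns (the DLP step), then warn the user and ask for a decision rather than silently blocking (the coaching step). The Python below is a minimal, hypothetical illustration; the patterns, the coach_user helper and the sample prompt are assumptions for demonstration, not Netskope's implementation.

```python
import re

# Illustrative DLP detectors only; production engines use far richer rule sets.
SENSITIVE_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def coach_user(prompt_text: str) -> bool:
    """Scan outbound text; if it looks sensitive, coach the user and let them decide."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt_text)]
    if not findings:
        return True  # Nothing detected: let the prompt through.
    # Interactive coaching: explain the risk instead of silently blocking.
    print(f"Warning: possible sensitive data detected ({', '.join(findings)}).")
    print("Company policy: do not share credentials or regulated data with AI apps.")
    answer = input("Send anyway? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    sample = "Can you debug this? aws_key = 'AKIAABCDEFGHIJKLMNOP'"
    print("Prompt allowed." if coach_user(sample) else "Prompt blocked.")
```

In practice a check like this would typically run inline, in a browser extension or forward proxy, rather than as a standalone script; the point of the coaching step is that the user sees why the prompt was flagged and learns the policy over time.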

Sensitive data shared with ChatGPT includes source code, financial & healthcare data, and passwords and keys

Netskope found that source code is posted to ChatGPT more than any other type of sensitive data, at a rate of 158 incidents per 10,000 users per month. Other sensitive data being shared with ChatGPT includes regulated data (including financial and healthcare data and PII), intellectual property other than source code and, most concerningly, passwords and keys, which are usually embedded in source code.
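Keys leaked this way tend to be long, random-looking string literals, which is why secret scanners flag them either with service-specific patterns or by measuring the entropy of tokens in code. The sketch below shows the entropy-based approach; the regex, the 4.5-bit threshold and the sample snippet are illustrative assumptions, not any particular vendor's detector.

```python
import math
import re

def shannon_entropy(token: str) -> float:
    """Bits of entropy per character: high for random keys, lower for ordinary words."""
    probabilities = [token.count(ch) / len(token) for ch in set(token)]
    return -sum(p * math.log2(p) for p in probabilities)

def find_embedded_secrets(source: str, threshold: float = 4.5):
    """Yield (line_no, token) for quoted, high-entropy tokens that look like keys."""
    token_re = re.compile(r"['\"]([A-Za-z0-9+/=_-]{20,})['\"]")
    for line_no, line in enumerate(source.splitlines(), start=1):
        for match in token_re.finditer(line):
            token = match.group(1)
            if shannon_entropy(token) > threshold:
                yield line_no, token

if __name__ == "__main__":
    snippet = (
        'API_KEY = "9f8aQ2xLr7Tz0bVu5KmWn3Ye1Hs6Dp4C"\n'
        'mode = "configuration_value_example"\n'
    )
    for line_no, token in find_embedded_secrets(snippet):
        print(f"line {line_no}: possible embedded secret {token[:8]}...")
```

Pattern-based detectors, like the AWS key regex in the earlier sketch, catch known credential formats; entropy scoring catches unfamiliar ones at the cost of some false positives, which is why scanners commonly combine both.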

To combat these cyber threats, some organisations have restricted access to chatbots like ChatGPT. Netskope’s data shows that in financial services and healthcare - both highly regulated industries - nearly one in five organisations have implemented a blanket ban on employee use of ChatGPT, compared to just one in 20 in the technology sector.

But as James Robinson, Netskope’s Deputy Chief Information Security Officer, explains, while blocking access to AI applications is a short-term solution to mitigate risk, it comes at the expense of the potential benefits.

“As security leaders, we cannot simply decide to ban applications without impacting user experience and productivity,” he says. “Organisations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”


Netskope's report follows research by Imperva into the rise of what it describes as ‘shadow AI’, warning that the twin factors of poor data controls and the advent of new generative AI tools will lead to a spike in data breaches.

“Forbidding employees from using generative AI is futile,” explained Terry Ray, SVP, Data Security GTM and Field CTO at Imperva, which has been acquired by defence and security specialist Thales. “We’ve seen this with so many other technologies - people are inevitably able to find their way around such restrictions and so prohibitions just create an endless game of whack-a-mole for security teams, without keeping the enterprise meaningfully safer.”

“People don’t need to have malicious intent to cause a data breach,” continued Ray. “Most of the time, they are just trying to be more efficient in doing their jobs. But if companies are blind to LLMs accessing their backend code or sensitive data stores, it’s just a matter of time before it blows up in their faces.”

