Hackers of all hat colours have discovered OpenAI’s ChatGPT can be put to malicious use, so the SANS Institute is hosting a live webcast to discuss the cybersecurity implications of advanced AI and what cyber professionals can do to mitigate some of the risks.
OpenAI's ChatGPT chatbot is a powerful artificial intelligence system that has been trained on a vast amount of data to generate written text with remarkable accuracy and context. But even the most advanced AI remains exposed to cybersecurity risks, say SANS Institute researchers.
"AI tools like the new GPT ChatBot look like they have the potential to revolutionise cybersecurity, but the truth is that these advances also come with risks in the form of bias, misinformation, privacy concerns, automated attacks, and even malicious use," says David Hoelzer, a Fellow at the SANS Technology Institute. "This webcast event will help you separate hype from reality and discover the real impact of advanced AI solutions."
The virtual webcast will focus on the potential security risks of using the ChatGPT bot for tasks such as customer service, chatbots, and process automation, as well as the steps that should be taken to address those risks.
A talk by SANS Principal Instructor Jorge Orchilles acknowledges that many security professionals have already leveraged ChatGPT to see how well it could assist with offensive efforts. “Can it write our pen test proposals? Phishing pretext? How about help set up attack infrastructure and C2? Can it help us evade detection by leveraging LOLBAS or evade EDR with custom code?”