Q&A: Futurist Bernard Marr on the Challenges of GenAI in Cybersecurity

Stephen Lawton

May 06, 2024

Articles abound in the consumer and business press about the great benefits and fantastic failures of Generative Artificial Intelligence (GenAI) and large language models. Bernard Marr, author of Generative AI in Practice and Data Strategy: How to Profit from a World of Big Data, Analytics and Artificial Intelligence, looks at the future of GenAI and explains its benefits and challenges in the cybersecurity realm. 

Marr recently spoke with TechEvents about the potential impact of GenAI on cybersecurity, shedding light on its transformative power and the security risks it introduces.

Q. How will GenAI change the cybersecurity landscape in general and the job of the CISO in particular? Please address its impact on privacy, compliance, and using (or misusing) GenAI to defend against cyberattacks.

A. Generative AI (GenAI) is set to fundamentally transform the cybersecurity landscape, introducing both new tools and new threats. For CISOs, the adoption of GenAI can enhance threat detection and response capabilities by generating simulations of potential attacks and automating the creation of defense mechanisms. However, it also raises significant concerns around privacy and compliance, especially as AI systems can generate and disseminate sensitive information in unforeseen ways.

CISOs will need to develop robust frameworks for AI governance, focusing on compliance with data protection regulations and ensuring that GenAI applications respect privacy. The misuse of GenAI, whether through the creation of sophisticated phishing attacks or the unauthorized use of proprietary data, also demands a proactive approach to cybersecurity, where defensive strategies are continually updated to counteract the evolving AI-generated threat landscape.

Q. While GenAI is growing in popularity, it still has issues with hallucinations, biased programming, and spewing misinformation. What needs to improve for GenAI to become a truly reliable business application, and how far away is that?

A. For GenAI to become a reliable business tool, significant advancements are needed in addressing its current limitations such as hallucinations, biases, and the potential for misinformation. Enhancing the datasets used for training GenAI models to ensure they are comprehensive, representative, and free of biases is critical. Additionally, developing more sophisticated algorithms that can verify and validate generated content against trusted sources is essential. Implementing rigorous testing phases that simulate real-world applications will also be crucial. We may be a few years away from resolving these issues fully, as each advancement in AI technology presents new challenges and complexities.

Q. Looking forward, how will technologists use GenAI, and what will it mean for application development?

A. Looking forward, technologists will increasingly incorporate GenAI across various domains, not just in cybersecurity but also in areas like healthcare, finance, and customer service. In application development, GenAI will enable the creation of more intuitive and adaptive applications, capable of learning from user interactions to enhance functionality and user experience. This means applications that can evolve over time, providing increasing value to users. However, this also necessitates a new layer of complexity in development, where understanding and directing AI behavior becomes a crucial skill for developers.

Q. GenAI, combined with deep fakes, might well become one of the most destructive forms of cyberattack going forward. What must be done now in the GenAI space to ensure that it doesn’t become a major security vulnerability?

A. To prevent GenAI from becoming a major security vulnerability, especially when combined with technologies like deep fakes, immediate action is required to develop AI detection and response mechanisms. Establishing norms and regulations that govern the ethical use of AI technologies is crucial. Additionally, investing in education and awareness programs to equip individuals and organizations with the knowledge to identify AI-generated falsehoods is necessary. Cybersecurity frameworks must evolve to include AI-specific considerations, ensuring that defenses are prepared to counteract the sophisticated nature of AI-driven threats. 

Q. What should CISOs and other security pros know about GenAI today that they don’t? Conversely, what do security pros believe about GenAI that’s absolutely wrong?

A. CISOs and security professionals should understand that GenAI can be a powerful ally in enhancing organizational security posture, not just a source of threats. It can automate complex analyses and predict security vulnerabilities using data at a scale not humanly possible. Conversely, a common misconception is that GenAI tools are inherently secure and unbiased. Security professionals must recognize that AI systems, like any technology, carry inherent risks and vulnerabilities, shaped by the data they are trained on and the contexts in which they are deployed. Vigilance and ongoing education are crucial as GenAI continues to evolve and integrate into the security infrastructure.
