More episodes of The Secure Disclosure

AI Panic is Driving Shadow IT w/ Noora Ahmed-Moshe

Published 6 May 2026

Duration: 00:26:03

AI's impact on employment, together with cybersecurity risks driven by shadow AI, phishing, and emerging threats like prompt injection, requires balancing workforce skills, security measures, and organizational trust.

Episode Description

In this episode, we sit down with tech veteran and behavioral science enthusiast Noora Ahmed-Moshe to tackle the growing phenomenon of Shadow AI. As em...

Overview

The podcast addresses growing concerns about AI's impact on employment, emphasizing fears of job displacement and the pressure to adopt AI tools to remain competitive. It highlights that human error, such as phishing and credential leaks, remains the primary cause of cybersecurity breaches, with behavioral science being critical to fostering secure practices. The rise of shadow AI (unauthorized use of AI tools by employees) poses significant risks, including data exposure, compliance issues, and security vulnerabilities, as users often bypass company policies using unregulated tools for convenience or efficiency. This phenomenon is exacerbated by the ease of accessing browser-based AI tools, blurring lines between personal and work usage, and complicating data governance in organizations.

Emerging AI technologies, such as deepfakes and autonomous agents, introduce stealthy threats that exploit human trust and complicate accountability. While enterprises widely adopt mainstream AI tools like ChatGPT, many employees use unapproved tools, risking sensitive data exposure and potential breaches. Mitigation strategies face challenges, as outright bans on AI tools often lead to workarounds, requiring a balance between security measures and employee autonomy. The discussion also underscores the need for proactive approaches, including technical monitoring, user education, and aligning AI adoption with workflow needs to ensure productivity without compromising security.
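The "technical monitoring" mentioned above can start as something very simple: scanning egress or proxy logs for traffic to known browser-based AI tools. A minimal sketch follows; the domain list and log-line format are illustrative assumptions, not details from the episode.

```python
# Minimal sketch: flag requests to known browser-based AI tools in a proxy log.
# The domain list and the "<timestamp> <user> <domain> <path>" log format are
# illustrative assumptions; adapt both to your own environment.

AI_TOOL_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests that hit a listed AI domain."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_TOOL_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

logs = [
    "2026-05-06T09:14:02 alice chat.openai.com /c/abc123",
    "2026-05-06T09:15:11 bob intranet.example.com /wiki",
]
print(flag_shadow_ai(logs))  # [('alice', 'chat.openai.com')]
```

As the episode stresses, visibility like this is a starting point for conversation, not a tool for punitive enforcement.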

Organizational leadership and cultural factors play a pivotal role in addressing these challenges. Psychological safety and open communication between security teams and employees are essential, as is understanding human motivations to design effective strategies. Leadership must prioritize resource allocation for security teams, avoid punitive measures, and foster collaboration to address workflow bottlenecks. Continuous adaptation is stressed, as AI's rapid evolution demands ongoing efforts to build trust, improve security culture, and manage risks without achieving full visibility, recognizing that human behavior and technological complexity will remain central to the conversation.

Recent Episodes of The Secure Disclosure

29 Apr 2026 When AI Agents Change their Intent w/ Frank Vukovits

AI agents, autonomous non-human entities operating in enterprise systems without human oversight, pose security and governance challenges requiring updated access control frameworks, real-time monitoring, and intent-based governance to address risks like unauthorized access and shadow AI, paralleling historical tech challenges like Y2K.
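The "updated access control frameworks" described in this blurb can be sketched as a deny-by-default allowlist keyed to an agent's role, so a non-human identity can only take actions its role explicitly permits. The policy table, role names, and action names below are illustrative assumptions.

```python
# Deny-by-default allowlist for non-human agent identities.
# Role and action names are illustrative assumptions, not from the episode.
POLICY = {
    "invoice-bot": {"read_invoice", "flag_invoice"},
}

def authorize(agent_role: str, action: str) -> bool:
    """Permit an action only if the agent's role explicitly allows it."""
    return action in POLICY.get(agent_role, set())

print(authorize("invoice-bot", "read_invoice"))    # True
print(authorize("invoice-bot", "export_payroll"))  # False
print(authorize("unknown-bot", "read_invoice"))    # False
```

Intent-based governance, as the episode frames it, would layer real-time monitoring on top of a check like this, flagging when an agent's requested actions drift from its originally declared task.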

22 Apr 2026 OWASP Top 10, Vibe Coding, and What Developers Miss w/ Tanya Janca

Gaps in cybersecurity education, persistent vulnerabilities like SQL injection, OWASP data limitations, evolving supply chain risks, high training costs, AI's contextual challenges, and the need for secure-by-design principles and collaboration highlight systemic challenges in addressing evolving cyber threats.
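The persistence of SQL injection noted above usually comes down to queries built by string concatenation; parameterized queries are the standard fix. A self-contained sketch using Python's `sqlite3` (the table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the driver binds the payload as a literal value; nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)]
print(safe)        # []
```

The same pattern, binding values instead of splicing strings, applies in every mainstream database driver, which is why it keeps appearing in secure-by-design guidance.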

15 Apr 2026 The Future of Hacking is Agentic w/ Jason Haddix

Recommended: security testing will change, and may change more quickly than this episode suggests. Keep security top of mind during development.

AI transforms security with automated penetration testing and threat detection, but requires human oversight to mitigate risks like prompt injection, ensure ethical use, and balance AI efficiency with creative problem-solving in an evolving threat landscape.

2 Apr 2026 Bugcrowd Founder Casey Ellis: AI Slop, and the Future of Hacking

Ethical hacking has evolved from underground communities to enterprise-driven security frameworks; the episode covers stigma and legacy systems, AI's dual role in threat detection and synthetic risks, and the need for secure-by-design practices, hybrid human-AI strategies, and managing supply chain vulnerabilities amid evolving cyber threats.
