May 5: CyberSecurity Innovations, Threats and Solutions

Confirmed speakers, companies and topics are listed below. You can still register for this webinar; full details will be released shortly.



Balamurugan Balakreshnan, Chief AI Architect, Microsoft

AI-Powered Red Teaming

The AI Red Teaming Agent is a powerful tool designed to help organizations proactively find safety risks in generative AI systems during the design and development of models and applications.

Traditional red teaming exploits the cyber kill chain to test a system for security vulnerabilities. With the rise of generative AI, however, the term AI red teaming has been coined to describe probing for the novel risks (both content- and security-related) that these systems present: simulating the behavior of an adversarial user who is trying to cause your AI system to misbehave in a particular way.

The AI Red Teaming Agent leverages the AI red teaming capabilities of Microsoft's open-source Python Risk Identification Tool (PyRIT), along with Microsoft Foundry's Risk and Safety Evaluations, to help you automatically assess safety issues in three ways:

  • Automated scans for content risks: Firstly, you can automatically scan your model and application endpoints for safety risks by simulating adversarial probing.

  • Evaluate probing success: Next, you can evaluate and score each attack-response pair to generate insightful metrics such as Attack Success Rate (ASR).

  • Reporting and logging: Finally, you can generate a scorecard of the attack probing techniques and risk categories to help you decide whether the system is ready for deployment. Findings can be logged, monitored, and tracked over time directly in Foundry, supporting compliance and continuous risk mitigation.
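To make the evaluation step above concrete, Attack Success Rate (ASR) is simply the fraction of adversarial probes that elicited unsafe behavior. The sketch below is a generic Python illustration, not the Foundry or PyRIT API; all class and function names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AttackResult:
    """One adversarial probe and whether it elicited unsafe output."""
    technique: str        # e.g. "role_play" (illustrative label)
    risk_category: str    # e.g. "violence" (illustrative label)
    succeeded: bool       # did the target model misbehave?

def attack_success_rate(results: list[AttackResult]) -> float:
    """Fraction of probes that succeeded (ASR); 0.0 if no probes ran."""
    if not results:
        return 0.0
    return sum(r.succeeded for r in results) / len(results)

def asr_by_category(results: list[AttackResult]) -> dict[str, float]:
    """Per-risk-category ASR, the kind of breakdown a scorecard reports."""
    buckets: dict[str, list[AttackResult]] = {}
    for r in results:
        buckets.setdefault(r.risk_category, []).append(r)
    return {cat: attack_success_rate(rs) for cat, rs in buckets.items()}
```

Breaking ASR down per technique and per risk category is what turns raw attack-response pairs into a deployment-readiness scorecard.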


Mariana Padilla, Community Evangelist, and Tom Gore, Regional Manager, Harmonic Security

Agentic AI Security, DLP for GenAI, AI Data Guardrails: The AI Genie is out of the bottle - now what?

AI isn't coming; it's already here, and it didn't wait for approval. Most organizations are focused on visible AI tools while the real risk hides in everyday workflows, free accounts, and embedded AI features. This session breaks down what millions of real enterprise AI interactions reveal about shadow AI, why blocking backfires, and how teams can regain control without slowing innovation.

Harmonic provides the control layer for the AI-first workforce. Key capabilities to be covered are listed below; you can also click here to watch Angelbeat CEO Ron Gerber's podcast interview with Harmonic.

Agentic AI Security & MCP Gateway Control for Enterprise: Agentic AI connects models directly to your data and systems. Gain visibility and control over Model Context Protocol (MCP) traffic to secure autonomous workflows without slowing engineering velocity.
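To illustrate what gateway-level control over MCP traffic can look like: MCP messages are JSON-RPC, and a tool invocation arrives as a "tools/call" request naming the tool and its arguments. The sketch below is a minimal, hypothetical policy check; the allowlist, keyword guard, and function name are illustrative placeholders, not Harmonic's product logic.

```python
# Minimal sketch of a gateway-style policy check on MCP tool calls.
# MCP requests are JSON-RPC messages; a "tools/call" request names the
# tool and its arguments. The policy below is purely illustrative.

ALLOWED_TOOLS = {"search_docs", "read_ticket"}    # tools agents may call
BLOCKED_ARG_KEYWORDS = {"prod_db", "secrets"}     # crude argument guard

def allow_mcp_request(message: dict) -> bool:
    """Return True if a JSON-RPC MCP message may pass the gateway."""
    if message.get("method") != "tools/call":
        return True  # only tool calls are policy-checked in this sketch
    params = message.get("params", {})
    if params.get("name") not in ALLOWED_TOOLS:
        return False  # unknown or unapproved tool
    args_text = str(params.get("arguments", {})).lower()
    return not any(kw in args_text for kw in BLOCKED_ARG_KEYWORDS)
```

A real gateway would add authentication, audit logging, and richer policies, but the shape is the same: inspect each tool call in transit and apply policy without modifying the agent or the backing systems.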

AI Usage Governance: Govern AI use across your workforce, from GenAI apps and embedded tools to local MCP servers.

DLP for GenAI: Prevent Sensitive Data Leaks into AI Tools: Harmonic’s inline controls prevent leaks of source code, M&A data, and PII with 96% greater accuracy than legacy DLP.
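For readers unfamiliar with inline DLP, the toy sketch below shows the basic idea: scan a prompt for sensitive patterns before it reaches a GenAI tool, and block it on a match. The regex patterns and function names are illustrative placeholders only; this is the regex-style approach of legacy DLP, not Harmonic's detection logic.

```python
import re

# Toy illustration of an inline DLP scan applied to a prompt before it
# is sent to a GenAI tool. Patterns are illustrative placeholders.
SENSITIVE_PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key":     re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def should_block(prompt: str) -> bool:
    """Block the prompt if any sensitive pattern matches."""
    return bool(scan_prompt(prompt))
```

Pattern matching like this is exactly where legacy DLP struggles (false positives, no context), which is why the session contrasts it with context-aware inline controls.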