Get in Touch

Course Outline

Foundations: Threat Models for Agentic AI

  • Understanding agentic threats: misuse, escalation, data leakage, and supply-chain risks.
  • Analyzing adversary profiles and attacker capabilities specific to autonomous agents.
  • Mapping assets, trust boundaries, and critical control points for agents.

Governance, Policy, and Risk Management

  • Establishing governance frameworks for agentic systems, including roles, responsibilities, and approval gates.
  • Designing policies: acceptable use, escalation rules, data handling, and auditability.
  • Addressing compliance considerations and evidence collection for audits.

Non-Human Identity & Authentication for Agents

  • Designing identities for agents: utilizing service accounts, JWTs, and short-lived credentials.
  • Implementing least-privilege access patterns and just-in-time credentialing.
  • Managing identity lifecycle: rotation, delegation, and revocation strategies.

Access Controls, Secrets, and Data Protection

  • Applying fine-grained access control models and capability-based patterns for agents.
  • Managing secrets, ensuring encryption in transit and at rest, and practicing data minimization.
  • Protecting sensitive knowledge sources and PII from unauthorized agent access.

Observability, Auditing, and Incident Response

  • Designing telemetry for agent behavior: intent tracing, command logs, and provenance.
  • Integrating SIEM systems, setting alerting thresholds, and ensuring forensic readiness.
  • Developing runbooks and playbooks for agent-related incidents and containment.

Red-Teaming Agentic Systems

  • Planning red-team exercises: defining scope, rules of engagement, and safe failover procedures.
  • Employing adversarial techniques: prompt injection, tool misuse, chain-of-thought manipulation, and API abuse.
  • Conducting controlled attacks to measure exposure and impact.

Hardening and Mitigations

  • Implementing engineering controls: response throttles, capability gating, and sandboxing.
  • Utilizing policy and orchestration controls: approval flows, human-in-the-loop mechanisms, and governance hooks.
  • Applying model and prompt-level defenses: input validation, canonicalization, and output filters.

Operationalizing Safe Agent Deployments

  • Exploring deployment patterns: staging, canary releases, and progressive rollout for agents.
  • Managing change control, testing pipelines, and pre-deploy safety checks.
  • Establishing cross-functional governance: security, legal, product, and ops playbooks.

Capstone: Red-Team / Blue-Team Exercise

  • Executing a simulated red-team attack against a sandboxed agent environment.
  • Defending, detecting, and remediating as the blue team using controls and telemetry.
  • Presenting findings, remediation plans, and policy updates.

Summary and Next Steps

Requirements

  • Strong background in security engineering, system administration, or cloud operations.
  • Familiarity with AI/ML concepts and the behavior of large language models (LLMs).
  • Experience with identity and access management (IAM) and secure system design principles.

Target Audience

  • Security engineers and red-team specialists.
  • AI operations and platform engineers.
  • Compliance officers and risk managers.
  • Engineering leads responsible for agent deployments.
 21 Hours

Number of participants


Price per participant

Upcoming Courses

Related Categories