Cybersecurity in AI Systems Training Course
Protecting AI systems involves distinct challenges that set them apart from conventional cybersecurity methods. AI models face risks such as adversarial attacks, data poisoning, and model theft, which can severely affect business continuity and data integrity. This course delves into essential cybersecurity practices for AI environments, addressing adversarial machine learning, data protection within machine learning workflows, and the compliance standards necessary for deploying robust AI solutions.
This instructor-led, live training (available online or onsite) targets intermediate-level AI and cybersecurity professionals seeking to comprehend and mitigate security vulnerabilities inherent to AI models and systems. The content is particularly relevant for those operating in highly regulated sectors such as finance, data governance, and consulting.
Upon completion of this training, participants will be able to:
- Identify various adversarial attacks aimed at AI systems and learn methods to defend against them.
- Apply model hardening techniques to enhance the security of machine learning pipelines.
- Safeguard data security and integrity within machine learning models.
- Manage regulatory compliance requirements associated with AI security.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practical practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Course Outline
Introduction to AI Security Challenges
- Understanding security risks unique to AI systems
- Comparing traditional cybersecurity vs. AI cybersecurity
- Overview of attack surfaces in AI models
Adversarial Machine Learning
- Types of adversarial attacks: evasion, poisoning, and extraction
- Implementing adversarial defenses and countermeasures
- Case studies on adversarial attacks in different industries
Model Hardening Techniques
- Introduction to model robustness and hardening
- Techniques for reducing model vulnerability to attacks
- Hands-on with defensive distillation and other hardening methods
Data Security in Machine Learning
- Securing data pipelines for training and inference
- Preventing data leakage and model inversion attacks
- Best practices for managing sensitive data in AI systems
AI Security Compliance and Regulatory Requirements
- Understanding regulations around AI and data security
- Compliance with GDPR, CCPA, and other data protection laws
- Developing secure and compliant AI models
Monitoring and Maintaining AI System Security
- Implementing continuous monitoring for AI systems
- Logging and auditing for security in machine learning
- Responding to AI security incidents and breaches
Future Trends in AI Cybersecurity
- Emerging techniques in securing AI and machine learning
- Opportunities for innovation in AI cybersecurity
- Preparing for future AI security challenges
Summary and Next Steps
Requirements
- Foundational knowledge of machine learning and AI concepts
- Familiarity with cybersecurity principles and practices
Audience
- AI and machine learning engineers aiming to enhance security within AI systems
- Cybersecurity professionals specializing in AI model protection
- Compliance and risk management professionals in data governance and security
Open Training Courses require 5+ participants.
Cybersecurity in AI Systems Training Course - Booking
Cybersecurity in AI Systems Training Course - Enquiry
Cybersecurity in AI Systems - Consultancy Enquiry
Testimonials (1)
The profesional knolage and the way how he presented it before us
Miroslav Nachev - PUBLIC COURSE
Course - Cybersecurity in AI Systems
Upcoming Courses
Related Courses
ISACA Advanced in AI Security Management (AAISM)
21 HoursAAISM serves as an advanced framework designed for assessing, governing, and managing security risks within artificial intelligence systems.
This instructor-led, live training, available either online or onsite, targets advanced-level professionals seeking to implement robust security controls and governance practices for enterprise AI environments.
Upon completing this program, participants will be equipped to:
- Evaluate AI security risks utilizing industry-recognized methodologies.
- Implement governance models that support the responsible deployment of AI.
- Align AI security policies with organizational objectives and regulatory requirements.
- Strengthen resilience and accountability in AI-driven operations.
Course Format
- Instructor-led lectures enriched with expert analysis.
- Hands-on workshops and assessment-driven activities.
- Applied exercises based on real-world AI governance scenarios.
Course Customization Options
- For training tailored to your organization's AI strategy, please contact us to customize the course.
AI Governance, Compliance, and Security for Enterprise Leaders
14 HoursThis instructor-led, live training in Taiwan (online or onsite) is designed for intermediate-level enterprise leaders who wish to understand how to responsibly govern and secure AI systems in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
Upon completion of this training, participants will be able to:
- Comprehend the legal, ethical, and regulatory risks associated with using AI across various departments.
- Interpret and apply major AI governance frameworks (including the EU AI Act, NIST AI RMF, and ISO/IEC 42001).
- Establish policies for security, auditing, and oversight regarding AI deployment within the enterprise.
- Develop guidelines for procurement and usage of both third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 HoursArtificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
- Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
- Recognize cybersecurity threats targeting AI models and data pipelines.
- Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
- Interactive lecture and discussion of public sector use cases.
- AI governance framework exercises and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Introduction to AI Trust, Risk, and Security Management (AI TRiSM)
21 HoursThis instructor-led, live training in Taiwan (online or onsite) is aimed at beginner-level to intermediate-level IT professionals who wish to understand and implement AI TRiSM in their organizations.
By the end of this training, participants will be able to:
- Grasp the key concepts and importance of AI trust, risk, and security management.
- Identify and mitigate risks associated with AI systems.
- Implement security best practices for AI.
- Understand regulatory compliance and ethical considerations for AI.
- Develop strategies for effective AI governance and management.
Building Secure and Responsible LLM Applications
14 HoursThis instructor-led, live training in Taiwan (online or onsite) targets intermediate to advanced AI developers, architects, and product managers who aim to identify and mitigate risks associated with LLM-powered applications, such as prompt injection, data leakage, and unfiltered output. The course also covers integrating security controls like input validation, human-in-the-loop oversight, and output guardrails.
Upon completing this training, participants will be able to:
- Comprehend the core vulnerabilities inherent in LLM-based systems.
- Apply secure design principles to LLM application architecture.
- Utilize tools such as Guardrails AI and LangChain for validation, filtering, and safety assurance.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
EXO Security and Governance: Offline Model Management
14 HoursThis instructor-led, live training in Taiwan (online or onsite) is aimed at security engineers and compliance officers who wish to harden EXO deployments, control model access, and govern AI workloads running entirely on-premise.
Introduction to AI Security and Risk Management
14 HoursThis instructor-led, live training in Taiwan (online or on-site) is designed for beginner-level IT security, risk, and compliance professionals who want to grasp foundational AI security concepts, threat vectors, and global frameworks such as NIST AI RMF and ISO/IEC 42001.
By the end of this training, participants will be able to:
- Comprehend the unique security risks posed by AI systems.
- Identify threat vectors such as adversarial attacks, data poisoning, and model inversion.
- Apply foundational governance models like the NIST AI Risk Management Framework.
- Align AI usage with emerging standards, compliance guidelines, and ethical principles.
OWASP GenAI Security
14 HoursBased on the latest OWASP GenAI Security Project guidance, participants will learn to identify, assess, and mitigate AI-specific threats through hands-on exercises and real-world scenarios.
Privacy-Preserving Machine Learning
14 HoursThis instructor-led, live training in Taiwan (online or onsite) is aimed at advanced-level professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
By the end of this training, participants will be able to:
- Understand and compare key privacy-preserving techniques in ML.
- Implement federated learning systems using open-source frameworks.
- Apply differential privacy for safe data sharing and model training.
- Use encryption and secure computation techniques to protect model inputs and outputs.
Red Teaming AI Systems: Offensive Security for ML Models
14 HoursThis instructor-led, live training in Taiwan (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 HoursThis instructor-led, live training in Taiwan (online or onsite) is designed for intermediate-level engineers and security professionals who want to safeguard AI models deployed at the edge against threats like tampering, data leakage, adversarial inputs, and physical attacks.
Upon completing this training, participants will be able to:
- Identify and evaluate security risks associated with edge AI deployments.
- Apply tamper-resistant measures and encrypted inference techniques.
- Harden edge-deployed models and secure data pipelines.
- Implement threat mitigation strategies tailored to embedded and constrained systems.
Securing AI Models: Threats, Attacks, and Defenses
14 HoursThis instructor-led, live training in Taiwan (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses like robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
- Use tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
- Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies in production environments.
Security and Privacy in TinyML Applications
21 HoursTinyML is an approach to deploying machine learning models on low-power, resource-constrained devices operating at the network edge.
This instructor-led, live training (online or onsite) is aimed at advanced-level professionals who wish to secure TinyML pipelines and implement privacy-preserving techniques in edge AI applications.
At the conclusion of this course, participants will be able to:
- Identify security risks unique to on-device TinyML inference.
- Implement privacy-preserving mechanisms for edge AI deployments.
- Harden TinyML models and embedded systems against adversarial threats.
- Apply best practices for secure data handling in constrained environments.
Format of the Course
- Engaging lectures supported by expert-led discussions.
- Practical exercises emphasizing real-world threat scenarios.
- Hands-on implementation using embedded security and TinyML tooling.
Course Customization Options
- Organizations may request a tailored version of this training to align with their specific security and compliance needs.
Safe & Secure Agentic AI: Governance, Identity, and Red-Teaming
21 HoursThis course provides an in-depth exploration of governance, identity management, and adversarial testing strategies for agentic AI systems, with a focus on enterprise-safe deployment patterns and practical red-teaming methodologies.
Delivered by an instructor as live training (available online or onsite), this program is designed for advanced practitioners who aim to design, secure, and evaluate agent-based AI systems within production environments.
Upon completion of this training, participants will be capable of:
- Establishing governance models and policies to ensure safe deployment of agentic AI.
- Designing non-human identity and authentication mechanisms for agents, ensuring least-privilege access.
- Implementing access controls, audit trails, and observability solutions specifically tailored for autonomous agents.
- Planning and conducting red-team exercises to identify potential misuses, escalation paths, and data exfiltration risks.
- Mitigating common threats to agentic systems through policy enforcement, engineering controls, and continuous monitoring.
Course Format
- Interactive lectures coupled with threat-modeling workshops.
- Hands-on labs covering identity provisioning, policy enforcement, and adversary simulation.
- Red-team/blue-team exercises and a comprehensive end-of-course assessment.
Course Customization Options
- To request customized training for this course, please contact us to arrange details.