
Master LLM Security: Ultimate Guide to AI Vulnerabilities

Total duration: 1:25:33

  • 01 - Meet your instructor.mp4
    01:23
  • 01 - How do LLMs work in applications.mp4
    05:23
  • 02 - How are LLMs created.mp4
    07:57
  • 03 - What are LLMs and how do they work.mp4
    05:19
  • 01 - Introduction to language model applications.mp4
    01:04
  • 02 - Common types of generative AI applications.mp4
    03:34
  • 03 - Overview of an API-based application.mp4
    04:55
  • 04 - Overview of an embedded-model application.mp4
    04:35
  • 05 - What is a multi-modal application.mp4
    06:25
  • 06 - Challenges and highlights of AI applications.mp4
    06:13
  • 07 - Summary.mp4
    01:31
  • 01 - Introduction to model-based vulnerabilities.mp4
    00:29
  • 02 - Prompt injection.mp4
    03:46
  • 03 - Insecure output handling.mp4
    05:11
  • 04 - Model theft.mp4
    03:48
  • 05 - Model replication.mp4
    03:28
  • 06 - Summary.mp4
    00:47
  • 01 - Introduction to system vulnerabilities.mp4
    00:47
  • 02 - Application vulnerabilities.mp4
    03:43
  • 03 - Sensitive information disclosure.mp4
    04:50
  • 04 - Insecure plugin design.mp4
    03:51
  • 05 - Summary.mp4
    01:01
  • 01 - Conclusion.mp4
    01:22
  • 02 - Other types of vulnerabilities.mp4
    04:11


    Course Overview

    This comprehensive course from Pragmatic AI Labs equips you with the technical skills to identify, mitigate, and prevent security vulnerabilities in LLM applications. Learn from expert Alfredo Deza as you explore real-world threats and deploy robust AI solutions.

    What You'll Learn

    • Identify common LLM threats like prompt injection and model theft
    • Implement secure plugin design and input validation
    • Protect AI systems from unauthorized access and data breaches
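
    As a taste of the input-validation topic above, a common first line of defense is screening user text before it ever reaches the model. This is a minimal illustrative sketch, not material from the course: the patterns, length limit, and function name are all assumptions chosen for the example.

    ```python
    import re

    # Illustrative only: these patterns and the length cap are example
    # assumptions, not an exhaustive or course-provided filter.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
        re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
    ]
    MAX_INPUT_LENGTH = 2000

    def validate_user_input(text: str) -> bool:
        """Return True if the input passes basic checks before reaching the LLM."""
        if len(text) > MAX_INPUT_LENGTH:
            return False
        return not any(p.search(text) for p in SUSPICIOUS_PATTERNS)
    ```

    Pattern matching alone cannot stop determined prompt injection, which is why the course pairs it with output handling and plugin design; treat a filter like this as one layer, not a complete defense.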

    Who This Is For

    • AI developers building LLM applications
    • Security professionals working with generative AI
    • Tech leads ensuring system integrity

    Key Benefits

    • Practical techniques to harden LLM systems
    • Best practices for vulnerability prevention
    • Strategies to monitor dependencies for security updates
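
    One concrete hardening technique related to the insecure-output-handling lessons is never trusting model text as markup. The sketch below is an assumed example, not code from the course: it escapes LLM output before embedding it in an HTML page so model-generated tags cannot execute in the browser.

    ```python
    import html

    def render_model_output(raw: str) -> str:
        """Escape LLM output before embedding it in an HTML page,
        so any markup the model generates is displayed as plain text
        rather than interpreted by the browser."""
        return html.escape(raw)
    ```

    The same principle applies to any downstream sink: shell commands, SQL, and plugin calls each need their own context-appropriate encoding or parameterization.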

    Curriculum Highlights

    1. Foundations of Large Language Models
    2. Model Vulnerabilities (Prompt Injection, Theft)
    3. System Vulnerabilities (Information Disclosure)
    Course Details

    • Language: English
    • Lessons: 24
    • Duration: 1:25:33
    • Subtitles: English
    • Release Date: 2025/06/07