LEVEL: INTERMEDIATE

Free Trial.
Large language models security

The rise of large language models (LLMs) has not only transformed how we build and interact with AI systems but has also introduced new and complex security challenges.

As LLMs become increasingly integrated into real-world applications, understanding how they can be attacked, exploited, and protected is no longer optional — it’s essential.
hrough a mix of engaging video lectures, hands-on labs and interactive checkpoints, you’ll dive into how LLMs can be exploited using techniques like jailbreaks, prompt injections and more. You’ll also gain practical skills in defending against these threats at the model, prompt, system and service levels, using structured frameworks to assess and strengthen LLM security.
Write your awesome label here.

Vladislav Tushkanov
Group Manager at Kaspersky AI Technology Research Center

Vladislav has been with Kaspersky since 2015. He and his team work at applying data science and machine learning techniques to detect threats — such as malware, phishing and spam — faster and better, as, well as researching cutting-edge AI technologies to predict threats that are yet to come. 

Training objectives

Gain a solid foundation in the emerging field of LLM security.
Learn practical defense techniques across model, prompt, system and service levels.
Develop the skills to evaluate, secure and design robust LLM-based systems using real-world cases and hands-on assignments.
Understand key attack methods such as jailbreaks, prompt injections and token smuggling.
Apply structured frameworks to analyze and assess LLM security.

Help & support

Please contact us at help.kasperskyxtraining.com if you are experiencing technical issues or need help and would like to chat with a Kaspersky expert.

Also, we invite you to join our Discord community for all the Kaspersky Expert Training learners, where you can talk with your peers, discuss courses’ exercises and much more.Click the link below and enjoy https://discord.gg/Ffxvjgn7XD